Robert Hansen’s gripe with Google is easy to understand. Unchecked redirects are a phisher’s dream vulnerability. What would be Google’s motivation not to fix such a blatant vulnerability? Well, there are only a few reasons why someone would purposely choose not to fix a vulnerability:

1. they don’t care about security
2. they don’t know how to fix it
3. they believe the issue is not a vulnerability
4. the cost of the fix does not equate with the risk of the vulnerability
5. the security vulnerability is inherently inseparable from a feature/function

Common sense rules out #1, #2, and #3. If it were #4, that would mean Google does not want to fix the vulnerability because it is too costly to address phishing observed in the wild (repeat: too costly for Google). Given Google’s recent trust issues with privacy and being evil in general, fixing these would seem like an easy win. So, I think #4 can also be ruled out since it doesn’t make sense from Google’s perspective.

So, we’re left with #5. But how could this be? This would mean that the redirects are inherently inseparable from a Google function or functions.

The Vulnerability
Let’s set up the problem. Unchecked redirects are a type of Direct Object Reference problem. These problems occur when a malicious user can figure out arbitrary but valid values for a certain input, and the application then acts on that input without any validation. Other types of direct object issues:

  • file names (../../../etc/passwd)
  • primary key fields (e.g., account_id=102 works, so what about 103?)

The solution to a Direct Object Reference problem is to create an intermediary table of values that the user can select from. So, in the bank example, let’s assume that the user has access to account IDs 101 and 105. Any other 3 digit number corresponds to another user’s account, and that’s why a simple numeric rotation of the number works as an attack. We need an Indirect Object Reference. What does an Indirect Object Reference (or intermediary table) look like? A few examples:

  • checking1   -> 105
  • file_index1 -> ./marketing.html

Sure, a user can request ‘checking105329’, but the application won’t find it, because the user does not have 105,000 checking accounts. What’s vital to understand here is that an Indirect Object Reference is an offset into a list of valid values. They eliminate the ability of a user to specify arbitrary values.
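The intermediary table above can be sketched in a few lines. This is a minimal illustration, not Google’s or any bank’s actual code; the user name, aliases, and account IDs are all invented:

```python
# Sketch of an Indirect Object Reference: the user supplies an alias
# ("checking1"), and the server resolves it against a per-user table of
# values the server itself chose. All names and IDs here are hypothetical.
ACCOUNT_ALIASES = {
    "alice": {"checking1": 105, "savings1": 101},
}

def resolve_account(user, alias):
    """Return the real account ID, or None when the alias isn't in the
    user's own table -- arbitrary values simply don't resolve."""
    return ACCOUNT_ALIASES.get(user, {}).get(alias)

print(resolve_account("alice", "checking1"))       # 105
print(resolve_account("alice", "checking105329"))  # None: not in the table
```

The point is that the attacker never touches the real identifier space; they can only pick an offset into a list the server built for them.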

Ok, so then how can Google fix their vulnerability using this technique? Instead of a URL where the q parameter carries an arbitrary destination:

  http://www.google.com/url?q=http://any-site-the-attacker-likes.example

The fix would be to use one where q is merely an index:

  http://www.google.com/url?q=urlXXXX

This way, the user can’t supply an arbitrary URL – they can only supply an index into a list of valid URLs. The value of the q parameter must match an actual object/row that’s been cataloged on the server-side. This is only possible assuming Google has that type of representation for documents, which RSnake claims (with more insider knowledge than me, granted) they have, but I’m not convinced – they don’t exactly have an RDBMS like MySQL powering Google’s index.
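A redirect handler built on that idea might look like the sketch below. The catalog key ("url1234") and destination are hypothetical, stand-ins for whatever server-side document representation Google would actually use:

```python
# Sketch of the fix: the q parameter is an index into a server-side catalog
# of already-known URLs, not a raw URL. Keys and destinations are invented.
URL_CATALOG = {
    "url1234": "http://www.example.com/page.html",
}

def redirect_target(q):
    """Look the index up; arbitrary, uncataloged values get no redirect."""
    target = URL_CATALOG.get(q)
    if target is None:
        return None  # serve a 404 instead of acting as an open redirect
    return target

print(redirect_target("url1234"))                 # the cataloged destination
print(redirect_target("http://evil.example.com")) # None: raw URLs don't resolve
```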

So even if they do have that URL ID capability – do we win? Can haz cheezburgr now? No, and for the record, if your cat really wants cheeseburgers you are a terrible cat owner. Cats should like tuna. If they really love cheeseburgers, that means they’ve eaten enough of your lazily discarded cheeseburgers to become really fond of them, which means two things:

  • You should eat healthier
  • You shouldn’t leave food out

You’ll get ants.

So let’s imagine tomorrow Google started publishing these urlXXXX-style URLs. Are we fixed? Hardly. Imagine how trivially easy it would be to get a bad site onto that list of valid URLs. All you would have to do, ostensibly, is create a document. When you eventually get indexed (it is not a challenge to get indexed), you will get a urlXXXX value. So, instead of a link straight to your phishing page, your URL will be a google.com redirect carrying that urlXXXX value. Wait, did we just make things worse? Yes, I do believe the second URL is actually more likely to be clicked on by the unassuming, since it looks like it belongs to google.com.

One of the problems surrounding this unchecked redirect is “I’m Feeling Lucky.” A naive attempt to address this would be to say, “Only a page that has a #1 page ranking for a given search term can have a urlXXXX number.” Of course, as a bad guy I could create an evil document containing a hash of some secret that matches nothing else on the Internet, and therefore easily be the first-ranked page for that term (and thus make it onto the elite urlXXXX list).

What about solutions?

Solution #1 for Google
Make the “I’m Feeling Lucky” redirect fire only if the page on the other end of the rabbit hole has a page rank score somewhere between “I’ve got 30 subscribers to my blog” and “My open source project page recently reached 20,000 downloads.” Sure, you won’t stop someone who has se0wned someone else, but you’re now raising the bar dramatically.
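That popularity floor can be sketched as a simple threshold check. The scores, URLs, and cutoff below are all invented for illustration; the real signal would be whatever internal rank score Google keeps:

```python
# Sketch of Solution #1: only follow an "I'm Feeling Lucky" redirect when the
# destination's rank score clears a popularity floor. Scores are hypothetical.
PAGE_RANK = {
    "http://established-blog.example": 5.2,   # well past "30 subscribers"
    "http://fresh-phish.example": 0.1,        # indexed yesterday
}
MIN_RANK = 3.0  # invented floor between "small blog" and "popular project"

def lucky_redirect_allowed(url):
    """Unknown or barely-ranked pages don't qualify for the redirect."""
    return PAGE_RANK.get(url, 0.0) >= MIN_RANK

print(lucky_redirect_allowed("http://established-blog.example"))  # True
print(lucky_redirect_allowed("http://fresh-phish.example"))       # False
```

This doesn’t make the redirect safe, it just makes freshly-minted phishing pages fail the check, which is exactly the "raising the bar" trade-off described above.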

Solution #2 for Google
Realtime, mostly-automated blacklisting.
1. Set up spam-catching accounts with Yahoo!, Hotmail, etc.
2. Wait for spam that uses Google redirects pointing to a low-ranking site.
3. Once verified by a security team, automatically add the phishing site to the already-existing blacklist in realtime.
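The three steps above can be sketched as a small triage pipeline. Everything here is hypothetical: the message format, the redirect URL shape, and the low-rank cutoff are stand-ins, and the final blacklisting step is deliberately left to a human reviewer:

```python
# Sketch of Solution #2: mine spam-trap inboxes for Google redirect links,
# pull out the q= destination, and queue low-ranked targets for human review
# before they hit the blacklist. All values here are invented.
import re
from urllib.parse import urlparse, parse_qs

REVIEW_QUEUE = []  # a security team confirms these before blacklisting

def extract_redirect_targets(message_body):
    """Return the q= destinations of any Google redirect URLs in a message."""
    targets = []
    for url in re.findall(r"http://www\.google\.com/url\?\S+", message_body):
        targets.extend(parse_qs(urlparse(url).query).get("q", []))
    return targets

def triage(message_body, rank_of):
    """Queue redirect targets whose rank is suspiciously low (hypothetical cutoff)."""
    for target in extract_redirect_targets(message_body):
        if rank_of(target) < 1.0:
            REVIEW_QUEUE.append(target)

spam = "Click http://www.google.com/url?q=http://fresh-phish.example now!"
triage(spam, rank_of=lambda url: 0.0)  # pretend every target is low-ranked
print(REVIEW_QUEUE)  # ['http://fresh-phish.example']
```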

Okay, those are my thoughts on the unchecked redirects. In summary, I don’t think it’s fixable without a whole lot of work using out-of-band, reactive techniques.

Extra Homework: The Google Gadgets Debate
I highly recommend everyone read RSnake, Kuza55, and Tom Ptacek’s conversation on the Matasano blog about the whole Google Gadgets thing. The comments make the post, so read up.