Billy Hoffman and Matt Wood from HP presented on a new browser darknet at Blackhat, which of course the press went totally batshit for (the press love Billy et al. as much as they love anyone – or HP’s marketing department is insanely good). I love the idea of totally anonymous P2P information sharing, but it’s just not possible in the browser if we can’t use trusted plugins. In a truly safe P2P scheme the supernodes wouldn’t have to be trusted, but that is not the case in Veiled, as they were in fact willing to point out (and gloss over just as quickly :). The simple fact is that the supernodes deliver client-side code to the nodes – code that, when not compromised, contains JavaScript that lets the user perform all the functions necessary for a darknet, but that, when compromised, can be used to subvert all those same functions. As long as you’re getting your client-side code from the supernodes, it just can’t be done.

With all that aside, it’s still useful research (to me). An unknown compromise of a supernode is very unlikely, especially considering the overall incompetence of those who would try to shut down a darknet. And to honor the spirit of the idea, I wanted to talk about some solutions to a few of the “challenges” they noted when architecting their darknet:

Problem: While on the darknet, a user stores file slices in browser local storage, which is restricted by domain. Consider a darknet client, Alice, connected to supernode foo.com. If that supernode goes down, Alice must join a different supernode. The problem is: what happens to the file slices? They’re in local storage – which means that when she moves to the next supernode, bar.com, the JavaScript won’t have access to the slices.

Solution #1: They considered this a lost cause, but I think there are a couple of things you could do to retain the file slices without exposing them to the new supernode, which clients shouldn’t have to trust. First, the initial request to the next supernode could be a POST of the following format:

POST http://nextsupernode/reflect_files_back_to_me#hashoffiles

file1data=…&file2data=…&file3data=…

Note that the hash rides in the URL fragment, which the browser never sends to the server, so only the client ever sees it. The server simply reflects the body back for the client to restore, and the client checks the reflected data against the hash (a simple CRC-style integrity check, not a security measure). The second approach is slightly more complicated, but it allows for a general solution to the problem of “lost information” (chat logs, keypairs, etc.) caused by the origin-hopping.
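To make the hand-off concrete, here’s a minimal sketch of the pack-and-restore flow in JavaScript. The function names and the djb2-style checksum are my own illustrative choices, not anything from Veiled – the point is just that the supernode acts as a dumb reflector while the client alone verifies integrity:

```javascript
// Cheap CRC-style integrity check (djb2 variant) – catches corruption,
// not tampering.
function checksum(str) {
  let h = 5381;
  for (let i = 0; i < str.length; i++) {
    h = ((h * 33) ^ str.charCodeAt(i)) >>> 0;
  }
  return h.toString(16);
}

// Before leaving the old supernode: serialize the slices into a POST
// body and compute the hash that will ride in the URL fragment.
function packSlices(slices) {
  const body = Object.entries(slices)
    .map(([name, data]) =>
      encodeURIComponent(name) + '=' + encodeURIComponent(data))
    .join('&');
  return { body, hash: checksum(body) };
}

// After the new supernode reflects the body back: verify the checksum,
// then restore the slices into local storage.
function restoreSlices(reflectedBody, expectedHash, storage) {
  if (checksum(reflectedBody) !== expectedHash) return false; // corrupted
  for (const pair of reflectedBody.split('&')) {
    const [name, data] = pair.split('=');
    storage[decodeURIComponent(name)] = decodeURIComponent(data);
  }
  return true;
}
```

In the browser you’d hand `window.localStorage` in as the `storage` object; keeping it a parameter just makes the sketch easy to reason about.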

Solution #2: Mallory, a user who isn’t on the darknet, tries to connect to darknet.com. Because her IP isn’t whitelisted, the DNS server for darknet.com sends back wrong IP information (like the IP for goatse.cx maybe?) or refuses to resolve. Mallory therefore can’t connect to the darknet. Alice, a legitimate darknet user, connects to darknet.com. Because the darknet knows her IP address, the DNS server returns a legitimate response and allows her the chance to authenticate to the darknet, then redirects her to an alias – www1.darknet.com.
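One way to implement those IP-gated answers is with DNS views. Here’s an illustrative BIND named.conf sketch – the ACL range and zone file names are made up, and the “bogus” zone is where you’d point outsiders at the wrong IPs:

```
// Whitelisted darknet node IPs (example range).
acl "darknet-users" { 203.0.113.0/24; };

// Members get real answers for darknet.com.
view "members" {
    match-clients { "darknet-users"; };
    zone "darknet.com" {
        type master;
        file "db.darknet.com.real";
    };
};

// Everyone else gets the decoy zone (wrong A records, or none at all).
view "outsiders" {
    match-clients { any; };
    zone "darknet.com" {
        type master;
        file "db.darknet.com.bogus";
    };
};
```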

The first thing the client-side code served by the darknet to Alice does is this:

> document.domain = 'darknet.com';

Alice then grabs some file slices in order to help everyone share the risk of getting DMCAwned. Now imagine supernode ‘www1’ gets burned. She gets kicked off the darknet and has to find a new supernode. Once she authenticates to the next supernode, she is redirected to a sequentially enumerated subdomain, ‘www2’. Because the next supernode is still a subdomain of the previous darknet domain, she can again execute the same code:

> document.domain = 'darknet.com';

Now she’ll have the same access to those files, and the information is not lost – no need to fetch new file slices because the old ones are still there in local storage!
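Here’s a small sketch of that hop, with the document and storage objects passed in so the premise stays explicit. The names (and the `slice:` key prefix) are mine, and the idea that previously stored slices remain reachable once document.domain is relaxed to the parent is the scheme’s premise, not a general browser guarantee:

```javascript
// Illustrative sketch of the origin hop in Solution #2. Runs first on
// every new supernode subdomain (www1, www2, ...). The `doc` and
// `storage` parameters stand in for `document` and `localStorage`.
function hopToSupernode(doc, storage) {
  // Collapse the origin to the shared parent so every supernode
  // subdomain ends up in the same effective domain.
  doc.domain = 'darknet.com';
  // Under the scheme's premise, slices cached under the old supernode
  // are still reachable here, so nothing is re-fetched from peers.
  // (Hypothetical 'slice:' key prefix marks stored slices.)
  return Object.keys(storage).filter(k => k.startsWith('slice:'));
}
```

In a browser you’d call it as `hopToSupernode(document, window.localStorage)`.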

Side notes
There is a fairly minor weakness in this approach: if a bad guy can hijack a supernode subdomain and trick Alice into visiting it (while her browser still has the legitimate supernode’s IP pinned – otherwise the request would just be redirected to a new, safe supernode), he can grab the local file slices with malicious code.

Also, using DNS as a control mechanism can cause some darknet-fail of its own due to its centralization, but you can always fall back to a less restrictive model when DNS is unavailable due to compromise. On top of that, you can rotate authoritative nameservers much faster than in years past.

Wrapup

I’m sure you were totally riveted by that – in fact I’ll try to do more posts on improving the virtual defenses of a virtual darknet. There are other ways of improving it and plenty more ways to attack it, but the rest are even more boring than this. Hope you enjoyed Blackhat!