With all that aside, it’s still useful research (to me). The unknown compromise of a supernode is very unlikely, especially considering the overall incompetence on the part of those who would try to shut down a darknet. And to honor the spirit of the idea, I wanted to talk about some solutions to a few of the “challenges” they noted when designing their darknet:
Solution #1: They considered this a lost cause, but I think there are a couple of things you could do to retain the file slices without exposing the information to the new supernode, which clients shouldn’t have to trust. First, the initial request to the next supernode could be a POST that carries the client’s current file slices.
The server could then reflect the response back for the clients to restore from, complete with a hash to check (just an integrity check, like a CRC). The second approach is slightly more complicated, but it allows for a general solution to the problem of “lost information” (like chat logs, keypairs, etc.) caused by the origin-hopping.
Solution #2: Mallory, a user who isn’t on the darknet, tries to connect to darknet.com. Because her IP isn’t whitelisted, the DNS server for darknet.com sends back wrong IP information (like the IP for goatse.cx maybe?) or refuses to resolve. Mallory therefore can’t connect to the darknet. Alice, a legitimate darknet user, connects to darknet.com. Because the darknet knows her IP address, the DNS server returns a legitimate response and allows her the chance to authenticate to the darknet, then redirects her to an alias – www1.darknet.com.
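The gate logic is simple enough to sketch as a toy model – the addresses and the decoy-vs-refuse behavior here are stand-ins I made up, not the real resolver’s config:

```javascript
// Toy model of the IP-whitelist DNS gate. The addresses are made-up
// documentation IPs; a real deployment could just as well refuse to
// resolve instead of handing back a decoy.
const SUPERNODE_IP = '203.0.113.10';   // the real darknet entry point
const DECOY_IP = '198.51.100.66';      // what Mallory gets instead

function resolveDarknet(clientIp, whitelist) {
  // Whitelisted clients get the real supernode; everyone else gets a
  // decoy answer and never even learns the darknet exists.
  return whitelist.has(clientIp) ? SUPERNODE_IP : DECOY_IP;
}
```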
The first thing the client-side code the darknet gives to Alice does is this:
> document.domain = 'darknet.com';
Alice then grabs some file slices in order to help everyone share the risk of getting DMCAwned. Now imagine supernode ‘www1’ gets burned. She gets kicked off the darknet and has to find a new supernode. Once she authenticates to the next supernode, she is redirected to a sequentially enumerated sub-domain, ‘www2’. Because the next supernode is still a subdomain of the previous darknet domain, she can again execute the same code:
> document.domain = 'darknet.com';
Now she’ll have the same access to those files and the information is not lost – no need to go grab new file slices because the old ones are still there in local storage!
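The reason both supernodes can meet in the middle is the browser’s rule that `document.domain` may only be relaxed to a suffix of the page’s own hostname. A toy check of that rule (not a real browser API, just the suffix test browsers apply):

```javascript
// Toy version of the browser's document.domain suffix rule: a page may
// only relax document.domain to a suffix of its own hostname. That's why
// www1.darknet.com and www2.darknet.com can both meet at darknet.com,
// while an attacker's unrelated host cannot.
function canRelaxTo(host, target) {
  return host === target || host.endsWith('.' + target);
}
```

So every sequentially enumerated supernode subdomain lands in the same relaxed origin, and the data Alice stashed under it stays reachable across hops.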
There is a pretty minor weakness in this approach – if a bad guy can hijack a supernode subdomain and trick her into visiting it (while her browser is still pinning the legitimate supernode’s IP address – otherwise any request to it would be redirected to a new, safe supernode), that bad guy can grab the local file slices with malicious code.
Using DNS as a control mechanism can also cause some darknet-fail due to its centralization, but you can always fall back to a less restrictive model when DNS is unavailable due to compromise. On top of that, you can rotate authoritative nameservers much faster than in years past.
I’m sure you were totally riveted by that – in fact, I’ll try to do more posts on improving the virtual defenses of a virtual darknet. There are other ways of improving it and plenty more ways to attack it, but the rest are even more boring than this. Hope you enjoyed Blackhat!