[dns-wg] "DNS Vulnerabilities" paper hits the mainstream
Roy Arends roy at nominet.org.uk
Tue May 2 20:09:34 CEST 2006
Jim Reid wrote on 04/30/2006 09:46:25 AM:

> Emin Gun Sirer's paper/presentation at RIPE52 has been picked up by
> the BBC:
>
> http://news.bbc.co.uk/1/hi/technology/4954208.stm
>
> Any thoughts on how to respond to that?

I have responded to this on different fora, and some have asked me to repeat it here, so here goes:

I saw this presentation of the study and its results at RIPE, and it was more marketing than science.

First off, the survey tracked dependencies on servers whose names were neither in the delegated zone nor in the server's own zone (out of bailiwick). Some dependency graphs showed more than 600 nodes. The survey sorted names by node count and argued that 'the higher the dependency, the more vulnerable a name'. The conclusions in the end were twofold. 1: the old wisdom of having more server dependency is bad. And 2: a new form of DNS is needed.

Ad 1) The 'old wisdom' was, imho: more authoritative servers for a single name, fewer points of failure, etc. etc. What it does _not_ mean (a grave mistake by the authors) is that resolution graphs should be long and wide (i.e. .net resides on ns1.com, .com resides on ns1.org, .org resides on ns1.edu, etc. etc.). Meanwhile, caching was never mentioned.

The big message was that somebody who abuses a vulnerability in one of those 600 nodes would '0wnz' (sic) the name, while in my view a hacker would own some part of the resolution graph, depending on where that vulnerable node hangs in the tree, and not automagically the entire name.

To add some sugar to this, the presentation went on to show that 17 percent of the tested servers had 'known' vulnerabilities, which was then related to 45% of the names being trivially hijackable, though no accurate methodology was given. The authors made the mistake of confusing protocol with implementation. Dependency is not equal to vulnerability.
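To make the bailiwick point concrete, here is a rough sketch of the check involved; the domain names are hypothetical examples, not taken from the survey:

```python
def is_in_bailiwick(ns_name: str, zone: str) -> bool:
    """Return True if the name server's name falls under the zone it serves,
    i.e. resolving the zone does not pull in a foreign part of the tree."""
    ns = ns_name.lower().rstrip(".")
    z = zone.lower().rstrip(".")
    return ns == z or ns.endswith("." + z)

# ns1.example.net serving example.net: in bailiwick, glue can be provided
print(is_in_bailiwick("ns1.example.net", "example.net"))   # True

# example.net served by ns1.example.org: out of bailiwick, so resolving
# example.net now also depends on .org and example.org infrastructure,
# and the dependency graph grows transitively
print(is_in_bailiwick("ns1.example.org", "example.net"))   # False
```

The transitive growth of such out-of-bailiwick dependencies is what produced the 600-node graphs in the survey; having many in-bailiwick servers does not grow the graph this way.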
In the process, some high-profile name servers at Berkeley were mentioned, and it was suggested that their operators were not professionals and that they did not understand the high dependencies. The authors of the paper (which resulted in this presentation) came to the conclusion that server dependencies on out-of-bailiwick servers in DNS are a bad thing, and hence that a new kind of DNS is needed, discarding the obvious solution of recommending in-bailiwick glue.

Ad 2) It turned out that this 'new DNS' was already defined: some form of DNS using distributed hash tables: Beehive/CoDoNS. And of course, at rival Berkeley, there is a similar project: Chord.

Conclusion: this was no less than a marketing talk. They have a solution, and they need to sell it. In order for it to look good, make the old solution and the competition look bad. A marketing study, not science. Nothing original and nothing new (DJB warned about this: http://cr.yp.to/djbdns/notes.html, 'gluelessness'). Scare tactics at most.

Meanwhile, the CoDoNS server set itself has issues. It responds to responses. A few packets would effectively bring the whole CoDoNS infrastructure down. Sure, these bugs can be fixed. But if that argument is allowed for CoDoNS, it should be allowed for generic DNS implementations. Building a new protocol based on the fact that there exist vulnerabilities in current implementations is circular: you'll have more bad implementations that will result in new protocols....

Roy

PS: the CoDoNS folk have been informed about the vulnerability in their software.
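For those wondering why 'responding to responses' matters: a toy model of the failure mode, not CoDoNS's actual code or protocol, just an assumed sketch of why a server that answers packets with the QR (response) bit set can be made to loop:

```python
def handle(packet, strict):
    """Toy DNS-like server. Returns a reply packet, or None if dropped."""
    if packet["qr"] == 1:  # QR bit set: this packet is itself a response
        if strict:
            return None  # correct behaviour: never answer a response
        # buggy behaviour: answer the response anyway, QR bit still set
        return {"qr": 1, "payload": packet["payload"]}
    return {"qr": 1, "payload": "answer"}

# One spoofed 'response' injected between two buggy servers bounces back
# and forth indefinitely; we cap the simulation at 10 hops.
packet = {"qr": 1, "payload": "spoofed"}
hops = 0
while packet is not None and hops < 10:
    packet = handle(packet, strict=False)
    hops += 1
print(hops)  # hits the cap: the exchange never terminates on its own
```

With `strict=True` the spoofed packet is dropped on the first hop, which is why a few such packets suffice against an implementation that lacks the check.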