

RIPE 72 DNS Working Group Minutes

Thursday, 26 May 2016
WG Co-Chairs: Jaap Akkerhuis, Dave Knight, Jim Reid
Scribes: Michael Frearson and Fergal Cunningham

Session I - 14:00 - 15:30
Session II - 16:00 - 17:30

Session I - 14:00 - 15:30

A. Administrivia

• Agenda Bashing
• Review of Action Items
• Approval of Previous Minutes

Dave Knight opened the session. Dave mentioned that during the follow-up section, if people wanted to bring up extra items they are welcome to.

Dave then announced an addition to the second session agenda: a brief proposal about the Yeti DNS project.

Dave announced there were no outstanding actions from previous sessions and no other changes to the agenda.

There were two sets of minutes to approve: minutes from RIPE 70 and RIPE 71. Dave asked for comments – there were none – and declared the minutes approved.

B. RIPE NCC Report – Anand Buddhdev, RIPE NCC

The presentation is available at:

In response to Anand's slide about strange K-root peering behaviour in Belgium, Jaap Akkerhuis, NLnet Labs, said the situation exists not only in Belgium, and that he has seen similar behaviour in Amsterdam.

Offering feedback, Shane Kerr, BII, suggested that as it's only reverse DNS, and people need forward DNS anyway, people are likely to have some other DNS provider, so he doesn't see any loss of value in turning down the service.

Shane then commented that he looked at ripe-663, the ccTLD guidelines, which don't mention anything about IPv6, so he was considering trying to get that (IPv6) added as a reason a ccTLD could get service from NCC. There are a few dozen TLDs that still don't have IPv6 service and Shane thinks that they should.

Jim Reid, DNS Working Group co-Chair, then asked whether the extra domains that were in the RIPE NCC's DNS infrastructure had been removed.

Anand replied in the affirmative and said the RIPE NCC sent an email to the DNS Working Group mailing list about it last year.

Gaurab Upadhyay, Limelight Networks, agreed with Shane's point about ripe-663 and said a lot of smaller ccTLDs might have more than three servers in the same network right next to each other, and that three servers is a high limit, so the criteria might need to be worded more subjectively.

Blake Willis, L33 Networks, asked Anand to talk with him after the session regarding K-root peering in Belgium.

Dave Knight, Dyn, asked Anand if there is a timeline upon which the RIPE NCC is seeking guidance from the working group regarding the issue.

Anand replied that there is no rush, that the issue has been going on for a while, and that the RIPE NCC can keep tolerating the failures and refresh issues. Anand went on to explain that the aim is to improve the service for everyone in general and that the RIPE NCC would like to start doing something about it later this year (2016) and into early next year (2017).

C. Root Zone ZSK Size Increase - Duane Wessels

The presentation is available at:

Paul Hoffman also briefly presented on the upcoming KSK rollover – questions came after.

Shane Kerr said the Yeti root server testbed is testing a roll to 2048 bits with their own ZSK, which they will report on in a few weeks. Shane went on to ask about the bandwidth increase, which he said seemed high, and asked why the increase was so large.

Duane said he thought the increase was due to so many queries coming in with the DO bit set, but it would take another study to determine whether or not all of those DO bit queries are junk.
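The scale of that increase can be roughed out: an RSA signature is as long as the key's modulus, so each RRSIG returned to a query with the DO bit set grows with the ZSK size (a back-of-the-envelope sketch, not figures from the presentation):

```python
def rrsig_growth_bytes(old_bits, new_bits):
    """An RSA signature is as long as the key's modulus, so each
    RRSIG grows by the difference in key size, expressed in bytes."""
    return (new_bits - old_bits) // 8

# Rolling the root ZSK from 1024-bit to 2048-bit RSA adds 128 bytes
# to every RRSIG carried in responses to queries with the DO bit set.
assert rrsig_growth_bytes(1024, 2048) == 128
```

Multiplied across every signature in a DNSSEC response, this is why responses to DO-bit queries drive the bandwidth figure.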

Shane then asked which feedback channels Duane would be open to. Duane listed Twitter, an email address that will be available, and mailing lists.

Dave Knight then mentioned that the website didn't look like it had been updated since the signing. Dave asked if the website would be updated.

Duane said he thought the website would become more of an historical reference and would not be updated for these activities, but things could change in the future.

Jim Reid thanked Paul and Duane for communicating the information in their presentations. He then asked Duane to explain more about the rationale behind moving from a 1024-bit to a 2048-bit key for the zone-signing key.

Duane said the rationale comprised a couple of things: there is a NIST recommendation on key lengths, which plays a part, and there was also a big mailing list discussion; the decision is a response to that community input saying the key should be bigger.

Peter Koch, DENIC, asked if this helps all the people who get their domains signed immediately from the root.

Duane and Paul answered in the affirmative.

Peter then said he hadn't seen a general recommendation to move to 2048 bit or some intermediate size for any of the intermediate levels in the DNS tree, and that the considerations and measurements done during the experiment Duane described may or may not apply to other levels in the DNS tree. He went on to say the idea that everyone should run to 2048 bit for the ZSK may be a bit premature.

Duane said he would agree with that statement, and asked Peter where he would see such a recommendation coming from and who would do it.

Peter said he would not refer to the NIST recommendations for a variety of reasons.

Paul said people should not want to hear him or the organisations on stage recommending it: many national bodies have been recommending going past 1024 bits, but he has not heard of any national or regional organisation recommending staying at 1024 bits.

Philip Homburg said that when Logjam announced breaking 1024-bit keys, the same paper also published details on how to break 1024-bit RSA, suggesting that someone with EUR 100 million of spare cash could build a machine to factor such a key almost instantly, so it is dangerous to assume that 1024 bits is safe for any purpose at the moment.

Paul explained that this was not a correct interpretation of the paper: the 1024-bit key that was broken was a special 1024-bit key with an equivalent strength of 750 bits. Philip argued that although nobody has actually spent that amount of cash to do it, the speculation is more than theoretical.

Paul said that this point was about how someone might build such a device, and that the work was purely theoretical at this point. He went on to explain that the theory is not impossible and it is widely assumed there are designs for it, but no one has demonstrated a hardware configuration that could actually achieve it.

Ólafur Guðmundsson, CloudFlare, said that DNSSEC is a chain, so the weakest link decides whether the answer will be trusted or not. Ólafur reasoned that if the web browsers say “we will not trust anything that is less than ‘x'”, then if anybody along the way uses less than ‘x', the chain will not be trusted, no matter what the lower levels do. Ólafur went on to say that this would enable everybody who is using larger keys below to be trusted, and that there were in excess of 85 domains that only use 2048-bit RSA or larger keys, so the browsers will instantly say those are good; the next step is to get the rest of the TLDs to start moving up or to switch to a better algorithm. Ólafur added that the community would then have smaller keys.
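Ólafur's weakest-link argument can be put as a one-liner (an illustrative sketch only; real validators compare algorithms and policies, not a single number):

```python
def chain_strength(key_sizes_bits):
    """DNSSEC trust runs from the root down through each delegation,
    so the effective strength is that of the weakest key on the path."""
    return min(key_sizes_bits)

# Root ZSK, TLD ZSK, zone ZSK: one 1024-bit link caps the whole chain.
assert chain_strength([2048, 1024, 2048]) == 1024
assert chain_strength([2048, 2048, 2048]) == 2048
```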

Paul agreed that keys would be smaller, and said they would return in five years to talk about that.

Kaveh Ranjbar, RIPE NCC, said there are hard links between this community and the IETF, and between this community and ICANN, but he hears a lot of things that are inspired or designed by the W3C and its subsidiaries, and this community doesn't have many links to that SDO. Kaveh said that maybe the RIPE working groups need more communication with that line of standardisation.

Paul responded that once they had the 2048 bit ZSK, he thought more groups would be willing to initiate conversations, certainly the browser vendors.

D. QNAME Minimization in Unbound - Ralph Dolmans, NLnet Labs

The presentation is available at:

There were no questions for Ralph.
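For readers without the slides, the core idea of QNAME minimisation can be sketched: instead of sending the full query name to every server, the resolver reveals only one more label at each step down from the root (a simplified illustration; a real resolver follows zone cuts, which need not fall on every label, as described in RFC 7816):

```python
def qname_min_queries(qname):
    """Return the successive names a minimising resolver would ask
    about, adding one label at a time from the root downwards."""
    labels = qname.rstrip(".").split(".")
    return [".".join(labels[i:]) + "." for i in range(len(labels) - 1, -1, -1)]

# The full name is only revealed to the servers that actually need it.
assert qname_min_queries("www.example.com.") == [
    "com.", "example.com.", "www.example.com."]
```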

E. Follow up from plenary topics

What's So Hard About DNSSEC? - Paul Ebersman

This presentation is available at:

There were no questions.

Session II – 16:00-17:30

F. BIND 9.11 Release Update – Vicky Risk, ISC

The presentation is available at:

Shane Kerr, Beijing Internet Institute, asked if DynDB supported incremental zone transfers and dynamic DNS; Vicky said they had not tried that yet but that it should. Shane asked if it was possible to have multiple catalogue zones and Vicky said it was.

Anand Buddhdev, RIPE NCC, noted that there was an effort underway to update the RSSAC002 statistics document, so this might change in the future, and it would be good to make sure BIND is compliant with the new version.

Sara Dickinson, Sinodun, asked if the EDNS Client Subnet implementation supports, or recognises, the option whereby a client sets a source prefix length of 0 in the request so that its subnet isn't exposed upstream.

Vicky said that was not done yet, as what she was talking about was just the authoritative-side implementation. She added that she didn't know of any clients that set a 0; it would be great if they did and she would support putting that in. She said the resolver-side implementation could be ready for BIND 9.12, and her colleague Stephen Morris confirmed this was the case.
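The option Sara describes is cheap on the wire: with a source prefix length of 0 the address field is empty, so the client reveals nothing about its subnet (a minimal sketch of the EDNS Client Subnet option encoding per RFC 7871):

```python
import struct

def encode_ecs(family=1, source_prefix=0, scope_prefix=0, address=b""):
    """Encode an EDNS Client Subnet option (option code 8).
    With source_prefix 0 the address field is empty, so a resolver
    honouring it forwards no part of the client's subnet."""
    data = struct.pack("!HBB", family, source_prefix, scope_prefix) + address
    return struct.pack("!HH", 8, len(data)) + data

# The whole privacy-preserving "/0" option is just eight bytes.
assert encode_ecs() == b"\x00\x08\x00\x04\x00\x01\x00\x00"
```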

G. DNS Privacy Public Resolver Proposal – Sara Dickinson, Sinodun

The presentation is available at:

Sara concluded her presentation by asking if there were any RIPE NCC members who would be interested in offering a DNS-over-TLS privacy-enhanced public recursive. She also asked for discussion on whether the RIPE NCC would consider running such a service if there was demand for it.

Lars-Johan Liman, Netnod, said he would be happy to see the RIPE NCC participating in any experiments for this but he didn't see running this as an ongoing service as part of the RIPE NCC's remit. He also asked that it be implemented first over IPv6 and then backported to IPv4 if necessary as this is the mindset we should have now.

Thomas Steen Rasmussen, BornFiber, who runs a DNS service called UncensoredDNS, said he would love to be the open resolver to test this out. He said there were some issues with amplification but, provided those can be fixed, it would be ideal because the service is centred on privacy and censorship.

Sara agreed there were a few implementation details to be sorted out and at the moment the implementation coverage in the various name servers is a bit patchy. She said she would like to see this as a driver for getting features implemented across the name servers and using a variety of name servers to run some of these different trials.

Ondřej Surý, CZ.NIC, asked about making this a collaborative effort: getting address space for IPv4 and IPv6, with each trusted organisation running its own DNS-over-TLS resolver on its own premises and exporting those prefixes. He said that in that way we wouldn't burden the RIPE NCC with running another service and, for example, CZ.NIC would be happy to run such a service in shared PI space.

Kaveh Ranjbar, RIPE NCC, said the RIPE NCC would fully support the implementation if the community wanted that, but at the end of running such an experiment there should be documentation on how to operate such a service so that anyone else could do so as well. He said the second requirement would be to make sure that by the end of the experiment privacy would be covered, maybe only with TLS but maybe also with minimisation and other measures, and end-to-end, because, for example, the effects of this on the root could also be studied.

Sara said there are a lot of elements to this and there should be a phased approach where we look at the different aspects, learn some lessons and move on and expand the project as we go along.

Kaveh noted the suggestion from CZ.NIC and said it is possible with different participants as all the policies for experimental address ranges are there so everything is in place.

Geoff Huston, APNIC, asked about the distinction between DNS-over-TLS towards recursive servers that then query in the open, and what they were doing in the GetDNS project, where you drag this back towards the users and eliminate the recursive from the picture entirely. He wondered aloud whether, in the long or medium term of DNS and privacy, public recursives are part of the picture or whether they're actually kind of in the way. He said from a privacy argument it's a question of “who am I sharing my fate and my secrets with”. He said he could use Google's public resolver and it's a secret between him and Google. He asked Sara to comment on that tension between the intermediary versus using exactly the same technology, piloting it, and bringing it back to the host.

Sara said one of the things to be explored with this was how it changes the role and the requirements on the operators of recursives and there are potentially lots of legal implications there and these are potentially different per country. She said with GetDNS, that will work in both stub and recursive mode and one of the target use cases for recursive mode is really DNSSEC roadblock avoidance, so if there are services that will let you use TLS from the stub to the resolver and you trust your resolver you can do that. She said this is a long road and we are just discovering which steps along that road we want to take.

Geoff asked, considering that the DNS is the most wildly abused system for surveillance, censorship and almost everything else that folks see as a cheap way to implement such policies, whether baby steps are enough.

Shane Kerr, BII, said there are privacy advantages to resolvers as well, because the queries don't go to the authority servers, so there is a tension there. He said that for maximum privacy you need to look at the balance; the unanswered research question is what the right balance is between having one resolver or a set of resolvers, and it probably sits somewhere between where we are today, with no privacy, and Tor.

Paul Hoffman, ICANN, said this was discussed in the IETF dprive Working Group, and there were two main drivers away from recursive-to-authoritative or even stub-straight-to-authoritative encryption. He said that in order to do effective crypto, the authoritative servers would have to promiscuously allow easy CPU denial-of-service attacks, which is why there have to be baby steps. He said the other driver is that people have something of a relationship with a recursive, so why break that by jumping immediately away, when the recursives are the ones that could actually handle CPU denial-of-service attacks because they already have ACLs, which the authoritatives don't. He suggested looking in the dprive archives, where the discussion was cut off at stub-to-resolver.

Tim Wicinski, IETF dprive WG co-chair, said that doing this at the recursive involves not just technology but also ICANN: the root server operators basically have to get involved, which means an involved non-technical process. He said they were trying to prove that TCP can be scaled for DNS at very high rates, and that is why this is a great idea.

Duane Wessels, Verisign, asked if getdns had implemented the strict authentication yet and Sara said that it had.

Kaveh Ranjbar, RIPE NCC, took an action on the RIPE NCC to create a lightweight plan for the DNS Privacy Public Resolver initiative, without specifying who will operate it; if people agree with that plan, things can move forward from there.

H. Yeti DNS Project – Shane Kerr, Beijing Internet Institute

Shane said that he had sent an email to the working group mailing list asking the RIPE NCC to operate a Yeti root server, and possibly a resolver if that makes sense. He said Kaveh Ranjbar from the RIPE NCC was very supportive. He said he didn't think it involved many resources and that it would be very helpful.

Jim Reid, DNS Working Group co-Chair, asked if a definitive end date for the project could be published as many of these projects seemed to start and go on without a fixed end date.

Shane said this is a three-year project and it is currently on year two.

Kaveh said this was an interesting research project but that he had one concern: Yeti is a parallel root and, because the RIPE NCC is a root operator, there needs to be a technical safeguard in case Yeti wants to go a separate way from the IANA root zone file. Kaveh said he also had a lot of personal trust in Shane and the other involved parties, which made him feel good about this particular project.

Shane noted that the coordinators of the WIDE project in Japan who also run such a root server probably have the same concern.

I. Panel on DNSSEC Algorithm Flexibility

Ondřej Surý, CZ.NIC, chaired a panel discussion on DNSSEC algorithm flexibility. The participants were:

• Ondřej Surý, CZ.NIC
• Lars-Johan Liman, Netnod
• Marco D'Itri, Seeweb
• Dave Knight, Dyn
• Phil Regnauld, Network Startup Resource Center

The participants introduced themselves, talked about their backgrounds with DNS and also explained what their DNS set-up was.

Ondřej asked, if the ed25519 draft is approved this year, how long it would take to deploy and possibly start signing with the new algorithm.

Lars-Johan said it depends on where in the DNS community the requirement comes from, and also on what is going on with software development.

Marco said he saw no major roadblocks to upgrading if some important feature would be useful. He said the only problem would probably be their PowerDNS-based platform, because the backend it uses is not currently maintained.

Dave said his organisation was not doing a lot of signing, but they do serve a lot of signed zones on behalf of customers, so it's quite easy to be reactive, particularly on the TLD platform. He said demand was customer-driven and they used off-the-shelf, open-source software, so it's quite easy to react to a customer who wants something new.

Ondřej asked if the same was true for DNS cookies, when a customer asks whether you will be able to deploy them, and Dave said it very much was.

Phil said his case was similar to Marco's but he would see how it goes on the operational side before recommending others to do the same.

Ondřej asked if there is anything that can be done to make these issues better known within the community of people running DNS servers.

Peter van Dijk, PowerDNS, said the backend has been put into maintenance mode, but they are looking for competent users.

Lars-Johan said it's very important to get the package maintainers of the operating systems on board, because that is how things are distributed. He said he personally disliked automated updates but admitted they were useful and actually improved things in general.

Marco said that outreach from protocol developers at events like RIPE Meetings is helpful, but for the vast majority of users new features should probably be pushed and advertised by the software vendors.

Dave noted that with DANE it's browser vendors who drive a lot of what the user requirements are.

Phil said if there was a place where users could click and check if zones are signed then there could be more demand from the customer side.

Lars-Johan said one of the problems is that the DNS is not viewed as an application; it is an application but it's the glue between the operating system and the network and the application that you actually want to run. He said no one runs DNS because they want to run DNS; they do so because they want to reach other services. He said we need to work together so it is seen as a service on its own.

Ondřej reiterated what Phil said and thought there is or was a DNSSEC name-and-shame webpage somewhere that gave A+ ratings.

Lars-Johan agreed it should come with the web-service checks: such a test should check the DNS as well, because what users think they are checking is the web service or the SIP service; the DNS is not in their minds.

Ondřej said this was quite a clever idea: if you could get SSL Labs to include a DNSSEC test, even without counting it towards the score.

Benno Overeinder, NLnet Labs, said there was something like this on the website and it's a cooperation of organisations and they work together to check if ISPs and service providers provide an up-to-date Internet. He suggested this is an idea that could be exported.

Ondřej said the experience from the Czech Republic was that although they tested banks and other websites, the banks didn't see it as a problem.

Paul Hoffman, ICANN, said the last couple of comments were maybe aimed at the wrong people to be helping here. He said people who are running a DNS server intentionally probably update it. He said there are two classes of software: software that doesn't get updated until the whole distro gets updated, and software that does get updated in between. He thought that simply getting the authoritative and recursive servers onto software that gets updated in between would greatly increase the number of people doing DNSSEC.

Marco said this would probably not be possible, because in stable distributions only certain categories of software are updated in between: those containing sensitive things, such as important software or browsers.

Dave said the DNS community is obviously very enthusiastic about DNSSEC and wants to see it innovate at pace, but to everyone else this sits somewhere lower among their concerns and is something they want to be conservative about, which is understandable for operating system maintainers.

Peter Hessler, Hostserver, said that for this exact reason OpenBSD has a policy of making a release every six months, because with a five-year release cycle it can take ten years before something shows up in the wild.

Peter Koch, DENIC, noted that some people running critical infrastructure wanted fewer features, not more. He asked the panel how many features are desirable and are we receiving good guidance from those producing the standards.

Lars-Johan said that, from a root server operator's perspective, he agreed with Peter: stability is absolutely paramount, and more features and more diverse code always bring more bugs, which can have an influence on stability. He said this is why root server operators want to see what the general trends are before implementing new features.

Shane Kerr, Beijing Internet Institute, asked the panel if they ran RRL (Response Rate Limiting).

Lars-Johan said I-root does other forms of limiting but not RRL, although it is on their roadmap.

Shane said it is a feature that at least some of the root operators do implement, and root operators are like everyone else in that they want good features rather than bad or unnecessary features.

Lars-Johan agreed and said the root operators were not driven by customers but by the general community and what they see as useful options.

Marco said customers do not push for new features so they themselves have to do some pushing of useful features.

Phil said if the software is not visible enough then it doesn't have enough user attention. He said it's not a solution per se, but if you want the software to have more visibility and dynamics then you should move it closer to the user.

Niall O'Reilly said the motivation driving all of this has to come from the business cases of end customers like banks and insurance companies. He said the DNS community needed to sell the advantages and he wasn't sure how that could be done.

Ondřej said the goal of this panel was to address this issue.

Lars-Johan asked Niall if the last time he updated his browser it was because he needed a new feature, and Niall said it wasn't. Lars-Johan said the same goes for the DNS users but the browser vendor saw that it would be good to update because it would make new features available to other people in communication with you. He said the benefits of updating may come to the overall system rather than to the individual user.

Niall said it was not about incremental upgrades to this or that component but rather it was about promoting and selling the ecosystem.

Lars-Johan responded that you couldn't motivate people unless they have the software with useful features that they could try out, and Niall said this was a fair point.

Vicky Risk, ISC, suggested that people care quite a lot about Google rankings, so if you could have the ranking applied there then that would make people take notice. She also suggested that the idea of cyber insurance would be worth looking at because if you don't have your zones signed and people can phish your customers pretending they are sending email from you, that potentially is in the big money category for cyber insurance.

Ólafur Guðmundsson, CloudFlare, said the panel was looking at only one small part of the equation: the DNS ecosystem consists of producers of DNS answers, the consumers of them and the verifiers, and there is something in the middle that enables the distribution of DS records. He said that when a new version has to be distributed through the registries, registrars and hosting providers, that is going to take a long time. He suggested concentrating on the middle section and making it more automated. He concluded that DNS was not sexy and we need to decide what we want to break, and if we do break something it should be RSA.

Florian Streibelt, speaking for himself, said that when Google started to rank pages based on things like HTTPS it added pressure, which was a good thing. He added that the DNS community is doing a very good job of not breaking things, unlike browsers that won't show YouTube if you don't upgrade, so maybe DNS is a victim of its own success.

John Dickinson, Sinodun, suggested that if some things were classified as security enhancements rather than features then that might be productive.

Alfred Jenson asked, given how far behind DNS is when it comes to modern cryptography, how the panel sees the DNS system transitioning to a world where we have quantum computers. Marco said it was still too early to answer this, but it would be a good topic for someone to research.


There was no other business at the working group session. Jim thanked everyone for their participation and said he looked forward to seeing everyone in Madrid for RIPE 73.