RIPE 66 Minutes
DNS Working Group – Session I and II
15 May 2013, 11:00 - 12:30 and 14:00 - 15:30
WG co-Chairs: Peter Koch, Jim Reid and Jaap Akkerhuis
Scribe: Marco Hogewoning, RIPE NCC (Session I); Anand Buddhdev, RIPE NCC (Session II)

A. Administrivia

Peter Koch opened the session and welcomed the participants. He introduced the other two chairs of the Working Group, Jim Reid and Jaap Akkerhuis, and thanked the RIPE NCC for providing the scribes. Peter said that the Working Group had two sessions at this meeting and asked if there were any comments on the agenda.

C. Review of Action Items

Peter continued with a review of the open action items, mentioning that there were no open items from the last meeting, but that this might be different by the end of the second session. He pointed out that the end of the second session would be a bit different, as the DNS Working Group would integrate a report from the ENUM Working Group. He explained that this was because the ENUM Working Group itself was not meeting in Dublin, and that it was, at that point, unconfirmed whether this would be a permanent change.

D. DNS Abuse @ .nl: SIDN Experiences with (Rate) Limiting - Stephan Rutten, SIDN

The presentation is available at

Stephan asked the audience if they were familiar with the types of attack he mentioned and they confirmed that they were.

When Stephan asked whether anyone applied measures other than the ones he had presented, one audience speaker replied that they were using double NAT.

Stephan said SIDN used this technique in the past and continued his presentation.

Sebastian Castro, .NZ Registry Services, asked if they kept iptables and RRL in place at the same time.

Stephan answered that they did. 

Sebastian continued and asked what the response would be in case of the next big attack.

Stephan responded saying that it would depend on the type of incoming traffic and, should a big attack occur, a team would be formed to investigate.

Jared Mauch, NTT America (via remote participation), asked if SIDN had plans to redirect all questions to TCP.

Stephan responded that there were no plans but it could be something to think about.

Anand Buddhdev, RIPE NCC, asked about the usage of BIND and RRL. He pointed out that the .nl zone handled many queries, mostly referrals, and asked whether there were false positives.

Antoin Verschuren, SIDN, responded from the floor that there were false positives, but that they didn't affect the regular traffic. He said that, as far as referrals were concerned, they did see them, but that the duplicate requests shouldn't have been generated by resolving servers anyway.
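As background, the iptables-plus-RRL combination discussed above is configured in BIND via the rate-limit statement in named.conf; the following fragment is an illustrative sketch, not SIDN's actual settings:

```
// named.conf fragment (BIND 9.9+); values are illustrative only
options {
    rate-limit {
        responses-per-second 5;   // identical responses allowed per client netblock
        window 5;                 // seconds over which the rate is averaged
        slip 2;                   // every 2nd dropped answer is sent truncated (TC=1),
                                  // so legitimate clients can retry over TCP
    };
};
```

A kernel-level iptables filter would then sit in front of the name server, dropping the most abusive traffic before it reaches BIND at all.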

Peter Koch thanked Stephan and noted that he had disregarded procedure by not asking if there were any comments on the minutes of the RIPE 65 meeting. He stated that the minutes had been distributed to the mailing list and asked whether anybody had read them. Seven people raised their hands and Peter declared the minutes final.

E. New DNS Zone Parser for Knot - Marek Vavrusa, CZ.NIC

The presentation is available at

Shane Kerr, ISC, asked how tweakable the behaviour of the parser was. To clarify, he stated that zone files could contain a couple of different constructs that may or may not be errors, depending on how one interprets the standards.

Marek replied that the software just parsed syntax and didn't do anything regarding semantics.

Miek Gieben, Google (via remote participation), asked how fast the software was.

Marek responded that this was mentioned in his presentation: in testing, zone loading time dropped from 1.5 minutes to 20 seconds.

Peter Koch asked what standalone usage was envisioned for the tool.

Marek answered that, although it was included in the source code, it was not linked to Knot and could be used as standalone.

Peter Koch further clarified his question and asked if it would be possible to build extensions on this tool, for instance, if you want to do NSEC3 checking.

Marek explained that that was done by Knot; the parser just read the zone files and did no further processing.

F. OpenDNSSEC - Sara Dickinson, Sinodun

The presentation is available at

Bert Hubert, Netherlabs, shared the anecdote that no good deed ever goes unpunished: he had encountered a totally non-DNS-related project where somebody had replaced all hardware security modules with the software.

Carsten Strottmann, Men & Mice, asked if there would be a new auditor in version 2.0, as the future preference seemed to lean towards external auditors.

Sara replied that they recommended external auditors.

Carsten continued by asking if there would be an API available to hook in those auditors so they could report back.

Sara said there had been discussion about that, but that there were no firm plans. She stated she was open to a future request about this topic.

Carsten asked why the use of SQLite was not recommended given that, for smaller installations, it is easier to maintain than MySQL.

Sara responded that this decision was based on feedback from users regarding locking issues. She added that they were looking into fixes for this in version 1.3, but they weren't there yet. She said that MySQL was more robust, especially for bigger installations and production environments.

Shane Kerr, ISC, asked if the software maintained some internal state to allow for zone transfers.

Sara replied that this was the case, according to feedback from a colleague.

Sebastian Castro, .NZ Registry Services, asked if there was any preference for MySQL over PostgreSQL, as the MySQL licensing scheme made some people nervous.

Sara answered it was being discussed for version 2.0, which has more abstraction from the database backends.

Sebastian confirmed that they were using SQLite and had seen the locking issues mentioned earlier. He recommended that people use MySQL.
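The locking issue described above can be demonstrated with a minimal sketch using Python's standard sqlite3 module (the table name is purely illustrative, not OpenDNSSEC's actual schema): SQLite takes a write lock on the whole database file, so a second concurrent writer fails immediately rather than proceeding.

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk database (in-memory DBs are not shared between connections).
path = os.path.join(tempfile.mkdtemp(), "kasp.db")

w = sqlite3.connect(path, timeout=0, isolation_level=None)  # timeout=0: fail fast
w.execute("CREATE TABLE keys (id INTEGER PRIMARY KEY, state TEXT)")
w.execute("BEGIN IMMEDIATE")  # take the database-wide write lock
w.execute("INSERT INTO keys (state) VALUES ('active')")

r = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    r.execute("BEGIN IMMEDIATE")  # second writer is blocked by the file-level lock
    locked = False
except sqlite3.OperationalError:  # "database is locked"
    locked = True

w.execute("COMMIT")
print(locked)  # True
```

A client/server database such as MySQL serialises writers at a much finer granularity, which is why it copes better with several concurrent OpenDNSSEC processes.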

An audience speaker asked if there were any plans to allow the keys to be backed up. Sara answered that she was not aware of that feature's existence, and suggested they talk about it after the session.

Matthew Pounsett, Afilias, asked why version 2.0 was limited to tens of thousands of zones, rather than hundreds of thousands.

Sara answered they were still investigating and busy analysing the data.

G. Dynamic Zone Provisioning in YADIFA - Peter Janssen, EURid

The presentation is available at

After his presentation Peter gave a short demonstration of the software, running it on his laptop.

Peter announced that the software was still in beta and not yet released on the website. He asked people to contact him if they were interested in testing with it.

Bert Hubert, Netherlabs, thanked Peter and stated that this was the big thing DNS had been missing for 30 years. He asked whether the possibility of making this interoperable had been considered.

Peter responded that they would like this to be used by other servers and pointed out that the protocol is easy to proxy to, for instance, XML. 

Bert asked whether firewalls were the reason to keep this in DNS.

Peter confirmed this, adding that while he would be happy to work with people on interoperability, he preferred to standardise.

Bert Hubert asked if the source code was available.

Peter replied that it wasn't, but that it could be arranged in a face-to-face meeting.

Lars-Johan Liman, Netnod, asked how updates were authenticated and keys distributed.

Peter explained that, upon initial installations, keys needed to be distributed and configured manually and that subsequent updates would use the control channel.

Lars-Johan Liman noted that, in that particular scenario, it would be possible to point multiple controllers at one single server.

Peter confirmed that would be the case.

Carsten Strottmann, Men & Mice, stated that he would prefer to see the configuration moved to a private zone in the Internet class. He explained that this would make it possible to use existing tools, with no need to develop new ones.

Peter suggested taking that particular discussion offline.

Vasily Dolmatov asked if there was a shared secret and how the secret was used to encrypt the payload.

Peter explained that they used standard TSIG, which provides authentication rather than encryption. He said that, in the case of sending a new key, they needed to look further into the issue.

Vasily asked if there was any mechanism for key maintenance.

Peter responded that the task should be part of the front-end system, which should do a key rollover and distribute the new keys.
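For context, the standard TSIG arrangement described above works with a shared HMAC secret installed identically on both ends; each message then carries an HMAC computed over the message and a timestamp, so the payload is authenticated and integrity-protected but not encrypted. A minimal configuration sketch (the key name is illustrative):

```
# Generate a shared secret with BIND's tsig-keygen:
#   tsig-keygen -a hmac-sha256 ctrl-key
# The resulting key statement is then configured on both controller and server:
key "ctrl-key" {
    algorithm hmac-sha256;
    secret "<base64 key material>";
};
```

As Peter noted, rolling over such a key requires out-of-band coordination, which is why he placed that task in the front-end system.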

Marcos Sanz, DENIC, thanked Peter for conducting a live demonstration.

Peter Koch reminded people that the second session would start directly after lunch at 14:00. He thanked the audience and speakers, and closed the session.

DNS - Session II

15 May 2013, 14:00-15:30

H. RIPE NCC Report & News about DNSmon - Anand Buddhdev, RIPE NCC

Sebastian Castro (NZRS) asked how the credit system in Atlas would work for DNSMON. Robert Kisteleki (RIPE NCC) said that DNSMON users would receive credits automatically for these measurements, and the user would be able to tune how the credits are used.

Peter Koch asked how users could provide feedback about the upcoming DNSMON.

Anand mentioned that there was a DNSMON users mailing list for this purpose.

Peter then asked about users' ability to adjust the parameters of measurements. His concern was that DNSMON might not offer a consistent and impartial view if users configured measurements differently.

Robert Kisteleki asked whether giving users fewer configuration options was a good thing.

After some more discussion, Peter agreed to take the discussion offline.

John McAleer (Dyn Inc) asked for information about the RIPE NCC's DNS servers, and about memory usage.

Anand Buddhdev said the RIPE NCC often gets requests from people who are about to sign their zones, and the RIPE NCC is concerned that name server memory usage may be reaching limits. He suggested that John speak to him offline. 

I. ENUM WG Update - Niall O'Reilly

<Unidentifiable speaker> asked whether Niall saw any future for ENUM in Europe. Niall said he wasn't sure whether it was dead or just in deep sleep. He pointed to the example of a company in Brussels running a couple of prefixes in production, and suggested that if TERENA and other countries could demonstrate a need for ENUM, then it may not be dead, but just in deep sleep. The same audience speaker then mentioned the example of 353 for Ireland, and Nominet running the ENUM registry for the UK, and asked whether the problem might be that most of the current ENUM registries are not businesses, and are unable to market themselves from a business perspective.

Niall replied that while it is easy to find out how to register domain names and find service providers to provide services on it, it's not the same for phone numbers. A user gets a phone number as part of a contract with a telco, and may perhaps port it to another provider but, if the user stops paying for it, he loses the right to use that number.

The audience speaker then pointed out that many countries, such as the USA, have premium numbers, which a user can buy.

Niall said that those numbers were different from the country-code numbers such as 353, where this discussion started. Niall said he saw only two ways in which ENUM could catch on: either the subscriber owns the number (which is unlikely to happen), or the number authority decides that ENUM is important and pushes for deployment.

Patrik Fälström asked Niall what he would say if Patrik were to mention 388, and what the Commission had done with this European number code.

Niall said it would be a good opportunity. Patrik then asked whether the European Commission wasn't participating enough in the RIPE Working Group. Niall said he couldn't comment on that.

Jim Reid commented that ENUM registration does belong in the country-code registry. He thought that ENUM has failed because of all the other regulatory issues around authentication of phone numbers and requirements placed by telephone operators. He also said that registrars don't really understand ENUM and convincing them to buy a number or bundle it with voice packages is difficult. The business case hasn't been proven and it's not right to blame the registries, which have done a lot to try and keep ENUM alive. 

Richard Barnes (BBN) had two comments. The first was about client support: he said that, from tests done over the last few years, only 20% of clients were shown to have ENUM support, which shows low interest from the implementer community. His second point was that we are only looking at public ENUM; there may be private ENUM usage we're not aware of.

Niall responded that it's not possible to look at private implementations, and that the focus of this community has always been public ENUM because of the RIPE NCC's role as a tier 0 registry.

K. News from SSAC: Issues with Unallocated TLDs That Shortly will be Allocated, Case Study with Internal Certificates - Patrik Fälström, Netnod

Lars-Johan Liman (Netnod) asked whether this has prompted the ICANN gTLD process to take any actions to modify anything in the process.

Patrik said that a previous SSAC report, along with this one, prompted PayPal and Verisign to write letters to ICANN urging it to take a closer look at the effect of queries to undelegated domains. As a result, ICANN staff were doing more detailed analyses.

L. Analysis of Query Traffic to .com/.net Name Servers - Matt Larson, Verisign

Roy Arends (Nominet) thanked Matt for the presentation and asked whether Verisign had looked at queries for zones Verisign is not authoritative for.

Matt said they had, but not as part of this study.

Jim Reid said it was a fascinating presentation and then asked whether Verisign had looked at just queries, or also at other types of DNS messages such as dynamic updates.

Matt replied that they do get such queries but they hadn't looked at them specifically.

Jim then said that Matt's predecessor, Mark Kosters, was able to track the deployment of Windows 2000 by the number of updates hitting the A root server.

Matt agreed that was indeed the case.

Warren Kumari said it was fascinating data. He then asked Matt whether he had checked to see which sources were routed on the Internet, and which ones were randomly made up.

Matt said they hadn't, but that it was a good avenue to investigate in the future.

Sebastian Castro (NZRS) suggested to Matt that they might be able to track validating resolvers based on the patterns of DS and DNSKEY queries.

He also suggested looking at the RD bit to identify valid caching resolvers.

Matt said it was a good idea for future analysis.

<Unidentifiable speaker> commented that they ran an open resolver, on which they did similar analysis, and that they had recently seen a large botnet that does an MX query for every email it sends.

ZZ. AOB (2)