
DNS Working Group Minutes - RIPE 84

Thursday, 19 May 11:00 - 12:30 (UTC+2)
Chairs: Moritz Müller, João Damas and Shane Kerr
Scribe: Gergana Petrova
Status: Final


DNS over QUIC

Sara Dickinson, Sinodun

The presentation is available at:

Sara gave an overview of QUIC - an inherently encrypted protocol that reduces latency, improves error detection, and can maintain connections even as endpoints change IP addresses, making it a good fit for encrypted DNS. She ran through the efforts to implement DoQ (DNS over QUIC), which started in 2017. She recommended taking a look at RFC 9250, which describes three scenarios in which DoQ serves as a general-purpose protocol: Stub-Recursive, Recursive-Auth and XFR. As of January 2022, there were 1,200 DoQ resolvers.

Carsten Strotmann, sys4, pointed out that it is possible to use web server software like NGINX to proxy between HTTPS and DNS when using DoH. He asked if the same is possible with DoQ, commenting that NGINX already supports QUIC, so it might be possible to run QUIC installations on it.

Sara explained that these proxies do support QUIC, but do not yet have native support for DoQ. She hoped that would change soon. She will open an issue with the implementers to let them know that this is a standard people would be interested in trying with those proxies.

Carsten said that he would let Sara know if he found a solution.

Brett Carr, Nominet UK, congratulated Sara on the RFC. He commented that DoQ seems to solve several problems and is suitable for most areas of DNS. He asked if there are any areas of DNS for which DoQ was not suitable, or less likely to get deployed.

Sara responded that she couldn’t think of anything in terms of the actual specification but said the challenge would be in the practicality of implementing it - such as picking up QUIC libraries and integrating them with existing DNS software. It would depend on how much work it turns out to be for some of the major open-source implementers. It's not as easy as picking up OpenSSL. One has to think quite hard about what the library and the API look like. She hopes that once the initial implementation hurdle is overcome, we will rapidly see performance data that makes clear that it’s worth deploying. 

Adam Burns, free2air, asked if DoQ supports updates as well as queries.

Sara answered that it does. There is some advice in the draft about use of 0‑RTT for updates as opposed to queries because there is a potential privacy issue, but there is no reason one couldn’t do updates over DoQ as well. 
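For readers unfamiliar with the wire format discussed in the session: per RFC 9250, DoQ carries each query on its own QUIC stream, prefixed with the same 2-octet length field used by DNS over TCP, and with the DNS Message ID fixed to zero (the stream, not the ID, disambiguates queries). A minimal sketch of building such a stream payload - the helper name and example domain are illustrative, not from the presentation:

```python
import struct

def doq_frame(qname: str, qtype: int = 1) -> bytes:
    """Build a DoQ stream payload for one query (RFC 9250 style):
    a 2-octet length prefix followed by a DNS message whose ID is 0."""
    # 12-byte DNS header: ID=0, flags=RD, QDCOUNT=1, all other counts 0
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Encode the query name in DNS wire format (length-prefixed labels)
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    msg = header + question
    return struct.pack("!H", len(msg)) + msg  # TCP-style length prefix

frame = doq_frame("example.com")
```

A real client would then open a QUIC connection to port 853 and send this on a fresh bidirectional stream; that transport part is omitted here.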

There were no other questions.

Catalog Zones

Petr Špaček, ISC

The presentation is available at:

Petr presented a solution to the problem of managing DNS with many secondary servers and frequent updates. The configuration file on a secondary server is huge, but only one variable changes per entry – the zone name – so the list of zones can itself be encoded as a DNS zone. With the data in the form of a DNS zone, servers can reuse zone transfers, NOTIFY and all the other mechanisms they already have built in. Of the two catalog zone versions currently in use, he recommended the second, which works across different implementations at the same time.
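To illustrate the idea, a Version 2 catalog zone is an ordinary zone in which each member zone appears as one PTR record; the names and values below are placeholders (see RFC 9432 for the authoritative format):

```
; A minimal catalog zone, schema version 2 (illustrative)
catalog.example.             0  SOA  invalid. invalid. 1 3600 600 2147483646 0
catalog.example.             0  NS   invalid.
version.catalog.example.     0  TXT  "2"
; one PTR record under "zones" per member zone
a1b2.zones.catalog.example.  0  PTR  member1.example.
c3d4.zones.catalog.example.  0  PTR  member2.example.
```

Adding or removing a member zone is then just an ordinary zone update, propagated to secondaries via NOTIFY and zone transfer.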

Matthijs Mekking, ISC, gave some more details about migrating from Version 1 to Version 2 for the catalog zones.

Petr encouraged the audience to read the documentation when doing so.

Lars-Johan Liman, Netnod, asked about the use of new semantics for existing record types.

Petr recommended going to the chat link on his last slide and expressing opinions there.

Jelte Jansen, SIDN, asked if catalog zones can use IXFR in addition to AXFR.

Petr answered that the zone transfer is all the same; the only thing that differs is the interpretation once you have the zone.

Peter van Dijk, PowerDNS/Open-Xchange, encouraged the audience to reach out to him if they are interested in sponsoring the implementation of catalog zones in PowerDNS.

Moritz Müller read out an online question asking whether a secondary can support multiple catalog sources.

Petr answered that yes, you can have as many as you want.

Moritz asked on behalf of an online participant what the impact is of changing the unique label associated with a given zone, and whether that is a convenient way to flush the zone at the secondary.

Petr answered that it is basically the same.

Niall O’Reilly, RIPE Vice Chair, asked whether, for the member zones, the primary is the same as the primary of the catalog zone.

Petr answered not necessarily. There is separate configuration for the catalog zone itself and for the zones listed in the catalog, so you can have different sources for these two sets.

Brett Carr, Nominet UK, asked whether the secondary writes new zones into its configuration file when it picks them up from the catalog. If it doesn't, he asked how one could get a list of the currently configured zones.

Petr answered that the protocol doesn’t prescribe this; it is implementation specific. In the case of BIND, the zones get written to disk, and you can work with them using the rndc interface as usual.
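For BIND specifically, consuming a catalog is configured roughly as follows. This is a sketch: the addresses and zone names are placeholders, and while `catalog-zones` is a real BIND 9 option, exact syntax varies by version (newer releases accept `primaries`/`secondary`, older ones `masters`/`slave`):

```
options {
    catalog-zones {
        zone "catalog.example" default-primaries { 192.0.2.1; };
    };
};

zone "catalog.example" {
    type secondary;
    primaries { 192.0.2.1; };
};
```

Zones listed in the catalog are then added automatically as the catalog zone is transferred and updated.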

Brett also asked if there is some concept of approval of a zone before it's added.

Petr answered that this is again not prescribed by the protocol but is implementation specific.

There were no other questions.

DNS4EU Research

João Damas, APNIC

The presentation is available at:

Following the EU’s call for DNS4EU, João and Geoff set out to understand which resolvers people use. Their findings show that most consumers simply follow their ISP’s default settings. However, there is an undeniable emergence of centrality in the DNS. He asked the WG if there is room to establish a common set of operational practices for operators of DNS resolvers in all their forms.

Patrick Tarpey, Ofcom, asked what the results would look like if oblivious technology (Oblivious DoH, Apple Private Relay) became more widespread.

João answered that we would probably see an increase in localised traffic, which we are already partly seeing. Due to VPNs, he and Geoff had to do some manual fudging of the data, because they are increasingly seeing queries coming from places that look like data centres, which don't have any real users. The problem with VPNs is that there is no two-stage decoupling between who is asking and what they are asking for.

Patrick asked if he plans to rerun this series of tests.

João answered that it is constantly running and data is being added to the website.

Peter Hessler, DENIC, commented on the SERVFAIL test, which saw a huge volume of traffic going to Google. He noted there are a handful of networks which hand out their own resolvers along with Google Public DNS as a fallback in case their own fails.

João answered that this is exactly the behaviour they see, but that he cannot fault those networks.

Antonio Prado commented that the initial statement could be just part of the story, because among the DNS4EU requirements there are GDPR requirements, filters ordered by law enforcement, and options for parental control and wholesale parental services.

João agreed, adding that he is concerned that a lot of the wording is about control.

Kaveh Ranjbar, RIPE NCC, commented that the name space is a shared and limited resource, but the infrastructure discussion is about resolution, which is neither shared nor limited. He suggested forming a task force to evaluate whether somebody should have the right to regulate when citizens already have free choice.

João agreed.

Marco D’Itri asked why consumers use non-ISP resolvers, which have downsides. He guessed it is because their ISP resolvers are a) censored, b) serving unwanted ads, or c) unreliable.

Jim Reid, freelance consultant, commented that the DNS Working Group doesn't have much representation from big operator networks and that their input would be necessary. He also commented that the objectives of DNS4EU seem unclear and wondered what the EU Commission’s long-term goal is. He asked if they are going to police DNS. He commented that perhaps the RIPE NCC or the DNS Working Group could challenge the Commission to elaborate on this.

João answered that if the EU Commission is trying to achieve what they say they are trying to achieve, his research shows that there is already an alternative voted for by the industry.

Jim suggested that perhaps the EU Commission is thinking along the lines of the protective DNS services that are being used in the public sector in various countries.

Benno Overeinder, NLnet Labs, supported the formation of the task force that Kaveh suggested.

There were no other questions.


KINDNS

Adiel Akplogan, ICANN

The presentation is available at:

Adiel presented KINDNS – an initiative to follow the evolution of the DNS protocol and promote DNS operational best practices for better security and more effective operations. There are separate sets of practices for authoritative operators of TLDs, critical zones and SLDs, as well as for operators of closed/private, shared private, and public resolvers. Finally, there is a set of practices for hardening the platforms on which these DNS services run. He invited the audience to join the mailing list if they were interested in contributing.

There were no questions.


RIPE NCC DNS Update

Florian Obser, RIPE NCC

The presentation is available at:

Florian gave the RIPE NCC DNS update, highlighting K-root and AuthDNS developments since RIPE 83: lowering the TTL on NS and DS records to one day and one hour respectively, an update of Zonemaster to a newer version that supports Ed25519 and Ed448 keys, and a switch from split KSK/ZSK DNSSEC keys to a unified CSK.
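For context on the last point: at the wire level, DNSSEC key roles differ only in the DNSKEY flags field (RFC 4034). The Secure Entry Point (SEP) bit is conventionally set on a KSK (flags 257) and clear on a ZSK (flags 256); a CSK is a single SEP-flagged key used for both signing the DNSKEY RRset and the zone data. A small sketch - the function name is illustrative:

```python
ZONE_KEY = 0x0100  # RFC 4034 flag bit 7: this DNSKEY is a zone key
SEP = 0x0001       # RFC 4034 flag bit 15: Secure Entry Point

def key_role(flags: int) -> str:
    """Classify a DNSKEY record by its flags field."""
    if not flags & ZONE_KEY:
        return "not a zone key"
    return "KSK or CSK" if flags & SEP else "ZSK"

print(key_role(257))  # KSK or CSK
print(key_role(256))  # ZSK
```

Whether a SEP-flagged key is a KSK or a CSK is a matter of operational use, not wire format: a CSK setup simply stops maintaining the separate ZSK.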

There were no questions.

Shane Kerr closed the session and invited the audience to celebrate the DNS Working Group turning 30 years old with some cake.