

Note: Please be advised that this is an edited version of the real-time captioning that was used during the RIPE 56 Meeting. In some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but it should not be treated as an authoritative record.

RIPE 56 DNS Working Group, Session 1 2:00pm, Wednesday, 7 May 2008

CHAIR: So good afternoon everybody. This is the DNS working group session at RIPE 56. Anybody [unclear] more interesting should find their chance. This is the first of our two sessions. My name is Peter Koch and I'd like to guide you through this afternoon's session. First of all, I've been asked to make an announcement: people are strongly recommended to take extreme care of their property. We've received reports of things being stolen. That brings me, I guess, immediately to the administrative issues that we usually start the working group with. The working group has a website; it is on the screen. We also have a mailing list. Actually, I don't know whether that request address works anymore — everybody is doing web subscription these days. When you go to the working group's home page you'll find the archives and instructions on how to subscribe or unsubscribe. There are three co-chairs: Jaap Akkerhuis, who is hiding; Jim Reid, hiding in the front row; and myself. Next administrative issue: the draft minutes. There was the occasional typo in there, of course, so the only thing that remains is for me to declare them final and send off a mail to the webmaster to mark them as the final minutes for RIPE 55. Oh, yes. We have jabber monitoring, bringing comments in from outside. Actually, I've been told there's an anniversary to celebrate: it's the tenth time you're doing this job. The working group would like to [unclear] applaud this. You're encouraged to do it the next ten times. We have technical support and also the online stenography. For everybody who is going to make comments, we have two floating microphones which will be handed around. Everybody is asked to state their name and affiliation, and to only use the microphones, because of course this session is videotaped and broadcast. Do we have external listeners at the moment?


CHAIR: We have remote participants and those presentations have been uploaded and we're going to use the microphones. If you're talking into the microphone and making comments, please do not speak and read the stenography at the same time. Try to shy away from that.

This brings us to the agenda bashing. We have a long action item walk-through and a number of reports. Jim Reid will give a short report about the Trust Anchor Task Force, which was set up a couple of meetings ago and had an active conversation on the mailing list. We'll have a report from the IETF meetings that took place. Then we'll have an IDN report — the IDN fast track and what the status is in that regard. And then there will be an IANA report. After that we will look at the DNSSEC report. That will conclude the report section. After that we'll have a presentation from JPRS on conversations about DNAME when used for top level domains, and, concluding the day, there will be an outline of something the registry is going to start, and the speaker is soliciting information there. Just to keep you interested, the RIPE NCC report is scheduled for tomorrow, as is a longer talk about Unbound that you might have heard already — it is close to release, or released already. We'll have a report about DNSSEC. We'll have an overview of Microsoft issues. They'll share some interesting rules to capture DNS traffic and, if time allows, I'll talk about experiences when we changed. If you have any other business you're asked to approach one of the three co-chairs in advance so we can arrange for a minute or one and a half.

Any questions or comments so far? Any requests to change the agenda or drop anything off it? One, two, gone. Thank you. Okay, that means I'm almost on time. The action item walk-through is the usual part of the agenda in the first session. Just as a reminder, we have a URL up there. These are the three remaining open items; I'll take care of the first and the third and then ask Jim to talk about the second. So the first one, actually, is [unclear] indicated as a wishlist for lost delegations. It's an action item that's on me to come up with a document — there already was a draft document — to move that along, generating a wish list or making recommendations towards maintainers of reverse delegation space and/or top level domain registries, for when maintainers of name servers feel they are victims of lame delegations and can't get rid of them because the domain holder — in the case of reverse mapping — is gone: the zone is abandoned but the delegation is still active. It was suggested to have a look at what is happening in the reverse space. There hasn't been a new version out, but I have done some research in the background; there were just not enough results to come up with a new version.
For the reverse space it turned out that the problem is not, or didn't show up to be, that large, because there are opportunities to get rid of these delegations by way of modifying entries in the database [unclear]. What I think might be a good idea — and I'd like to have a sense of the room how big the problem is — for the top level domain issue: you have a delegation towards you, you don't know who the registrant is, but you know the registry because it's identified by the TLD. I'm considering going to CENTR and making a survey amongst the registries there about what the current policies for this are, and then coming back and maybe presenting that here to the name server operators, to see how far the policies could be adjusted, whether we could learn from those solutions, and whether any wishes remain or whether the actual policies already fit the needs of name server operators. So, anyone still awake have any opinion about this? Okay, that was two questions in one. Anyone still awake? Obviously — thank you. Anyone believe that this is a problem we should still address in this working group: getting rid of stale or lame delegations by being able to contact the registry directly or by whatever means? How many of you operate secondary name servers on a larger scale — that might be a hundred or a thousand plus zones? Okay, so do you feel this is a problem? Who does feel it is a problem? You want a floating microphone?

AUDIENCE: No, I don't, but you're going to hand it to me anyhow. Yes, it can be a problem. I've run into it. It was solved, so it's not a problem that's really hard to solve, even though I think the procedures in that case were strong. But it can be a problem and it can happen like this. We were serving another zone's information that was lame. We didn't serve it — the pointer to us was lame — but we got backscatter from a spam push where the spam generators sat in a prefix which we used to serve very long ago. So the [unclear] which were about to receive this spam came to us to query. So the problem can happen. We solved it that time. It wasn't solved quickly, but it was solved. So I don't want to retain this specific action item, but there can be a problem, yes.

CHAIR: Thanks. Anybody else? So Patrick.

AUDIENCE: Another thing — is that really a lameness thing?

CHAIR: It is because the queries shouldn't have hit us.

AUDIENCE: How did they get to you then?

CHAIR: A spam server sends spam to a mail server. The mail server asks for the reverse path of the sending machine, but that query is deflected to me because we used to have that address space ten years ago.
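The reverse path lookup described here works by mapping the sending address into the in-addr.arpa tree, so whoever still holds the delegation for that prefix receives the queries. A minimal sketch (the function name is illustrative, not from any implementation) of how a mail server forms that query name:

```python
def reverse_name(ipv4: str) -> str:
    """Build the in-addr.arpa owner name that a mail server queries
    for the PTR record of a sending IPv4 address: the octets are
    reversed and the in-addr.arpa suffix is appended."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

# A mail server verifying the sender 192.0.2.25 asks for the PTR at:
print(reverse_name("192.0.2.25"))  # 25.2.0.192.in-addr.arpa.
```

Whichever name servers the in-addr.arpa chain still delegates to for that prefix will see these queries — which is why a stale delegation keeps attracting traffic long after the address space has moved on.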

AUDIENCE: What I see happening, though, is a problem which might have more to do with the registry and registrar side: change of registrar, change of DNS operator for a domain, which is — to be really honest — I don't know many domains where it's specified how to minimise the amount of mistakes and downtime. That is where I see we lose delegations or servers, which is also a situation similar to the one he was in.

CHAIR: It has to do with the registrant–registrar–registry path, because the name server operator has no particular role in that and is probably the victim of stale delegations. This was initiated because someone from NCI had these problems. The customer lost interest in the domain; they were secondary for the domain. They couldn't get reasonable zone transfers in, but they could also not get rid of the delegation. Registry policies do not allow third parties to ask for the removal of a domain.

AUDIENCE: Yes, that is correct and that's one of the things that can happen. And also in Japan they had a problem where someone is not paying for a domain, so the name is removed from the registry, but you still have name servers hosting that domain — well, with these cross-domain name things you can do hijacking. But what I wanted to say is, about a year back, he promised to write a paper on how to do handover when these kinds of changes happen, to minimise operational instability. If Lee and I really get our act together and write the paper — and if we're lucky we might write it in English — so I almost promise that we can do something, but I don't know whether it's exactly this.

CHAIR: To cut this a bit short — I have to take a bit of care of the time as well. So, changing hats here, I suggest that we try to sit together and figure out whether your paper-to-be actually fits this, and maybe we can join forces and come up with some coordinated effort here; otherwise I sense that there's not too much interest, and the working group would not exactly die if we just cancelled it. Any opposition to that path forward? So that means the three of us —

AUDIENCE: What I miss here is owner action [unclear].

JIM REID: That's me.

CHAIR: Okay, but let the minutes show that, for all these things.

JIM REID: Me, Jim, me. The URL actually says who this is.

AUDIENCE: After one year there's no written text; I think that shows that there is no interest.

CHAIR: So your conclusion and recommendation is to kill this?

AUDIENCE: No written text — there is nothing to decide on, and it's been an action item for a year.

CHAIR: It's been more than a year. Anyone opposed to killing this? Okay. No opposition — killed, no opposition. Sorry? I'm doing the third one: RIPE-203-bis. The action item is on me. I received a couple of very helpful recommendations to add text and make this clarification and that clarification, which I appreciate; they were mostly delivered offline by a couple of people. The problem I have as the editor is that one of the major design criteria is, and was, to keep the document rather short, to increase the chances that it's actually read and followed; and all the 'oh, you could add this, you could add that' is adding and growing and outgrowing the usefulness of the document. So I haven't made up my mind yet how many of these recommendations I'm actually going to incorporate, because I don't think it's useful to come up with a 12-page document.

AUDIENCE: Maybe there can be a link in the document pointing to some website or some place where the additional comments and information are held, which maybe can be updated without updating the document.

CHAIR: From a procedural point of view that's probably possible. Yes, Liman behind you is shaking his head. Someone would have to maintain that website anyway.

AUDIENCE: That's exactly my point. URLs change. A pointer to an identifier of the document is fine, but not in the shape of a URL.

CHAIR: My fear would rather be that the URL — or more the content behind it — would not change, and there would be stale information after two years and so on and so forth. That would be a way forward to get the document published, but I'm not sure it would solve the problem. There is input to incorporate, and I am editing; the current version was just not finished enough to be published as a new proposal to the working group. But text has changed in there. So my recommendation, or plea, to the working group would be to keep this open and allow me one more iteration to come up with a version that can hopefully be last-called. Any objections? Any support for this? Who would like to see me work on that? Who won't? You're so nice. Okay. So for the minutes, I guess we keep this open and I'm not relieved. Next is Jim.

JIM REID: Thank you. Just before we go on, we're going to discuss 49.2 — so for those out there in jabber land, that's why there might be a jump in the conversation.

Anyway, DNS 49.2 is the DNS migration document that's been [unclear] on and on for some time now, largely down to my inability to work on the document in alignment with upcoming RIPE meetings. However, we've had a discussion and we feel that the working group as a whole is possibly tired of this document. We've made one or two changes to it in light of comments, but we're really not quite sure what should happen next. The document's probably not fit to be published as a RIPE document, and I think we're also not sure who the intended audience is anymore. So we feel that this thing has pretty much come to the end of its course, and we're proposing that, if there are no further comments or feedback over the next month, we would close this thing down and consider it done at that point. If the working group is minded to do something else with it, I think we have to go back a little bit and think about who the target audience is and how we cope with the fight we have to deal with in addressing the configuration of multiple diverse DNS implementations — this document could grow and grow and grow — and I think it would also require a very strong-minded editor to do the work on the document and to try to stop feature [unclear] if it were to undergo restructuring, which in my opinion is needed if this is to progress. The best thing to say is we're not proposing to kill it right here and now; it's up to the working group to decide. If there's no further movement over the course of the next month, we can declare it done and knock it off the agenda at the next meeting. Any comments or feedback on that?

AUDIENCE: Yes — is it on? Okay. Well, I can tell you the target audience, because I made some comments [unclear] with some issues that we saw in the registry: customers were complaining about it and there was no real reference document we could point them to on how they should do it when a delegation changes. We constantly keep telling our registrars to de-provision zones that don't belong with them anymore, and they simply ask us the question: why? You're not delegating to us anymore, so it shouldn't trouble you. But the fact is that practice shows that it does bother the registrant. So we as a registry feel we should have something like a document we can point them to and say, hey, this is where it explains why it should be done like this and how it should be done — and then they would accept it.

JIM REID: Your point is very valid and I agree that a document like this should exist; the problem is I don't think it's this DNS migration document, which was more concerned with the reverse trees rather than the forward ones. That would be a refocusing or redirection of this particular document. If we try to be all things to all people, I think it's going to lose a lot of focus, and with no such stuff in it, it won't make any sense.

AUDIENCE: This was originally taken up

JIM REID: The original came from Fernando Garcia; it was the renumbering document of his from some years ago. We tried to do some work on it, tried to get a drafting group together to work on it, and it was left to me to try to pick up from there. And we tried to focus on that. It was never intended to cover forward delegations as well as reverse, although there's overlap between the two.

AUDIENCE: I would like to see this move forward, for the same reason as the previous speaker said. Why don't we take up the offer that Patrick gave — that maybe he would work on it — and transfer the ownership of it?

JIM REID: Just a comment — well, I don't think we need to make a decision here and now, but what I think we're saying is: if it's going to move forward, we need to see clear signs of someone strong-minded to take over as editor and refocus the document before we start concentrating on the text.

AUDIENCE: I'm sorry — if you've cut the queue I'll sit down. The wish list for lost delegations and the DNS migration document sound like they're tightly tied together: if something doesn't migrate properly, it ends up getting lost. So I think those two things could actually be bound together, and probably ought to be. How do you maintain a robust and stable delegation as things change around you? If that's important, it doesn't matter if it's forward or reverse — it's how you maintain a delegation. And it's probably a useful document to have. When I saw "DNS migration" I was thinking: how do I change from BIND to NSD — which is entirely different. I would rather work on [unclear] that maintained —

JIM REID: So we send that proposal to the mailing list with a deadline.

CHAIR: I'll just reiterate: there needs to be some movement inside the room here to do further work on the document, and offers of help from Bill and others are noted. Please make these contributions on the list so that everybody, not just those here, can see what's going on. My opinion is the document will need to be restructured, and I think the first thing that needs to be looked at is how it is going to be restructured — try to get some broad working group sense and then find an editor responsible for progressing the document and either modifying or building on the existing text.

Okay. Thank you. Right — time to give you an update on what's been happening with the DNSSEC trust anchor repository task force. As you may remember, the initial idea for this task force spun out of discussions where we were concerned about how to get trust anchors for DNSSEC validation into a centralized place that could simplify the job of switching on DNSSEC validation inside operator networks. The result was that the working group was split down the middle: some people thought it was a good idea to have a centralized repository, and the other part felt it was a bad idea because it might take pressure off IANA to get the root signed. So we decided to create a task force to look at the issues in what would be involved in setting up and running a trust anchor repository, and report back. The charter was to figure out what the requirements and attributes should be, and not be too concerned about where it would be placed — maybe IANA, maybe the NCC, or somewhere else; that was a secondary consideration. Once we did that analysis and came up with requirements, we would report back to the working group and suggest next steps, and that's just what we're about to do. What did we actually do? We had a meeting at the last RIPE meeting, before the working group session took place, where we discussed the set of requirements; there was also ongoing discussion about whether they were desirable or undesirable and whether what was meant by the set of requirements was correct, and there [unclear] on liaison, because there was a mixture of what could be done there, and we kept track of what was happening at ICANN and the initiative. At RIPE 55 there was a presentation on IANA's efforts, and we were very, very careful that we didn't want to do something that would be seen as a rival or threat to that ongoing work. And Daniel did a great deal of behind-the-scenes effort there, and I want that to be formally recorded. I'm grateful for his assistance.

What has happened since the task force set itself up? The IANA TAR is due to go into operation soon. I'm not going to be unfair and ask when it's going to start, but I can tell you that at the meeting last month a resolution was passed approving the set-up of this. This is going to happen, barring a little delay for setting up procedural and other administrative tasks. This is going to happen. So the feeling of the task force is that the RIPE community really should get itself behind the IANA efforts: IANA really is the best place for this. It fits very nicely with IANA's key role in all the things to do with Internet registries, and also, if there were to be an alternate TAR, that could be destabilising. We also think that the requirements we worked on have potential use for, and could be useful background information to, ICANN and IANA, to inform them about certain aspects of the way in which a TAR could be operated and run. So we thought it would be a good idea to have some sort of communication go back to ICANN informing them of our views — not much more than informational purposes. So we came up with a letter that's been agreed by the task force members by a process of consensus. I put it out to the working group mailing list, I think two weeks ago, and there's been no substantive comment, so I assume we have consensus from the working group. I'm going to ask if the working group feels that that letter is satisfactory and would be comfortable with sending it to IANA or ICANN as a formal statement from the DNS working group.

Can I have anyone indicate if [unclear] not minding for that. Show of hands: who's in favour? Who's against? Asleep? Okay. The letter is actually posted — a copy of the letter is posted on the working group mailing list and also in the TAR mailing list archives. So our proposal now, since we have the support of the working group here, is that we send the letter, and essentially we'll put the task force to bed. I'll also be making a report to the closing plenary on Friday. Unlike the letter that we did for the sign-the-root thing, we're not going to ask the RIPE community as a whole to endorse this letter that's going forward. The rationale is that there are statements in it saying we are willing to help ICANN and IANA with requirements and documentation and so on, and I feel that's a commitment of this working group and not a commitment from the RIPE community, and I think it's unfair to ask the whole RIPE community for that support. So I think the best thing to do is say we, as a working group, are prepared to assist in IANA's efforts there. We're putting the task force to bed. In the extremely unlikely event that something goes wrong with the plans, we may need to revive this thing, but I expect that will not be necessary and we can pretty much wind this task force up as soon as the IANA TAR comes on stream — hopefully that will happen sometime in the next few months. So finally I wanted to say thanks very much to the [unclear] task force members and the others who contributed and made suggestions and comments on the text. Also special thanks to Daniel, because he did a lot of work behind the scenes; he did a great job making sure we didn't create too much consternation in other circles. And again, a great deal of thanks to Richard Lamb — he's really done a fantastic job on this IANA TAR. Thanks to everyone from me and everyone else here.

Questions? Okay. Thank you. We're done.

CHAIR: Okay. We have a report from the last two IETF meetings and maybe whatever happened in between.

ANTOIN VERSCHUREN: Okay. Good afternoon. I'm here to give you a short update on what happened at the last two IETFs, the one in Vancouver and the one in Philadelphia. I'm going to briefly go over some documents; if you want the details or want to see them for yourself, just go to the IETF tools site. I've put up some working groups you might be interested in: DNS extensions, DNS operations, and some other working groups that have to do with DNS. We'll start with the DNS extensions working group. Last time, in Amsterdam, I left you with the NSEC3 document, which was almost done. This document has been approved. NSEC3 is a DNSSEC resource record which is an alternative to the NSEC record, with the difference that you cannot do zone enumeration. This is approved, so you can see it deployed soon, I hope, somewhere.
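As a rough illustration of why NSEC3 prevents zone enumeration: instead of chaining owner names in the clear, the records cover hashed names. Below is a minimal sketch along the lines of the scheme in RFC 5155 (iterated, salted SHA-1 over the wire-format name, base32hex-encoded); the function names are illustrative, not taken from any implementation:

```python
import base64
import hashlib

def wire_name(name: str) -> bytes:
    """Encode a domain name in DNS wire format: length-prefixed,
    lower-cased labels, terminated by the root (zero-length) label."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    """Iterated, salted SHA-1 of the wire-format name, in the style
    of RFC 5155, returned in base32 with the extended-hex alphabet."""
    digest = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    b32 = base64.b32encode(digest).decode("ascii").lower()
    # Re-map standard base32 to the base32hex alphabet used by NSEC3.
    return b32.translate(str.maketrans("abcdefghijklmnopqrstuvwxyz234567",
                                       "0123456789abcdefghijklmnopqrstuv"))
```

Because only 32-character hashes appear as owner names in the zone, walking the NSEC3 chain reveals which hashed names exist, but not the names themselves.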

There was still some work left in the DNS extensions working group from the last time we saw each other. There's 2929bis, which is still in processing, and also 2672bis, DNAME. These have progressed: there are new versions of the documents with some changes, but basically the status is still the same as last time, though they did progress quite a lot. And there were some new items on the agenda of the DNS extensions working group. One is a review of EDNS0. Another is a new definition of AXFR, which is currently scattered around several RFCs, making it not so clear how it should work. There's also an overview of which documents are relevant to DNS, because at the last IETF there was a nice slide showing that if you want to go and do something with DNS you have to know about 131 documents — they should all be here, but it's not very clear anymore, if you're an implementer or operator, which documents you should be using. So an overview document was welcomed and there's work being done on this. Actually, last time we spoke the DNS extensions working group was going to sleep because there was not much work to be done, and now all these new documents and clarifications pop up, so the DNS extensions working group is back awake again. There was no working group meeting in Vancouver, but there was one in Philadelphia, so it's back alive — and when you're back alive, even more new proposals come to the working group. There was a document about RSA-SHA256 which was not a working group document but is now accepted as an item. There were discussions about a proposal to do AXFR over UDP, and also a proposal about DNS 0x20 — you know, why don't you use the upper-case/lower-case difference in query names and use those bits to secure your connection. In the end it turned out that it was not viable to release this, but what I'm trying to say is that there are even more proposals coming in to prevent spoofing in the DNS extensions working group.
So DNS extensions is more alive than ever. Please look at the website to follow what's happening there.
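The DNS 0x20 idea mentioned above can be sketched in a few lines: the resolver randomises the case of the query name (DNS matching is case-insensitive, so servers answer normally but echo the question name as sent), and a reply is accepted only if the casing matches, giving an off-path spoofer extra bits to guess. A rough illustration — the function names are mine, not from the proposal text:

```python
import random

def randomize_case(qname: str, rng: random.Random) -> str:
    """Flip the 0x20 (case) bit of each letter at random, as a
    resolver implementing DNS 0x20 would before sending a query."""
    return "".join(rng.choice((c.lower(), c.upper())) if c.isalpha() else c
                   for c in qname)

def accept_reply(sent_qname: str, echoed_qname: str) -> bool:
    """Accept a reply only if the echoed question name matches the
    randomised casing exactly; a blind spoofer must guess one extra
    bit per letter on top of the query ID."""
    return sent_qname == echoed_qname

query = randomize_case("www.example.com", random.Random(7))
assert accept_reply(query, query)          # a genuine server echoes it back
assert query.lower() == "www.example.com"  # still the same name to the DNS
```

With eleven letters in this name, that is 2^11 extra casings a spoofer has to guess — the anti-spoofing value discussed in the working group.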

Then the DNS Operations Working Group. There's still the reflectors-are-evil draft; network operators should read this draft. There are all sorts of drafts related to AS112, which is like a sink for RFC [unclear] queries; vendors and network operators should read those. These are the slides I gave you last time as well — almost the same. Also the document about the [unclear] size, and the one about the reverse mapping of addresses to names; those were the same slides I had up in Amsterdam. They did progress: there are new versions of these documents with some changes. So the DNS operations working group didn't do nothing. There were also some new items on the agenda. One is about resolver priming: when you start up a name server, how do you do priming, and what are the first queries you should emit? There's the DNS trust anchor document, which is new work being discussed right now. Another important thing: we sort of agreed last time to do a name server configuration and management protocol requirements document. There has been work done on that — it was done after the last IETF, so I haven't seen the real progress, but that's still something being worked on: requirements for a protocol for how you should manage your name server. And there's a new working group charter, which is basically just administrative stuff.

Then the ENUM working group. There were only some discussions at the last IETFs about things they're trying to do in the [unclear] working group which relate to the ENUM working group, where the discussion is about basic concepts of what you should and shouldn't put in DNS, and a recommendation was made that the DNS working groups would give an opinion on that. The rest was regular ENUM stuff, which you'll hear about tomorrow. Domain Keys Identified Mail: nothing really new there. The only new thing is it looks like they have a sort of consensus in that working group about putting keys for DKIM in the records. Lastly, there was a BOF held at IETF 71. There were discussions about whether the IDN specifications should be adjusted to the latest Unicode standards. So there will be an effort towards a new IDN definition, and there is a new IDN working group being formed.

AUDIENCE: It's called IDNAbis. It does something with IDN, right?


By the next report there will have been a working group meeting at the IETF, so there will be something to report on. That's basically it. If you want to participate in these discussions you can read the mailing lists, go to the jabber rooms, and I hope to see you at the next physical meeting, which will be in Dublin — not too far away from here — in July.

CHAIR: Thank you. Any questions or comments? Okay. For the IDNA part, I take the liberty of referring you to Patrik, who made the last comment, because he's one of the document editors for the IDNAbis documents. I'm sure Patrik is happy to answer questions.

PATRIK FÄLTSTRÖM: I can answer.

CHAIR: While we're switching the presentation for Bart, who is next, I'd like to ask the audience: how many of you are here for the first time in this working group? Okay, that's roughly 15, for the minutes. Thank you. I'd appreciate a bit of feedback on how you find the topics of today and maybe tomorrow — well, that feedback after tomorrow, please. Specifically, we are interested in getting feedback, not only from the first-timers but from all of you, as to whether the overall structure and agenda are suitable for you.

SPEAKER: To make it interesting, I will tell you first what this presentation is not about. It is not about the introduction of IDNs in general. It is not about the IDN protocol — some people are far better informed and can answer questions on that. It's not about the IDN guidelines as currently used, or how they may have to be updated. It is just an overview of what is happening regarding the introduction of a few IDN ccTLDs. So it gives you a bit of insight into policy development processes in another environment — to say it in Dutch, more or less a type of Mandarin science.

Okay, overview. There are two separate processes going on regarding the introduction of IDNs: one is the fast track and one is the overall policy. I won't go into the details; there are some slides I will skip. I will touch upon the charter of the IDNC working group, the group of people working on the fast track. There are some guiding principles which the IDNC has developed, and the most interesting part is the methodology, which is currently under discussion by the IDNC working group. The fast track — why a fast track process? There are some issues currently. The first one: if you talk about IDN ccTLDs, there's no such thing as the ISO 3166-1 list to identify the ccTLD strings associated with territories in IDNs. That's the first problem we have to tackle. If you look at the overall policy, the policy development and implementation might take two to seven years, which is quite some time — it's almost an assured employment scheme — and it's also very clear that, not particularly in this region but in other regions, there is an express need and demand for IDN ccTLDs. What is the focus of the fast track? To develop a mechanism for the selection of an IDN ccTLD string and to see whether the current mechanism for designation needs to be adjusted to cope.

I will skip this one: the charter of the IDNC working group. First of all, let me tell you who the members of the IDNC working group are: some ccTLD managers, members from the Governmental Advisory Committee, members from ICANN, members from the at-large constituency, the chair of the [unclear] — it's a very broad, community-wide working group.

The methodology itself. If you look at the current draft, you can say there are some guiding principles, which I will skip, but the most important one is that the working group has developed a four-stage process for the introduction of IDN ccTLDs, which I'll touch on later. The core of it is that the territory, when it wants to apply for a ccTLD, has to show readiness; there needs to be [unclear] support for the selected string. If you look at the overall process, we try to come up with a kind of checklist process so that the people or communities involved can check for themselves whether or not they are eligible to enter the process. And finally, and this is one of the overall principles, the final stages of the process and some of the elements are based on the current IANA process for delegations of ccTLDs. I'll skip this one and this one. The methodology: one of the core issues is selecting a string for an IDN ccTLD. The current thinking is that, as a first step, a language in which the string denotes the name of the territory needs to be selected, and if need be a script as well. As for the criteria for the language: it must be an official language, and there is a particular definition for it. It is not an official language as defined in law, say, but a working definition, particularly for this process. And the script in which the language is written needs to be non-Latin. So a [unclear] lot of Europeans with their Roman script are not eligible for an IDN ccTLD under the fast track. So first the official language; you see the definition and the source of the definition. And it is demonstrated in limited circumstances. Again, this is for you; if you're interested you can read it back. The second step is to prepare a language table. This is part of the IDN guidelines, ensuring that the IDN guidelines will be implemented by the IDN ccTLD, and this needs to be submitted to IANA.
The last stage in the preparation phase is to identify the string. For this reason, and this is different again from the ISO 3166-1 list, the string has to be meaningful: it has to represent the name of the territory in the selected language, or part of the name, or an acronym or abbreviation which is meaningful.

Then the final bit of the preparation phase is that the intended IDN ccTLD operator will be selected. Again, this has to be done in territory. All these steps in the preparation phase are done in territory. There is no role for ICANN or any other group to make you, or the territory, ready.

The next step is, again, what we call the confirmation phase: the language table needs to be submitted to IANA and put in the IANA repository. This probably requires a change of some of the current repository rules in the implementation. Just so you know, IANA will not check the accuracy, nor will it maintain the content; it is just the act of submission, which should make it work.

The confirmation phase, and this is currently under heavy debate, in particular the role, if any, of a committee of linguistic experts. Its task is to check whether the string is a meaningful representation of the name in that language. Especially the members from the Governmental Advisory Committee have some problems with this committee and its role as a confirmation check advising the ICANN board. There is also a proposed technical committee, as could be expected, probably because governments don't understand it. There is no real issue or discussion regarding the role of the technical committee; its role is to ensure the stability and security of the DNS.

Then, say, the final two steps are reporting and designation. In order to speed up the process and to ensure that the IDN ccTLD operator, or intended operator, will have some experience or knowledge about what he's doing, one of the proposed steps is that the ccTLD operator or manager will document the experience and report it at least to ICANN and probably to the broader community. That is still an issue to be discussed, so that people are aware of the experience of this IDN ccTLD manager, and this is something different from the current practices. When the applicant has gone through all these steps, the normal IANA designation rules step in and there is a request for delegation.

The time frame of the fast track as it currently stands: we're working on a final recommendation of the IDNC working group, which will be published around the 13th of June of this year, and at the ICANN meeting in Paris the ccTLD managers present and the Governmental Advisory Committee can express their support to enable the IDNC working group to submit their final report to the board. Hopefully this will happen at the Paris meeting, and then probably implementation, which will hopefully take just two to four months. So with this time frame, and if everything goes well, IDN ccTLDs could be requested by the end of this year, but there are a lot of uncertainties involved. Some references, and that's it. Any questions?


AUDIENCE: Just a clarification question. You mentioned that only non-Latin ccTLDs were eligible for applying for

SPEAKER: Under the fast track. That is the proposal.

AUDIENCE: For instance, Spain, in Spanish, this is written in Latin, but it's non-ASCII, it contains non-ASCII characters. Can they apply?


AUDIENCE: Definitely not?

SPEAKER: No. That means they can't apply under the fast track?

AUDIENCE: Understood.

SPEAKER: The fast track comes with these limitations. What comes out of the overall policy is written in the stars.

AUDIENCE: You mentioned something of registering with IANA

SPEAKER: [unclear] submitting the tables to the IANA repository.

AUDIENCE: Isn't that going to put IANA in a place where they define a language?

SPEAKER: No. If you look at the IDN guidelines as they currently stand, there is already an IANA repository and it's for language tables. And these rules or these the guidelines and the procedures are available at the IANA website.

AUDIENCE: My second question is: I saw that the ccTLD registry should be in territory. Is that where the decision should be made, or where it should actually be run?

SPEAKER: As to where it should be run, that's a question outside of the fast track, so the current IANA rules and practices apply, probably. As to the decisions, if you look at the fast track methodology as it currently stands, the selection of the string and the selection of the intended IDN ccTLD operator is within the territory; it's a national or in-territory matter.

AUDIENCE: It doesn't mean the actual operation should be in the territory? I can imagine if you were a small island somewhere where you don't have infrastructure

SPEAKER: That is outside the scope of the fast track and depends very much on territory.

AUDIENCE: The point [unclear], I guess, this is about the decision making not about the operations so you're making sure that the decision making happens within the community.

SPEAKER: Yes. What I skipped is, if you look at the overarching requirements for the IDNC working group, there are two or three which are really important, also for this group. The first is to ensure the security and stability of the DNS; that's why we came up with such a thing as a technical committee. The second one is to ensure compliance with the IDNA protocol, whatever it may be, and compliance with the IDN guidelines as they currently stand. And the third one, which is probably the most important one for ccTLDs, is that the current IANA practices for delegations and re-delegations should be used, because as soon as you start changing that you're back into a full-blown policy

AUDIENCE: There is no defaulting to the current operator


CHAIR: Any other questions? Can you grab a microphone.

AUDIENCE: I'm not sure if I understood well, but it was mentioned that for the selection of the IDN ccTLD operator, the endorsement of the current ccTLD operator would be necessary? Did I understand that correctly? There was a list of endorsements required?

SPEAKER: There is the incumbent ccTLD operator. If it wants to apply, it has to go through a national process. So there is no particular favouring of the current ccTLD operator; that is a national matter. There is another element which has been discussed: some members of the IDNC working group think that the current ccTLD operator has a role in the selection process itself, in order to ensure that the new IDN operator is capable. But in that sense, it's part of the local internet community.

AUDIENCE: So there is no conflict of interest deemed to be there?

SPEAKER: Not as far as the IDNC working group is concerned. If there is a conflict of interest then it's a national or in-territory matter, and that has to be dealt with in territory.

CHAIR: Okay. I guess that's it. Thank you very much for presenting this interesting policy matter to us. Next on the agenda we have an update from IANA, touching upon issues we've had on one of the previous slides and going into more detail.

LEO VEGODA: Hello. I'm here to, is this one on? Lovely. Okay. So I'm here to give a really quite brief update on two DNS kind of issues: the interim trust anchor repository for ccTLDs, and DNS measurements and address hijacking. Only two issues. Fortunately the IDN thing I thought I was going to present on was done by Bart, which is good because he knows what he's on about.

The trust anchor repository: the concept was approved by the ICANN board last Tuesday or Wednesday, depending on which time zone you self-associate with. Basically, we're going to treat the trust anchor repository in the same way we treat updates to any of the ccTLDs or anything that we manage; the same authentication methods will be used. And it's for TLDs, so if there's a second-level domain that wants to register and the TLD isn't providing the service, they have to apply whatever pressure people normally apply to vendors. Secondly, we will retire the service once the root is signed. Initially it will be a distinct service because we want to bring this online as quickly as possible; clearly there's demand for it and people want it. In the longer term it's going to be folded into our root zone automation system. I expect there are people in this room who have been working with Kim on the [unclear] system, which is the nice friendly way of submitting requests to IANA for updates to TLDs. We just want to get the [unclear] system out first, and then this would go into a future release. Those are all the details that we agreed upon in the last seven days. We sort of agreed upon what we would do before it went to the board, but we haven't agreed on the final details yet. When we have, we will make sure that we let people know. So, yes, that's really it on the trust anchor repository. If you've got any questions on it, please ask me, but I really don't have many answers because it's that fresh.

The next topic was, as [unclear] indicated, using DNS queries to try to measure the use of unallocated address space. It's something that I [unclear] mentioned at the last RIPE Meeting, and I've made a couple of presentations on this elsewhere. The general concept is that there are some people who are using IP address space that has never been allocated. That could make it a little bit difficult when that address space does get allocated, because it will make things difficult for the people who have that space allocated to them and also for the people who are using it on their private network. So this was a little bit problematic, and we had a little bit of a brainstorm and thought maybe we could measure DNS queries for these unallocated addresses. If someone wants the reverse DNS for an IPv4 address that hasn't been allocated, they will end up at the in-addr.arpa zone servers, and we can do those measurements, find some useful information, and report back to people. We did some initial studies, and they seemed to show we can get some useful information out of it. So now we're going to get a professional researcher to analyze the Day in the Life of the Internet data for a thorough analysis. If this slide means anything: contributing to the Day in the Life of the Internet is a really good thing, because you can not only benefit academics, you can benefit yourself, because that data can be analyzed to provide results that make it easier for you to operate on the internet. That's it. I don't have the results of the research yet because it hasn't been completed. Thank you very much.
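The measurement idea Leo describes can be sketched in a few lines: a reverse (PTR) lookup for an address maps deterministically to a name under in-addr.arpa (or ip6.arpa), so queries arriving at those servers reveal which address blocks are being used. This is a minimal illustration, not IANA's actual tooling; the 240.0.0.0/4 block is just a hypothetical stand-in for whatever was unallocated at the time.

```python
import ipaddress

# Hypothetical stand-in for an unallocated block; the real list would
# come from the IANA IPv4 address space registry of the day.
UNALLOCATED = [ipaddress.ip_network("240.0.0.0/4")]

def reverse_query_name(addr: str) -> str:
    """The name a reverse (PTR) lookup for `addr` sends to the
    in-addr.arpa / ip6.arpa servers."""
    return ipaddress.ip_address(addr).reverse_pointer

def concerns_unallocated(addr: str) -> bool:
    """True if a PTR query for `addr` points into unallocated space."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in UNALLOCATED)
```

Counting the queries for which `concerns_unallocated` is true, over a capture like the Day in the Life data, gives the kind of measurement described above.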


AUDIENCE: I have a quick question. The trust anchor repository is only intended for TLD keys, [unclear], or fingerprints of keys. The consumer is the random user, so how can these people keep track of the development? Are you going to post continuous information and beta-testing to the operational mailing list?

LEO VEGODA: We will make announcements when we go live with it, and I understand that the idea of the repository is that people should be able to automate queries to it, so that they can make sure they get extra keys as they're added and so on. I don't have those details yet because I don't think the design has been finalized. But the idea is to make it as useful as possible, and obviously, now that we're going to offer this service, we want to make it available for people to use. So we will post to the DNS working group list, the operations list and so on, so that people know that they can get this information and they can prime their DNS resolvers.

CHAIR: Any other questions? Thanks again, Leo, for this update. Next on the agenda we have an overview of recent ICANN SSAC work. Advance warning: if you haven't noticed, we're running over time.

PATRIK FÄLTSTRÖM: So, the material I have here. SSAC is one of the things I'm a member of; I work for Cisco. The presentation contains much more material than I will talk about; I was the buffer for either shrinking or extending my presentation, so I will shrink to minus ten minutes. SSAC is a group appointed by ICANN, the Security and Stability Advisory Committee. It advises the ICANN community and board on these matters. Sometimes SSAC can, and does, pick up issues ourselves. Steve [unclear] is the chair, and here you have the members; there are some in the room, not only me. SSAC publishes reports, and these are the reports we have published since the last RIPE meeting. This presentation includes information about three of them: numbers 24, 25, and 26. One thing I would like to point out is that SSAC got a question regarding PIR, so SSAC wrote a statement on that, and ICANN initiated an RSTEP process about PIR, so there is a public comment period at the moment. I also happen to be a member of that process. My guess, talking to Peter, is that Fast Flux hosting is what is most interesting for you, so let's go to Fast Flux. Report 25 was about Fast Flux hosting, and this is something the operational community like RIPE has to talk a little more about; there are both technical issues and implications for policy. The goal is to avoid detection and take-down of websites that are used for illegal purposes, specifically illegal purposes; the term is chosen very explicitly here. Today it's HTTP access, but it could be anything else. So what people are doing is they very rapidly change content in the DNS, so that no one site is used long enough to isolate and shut down. So even if the police or anyone else issues a shut-down order, the website is already gone.
So there are a couple of different variants of this. You have basic Fast Flux hosting, where the IP addresses of illegal websites are changed rapidly, and this is where ICANN comes into the picture; it's also the case that NS records change quickly, and you might have double flux, where both IP addresses and name servers are fluxed. If you flux by re-delegating names and transferring domains between registrars, and you understand there are automated tools with which you can talk to registrars, that is a quite large opportunity for people to hide. So what happens here? Well, you can guess what is actually happening: normally these websites, or both the name servers and the websites, are hosted on individual hosts, and then when those hosts go away the delegations are changed; and we're talking about TTLs of less than a minute. Quite often re-delegation of domain names happens [unclear] times a minute. So taking things down, actually finding and closing websites, is pretty difficult. And this, of course, is then used for spam and phishing and other sorts of things. So there are very different mitigation methods. First of all, it's important that whoever is doing these changes is authenticated, so that you don't mix Fast Flux and flux methods with hijacking of domains; this is the first step. Very often it is hijacking of a domain that is behind this. It's also the case that one can't prevent automated, scripted changes, because that is something that is legitimately used. It is possible to set a minimum TTL, but on the other hand there are certainly reasons for people to have a low TTL, for instance when you really are going to do a re-delegation of a domain that needs to be up all the time. You can also have various kinds of expanded abuse monitoring, but this is problematic to do if domains are also transferred between [unclear] registrars, because registrars are not allowed to share information about bad customers with each other.
It's also the case that we might need a universal terms of service agreement that everyone agrees to and signs up to, but Fast Flux and other types of things are also part of the domaining business, so there is a big grey market here where this generates money, making it difficult to enforce and get everyone to sign universal terms. There are also quarantine and honey-pot domains, and also [unclear] things which are difficult to implement on a global scale.

So what SSAC found is that this is actually something that happens; normally all SSAC reports describe what [unclear]: is this something that really happens? Yes, absolutely. It's also the case that the various mechanisms the community is using to stop Fast Flux and to dismantle botnets are not effective enough; the bad guys are winning here. The current methods used to detect and shut down illegal websites are not as effective as we want because of Fast Flux. Double Fast Flux is also used, and frequent modifications can be monitored to identify abuse: if you see a lot of changes to a domain name, that can be an indication, but no one is doing that monitoring yet; we don't have the tools. It's also the case that some registries enforce a TTL larger than 30 minutes, and those kinds of mechanisms are very effective. Just having a minimum, saying a TTL must be 30 minutes or larger, is an effective tool, but not everyone is practicing that.
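The monitoring gap mentioned above ("no one is doing that monitoring yet") could in principle be filled by something as simple as counting how often a domain's answer set changes inside a time window. A hedged sketch, assuming we already collect periodic DNS observations as (timestamp, domain, set-of-addresses) tuples; the thresholds are illustrative, not taken from the SSAC report:

```python
from collections import defaultdict

def flag_fast_flux(observations, window_seconds, min_changes):
    """Flag domains whose observed A-record set changed at least
    `min_changes` times within any `window_seconds` interval.

    observations: iterable of (timestamp, domain, ip_set) tuples.
    """
    by_domain = defaultdict(list)
    for ts, domain, ips in observations:
        by_domain[domain].append((ts, frozenset(ips)))

    flagged = set()
    for domain, obs in by_domain.items():
        obs.sort(key=lambda pair: pair[0])
        # Timestamps at which the observed answer set changed.
        changes = [obs[i][0] for i in range(1, len(obs))
                   if obs[i][1] != obs[i - 1][1]]
        # Slide a window over the change timestamps.
        for i in range(len(changes)):
            j = i
            while j < len(changes) and changes[j] - changes[i] <= window_seconds:
                j += 1
            if j - i >= min_changes:
                flagged.add(domain)
                break
    return flagged
```

A domain rotating its addresses every minute would trip this quickly, while a site doing a one-off re-delegation (the legitimate low-TTL case mentioned above) would not.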

So, the recommendations of this report: just like with all reports that SSAC is creating, we don't just go away; for more or less all the reports we're writing, we continue to watch very carefully what's happening. You see that in some of the other reports I have, like front running, where we say we cannot find any evidence that front running is going on; that's not to say front running is not going on. Those are two very different statements. What SSAC encourages is [unclear] the practices, to try to establish best practices to mitigate Fast Flux hosting, because that is a problem, and also to consider incorporating such practices in future accreditation agreements and whatnot.

So that's it. And there are other documents I will not talk about; you can read the presentation yourself and ask me separately. I'm here all week.


CHAIR: Any quick questions? No. Then I'd like to ask you to give us the overview on the use of DNAME and IDN TLDs; sorry for packing the agenda so much. We're running ten to fifteen minutes into the break. I appreciate your patience.

YONEDA YOSHIRO: My presentation is about DNAME issues regarding IDN TLD implementation. The background of this presentation is that IDN TLDs will be introduced in the near future, perhaps within one year, and DNAME, which is a DNS resource record, is considered a candidate for implementing IDN ccTLDs corresponding to existing ccTLDs. The use of DNAME is not highly tested, so we have some concerns about using DNAME for the IDN ccTLD implementation. So the purpose of this speech is to [unclear] up issues regarding DNAME in implementing IDN TLDs.

This figure shows the DNAME concept. The left-hand side is the delegation tree of the DNS zone, so the root delegates [unclear] to the TLD. So when Nippon, the Japanese IDN TLD, is DNAMEd to JP, it is mapped like this, but this mapping is not described in the JP zone itself; it is like a mirror. So when users ask for a name, for example www.example.Nippon, then the DNAME record causes a CNAME record to be synthesized for that name, pointing into the JP zone, and then they get the answer.
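The substitution the speaker describes (a DNAME plus a synthesized CNAME) is mechanical: the labels below the DNAME owner are kept, and the owner suffix is replaced by the target. A minimal sketch of that rewriting logic, with the ASCII label "nippon" standing in for the actual IDN label (which in reality would be a Punycode A-label):

```python
def dname_synthesize(qname: str, owner: str, target: str) -> str:
    """Compute the CNAME target a DNAME `owner -> target` implies
    for a query name below `owner` (the substitution of RFC 6672)."""
    q = qname.rstrip(".").split(".")
    o = owner.rstrip(".").split(".")
    t = target.rstrip(".").split(".")
    # DNAME applies only to names strictly below the owner, never
    # to the owner name itself.
    if len(q) <= len(o) or q[-len(o):] != o:
        raise ValueError("qname is not below the DNAME owner")
    return ".".join(q[:-len(o)] + t) + "."
```

So a query for www.example.nippon. against a DNAME nippon. -> jp. would be answered with a synthesized CNAME to www.example.jp., which is exactly the mirror effect shown in the figure.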

So there are three methods to realise the implementation. One is a DNAME in the root zone, but there is very large concern about root zone stability, so the second method is considered the candidate to implement the IDN: a DNAME-only zone. This method has the same effect as writing the DNAME in the root, and this method is also recommended, sorry, also described in the DNAME document, so a DNAME-only zone is very easy to implement. And the third method is writing a DNAME for each registered domain name, but this method is a little bit complicated, so I'm [unclear] focused on the second case.

So the issues regarding the use of DNAME are here, from two perspectives: one is the DNAME perspective and the other is the non-DNAME perspective. As I said on the previous slide, using DNAME increases [unclear], because it returns a CNAME or DNAME along with the answer for the domain name, so increased load for the root or TLD servers has to be considered. Increased load for cache DNS servers is also a consideration, because the name is looked up again. And there is a lot of old software running on the internet, so I'm not sure it can all handle DNAME or CNAME correctly. The first case is not [unclear] here today. The issues from the non-DNS perspective concern the management of services which recognise domain names. For example, web servers using name-based virtual hosts have to be configured with the additional name under the IDN TLD, but this has to be handled by the hosting provider or ISPs or the domain registrar, excuse me, domain registrant themselves. DNAME doesn't take care of this, so the registrant has to know that this must be maintained. So there are several proposed solutions for these issues. One is doing technical testing. The testing is for the evaluation of loads for the [unclear] TLD DNS servers. The coverage of cache DNS servers has to be very wide, because there are lots: BIND 4, 8, 9, Windows DNS servers, and so on. And the name resolution itself has to be validated, because we are not sure that old DNS software can resolve DNAMEd names correctly.

And after that, or rather before that, creating documents such as a list of issues and solutions, results of technical testing, guidelines for DNS settings, and guidelines for the DNAMEd domain, and sharing such documents with the general public.

I'd like to hear from you: are there any other issues, or other solutions or items for technical testing? And who should be involved in the testing? If you have DNAME experience, please let me know. And which community is appropriate to discuss this issue? Thank you.

CHAIR: Thank you


CHAIR: I guess we have time for one or two questions. Questions or comments.

AUDIENCE: Another issue that you might want to look at is that the people that get a domain name need to take care when they enter MX records, PTR records, and that type of record, because they can't enter the Nippon name on the right-hand side of those. I see you nod, so I guess you understand. It might be hard for users.

YONEDA YOSHIRO: So the domain name registrant has to know that the IDN name is implemented by the DNAME, so that they only have to maintain the DNS in the corresponding ASCII JP zone, and they have to know that the name exists only in the JP zone, so that, for example, a name under Nippon cannot be on the right-hand side of a setting. That has to be written in a document for the users.
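The registrant-side check the speakers agree on could be automated by a registry or hosting provider: scan the zone data for record targets that fall under the DNAMEd name, since such names exist only through synthesis and should not appear on the right-hand side. A hypothetical sketch over simplified (owner, type, target) tuples (real MX rdata would also carry a preference), again with "nippon" as a stand-in label:

```python
def rhs_under_dname(records, dname_owner):
    """Return records whose right-hand-side name lies at or below
    `dname_owner` -- a sign the registrant wrote the IDN name where
    only the corresponding ASCII (JP) name should appear."""
    suffix = "." + dname_owner
    return [(owner, rtype, target)
            for owner, rtype, target in records
            if rtype in ("MX", "NS", "PTR", "CNAME")
            and (target == dname_owner or target.endswith(suffix))]
```

Running such a check at zone-update time would catch exactly the registrant mistake described above, before the record is published.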

AUDIENCE: The question I have is, I think everyone recognises that it would be very, very good to have some kind of test bed to shake out these problems, SMTP configurations and so on. Does JPRS intend to set up a test bed, or are they looking for someone else to do that? For example, maybe you want to set up a mailing list where people could discuss what sort of things could and should be tested and how to go about testing them, and how does that relate to the IDN work that ICANN is doing in the root zone

YONEDA YOSHIRO: There is going to be a test for the DNS pretty soon, and we are going to open the Nippon test to the general public. I'd like to ask you again for any testing items, or to share your experiences; we want to know how DNAME can work. At this moment I don't have an idea for forming a community, but

AUDIENCE: Could I ask you to consider setting up something like a mailing list or website where members of this community and others could contribute to your formalisation of test requirements?

YONEDA YOSHIRO: Yes, it is very much appreciated.

AUDIENCE: Unfortunately, as I pointed out when you had this presentation before, it's also the case that [unclear] is already testing DNAME on request from ICANN, so you should talk to Liman.

CHAIR: DNAME is currently under revision, as we learned in the IETF, and should any operational or protocol problems arise, either from the test or your efforts, it would be useful to feed that back into the IETF feedback loop. Thanks again for this presentation. There's one presentation [unclear] thank you. One presentation left: Jakob is giving us an idea of a DNSSEC survey to come.

JAKOB SCHLYTER: Good afternoon. I'm here representing PIR, the Public Interest Registry. We're planning on arranging a DNSSEC survey following this RIPE meeting. The primary question is that we want to know the current status of DNSSEC deployment within the RIPE community, and we would also like to know what is needed to accelerate DNSSEC deployment, [unclear] and of course within the RIPE community. And the reason for this, of course, is the possible upcoming DNSSEC enabling on .org, as we heard about today. We have a number of focus areas: DNSSEC awareness, whether you are an ISP, a DNS hosting provider, what have you; commercial plans for DNSSEC services as well as technical plans. These two tend to be mixed together sometimes, or probably not.

We are possibly not interested in exactly what commercial plans you might have, but if you have commercial plans and also if you have seen any legal problems or similar in this area. Or maybe if you even have solutions to them.

Technical plans, of course, are usually something that people start with, for some obscure reason, probably because that's the easiest part. We're interested in plans for key management and zone signing if you provide DNS hosting, but also platforms for DNSSEC validation if you are an ISP doing validation and DNSSEC resolving for customers and users.

We would also be interested in whether there is a need in the community for some sort of DNSSEC self-certification, like "DNSSEC ready" or something that can indicate to customers that you as their provider will do DNSSEC in some way, or in what way. That is something we've seen is a problem for customers today: actually understanding whether their ISP does DNSSEC in its resolvers, and so on.

And also, what do you need to actually move forward with DNSSEC, if you have any plans? Is there anything we can do from the TLD level regarding educational material? Do you need technical solutions? Software? Whatever. So I am basically here to ask for feedback on this. I'm here for the rest of today and tomorrow before lunch. We'll send something out a couple of weeks after the meeting. If you have feedback, it will be highly appreciated at this time. We have two email addresses that we would like you to use.

AUDIENCE: Two questions. Is the outcome a positive outcome by whatever that positive is being defined, a prerequisite to deploy DNSSEC on .org or will .org deploy DNSSEC anyhow? So no matter what the outcome of this investigation is? Will DNSSEC be deployed on .org?

JAKOB SCHLYTER: I believe there's a process for that to be happening. I'm not sure if it's depending on this survey.

AUDIENCE: No it's not is the answer.

AUDIENCE: Two: there will be a number of results. Will they be internal, or will you publish them in some way, so that those of us interested in providing helpful tools to the community can actually start building those tools?

JAKOB SCHLYTER: I would expect the summary to be sent back to the community, of course. The specific results, I'm not sure; the specific answers are probably

AUDIENCE: The results and of course if there are specific requirements from the community for specific tools that would be very interesting to hear.

JAKOB SCHLYTER: I assume there's a give and take approach in this case.

AUDIENCE: Just to amplify a little bit on that answer: I've been having discussions with the chief executive at PIR, and her intention is to have some sort of public publication of the survey work you're going to be doing. As to what that is, I don't know, but the intention is that it will be somewhere. And I think the plan is hopefully to have you come back to the next meeting and explain what you dug up over the summer.



CHAIR: Thank you. And this brings us to the end of this afternoon session. I thank you all for staying over time. I thank the speakers and presenters for their interesting [unclear] and everyone for taking part in the discussion, and especially the scribes of the various media, including this fantastic online transcription, for doing all their work, including the technical staff and so on and so forth. I'd like to thank my co-chairs for helping me bring up the agenda, and I would like to invite you to tomorrow's second session. Enjoy the coffee break now.

RIPE 56 DNS Working Group, Session 2 11:00am, Thursday, 8 May 2008

CHAIR: Good morning everybody. This is the second session of the DNS Working Group. Just one or two simple administrative things. For the benefit of those out on the web listening to us or on jabber sessions, please use the mikes when you have any comments, and please identify yourself as part of this process, so it's part of the overall record and those not in the room will know what's going on. Also, any mobile phones or pagers, please put on standby. So we have our jabber scribe. We have our minute taker Adrian, who's on his tenth anniversary. And a very nice lady here doing the stenography, the real-time transcription.

First item is an update from our friends at the NCC. Anand is first up.

ANAND BUDDHDEV: Good morning, I'm Anand, and I'm going to do a quick update on what we've been busy with since RIPE 55. First, just a quick introduction. There have been several changes in the DNS team in the last year. Now we have some stability: there's myself, the DNS services manager, Sjoerd on my right, and Wolfgang from Austria. So that's the team.

First, a quick overview of the services we manage in our department. We handle reverse DNS delegation [unclear] for the IPv4 and IPv6 space allocated by the RIPE NCC, we provide secondary DNS service for several ccTLDs, and we also operate several instances of the K-root root server. We're responsible for the technical operations of e164.arpa, which is also known as the ENUM zone. We also operate an AS112 node, and finally we are responsible for DNS for the forward zones of the NCC, that is ripe.net and the related zones.

This is a graph of the query rates that we see. There are three colours here, and these represent the three busiest of our servers. The bottom one in red is known externally as [unclear], and this is the server that acts as secondary server for many of the /16 allocations that we have.

Above that, in an orange-brownish colour, is NS-PRI. And the one in yellow is NS-SEC, which handles queries for the several zones that we secondary.

As you can see, we receive in excess of 50,000 queries per second on an average day.

Some numbers about our reverse and forward zone DNSSEC status. As you can see, the number of zones hasn't changed since RIPE 55, but the number of signed zones has gone up by one, and this is due to the ENUM zone that we signed in November. And there are some NS record numbers there for you, and some DS record numbers; these have gone up slightly since RIPE 55.

A little bit about ENUM operations. ENUM operations are quite stable. We signed the zone back in November 2007, and since the 25th of March we've been accepting secure delegations. I did a separate presentation about this in the ENUM working group, so I won't go into any more detail here.

We move on to K-root operations. We still have 17 instances; there's been no change in numbers since the last meeting. However, we have been very, very busy updating the OS. We've upgraded to CentOS 4 and we've upgraded to NSD 2.3.7, and during peak periods we get 16,000 queries per second. You can see the huge bands: the one in green represents the instance at LINX, which gets about 16 thousand queries a second, and the other colours represent all the other instances.

One very important change since RIPE 55 has been the introduction of K-root's AAAA address into the root zone. This happened on the 4th of February 2008, and this graphic actually shows you exactly when it happened. Up until the 3rd of February there was a background hum of queries coming over IPv6. This was probably coming from resolvers where K-root's IPv6 address had been hard-wired. The query rate then went up steeply and stabilised at approximately 120 queries per second. Most of these queries actually come to our instance at AMS-IX, which is the best IPv6-connected instance that we have. As you can see, the sky hasn't fallen down. Everything is stable; the introduction of IPv6 addresses for the root servers has not caused any operational problems and everything is just working fine.

So some more information about our instances. At the moment we have five instances that are announcing our IPv6 prefix. Two of our global nodes, in Amsterdam and Miami, are announcing a /32 prefix, and the nodes in [unclear], Geneva and Budapest are announcing a /48. If anyone here would like to peer with us, please get in touch with us at DNS help.

This graph shows query rates from a more recent period, from two weeks ago. The rate has gone up a little since the 4th of February, to about 140 queries per second at peak time. We haven't done extensive analysis on the queries coming in, but we have had a look, and most of the queries are standard, not much garbage. A lot of the queries originate from one particular provider in Germany who appears to be running a cluster of mail servers over IPv6, and these often do reverse lookups over IPv6.

What we've also been busy with is thinking about K-root traffic engineering. It's been quite stable, unchanged since it was originally established, but we think it's time we made some improvements. The current status is that we announce a /24 prefix from all the instances. AMS-IX also carries a covering /23 prefix, and this is so that a customer who peers with our local nodes will not otherwise see a black hole to K-root. To ensure that our local nodes are preferred over the global nodes, we prepend K-root's AS number two or three times at the global nodes, and at the local nodes we set no-export.

There are some drawbacks: the inconsistent AS paths lead to unbalanced query loads. Our prefix announcements are uncoloured at the moment, which means that when we want to debug networks it's sometimes a little difficult to see which prefix is being originated from which instance. And of course, if the announcement from AMS-IX disappears, then we black-hole some customers.

So we're trying to solve some of these issues. This is a little graphic that shows you this problem. We've looked at the source addresses at our Delhi instance, and a lot of these queries come from the United States. These should be going to our instance in Miami, but they're not. This is an example of some of the discrepancies that we see, and we want to fix them. We want to do away with the path prepend, which will allow BGP to select the nearest node. We want to announce the /23 covering prefix from all the global nodes, announce the /24 from all the globals, and only announce the /24 from the locals. We also want to attach [unclear] to the announcements of the local K-root instances, so the routes are coloured and we can track them more easily.

One of the other things we've been doing is DNS lameness checks. We've been doing these for the past seven months. The code is now production quality and runs once a month. We have data from the last seven months, and the information is published on the RIPE NCC website.
And we're now working on finalizing the next phase where we will actually be sending emails out to all the contacts to notify them that their servers are lame for particular zones and the target for this is July this year.
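The heart of such a check is small: ask each listed name server for the zone's SOA record and see whether the reply is authoritative. The sketch below shows only that classification logic, with made-up names and values; it is not the RIPE NCC's actual code, which also has to deal with timeouts, retries and contact lookups.

```python
# Classify one check result: a delegated server is "lame" for a zone
# when it answers the SOA query without the AA (authoritative) bit,
# or does not answer it successfully at all.
def classify(rcode, aa_flag):
    if rcode == "NOERROR" and aa_flag:
        return "ok"
    return "lame"

# Aggregate results the way the statistics slide does: the share of
# checks that came back lame.
def lameness_percentage(results):
    lame = sum(1 for r in results if r == "lame")
    return 100.0 * lame / len(results)

checks = [
    classify("NOERROR", True),    # authoritative answer: fine
    classify("NOERROR", False),   # answers, but not authoritative: lame
    classify("REFUSED", False),   # refuses the query: lame
    classify("SERVFAIL", False),  # server failure: lame
]
print(lameness_percentage(checks))  # 75.0
```

A real checker would fill `rcode` and `aa_flag` from a live SOA query to each delegated server and feed the per-zone percentages into the monthly report.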

This is an example of the template that we're going to use to send the email messages out. I'm not sure you can all read it, but this message is sent out to a particular contact to tell them about the zones they're responsible for and that the zones are lame.

Here are some statistics from the last few lameness checks that we've run. The green bit shows the servers that are working fine. The red bit shows servers that are lame. The average amount of lameness is 9.5%. Some zones, such as 86.in-addr.arpa, are at just 2.2%, and some are at 100% lameness. There are approximately five thousand contacts in our database at the moment, which means about five thousand emails will be sent out.

The RIPE NCC operates secondary DNS service for many ccTLDs, and one of the potential problems is competition with our members. Our members have made it very clear to us that we should start phasing this service out, and we did that for five zones last year and are continuing with this process. .LT has been phased out, and we're talking about phasing out [unclear].

We want to finalise the lameness checking project and start sending emails out to people to tell them to fix their servers. We are also busy with hardware end-of-life, replacing servers. We are still pushing ahead with improved IPv6 peering for K-root, as well as transit. We are talking to a few people about the K-root anycast infrastructure. One problem that we've discovered in our software is that the provisioning tools do not support zone transfers over IPv6. So if any of our members are running IPv6 only on their name servers, then our server can't do zone transfers. This is something we want to fix, which will make our zone transfers IPv6 compliant. And we're reviewing our DNSSEC infrastructure, which is a little over two years old; we want to replace the servers and the software and review all our policies and procedures. That's it from me. Any questions?

CHAIR: Thank you. Any questions?

AUDIENCE: Lars-Johan Liman. Just a question: for which domains do you run these lameness checks?

ANAND BUDDHDEV: We're running them against all the /8s delegated to the NCC by IANA, and the zones under them.


CHAIR: Any more questions? Okay.

ANAND BUDDHDEV: Thank you everyone for listening.


CHAIR: Next up, we have a presentation on Unbound. There are T-shirts at the back of the room. Let's see what he has to say about this new resolver.

WOUTER WIJNGAARDS: So hi, I'm Wouter. Can you hear me? So: Unbound, a validating caching resolver. First I'm going to talk to you about why another resolver, why do this, the introduction. Then I'll briefly skim over the features; resolvers have lots of features. We don't want too many, but you need some: trust anchors, authority service, the modern paranoid resolving. I'll briefly skim over the elegant design that we're trying to have, and then I'll go into the testing it's received and hopefully convince you that it's really good, and I'll have a summary.

So why did we make another resolver? Basically, in the DNS there aren't that many open source validators right now; you've got BIND, and this is going to be the alternative choice to having a BIND validator, as well as an alternative choice as a resolver. That's more diversity out there, and if one implementation gets a horrible bug, the other one doesn't, and this makes the world a better, safer place. Where do we envision deployment? On your workstation, as a local or localhost DNS resolver or validator; at large ISPs, and a couple of them are using it already; and as a validating library for applications, having it as a stub in your application to do the validation right in there. At NLnet Labs we're a not-for-profit, public benefit foundation. We developed NSD, which is a DNSSEC-aware, high-performance authoritative name server. This project does recursion but not authority. That's the idea here.

So we've tried to keep some of the experience and good ideas from NSD and use them while building the Unbound thing. It has existed for a long time already; I'm presenting it as a new product because it's available for use right now, but its development started quite some time ago. It was thought up by the good people from NLnet, [unclear] and Nominet, who wanted to try out the various ways of doing DNSSEC, and in 2007 we came around and said, okay, we'll make a solid C version which is production ready, and we can take on the maintenance. So the first version was available in January, but there's currently a release candidate, 0.11, and this is going to be called 1.0. If you want a preview of that version, you can download it right now. We've had substantial testing from SWITCH, Access for All and IIJ, for which many thanks.

These are the basic features. A resolver needs a couple of things to deliver the service as a modern resolver. So we have IPv4 and IPv6, dual-stack support, and IPv6-only and IPv4-only. We have access control. DNSSEC [unclear], NSEC3, and ready for SHA-256. It has the useful tools for system administration, something to check a single validation to see if your validation is working. And, very important, we've made some good documentation, so there are man pages that are readable, there is a website with documentation and ease of use, and the code has been extensively documented as well, with Doxygen, which is a Javadoc-type tool. We have a feature, not entirely basic, of thread support: you can enable threads. You don't have to, but you can, and use multiple CPUs to scale your performance, if you want to do that.

These are what I would say you need to do if you want to make a validator. But then Unbound has some more, so there are two things I wanted to highlight: our trust anchor features and the authority service. Trust anchors are very important to a validator.
And you heard earlier today and yesterday talks about TARs and the RIPE and IANA trust anchors. You have to get them into the validator, and I've tried to build in as many ways as possible to get them in there, for ease of use. It's possible to enter many trust anchors, in case there's no root signing and there are many [unclear] islands; it can cope with that. You can enter the DS or the DNSKEY, and we can read zone format and the BIND format keys, not the rest of the configuration, just the keys: anything to make it easier for you to enter the key into the validator.
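As a rough illustration of the options just mentioned, an unbound.conf fragment with trust anchors might look like this. The key data and file paths here are placeholders, not real keys.

```
server:
    # An anchor entered inline, in DS record form (placeholder digest)
    trust-anchor: "example.nl. DS 12345 5 1 0123456789abcdef0123456789abcdef01234567"
    # Anchors read from a file in zone-file format (DS or DNSKEY records)
    trust-anchor-file: "/etc/unbound/anchors.txt"
    # Keys written in BIND's trusted-keys { } syntax; only the keys,
    # not the rest of a BIND configuration
    trusted-keys-file: "/etc/unbound/bind.keys"
```

Any mix of these can be used, one anchor per island of security if there is no signed root.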

Another conscious choice has been the authority service: it's largely absent, as much as possible, because we wanted to avoid complicating the program. However, it's necessary to have some sort of authority service, mainly for reverse zones, so as to be able to block outgoing reverse lookups for 10/8 and that sort of thing, and to be able to serve your localhost and so on. It's really tiny, just enough to do that job. You can block domains as well with it, which is what ISPs need nowadays.
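For example, the tiny authority service can be configured like this in unbound.conf; the domain names and addresses here are illustrative, not defaults.

```
server:
    # Refuse to resolve a blocked domain (what an ISP block list needs)
    local-zone: "blocked.example." refuse
    # Keep reverse lookups for 10/8 from leaking to the internet
    local-zone: "10.in-addr.arpa." static
    # Serve a couple of local names straight from the config
    local-data: "printer.office.example. IN A 192.0.2.10"
    local-data: "10.2.0.192.in-addr.arpa. IN PTR printer.office.example."
```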

As well, if you really need to have some sort of local zone of your own at your site, you can use a stub zone to refer to that real authority server, and keep a clean separation between your authoritative and recursive servers.
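A stub zone pointing at an internal authority server is only a few lines of unbound.conf; the zone name and address below are made up for the example.

```
stub-zone:
    name: "corp.example."
    stub-addr: 192.0.2.53
```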

Another feature for modern resolvers is paranoia; this is what you get when there's no DNSSEC. With DNSSEC you've got keys, so there's no reason for paranoia; if you don't get a signature, all you can be is paranoid. We do the paranoia as the RFCs tell us to. RFC 2181 is our trust model: if you get a little glue address in some reply, you're not going to trust that; there's a ranking between the trust levels, and we filter out glue. We do all the recommendations from the recent DNSOP draft. There's a cryptographic random generator, not one that's doing some easy mathematical modular function in a weird number space. Query name matching, source port randomization and RTT banding. We have an experimental option in case you want to get even more out of your forgery resilience. So if you don't have DNSSEC, it's going to be a good-quality, secure resolver as well. That's the point.
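The value of combining the 16-bit query ID with source port randomization can be put in numbers. This back-of-the-envelope sketch is my illustration, not from the talk, and the port pool size is an assumption, not Unbound's actual default.

```python
import math

# How much an off-path spoofer must guess to forge a reply: the 16-bit
# DNS message ID, plus whichever source port the resolver picked from
# its pool for this query.
def forgery_entropy_bits(port_pool_size):
    id_bits = 16                       # DNS message ID is 16 bits
    return id_bits + math.log2(port_pool_size)

print(forgery_entropy_bits(1))        # one fixed port: 16.0 bits
print(forgery_entropy_bits(64512))    # ports 1024-65535: roughly 32 bits
```

Going from a single fixed port to a large randomized pool roughly doubles the number of bits an attacker has to guess, which is why the draft recommends it.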

So the main design of Unbound was envisioned, from the start of the Java prototype, to be easy and have nice modules. I've tried to keep that design as much as possible. This is not Java; this is ANSI C, portable anywhere.

So the basic design, to make threads work, is to have worker threads, and every thread is a fully capable resolver in itself, but they have a shared cache that they can access. It's modern and efficient. You can set the memory use from one megabyte to whatever you need.

The basic idea is that a query comes in and enters a query mesh, which holds the recursion dependencies of one query on another, with a little state machine associated. The validator is going to go and fetch this chain of trust and do its validation, and it passes on its request to the iterator module. That's doing the entire recursion thing, what you think of as a recursive resolver: name servers out there, send queries, get their replies, see if we need to chase down delegations and that sort of thing. It's going to make outgoing queries to the authoritative servers, or however you've configured your set-up, and once it gets the replies it uses the infrastructure cache, then passes them back to the validator, which is going to use its key cache, and then the answer will be passed back to the client.

We're storing not just the records; we're also storing the format of the message in a separate message cache. We don't need to assemble the message: when the query arrives, we know the format and can quickly reply to it.

The modules are designed to be easily replaceable and changeable. So, for example, you can take out the validator already in the current version, and what you get is something that doesn't validate but is still a recursive server, and it's possible to add extra modules in there as well. The idea is to be extensible and have a clean make-up.
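The message cache described above, whole answers stored pre-formatted so nothing has to be assembled at reply time, can be sketched in a few lines. This toy version is only an illustration of the idea; Unbound's real cache is a sized, lock-protected hash shared between the worker threads.

```python
import time

# Toy message cache: pre-formatted replies keyed by query name and
# type, expiring after their TTL.
class MessageCache:
    def __init__(self):
        self._store = {}

    def put(self, qname, qtype, wire_message, ttl):
        # DNS names compare case-insensitively, so normalise the key
        key = (qname.lower(), qtype)
        self._store[key] = (wire_message, time.time() + ttl)

    def get(self, qname, qtype):
        entry = self._store.get((qname.lower(), qtype))
        if entry is None:
            return None
        message, expires = entry
        if time.time() > expires:      # TTL ran out: treat as a miss
            del self._store[(qname.lower(), qtype)]
            return None
        return message

cache = MessageCache()
cache.put("www.example.com.", "A", b"<pre-formatted reply>", ttl=300)
print(cache.get("WWW.EXAMPLE.COM.", "A"))  # a hit, despite the case
```

On a hit the stored bytes can go straight back to the client, which is why cache performance in the graphs later is so close to an authoritative server's.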

In building the C version, I thought: this is going to be a good version, I need it to be really stable and solid, so I've done extensive testing on it. The first class of tests is regression tests, or maybe unit testing. That's for the sort of code that handles wire format and key and signature encodings and stuff like that; that part has been unit tested. As well, there is a reasonably unique feature, although I don't know what the other resolvers do: a test infrastructure where we can present a synthetic or captured trace to Unbound and see how the state machines handle it. It's been used to test all the state machines to see if they do what they're supposed to do; in DNS you can have weird corner cases with CNAMEs and whatnot. And there have been basic functionality tests, which you would expect us to do: start the thing, do queries, and see that you get the right answer. We have also done some beta tests in real life, with real user queries, which has been very useful. Apparently the users don't notice, because they get the right answers.

So the other part of this is performance tests, because we would like this to be a high-performance server, but how do you know? From the start I've tried to code it to be a high-performance server, but it's a resolver. So there are two kinds of performance: cache performance and recursion performance. When you do recursion you have to send queries somewhere else, so it's going to be slower by orders of magnitude, because a query somewhere else is going to take milliseconds, whereas your memory is nanoseconds. And the problem with recursion tests is that you can run them now against the internet, but that's not entirely fair, because if you run it again it will be different; someone might have tripped over a wire somewhere. I've tried to make a set-up where we have a known, stable environment, where the tests are fair and comparable. So I've extended the test lab that we have to work on resolvers, and what you've got is the [unclear] in the middle here, the caching DNS server; you can configure any of the ones on the market. We point them to some sort of internet, which is basically served by three servers that we have: one called root, one TLD, and one called server. We play back a number of queries toward it using tcpreplay from a recording, with a spoofed return address, so all the answers end up at a little capture device that's going to get all the answers. And by having the authority servers serve a large number of domains, like www.example.com, we have something reasonably close to the real world, but stuffed into a known environment.

Okay. So these are all different computers, so the actual caching server is not bothered by our testing equipment. It can run on there and go.

What does it do? First there is cache performance. There are lots of lines in this graph; I'll start at the top. This is echo, which is just showing you the system performance out there. Our system is performing up to 16 thousand queries a second and then the system drops off; echo is just echoing the query back, so at the end this is going to stop. Next we have the green line, which is NSD. That's not a recursor either, but to make it fair we've loaded all the zones, as if NSD were authoritative for the entire internet and had the answer already. It's giving you a correct answer and sending that back as fast as possible; that gives you the performance indicated by the green line. Then in the blue line, the dark blue line, you can see Unbound: it's got the entries in its cache and it's responding, so it's pretty similar to what NSD is doing, apart from NSD being authoritative. And the performance is pretty close as well, which is the goal. We can't beat NSD here, because it does zone pre-compilation, if you know NSD, and a recursor can't do that because the data is going to be changing all the time; so we're a slight bit slower. Then further on in the graph you can see PowerDNS 3.1.4. In the same situation it gets a short burst to get the information into the cache, and then we query it at a high rate and see how many queries it can still answer. At 40 thousand queries a second the cache response is 40%. The light blue line is BIND, the latest version I could get, which is just a little faster than the last one. The basic picture here is that Unbound is close to NSD, and PowerDNS is just a little bit ahead of BIND. I'll tell you about our server set-up: we're set up using Ubuntu, 1.6 GHz machines, using gigabit cards. The important bit in this slide is the 95% line: that's where you want your performance to be, for 95% of your users to get an answer.
If you want to compare them, look over there at the little bending point where it goes over 100%, bends down a little, and drops off really fast. You want to be at the start of that droop and compare your numbers there. If I look at that, it looks like Unbound is twice as fast as BIND or PowerDNS.

Of course there's also recursion, and recursion looks very different. For one thing, the echo system is able to keep up much better, because echoing the [unclear] queries doesn't need to send queries somewhere else. How do we force recursion? We force recursion by having unique queries: every [unclear] query is different. It is still possible to cache the name server data, and they do that. And it's the same set-up otherwise. What you see here is Unbound in dark blue, with the same versions of PowerDNS and BIND below that. They're closer together, because they all need to query the same authority server, served by NSD, and get their answer before they can send a reply out. They're all the same shape: they start out nicely, then suddenly there's a dip, and then they smooth out at the end, to some sort of stable value, it seems.

So here again, it's the 95% point that's the most important.

If you want to say things like "it's twice as fast as the other one": I did some testing where I tried different query types. I've got different types of queries, and at the top I'm just using queryperf, which counts the number of replies. I've used the three different recursors; Unbound seems to be faster. Then you have the ones where usually PowerDNS leads over BIND, with an interesting dip here: PowerDNS seems to get a performance drop. And the other takeaway is that if you want to say something with statistics, using a different query type could be really interesting for your marketing department. Other than that, this is not really fair; I think the previous slides were much fairer in their set-up. Different query types and different set-ups can affect the outcome. Unbound is performing well.

In summary, we have Unbound, a validating, caching resolver. Open source licence, DNSSEC, standards compliant, high performance, portable, supported by NLnet Labs, and we announce changes to support two years in advance.

And if you have any questions, please ask. There are T-shirts right over there.

Any questions?

AUDIENCE: Robert Martin. I just want to know: you mentioned a lot about the trust anchor repositories and managing a lot of keys. Can you also facilitate DLV, like if IANA set up one of them?

WOUTER WIJNGAARDS: If IANA set up one of those?


WOUTER WIJNGAARDS: DLV doesn't really scale to large volumes, so I didn't implement it. But if it were necessary, I would implement whatever is needed to get keys in there.

AUDIENCE: In terms of DLV, it seems to be a little bit the way people might be going if you want trust anchors for TLDs, IANA only...

WOUTER WIJNGAARDS: The trouble with DLV is that it doesn't do very well on performance and doesn't scale up very nicely, as far as I know.

AUDIENCE: Could you quickly go back to the slide with your first graph, if you would. This is probably strictly not really related to the things you have been doing, but it's something which has been bugging me for quite some time. If you look at the red one, which is the most performant one, there's a drop at sixty thousand queries per second somewhere; you start losing...

WOUTER WIJNGAARDS: The operating system stops getting responses

AUDIENCE: Did you do more investigation of what is really happening? Is it the operating system, the CPU, or the number of interrupts per second?

WOUTER WIJNGAARDS: I did look closely at that, although I'm not really trying to get the highest possible numbers on these machines; they're pretty old by design. I've seen more than a hundred thousand queries per second on new hardware, but it seems to be highly related to the device driver for the ethernet card.

Also the device driver: if you have a bad driver, that's going to cause massive interrupts.

AUDIENCE: When you look at the green and blue and all the rest, it's really not hardware or anything else related; it's really CPU bound?

WOUTER WIJNGAARDS: [unclear] Yes, there's plenty of ethernet there, and you can see the same shapes are just repeated here, where the system performance is being reflected.

CHAIR: One more question.

AUDIENCE: I have a few questions. First, you said that you do source address randomization. Does that mean that if you have multiple addresses on the box, you can query from different addresses? Is it also possible to configure that? It's not really good to source from an anycast...

WOUTER WIJNGAARDS: It comes out of the configuration. It's got configuration to contact the authority servers from a different source address than the one used for contacting the clients, and if you give it more addresses to contact the authority servers with, it will do source randomization over those.
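In unbound.conf terms, that separation of client-facing and upstream addresses might look like this; the addresses are documentation placeholders. Giving several outgoing-interface lines makes Unbound pick one at random for each query.

```
server:
    # Address used towards the clients
    interface: 192.0.2.1
    # Addresses used towards the authoritative servers; with more
    # than one listed, the source address is randomized per query
    outgoing-interface: 198.51.100.1
    outgoing-interface: 198.51.100.2
    outgoing-interface: 198.51.100.3
```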

AUDIENCE: The other thing is, in the statistics you only have either cache or recursion. Have you also done real-world tests, where it's 40 to 60 percent from cache versus recursion, and how they compare there? And on that, have you actually compared against Nominum CNS?

WOUTER WIJNGAARDS: I don't have [unclear] CNS at all. It would be interesting, but it's not open source; I have no clue. Other than that, I think you can take 60 percent of one performance number and add 40 percent of the other number, and that's going to be the result.

AUDIENCE: Might be. I'm not convinced that would be the result because the server has different things to do.

WOUTER WIJNGAARDS: For Unbound that would be the result because this is just the way Unbound is made. For other resolvers they could be doing weird stuff.

CHAIR: Thank you very much.


CHAIR: Next up we have Ondrej from the Czech domain registry.

ONDREJ SURY: This will be a quick update on what we have done on DNSSEC in the past months, so I don't want to eat into your lunch time.

So, well, you may have already heard in the ENUM working group that we signed the ENUM zone, and also got a secure delegation; it was published a day after us. We sign on regeneration, or weekly, whichever comes first, because there are not many changes in the ENUM zone. We are using the RIPE DISI tools; I had to make some changes to the Perl code. And we plan to switch the signing to hardware security modules, after we make them work.

So for .cz we are not so brave yet, so we are waiting for the hardware security modules and for things to settle down, so we have everything tested and in place.

Right now we have two cards for testing. One is the Sun SCA 6000, and this is the card we're going to use for the zone signing key; the other is the nCipher nShield, and this will be used for key storage and signing of the keys.

Let's talk about why. Well, the Sun card is very fast and very cheap, but it works only on Solaris and some old versions of RHEL and SLES. It can be protected only by a password, and only a few ciphers are available. But it's very fast and very cheap, and we think that it's enough for the zone signing keys: just buy a few of those cards and put them into service. What we have done so far: we can generate the keys, thanks to Richard Lamb, but right now we cannot sign the zone, because we are stuck on some PKCS#11 function doing some weird stuff. I have to debug that after I get back. I think we will have this ready this month, so I will send some updates to the list.

The nCipher nShield is more secure, FIPS certified I think. You can secure the key on more than two cards, so you can have more people needed to generate the key signing key and sign those zone signing keys. It also has more ciphers, and more platforms are supported; we are trying to make it run on [unclear]. It's slower and more expensive. What we can do with this card: we can use PKCS#11, but dnssec-keygen doesn't work yet. We received the card just before I came here, so I hope we will have this ready this month as well, so we can start real testing.

Any questions? Sorry, I borrowed the picture. I like it.

AUDIENCE: My name is Roy Arends. I work for [unclear]. We've been doing the same thing using the SCA 6000 cards. We've been looking at using the engine in OpenSSL...

ONDREJ SURY: We were thinking about that too.

AUDIENCE: There is, if you contact ISC, a preliminary version out there, BIND 9.6.0 alpha. It has support for these kinds of things; it might save you a lot of work. Another question is: if you do this directly through PKCS#11, why would you want to do the hashing on the card? Hashing in software can be just as fast; it's the signing you want to do on the card.

ONDREJ SURY: I didn't run the test, but I suppose the hashing on the card will be faster, at least for the Sun crypto card. I hope so.

AUDIENCE: If you're interested let's talk about it after the session.

AUDIENCE: Okay. Lars-Johan Liman. Have you been thinking about signing policies at all: regeneration periods, key validity periods and stuff like that?

ONDREJ SURY: Yes. Well, we use the best current practice for DNSSEC, or it's just an RFC, I don't remember right now. We have some documents written for the policy and key compromises and whatnot.

AUDIENCE: Because we've found that these periods, the resigning policies, have a large impact on the zone transfers between master and slave servers; it can have a huge impact on the amount of data in them. So you have to take that into account as well.

ONDREJ SURY: We have things in place for all those servers. Yes, I'm thinking about that, that it will slow down the updates on the slave servers.

CHAIR: I have a question. He was asking about your policies being documented and written down. Would they be in English or in Czech?

ONDREJ SURY: It's in Czech at the moment. I think we will make an English version if everything works.

CHAIR: Then we can get a common set of documents, and we can distill best practices and things from that.

ONDREJ SURY: I think I saw some somewhere, and I was trying to get access, but I didn't spend that much time on it.

CHAIR: Peter has a question.

AUDIENCE: Peter Koch. Just a short clarification question, is NSEC 3 a concern for you?

ONDREJ SURY: Not for ENUM but for .cz, yes.


CHAIR: The next speaker is going to give us an update on some interesting properties of the Microsoft DNS implementations.

CARSTEN STROTMANN: I'm from Men & Mice. I would like to show you a little bit of the information notes that I have on what is new in Microsoft Windows with regard to DNS. We see more and more enterprise customers using Microsoft DNS on the Internet, so knowing what's out there, and how it differs from the other DNS implementations, might be beneficial. A quick question to the audience: is there someone here who's running a Microsoft DNS server authoritatively [unclear] directly on the Internet? Not on an internal server, but on the Internet?

CHAIR: Come on, you're amongst friends.

CARSTEN STROTMANN: I don't want to go over everything; just for completeness it's all in there, and the things that I skip over you can look up afterwards in the slides. The first big change is the GlobalNames zone. The GlobalNames zone is an idea to do away with WINS, or NetBIOS name resolution. That has been in the Windows operating system since the beginning, and it has been a source of much trouble when troubleshooting name resolution issues in networks that have both DNS and WINS enabled. What is now in there is a special zone that you can configure in Windows, which is called the GlobalNames zone, and it works this way. Here we have a normal DNS zone, which might be an Active Directory zone; here it's called example. It has A records; important is the entry fileserver, which is actually there. And we want to enable people using a single-label name, and having no search list, to still be able to look that up. So it is possible to create a zone on the DNS server called GlobalNames and put the single-label names that we want to resolve in there, and it is recommended to put them in as CNAMEs pointing to the real A records somewhere else. But in fact it's just a normal zone, so you can put anything in there; it's just a recommendation to use CNAMEs. So here in this example I have two CNAMEs, one called fs and one called server, and both are pointing to this file server in this zone.

So what now happens if we use the old nslookup? If we ask for a single-label name like server, we get an answer back: we get an answer for the file server, we get the IP address, and we get the information that there is a CNAME called server, which doesn't exist if we look in that zone. It's synthesised from this GlobalNames zone. We never see this anywhere: if the Microsoft DNS server receives a single-label request, it first tries to look it up using normal DNS techniques; if that doesn't give an answer, it looks in the GlobalNames zone, if that zone exists. If it finds it there, it then resolves it from there, synthesises from what's in there, and responds from the zone where the CNAME points to.

This is not enabled by default; it needs to be enabled. But as a lot of companies want to do away with WINS, it will probably be enabled in a lot of installations, and this synthesising might be irritating if you're troubleshooting these networks. So is it useful? It is, especially because having both WINS and DNS in the same network is really hard to troubleshoot: you can't predict which server is doing the lookup and where you get an answer from. Going away from WINS is a good idea. Also kind of nice is that while WINS is a Windows-only solution, this GlobalNames zone also works for other operating systems that are around, like Unix systems.
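The fallback order described above can be sketched as a small toy model. This is a hypothetical illustration, not Microsoft's implementation; the zone contents and the names `example.` and `globalnames.` are my own assumptions:

```python
# Hypothetical sketch of the GlobalNames lookup order described above.
ZONES = {
    "example.": {"fileserver": ("A", "192.0.2.10")},
    "globalnames.": {
        "fs": ("CNAME", "fileserver.example."),
        "server": ("CNAME", "fileserver.example."),
    },
}

def resolve_single_label(label):
    """Fall back to the GlobalNames zone for a bare, single-label name."""
    # Step 1 (not shown): a normal DNS lookup; with no search list,
    # a single-label query finds nothing.
    # Step 2: look in the GlobalNames zone, if that zone exists.
    rr = ZONES.get("globalnames.", {}).get(label.lower())
    if rr is None or rr[0] != "CNAME":
        return None
    target = rr[1]
    owner, zone = target.split(".", 1)
    rtype, addr = ZONES[zone][owner]
    # The server synthesises a response: the CNAME plus the target's A record.
    return {"cname": target, "address": addr}

print(resolve_single_label("server"))  # {'cname': 'fileserver.example.', 'address': '192.0.2.10'}
```

The point of the sketch is the ordering: the GlobalNames zone is only consulted after normal resolution fails, and the answer the client sees is synthesised rather than read directly from the queried zone.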

Apart from the GlobalNames zone, there's another change: everywhere you can put in an IPv4 address, you can now also use an IPv6 address. This is a big step forward for IPv6 deployment in these enterprise customer networks. It is possible in the Windows 2008 server to use IPv6 addresses in URLs, which is RFC 2732; that is implemented here. Also, because there are a lot of legacy applications that cannot work with IPv6 addresses, there's a special thing built into the server: if you put in a DNS name that ends in a special suffix, the DNS resolver will not look it up; instead it will convert the label that is before the suffix into an IPv6 address and use that directly, without ever querying DNS at all. That is meant as a workaround for applications that can use DNS names but cannot use IPv6 addresses. So if you need such an application to contact a machine that has no name but only an IPv6 address, you can use this literal v6 address name.
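The exact suffix got lost in the transcription; Windows documents this mechanism as names ending in `.ipv6-literal.net`, with the colons of the address written as dashes. A minimal sketch of that conversion (the function name is my own, and the zone-ID variant is ignored):

```python
import ipaddress

SUFFIX = ".ipv6-literal.net"  # the documented Windows suffix for literal addresses

def literal_name_to_address(name):
    """Return the IPv6 address hidden in a literal name, or None for normal names."""
    if not name.lower().endswith(SUFFIX):
        return None  # an ordinary name: resolve it via DNS as usual
    label = name[: -len(SUFFIX)]
    # Dashes stand in for colons; 's' for a '%' zone ID is not handled here.
    return str(ipaddress.IPv6Address(label.replace("-", ":")))

print(literal_name_to_address("2001-db8--1.ipv6-literal.net"))  # 2001:db8::1
```

The resolver never sends such a name to a DNS server; the conversion happens entirely locally, which is what makes it safe for applications that only accept host names.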

Okay, Windows 2008 now supports IDN names. It also registers its own IPv6 address towards its DNS server, so the same auto-registration as for IPv4 is now working for IPv6; that is part of the complete IPv6 support in the system.

Active Directory zones are now loaded in the background. That was a big problem for customers having really big zones, because a restart of the DNS server took maybe 30 minutes. On the next slide I have how it works. When the DNS server starts, it first loads the list of zones to load. Then it loads the root hints, then the file-based zones; that is still one by one, so until then the DNS server is still starting up and not answering queries. But then, in step 4, after loading all static zones, it starts answering queries, and all the Active Directory-integrated zones are loaded in the background. If a request comes in for a zone not yet loaded, that zone is automatically rescheduled to load next, and then it continues loading the other zones in the normal order.
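The startup order just described can be sketched as a toy model. This is an illustration of the sequencing only; the step names are my assumptions, not Microsoft's:

```python
from collections import deque

def startup_order(static_zones, ad_zones, queried_zone=None):
    """Toy model of the Windows 2008 DNS startup sequence described above."""
    loaded = ["zone list", "root hints"]   # steps 1 and 2
    loaded += list(static_zones)           # step 3: file-based zones, one by one
    answering = True                       # step 4: static zones done, start answering
    background = deque(ad_zones)           # step 5: AD zones load in the background
    if queried_zone in background:         # a query for an unloaded zone jumps the queue
        background.remove(queried_zone)
        background.appendleft(queried_zone)
    loaded += list(background)
    return answering, loaded
```

For example, a query arriving for the last AD-integrated zone pulls it to the front of the background queue, which is the behaviour that makes the long restart pause disappear for customers.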

There's DNAME support in Windows 2008 Server, but only from the command line. There's no way to enter DNAMEs in the GUI for the zone; you need to know how to do it on the command line. And there's a feature where forward zones, which is called conditional forwarding, can now be replicated. If you have a lot of DNS servers you don't need to go to every one of them; you configure it once and it's automatically replicated, which is nice to keep the configuration clean.

The bad part is the DNSSEC support. There's no additional support for DNSSEC in Windows 2008; it's the same status as Windows 2003, as far as I know. If anyone knows more than that, please let me know. And Windows 2003 has the old DNSSEC support, so I would say that's not really usable.

The last point is the read-only domain controller. Because some customers want to use the convenience of zone transfer by Active Directory replication, it is now possible to configure a read-only domain controller that receives DNS data not by normal zone transfer but by replication, while still being robust enough to be deployed on the internet. In the old world, Active Directory replication was always two-way, so if you had a DNS server out on the internet and it was hacked, the hacker could change everything in Active Directory and that would be replicated back to the internal network, which was a bad thing. Now the replication goes only from your masters to whatever slaves or secondaries you have on the internet facing the bad world out there.

Then there are generic, not DNS-specific improvements. There is a Server Core installation, which is like Windows without graphics, for the command line people. No desktop, no media player, no .NET, which is ideal for DNS server deployment, but you have to be able to master the command line on Windows, which is not easy but can be done. Also there's a shift in how Microsoft implements things: they say now that everything can be done in Windows 2008 from the command line and some things can be done from the GUI; in old Windows systems it was the other way around. And there's built-in virtual machine technology, which is nice to partition DNS servers, for example, to get customers to separate authoritative and caching DNS servers into separate logical operating system instances without having to buy more hardware.

And my personal prediction is that Windows 2008 will see faster adoption in the market than Vista, because there is some substantial value in this new release, and we will probably see most migrations in 2009, when people really start using it in the market. That's all for now. Any questions?

AUDIENCE: Small question prompted by your slide on DNAME support, where the interface doesn't allow entering DNAMEs but the command line tool does. The question is: what resource record types are supported in the command line tool or in the GUI which are not, and is there basic support for unknown resource records, RFC 3597 I think?

CARSTEN STROTMANN: There is a limited set of resource records that are supported by both the GUI and the command line tool. As far as I know there is no way to use resource records that are unknown to this product, and also no support for these additional resource records or the unknown resource records.

AUDIENCE: Will the server read zone files?

CARSTEN STROTMANN: I haven't tested that. I would doubt it.

AUDIENCE: I would be very interested in knowing that, in a different context but let's talk offline.

AUDIENCE: Eric Coyne. Forgive me, does it also support A6 records?


AUDIENCE: Say A6, yes, we like A6.

AUDIENCE: Is there any sense of, like, well, obviously it's there, it's probably customer demand, I assume?

CARSTEN STROTMANN: The mic is not on, or I don't hear it.

AUDIENCE: Sorry, I was just going to ask if you had any insight into how great the demand for DNAME has been?

CARSTEN STROTMANN: In all the time I've been working with Windows DNS servers, which is seven years now, we had not one customer asking for DNAME or A6.

AUDIENCE: So they added it anyway.

CARSTEN STROTMANN: So there's no demand so far, it seems.

CHAIR: Two more questions.

AUDIENCE: Bill Manning.

Being open and transparent and sort of inclusive, I tend to stick a whole bunch of different server or service types in as being authoritative for a single zone. If I put this into my cluster, and the authoritative server is handing out A6 records and NAPTRs and things that are not popular these days, will it take those records and serve them?

CARSTEN STROTMANN: As a caching resolving name server, yes.

AUDIENCE: As an authoritative server in a cluster? I have a list of authoritative...

CARSTEN STROTMANN: And you do zone transfer to that?

AUDIENCE: I'll send out a notify and he should pick up the update?

CARSTEN STROTMANN: I have to test it, I don't know.

AUDIENCE: Because if it loses data, that says that in fact this is really targeted towards a Microsoft-only DNS cluster; you can't mix that.

CARSTEN STROTMANN: Unless you restrict yourself to the resource records supported by Microsoft.

AUDIENCE: Lars Johan Liman. What exactly did you mean by IDN, that the server is IDN capable? How is that different from before?

CARSTEN STROTMANN: It is more the resolver than the server. The resolver can look up names that are entered in Unicode and translate them correctly.

AUDIENCE: So they put the IDNA in the resolver instead of in the application?

CARSTEN STROTMANN: It does not send Unicode in DNS requests; it translates them correctly.

AUDIENCE: How do you know it's Unicode that's coming in? I think I would like to talk to you more about it. There is an inconsistency here that I don't understand. When you're in the DNS system it's ASCII and then you...

CHAIR: It's windows so it knows everything about everything.

AUDIENCE: Peter Koch. You mentioned separating the functions of the recursive server and the authoritative server. There's documentation that recommends restricting the accessibility of recursive name servers, and with some of the Windows name servers currently deployed in the field it's an all-or-nothing decision. Has this improved? Can I restrict recursion to particular address spaces?

CARSTEN STROTMANN: Not to my knowledge. It's still one check mark: switch it off or leave it on.

AUDIENCE: Okay. Thank you very much.


CHAIR: The next speaker has some interesting stories to tell us about DNS filters.

SHANE KERR: My name is Shane Kerr. This talk comes out of some work I've been doing analysing some of the raw capture data that we have on our DNS servers. So basically it's a set of rules that you can use as TCPDUMP filters to get a little better look at your traffic.

So Afilias right now runs a cloud of DNS servers. Each one has a set of seven application servers behind it, which answer DNS queries, behind a pair of routers and load balancers and things like that. We get a lot of traffic. Because of this it's impractical to store all these queries; we do store all the queries we get, but not at each of the servers answering them. And with all the queries that we get, it's also difficult to look at this much data: if you're doing searches for specific patterns and things like that, it just takes several minutes for each query and it's not very handy. We're doing some work, new stuff, on putting DNS data into databases, but we're not quite there yet. There's also a special case for us because we do a double NAT into our cluster, so source and destination IP addresses change; we have a specific need to match things by query ID and time only.

So what's the basic approach that I ended up coming up with to actually look at this stuff? It turns out you can make a pcap filter to capture DNS packets for a specific query ID, or for a specific query name, query type, or query class. The real motivation is that occasionally we see DNSMON packets get dropped, and we want to see which of our servers they come into and what happened with each particular query. That's the ultimate goal here.

You don't have to use TCPDUMP for this. There are tools that exist to produce TCPDUMP-style output specifically for DNS. There's DNScap; it doesn't let you do filtering based on type and class, but the real reason I didn't want to use DNScap is that it failed me once and I don't want to be bitten again. There's also Tshark, which is part of the Wireshark project, which is very, very sexy and will do this stuff. The reason I pursued TCPDUMP is, one, it's very cool, and the other is that Wireshark isn't installed everywhere. I have tiny routers in my home that can't install the Wireshark package, but TCPDUMP is fairly ubiquitous; you'll find it on almost every computer you're on. There may be other tools but I couldn't do too much research.

So what does TCPDUMP store? The pcap files have information about the capture, with all kinds of stuff you don't care about. What you care about is the Ethernet packet, which is in raw form; for a DNS packet it has the Ethernet header, IP header, UDP header and the DNS packet. I'm only looking at UDP; we do get TCP traffic, but it's such a small percentage it's not really worth looking at in this case.

In order to do this kind of wacky filtering, which you'll see on the next slide, you need to know a little bit about what the DNS packet format looks like. Fortunately, the packet starts with a bunch of fixed header fields: two bytes for the query ID, two bytes for the flags, and so on. After all that fixed-size data you have the beginning of the query and answer sections. This is where it gets a little tricky, because DNS query names are encoded with variable length and things like that. And the type and the class of the query come after the name of the query, which means you can't just say "I want to look for every query of a given type"; you have to query for specific names. The pcap filter syntax gives you an array-like way to look into the packet.

Each query name has a variable length. So, as I mentioned, if you want to match a specific query type or class, you have to look for a specific name. These names are encoded; I don't know if you can see this. You split the name at the dots, and then for each label it lists the length of the label in bytes, then the label, then the next label, and it ends with a zero byte. Another minor thing to be aware of: DNS is case-insensitive but TCPDUMP isn't. It's looking at raw bytes on the wire; it doesn't know what's a character and what's a count and things like that. What you end up with is something that looks a bit like this: it's a big rule. If you cut and paste this and throw it at TCPDUMP, it works. This is for id.server, for type TXT and class CHAOS, and this is what DNSMON will use when checking for a host name; it's a semi-standard way to do this kind of thing.
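The encoding and the resulting rule can be sketched in a few lines. This is my own illustration, not the speaker's generator; it assumes plain IPv4/UDP, where tcpdump's `udp[0]` is the first byte of the UDP header, so the DNS header starts at `udp[8]` and the question name at `udp[20]`:

```python
def encode_name(name):
    """DNS wire format: each label prefixed by its length, terminated by a zero byte."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def dns_query_filter(name, qtype, qclass):
    """Build a tcpdump expression matching one UDP DNS query, byte by byte."""
    wire = encode_name(name)
    # 8-byte UDP header + 12-byte DNS header puts the question name at udp[20].
    terms = ["udp[%d] = %d" % (20 + i, b) for i, b in enumerate(wire)]
    off = 20 + len(wire)
    terms.append("udp[%d:2] = %d" % (off, qtype))       # QTYPE follows the name
    terms.append("udp[%d:2] = %d" % (off + 2, qclass))  # then QCLASS
    return " and ".join(terms)

print(encode_name("id.server"))  # b'\x02id\x06server\x00'
```

For the DNSMON-style check mentioned above, `dns_query_filter("id.server", 16, 3)` (TXT is type 16, CHAOS is class 3) produces the kind of long per-byte rule shown on the slide. Matching is byte-exact, so case variations are not caught; that is the casing problem discussed in the questions afterwards.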

A few other quick notes. When the server replies to a query, it copies the query into the response. So if you use a filter matching the query you saw, it will catch the answers too. If you don't want the answers, you have to do some additional filtering based on source address or destination address.

Another quick note: the udp[] syntax doesn't work for IPv6. Fortunately the IPv6 headers are fixed length, so you can use a fixed offset into the IPv6 packet.
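A sketch of that IPv6 workaround, under the assumption that no extension headers are present (the function name is mine): since `udp[]` isn't available, you index from the fixed 40-byte IPv6 header plus the 8-byte UDP header.

```python
def v6_term(payload_offset, value):
    """One IPv6 filter term: ip6[40 + 8 + offset] instead of udp[8 + offset]."""
    return "ip6[%d] = %d" % (40 + 8 + payload_offset, value)

# Offset 12 into the UDP payload is the first byte of the question name,
# right after the 12-byte DNS header.
print(v6_term(12, 2))  # ip6[60] = 2
```

If extension headers were present, the UDP payload would no longer sit at a fixed offset, which is why this trick only works for plain IPv6 packets.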

Also, looking at EDNS0 is tricky. That's done in the additional section, so the offset of the EDNS0 data is at an arbitrary location later in the packet. The upshot is there's no simple way to say "give me all the EDNS0 queries" by just looking at the raw packet.

I put up a web page here. There's a little JavaScript thing that you can use to build your own arbitrary rules. Check it out. It's pretty cool. That's about it.


CHAIR: Thank you. Are there any questions?

AUDIENCE: Lars Johan Liman. The first one: would it be possible to do away with the casing problem by ignoring the case bit, because that actually corresponds to a bit in the character?

SHANE KERR: You mean to AND it out? You could do it that way, yeah.

AUDIENCE: Would it be a help? I'm just planting the idea in your head. It seems like you should be able to do away with half of that.

SHANE KERR: No, you wouldn't. For every line that's indented, you'd do away with one of the cases, but you'd have to AND it with a mask, so the rule itself would be longer.
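The masking idea under discussion can be sketched like this: for ASCII letters, ANDing with 0xdf clears the case bit, so one (longer) term covers both cases. The function name is my own:

```python
def ci_term(offset, byte):
    """One per-byte filter term; mask out bit 0x20 on ASCII letters so case is ignored."""
    if 65 <= byte <= 90 or 97 <= byte <= 122:  # A-Z or a-z
        return "(udp[%d] & 0xdf) = %d" % (offset, byte & 0xDF)
    return "udp[%d] = %d" % (offset, byte)

print(ci_term(21, ord("i")))  # (udp[21] & 0xdf) = 73
```

This is the trade-off Shane describes: fewer alternative rules, but each remaining term grows by the mask, so the expression as a whole gets longer rather than shorter.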

AUDIENCE: My second question was, since I'm not an IPv6 nerd: do you run into problems with the IPv6 extension headers?

SHANE KERR: I didn't see any of those in our traffic, but I didn't specifically look. I don't know if they're very common in use, but I have never seen any.

CHAIR: Any more questions?

AUDIENCE: Not so much a question, just some information on DNScap. As far as I'm aware it's maintained by Paul Vixie. You can get version 1.0 off the public website. If you've got any feedback or issues, the best place to do it is on the DNS operations list. Very happy to take feedback for improvements.

AUDIENCE: Bill Manning. A couple of things. Extension headers for v6 probably don't show up too much in the DNS protocol these days, except from screwball labs, because their use would be nonstandard.

The second is that the presumption that I have is that you're doing this on data you already captured, not looking at this in real time?

SHANE KERR: What we do right now is, on each of our servers, each of these servers, we capture all of the DNSMON-style traffic. Anything looking for a host name or version or anything like that, we capture to a file. So, no, we do actually capture it using this rule.

AUDIENCE: From the machines that are actually doing the DNS resolution?


AUDIENCE: Okay. That's an interesting way to do it. Other people do a BPF fork and do their logging and capturing on a machine dedicated for logging and capturing.

SHANE KERR: We also have a dedicated logging and capturing machine but we want to see the queries actually arrive on the machine, so...

AUDIENCE: And the three-tenths of a millisecond difference is... okay.

CHAIR: Thank you very much then.


CHAIR: We're almost up to the lunch break, and I'm making an executive decision that rather than holding back your lunch by listening to Peter's presentation, we'll hold over Peter's presentation to RIPE 57 in Dubai, and I'm sure you can all contain your excitement until that time. At this point I'd like to close this session and thank all the speakers both today and yesterday. Special thanks to Adrian and Robert for taking the minutes and scribing, and to the very nice lady in the front who's been doing a wonderful job with the stenography. Thank you very much and I hope to see you all in Dubai.