

RIPE 63

Minutes from RIPE 63
DNS Working Group
Thursday 3 November 2011
Vienna, Austria


RIPE 63 -- DNS Working Group Minutes -- Session 1

A. Administrative matters

Peter Koch opened the session by introducing himself and the co-Chairs Jim Reid and Jaap Akkerhuis, then he presented the agenda for the two DNS Working Group sessions.

Jim Reid intervened to bring up an item that had been dangling over the Working Group for a long time: the task force that had been set up about an interim trust anchor five years before. However, in the meantime IANA had set up a trust anchor repository so there was no more need for the task force which had been idle for a long time. He then asked the present attendees if it would be a good idea to close the task force.

Peter Koch asked for a show of hands. Most of the ones present were in favour, nobody was against and there were three abstentions. He concluded that the decision was unanimous with three abstentions and thanked Jim for bringing it up.

B. Matters arising from RIPE 62 minutes and review of action items

Afterwards, Peter Koch asked for approval of the RIPE 62 DNS Working Group session minutes as posted on the mailing list. Everybody was in favour; no one was against or abstained. Peter Koch thus declared the minutes unanimously approved.

Peter Koch then invited Richard Barnes to give the first talk, an update on the DANE working group and the IETF.

C1. IETF Reports -- Richard Barnes

Richard Barnes presented DANE, a project to add security features to TLS, the protocol that secures much of the web as well as VoIP and many other applications, by means of DNSSEC. The idea of DANE was to use DNSSEC to enable domain holders, like example.com, to make statements about their security properties, such as the public keys they hold, with the ultimate goal of binding a public key to a domain name. He explained that an initial protocol was in the works, proposing a resource record with a number of fields that let one make statements about TLS certificates and how the binding was expressed. The use-cases document was complete; the protocol document was still undergoing work, starting to get fairly well fleshed out, with issues beginning to close. Richard Barnes said it would be a good time for other people to review the documents and submit comments to the working group mailing list. He also invited comments and questions at the venue.
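The record under discussion became the TLSA record. A minimal sketch of how its certificate-association field might be computed, assuming the "full certificate, SHA-256" combination of field values; the certificate bytes and names below are invented for illustration:

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0, mtype: int = 1) -> str:
    """Build the presentation form of a TLSA record's RDATA.

    usage 3 (domain-issued certificate), selector 0 (full certificate),
    matching type 1 (SHA-256) is one common combination; the draft
    defined several values for each field.
    """
    if mtype == 1:
        assoc = hashlib.sha256(cert_der).hexdigest()
    else:
        raise NotImplementedError("only SHA-256 is shown in this sketch")
    return f"{usage} {selector} {mtype} {assoc}"

# Hypothetical bytes stand in for a real DER-encoded certificate.
fake_cert = b"-- not a real certificate --"
print(f"_443._tcp.example.com. IN TLSA {tlsa_rdata(fake_cert)}")
```

The owner name encodes the port and transport of the TLS service the statement applies to, which is how a client knows which record to look up before connecting.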

Wolfgang Nagele said there seemed to be some confusion about the current implementation in Google Chrome, and asked whether the implementation that had been released was based on the draft.

Richard said it was based on one of the initial proposals that was a predecessor, conceptually similar but with slight differences.

Wolfgang clarified that he was referring to the current implementation and the fact that it required one to fold the DNSSEC chain into an extension on the server side and continuously refresh it. He asked if that was still part of the protocol.

Richard answered that this had never been part of the protocol; it was a separate concept that Google had put forward. It was not being considered in DANE but had come up a little in TLS discussions, possibly as a way to extend TLS to carry DNSSEC information. DANE, however, was focused on defining the record format that would be used in such a system.

A speaker from the audience asked what the current status of plans and known rumours about browser support for use-case number three was.

Richard answered that he knew Firefox in particular had been involved in the working group, they were tracking progress, and as Wolfgang had mentioned, Chrome had a prototype out and in stable release, implementing use-case number two. He thought the browser vendors were actively interested, looking at it as increased security to their product and their users.

The speaker from the audience inquired if there was no push back or political lobbying, and Richard answered that there had been active contributions from that community.

Peter Koch asked the attendees how many had been aware of the project before that morning, and about a third to a half raised their hands. He then pointed out that apart from Richard Barnes, there was one of the Working Group chairs of that group in the room and they would be available for further discussion.

C2. DNSCCM -- Sara Dickinson

Peter then introduced the next speaker with an item not on the published agenda: Sara Dickinson from Sinodun Internet Technologies Ltd., who also had a live demo to present. The presentation is available here: http://ripe63.ripe.net/presentations/151-DNSCCM_RIPE63.pdf

Sara Dickinson thanked the Chairs for accommodating the talk and explained she would be giving a progress report on DNSCCM. She explained that DNSCCM stood for "DNS Configuration, Control and Monitoring", a software tool designed to perform those three functions. Behind it was NSCP, a single cross-platform, cross-implementation protocol for name servers. The motivation behind the project was to ensure DNS high availability, and one way to achieve that was genetic diversity of implementations. Moving on, she explained that the idea of NSCP was to bring disparate name servers together in order to ease management. Regarding the current state of development, she explained that they were implementing NSCP with the support of NLnet; although NSCP was still in draft stage, they believed doing an implementation was the right thing to do in order to get more feedback and offer people something to play with. At the moment it was a prototype, not production-ready, but the plan was for an alpha release towards the end of the year.

Sara then proceeded with a live demo of the operation of the software. Afterwards, she mentioned some future areas of improvement, like the development of a graphical interface that would allow monitoring of multiple name servers, more visualisations with statistics, and group management.

Ending the presentation, she asked for feedback, for use-cases or for requirements. She said there was a project website and she was also available for direct contact.

Niall O'Reilly said the project looked very useful. He said he would not take meeting time but would discuss it further during the coffee-break.

Peter Koch thanked Niall and Sara and then invited the next presenter, Wolfgang Nagele from the RIPE NCC.

D. DNS operations at RIPE NCC -- Wolfgang Nagele

The presentation is available here: http://ripe63.ripe.net/presentations/124-RIPE63_WolfgangNagele_DNS_update.pdf

Wolfgang gave an overview of the DNS operations at the RIPE NCC.

Peter Koch noted that he wanted to clarify the removal of the "gov.il" domain, and Wolfgang explained that it had been removed by the domain owners themselves. Peter Koch added that there was an action point on the Database Working Group agenda to get rid of some of the attributes that were only needed in forward but not reverse DNS.

Robert Martin asked about the DS records in the reverse tree, and whether Wolfgang had any idea whether people were using them for doing DNSSEC in an unexpected way.

Wolfgang answered that they could definitely confirm that, he knew ccTLDs did it and it was not a big problem, but of course if something like ripe.net would break that would become an issue.

Robert continued by noting that the TCP graph looked very static for a very dynamic environment, and inquired whether it was possible that they had hit a server limit.

Wolfgang answered that it was not the case, they had done extensive load tests before rolling out the signed root and that could not have been the bottleneck; it would probably be a thousand times higher.

Peter Koch then asked about the secondary service for developing top-level domains. As new TLDs were expected to appear over the following months, he wanted to know the RIPE NCC's position regarding those -- would they provide secondary service for any of them?

Wolfgang clarified that they were talking about the new TLDs approved by ICANN, then explained that it was a separate issue from the secondary service normally provided by the RIPE NCC -- that service was specifically for ccTLDs, with a focus on developing countries that did not have enough funding for a secondary; they would keep it available for any country that needed it. However, the newly approved TLDs would be ruled out, first because they were not ccTLDs and second because anyone who could afford the money to apply for one of those domains should also be able to afford the infrastructure. Such requests would surely be turned down by the RIPE NCC.

There were no other questions.

E. Knot DNS, a new high-performance authoritative name server -- Ľuboš Slovák, CZ.NIC

Ľuboš Slovák presented Knot, an authoritative DNS server developed in CZ.NIC Labs. It offers performance comparable to or better than the most widely-used implementations, together with advanced functionality. The presentation is available here:
http://ripe63.ripe.net/presentations/145-KNOT-20111103-LS-RIPE63.pdf

A member of the audience inquired about the increase in memory footprint mentioned in the presentation and if it was possible to provide some numbers.

Ľuboš explained that it depended on the zone file; for example, it could take four times the amount of memory the zone occupies on disk. Depending on the machine, it could vary between three and five times.

The audience member asked whether that was because of the quick hash algorithm, and Ľuboš confirmed, adding that the quite complex data structures also contributed.

The audience speaker then asked when the support would be available, and Ľuboš said during the following two or three weeks.

Daniel Karrenberg commented that while working on the NSD project, his role had been to test the software. In order to do that, they had built a test lab that sent the same queries to both NSD and BIND and analysed the differences. He said the Knot team could do the same in order to gain more acceptance and confidence in the software. The code for doing that was still available -- not in a great state, since it was just a hack -- but they could use it to compare Knot, NSD and BIND.

Ondřej, one of the co-developers of the project, chimed in to say that they actually had similar code in use, and they had gathered two months of CZ.NIC traffic in order to replay it. The code was not publicly posted anywhere, but if anyone wanted to use it, they could make it available.

Emile Aben relayed a question from the remote participation chat. Anand Buddhev from the RIPE NCC wanted to know what the plans were for future development models, and whether development was planned to continue within CZ.NIC or would be opened up. Clarifying, he asked what CZ.NIC's long-term plans to support the software were.

Ľuboš asked Ondřej to respond, and he said that was not the only project or the first one CZ.NIC was working on. They wanted to open it up as much as possible, and welcomed other people joining them. They wanted to support the project for the long-term, fixing bugs and helping others with deployment.

There were no other questions.

F. Beyond Bind and NSD -- Peter Janssen, EURid

Peter Janssen from EURid presented the new nameserver they developed, explaining the motivation behind it and giving a performance comparison with BIND and NSD.  The presentation is available here:
http://ripe63.ripe.net/presentations/154-RIPE63-DNSWG-BeyondBindAndNSD-PeterJanssenEURID.pdf

Daniel Karrenberg from the RIPE NCC asked if it was correct that two of the EURid nameservers were already running on the new platform, as stated in the presentation, and Peter confirmed.

Daniel Karrenberg then said that was a good thing; the problem of actually capturing responses and comparing them had been solved six or seven years earlier. He renewed the offer to provide the old code developed in the process.

Wolfgang Nagele from the RIPE NCC chimed in to note that the K-root capacity testing done before rolling out the signed root zone had led to the publication of a white paper that could provide more information.

Peter Janssen explained that the intention was to run some of the EURid nameservers on the new software called YADIFA. It was a long-term project and they were committed to keep it alive and running.


Daniel Karrenberg clarified that it was not his intention to sound critical; he was just providing friendly advice, as having more than two authoritative DNS server implementations would be in everyone's interest.

Peter Janssen ended by explaining what YADIFA stood for: "Yet Another DNS implementation For All".

Peter Koch closed the session, reminding attendees about the second part coming after the break and the respective agenda.

RIPE 63 -- DNS Working Group Minutes -- Session 2

Jaap Akkerhuis opened the second session of the DNS Working Group at RIPE 63 by introducing himself. He then invited Wolfgang on stage for the first presentation.

G. What was all that traffic to the root? -- Wolfgang Nagele, RIPE NCC

Wolfgang Nagele presented a report on the sharply increased query load on the root name servers that occurred for a brief time during the summer of 2011. The presentation is available here:
http://ripe63.ripe.net/presentations/125-RIPE63_WolfgangNagele_K-root_traffic_spike.pdf

Jaap Akkerhuis asked whether the other root operators Wolfgang had mentioned as involved in the investigation had seen a similar pattern.

Wolfgang responded that they had confirmed seeing it happen, though not all of them could do as much in-depth analysis; some of them, however, actually did. He said Duane Wessels had promised to produce a report about it at some point.

There were no further questions.

H. DNSSEC-Trigger -- Olaf Kolkman

Olaf Kolkman presented an application developed to provide DNSSEC functionality in DNSSEC-hostile environments such as behind NATs, hotel Internet connections or a neighbourhood coffee-shop. The presentation is available here:
http://ripe63.ripe.net/presentations/172-RIPEWG-DNSSEC-trigger.pdf

Roy Arends commented that with a little bit of help from Olaf and Jaap, he had been able to install the application the previous Monday. While he was not a power user, the software worked very well when he tried it in the hotel. He added that he was trying to make it work with OpenVPN, as it overrode the DNS settings; once that was finished he would start pushing it within Nominet as well, as he was quite impressed with the functionality.

Olaf Kolkman asked Roy to post to the mailing list any new insights into making it work with the VPN. He said that the software was intended as an end-user tool, and if users ran into problems they should be able to find the log files and discover what was happening in order to troubleshoot it -- something he urged all those present to do.

Roy also added that he had browsed several websites that were broken specifically in order to test DNSSEC, and the tool had detected all of them correctly.

Richard Barnes said the presentation clearly showed the tool was performing an interesting function in terms of determining DNSSEC
support on the local network. He asked how applications could access that functionality.

Olaf responded that the software reconfigured the set-up of a machine so that the nameserver pointed to the local interface. That meant that tools wanting to work with it would only have to look at the AD bit -- assuming, of course, that the local machine itself had not been compromised. Then, if there was a failure case and DNSSEC validation was broken, they would not receive an answer -- not a great user experience, but that was what the software was for.

A speaker from the audience said that at the Routing Working Group, Randy Bush had said the Internet was broken but it worked; while there was no problem with DNSSEC itself, the speaker pointed out that the software did not work with the hotel's DNS resolvers.

Olaf confirmed that the Swisscom network used by many hotels did not work with version 0.7 of their software, and it was indeed a more complicated problem to solve due to the way Swisscom handled packets.

There were no other questions.

I. The IDN Variant Issues Project, an Update -- Joe Abley

Joe Abley presented a study by ICANN to gain insight into the problem of variants of the same IDNs. Six case studies had been completed, covering six scripts: Arabic, Chinese (Han script), Cyrillic, Devanagari, Greek and Latin. The presentation is available here: http://ripe63.ripe.net/presentations/157-jabley-ripe63-variant-issues.pdf

There were no questions.

K. News from the DNS-EASY -- Stéphane Bortzmeyer

Stéphane Bortzmeyer presented a report on the "DNS Health" conference that had been organised by The Global Cyber Security Center in Rome together with ICANN, as well as the workshop on "DNS Security Stability and Resiliency" that followed.
The presentation is available here: http://ripe63.ripe.net/presentations/20-for-ripe-dns-wg.pdf

Peter Koch commented that the SSR workshop was about the security, stability and resiliency of the DNS system, and that in ICANN circles the people in suits usually meant something different when talking about DNS than the people in polo shirts; in that context, looking at the agenda, it seemed that the "take-down" industry carried a very heavy weight there. He asked Stéphane to elaborate on how much of the workshop dealt with the DNS and the infrastructure itself and how much addressed issues that happen with the DNS rather than to the DNS, together with his stance on the matter. He wondered if perhaps there was a loss of focus, or a losing sight of the stability of the infrastructure, because a couple of those "world-leading security experts" saw the DNS as the hammer for the nails they wanted to address.

Stéphane Bortzmeyer said that for a long time the problem had been that all take-down requests were directed at the registries, as in the Conficker case: registries were asked to act or risk being regarded as bad Internet citizens, under the assumption that if a domain was deleted at the registry it would disappear from the face of the Earth. More recently, however, the trend was for more and more requests to be directed at the resolvers, as had happened in France, where regulators for on-line gaming were asking ISPs to filter illegal gambling sites at the DNS resolver level. The same could happen in a fight against a bot-net or another threat -- instead of asking the registries to remove a domain and risking a refusal, it could be done at the resolver. Take-downs had thus taken on a new aspect: no longer a problem only for the registries, but also for the ISPs, a different population that was less represented in venues like the DNS Working Group at RIPE, and few of whom had been at the SSR workshop. He could not say who had been there, because that would be against the Chatham House Rule, but most attendees had come from the registry industry. It was not clear what ISPs would think about it; however, in most countries an ISP running a DNS resolving service (the typical case for an ISP) would be required to do some sort of filtering due to mandates coming from many different organisations -- to fight pedophilia, intellectual property infringement and so on. Therefore, the people working at the registries would have to be prepared, since take-downs would no longer bother them but would go directly to the ISPs -- that would probably be the next big change.

There were no other questions.

L. Discussion "Domain Name Synthesis -- For Fun, Profit and Law Abiding Citizens."

The session continued with a Panel on the topic of "Domain Name Synthesis -- For Fun, Profit and Law Abiding Citizens".

The Panel was moderated by Peter Koch and was composed of:  João Damas from ISC, Matthew Pounsett from Afilias and Patrik Fältström from Cisco.

The panellists introduced themselves with a short description of their affiliation and activity.

Patrik Fältström mentioned that in early spring 2011 ICANN had received a question from the Governmental Advisory Committee on its view of blocking. The response was documented in document number 50, a two-page document translated into six languages, which he encouraged the attendees to read. In summary, it said that blocking was not a black-and-white issue but rather a question of what harm a certain action on the DNS flow would create. It also urged everyone trying to do, or interested in doing, something like blocking, synthesizing or changing responses, to weigh the balance between benefit and harm, because any kind of touching of the flow of responses had the ability to affect services and anything else on the Internet. Since publishing that document they had also been looking at various kinds of reputation systems, investigating who would be impacted, as well as general implications regarding blocking.

Peter Koch thanked Patrik for broadening the scope of the discussion. He then went on to note that the title of the panel was not referring to blocking for security purposes but to rewriting of different flavours, which might extend even to the "sitefinder" example of some time ago.

Then he mentioned that Stéphane had seeded the ground with the filtering discussion in Rome, where one of the issues brought up was "self-inflicted filtering" versus "third-party-inflicted filtering", which might be looked at differently. Some of the statements read so far on blocking, rewriting and filtering gave a strong sense that all of it challenged the stability of the Internet or put it at stake, and it might seem a little schizophrenic to see some party holding up warning signs with one hand while actually supporting blocking for something like bot-net fighting with the other. He asked João if he could say something about that.

João clarified that Peter was probably referring to the fact that ISC had a very public position against government interference, for whatever reason, as a means of blocking Internet-wide access to a given set of names, while at the same time having implemented things like support for NXDOMAIN redirection in BIND. Peter Koch added RPZ to the list. João continued by explaining that in his opinion the two had nothing in common; they were separate issues. It was quite different to block things at the registry level, where it affected everyone, compared to the local level, where it affected only local users (especially in enterprises, where the demand for that functionality was). He said it was more a proactive move, since it was a fact that the DNS was used by the bad guys, which was not going to change; some of that could be mitigated by things like RPZ. That might explain the perceived difference in ISC's position in two completely different environments.

Matthew Pounsett commented that he had something to add, even for people who saw the two as very similar. When one started rewriting or blocking DNS answers, it reduced the coherence of the DNS system and the stability of the Internet. In order to justify doing that, one would have to increase stability in another way. If that balanced out somehow, then it could be justified -- for example, using RPZ to block mail domains locally would help keep people from being compromised, particularly if they were in control of how it was done. On the other hand, it was much harder to justify blocking further away from the user, as it had a wide range of effects, was not under the control of the user, and in some cases had no benefit anyway -- for example, commercial rerouting for advertising.
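The distinction the panellists kept drawing, between refusing an answer and synthesising one, can be illustrated with a toy resolver-side policy. This is a simplified sketch only; the names, addresses and policy table are invented, and it is not how RPZ is actually configured:

```python
# Toy resolver-side policy contrasting the two behaviours discussed:
# "block" answers as if the name did not exist, while "rewrite" hands
# back a substitute address.
POLICY = {
    "botnet-c2.example": ("block", None),
    "ads.example": ("rewrite", "192.0.2.10"),
}

class NxDomain(Exception):
    """Signals an NXDOMAIN-style refusal to the caller."""

def resolve(name: str, upstream) -> str:
    action, substitute = POLICY.get(name, ("pass", None))
    if action == "block":
        raise NxDomain(name)   # coherent failure: no answer at all
    if action == "rewrite":
        return substitute      # incoherent: a synthesised answer
    return upstream(name)      # untouched names go upstream

# Stand-in for a real upstream lookup.
upstream = lambda name: "198.51.100.7"
print(resolve("www.example", upstream))  # passed through unchanged
print(resolve("ads.example", upstream))  # substituted by local policy
```

The "block" branch is the behaviour the panel considered defensible within one's own administrative domain; the "rewrite" branch is the one that hands a client an answer the authoritative server never gave.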

Patrik said that it was a complicated issue. They had to remember that if blocking or rewriting was done, for example, for a country, that country was the decision-maker for its domain; however, the Internet was global, so the action would impact users outside that administrative domain. He thought it was very important to view registries and resolvers as one of the categories of intermediaries, and the current trend among governments and regulators, at least in the EU, was that intermediaries were not allowed to touch the packets whatsoever as they flowed through their networks. If there were to be exceptions to that rule, they would need to be based on legislation -- that was what the EU strongly suggested. Therefore, anyone implementing such a system would have to be careful not to break any laws. Additionally, there was a big difference between blocking domain names as such (like a registered name that was itself illegal) and blocking because the domain was used to access services or information that was not wanted. The community needed to help the discussion clearly separate the two cases.

Jim Reid asked if the idea of messing with the packet extended to include changing the hop count.

Patrik replied that from a general point of view that was the case, but there were exceptions like the directive in Europe that someone providing those services must also guarantee the protection of the network from incidents and malicious activity. Some of the changes could end up being technical changes that they did just because they needed to protect their network. In general however the rule was that intermediaries were not allowed to touch anything.

João asked how intermediaries were defined. Patrik explained that the definition included not only ISPs but also, for example, search engines. João said that, looking closely, interference with DNS traffic at the level of the resolver was not rewriting. Patrik replied that it was good to have those technical discussions; he just wanted to make it clear that while those in the room would probably welcome an agreement under which intermediaries were not allowed to touch the flowing information, there were also many parties that did not want such a rule.

Jim Reid commented that it was clear that bad guys did bad things, and in that case they needed to be stopped from their bad practices. However the implementation of the rewriting feature in BIND was sending a bad message that it was ok to do such a thing, something he found regrettable.

João answered that he thought Jim was wrong, and explained that they had debated the issue internally at ISC; it was not a new development. Strictly speaking, it was outside how the protocol was supposed to work. One thing to remember was that there was already a gap: when hitting a resolver, one was not hitting the authoritative server, and the answer was not the same as the one the authoritative server provided to begin with -- so there was already some data manipulation going on. João said he had never liked external manipulation at any level in the DNS. What had pushed things that way was that other implementations were already doing such things, so by not having that functionality in BIND, ISC would actually cause people to move away from BIND to software at a much lower level of compliance. The logic was that the people who wanted to do it were doing it anyway; if ISC didn't provide the software to meet the requirement, they were going to use software that was much worse: there were companies hacking BIND to provide such services, and with those even the responses that did exist were affected. As for the second part, about RPZ, ISC was not providing tools to the bad guys; they already had them. It was a typical case where the good guys suffered because the bad guys were in it for the money, and were prepared to do whatever it took to reach their goals.

Olaf Kolkman mentioned that when it came to blocking, he could go a long way with taking locality and other aspects into account, but he asked the panellists about the trade-offs between blocking and rewriting in light of commercial pressure, in cases where that meant not all of the web was visible to the user. He clarified that he was trying to understand the issue because, as a Free/Open Source Software vendor, they had noticed the same set of pressures. One way he could interpret it was that blocking within one's own administrative domain was useful and achieved the intended impact, but if the policy implied rewriting, it was not clear they were doing the right thing. He wanted to make sure that, as an implementer, he did the right thing.

Patrik said he had an extremely strong opinion on that matter in that blocking was the only solution while rewriting was a definite no-no. It was preferable to not give back a response at all, like in the case of DNSSEC.

Matthew agreed and also said there were probably cases where rewriting could be done without too much trouble, but only a small number of them, as in enterprises where the administrative domain was small and people were close together. He concurred that in the general case, if it needed to be done at all, blocking was the only option that would work widely enough to prevent serious problems.

João agreed as well, pointing out that when most people talked about redirection or rewriting they were already assuming that the action involved was web-browsing; however, if another application would have been involved, the result was unpredictable.

Patrik added that people looking to implement blocking were usually trying to solve a problem of policy and using DNS for that was not the right approach. Olaf Kolkman said it was good to allow people to adhere to their own policies and Matthew added that giving more control to the user was a difficult choice sometimes.

Maria Häll from the Swedish Government said she appreciated the dialogue and understood that the issues the community was facing were not black-and-white. The committee she was part of had to give advice and prepare statements, and towards that goal she welcomed the discussion with the technical community; it was important for them to learn the difference between things like blocking and rewriting, and she urged the community to spread that knowledge.

Patrik did not answer Maria directly, but posed a question for the audience to reflect on regarding the new TLD process: the hypothetical case of an illegal string being proposed as a top-level domain. He asked what the consequences would be if it were approved by ICANN, and what reaction the Internet community should have in such a case.

João said it was a hard problem since when registry-level blocking was used, they could always switch to a different TLD; but if the blocking was at the root that would become impossible since there was only one of those and breaking that assumption, making the root incoherent, would completely change the scale of the problem.

Patrik added that it was also possible for such a domain to be illegal, and therefore blocked, only in some jurisdictions, and for its business users to discover that only after investing in marketing for the domain; since investigating the legal restrictions around the world was an arduous prospect, that responsibility was creating a lot of sleepless nights for applicants.

Jim Reid intervened to move the discussion in a slightly different direction within the same area. He asked about an unnamed large ISP that had been doing rewriting for a while in order to increase ad revenue, but had decided to stop the practice in order to implement DNSSEC throughout its network, having probably calculated that the profit earned by continuing the old behaviour was not worth the cost of not having a secure DNS infrastructure. He wondered if the fact that rewriting precluded the use of DNSSEC could be used as an argument.

João said he understood how the decision had been made, since the financial people always trumped the technical ones, and DNSSEC, seen only from a cost perspective, was sometimes not viewed in the best light. Patrik, speaking from his Cisco role, noted that they did something similar with NAT64, which they advocated on some products -- implementing DNS synthesis for nodes that only had IPv6 access.
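The DNS64 synthesis Patrik described boils down to embedding an IPv4 address in an IPv6 prefix. A minimal sketch of that mapping step, assuming the well-known 64:ff9b::/96 prefix; this shows only the address arithmetic, not the full resolver behaviour:

```python
import ipaddress

# Well-known prefix for NAT64/DNS64 deployments.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesise_aaaa(ipv4: str, prefix: ipaddress.IPv6Network = WKP) -> str:
    """Embed an IPv4 address in the low 32 bits of the prefix, as a
    DNS64 resolver would when synthesising a AAAA record for an
    IPv6-only client that asked about an IPv4-only name."""
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(prefix.network_address) | int(v4)))

print(synthesise_aaaa("192.0.2.1"))  # → 64:ff9b::c000:201
```

A NAT64 gateway routing the prefix then extracts the embedded IPv4 address from the destination and translates the traffic, which is why such synthesised answers break DNSSEC validation: the AAAA record never existed in the signed zone.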

Matthew mentioned that other factors to be taken into account were the cost of doing the rewriting, which required specialised equipment, as well as the loss of customer satisfaction when users complained about access. João commented that part of the problem was that finance people only saw the issues caused by spam and malware and were ready to use existing tools to fix them. He added that it would have been useful if the aforementioned ISP had come forward with a cost analysis.

Maria Häll added that the discussion within the EU system was about the difference between blocking someone and the harm that did to the Internet infrastructure as a matter of concern. The other concern was the harm done to consumers, who would be unable to reach information from some areas, leading to fragmentation of the Internet. All those issues had to be sorted out before 12 January 2012, the launch date of ICANN's new gTLD programme.

Peter Koch said that, as the discussion was drawing to an end, he had one final remark. He mentioned that he had heard many reasons for doing blocking (like combating spam), but he had not heard anything about the governance aspects of those efforts. There was talk about a near-realtime facility that would enable certain groups to influence resolvers to block or rewrite resolution data; however, a registry had liability towards its customers, and even in the Conficker case, which had been handled manually, there had been a lot of collateral damage because people's names had ended up on the list. He asked the panellists for their ideas on the subject.

Matthew said that there were all sorts of different situations and no easy answer one way or another; they were handled on a case-by-case basis.

Patrik noted that was one of the reasons he worked with human-rights groups on such issues and looked at what intermediaries could do. He said that many take-down or filtering actions were not based on legislation, and that was a concern that had to be addressed in order to have clear rules on the process.

João added that in many cases it was a problem of conflicting legislations and jurisdictions, like the example of a site in Spain that had been blocked because it was deemed illegal by the US administration, even though it had already won two trials in Spain -- it was a case of one state imposing their judicial system on another.

Patrik answered that indeed, before judicial systems could be harmonised to avoid such problems, there first had to be legislation in place to harmonise -- however, the current problem was the number of actions being taken without any legislation.

Matthew disagreed with Patrik, mentioning that particularly in terms of security and malware the rules were covered in contractual obligations and a lot of take-downs were simply violations of acceptable-use policies.

Patrik responded that it was not possible to write away human rights in contracts. While providers were entitled to take whatever actions necessary to protect their service, using a contract to override human rights aspects was not a solution.

João then asked what would happen if the Spanish judicial system were to order the operator of the .com registry to revert the information on the servers it had located in Spain back to the original parties. Patrik agreed it was a good question.

On this note, Peter Koch closed the discussion, offering the attendees food for thought, as the issues needed further attention.

Z. AOB (Any Other Business)

There was no other business.

Peter Koch thanked the panel, the audience and the RIPE NCC staff and stenographer for their assistance, thus ending the DNS Working Group session. He asked for feedback on the panel format and suggestions for other panels for the next meeting.