
ipv6-wg

Note: Please be advised that this an edited version of the real-time captioning that was used during the RIPE 56 Meeting. In some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but it should not be treated as an authoritative record.

RIPE 56 IPv6 Working Group 2:00pm, Wednesday, 7 May 2008

CHAIR: Welcome everybody. This is the IPv6 Working Group session. We are obviously going to try to start just in time, because we have a loaded agenda. I tried to get a lot of topics on the agenda; actually, already last week I approached those kinds of people and was very, very unsuccessful. But then suddenly yesterday, at lunch, basically, I managed to find so many people who actually could do a good talk here that we ended up with a fairly full agenda anyway.

First of all, I got a request from the RIPE NCC to warn people a little bit. Several laptops have walked away. I have not been told whether they were IPv6-enabled laptops or not, but just watch them and be careful, because apparently it's not entirely safe to leave them unattended.

First of all, we have the administrative stuff. We already have a scribe, so we don't lose time on that. The first item will be a quick update from the registries, which is actually a fairly random selection: the RIPE NCC obviously, because it's the RIPE NCC service region, and this time APNIC, simply because I ran into George and he was willing to do a talk. So...

Then after that, according to the agenda, there's a talk by James Aldridge on what happened during the v6 hour. We actually ran into a scheduling problem with Kurtis, who is going to do a talk, and he had some issues, so we are going to switch the talk from Kurtis with James's, so James's talk will be at the end of the agenda.

Then we will have the follow-up on the discussion that we had about the global routing table status, and also the talk by Bernhard Schmidt on problems that he is seeing in the routing system.

Then we have a standard item, always on the agenda but always with different people doing it of course, and this will be a quick talk about IPv6 traffic that is being seen at the Amsterdam Internet Exchange point. Obviously, if there are other people who have something to say about what they are seeing, that's perfectly fine; you don't need to have slides. But if you have something interesting to share, you are welcome to do it during that agenda item and share it with us.

Then from there we go to a little bit of deployment experience, again from Amsterdam Internet Exchange.

Then we go to the talk, not from Kurtis but from James, about IPv6, the experiences during the RIPE meeting, and then finally an item that always allows people to do short announcements on events and interesting things regarding IPv6 that are happening somewhere in the next couple of months, so people can take a look at those. That always goes very fast.

I would like to ask Andrei to start with his talk. And then if George also could be ready, so that we could quickly move over to the next talk after that.

The other thing is because I think the agenda is fairly packed we might run a little bit over time. So I think it's actually because of the topics it's more interesting to run a bit over time than to cut everybody short. So just a little warning in advance. Let's start.

ANDREI ROBACHEVSKY: Good afternoon, my name is Andrei Robachevsky, CTO of the RIPE NCC. Because IPv6 is a rather hot topic this time, I decided not just to give a quick update on what is IPv6-enabled and what is not with regard to RIPE NCC services, but to give you more insight into our plans to make RIPE NCC services fully IPv6 capable.

So, this table actually shows where we are with IPv6 with regard to our services. And if you look at this, there are a few things that we need to fix, or rather, bring onto IPv6, before we can say that we are fully IPv6 capable.

Our plan is actually to start first with email, because mail and some other services have a dependency on it, as do the provisioning of the DNS system and RIPE Database updates. Then we tackle some of the web services, and finally we approach the LIR Portal. That will make all RIPE NCC services fully IPv6 capable.

And what we want to achieve: we want to achieve that by the end of this year an IPv6-only client, similar to what we had like two hours ago, should be able to use all the services offered by the RIPE NCC to the membership and the community. We also want to make our services IPv6 aware content-wise. One example is the DNS monitoring, which is moving towards covering the service over IPv6.

Well, we are also careful about the quality, because at times migration to IPv6 actually results in a worse customer or client service experience. So we set up some monitoring nodes, starting with our website, and this slide actually shows the different response times to our web service from different monitoring points over IPv4 and IPv6.

Well, the difference is not enormous, I would say. But, still, it is there. And we plan to extend the number of points and actually monitor other services as well. And that's it.

CHAIR: Any questions? Then we'd like to move onto the talk by George.

GEORGE MICHAELSON: Hello. Anyone who has ever had to do any kind of a live demo or other interesting activity at a conference will have had the feeling that in this country I believe is called schadenfreude earlier today: there but for the grace of God go I. It's a horrible feeling when it blows up on you. So big sympathies, and claps for getting it back again. It's going to happen to somebody every time.

So, IPv6 services at APNIC. Well, we are taking a lead from the engineering coordination group at the NRO level, which has kind of sent a message that we really need to look at this in a coordinated way across all of the registries and report back. We conducted an internal review very similar to the one that Andrei showed, looking at the locations, the services we offer and the overall IPv6 availability. The approach we have taken is that we want to first get to the things we can deal with easily, the classic low-hanging fruit, and then in the medium to longer term just make services neutral with regard to whatever protocol stack you come to us with. We are conscious of the internal and external aspects of this.

So, our current status is that on v6 the member portal is available. The website is available, and that indirectly supports whois access. Most of our whois traffic now is based on web-based screen scraping, so it's very strange that the community at large has decided it would rather interface through a web server to get whois data than go direct to a whois server. But that's just what they do.

For DNS, until yesterday I would have said that we only had two of our three prime DNS locations serving over v6, because Hong Kong was officially v4 only, but I'll come to that in a minute. The FTP site is also available on v6.

V4 only, but with an indirect path via v6, is the whois service, because we run a jwhois server which is bound to a v6 address. V4 only is email. I was actually very interested that Andrei put that one up as his first main target. We have taken quite a cautious approach about the stability of email access, and that's going to be happening later in the year. We also found that our current VPN platform doesn't work well with v6. That's also a v4-only liability.

Most of the back office servers themselves and the services, while they are on dual-stack-enabled hosts, are not actually configured properly to work as a v6-presenting server, so we have to address that.

The activities for this year and into next year are the connectivity improvements, because that will give us the confidence that we can make services like mail depend on v6. We have just, literally last night, had transit enabled in Hong Kong, and that means all of our points of presence are now fully v6 active, and very shortly we will add a AAAA record to the node in Hong Kong. So all of our NS services will now be v4 and v6.

Mail and the internal services are a work in progress; we are aiming for sometime later this year. The desktops are already dual stack and clients are enabled. The intention is that all new service deployments should very consciously be able to support v6.

But apart from tech activity, there has been other stuff going on. The training programme in particular has had quite a lot of expansion to cover v6. We have got a training lab built around Cisco equipment, and it has a connection to a tunnel broker service we put up at the local exchange point, so we are now able to offer v6 training for an ISP: how to do address management, how to do router configuration, how to do a v6 exchange point, what the strategies look like, what you do inside. This includes getting visible routing outcomes in the global network, so people get a more solid sense that this is actually viable technology. We are looking at expanding that course to cover 6to4 tunnels, Teredo, and service opportunities in the ISP, to get the message out to people in the region that this is technology they can understand and use. We think the lab is quite innovative and has really expanded our training opportunities quite a lot by looking at this level of detail.
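As a rough illustration of the 6to4 mechanism mentioned in that course material (this is generic RFC 3056 arithmetic, not APNIC's lab content): a site's 6to4 /48 prefix is simply its public IPv4 address embedded after the well-known 2002::/16 prefix.

```python
import ipaddress

def sixtofour_prefix(v4addr: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix (RFC 3056) from a public IPv4 address."""
    v4 = ipaddress.IPv4Address(v4addr)
    # 2002:VVVV:VVVV::/48, where VVVVVVVV is the 32-bit v4 address in hex
    prefix_int = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

# 192.0.2.1 is a documentation address standing in for a real endpoint
print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

The whole /48 is implied by the single v4 address, which is why a 6to4 router needs no explicit delegation.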

We have been doing R&D work. You saw a product of that with Geoff presenting his measurement of uptake in the net; one of the figures there was an analysis of requests for AAAA versus A records. That's using the DSC facility that The Measurement Factory developed, which is also being used by OARC, one of our collaborative relationships. And we are conscious that we have to integrate research activities on v6/v4 relativities and take-up of v6 into our general research activity.
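The AAAA-versus-A breakdown that DSC produces boils down to tallying query types per interval; a toy version of that tally, with made-up query tuples in place of a real capture:

```python
from collections import Counter

# Hypothetical (qname, qtype) pairs as a resolver trace might yield them
queries = [
    ("www.example.net", "A"),
    ("www.example.net", "AAAA"),
    ("ftp.example.net", "A"),
    ("www.example.org", "AAAA"),
    ("mail.example.net", "MX"),
]

counts = Counter(qtype for _, qtype in queries)
aaaa_share = counts["AAAA"] / sum(counts.values())
print(counts["AAAA"], counts["A"], round(aaaa_share, 2))  # 2 2 0.4
```

Plotting that share per time bucket gives you exactly the kind of graph referred to here.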

We have been involved in the promotion activities, things like the IPv6 hour, which obviously has produced a range of experiences, but it's good to expose people to this stuff and it's only going to get better.

There is the general out reach and promotion activity.

Cheers.

(Applause)

CHAIR: Are there any questions? No, that makes me disappointed. I always like it when people have questions.

AUDIENCE: I just want to mention that at LACNIC we do have our email, whois, what they call the portal, and web server on IPv6. We are just finishing our desktops and office running IPv6; we got that last month. So we have got most of the services up.

CHAIR: Very good. Can you drop your business card with me soon after this, so that maybe next time you can actually do a full update here. Okay, more comments, questions? No. I am rushing the presenters a little bit. That doesn't mean that people cannot ask questions; in fact, I am rushing the presenters so we have enough time to actually have a discussion. So don't feel like I'm trying to move on to the next topic. If you have questions, or even want to interrupt the speaker because there is something coming up that you really want to ask, then do so. This is an interactive session and we encourage people to immediately stand up and have the discussion here.

Kurtis.

KURTIS LINDQVIST: Thanks, David, for the chance to give this presentation. Some of you might have seen this before. It's a short overview of what we did at Netnod in deploying IPv6 and the experiences we had. David did ask me to speed up a bit; those of you who know me know that's the least of my problems.

So let's see if anyone actually catches this. What was it we set out to do? Netnod, besides running an exchange point in Sweden and other things, has some services associated with this, and we have a small network. And we had this grand scheme: we wanted to deploy IPv6 on all services. We wanted it to work just like v4, just using IPv6 as the transport, and we wanted it to be production quality. For us that meant we didn't want to deploy different names for people trying to use this, and we didn't want to deploy any separate infrastructure. We wanted dual stack everywhere. We wanted production quality, so that people using this got the same experience as using the servers as we built them before. You might think this is such a straightforward idea. We'll come back to that.

But it also meant we decided that before we could do this and put it into production, we wanted to get the same type of statistics and the same type of monitoring we had for the v4 services.

And so, we put all the services that Netnod runs into three categories: the exchange point LANs themselves, allowing for peering over v6; all the services we operate at the exchange points; and last, we also run i.root-servers.net and do anycast for a number of TLDs around the world. Those were the categories we sorted the v6 work into.

The IX part was very straightforward. You go to the RIPE NCC and ask for your IPv6 assignment; I think it's a /64 for a single LAN and a /48 otherwise. We got assigned a /48. We run exchange points in five different cities in Sweden, and at each of the exchange points we have several VLANs.

So, all in all, we used our /48 from the IXP policy to deploy v6 on all these VLANs; we just took one /64 per VLAN and handed them out.

We were running the same infrastructure as for v4, and again, it's the same VLANs that we use for v4 traffic; we just gave them v6 addresses. We didn't do any fancy EUI-64 mapping. We gave them a static v6 address, and the last group, I keep calling it an octet, I guess it isn't, the last number was the same as for v4, so you could easily see which peer was which and map between the v6 addresses. And that's actually the same policy we apply between the different VLANs for the different MTUs, so you can easily see which provider it is. We assign a provider the same number across the board, across all the exchanges in Sweden.
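A minimal sketch of that numbering convention, using a documentation prefix rather than a real peering LAN: the peer's decimal v4 host number reappears as the same digits in its v6 address, so 193.0.2.54 becomes ...::54.

```python
import ipaddress

def peer_v6(lan_prefix: str, peer_v4: str) -> ipaddress.IPv6Address:
    """Give a peer the same visible last number on the v6 peering LAN
    as it has on v4 (decimal 54 is written as the hex digits ::54)."""
    last = int(ipaddress.IPv4Address(peer_v4)) & 0xFF
    net = ipaddress.IPv6Network(lan_prefix)
    # re-interpret the decimal host number as hex digits for readability
    return net.network_address + int(str(last), 16)

print(peer_v6("2001:db8:1::/64", "193.0.2.54"))  # 2001:db8:1::54
```

The point is purely operational: eyeballing a traceroute or peering session, the trailing digits identify the same peer in both address families.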

So that was fairly straightforward, quite easy, and it seems to work.

Then we went on to the Netnod services at the exchanges. We applied for an allocation of a /32 from the RIPE NCC, we got the /32, and we set out to do our address planning. It is interesting to do this from scratch, because being a small LIR, normally the address plan consists of taking the addresses you have, whatever they might be. When you start from scratch you have the luxury, especially with a /32, of using quite a lot of space and leaving room for growth and all these novel things that most of us had never done before. We did a few rehashes to get to what we thought was right, and we started deploying it; I think it's almost right. We decided to break out one /48 for each of the city sites we have; for the five locations we have six, as there are two in Stockholm. And we did some sort of binary-chop allocation scheme, and we then left space in the /32 for customers and future services growth as well.
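That kind of plan is easy to mechanise; here is a sketch with Python's `ipaddress` module, using the 2001:db8::/32 documentation prefix and invented site names in place of the real allocation:

```python
import ipaddress

# Documentation prefix standing in for the real /32 allocation
alloc = ipaddress.IPv6Network("2001:db8::/32")

# One /48 per site (six sites, two of them in Stockholm), taken from the
# front of the allocation; the rest of the /32 stays free for growth.
sites = ["sto1", "sto2", "site3", "site4", "site5", "site6"]
site_nets = dict(zip(sites, alloc.subnets(new_prefix=48)))

print(site_nets["sto1"], site_nets["sto2"])
# Each site /48 then yields 65536 /64s for individual LANs:
print(next(site_nets["sto1"].subnets(new_prefix=64)))
```

Because `zip` stops after six /48s, everything from the seventh /48 onward is untouched, which is exactly the "leave room for growth" property described.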

That was fairly easy. So basically we went through and took all the LANs we had, all the VLANs, all the weird overlay stuff we were using, and just listed it all, sorted it, cleaned up a lot of the network infrastructure, and assigned a /64 to each of them.

Then we actually started to deploy this. We went out and enabled v6 in the infrastructure, so all loopback interfaces got v6 addresses. Then we went around, LAN by LAN, service by service, and deployed v6 on them. Then we moved on to set up the iBGP sessions on those loopbacks. We used OSPF version 3 for this. There were some issues, which I'll come back to, but it did work.

It is worth saying that in doing this we actually went through two vendors, not because we wore out the first vendor, but because the deployment happened to come in between a planned upgrade of equipment, and we switched vendors while doing this, so we had to reconfigure a lot of this stuff as we changed vendors.

We went on and enabled v6 on the office LANs and all the office infrastructure; not all of it does v6, but we did it where we could.

And then we picked some servers and started deploying this one by one. All servers were given static addresses. Again, we are not really keen believers in the EUI-64 thing. It's a nightmare, but we'll get back to that.

On the office LAN we used router advertisements for assigning addresses. We would like to do DNS resolution over IPv6 too, but currently we do all the DNS resolution over v4.

We started adding AAAA records for web and mail, etc.

Monitoring: we do monitoring on the v6 services, and that is done over v6 transport. There were some initial problems with Perl libraries that needed upgrading, not to support v6, but to get the addresses to come out in the right byte order. An interesting thing worth noting is that as you go through and do this, you also double the number of alarms you generate, because when you lose something you get alarms for both v6 and v4. If you have thresholds on the number of alarms, you might want to have a think about this. You do tend to get a lot more alarms when things go down.

So far everything had gone fairly easily and straightforwardly. You could surf from your office computer to the Netnod website. It was exciting, and so far we were really happy. And then...

There is a world out there, another world than your own network, run by your friends. And we started noticing, actually by accident when we started to reach services that we always could reach before, that downloading upgrades no longer worked because we were trying to use v6. So we started going through this, and we had this naive belief that when we got rid of our old tunnelled transit provider and went to pure native transit across the exchange point, everything would work much better. We had some of these presentations yesterday. It doesn't. It almost works worse, oddly enough. It turned out that we had customers who started to report faults to us, they couldn't reach us, etc. One of the things I have to admit, which is interesting, and you are going to laugh at me and I'll have to be ashamed of this for the rest of the week: initially, when we started announcing the first LIR prefix for testing, I made a tiny typo, and instead of typing 2a01: I typed in 2001:. This typoed route turned out to get quite widely propagated. If you wonder who owns the real prefix, it happens to be a tiny Japanese provider known as KDDI. Most of the world seemed to prefer our prefix over KDDI's. So, to the point yesterday about filtering: yeah, there is no filtering in place. People don't seem to care.
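A trivial pre-announcement sanity check would have caught that 2a01/2001 slip before it left the router. This sketch uses made-up allocation data, not Netnod's real prefixes:

```python
import ipaddress

# The allocations this network actually holds (invented examples)
our_allocations = [ipaddress.IPv6Network("2a01:db8::/32")]

def sanity_check(announcement: str) -> bool:
    """Refuse to originate a prefix that is not inside our own space;
    a 2a01: vs 2001: typo fails this test immediately."""
    pfx = ipaddress.IPv6Network(announcement)
    return any(pfx.subnet_of(alloc) for alloc in our_allocations)

print(sanity_check("2a01:db8:1::/48"))  # True: inside our allocation
print(sanity_check("2001:db8:1::/48"))  # False: the fat-fingered version
```

Nothing here replaces upstream filtering; it only shows how cheap a local guard against this class of typo is.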

Part of the problem, when we were trying to debug some of these issues, was that the problems quite often lay in the large transit providers, the intermediate hops. So, to the comments made yesterday: they seem to consider this a lab. Well, the problem is that these labs are being used by other people for production services. And if you were lucky enough to find someone in these ASes who was willing to admit they were running IPv6, getting them to fix it was even harder, because they were saying it's not a production service. Well, maybe not for you, but you are currently harming those of us who are trying to make this a production service.

Well, the lesson learned is: if you do start providing transit for people, make sure it works, and be prepared to actually have someone who will acknowledge that you are running this and acknowledge that you actually have traffic flowing through your network on v6.

The other problem we had is these beloved vendors of ours. I have had this discussion with several of the vendors, and, I know this might not be acceptable to you, but I do believe that most of these vendors have gone through the IPv6-ready checkbox just to sell stuff to the DOD, and the regression testing of these features consisted of "it compiles". At least, every now and then that's the impression you get from several of the hardware vendors. I don't want to blame this too much on the vendors. They are in the business of making money and they will go after whatever makes them money. And if they can tick a checkbox and people are not going to use the feature, then the quality is going to match.

Part of the problem is that a lot of what you are seeing is not necessarily bugs; it's a lack of debugging tools, a lack of clear diagnostics, etc., which are not necessarily broken but make this really hard to take into production, or to use fully in exactly the same way as we use IPv4. And I can't really blame the vendors; the problem is that very few people are asking for these things to be made. We are 20 years behind: the 20 years of operational experience of turning poorly written RFCs into running code for IPv4 is something that we are going to have to go through in, according to Geoff, maybe three years for IPv6, and a lot of you guys haven't started doing this yet. Without everyone around the world feeding these poor vendors data and complaints, they are not going to do anything about this.

I am going to leave out all the software bugs we found but there was many.

Lessons learned: the IX side was very straightforward and easy. There is certainly a lot more debugging that could be made available, not only for the exchange point switches but for anything that has to do with this.

There is an ongoing discussion between exchange points about whether to put v6 on the same LAN or a dedicated LAN. Again, we wanted to make this production grade in exactly the same way as v4 worked.

51 of our members have IPv6 addresses allocated. We don't do any sFlow stats on our members, so we don't know how much traffic they carry or how many of them peer, but at least 18 of them asked for addresses. Looking at the number of v6 peers that we have with other people, it seems to be fairly well used actually.

EUI-64 addresses: completely useless. They might work on workstations, but on servers, loopback interfaces and point-to-point links you really don't want to have 64 extra bits to keep track of. It's really hard.
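For context, this is what modified EUI-64 does (per RFC 4291): the interface ID is derived from the NIC's MAC address, with ff:fe inserted in the middle and the universal/local bit flipped, which is exactly why it is awkward for servers: swap a NIC and the address changes. The MAC here is made up.

```python
def eui64_iid(mac: str) -> str:
    """Modified EUI-64 interface ID (RFC 4291 Appendix A): insert ff:fe
    in the middle of the 48-bit MAC and flip the universal/local bit."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    # render as four 16-bit hextets, as it appears in an IPv6 address
    return ":".join(f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_iid("00:1b:21:3c:4d:5e"))  # 21b:21ff:fe3c:4d5e
```

Those 64 pseudo-random-looking bits are what an operator would have to track per host, versus a hand-picked ::54.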

From an operational point of view, it's really hard to manage that. For workstations, fine, it might work.

We used static, hard-coded addresses on all this equipment.

The point-to-point links got a /64. I guess we didn't have to, but we did. No reason, it just happened.

There are some known traps if you start adding AAAA records, which people who have worked on v6 recently will have run into; they are well known. For us, we found it easier to do dual stack and treat it exactly the same way. It made our operational side simpler and it made deployment easier.

Netnod members seem to have a great interest in this. We organised a workshop on v6 two weeks ago and had 75 attendees. When we did the same thing for multicast, we had 20.

We still have to do some more work; the anycast side is not yet v6 enabled. We have to upgrade the kernels on the machines we use at the sites all around the world, we run custom kernels, and that's taking some time.

We do have a unicast DNS address, but it's not necessarily considered production by us. It probably will be, fairly soon. And we'll pick some sites where we can get v6 connectivity installed around the world and anycast them as well.

In Stockholm we have done this; for the other sites, the /32 policy makes it a bit tricky how we are going to connect them, because there is no connection at the moment between the sites. We will probably start announcing more specifics to our peers at the exchange points and then build tunnels back to Stockholm for the aggregate, with traffic being collected in Stockholm. Probably.

So there is great scope for some more v6 training. We have to do a lot more training of the staff internally and get people more familiar with debugging this, especially these global routing problems we are seeing.

So it wasn't that hard. It does take some planning, and it will take a lot more time than you thought, mostly due to vendor differences and the annoying lack of commands for debugging, etc. It does seem to work. We are going to provide a write-up on the wiki. And last, David asked me to provide some statistics. We don't actually have much to show you, but these are, as George talked about, the DSC plots from some of the I-root servers we are collecting. I guess it's almost pink, whatever the colour, it's the AAAA queries we are seeing in Tokyo and Stockholm. It's hard to draw any conclusion out of it, but I just thought I'd show you this. And I compared this with the London site, where we see a lot more. That's just a day. And then we have a unicast site for a number of TLDs as a comparison, where we see fewer yellow squares than we see at the root servers. And that was it.

Questions?

CHAIR: No questions? I am disappointed.

AUDIENCE: Your global routing problem, the fat-fingered route announcement: had you registered the route? Did any of your peers filter?

KURTIS LINDQVIST: I was going to say it was a very short period of time; it wasn't that short, but it was fairly short. For four days we announced this incorrect route, until somebody pointed out to us it wasn't our route. It wasn't registered, no, and to the best of our knowledge none of the peers actually filtered it. It actually propagated quite far.

AUDIENCE: Filtering and registration in IPv6 should still seem pretty simple.

KURTIS LINDQVIST: For the correctly typed route we did have a route object. That was much easier to register too, actually. That's a feature.

AUDIENCE: I just wanted to...

CHAIR: One second. I am sorry that I start with you, but I should have done it with all the other speakers. Please identify yourself who you are because we have also people on the Internet listening in.

AUDIENCE: My name is Jean Camp, and I just wanted to say that part of this is clearly a filtering problem, but it also seems to me that you have some tremendous usability problems. If you are asking for massive widespread adoption, it might be easier to change the software and add features, because if someone at your level of skill can accidentally announce themselves to be in Japan, imagine what your standard engineer might have difficulty with. So my only point is: I applaud your investment in training, and I think that some investment in formal usability might resolve some of these issues. That's all.

KURTIS LINDQVIST: I mean, the incorrect route announcement isn't unique to v6; it's even worse in v4.

AUDIENCE: I am not going to argue that v4 is highly usable. In fact, there was a study which found that a ridiculous proportion of all the Linksys routers returned were returned because they were "broken". People had just decided, well, we'll add a password, and then they forgot it; they decided to implement a security feature and then locked themselves out of their own router and took it back. So...

KURTIS LINDQVIST: My point was more that there are debugging tools lacking in v6 that we have for v4. The particular problem of fat fingering isn't better or worse in v4 than in v6. I don't see a way that we could pre-ship equipment that knows which routes to filter; that's part of the problem. If the other people would filter, we actually wouldn't have an issue, because that's what mostly happens in v4. But in v6, for this odd reason, this highly visible route managed to get propagated from the wrong source for a long time. I think it has nothing to do with widespread propagation; it just happened that, as someone said yesterday, people do laboratory work and people just don't care. If someone can hijack YouTube in v4 and no one cares, me hijacking KDDI might not be such a big deal, I don't know. KDDI might disagree.

CHAIR: For better or for worse, what I'm seeing a lot is also that people are already happy they got v6 to work in the first place, and things like filtering are something they postpone until after they got something to work. That of course causes issues occasionally.

KURTIS LINDQVIST: I do think that part of the problem is that when people bring this up, they inadvertently start providing transit to other people. That is one of the large problems, and I think that was part of the presentation yesterday, or, it's been a long week. And that is actually the biggest problem, because they don't know about it. When you start complaining that they are providing transit and it doesn't work, they say: I don't provide transit. Actually, you do. Maybe you shouldn't, but...

CHAIR: I actually still do have a question, but let's first ask Gert and Bernhard to come up to the front.

One question that I had: you actually did the same exercise with two different vendors, that's what you mentioned. Because you had done it already once, was it a lot easier the second time, or did you feel you had to repeat the whole exercise?

KURTIS LINDQVIST: It was easier in the sense that it wasn't exactly the same deployment, because we had learned from mistakes the first time and changed the deployment somewhat when we did it the second time. The two vendors were similar enough that the actual deployment was straightforward. They do have very different debugging tools, again that is an issue, to the extent they had any debugging tools at all. And it certainly helped doing it a second time; it was easier, absolutely.

CHAIR: Thank you. Now it's Gert and Bernhard's turn. The routing table report is now actually being done in the plenary, but we normally want to highlight a few issues that came up there, especially the discussion on how we can improve things. So I'll hand the mic to the two people who talked about routing tables.

GERT DOERING: Welcome back everybody. Actually, since both talks touched each other pretty closely, I just decided to come up with Bernhard, so please fire all your questions about the IPv6 routing table at us. Oh, please. Somebody.

BERNHARD SCHMIDT: A short update. I received a mail from NTT/Verio just 30 minutes ago saying that they have started filtering leaked ASNs from the filtered feed. At least their Mexican tunnel is already no longer transporting prefixes to, for example, Tiscali, and they are doing the same thing for the other three such tunnels. So, things are starting to get better, I hope. I'm not sure whether this idea will be perfect, but it's worth a shot.

Something that should be pointed out here is that we are not finger-pointing because we think these are evil people. We are finger-pointing because we are technicians and these are technical problems that need to be solved by bringing stuff to people's attention. So, we are not claiming any of these networks are evil. They just need to be a little bit more aware. And if you feel that we have mistreated you, please speak up.

Everybody is really peaceful after lunch.

CHAIR: Really no questions. But you guys didn't even know what Kurtis did, or did you?

What exactly did Kurtis do?

CHAIR: Announce something that didn't belong to him.

GERT DOERING: For how long did you announce it?

AUDIENCE: Four days.

GERT DOERING: When?

CHAIR: A long time ago.

GERT DOERING: Maybe I presented it a long time ago

AUDIENCE: It was in January.

GERT DOERING: Mmm I need to look at my tables. I have seen somebody who fat fingered an announcement, but it was eight days and it was sort of more prominent. You have been lucky this time. Don't do it again.

AUDIENCE: Okay. So Gert, you are not filtering yet?

GERT DOERING: Basically, I am filtering towards my downstreams and I am not filtering what comes in over peerings, especially since I want to see all the crap. The box that's doing the measurement stuff is getting fully unfiltered feeds over weird tunnels from all over the place, because I need to look at it; it's not actually giving all the funny prefixes to the production boxes, so some of the really weird things don't end up in our, well, backbone routing table. But I'm not currently filtering on our peering links, except for max-prefix limits, because it's a little bit hard to do right now. But I do filter my customers.

AUDIENCE: Somebody mentioned that fat-fingering is something that can happen with IPv4 as well. But it's a lot easier to do it with IPv6 and not detect it, because people tend to be familiar with their IPv4 addresses and they notice typos more easily. Can you suggest a solution to this?

GERT DOERING: Well, wearing my address policy hat rather than my routing table hat: there seems to be work going on regarding certificates for these things. If you need to have a certificate formally signed by the NCC saying that this is your address space, and your upstream will not accept route announcements unless the certificate validates, then there is no chance for accidental mistakes. There might still be room for purposely evil announcements, but for the typical missing-a-zero-here, extra-zero-there fat-fingered things, the risk should go down enormously.
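The certificate idea Gert describes can be sketched as a minimal origin-validation check, roughly what later became ROA-based validation. The signed entries, prefixes, and AS numbers below are invented for the example.

```python
# Illustrative sketch: accept an announcement only if a signed
# (prefix, max length, origin AS) entry validates it. All data invented.
import ipaddress

SIGNED_ENTRIES = [
    (ipaddress.ip_network("2001:db8::/32"), 48, 64500),
]

def validate(prefix_str: str, origin_asn: int) -> str:
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for net, max_len, asn in SIGNED_ENTRIES:
        if prefix.version == net.version and prefix.subnet_of(net):
            covered = True            # some certificate covers this space
            if origin_asn == asn and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

print(validate("2001:db8:1::/48", 64500))  # valid
print(validate("2001:db8:1::/48", 64501))  # invalid: wrong origin AS
print(validate("2001:db9::/32", 64500))    # unknown: nothing covers it
```

An upstream applying the policy Gert describes would drop everything that does not come back "valid".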

Basically, you should have prefix filters on your customer sessions already. So if the wrong prefix gets out, in other words, two people have already fat-fingered: your upstream and yourself.

The other problem is that currently, unless you really use the RIPE database or your regional registry database to build the filters, you build the filters based on the mail from your customer saying "this is our new nice and shiny prefix, we are going to announce it to you", and if they mistype that email, the prefix filter at the upstream might already be wrong. So there are more and more ways for things to slip through. But having no filters at all is of course calling for accidents, in v4 as in v6.
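The registry-based alternative can be sketched as a check that strictly parses the prefix a customer mails in and compares it against the registered allocation, instead of pasting it into the filter verbatim. The allocation and prefixes here are invented.

```python
# Sketch: sanity-check a customer-mailed prefix before it goes into a
# filter. The registered allocation below is invented for the example.
import ipaddress

CUSTOMER_ALLOCATION = ipaddress.ip_network("2001:db8:4000::/34")  # from the registry

def check_mailed_prefix(mailed: str) -> str:
    try:
        prefix = ipaddress.ip_network(mailed)   # rejects bad syntax, host bits set
    except ValueError as exc:
        return f"rejected: {exc}"
    if prefix.version != 6 or not prefix.subnet_of(CUSTOMER_ALLOCATION):
        return "rejected: outside the registered allocation"
    return f"ok: {prefix}"

print(check_mailed_prefix("2001:db8:4000::/36"))  # ok
print(check_mailed_prefix("2001:db8:400::/36"))   # a dropped zero is caught
```

The second call shows exactly the missing-a-zero typo discussed above: the mistyped prefix falls outside the allocation and is refused rather than silently installed.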

CHAIR: Then, David Kessens, not speaking as working group chair, just from my personal experience: IPv4 addresses are shorter and such, so you are more inclined to just remember them and type them out. But I have experienced myself, at least with v6 addresses, that you should just cut and paste them; there is no way you should ever try to type them out. Be very diligent about it: only use cut and paste and never even try to type it.

GERT DOERING: That's the danger of sending an email upstream. If you mistype that email, the upstream will cut and paste it. But yes, I agree.

CHAIR: There is one additional thing to that: obviously that doesn't solve everything, because I actually have one case where IP addresses were allocated at a certain registry and the typo had already happened there. So...

GERT DOERING: I think we can happily live with one accident of that sort per year, or per ten years. I seem to remember...

CHAIR: More questions? No. Okay. Then we move on to the next agenda item. It's actually two agenda items that will happen at once. Arien will talk about his experiences at the Amsterdam Internet Exchange regarding software issues and other problems with equipment, and he will discuss the IPv6 statistics that he has seen there.

ARIEN VIJN: I work for the Amsterdam Internet Exchange, and at the last RIPE meeting my colleague Hank presented some figures on IPv6 usage at AMS-IX; I'll give you an update on that. Last time we had 81 members registered in our database as having an IPv6 enabled router, but we actually found only 64 addresses in use, using ICMP.

Today we have 113 addresses registered, and we actually found 117 in use. So that seems good news. Of these 113 addresses, there are nine not in use, and the count also includes one address we use ourselves to discover IPv6, so potentially 116 BGP talkers.

This includes our own two ASes.

If you look at what kind of routers actually have IPv6 enabled, we see a limited number of brands. Of course the usual suspects, Cisco and Juniper; Foundry is being used; Avici is being used, and all of the Avici routers are IPv6 enabled, which surprises me. We have 44 of what I call Unix boxes attached to the Amsterdam Internet Exchange. You would think it would be fairly simple to enable IPv6 on those routers, but only 14 of the 44 have it enabled.

Relatively speaking, Juniper has the most: not all of them, but a large share of the Juniper users use IPv6.

So, why these differences? Well, I think it's mostly due to the allocation method. With IPv4, we give you one address. With IPv6, members can self-assign their addresses. We have a scheme that is based on the AS number; we have been doing that for years. It's a fairly simple scheme, based on the decimal AS number. I must say that 4-byte AS numbers are not in this scheme yet, but we can fit them in as well; we have already thought about that.
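A toy version of such a self-assignment scheme is sketched below: the member's decimal AS number is written into the interface identifier so it is readable in the address. The peering LAN prefix and the exact digit layout are assumptions for illustration, not AMS-IX's published scheme.

```python
# Sketch of an AS-number-based self-assignment scheme. The prefix and
# digit layout are invented for the example.
import ipaddress

PEERING_LAN = "2001:db8:99::"          # hypothetical peering LAN prefix

def peering_address(asn: int) -> ipaddress.IPv6Address:
    digits = f"{asn:08d}"              # e.g. 31337 -> "00031337"
    # Read the decimal digits as two hex groups, plus a trailing :1.
    # A 10-digit 4-byte ASN would simply need one more group.
    return ipaddress.IPv6Address(f"{PEERING_LAN}{digits[:4]}:{digits[4:]}:1")

print(peering_address(31337))          # 2001:db8:99::3:1337:1
print(peering_address(1200))           # 2001:db8:99::1200:1
```

The point of such a scheme is that any operator can read the peer's AS number straight out of the address without consulting a database, which is also why an unmaintained database drifts unnoticed.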

And what is actually happening is that members just don't tell us when they have enabled IPv6, and they also don't tell us when they have disabled it again.

So, people seem to value IPv6 differently over time. Sometimes there is new hardware; we see a change, and what happens is that after the new router is installed, IPv6 is disabled. Sometimes when I ask what happened to their IPv6, the new router cannot do IPv6, or they just don't dare to enable it yet.

There was a security issue with IPv6 last summer somewhere. People disabled it. It gets in the way, so it's just disabled.

Personnel changes: there is one engineer in a company, he moves to another position, and people turn it off as soon as it gets in the way. And also, and that's more an assumption, sometimes there are stricter procedures, change management and such, and IPv6 is moved out.

So, now about the volume of IPv6 traffic.

We determined this by sampling [unclear]: we look at one frame in every 8,000 coming by, and we actually look at the ethertype. Only native IPv6 is counted as IPv6; all the transition mechanisms are counted as IPv4. We are a layer 2 exchange, so we don't look further into the packets; no statistics on that.
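The measurement just described can be sketched like this: sample one frame in 8,000 and classify by ethertype, so only native IPv6 (0x86DD) counts as v6 and all v6-in-v4 transition traffic falls under IPv4. The traffic below is simulated, and deterministic every-Nth sampling stands in for whatever the real collector does.

```python
# Sketch of 1-in-8,000 ethertype-based sampling. Traffic is simulated.
import random

SAMPLE_RATE = 8000
ETHERTYPE_IPV6 = 0x86DD

def estimate_share(frames):
    """frames: iterable of (ethertype, length) pairs; returns v6 byte share."""
    v6_bytes = total_bytes = 0
    for i, (ethertype, length) in enumerate(frames):
        if i % SAMPLE_RATE:              # keep only every 8,000th frame
            continue
        total_bytes += length
        if ethertype == ETHERTYPE_IPV6:
            v6_bytes += length
    return v6_bytes / total_bytes if total_bytes else 0.0

random.seed(1)
traffic = [(ETHERTYPE_IPV6 if random.random() < 0.001 else 0x0800, 1500)
           for _ in range(800_000)]
print(f"estimated IPv6 share: {estimate_share(traffic):.2%}")
```

Note that a 6to4 or Teredo packet carries ethertype 0x0800 on the wire, which is exactly why this method counts transition traffic as IPv4.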

So, to put things in perspective with the volume: our daily volume peaks at 400 gigabits a second; last time it was 300 gigabits. Last time IPv6 peaked at 90 megabits, and mostly it was around 30 to 40 megabits a second. Today it's quite a bit higher: it peaked at roughly 220 megabits a second, and the average is, what is it, 124 megabits a second.

Here is the graph of [unclear] over a bit longer period, and you see there is a steep increase in early April; that's basically due to Usenet feeds being switched on. That is actually nothing new, because they were there before. And here is an example of what I just told you about people switching off v6 because they have a new router: a new router appeared on the exchange at one of the peers, and IPv6 was switched off. That was actually happening.

So, conclusions: the number of IPv6 users grew. Self-assignment is a nice thing, but if you have an unmaintained database then there are inconsistencies; what's new. It is still a very, very tiny fraction of the traffic: we talk about hundreds of gigabits of traffic in the v4 world and only hundreds of megabits a second in the v6 world.

So, this was this first part of my presentation.

CHAIR: Let's give people an opportunity to ask questions at this point. Anybody, or comments, or whatever? Okay, we move on to the second part of the presentation, which is more about challenges and issues that you have encountered.

ARIEN VIJN: Okay, this presentation is also scheduled for the EIX Working Group, and this is a very short version of that.

We are a layer 2 exchange, so we actually don't have much to do with IPv6; for us it's just payload. We have had our own servers available on IPv6, for the most part, for years already. But the exchange itself is only layer 2. So yeah, what's this presentation all about?

Well, it's about what we see in multicast and flooded traffic. We have a monitor box just sitting in the peering VLAN, and what do we see there? It's also about the port security feature at AMS-IX. Port security allows only one source MAC address, and if the switch sees another source MAC address, it either locks it or shuts the port. That's a loop-prevention method which is fairly successful, and it also sends a syslog message. Looking at these syslog messages, and I don't expect you can read this, we found a very typical pattern: 06, then some numbers, then 20:01:07:f8. That looks awfully familiar: it is the peering LAN prefix allocated by RIPE. Of course syslog also tells you the switch and the port, so what did we determine about that?

These are very typical port security violations. And because we know the allowed MAC address and its OUI, we can see what kind of router it is: it's only coming from one brand of router, across all interface types, and it's only coming from v6 enabled routers.

If we correlate with some syslog messages, we see that it actually occurs during the time a v6 enabled peer is unavailable. Not at the moment the peer becomes unavailable, but randomly during that period we get these bursts of violations. This went on for quite a long time, so we captured some violating frames, and at first we could not make much out of them. But if you shift the contents by 18 bytes, then it becomes clear that it's actually BGP messages coming from the violating routers.
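A sketch of how one might flag the signature described, a "source MAC" field that actually contains bytes of the RIPE-allocated peering LAN prefix (2001:7f8::/32), suggesting a mis-framed IPv6/BGP packet rather than a real MAC address. The frame layout assumptions and example frames are invented; the real misalignment details are unknown.

```python
# Sketch: detect port-security violations whose source-MAC field contains
# bytes of the peering LAN prefix. Example frames are invented.
PREFIX_BYTES = bytes.fromhex("200107f8")     # 2001:07f8

def suspicious_source_mac(frame: bytes) -> bool:
    src_mac = frame[6:12]                    # dst MAC = bytes 0-5, src MAC = 6-11
    return PREFIX_BYTES in src_mac

# A plausible real MAC versus the pattern seen in the syslog messages:
# "06, then some numbers, then 2001:07f8".
normal = bytes(6) + bytes.fromhex("0200c0ffee01") + bytes(50)
misframed = bytes(6) + bytes.fromhex("0600200107f8") + bytes(50)
print(suspicious_source_mac(normal), suspicious_source_mac(misframed))  # False True
```

Since no legitimately assigned MAC should embed that byte sequence, a match is a strong hint that IPv6 header bytes have slid into the MAC field.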

We informed the vendor informally through contacts. We know that they tried to replicate the issue, but they were unable to. I also asked other Internet exchanges whether they see this kind of violations, and they don't. So for some reason it seems specific to AMS-IX.

It doesn't seem to be harmful. It's just BGP messages for an unavailable peer; they get locked by the switch. And we have no clue why it's apparently only happening to us.

I have other issues. We have one other vendor brand that actually sends ICMPv6 ND from a different IP address than the one they are actually using; this is a recently discovered thing. Again, it doesn't seem to break anything, but I hope to tell you more about this tomorrow.

And fairly recently, we found bursts of ICMPv6 multicast listener reports, those are answers, of course, to listener queries. And that seems to be specific to Cisco routers that perform multicast routing, indicating their own address. So some people are already using it, too.

And the solution is actually "no ipv6 mld router" in your interface context. So for some operational people here, that is perhaps interesting.

Those were the issues. So what are the conclusions? It's mostly harmless.

That's all.

CHAIR: Thank you. Any questions, comments, other experiences that you would like to share, or nothing again? Okay. Thank you very much.

Then the next presentation will be James, with the update on what happened during the v6 hour, especially what went wrong. Because we are a working group, we can start with what went wrong and how we are going to fix it.

JAMES ALDRIDGE: Okay. Some of these slides I borrowed from this morning, but I have expanded the technical detail a bit.

Basically: the v6 hour and building an IPv6-only network. A quick overview of the networks at this RIPE meeting, the v6 transition mechanisms we have in place, the RIPE meeting IPv6 network, v6 hour statistics, and some questions.

Normally at RIPE meetings, every RIPE meeting until now, we have had a basic dual stack network. At this meeting we are using different address space than normal: we are borrowing a /32 from Telecity Germany, which we have broken up a bit, and very inefficiently; we are only using it for a week, so we had no requirement to make a detailed addressing plan or anything. Honestly, we are going to return it on Friday. Admittedly, there is no way the NCC can give themselves any v6 address space, so we might keep it. It's very tempting to keep it, but no.

For this meeting we have added an IPv6-only network, RIPE mtgv6, and an IPv6 network with a local RFC 1918 [unclear] to help Windows XP.

The network looks kind of like that. We have got the basic dual stack network provided by two Juniper J-series routers, each connected to the switch, with a full VRRP setup for both v6 and v4. The v6-only networks are behind a Cisco 7301, which does NATPT to translate v6 to and from v4. We have got another box sitting on both v6 networks running totd to provide a DNS application layer gateway.

NAT PT, RFC 2766.

We are running these services on an IOS release which is now almost a week old. Apparently the T3 or later versions should also work, I am told. Because of the way we are doing the address translation, any sites in the outside world will probably see the traffic coming from 19320029, which is the address of the Cisco router; it does the IP unnumbered stuff somewhere.

The application layer gateway synthesises AAAA records; it's software running on a FreeBSD box.

So, on an IPv6-only network, if DNS says the machine you are trying to access only has an A record, that's not a lot of use. So we have the DNS proxy. It has a particular configured v6 prefix; it combines the two and returns a synthesised AAAA record. The prefix used is known by the NATPT gateway, so it can strip it back to a v4 address when the traffic leaves the v6-only network.

And that looks kind of like this. The machine on the network does a query for a random domain name. The proxy forwards that to a dual stacked machine, a name server somewhere else; in this case it's [unclear], which is our server on the normal dual stack network. That does its normal thing and gets the result from the Internet. If that's an A record, it returns the A record. At this point the proxy knows that's no use, so it creates a synthesised DNS reply, which corresponds to the v4 address combined with the configured prefix.
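The synthesis and strip steps just described can be sketched like this: the proxy embeds the real A record in a configured /96 translation prefix, and the NATPT gateway strips it back off on the way out. The prefix below is an example, not the one used at the meeting.

```python
# Sketch of AAAA synthesis from an A record, and the reverse strip that
# the NATPT gateway performs. The translation prefix is an example.
import ipaddress

NATPT_PREFIX = ipaddress.ip_network("2001:db8:ffff::/96")

def synthesise_aaaa(a_record: str) -> ipaddress.IPv6Address:
    v4 = ipaddress.IPv4Address(a_record)
    return ipaddress.IPv6Address(int(NATPT_PREFIX.network_address) | int(v4))

def strip_to_a(aaaa: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    # What the NATPT gateway does with the destination address.
    return ipaddress.IPv4Address(int(aaaa) & 0xFFFFFFFF)

aaaa = synthesise_aaaa("192.0.2.1")
print(aaaa)               # 2001:db8:ffff::c000:201
print(strip_to_a(aaaa))   # 192.0.2.1
```

Because the v4 address occupies the low 32 bits of the synthesised address, the mapping is reversible without any per-flow state in the DNS proxy itself.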

Okay. The machine now has a v6 address, so it sends a packet out. The NATPT gateway does its magic, strips off the v6 prefix, converts it to v4; return traffic comes back, and that's all fine. Now, we struggled with the next slide for quite a while: the actual Cisco NATPT configuration. It's one line per interface and half a dozen other lines; I don't know why it took us so long. Anyway, this is cut from the configuration that's actually running on the router. According to the documentation, because we have now switched to the overload option, which basically maps everything to a single IP address so it's doing port and address translation, the "v4 mapped" NATPT line on the IPv6 prefix is not necessary. We were trying to map to a pool of v4 addresses earlier, but we couldn't get that to work. And then, by weird IOS magic, that config on our router does the right job. And we got that working about lunch time on Tuesday. There is other stuff, OSPF, but that's just to tie our router into the rest of our network.

A few statistics. This is the traffic and the CPU load on the Cisco 7301 that was doing NATPT and handling the v6-only networks for the past day or so. Not a lot happening overnight. All times here are UTC, so add two hours for local time. Today we peaked at about 2.6 megabits on the v6 network and 1.5 megabits on the v6-XP network, and CPU load on the router went up to around 10% at worst. We also looked at the number of clients on the different networks at any time by looking at the associations on the base stations. I think the v6 hour shows up quite clearly where the red area drops to almost nothing at the top. We had some issues with the access points; it seems that when we turned off the two dual stack networks, they stopped accepting any new associations. At the end of the v6 hour we restarted the problem access points. That particularly affected this room, but I think that's largely because this room had the most people in it and the four access points here had the most associations per access point.

If we compare the traffic, the bottom graph there shows, or would show if it was on a larger scale, whether anyone was actually moving around from one access point to another. That's broken down by associations per access point; we have got 10 access points deployed for this meeting. There is a small sign of some movement: as the access points here were not accepting connections, neighbouring access points were trying to take over, but with somewhat poor signal.

If we compare the previous graph of traffic, 2.6 and 1.5 megabits, with the overall traffic to and from the network: today we have peaked at around 25 megabits per second on the link to the [unclear], so do the maths. V4 is still bigger than v6.
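Doing the maths on the quoted peaks, treating them as if they were simultaneous (which they may not have been):

```python
# "Do the maths" on the peaks quoted above; figures are those mentioned
# in the talk, compared as if simultaneous.
v6_only_mbit = 2.6      # peak on the v6-only network
v6_xp_mbit = 1.5        # peak on the v6/XP network
total_mbit = 25.0       # peak on the external link

share = (v6_only_mbit + v6_xp_mbit) / total_mbit
print(f"v6 share of meeting traffic: {share:.1%}")   # 16.4%
```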

And any questions, comments?

Basically my conclusion for v6 and NATPT: it works, until you start messing around with the access devices. But that had nothing to do with NATPT; NATPT seemed to be working quite happily, and all the v6-only networks were working quite happily, until we disrupted the network by reconfiguring the access points to take the dual stack networks off.

CHAIR: Questions?

AUDIENCE: Lorenzo, from Google. We once tried to get this to work and we gave up, but we will try your config. The question I have is: is the DNS lookup going through the router over IPv4?

JAMES ALDRIDGE: There is nothing going through that router on IPv4.

JAMES ALDRIDGE: So the [unclear] box has the v6 address of the external resolver, a basic caching name server, so it forwards over v6. If you put a box on the network that only has a v4 address, there is no way it can get to the outside world. What we do for the Windows XP LAN is basically provide unrouted RFC 1918 space, and that gives the 1171602 address of the totd box, which then forwards to a real v6 address on the outside.

AUDIENCE: It would be interesting to try it with a resolver that doesn't talk to the outside world, one that is authoritative. Because we have tried that.

JAMES ALDRIDGE: You have a problem if you have a v6-only authoritative name server. When you do a lookup you hit a top level domain that doesn't...

AUDIENCE: If you are only looking up your own zone it should work. It expects to see the DNS query. Maybe it's been fixed in the code, so we'll try it. Thank you anyway.

JAMES ALDRIDGE: Apparently IOS is supposed to do the DNS application layer gateway itself, but we couldn't get that to work.

AUDIENCE: I want to comment on something you just said to Lorenzo, which is that for your own zone it will work. It won't if there is a non-v6 server between the root and you.

AUDIENCE: That was a hack anyway. I was trying gently to point that out.

AUDIENCE: Just one short comment, because it took me some hours to figure out: Cisco NATPT does not work if IPv4 and IPv6 are on the same interface. So if you run your network dual stacked and basically have a NAT router on a stick, it doesn't work. You need different interfaces.

JAMES ALDRIDGE: Yeah, the interfaces we are using for NATPT are v6 only.

AUDIENCE: We tried to run a central router in our network to do NATPT for all our access networks, and it just didn't work, and the error message is totally unclear. As soon as you split IPv6 and IPv4 onto different interfaces it starts to work, so be aware there is a bug.

JAMES ALDRIDGE: We are not sure whether they are bugs or features, but we did spend several hours Monday night and Tuesday morning trying to get this to work. We did eventually, at lunch time on Tuesday, come up with this config, and that seemed to work, so we are happy.

CHAIR: Did you post that on the wiki or not?

JAMES ALDRIDGE: I haven't got editor rights on the wiki, so...

CHAIR: This is David Kessens, not speaking as working group chair. I was sitting at the back of the room at the time of the v6 hour, and I heard quite a bit of grumbling going on there: people running Macs that are supposed to work flawlessly, and stuff like that. Is there anything you have to say about that?

JAMES ALDRIDGE: My Mac worked flawlessly. In fact, both my Macs, 10.4 and 10.5, worked flawlessly. We had a lot of users on the v6-only networks. The only disruption was caused when we did the access point reconfiguration that screwed things up, as far as I can see. There may be issues with having to manually type in the DNS resolver address; if you get that wrong, things don't work.

AUDIENCE: So, just to check back a little bit on how this thing was working: can we do a show of hands to get a little idea of for whom it actually worked? Who was able to use v6 only and access most, not all, but most things? That's a fair amount of people. Not overwhelming. And for how many people did it not work?

So we have two people at the mike now. I think Rob has something really urgent, so go ahead.

AUDIENCE: Just a follow-up question: how many of the people for whom it worked tried this out for the first time this morning? Okay. Thank you.

CHAIR: So a lot of people had actually already tried it.

AUDIENCE: From what I saw, it worked and it didn't work. It worked until IPv4 was switched off and then it stopped working and this tells me that IPv6 works fine as long as you have got IPv4. And it also tells me that I won't be trying this in my production environment any time soon.

JAMES ALDRIDGE: I wouldn't try doing a major reconfiguration of all the access points during the middle of a working day on my network back at the RIPE NCC either. But the problem, I think: we had people on the v6-only networks before we turned off the dual stack network, and I believe that a number of them were actually getting useful work done; they had connectivity. What happened when we turned off the two dual stack networks on the access points is that we managed to break the access points, not that we managed to break the IPv6 network in particular.

AUDIENCE: Are you saying that if we did the same tomorrow, there would be no problems, it would work fine?

JAMES ALDRIDGE: I would say that reconfiguring access points in the middle of a working day is a bad idea.

AUDIENCE: That's the only way you get a representative...

AUDIENCE: It could have been done other ways, and the problem is basically operational. Something was tried that hadn't been rehearsed. It should have been tried at midnight last night to make sure what was being done would work; it wasn't tried, and we were all victims. Not a packet moved.

JAMES ALDRIDGE: We did try just working on one access point beforehand. That seemed to work. What happens when you reconfigure nine access points simultaneously, who knows.

AUDIENCE: We do.

JAMES ALDRIDGE: Well we do now.

AUDIENCE: Wilfried, one of those guys who has had IPv6 turned on for years on my box, and I just wanted to be a little bit more specific. It was working perfectly for me with the XP setup, because I was also using this v6-only network yesterday, and I could do, with one minor exception, almost anything I wanted to do. It just broke due to the access point problem; well, bad luck. The only thing I found, and that may be an additional data point: if you get the impression that something stops working, it's not necessarily a problem with IPv6 in the first place. I could no longer get back to my Linux box in the office with an old SSH client on XP, but obviously it was not a problem of IPv6, and it was not a problem of the Linux server side, because one of the colleagues helped me debug that, and with a different client over v6 it worked perfectly. So it was just a problem of my ancient SSH client. So I think we should really be a little bit more discriminating: it stopped working, so it had to be IPv6? Not necessarily. And Fergal, I can perfectly understand your point of view that you don't want to do this to your production network any time soon, but I think we really need these real-life tests in a pretty safe environment, where we don't break customer connections, just our own, and we try to debug what's working and what's not working. So from my end, a big thank you in capital letters to everyone who was involved in setting this up. Thank you.

AUDIENCE: I am speaking as an IPv6 user. I am actually quite unhappy, because IPv6 was working perfectly well for me all morning and then somebody took away the wireless. I expected that IPv4 would just be turned off, not that the wireless would be turned off with the v6 on it. And no, I didn't want to go to the v6-only wireless, because I assumed that my box is perfectly well capable of doing v6 in a mixed environment, and I wouldn't even miss v4, so that didn't really make me happy.

And then, after I was in the right network, the access points crashed, so that was just worse. So actually my experience was bad, while usually the IPv6 experience this whole week has been quite good.

After complaining, something more to think about: I am not exactly sure what the underlying idea behind this exercise was. Was the idea to demonstrate that we really need to do lots more work before an IPv6-only network can be used? In that case, running NATPT is a stupid idea, because it just hides that nothing works if we turn off v4. Or was the idea to demonstrate that NATPT can indeed be used as a migration mechanism? In which case I don't think we are the right audience. So, even if it was phrased as a question, basically I think we should do it again but without NATPT: turn off v4 openly and show people that their favourite sites might not work. We actually put quite some effort in yesterday and today to make one of our customers' community socialising sites fully v6 capable, and then, well, it was just not necessary, because NATPT would have hidden it anyway. So...

AUDIENCE: I would like to answer that question. I am the one who started this joke, so I'll take the blame. You asked what the goal was. The goal was that the deployment of v6 and the transition at the end of IPv4 is not my customer's problem. They just want packets. They don't care if those are delivered on v4 or v6 or on a donkey. There is coming a time when they can't get v4 space. They, in my case, and I think in many of our cases, are the enterprise user. Okay, the consumer we kind of know what to do with: they are either at home on a NAT, or you are going to see a, you know, Comcast double NAT, if you were at the IETF, that kind of approach, one of those major approaches. But only five people in this room are thinking at something of that scale. The rest of us are trying to deal with enterprise customers, etc. So the idea was: we are the enterprise. This is what that customer will see when they can only get v6 space and yet they need to get to the Internet.

AUDIENCE: So the plan was including transition mechanisms?

AUDIENCE: Right. The transition mechanism will be in place until past my retirement, unfortunately.

AUDIENCE: I hope not to see so much NATPT as migration mechanism.

AUDIENCE: Does that answer your question? The goal was: what happens to an enterprise, and we are the enterprise.

CHAIR: Lorenzo.

AUDIENCE: I am an IPv6 user. I would like to say thank you to all involved. I have never seen it work before, and it worked, and I am surprised. I think this is incredibly important, because it's all well and good to say, oh, if we turn off v4 nothing will work. Yes, we know that. Not even Google is going to work: if you type www.google.com on a v6-only connection, it doesn't work. Guess what? That's by design. The problem here is that we have to decouple the content side from the customer side. We have to decouple content from access, and NATPT is the only way I see of doing this. Once you have NATPT you can migrate your customers whenever you like: when they run out of net 10, when they run out of public space, they can still get onto the Internet and most of the applications will work. Skype didn't work, but chat works, browsing works, everything else: mail, all this stuff works. So your access side can have a reasonably functioning v6 connection, your websites can be v4, and once that's done you can start migrating the [unclear]. This is the way out of the address crunch, I think, and thank you for showing that it works.

AUDIENCE: Geoff Huston, APNIC. I was actually going to say: we tried this at APNIC, v6 only, and it was kind of a frustrating hunt-and-peck v6-only exercise. That really isn't an experience we expect the industry to go through. I think, as a number of folk have said, the more realistic expectation is that you are going to have to deploy NATPT, and I was actually impressed, after watching the IETF deprecate it only a few months ago, to understand first hand that the stuff is actually quite neat, and that most of the browsing experience, most of the application experiences, were working seamlessly. That, I think, is a takeaway: we should have done it that way in APNIC, actually creating more of the enterprise environment that is going to have to be deployed in a few years' time. So that experience, and the way in which you have lined totd up against the router, is actually a nice piece of work, and I think we should try this again that way. V6 only, I am sorry, I am just not impressed with that kind of hunt-and-peck work. But v6 with transition at the edge is actually what we are about to live through.

And I, unlike Lorenzo, am not completely convinced it's going to work properly, but unless we try it now...

AUDIENCE: I think this won't be the first time that Randy Bush and I find each other agreeing by contradicting each other. But I think the important thing was not that there should have been a rehearsal at midnight, or whenever, the night before when things are quiet. The important thing was that we were the enterprise, as Randy says, and this was the rehearsal; a rehearsal at midnight the night before doesn't test anything, because the conditions are too different. And I was very pleased with this rehearsal, because it broke for me and it gave me lots more things to focus on when I get back home. I am not quite so pessimistic as my colleague Fergal, but we have a lot of work to do, the two of us: we are the only two people interested in IPv6 in our university, or so it seems, and we have a lot of hearts-and-minds work to do before we try something like this kind of demo, because, as Fergal was pointing out at coffee time, we get one shot at this. We can't screw the enterprise more than once, and this morning's exercise was sort of a salutary experience of somebody else doing that for us so that we could learn. And again, I'd like to echo thanks to everybody involved.

AUDIENCE: My researcher half says that a failure gives me as much information as a success. My operator half says I don't have to do this to users. And on the other hand, if it had been smooth, you wouldn't have noticed it and you wouldn't have thought it was there. So, maybe. But it's like the experiment at the IETF: I never really understood it, because it showed some things, but it made user packets not get there, and my prime directive is to move the users' packets. But mistakes happen. Operational mistakes happen. At least it happened in the family, and so on and so forth. So... what the heck. We can try it again now, if you want.

AUDIENCE: I am not trying to send that crucial email right now.

One comment about the IETF way of doing it, without NATPT: maybe that was to show that we actually need to work on those mechanisms. Okay, we deprecated NATPT, but there is actually a requirements effort working on the next version, so to say, and work continues on this, actually trying to improve things. So...

CHAIR: Okay. So I think we are getting to the end of the allocated time, so it's time to wrap up.

I think one important question to ask, actually, is: did the people here find that this was a useful experiment to do here? I'm not asking whether we have to do it again. I am just asking: was it a useful use of your time?

And are there any people who actually thought the opposite, that we should not have done this?

Thank you.

You know, I don't think it's time to ask whether we have to repeat this thing, because I think it's time for everybody to go home and reflect a little bit on all of this...

JAMES ALDRIDGE: We can certainly build a v6-only network at the next RIPE meeting in the same way we have now. I just don't envisage doing an hour switch-off.

CHAIR: I think one of the nice things about this IPv6 hour switch-off is that it got our eyeballs on the problems and made clear that there are some issues that need to be solved, and that we cannot assume that everything is ready and fine to go. And I think that that goal has definitely been achieved. So...

AUDIENCE: Just one comment. The experiment is ongoing, by the way. You can connect to the network, switch off v4 and test NAT-PT as long as you want.

CHAIR: So, I want to say thank you, James, for doing this presentation.

There is one final agenda topic, and that's the standard topic that we always have that gives people an opportunity to announce events, new initiatives or whatever in the RIPE region. Normally people get a chance to show one or two slides or just make a quick announcement.

JORDI PALET MARTINEZ: Okay, it's just a couple of slides. I have more slides, but I will skip them.

Just an announcement of a new project that started March 1st. The basic idea of the project is to support the deployment of IPv6 by means of hands-on workshops. The partners in the project are entities from different countries, most of them European, but we have people from Latin America, LACNIC, we have AfriNIC, and we have also the participation of RIPE and APNIC. So, we are probably going to do different workshops, some of them maybe together with RIPE activities, back-to-back, or we're still working on an exact plan. We will have different tools, including e-learning and tutorials, and the idea is also to get your input about what is needed, so we can work on developing new models for IPv6 training for your use. Basically, what we want to do is to train the trainers, so we can multiply the effect of the work.

So, if you have suggestions, want to develop your own agenda so we can work on the models, or want to organise a workshop targeted to the [unclear] industry or a different specific region or whatever, just contact me and we can work on how to develop that workshop. That's it. Thank you.

CHAIR: Thank you. So that brings us to the end of the agenda; this is the end of the working group session. I would like to thank all the speakers for their presentations, but because this was a bit of a special RIPE meeting, because we had this IPv6 experiment, I think we have to thank all of ourselves for participating in this experiment and exposing the issues. It's pretty clear there is still a lot of work to do, so let's go home and work on that, and see you next time.

(Applause)