

Wednesday, 29 October 2008 13:30

Routing WG

The session commenced at 1:30 p.m.: Routing

CHAIRMAN: Welcome. This is the Routing Working Group session for the RIPE meeting in Dubai. We are a little bit time constrained; sorry for being a couple of minutes late.

The agenda is pretty full. As for the first items, I don't know if anyone has anything to add. We will in fact probably be adding one item after item B, concerning some work that has cropped up.

So, this is the agenda, just to give an overview. It will definitely take us through the whole slot. If you have minor items, you can always add them in the any-other-business part.

The RIPE NCC is providing both a minute-taker and a Jabber scribe, for which we thank them. The minutes of the last meeting were circulated some time ago and we haven't seen any comments. Unless someone has comments right now, I'll proceed to declare them final.

Okay. They are now final.

Present here today we have the full set of Routing Working Group chairs: for a long time now, Rob Evans, who is over there and does a great job; and Joachim Schmitz, who has joined us after a long absence. My name is João Damas. Where is Shane?

SPEAKER: Hello, my name is Shane Kerr and I am going to be talking about the IRRToolSet software community.

So, I am not going to start with that; I am going to talk about Lawrence Lessig. I don't know if any of you know him, but he does copyright stuff. He does Creative Commons these days, and if you have ever seen him talk, it's fantastic. There is a picture of him there. The thing about him is that his presentations are actually quite interesting, and it doesn't really matter what he is talking about, you kind of like to hear him talk. I realise I am going to be here after lunch and many of you might be a bit sleepy, but I want you to be active and involved and interested in what's going on, because it's the IRRToolSet. Every time this comes up, there are always a lot of people who express interest in it, and it is apparently a useful tool, but there has been a long history of problems with it, and I'll go into that in a bit. Mostly in terms of getting support and people who actually work on it.

What is it?

The IRRToolSet is a set of tools to work with Internet routing policies. There are routing registries which contain information about routes, and this is stored in the Routing Policy Specification Language (RPSL), a vendor-neutral way to publish information about what you are advertising and what you'll accept on your routers. The IRRToolSet is a set of tools you can use to take this information and do useful things with it. For example, you can actually build router configurations, or portions of your router configurations, based on what people have published about their AS numbers and so on.
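As a rough illustration of that last point, here is a minimal sketch, in Python rather than the IRRToolSet's own code, of turning registry-style route objects into a router filter. The sample route data and the Cisco-like output syntax are invented for the example; they are not real registry contents or actual IRRToolSet output.

```python
# Toy sketch of the IRRToolSet idea (not the real tool): take route
# objects published in a routing registry and emit a router filter.
# The sample data and the Cisco-like syntax are illustrative only.

ROUTE_OBJECTS = [
    {"route": "192.0.2.0/24", "origin": "AS64500"},
    {"route": "198.51.100.0/24", "origin": "AS64501"},
    {"route": "203.0.113.0/24", "origin": "AS64500"},
]

def prefixes_for(asn, routes):
    """All prefixes whose route objects name `asn` as the origin."""
    return sorted(r["route"] for r in routes if r["origin"] == asn)

def render_prefix_list(name, prefixes):
    """Render the prefixes as a Cisco-style prefix-list."""
    return "\n".join(f"ip prefix-list {name} permit {p}" for p in prefixes)

print(render_prefix_list("FROM-AS64500", prefixes_for("AS64500", ROUTE_OBJECTS)))
```

The real tool does far more (AS-set expansion, policy evaluation, multiple vendor formats), but the shape of the job is the same: registry data in, filter configuration out.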

So this is the site you can go to now to look at the tools, and there is documentation and all kinds of good stuff.

So, I am going to go back in the history and we are going to talk about where the IRRToolSet comes from and why I am presenting today.

So, it's actually quite an old set of tools. It was originally developed as part of the Routing Arbiter toolset, and this was from an earlier time when there were some different ideas about how we would address scalability and routing on the Internet and things like that. But because the IRRToolSet was useful, it stayed around long after the rest of the work in this area died. It was called the RAToolSet because it was part of the Routing Arbiter project, but then it was taken over by the RIPE NCC because there were not really any resources being put into maintaining it. And I think it was João Damas; he thought it was something that's useful to the community, it had been kind of abandoned, and the RIPE NCC had some resources, so it would take it on.

But what happened over time was that, I was at the RIPE NCC at this time and my group was the group that kind of maintained this stuff, and we realised that the RIPE NCC isn't actually a software development company. It does a lot of software development in the context of the other work that it's doing. So, we looked around and tried to find people who were software development companies, and that's how it came to be at ISC. They write BIND and a DHCP server, and they do a lot of networking stuff as well. It made sense to transfer the IRRToolSet one more time, from the RIPE NCC to ISC.

So the status in mid-2008, a few months ago, was that ISC was maintaining this code, and ISC has a fairly well developed software engineering process. I was at ISC at this time, well, slightly before this time, and part of that process is that all software changes are reviewed by other software engineers. There are standards for documentation. There are tickets that you can go through and review, and all this stuff. And it's there for a reason: it's there to maintain the high quality of the ISC code. It works pretty well. But it's not a free process. It does mean that it's heavier weight.

So, no one from outside of ISC could commit to the source tree or anything like that. And time had to be taken away from other projects within ISC to apply patches to the IRRToolSet. So you can kind of look at it like this. There is a famous book called The Cathedral and the Bazaar, with the idea that the cathedral is one way of maintaining software and the bazaar is more community driven. At that time the IRRToolSet was very definitely in the cathedral model: this is the place you go to when you need changes made. What ended up happening was that a lot of patches took a long time to get applied. Simple things got sent to the mailing list and were discussed, but there weren't any resources at ISC to do this. It didn't really match the model. That's where we were a few months ago.

So one of the biggest users of this software is the RIPE NCC. They decided that they wanted to try to improve the turnaround time, I guess you could say, to try to revive interest in the IRRToolSet. Because there are people who said: this is very useful to us, we are interested in this, we use it every day, we want changes to happen. And so the RIPE NCC asked: what can we do about this? They approached ISC and said they would like to move this to a more public community model of development: the IRRToolSet software community.

So, everyone agreed to move forward, and where are we today? We are here. And here is the current status. What happened was they agreed on setting up a Trac site. Trac is basically an online software development framework. That sounds really weird, but what it really means is there is a wiki, there is a way for it to integrate with the Subversion source tree, and there is a way to do ticket tracking as well. So you get your error reporting and your feature requests and your release status and all your documentation combined into one area. And that's up and running right now. That's the URL that I showed you earlier.

The source code has been converted. Originally the source code for this project was kept in something called PRCS or something like that, and it was then converted into CVS. Now it's been converted to Subversion, and hopefully it will stay there for another decade or so and then get converted to the latest thing.

Not all, but most of the patches I could find on the mailing list have been merged into the source tree. There are a few outstanding ones, but right now everything is tracked in tickets, so nothing should be forgotten.

And there is one more work action on my plate right now, which is to wrap up all the patches that have been integrated and all of the changes that have been made, and make one final release under the ISC banner, which is not a community release.

So, in theory the community is interested in this stuff; that remains to be seen. When the site was up and running I sent a message to the mailing list; there was a lot of interest and it really kind of revived at that time. So there are people who are interested in this stuff. The main issue is that it's no one's day job. No one gets paid to make changes to this software. People are running their own patched code, which I am hoping was just a symptom of the earlier difficulties in getting changes merged in.

So, this is the URL where you can go and take a look at the documentation. It's not just for developers. It also tells you how to install it, it has documentation about how to use it, and there are links to tutorials that have been given, and everything. The site is also completely open right now. If you want to change it, go ahead and click on edit. It's like Wikipedia; there are no restrictions. The source code repository does have restrictions. If you want to make changes and commit them yourself, please let me know, or send a mail to the mailing list, that's also possible, and we'll get your authentication added. So, the community right now, as I see it, is the mailing list. Anyone is free to participate. If people in the community decide they want a more organised structure, or they want it more closed, then that's completely up to the people involved. As for my own personal involvement, I intend to remain involved in a capacity of kind of facilitating things. I am definitely not in charge of this effort. It's hosted by ISC, and I actually haven't made any useful changes to the code for a long time. It's all been herding in other people's additions and fixes and things like that.

So, if it is to be a community, it needs the help from the people who are using it or people who have time and interest. So, this is me begging you to participate and make this an actual community development project and kind of keep it alive.

That's it.

CHAIRMAN: Thank you Shane. Does anyone have any comments or questions regarding this project? No. In that case...

AUDIENCE SPEAKER: Eric, RIPE NCC, so when will it support ASN 32?

SPEAKER: I think this is a nice example of what Shane was just talking about.

That's a fine question. The question was about 4-byte ASNs, right. Well, there are patches out there, and I deliberately chose not to merge them, as a kind of test case for this. I don't remember who wrote them. There is a ticket on the website listing this as an action item. I'd be very happy if anyone would volunteer to do the work of merging that in.

AUDIENCE SPEAKER: I didn't intend to comment on that, but well, okay, as far as I know, there are several proposals for patches. Of course, if the RPSL specification is changed regarding the representation of AS numbers, that will have some influence, and in fact a little bit of thinking and definition has to go into what "support" actually means. For me, it was a fairly late surprise to figure out that, well, okay, actually you need two different modes, depending on whether you are generating code for an old 16-bit-only system or a new one. Anyway, having the site up now is very much appreciated. One question I am seeing is, Shane, you are saying, well, the ISC procedures are there to protect the high quality of the ISC code. Unfortunately the toolset didn't come in with the ISC quality of software, and that made things even harder. And one of the really nasty things is that in fact there is no real documentation of the internal workings of the toolset available, while the toolset makes heavy use of object-oriented concepts and data hiding. So one of the questions is: is there any prospect of, well, okay, getting a little bit of documentation out there?

CHAIRMAN: Well, things like what you just mentioned were never on the to-do list as far as ISC was concerned, much less from now on. ISC, I mean, this whole transition, I should say, was possible thanks to the RIPE NCC, who decided to fund the activity. As part of the deal, ISC also committed to having periodic engineering time dedicated to wrapping up all the community contributions into a release, but the work from now on is up to the community. The code was not the best code I have ever seen, to begin with. No one I have ever seen has found it enjoyable to work with this code; people do it because they need it, because they use it. From now on, basically, we will support this, keep the site running, keep the repositories accessible, and produce the releases when there are patches, but it definitely is up to the community to come up with the things. For instance, if there are several 32-bit ASN support patches produced out there, it is up to the community to decide which one they want to pick, which one is the one that fulfils their needs, and agree on it. And once there is a final one, then we will go around and round it up and produce the next release.

SPEAKER: And I am sorry about the problems getting a 32-bit filter. I actually wasn't aware of that. What I have tried to do is, if I see something on the mailing list, I try to just kind of copy the information and put it on the website, and there is no reason it has to be me that does that. And of course the web interface, is it Mailman? I don't know, but there is a way to search the mailing list. Still, having it in a predigested way, and not having to figure out which of the 4,000 messages you are looking at is the one that actually solves your problem, is quite handy. Anyway...

CHAIRMAN: Okay. So we'll skip to the 

AUDIENCE SPEAKER: Maybe just one last comment. The RIPE NCC somehow sponsored this effort, and I would say I am very pleased with what I see. I think it's a very good step forward, and hopefully, even with this old and sort of legacy stuff and code, we can still revive this project and let it live for another few years.

CHAIRMAN: That's my hope as well. Thank you.

I am going to, as I mentioned earlier, introduce a new point in the agenda here; hopefully it's a short one. At the previous RIPE meeting, a policy proposal was put forward to have a discussion of the use of RPKI certificates for the authentication of RPSL objects. When this was discussed, I think most people, actually I think everyone, thought that the RIPE PDP was not the right way to go about this and that it should be dealt with in a separate venue, namely the Routing Working Group. One of the proposers being Randy, that old fox, he decided he wouldn't withdraw the policy proposal until we actually did something in the other venue we were talking about. So, we are going to do something about it. We have been talking around to see how we are going to deal with this.

This is a harder than average subject to go about, and so we thought that it would be good to have a small focus group come up with an initial plan. I know there is already work going on to deal with this; BBN has done some stuff, and the RIPE NCC has also done some work. So, the proposal I put to you right now is this:

We, as we have done on other occasions here at RIPE, form a taskforce chartered to look at this issue and come back here in two meetings, three meetings at most, with a report on what it is that the RIPE NCC should be doing to fulfil the expressed need. Of course, we would need the participation of the RIPE NCC in this. That way we address it properly, and hopefully we can clear the proposal that's pending in the PDP, which doesn't really belong there.

I have been going around the corridors fishing for people for this. I had most success with Rüdiger, so I'll name him the first member. I definitely want the RIPE NCC to have someone designated to participate and help out with the evaluation of what needs to be done, basically because, as the outcome of all RIPE taskforces eventually ends up being actions on the RIPE NCC, they had better be aware.

And anyone else who has shown an interest and an understanding of both RPSL and RPKI could come forward and participate in this effort. I am volunteering myself for whatever I still remember of RPSL. And others are welcome. So this is an open call.

AUDIENCE SPEAKER: As I have been volunteered already, well, okay, I am not really answering your question. Additional volunteers are welcome, but the status that I am seeing currently is that, well, okay, there are essentially two implementation options, which address different modes. The implementation option that needs to be done for support via the RIPE Database server has already been drafted quite nicely, and, well, okay, at the Dublin IETF there was at least one slide showing kind of the proposed output of that. So, well, okay, I am guilty of sitting on drafts too long and not passing them on, so, well, okay, I guess that's going to happen quite quickly. And I also have to report that the other option for implementation, which is better for people who want to do the validation of the data themselves, is being attacked by BBN, and I have seen very good output from test runs there, so it looks like things are essentially very close to completion on that track. And, again, I think there are not too many open issues on the RIPE track.

CHAIRMAN: So much the better. Randy.

AUDIENCE SPEAKER: Randy Bush: I think the real question is that there is starting to be running code, and how do we protect actual work from study groups and things like that? I have two lives: in one I am an operator and in one I am a researcher, and I just want to warn you that I am studying to see how much bureaucratic process we can wrap around about 500 lines of code.

CHAIRMAN: An infinite amount.

AUDIENCE SPEAKER: I have some experience in actually trying to make some progress with this proposal, and at the beginning it looked like a quick hack that could deliver low-hanging fruit to us. But once we started looking at the details, and the devil really is in the details, things got a bit complicated. And we were facing some trade-offs. So, I think now we have two options:

Either we really go for a quick hack and call it a hack and accept the limitations, but really go for a very, very simple solution, not trying to, you know, solve all the problems of the world.

Or we, indeed, look at the wider picture. Because what is it we are really trying to solve here: are we just trying to represent ROAs in RPSL, or are we trying to improve the security of the Internet routing registry? We all know that the current schemes, the current mechanisms for securing routing registries, are pretty outdated. They are very cumbersome. They somehow do not always achieve the goal, and they can't cover all the use cases. They can only cover a fraction of the use cases that people have.
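To make the "represent ROAs" half of that question concrete: at the data level, a ROA binds a prefix and a maximum length to an authorised origin AS, and checking an announcement against a set of ROAs is a small matching exercise. A minimal sketch, with made-up sample data and the cryptographic side of the RPKI left out entirely:

```python
import ipaddress

# Illustrative model of ROA-based origin validation. The ROAs and the
# announcements below are sample data; this only shows the
# (prefix, max-length, origin) matching logic, not certificate handling.
ROAS = [
    # (prefix, max_length, authorised origin AS)
    ("192.0.2.0/24", 24, 64500),
    ("198.51.100.0/22", 24, 64501),
]

def validate(prefix, origin, roas):
    """Return 'valid', 'invalid', or 'unknown' for an announcement."""
    pfx = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_origin in roas:
        if pfx.subnet_of(ipaddress.ip_network(roa_prefix)):
            covered = True  # some ROA covers this address space
            if origin == roa_origin and pfx.prefixlen <= max_len:
                return "valid"
    # Covered but never matched: wrong origin or too specific.
    return "invalid" if covered else "unknown"
```

For example, `validate("192.0.2.0/24", 64500, ROAS)` yields "valid", while the same prefix announced by another AS yields "invalid", and a prefix no ROA covers yields "unknown".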

Now, the question is: can we use the RPKI as leverage to solve some of those problems? If that is the question, then I would say that the proposal that Curtis and Randy put forward is not the only way to solve those problems. So just to summarise:

I think there are two ways: Either we go for quick hacks, we call them quick hacks, we do this quickly and probably dirty or we form taskforces and look at the bigger picture.

CHAIRMAN: Okay. My understanding, at least from what I have heard from several people, is that there is not even agreement on whether the quick hack is the way to go, or the other way, the more structured approach. And hence I would like at least to document this. I put the taskforce suggestion forward because the taskforce mechanism at RIPE is a very lightweight thing, and all it does is ensure that the RIPE NCC dedicates resources to this and that the few people that are named assume responsibility for actually doing the work.

AUDIENCE SPEAKER: Just to be clear, Curtis and I were asking for the quick hack. The correct solution we know, and it has nothing to do with this. It's the real RPKI, origin checking, etc., etc. That's coming. Don't spend a lot of effort on this.

CHAIRMAN: It's not the intention. It's to ensure that it runs to completion quickly.

AUDIENCE SPEAKER: Okay. Again, my intention was this should be a quick hack and I disagree slightly with Randy because, well, okay, actually the quick hack provides some very early incentive for populating the RPKI which will help, which will help to actually speed up the real solutions if they come in quickly.

CHAIRMAN: I am just acting as a facilitator. If you believe that you can get together, the three of you, and put something together, and at the same time withdraw that other thing that's got stuck in the PDP, I am more than willing to take that solution. If it turns out that things don't work the way you expect, we can always revisit this. It's entirely up to you. This is about facilitation, not about over-chairing. Just a comment from the Jabber: Geoff Huston has volunteered to participate.

AUDIENCE SPEAKER: Andre, just for the record, I am volunteering to participate in the task force.

CHAIRMAN: I will round you up, I'll ask you about your commitments and we'll see if this will be enough or we actually need to form a taskforce to ensure that there is a bit more structure and things actually get done.

Thank you. Okay. So we resume the previously announced programme. Dave, if you could come up.

SPEAKER: Thank you very much. My name is Dave Wilson. With my RIPE hat on, I am a member of the Address Supporting Organisation Address Council, elected by your good selves. And in my day job, if you like, I work for the national research network in Ireland, in the network development group.

One of the things I did about two months ago: I decided I wanted to get some experience with using a 4-byte AS number as if it were a client of ours, and see what happened. I was confident it would work, that the session would come up with our routers, that we would be able to exchange routes and exchange traffic. I was more interested in what would happen further out in the Internet, and whether we would be able to see that and analyse that.

Now, I am still working on this; there has only been a month or so of actual measurement taking place here. But one of the things that occurred caused a bit of a shock for my NOC, and I think it might be a bit of a surprise to yourselves as well, so I want to mention it.

So, as I say, I was fairly sure local operation would work. I was fairly sure that the backward-compatible part, where the 32-bit AS number is translated into AS 23456 and back at the other end, would hang together in BGP. I was wondering if we'd see that further out in the Internet. We needed to do something quickly, because the 1st of January 2009 is almost upon us and there is only one RIPE meeting between now and then. I put together a test plan: get a 32-bit AS number on a time-limited basis from the RIPE NCC, announce it, stick a machine in that space, and see what happens, in a way that hopefully doesn't make or lose anyone any money.
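The two mechanics relied on here, the AS-dot notation (such as 3.15) and the AS 23456 (AS_TRANS) substitution seen by 16-bit-only BGP speakers, come down to simple arithmetic. A small illustrative sketch:

```python
AS_TRANS = 23456  # reserved 16-bit placeholder for 4-byte ASNs (RFC 4893)

def asdot_to_asplain(text):
    """Convert AS-dot notation such as '3.15' to a plain 32-bit integer."""
    if "." in text:
        high, low = (int(part) for part in text.split("."))
        return high * 65536 + low
    return int(text)

def as_seen_by_old_speaker(asn):
    """A 16-bit-only BGP speaker sees AS_TRANS in place of any 4-byte ASN."""
    return asn if asn < 65536 else AS_TRANS

print(asdot_to_asplain("3.15"))        # -> 196623 (3 * 65536 + 15)
print(as_seen_by_old_speaker(196623))  # -> 23456
print(as_seen_by_old_speaker(1213))    # -> 1213 (16-bit ASNs pass through)
```

So every 4-byte origin looks identical (AS 23456) to an old speaker, which is exactly why the interesting effects had to be looked for further out in the Internet.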

So, that space was assigned. And I went through a short process of notifying our peers that this session or this prefix was going to appear with this strange AS number. That was fine. And then updated the RIPE database and scheduled the routing change.

The first thing I did was put the new route object, and an update to our AS-set, into the RIPE database. As a new member, of course. This happened a couple of days before our plan to bring up the announcement of the route on our existing sessions.

And I got a bit of a shock the next morning when I came in: one of our BGP peerings had gone bad. The session was up. We were receiving all routes from it. We were sending traffic to it, and the traffic was being dropped. Why did this happen?

Well, like I said, I added AS 3.15, which is the AS we were assigned, to our AS-set. This is a valid change; it was accepted by the RIPE database. The next morning our peers generated their route filters. And on at least one of those peers, the installation of IRRToolSet in use, I am not saying the current version, but the one they were working with that day, which was only a month or so ago, broke, and it fed into a script which broke in an unpredictable way; they never expected this to happen. You don't necessarily expect that software to break on valid input from that database. The routing filters broke. They lost most of our prefixes in the routing filters, which on its own is fine: it means getting rerouted via transit. But the traffic didn't flow, because our peer in this case, you could call them an upstream because they take so many prefixes, applied strict uRPF. And so we were sending them lots of traffic, highly local-preferenced towards pretty much all the academic institutions in Europe, going out this 10 Gig link, and the traffic was dropped because all the source prefixes were not recognised as being legitimately advertised by us. That's about a four and a half hour outage, not just on our link but on all sites that would be connected via that link. What did we have to do?

Out came the member of the AS-set. And I did create, and this is probably breaking the rules at this point, two route objects for the prefix, because in theory it was legitimately originated by one AS or the other. They are still in the database.

So the same day, and I don't know how they let me away with this, I started announcing the prefix. I originated it first from our own AS number to get a baseline, and set up my smokeping box. I thought: what's a good set of fairly reliable devices that one can ping and get results back from, scattered around the world? The RIPE TTM boxes, of course, so I did exactly that. Then five days later I withdrew the original announcement and advertised it again, this time originated from 3.15, transiting our own AS. And the world did not end. There was a short outage while the announcement was withdrawn and then re-advertised, and connectivity was restored. With a few differences; we did see some effects.

The next problem I had was that I could look at all this data, and it was pretty, and there were some changes, but it was very hard to correlate that with any kind of change to routing. It is probably the case that when we withdraw a prefix and announce it, even sourcing it from a different AS, all that has changed is the order in which any given AS receives that announcement, and therefore the routing decision that they make could well be based on that. I can't see their decision, because I couldn't get traceroutes back. The only interesting traffic here, as far as these announcements are concerned, is traffic back towards us, and I didn't have that information from my smokeping. I thought: wouldn't it be great if I could take our test traffic box and put it behind this AS number. So the guys at RIPE, with me sending very pushy and difficult emails, were good and kind and renumbered that box in fairly short order. We managed to get it up and working on the new LAN. I don't have anything that shows anything is broken here. It looks to me like, if you do route the traffic, once you get the announcement out on the Internet and get traffic coming back, there is no big change. There have been some changes, but so far nothing I can see that really highlights the 32-bit AS numbers as being a problem. I think if you withdraw the announcement and then send the same announcement, you will get the same small instability that you see in this case.

We want to prove this. So the next step I plan here, working with someone else on this, is that I am going to start flipping a coin, I am due to do it now, and originate the prefix based on that, about once a week: either from 1213 or from 3.15. We'll do this for a few weeks. I'll leave the smokeping running to see what the changes are, and then start using the other tools we have there, TTM and RIS, to try and identify them. What I am on the lookout for here are effects that I can reliably repeat but that change depending on the origin and nothing else. That's why I am flipping the coin.

The lessons here, and the reason I asked for a bit of time in the working group, are these: AS-dot appearing in the RIPE database was a non-backward-compatible change. Which is fine; I am not going to complain about that. But ASPLAIN itself is disallowed in the RIPE database; that's probably because otherwise it would have to have an extra mode of interpretation. What I want to raise here is: we don't use IRRToolSet, we never have, and our peers don't have any 32-bit ASNs, and yet this bit both of us. Even though we never used IRRToolSet, and even though we have no control over the updates that our peers make to their software, we can still be bitten by software that has been deployed and hasn't been updated recently. Patches are available, I must emphasise, if you do want to find the current patches for IRRToolSet for 32-bit ASNs in AS-dot format; that's where to find them.

Henk kindly provided me with the information here. Obviously there are several proposals out there now suggesting the use of the ASPLAIN format, and I understand that the document that suggests this is now in last call at the IETF and is expected to be finished in the next month or so. Obviously this means that anyone who is currently using code, including, I guess, patched versions of IRRToolSet, that can either parse or print that format will have to replace that code.

That's what I discovered. I hope you don't mind I took a little time to mention the big surprise we had and I'll be happy to hand back to the chair. Thank you very much.

(Applause)

AUDIENCE SPEAKER: Gert Doering, owner of 3.3 and, well, being nasty to my neighbours as well. Thanks for the experiment and for the feedback. We did it in a sort of less well organised way. We just announced one of our /24s with origin 3.3, put it all in the RIPE database, and I got an angry call from Rüdiger that all his scripts broke. Which was sort of the goal: not to annoy Rüdiger, of course, but to have an early warning of what's going to explode. Our upstream's filters probably didn't pick up the prefix, and, well, peering had interesting issues.

It has been mentioned that AS-dot is a stupid idea because it's incompatible, but I actually still think it was a good idea precisely because it is incompatible: you know you have to do something, rather than being surprised when the 16-bit counter overruns while your data type is just 16 bits and you just don't notice. But anyway, that train seems to have left. Thanks for the experiment.

AUDIENCE SPEAKER: Just a few remarks on, well, okay, seeing the thing kind of from the other side.

We are generating filters based on IRRToolSet. Luckily we had already done some improvements to IRRToolSet, so we didn't fall into the trap that your upstream did: that, well, okay, the standard IRRToolSet, if it runs into any problem, just continues and doesn't set an exit status or anything. So if you have incorporated that software into some larger scripts, and you rely on it always working well, you may have very unpleasant surprises. I had the unpleasant surprise that, well, okay, our production run came to an abnormal end and said, well, okay, we have got announcements, there are some problems. When I fixed that with some manual workaround, unfortunately it turned out the same problem was also seen for three other peers, because they were also behind that upstream.

Okay, the important lesson there is, well, okay, it's very important that software used in production actually reports errors, and if we haven't got IRRToolSet fixed in that regard in the public version, we need to.
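That lesson, that a filter generator must fail loudly and its callers must check, can be sketched as a small wrapper. The generator command here is a hypothetical stand-in, not a real IRRToolSet invocation; the point is only that the pipeline checks the exit status and refuses to install empty output instead of blindly pushing whatever comes out to the router.

```python
import subprocess
import sys

def build_filters(cmd):
    """Run a filter generator; return its output, or None on any failure.

    Returning None (instead of silently passing broken output along)
    lets the caller keep the previous, known-good filters in place.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # The generator signalled an error via its exit status.
        sys.stderr.write(f"filter generation failed: {result.stderr}\n")
        return None
    if not result.stdout.strip():
        # An empty filter would drop every prefix: exactly the failure
        # mode described above, so treat it as an error too.
        sys.stderr.write("filter generation produced no output\n")
        return None
    return result.stdout

# Demo with stand-in POSIX commands; a real pipeline would invoke the
# actual generator and only touch router config when output is sane.
assert build_filters(["false"]) is None  # non-zero exit is caught
assert build_filters(["true"]) is None   # empty output is caught
assert build_filters(["echo", "permit 192.0.2.0/24"]) == "permit 192.0.2.0/24\n"
```

The older IRRToolSet behaviour described above, continuing past errors without setting an exit status, defeats exactly this kind of guard, which is why fixing error reporting in the public version matters.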

The other thing is, well, okay, two years ago at the RIPE meeting there was the discussion about the incompatible extension of RPSL, and I know I stood up and said, well, okay, I would like to see a plan with an appropriate time frame for, well, okay, addressing the clients. That didn't happen, and, well, okay, having seen the whole thing, I think that in the future, incompatible changes which will be required will have to be introduced in such a way that existing clients do not get impacted. And there are ways to do it.

SPEAKER: If I may, if there is time, I have one comment on the first point you make. You are right of course, but I do want to point out one thing we did observe, without naming names: it wasn't simply a matter of someone running IRRToolSet in a cron job whose output was uploaded directly into the router. These fed into scripts. Those scripts themselves, which are probably not part of the IRRToolSet distribution, failed. No one but the individual has control over those, and I as their peer don't have control over those, and in some cases there was human intervention, which itself was fallible, and the worst still happened. So even those weren't good enough to save us.

CHAIRMAN: Very quickly Lorenzo.

AUDIENCE SPEAKER: Just curious, why didn't you announce two prefixes, one from 12.13 and one from 3.15, instead of flipping the coin?

SPEAKER: Good point. I didn't ask for two prefixes. We do announce lots of prefixes from 12.13, so one could make the comparison there. I think I'll do that actually. But my fear is that the other prefixes will probably be treated in a different way, because they are subject to different instabilities and because they have been on the Internet for a long time already.

AUDIENCE SPEAKER: Hank: On the RPSL point, about two years ago Katy and I wrote an Internet draft on what had to be changed in RPSL code when going to 32-bit ASNs. We were told to hold that draft until the formatting discussion finished. Now that it has, we can revive it, though I think it will be much shorter than the first iteration.

CHAIRMAN: Okay. Thanks. The Internet draft on the presentation of AS numbers is well past; it's just a matter of time before it shows up as a proper RFC, if it hasn't already. I only have one quick question for the RIPE NCC left: now that ASPLAIN is the chosen format, will you incorporate it fairly quickly, the change in the RIPE Database software to allow ASPLAIN?

AUDIENCE SPEAKER: I think we'd have to comply with the IETF standard, so the answer is yes. The thing is slightly more complicated, and I think it will be discussed tomorrow in the Database Working Group: when we implemented AS dot, there was a clear flag day. Before it there were no 32-bit AS numbers, and all of a sudden we started allocating them. Now there is no flag day per se, because 1 January is not really a flag day; it is a slight change in the policy, not in the state of the AS numbers. So more coordination will be required with other areas and with IANA as well.

CHAIRMAN: Thank you. Thank you Dave.

The next speaker.

SPEAKER: Good afternoon. This is content that I have prepared with Pierre Francois. The objective of this talk is to show you what the low-hanging fruits of the IPFRR technology are, and then to equip you with the questions that you should ask yourself before attempting to reach for what I would call the very high-hanging fruits, because they might imply a lot of complexity for a marginal gain.

The plan of the talk is the following: we are going to describe the principle of simplicity and some terminology, and then look at the requirements for routing convergence, because it's often here that you will find a source of unneeded complexity.

Then a quick review of the IGP convergence status, because it's the foundation for everything; it's always needed, so we'll quickly look at this. Then we'll see what the low-hanging fruits are, and the questions to ask to assess the fruits higher up the tree.

The principle of simplicity. We are talking about IGP technology, so a quote from Dijkstra: simplicity is prerequisite for reliability. If you look at the sum of the components, the complexity is quite important, so it's quite obvious that you'll have an impact on reliability there.

Another way to look at it is with a picture: a Japanese garden is very simple but still, for many people, very beautiful. And the curve that I like a lot, through the projects I have done either internally or with customers, is to plot the gains versus the costs, because too often people talk about, for example, 50 ms only on the vertical axis; they don't look at the costs of complexity implied by these decisions. The principle of simplicity, or the KISS principle, is to find the point on the curve where you maximise the gain while minimising the costs. And although an optimum exists where you can have all the gains you could dream of, 50 milliseconds in all the cases you could think of, the cost it incurs on your design and deployment is huge compared to the marginal benefit.

So it's very important to consider the two dimensions of this trade off decision.

In terms of terminology, very quickly: we are going to talk about IGP convergence upon managed or unmanaged link or node failures. We take any link or node failure within the network or at the edge, except the peering links, because that's a BGP problem, and a simple solution to it is the BGP prefix-independent convergence that was presented at NANOG six months ago. We are going to talk about IPFRR, which is about local protection. This means that you precompute, in advance of the failure, backup paths that you preinstall in the data plane; then you detect the failure locally and use your precomputed backup path. The idea of these backup paths is that they do not require any awareness of the failure from your neighbours: they are going to forward on the backup path without knowing that the link failed, and that is a key property of a local protection technique.

We are going to talk about micro loops. A micro loop is a natural property of an IGP protocol; it has been like this since the beginning. When you have an event in the topology, the IGP converges, and as the different routers converge they obviously do not converge at exactly the same time; depending on the timing of events, they may have inconsistent tables where they point at each other, creating transient loops. These occur while the network is converging, and we call them micro loops. So part of the IPFRR technology and research has been to see whether we can avoid micro loops, and we will look at the gains and costs of these techniques.

This slide is about requirements. It is extremely important to define carefully what your requirement really is. There is a big difference between the perceived requirement and the real requirement, and unfortunately, once you define your requirements, they very often determine how much complexity your design and deployment will have. My personal view is that, for routing outages, human beings actually do not care if the outage is less than 1 or 2 seconds. Even for VoIP, it's well deployed, people know. And the key point is that more important than the outage duration is its frequency, but nobody talks about that, which is already a good sign that there is not a good analysis here.

Then, in the industry, a few service providers have already standardised on a 200-millisecond kind of objective, because that is the target where human beings no longer notice the outage. The 200 milliseconds I think is a good target to go for. Then there is the famous 50-millisecond perceived requirement; it's basically a perception that people would be better off if they had it, but actually nobody is able to prove any gain from it.

The chart on this slide is actually proof of this. There are quite a lot of video deployments, so it's good to talk about video, and in the video context people say you absolutely need 50 milliseconds, otherwise the visual experience is too impacted. So we did tests with different types of video traffic, either low motion or high motion, SD or HD quality, and we plotted on the X axis the outage that was created in terms of packet loss. On the Y axis there is a measurement of the visual impairment that can be seen on the screen, and as you can see from the results of this testing, there is almost no difference whether the outage is 50 milliseconds or a few hundreds of milliseconds. But in terms of costs for your design and deployment, if you select one or the other, the cost difference might be huge.

So really spend a lot of time wondering what your true requirement is. I think it's a good target to say: a few hundreds of milliseconds in all cases. If I can, with simplicity, maximise the number of cases where I get 50 milliseconds, that's great. It's not going to give you a guarantee of 50 milliseconds all the time; it's going to give it to you most of the time, with a very low cost and complexity structure.

So, IGP convergence, the foundation. Why do I describe it as the foundation? Because whether you do MPLS or anything else, you always need it: it's the one that is going to tell you whether a BGP next hop is available; it will tell you whether a PIM source is available; and even for MPLS TE, when you have a protection and fall back onto the backup path, you might not have CAC reservation there, and you want to make sure your CAC is updated for your video streams that need CAC on rerouting. All of this is driven by the IGP convergence.

I don't know of any deployment that has full coverage. Most of the time they only protect links; they rarely protect nodes. If not always, then at least typically, they only protect a subset of the network, so there are always links or nodes that are not protected, and the fallback is the IGP.

In case of a catastrophic event, the only fallback is the IGP. Whatever you do, you should always care about it and make sure that if you add a project or invest in your network, it actually goes in this direction, because it's a good direction; you will benefit from it.

So where are we? Very quickly: it's important to know where the technology is, because it helps you assess the real gain of some additions to your network on top of this. I took what I have, so it's a CRS with IOS XR 3.6; this is not a Cisco advertisement, just an idea of what exists today in terms of technology. So, a CRS with IOS XR 3.6 in a lab, in a triangle topology. It's a local failure; you will see on the next slide how you extrapolate to a complete network, but it's a very good measurement to do: how much time it takes for IS-IS to detect the failure, compute, update the RIB, update the FIB, update the line cards and have the traffic rerouted. On the Y axis you have the time it takes in milliseconds, measured with real packets. On the X axis you have the number of prefixes in the topology.

There are four priorities of prefixes: critical, high, medium and low. Critical are typically the IPTV sources, high are the most important BGP next hops, medium are the other BGP next hops, and the infrastructure prefixes, which represent the majority of the prefixes, are in the low priority.

We ran the test one hundred times, and you have the different percentiles. Even the maximum is already good: you see that for 500 prefixes, your most important prefixes, it's sub-millisecond. If you have a huge network with three times more, 1,500 prefixes, you had 140 milliseconds. That's for a local failure.

Now, this doesn't require any tuning; there is no tuning required at all, it's just the behaviour of the system. Let's see how you can use this to deal with failures on a global, network-wide basis. You need to analyse the cost of flooding. That's work we did a few years ago: we took different worldwide topologies and analysed the failure of each possible link in each topology. For each possible link failure we computed the flows in the traffic matrix which were going through that link, and for each flow in the traffic matrix we computed which router close to the failure would reroute onto an alternate path avoiding the failure. Overall, we weighted each distance between the failure and the rerouting router by the amount of traffic of that flow in the traffic matrix.

So what we found out is the typical distance between a failure and the rerouting node. It tells you how far you need to flood through the network. This is weighted by the traffic and measured both in number of hops, how many hops you need to flood until you get to your rerouting router, and in milliseconds; for our topologies we knew how many milliseconds were needed, based on the propagation speed of light in fibre.

The left diagram is the result of the study for the number of hops: it's very rare that you need to flood more than three hops away. The right chart shows that it's extremely rare that you need to flood more than 25 milliseconds away, which is a continent boundary. And we did not do the study to discover this; we knew it before collecting the data. We just wanted the study to confirm the intuition. That's a big difference between the academic world and real designs and deployments: reality is often much simpler, because human beings design the network with a lot of redundancy. And the redundancy means that you do not need to flood far away, especially in a worldwide network. If you have a failure in Asia, you contain it in Asia. If you have a failure in the US, you contain it in the US. You do not ask a failure in the US to be recovered by a convergence in Europe. Yes, there could be a convergence in Europe, but it would simply switch from a working path to a working path.

So, based on this study, we typically estimate that if you do the local testing and add a budget of 50 milliseconds for the flooding, that's a fair estimate for the complete network. Why 50 milliseconds? 5 hops multiplied by 5 milliseconds per hop, plus 25 milliseconds of propagation: 50 milliseconds altogether. If you want to be really, really conservative, double it to 100 milliseconds. If you do this test and combine it with plus 50 or plus 100, it gives you a picture for your complete network.
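The budget arithmetic above can be sketched as a small helper, using the talk's figures (5 hops, 5 ms per hop, 25 ms propagation) as defaults and the 140 ms local measurement from the lab test as an example input:

```python
def convergence_budget(local_ms, hops=5, per_hop_ms=5,
                       propagation_ms=25, conservative=False):
    """Add a network-wide flooding budget to a measured local
    convergence time: hops times per-hop flooding cost plus
    propagation delay, doubled when being conservative."""
    flooding = hops * per_hop_ms + propagation_ms  # 5*5 + 25 = 50 ms
    if conservative:
        flooding *= 2  # the really conservative 100 ms budget
    return local_ms + flooding

print(convergence_budget(140))                     # → 190
print(convergence_budget(140, conservative=True))  # → 240
```

This is only the back-of-the-envelope estimate from the talk, not a guarantee; the point is that local measurement plus a fixed flooding budget stays within a few hundreds of milliseconds.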

You see that between the two components we are in the order of a few hundreds of milliseconds.

This is the measurement of a 900-router IS-IS network, all in level 1. We placed the CRS in every possible position in this network and computed how much time it would take to do a full shortest path tree computation: less than 10 milliseconds. So computing the SPT is extremely quick; it's no longer a problem today.

So now, once we have this foundation that gives you a few hundreds of milliseconds in all cases for your network, let us look at the low-hanging fruits that you could add on top of it.

The first one is lossless local maintenance. It's extremely simple. Simply realise that a big share of the events in the network are related to maintenance. Instead of going onto the router and doing a shut of the interface, which immediately breaks the hardware forwarding and so creates packet loss, after which IS-IS converges and you go onto an alternate path (so you disrupt and then converge: bad idea), you do the same operation but in the context of IS-IS. You shut the adjacency, so your hardware is still forwarding the packets while IS-IS is computing the alternate path, and you have zero loss of connectivity. If you want to do something on top of the foundation, do this. Typically at least 30 percent of the outages are related to maintenance; this gives you zero loss instead of a few hundreds of milliseconds of loss. It requires no software deployment, no new technology, no IETF protocol that you need to understand and test, just a little operational practice.

In the past some people have done it by simply costing the metric up: if you want to operate on a link, you go to the link and set its cost to a very large metric. I don't think it's a good idea, because there are two issues. First, you need to do it on the two sides of the link, while if you shut the adjacency you do it on one side only. Second, if you play with the metric, you most likely open yourself to operational mistakes: when you put the link back into service, the human being will potentially have forgotten what the true metric was. Shut is very simple, no new technology. It covers 30 percent of the outages on average, and you replace a few hundreds of milliseconds of loss with zero.

So that's the optimum low-hanging fruit. Now, really part of the IPFRR project, another very low-hanging fruit is the per-link LFA technology.

First, a lemma that is worth knowing when you look at this project. If I am A and I want to protect the link to B, the red link, and I have, for example, a few hundred routes which I route via B and I would like to find a backup path for that link, the lemma tells you that if you find a backup path that is valid for the packets going to B itself, then this backup path is valid for any route going through B. It's intuitive; think about it, but just take it as a statement for now.

It's very good, because once you have this property it means you only need to search for a backup path to B itself, and then it will be applicable to any route going through B.

So, it's simply benefiting from this. I am A and I check the reachability of my direct neighbours. For example, C is a direct neighbour, and I ask myself: what is the path from C to B? If the path from C to B does not go via me, then I can trust it: if the link A-B fails and I send a packet to C, it will go to B without going through A-B, which is an outcome of the fact that the route from C to B doesn't go via A. And if you have a neighbour with that property, you call it an LFA: a loop-free alternate for the link you want to protect.
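The condition described here ("the path from C to B must not go via A", equivalently dist(C,B) < dist(C,A) + dist(A,B), the basic inequality later standardised in RFC 5286) can be sketched over a toy weighted topology; the graph and helper names are illustrative, not from any real implementation:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over a dict-of-dicts graph
    mapping node -> {neighbour: link metric}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def per_link_lfas(graph, a, b):
    """Neighbours C of A whose shortest path to B avoids A entirely,
    i.e. dist(C, B) < dist(C, A) + dist(A, B)."""
    d_ab = dijkstra(graph, a)[b]
    lfas = []
    for c in graph[a]:
        if c == b:
            continue  # the protected next hop itself cannot be the LFA
        d_c = dijkstra(graph, c)
        if d_c[b] < d_c[a] + d_ab:
            lfas.append(c)
    return lfas

# Triangle A-B-C with unit metrics: C reaches B directly, without A.
g = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "C": 1},
    "C": {"A": 1, "B": 1},
}
print(per_link_lfas(g, "A", "B"))  # → ['C']
```

By the lemma above, checking the condition for destination B alone is enough to cover every route that A sends via B.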

Very simple. So what are the properties of this? Computing whether your neighbours are loop-free alternates for a link you'd like to protect is entirely automated; you don't need any human operation. It doesn't require any IETF protocol change: all the information is already available in the classical IS-IS database; there are no new protocols. It supports incremental deployment: in this example you can compute LFAs only on A, and that's fine. It doesn't ask for any cooperation from C, nor from B. If you have different vendors, different types of routers, or different software in your network, you benefit a lot from incremental deployment.

As it doesn't require any IETF protocol change or any IS-IS change, there is no interoperability testing. So again, much easier for the operator.

Then, from a testing viewpoint, how much does it achieve? If you fail the link A-B in this case, and C is an LFA, the loss of connectivity is sub-25 milliseconds. And it is independent of the number of IS-IS routes which A routes via B, whether it's 10, 100 or 1,000: it's always 25 milliseconds. It's prefix independent. And it is applicable to MPLS LDP networks. It was a wrong decision, made five years ago, to call this project IPFRR: people think it's not for MPLS, but it is entirely usable for LDP networks.

The next question: ask yourself how often you have this magic LFA; it seems it must be rare. That's the study we did. We have here 7 European topologies, 5 US topologies and 1 Japanese topology, and the last bar is the average. Look at the blue bars: this is the coverage, meaning the percentage of links in these backbone topologies which can be protected by the simple per-link LFA. On average, over 7 European networks, 5 US networks and one Japanese network, it is 78 percent of the links. Which means that typically, as a rule of thumb, for 75 percent of the events you have sub-25 milliseconds; for 25 percent of the events you fall back on IGP convergence, a few hundreds of milliseconds. That's basically the balance you have to strike for yourself: do I really want to strive for 50 milliseconds all the time, or is this not a sweet spot where I minimise my complexity and maximise the benefit?

And then, what about the POP? This is part of the work we are doing with Pierre now; we'll publish a paper later on building blocks for designs that maximise per-link LFA coverage. As a starting point, take this very common POP topology, and call C the metric between the two core routers, and D and U the unidirectional metrics from the core router to the aggregation router, from the aggregation router to the PE router, or from the core router to the PE router: in the downstream direction it's D, in the upstream direction it's U. The only thing you have to check is that C is smaller than D plus U, which in practice is very likely to be the case.

If C is smaller than D plus U in such topologies (and in others as well, which I don't have the time to go through), then all the links in the POP are protected with per-link LFAs and no micro loop is possible in these topologies, so it's extremely interesting to consider this. There is already high coverage in the backbone, but I think there is even more potential at the edge, especially for the PE devices, which are typically of an older generation, suffering more from the workload, where maybe the IGP convergence is a bit slower because they see the entire network in front of them. They are typically dual-homed, so it's a no-brainer: if you lose one link you go via the other, and typically per-link LFA applies.
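The C < D + U rule of thumb is a one-line check; a trivial sketch, with the metric values purely illustrative of a typical design (cheap core interconnect, more expensive access legs):

```python
def pop_link_lfa_condition(c, d, u):
    """True when the core-to-core metric C is smaller than the
    downstream (D) plus upstream (U) metrics, the condition the
    talk gives for full per-link LFA coverage inside the POP."""
    return c < d + u

print(pop_link_lfa_condition(c=10, d=100, u=100))   # → True
print(pop_link_lfa_condition(c=250, d=100, u=100))  # → False
```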

That's it for the first two low-hanging fruits. The last section of the talk is itself a low-hanging fruit: what questions should you ask yourself before going for the next fruit in the IPFRR tree? Because those fruits might hang very high, and the gain from them is pretty small. So it's important to ask yourself these questions; we asked ourselves these questions, and we'll show you the data we got from the topologies we analysed.

So, the topic we have not yet discussed much: micro loops, and the micro-loop avoidance techniques that have been defined within the IPFRR technology. Should you use them? The first question is: how likely is it that you have micro loops in your topology? We built a tool with Pierre which uses an algorithm to assess whether a micro loop could develop for each possible link failure in your topology. The chart shows the following: on the X axis you have the percentage of links in the three topologies we analysed, and on the Y axis the proportion of the traffic matrix that experiences a micro loop if those links go down.

What we see is that for 80 percent of the links in these three service provider topologies, there is no micro loop at all; it's not even possible. And for more or less 10 percent of the links in the topologies, yes, if they go down, they could create a micro loop on a significant part of the traffic matrix. Okay.

The study is based on an algorithmic analysis of the topology. It doesn't tell you there will be a micro loop; it tells you whether it is impossible to have one. When it tells you it's possible, it means there might be a certain timing pattern between the different router events such that the micro loop would form. It doesn't tell you that it will really occur in practice; we will analyse that later.

The next question you should ask yourself is: even if it occurs, how long will it last? There are two answers. The first one is an analysis of what you would expect. What is a micro loop? It's the natural behaviour of an IGP: when it converges, some routers might not update the table for each prefix at the same time, and maybe one router is pointing to another router that is pointing back at it, so you have a little micro loop that exists for some fraction of time.

And as the convergence, with current technology, is on the order of a few hundreds of milliseconds, and the micro loop duration is a function of the delta in convergence time between routers, it's obvious it will be a fraction of a few hundreds of milliseconds. So from an analysis viewpoint you expect a few tens of milliseconds of micro loop, if it occurs at all.

So, Pierre has built a simulation environment, based on the work we have been doing together, where he has a characterisation of the CRS and the IOS software in terms of shortest path tree computation, RIB update and FIB update, including the line card update time. He computes, for each possible node in the topology, the number of prefixes that really need to be updated in the hardware. So it's a good model of how the router would work.

And the simulation is based on a true service provider topology: a Tier 1, 900 nodes, in a single IS-IS level.

What we did: we failed 500 links in this backbone, and for each possible link failure the network was simulated and he computed the amount of micro loops that occurred; this is plotted on the Y axis. On the X axis you have the links. First, you see that 80 percent of the links show no micro loop at all. This is totally expected, because from an algorithmic viewpoint we had proved it is impossible for them to occur in 80 percent of the cases in the topology we are looking at, which is either the red or the green topology; I no longer remember whether it's A or B.

Let us look at the remaining 20 percent. We said that in 20 percent of the cases a micro loop could form, and from an analysis viewpoint we would expect a few tens of milliseconds. That's what the simulation shows. It's interesting to know that when he did the simulation, he loaded each router with ten loopbacks, so this takes into account the micro loops for 9,000 loopbacks in the network, which is far more pessimistic than reality. But it tells you that the micro loops are, most of the time and for most designs, likely insignificant. Maybe in your case they will be significant, I cannot say, but what I can say is that the low-hanging fruit for you is to ask yourself the question: do I care about this? The algorithmic analysis is very easy to do, and it will show you the proportion of cases where a micro loop could occur.

Then you do an assessment: if it occurs, how long would it be? And if you would like some simulation done, I think Pierre really welcomes interaction with service providers to continue his work, so don't hesitate to contact him or me.

If, at the end of this analysis, you think micro loops are important for you and you need to do something, the first question to ask yourself is: should I not first take care of my IGP? Does it not mean that my IGP is not optimised enough, that it's converging too slowly on my topology, and why is that? If after that step you still need to do something, then here is the mid-hanging fruit: an algorithm that you can run on an offline station. You take a PC or a Linux box; it takes as input your topology and the link or node you want to operate on, and it gives you the set of metric increments you need to apply on that link such that you can take it out of the network for maintenance without any micro loop. So it only applies to maintenance, up and down; it doesn't apply to unplanned outages. That's why it's not optimal, but it still deals with a good fraction of the micro-loop outages in a very simple manner for an operator: no network change, no new software, no new protocol, no new hardware, no implication on the complexity and reliability of your network.

Here is an example of how it works. Say I would like to operate on the link B-C. If I do my lossless local maintenance approach, then indeed locally on B there would be no loss, but in this specific topology, depending on how B and A converge, there could be a little micro loop forming between B and A for traffic going to C. So if, after all the questions I mentioned (and I think it's unlikely), you still think you need to do something, what the tool would provide you with in this case is simple: configure B-C with metric 2, wait a little, then configure B-C with metric 4, and that's it, you can bring the link down. It will not have created any micro loop anywhere in the network. Indeed, the first step, setting the metric of B-C to 2, pushes A to go via C; when you put B-C to 4, it pushes B to go via A-D-C, at which point no traffic goes via B-C any longer and you can work on the link.
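Operationally, the procedure amounts to replaying a precomputed increment sequence with a convergence pause between steps. A minimal sketch, where `apply_metric` is a hypothetical callback standing in for whatever pushes the metric to the router, and the sequence [2, 4] is the one from the example above as produced by the offline tool:

```python
import time

def drain_link(apply_metric, increments, settle_s=0.0):
    """Apply a precomputed sequence of metric values to a link,
    pausing between steps so the IGP can reconverge loop-free
    before the next increase; the sequence comes from the
    offline computation, not from this code."""
    for metric in increments:
        apply_metric(metric)
        time.sleep(settle_s)  # wait for network-wide convergence

# Record the applied sequence instead of touching a real router.
applied = []
drain_link(applied.append, [2, 4])
print(applied)  # → [2, 4]
```

Once the sequence has been applied, the link carries no traffic and can be shut for maintenance; bringing it back up replays the sequence in reverse.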

So, he has analysed the applicability of the algorithm on real topologies, and the outcome is that for most of the links it only requires two or three metric changes. And again, the KISS principle here will tell you: okay, if you really want to do something, just do it for the cases where it's simple, two or three metric changes, and for the tail of the curve, forget about it; just shut the adjacency, do a lossless local maintenance, and you might have a micro loop for a few tens of milliseconds. Do you really care? Do you really want to increase the complexity of your network for this?

So we reach the conclusion.

What we tried to do through this talk is highlight that requirements are extremely important; they are often the source of unneeded complexity. Really spend a lot of time there. I think it's fine to define a requirement that says: in terms of routing outage, a few hundreds of milliseconds is what people care about, I should achieve that all the time, and most of the time I should try to achieve 50 milliseconds. Then you can trade off perceived requirements against a network design that will be simplified and, in the end, I think much more robust and reliable, which the end customers will actually prefer.

Remember the quote from Dijkstra: simplicity is prerequisite for reliability. Do not look at the gains without considering the implied complexity.

We have seen that the IGP convergence is the always-needed foundation. It's on the order of a few hundreds of milliseconds. Then there are three low-hanging fruits.

Lossless local maintenance. Per-link LFA: automated, nothing new at the IETF, incremental deployment, no interoperability testing, sub-25 milliseconds, prefix independent, 78 percent coverage across 13 real service provider topologies. And coverage in the POP: I think most POP structures are very well suited for per-link LFA, as we have seen on the slide.

In terms of micro loops: ask yourself the question, how important is it? If you really think it is important, first optimise your IGP, then consider the offline tool; it has no impact on your network. And in the end, one last remark: IPFRR was a bad term; it is equally applicable to an MPLS network. For example, a combination that might make sense, if you already have a full mesh in the backbone, is to use per-link LFA in the POP to simplify your full mesh so that it doesn't go too close to the edge.

Finally, use tools. We have built a few tools to help you do this analysis; don't hesitate to contact us.

Many thanks for the time to talk, and many thanks for your questions, either here or afterwards.

CHAIRMAN: We can take one question before we... no questions. Thank you.

SPEAKER: All right. Hi, my name is Franz, I am from the information services department at the RIPE NCC. And I'd like to tell you a little bit about MyASN and how we improved it in the last couple of months.

First of all, if you know MyASN, ask yourselves these questions. Do you know when your prefix gets hijacked? Do you know when your transit provider suddenly changes? If you don't, soon you will. And you will hopefully be a bit of a happier engineer.

What is MyASN? Well, it's the alarm system of the Routing Information Service. The Routing Information Service has 15 RRCs all over the world, and they collect BGP updates from 620 peers; more, actually.

And users get notified when someone else announces their prefix or someone unexpectedly gives transit to them. This can be done via either syslog or email. This is already really nice and the system works, and I think many of you use it already, but just recently we improved it a lot, and that's what I'd like to show you.

Since we have many information services at the RIPE NCC, such as TTM or DNSMON, we wanted a unified alarm solution for all of them, so we invented this very nice three-stage system. It consists of an input, in this case the Routing Information Service; in other cases it could be DNSMON or TTM. Then there is a filter stage where you can define how to react on the data that comes in, and an output stage to say, okay, I want to be alarmed via email or syslog. And the nice thing about this: it is all plugin based, so we can easily extend it.

I want to give a quick example. As I said, we have three stages: input, filter and output. In this example we are going to look at RIS as an input. This gives BGP updates and time stamps to the system. Then we can put a filter in place: we want to monitor that my prefix originates from AS 3333, and we connect it to the BGP update of the RIS input. And I want to be informed via email whenever something bad happens.

Also, we don't have that in the system yet, but just to show you how flexible it is and how much further we can extend it: in the future we could also have SMS, and we simply connect it to that same filter and we get nice SMS notifications.

However, some of you only work from 9 to 5 and might not want to be disturbed at night by the system, so we can put in another filter, a 9-to-5 filter connected to time, and the SMS output plugin, and you will only be informed between 9 and 5.
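The input/filter/output chain described in the example could be sketched roughly like this. This is a hypothetical illustration of the plugin idea, not the actual MyASN code; all class and field names are invented, and the "email" output just collects messages instead of sending mail:

```python
# Hypothetical sketch of a three-stage (input / filter / output) alarm
# pipeline, as described in the talk. Names are illustrative, not MyASN's.

class OriginFilter:
    """Filter plugin: match BGP updates where a prefix's origin AS
    differs from the expected one (e.g. a possible hijack)."""
    def __init__(self, prefix, expected_origin):
        self.prefix = prefix
        self.expected_origin = expected_origin

    def matches(self, update):
        return (update["prefix"] == self.prefix
                and update["origin_as"] != self.expected_origin)

class EmailOutput:
    """Output plugin: here we only collect alarm messages."""
    def __init__(self):
        self.sent = []

    def notify(self, update):
        self.sent.append(
            f"ALARM: {update['prefix']} announced by AS{update['origin_as']}")

def run_pipeline(updates, filters_outputs):
    """Input stage: feed each BGP update through every (filter, output) pair."""
    for update in updates:
        for flt, out in filters_outputs:
            if flt.matches(update):
                out.notify(update)

# Example from the talk: monitor a prefix expected to originate from AS3333.
# The prefix value here is a made-up placeholder.
email = EmailOutput()
pipeline = [(OriginFilter("193.0.0.0/21", 3333), email)]
run_pipeline(
    [{"prefix": "193.0.0.0/21", "origin_as": 3333},    # legitimate origin
     {"prefix": "193.0.0.0/21", "origin_as": 64512}],  # unexpected origin
    pipeline,
)
print(email.sent)  # one alarm, for the AS64512 announcement only
```

Adding SMS or a 9-to-5 time filter, as described above, would just mean appending further (filter, output) pairs to the same pipeline, which is the flexibility the plugin design is after.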

So now you can already see how flexible it is and how easily we can extend it and make it work with other systems.

So, what does it bring you? Well, it is available today. I know some of you have already moved your accounts to the new system; we sent the announcement, I think, two weeks ago. It will be the unified alarm system for all information services. At the moment it supports RIS and DNSMON; TTM will follow in the future. We have full IPv6 support, and we are proud of that. There is the possibility for new notifications; we are thinking about SMS or Jabber. And a very nice thing that will make you very happy, because I know MyASN hasn't always been the fastest to inform you about things: now we have a much faster response time, with about five minutes of latency between something happening and you getting informed about it.

It's available at ripe.net/is/alarms. If you have an old account and want to migrate it to the new system, you can go to this new URL.

A quick peek at the user interface. This is the overview of your alarms; in the future you might also have TTM alarms or DNSMON alarms in there. You can easily activate, edit or view alarms. This is the interface for adding a new alarm. And this would be the overview of the alarm, where you get nicely aggregated messages, so that you don't get spammed by the system. But if you want to go into more detail and have a look at all the single messages that have been generated, you can see them in there.

All right, quickly, involvement: how you can get in touch with us if you want to know anything about this or give us feedback. Email to this address; you can also email me directly. I also have a few small questions for you. Once you have used it, how do you like the new interface? What kind of notifications would you want? Somebody mentioned SNMP already; Jabber would maybe be something; if you have any other ideas, and of course any other feedback, that is welcome as well.

And I am also doing a demo stand tomorrow after the test traffic working group. A few people visited me there already yesterday. I am happy to welcome you there and show you a little bit more about this and our other information services.

Thank you very much. Any questions?

CHAIRMAN: Any questions for Franz?

AUDIENCE SPEAKER: It's not exactly about MyASN, but the other thing you just mentioned in the talk is future DNSMON alarms. When will this be coming?

SPEAKER: It is already active. It works for our DNSMON subscribers, and it is available at the very same interface as the MyASN alarms.

AUDIENCE SPEAKER: This is great stuff; as you know, I was pushing for this. Another thing: you mentioned a five-minute update delay. That's still not really live. Will there ever be a live feed?

SPEAKER: We hope so. With this system we certainly have the possibility to make it even better and even more real time. Eric especially is currently working very hard on getting our new database server and our new database schema up for the RIS, and that will certainly improve things. Right now the problem is a little bit that we have a very large MySQL database that we are trying to renew, and then the latency will be much smaller. We obviously hope to get to real time.

AUDIENCE SPEAKER: Last thing, as a suggestion. As an alternative notification mechanism instead of email, can I propose Jabber as the one that you would work on first, because that's something that our monitoring systems can then pick up in real time. I am not that interested in SMS; that we can do ourselves once we get the live notification over some other communication channel.

SPEAKER: All right. Okay. Thanks for that feedback.

CHAIRMAN: Thank you. Any other questions? No. Then, thank you again very much for presenting this and making it so clear.

This was the last agenda item. We have one more, which is called AOB. I don't know if anyone has any other business that they would like to bring up. Okay. Otherwise we are done. At 3:30, don't forget, we start going out for the RIPE dinner activity. It is quite a logistical endeavour involving 55 four-wheel-drive vehicles. People are expected to board in packets of six, which is what fits in each of them. So don't get nervous, everyone will get in there; it just may take a little while for the 55 vehicles to load. Take it easy and enjoy yourselves.

If you are, like me, the kind of person who feels cold when the temperature goes down: it does tend to get a little bit chilly in the desert when the sun goes down, so don't forget to take a jacket with you.

Thank you.