
Monday

Note: Please be advised that this an edited version of the real-time captioning that was used during the RIPE 56 Meeting. In some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but it should not be treated as an authoritative record.

RIPE 56 Opening Plenary, Session 1 2:00pm, Monday, 5 May 2008

CHAIR: Good afternoon, everybody. It is five minutes past the announced starting time, so it's starting time. Welcome everybody to this RIPE meeting, the 56th RIPE meeting already. My name, if you don't know me, is Rob Blokzijl, I am the Chairman of RIPE and I am chairing this first session of a series of plenary sessions of RIPE.

Before we go to the regular programme, I have a few things by way of announcements. In the first place, it is my pleasure already now to thank our hosts for this meeting, the [unclear], for what this morning already looked like a very pleasant environment, but of course we thank them especially for the beautiful weather they have organised.

(Applause)

We can only say keep up the good work for the rest of the week, boys and girls.

I want to basically remind you of a couple of things. On Wednesday morning, we have what seems to become a tradition in Internet-related meetings: an IPv6 hour. This protocol, which is more than ten years old already, is going to be put to the test by way of switching off IPv4. You will find all the relevant information on ROSIE, the meeting website for this week, rosie.ripe.net, including information on what you can expect and how you can prepare yourself, or better, prepare your laptop, if you want to participate in the experiment. It's all there, but don't leave it to the last minute. Go through the instructions and if you have questions, there are people around here to help you.

Secondly, at the last RIPE meeting we did, by way of experiment, live transcripts of some of the sessions. Evaluation after the meeting showed us that it was well received by many, many participants as an aid in helping to follow the proceedings, discussions and presentations, so at this RIPE meeting we will do this again, but this time as full coverage of all the sessions, so there will be live transcription services in both meeting rooms for the whole week.

Again, this is not meant to be the official record of the meeting; it is an aid in understanding for non-native English speakers and listeners, to communicate better.

Last but not least, I mentioned ROSIE already and I will mention it again. There will be daily updates on all events going on this week, including the social events in the evening, so if you want to participate in the social events, do regularly check for things like departure time changes; we have had changes in the past. Do check ROSIE. And finally, one of the social events is the RIPE dinner on Thursday. If you have not ordered your tickets and you might decide, yes, I would like to go there after all, please go to the registration desk; there are still a few tickets available, so you can decide at the last minute to go and get your ticket. This is the only event that you really need a ticket for.

So, that is what I want to say by way of opening. I wish you all a very pleasant, fruitful, entertaining and useful week here at the 56th RIPE meeting. And now, without further ado, I want to introduce the first presentation of this afternoon, which is going to be a presentation on MPLS by Rahul Vir from Foundry Networks.

RAHUL VIR: Hello everyone. I am Rahul Vir from Foundry Networks and today we will be discussing how to increase capacity in [unclear] networks. You are all very familiar with the need for increased capacity; the discussion here will concentrate more on how to increase the bandwidth and the things we should be watching for.

So, the main things I cover here are the 100 Gigabit Ethernet standardisation effort and why we need load sharing in the meanwhile and beyond, then how to boost capacity in networks, as well as how to efficiently utilise the increased capacity that we have in the networks.

So starting with the 100 Gigabit Ethernet effort: some of you are very familiar with this; for those who are not following it very closely, it started in 2006, when a high speed study group was created. At the first plenary session in 2006 they were given six months to come up with a PAR; after that the process got extended by six more months, and 40 Gigabit got added to it. So finally what we have today is a task force working on the standardisation effort. It is working on 100 Gigabit Ethernet over short distances with multimode fibre, and single mode going up to 40 km; 40 Gigabit is for the backplane and the short distances, and at the very recent meeting the ten kilometre range also got added.

So if you see the timeline over here, the expected time for completion is around July 2010. Going on to the next slide: this is a couple of years off, and what we are expecting is that a transition to 40 Gig and 100 Gig Ethernet is going to happen first on high capacity links, in transit networks and at Internet service providers; these are the people currently using very high bandwidth on critical links. Now 40 Gigabit, although technologically it might be achievable, is not yet available at the performance and cost points people are looking for, and more importantly the 40 and 100 Gigabit Ethernet standardisation is going to happen at the same time. So what we are predicting is that the 40 nanometre technology for the silicon is going to arrive in the 100 Gigabit timeframe, with the final standard in July 2010.

So, this is not going to solve our immediate need for additional bandwidth, going above 100 Gigabit, today. One of the ways to do it is to use link aggregation and ECMP: the demand is there today, 100 Gigabit Ethernet is two years off, and OC-768 is unaffordable for many, so equal cost multipath used in conjunction with N times 10 Gig link aggregation is a far more affordable solution today.

So what are the benefits of load sharing? First is the need for higher bandwidth. Of course we want to utilise our current investment; we want to add bandwidth gradually and at a cost that we can afford. And then, along with that, we want to offer increased protection: we want end-to-end protection at the path level by having diverse paths, one-plus-N protection on the links, and also to make sure that all the links are actually being used at the same time, so that there is no idle capacity.

We want to scale beyond 100 Gig today, and the benefits of load sharing will continue to be there beyond the standardisation effort when it happens, because there is already a need for going beyond 100 Gig of capacity on some critical links today. So what are the factors affecting load sharing?

First is routing protocols: equal cost multipath provides multiple paths, determined by either BGP or the IGP, and this provides path diversity.

Link aggregation offers multiple links between two nodes for load sharing; call it bundling or trunking, but basically it provides link diversity. And then the third point, which is very important, is data forwarding: how are the packets load-shared over these links? The load balancing algorithm is responsible for efficient utilisation of these additional links that we added, as are the fields in the packet that are looked at for load balancing.

So, let's look at some of the methods to boost capacity.

So, routing protocols: we looked at the IGPs, which affect the path taken by IP traffic; RSVP-TE LSPs follow the path of the IGP, or of the IGP with traffic engineering depending on whether constraints are used or not; and LDP follows the IGP topology. BGP affects the paths of IP traffic as well, so you can have equal cost BGP next hops reachable by multiple LSP paths, as well as multiple equal cost LSP paths that actually end up in BGP.

So what are the considerations when using routing protocols? The more paths of ECMP a box supports, the more path diversity you have. Another important thing we discussed earlier is the use of ECMP along with link aggregation, which is very important: the aggregation happens at layer 2, and the aggregated link can be used as a layer 3 link, so you can have ECMP and link aggregation in conjunction. At the same time, when you are using link aggregation the bandwidth could be changing depending on how many links you have in the group, and that bandwidth can be used to calculate the metric in the IGP and BGP, which in turn is used to decide which path to take.

And another important point is that we want an even distribution of traffic. Some of the boxes you might see don't do an even distribution of traffic unless the number of links is two to the power N, so with two links or four links the traffic distribution is even, but with 3 or 5 links the traffic distribution may not be even, so you want to watch out for that as well.
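
To illustrate that point, here is a minimal sketch, and not any vendor's implementation: it assumes 256 hash buckets (an illustrative figure) dealt out to the member links, and shows that only link counts that divide the bucket count evenly give a perfectly even split.

    def bucket_counts(num_links, num_buckets=256):
        # Deal the hash buckets out to the member links round-robin;
        # a flow's hash value selects a bucket, the bucket selects a link.
        per_link = [0] * num_links
        for bucket in range(num_buckets):
            per_link[bucket % num_links] += 1
        return per_link

    for links in (2, 3, 4, 5, 8):
        print(links, "links ->", bucket_counts(links, 256))
    # 2, 4 and 8 links split the 256 buckets exactly; with 3 or 5 links
    # some links carry one extra bucket, i.e. slightly more traffic.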

Now, going on to MPLS: signalling allows multiple LSPs to the same destination. With RSVP, multiple equal cost paths could be available over which to establish an LSP, and you can differentiate between them. How do you do that? The criterion could be the number of hops, so the least number of hops could be one way of choosing, because the fewer the hops, the lower the probability of failure. Least fill is another method: you use the bandwidth and pick the interface that has the highest available bandwidth, so the traffic distribution is even. With most fill you pick the path with the lowest available bandwidth; the advantage of this is that you leave the other paths with a higher available bandwidth level, and so you leave room for future high bandwidth LSPs that come up. LDP of course provides multiple equal cost label switched paths that can be used.
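
As a small illustration of the least fill and most fill choices just described, here is a hedged Python sketch; the LSP names and bandwidth figures are made up for the example.

    # Available (unreserved) bandwidth per candidate equal cost path, in Mbit/s.
    paths = {"LSP-A": 7000, "LSP-B": 2500, "LSP-C": 9000}

    least_fill = max(paths, key=paths.get)  # place the LSP where the most bandwidth is free
    most_fill = min(paths, key=paths.get)   # pack it onto the fullest path, keeping the others free
    print("least fill picks", least_fill, "- most fill picks", most_fill)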

So let's see in this example how you map IP traffic to LSPs. To reach a BGP next hop you have three LSPs, A, B and C, all equal cost. You can assign traffic in three different ways. One is that you assign a prefix to an LSP, so you have full control; you can predict exactly which LSP the traffic is going to be mapped to. The other way is that all the prefixes of a single VRF are mapped to a single LSP, which also provides good operator control. However, these two approaches are not very good at load sharing, so the third option is to load share on a per-flow basis, which gives a very good traffic distribution across all these LSPs. So there are multiple approaches here.

The next thing is pseudowires. When you use a VPWS service you can have pseudowires going over LSPs, and the LSPs over here are equal cost. To map a wire to them, you have these four different ways. One is that you can bind it to the least used LSP, and that would give a good distribution of wires across all the LSPs. You can bind the wire to the LSP with the most available bandwidth; various services have dedicated bandwidth requirements, so you could use that. Another way is to explicitly bind it, which provides full operator control because you can go ahead and bind it through the CLI. And the last approach is the flow based approach, where you split traffic based upon flows into different LSPs.

Now, link aggregation is another method. Some of the things to look for where you have two nodes: link aggregation could be dynamic, where you are using LACP, the link aggregation control protocol, which provides increased availability; or you could use a static link aggregation group. The advantage of static would be in a multi-vendor scenario where you have [unclear] systems; it could still work because it's static. The thing to watch out for is link capacity: if you are [unclear] for higher link capacity, on hundred gig you need to have at [unclear].

So to overcome that, there is the flow based forwarding approach. In the flow based forwarding approach, the packets are identified based upon the IP header; in this example we have a purple and a grey flow, so the purple flow is kept on link one and the grey flow is kept on link two. This way all the packets of a flow stay on the same link, so there is no packet reordering within the flows, and this hashing method is one of the most popular load sharing schemes.

So let's see this for a layer 3 flow: how do you define a flow? It could be the IPv4 source and destination addresses. That works very well; however, in a scenario where you have host A and host B and there is a lot of traffic going between them, everything maps to a single link. To overcome that, in this particular example we have drawn, you could actually define the flows based on layer 2, layer 3 and layer 4 information. Then there is better traffic distribution between two hosts: host A is sending to B, one flow gets mapped to link one and another gets mapped to link two.

So, we are looking at the source and destination MAC addresses, also looking at the source IP and destination IP at layer 3, and then the IP protocol and the source and destination ports at the same time.
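
A minimal sketch of that kind of flow hashing, in Python rather than any real forwarding hardware: CRC32 stands in for the hardware hash, and the addresses and ports are just examples.

    import zlib

    def pick_link(src_ip, dst_ip, proto, src_port, dst_port, num_links):
        # Hash the layer 3/4 five-tuple so every packet of a flow maps to
        # the same member link, which avoids packet reordering within the flow.
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        return zlib.crc32(key) % num_links

    # Two flows between the same pair of hosts can land on different links
    # because the ports differ, even though the IP addresses are identical.
    print(pick_link("192.0.2.1", "198.51.100.7", 6, 40000, 80, 4))
    print(pick_link("192.0.2.1", "198.51.100.7", 6, 40001, 80, 4))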

Now, we looked at simple IP traffic; now how about an MPLS PE router, ingress and egress? What is happening at ingress is that we have the end points sending traffic onto the LSPs, and at egress the LSP traffic is going out to end points. So at the ingress PE we could use the load sharing principles of layer 2 and layer 3 and send that over multiple LSPs, or if it is going over a link aggregation group that could work as well.

On the egress PE, for the packets exiting out of the [unclear], you can look at the label information, or the other way is on a per-flow basis where you look at the label information as well as the layer 2 and layer 3 flow information. OK, now comes the interesting part. We have looked at ingress and egress; at the LSR level we don't really know what the packet is because it's inside a label, it could be carrying layer 2 or layer 3 traffic, so that is where you have to speculate what the packet is. The way to do it is to check the first nibble after the bottom-most label: if that is 4 or 6 we can speculate that the payload is IPv4 or IPv6 respectively, and if that is not the case we treat it as layer 2. Now that you know what the packet is, you can use the labels, the layer 2 or layer 3 information and so on to separate the flows out.
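
Here is a hedged sketch of that speculation step, assuming the 4-byte label stack entries of RFC 3032; it is an illustration, not router code.

    def classify_mpls_payload(frame: bytes) -> str:
        # Walk the label stack until the bottom-of-stack (S) bit is set.
        offset = 0
        while True:
            entry = int.from_bytes(frame[offset:offset + 4], "big")
            offset += 4
            if entry & 0x100:        # S bit
                break
        nibble = frame[offset] >> 4  # first nibble after the last label
        if nibble == 4:
            return "speculate IPv4: hash on IP and layer 4 fields"
        if nibble == 6:
            return "speculate IPv6: hash on IP and layer 4 fields"
        return "assume layer 2 payload: hash on labels and MAC fields"

    # One label with the S bit set, followed by an IPv4 header byte 0x45.
    print(classify_mpls_payload(bytes([0x00, 0x01, 0x01, 0xFF, 0x45])))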

So, summarising the load balancing part: it's very important to have a good load balancing algorithm because you need to efficiently utilise the extra capacity and links that you have added to the system. To achieve 320 Gig of bandwidth on a link when using 32 ports of 10 Gig, you have to make sure that not only do you add these ports but also that the traffic is evenly distributed across all of them. So, the efficiency of the algorithm is very important. Other considerations include the number of fields in the packet that are used: can you just look at layer 2, or also layer 3, layer 4 or more than that? Because that improves the distribution of traffic. And then how many hash buckets are used, because if you have more hash buckets you have better distribution.

And then, since we could use ECMP and LAG at the same time, we need to make sure that the correlation between them is very minimal, otherwise some of the links could actually be overburdened with traffic.

And the last thing is that we treat each packet type differently, so a packet could be layer 2 or layer 3 and is treated accordingly.

So here is an example experiment in the lab: you have tester 1 and tester 2, and two routers that are connected with a 32 port link aggregation group. You send 1,000 routes from tester 2 to tester 1, and in the next step you transmit 64 byte packets at one gigabit per second with a distribution of IP addresses.

So the packets are load balanced across the 32 port LAG and the tester receives all the traffic. Now, look at the traffic on this 32 port LAG group and see how the traffic is distributed. So here are the results: you see there is a very small difference between packet rates across links, so the distribution is very even across the 32 port LAG group, which basically gives you the full bandwidth that is available through these links.

OK. Other things that we have to worry about: we have looked at flow based forwarding, and flow based hashing is actually very effective in most cases, but there is something that you need to watch out for. One of them is the polarisation effect. If you have a multistage network and all the nodes in the network are running the same hash algorithm, let's say flow A and B here are mapped to the same hash bucket, they will always be mapped to the first link, so you have low utilisation in some parts of a multistage network.

Now, to neutralise this polarisation effect you could use a unique ID per node, or anything unique in that box, and calculate the hash including this unique ID, which means your flows would be sent to different links, so there wouldn't be under-utilisation in certain parts of the network. This is an improvement. It doesn't completely solve the problem, because the same flows would still be mapped onto the same link. So, to completely neutralise that, what you want is the ability to vary the hash algorithm: you have three flows that map to the same link, but you change the algorithm, which you can do through configuration, and these flows could be completely separated out. That gives you complete neutralisation and better link diversity and flow diversity.
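
A minimal sketch of that anti-polarisation idea: mixing a per-node unique ID into the hash so two routers in series do not pick exactly the same subset of flows for their first link. The node IDs and flow keys are made up for the example.

    import zlib

    def pick_link(flow_key: bytes, node_id: int, num_links: int) -> int:
        # Include a unique per-node value in the hash input.
        seed = node_id.to_bytes(4, "big")
        return zlib.crc32(seed + flow_key) % num_links

    flows = [("flow-%d" % i).encode() for i in range(8)]
    # Same node_id everywhere: every stage splits the flows identically (polarisation).
    print([pick_link(f, 0, 2) for f in flows])
    # Different node_ids: the same flows split differently at each stage.
    print([pick_link(f, 1, 2) for f in flows])
    print([pick_link(f, 2, 2) for f in flows])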

So, to summarise: there are multiple load balancing options to boost capacity. This can increase throughput beyond the link capacity; if you have 10 Gig you can take it to 100 Gig or 200 or 300 Gig. This is useful now and will remain useful even after the 40 and 100 Gigabit Ethernet standardisation is done, because currently people are already running above 300 Gig on some critical links, and of course this is very cost-effective and efficient.

Load sharing also improves network utilisation, because if you have a very efficient algorithm you are filling these links and utilising the whole network by evenly distributing the traffic through it, so it works across multiple paths and multiple links.

Flow based forwarding offers many advantages because of the efficient utilisation of the increased capacity, compared with a purely IP based approach or an approach that sends the packets a roundabout way. There are things we need to watch out for, like the link polarisation effect, but there are options to neutralise them and we should use them. And finally, this is not a one size fits all approach; you have to look at your network and choose the optimal scheme based on the traffic types and flows you might have in your network.

So at this time I would like to take any questions that you have.

So I don't see any questions. OK. Thank you.

(Applause)

CHAIR: The next presentation is by Mark Dranse, on the analysis of the Middle East cable event at the beginning of this year, I think it was.

MARK DRANSE: Good afternoon, my name is Mark Dranse, I am the Information Services Manager with the RIPE NCC, and I am going to give you a quick talk this afternoon about what happened in the Middle East at the start of this year, the cable cut situation and what we saw using some of the tools that we provide.

There are just three cables which connect Europe with Egypt into the Middle East and beyond. Back in January and February, multiple cuts led to failures and huge disruption throughout the region. The timeline of events has been covered many times in different analyses in other places, so I am not going to go into that. Instead, I want to show you how you can use and combine some of the free tools provided by the RIPE NCC to get a different perspective on the effects of network events like these.

This presentation is actually a condensed version of some research carried out by Robert and Antony from the NCC Science Group, which reports to our Chief Scientist; one of their roles is to carry out in-depth analyses such as these. The full report was published as an article earlier this year and you can read the whole thing on our website at the address at the bottom of the screen there.

As you know, the NCC runs several data collection networks and tools to help network operators like you manage your networks and diagnose problems; the vast majority of this analysis was carried out using these tools and the data they collect. The Routing Information Service, or RIS, is a looking glass with history which collects BGP data using route collectors located at 16 major exchange points around the globe, and my colleague Eric will talk more about RIS in the Routing Working Group on Friday morning. TTM, or Test Traffic Measurement, measures connectivity and paths between a fully connected mesh of nodes across the planet. Anyone can subscribe to TTM, and we will talk more about this in the Test Traffic Working Group on Thursday morning, I think.

Finally, DNSMON is an application which sits on top of TTM and monitors over 200 DNS servers. You can access all of these tools online at the addresses listed there.

So, moving into the analysis of the cuts themselves. RIS stores tables of the total number of prefixes; if RIS can't see a prefix, it's probably not announced. When we look at this data alongside the cable cut and repair times, the reduction in prefixes isn't actually very significant; what this tells us is that globally only a small percentage of prefixes were actually affected.

If we take a more local view now, we assign a country code to each prefix and look at the Middle East region, and the effects become a lot more pronounced. This graph shows the number of prefixes seen in RIS over time compared to those visible at midnight UTC on the 30th of January, just before the cuts happened. Egypt, Sudan and Kuwait initially suffered around a 40 percent drop in prefix visibility, Iran almost 20 percent, and even as far away as Bangladesh we see a 15 percent drop.

Alongside this drop in visible prefixes, RIS observed an expected surge in AS path changes as networks disappeared from routing tables or were rerouted. You can see that just over 60 percent of all AS paths associated with this region underwent some change on the 30th of January, and that the total number of distinct AS paths dropped by 14 percent overall during the cut period. Over the same period we see a pronounced increase in average AS path length, of around 9 percent, which coincides with the first cut and didn't return to normal for some ten days after that time.

So, first of all, some observations from DNSMON. As I said, DNSMON uses TTM probes to monitor 200 global DNS servers; the only probe in the Middle East is TT138, which is hosted in Manama. This shows the percentage of unanswered DNS queries from Manama over a two-week period. Occasional small glitches like these ones shown here can cause total loss and are usually due to localised issues with the DNS server being monitored, but when everything is lost it indicates a problem with the test box or its connectivity. This graph clearly shows a two and a half day period during which the test box couldn't reach any of our monitored DNS servers, as well as periods where connectivity was generally unstable.

To see more interesting data from DNSMON, we go in search of monitored DNS servers in the Middle East and find some anycast instances of the K-root. These are local K nodes, which means the routes shouldn't be propagated globally and only local clients ever see or query them, as opposed to global nodes which answer queries from everybody. DNSMON monitors them and, although we don't publish the data, we keep it for analyses such as these.

So we look at the data gathered from a test box at the Amsterdam Internet Exchange, and we see that prior to the cuts this box favoured the K global node which is also located there [unclear], though it may be too small to read. However, with the first cut it switched to the K local node at EMIX in the UAE. This node was visible from Amsterdam throughout, but we see the responses were up to seven times slower, suggesting the backup links were congested. We think this anomaly was caused by an unintentionally leaked local-node route which then became preferred by clients [unclear]. Fortunately this was noticed and fixed before the cable cuts were repaired.

The second example here is some unreachable prefixes, which we examined with BGPlay; that is a screenshot of it, and it is integrated into our RIS tools, showing paths and prefixes visually. Here, on the 30th of January, just a few hours after the cuts, this /24 in Egypt had good visibility from the point of view of the RIS. Moving on just three minutes later, the prefix completely disappears. The final path, in blue there, is lost just seconds after this screen capture was taken.

Some days later, on the 5th of February at [unclear], the /24 suddenly reappears for a few minutes but is then lost again; perhaps someone was doing some manual reconfiguration trying to patch up what was broken, we don't know.

It's not until the 10th of February at 3 p.m. that we see the first signs of full recovery, as route announcements start to arrive at the RIS peers. Thirty minutes later the prefix is fully visible once more, but overall it was completely knocked offline for 11 days.

If we focus on the AS path which carried this prefix before and after the cuts, we see it start carrying [unclear] at some point in January. The drop [unclear], probably carried out manually, ends on the [unclear] of January, but the path did not recover for a further ten days after that.

Next, an example where we can see a route but no traffic reaches it. As I mentioned before, TTM is our global network of interconnected nodes; they are hosted by providers like yourselves at locations across the planet and form a full mesh carrying out measurements of delay, loss and traceroute in both directions. We are very fortunate to have a node hosted in Manama, which is currently the only TTM node in the Middle East. Remember, as we saw with DNSMON, this box lost connectivity for two and a half days, so let's have a closer look at what went on.

As TTM does bidirectional measurements, we have to choose a pair of nodes and look at the relationship between them. In this case we looked at the Manama node, down here at the bottom, and TT01 back home in Amsterdam; traffic between them goes via London. Next I overlaid the delay plots for this link: the red line shows the number of hops as measured by traceroute and the black the delay, in each direction. The scales might not be so easy to read, but they cover the same time period and the same vertical scale. What you see is that the delay shot up when connectivity returned, especially in the direction Amsterdam to Manama; it was around a hundred milliseconds before and peaked at 1.2 seconds after the cut. The other interesting thing about this outage shows itself if we look at the RIS data. In previous cases where connectivity had gone away, the prefix lost all visibility completely in BGP, as shown here, but for this prefix BGPlay actually looks more like this throughout the entire outage, so the network was still being announced, which is a little bit strange; it was fully visible throughout. The first shot is actually from January the 30th, not the 1st of February, before anything happened. If we jump ahead to the 1st of February, at a quarter past four, nothing has changed. Two minutes later there is a small batch of updates, with the red ring around them there, which leaves the prefix at its least visible, but it's still definitely there. Forty minutes later and it's back to normal again. What we are basically seeing is that the prefix was announced constantly during the cut period, and we wondered how and why it was unreachable. Looking at the traceroutes to Manama, they got from Amsterdam to Teleglobe in London, so we suspect the prefix was being announced from a router there rather than from one in the Middle East, and this is why the cable cuts didn't cause a route withdrawal. This tells us to treat BGP routing data with caution: it doesn't necessarily imply reachability.

Number four, some more on path changes. BGPlay can only look at a prefix and not a whole AS, so we have to use this /23 originated from Bangladesh. At 4:30 in the morning UTC very little has happened, but we find ourselves at the start of a large clump of BGP updates, again circled in blue on the left over there. Keep your eye on the activity inside the red circle on the right of the plot as time progresses. Five hours after the cut, around 12:40, there were no routes passing through AS 702, UUNET Europe; traffic migrated via Hong Kong and Singapore at this time. On the 29th of January, before the cuts, there were almost 250 routes going via this AS path, right up to the cut on the [unclear] of January, at which point the link stopped carrying any prefixes at all, and it's not until the cable fix on the 8th of February that any prefixes return to this path. Because the path was dropped at 4:30, prior to the FLAG cable cut, the RIS data implies this link was actually using a different cable than that one.

Now, one last one, a quick example of BGP churn. The AS path changes graph for this provider, who I won't name, caught our eye as being quite different. Instead of going down because of the loss of connectivity, the number of distinct AS paths actually went up, and we can see that the average AS path length, in red, increased at the same time. What we found in the RIS data is that these 26 prefixes are announced in batches with the same AS path, but that after the cuts there were multiple announcements per batch, each with different AS paths.

So at 4am on the [unclear] of January, before the outages, there were two primary transits, the nodes in purple there. There is an impending and prolonged, enormous surge of BGP updates ahead of us.

It is 8:23am now, shortly after the second cut, and signs of rerouting are visible. The histogram shows we have entered a period which actually contained 10,000 updates in the space of 90 hours; that is one announcement or withdrawal every 30 seconds or so, I think.

Seven more hours pass and BGPlay shows the majority of peers reaching this prefix using different and longer paths, as was shown in the initial graph. Seven more hours and we see that yet another AS has now taken over transit for most of the peers who had switched before.

Progressing even further up the histogram, you can see that by the following afternoon we have got yet another completely different routing state for this prefix. What this example shows is how BGP activity exploded as a result of the cable cuts. The number of distinct AS paths actually doubled and the average AS path length increased by 20 percent. Because of the constant high rate of change, we can't be sure BGP converged during that period or whether the routes were even ever usable.

Right, that is it, I'm afraid. As I mentioned at the start, the data came from RIS, TTM and DNSMON; these can be found at the Information Services portal online at the address there. And fortunately, this week my colleague Frans, who is milling around somewhere, is manning the IS demo stand just outside this room to the left, so if you want to find out more about any of this, stop and talk to him, or grab me and talk to me. I see Randy hovering by a microphone, which leads me to this slide.

RANDY BUSH: I really want to thank you, especially for combining measurements of the data plane and the control plane, i.e. where packets went versus where BGP announcements went. We have had too many presentations by fast-talking folk who measured BGP data and then used the word traffic where they have no bloody idea where the traffic went, and you have actually shown that at least in one case the traffic didn't go there, pardon me, the traffic went there and the BGP data said no. I am interested in why, in the later examples in the second part, you did not pursue TTM data, or if you did, you haven't said anything interesting about it. Was it boring and the same as the BGP data, or did you just not have time and energy?

MARK DRANSE: I didn't carry out the analysis myself. As I mentioned, it was work carried out by our Science Group; if you want the answer to that question you will need to talk to them.

RANDY BUSH: Ah, so this slide is entirely false, or misleading at least. No, you will accept questions; you didn't say answers, I guess.

CHAIR: Any more questions without guarantees for answers? Somebody wants to answer something and Mark can...

No? If not, thank you, and thank all your colleagues for the interesting work.

(Applause)

CHAIR: The last speaker before the coffee break is Kurtis, with a new proposal that he will introduce here and which will be, I think, finally discussed in the Routing Working Group.

KURTIS LINDQVIST: This is policy proposal 2008-04, submitted to the Routing Working Group about a week ago, something like that, more or less.

So, first of all, this is not at all my idea; it has mine and Randy's name on it, but Randy wrote the text and I submitted it and wrote the slides. So [unclear], if you have any questions...

So, the idea is fairly simple. The current IRR that is used for a lot of route filtering is fairly insecure and weak, and there have been a lot of ideas proposed to use the RPKI to sign some of this data; these proposals have not necessarily advanced very far. Rob Blokzijl came up with the idea that if the RIPE NCC, or any of the IRRs, were to publish a new IRR registry that contained route objects generated from the route origination authorisations in the RPKI, these objects would be more secure and have a stronger trust model.

And these would then be generated out of the global RPKI and cover all the global address space.

The operators can then use the existing tools they have for building filters from the IRRs, and they could use the new registry to build similar filtering capabilities, but this time with a more formally verifiable structure, because the RPKI data is the source for generating the new IRR registry. And operators would hopefully, or tentatively, prefer the stronger, verifiable objects over the weaker ones in the other, existing IRRs.
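
As an illustration only, here is a rough sketch of the kind of transformation the proposal describes: taking validated origin data from the RPKI and emitting RPSL-style route objects for a new registry. The input records and the source name are invented for the example and are not part of the proposal text.

    # Hypothetical validated (prefix, origin AS) pairs taken from the RPKI.
    roas = [
        {"prefix": "193.0.0.0/21", "origin_as": 3333},
        {"prefix": "2001:db8::/32", "origin_as": 64511},
    ]

    def roa_to_route_object(roa):
        obj_type = "route6" if ":" in roa["prefix"] else "route"
        return "\n".join([
            "%s: %s" % (obj_type, roa["prefix"]),
            "origin: AS%d" % roa["origin_as"],
            "source: RPKI-GENERATED",   # invented registry name for illustration
        ])

    for roa in roas:
        print(roa_to_route_object(roa))
        print()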

Instead of relying on an IRR publication point run by the NCC, the RIPE NCC could make an open source tool available that would allow operators to generate this data themselves, skipping any weaknesses in the actual IRR publication by the NCC; they could run this tool themselves. The output would probably be identical to the NCC's, but it allows for a somewhat stronger trust model.

Well, why? The route filters generated based on this data would be more reliable, if operators prefer them over the weaker objects. It would allow the community to use existing tools, rather than new tools and new methods, to verify these prefixes and build prefix filtering capabilities. And this idea is probably a lot simpler than a lot of the other ideas proposed, and therefore might be somewhat easier to implement, with less opposition or a lower pain threshold to getting it working and deployed.

Next steps: as Rob said, this has been submitted to the Routing Working Group. I have to admit that when submitting this it wasn't completely clear to me which working group to submit it to; we decided, together with some discussion with some working group chairs, to take it to the Routing Working Group at least, to first discuss the merits and ideas behind this and to hash out the implementation. Once that is done we can all go and filter. That was a really short and really fast overview of this.

Every time I present, it is always fun to see how much the stenographer has picked up, but anyway...

I don't know if you are going to do discussions now or keep that for the Routing Working Group. The idea is to have the discussion on Friday in the Routing Working Group. Does anyone have any questions right now that you won't ask on Friday?

CHAIR: Or after having received the answer.

KURTIS LINDQVIST: Yes

CHAIR: Between now and Friday you think you have got the wrong answer. OK. Are there any questions? We have time for a couple of questions and since we have time we should not artificially...

KURTIS LINDQVIST: I didn't mean to scare people off.

CHAIR: Are there any questions? No. You have until Friday to come with questions.

KURTIS LINDQVIST: And our text is on the RIPE website, policy proposals.

CHAIR: OK. Thank you Kurtis.

(Applause)

CHAIR: This brings us to the end of the first bunch of presentations. It all went a bit quicker than we had thought so that means you have a bit longer coffee break. See you all back at 4 o'clock in this room for the second part of this afternoon. Thank you.

(Coffee break)

The session resumed as follows:

CHAIR: All right folks. It's almost four o'clock, time to get started again.

We have a fairly full session this afternoon, and we also have a social coming up, so we are on a pretty strict schedule. The first speaker in this session is Tiziana Refice, from the Science Group at the RIPE NCC and Roma Tre University, and she is going to talk about the YouTube prefix hijacking.

TIZIANA REFICE: Okay. Thanks, Hank. So, good afternoon. I am going to present an analysis of an event that occurred a couple of months ago: the hijacking of the YouTube prefix. This work was the result of a collaboration between the Science Group at the RIPE NCC and the BGPlay team at Roma Tre University. Let's start with a very brief introduction from the user perspective: what did the user experience of this problem?

On February 24, 2008, around 6 p.m. UTC, the YouTube website was unreachable, and this lasted for about two hours, when the situation went back to normal. But what really happened behind the scenes? The Science Group produced a video with an animation of the event using BGPlay, a tool that Mark already illustrated to you. So, I am going to play the video; the speaker is someone you will probably recognise. I hope you enjoy it.

The video was then played: Let us look at what the RIPE NCC Routing Information Service saw of the recent unauthorised announcement affecting YouTube. This is BGPlay, a tool developed by Roma Tre University in Italy in close cooperation with the RIPE NCC. The red circle at the bottom right represents Pakistan Telecom's autonomous system, number 17557, and the red circle on the bottom left the YouTube autonomous system, number 36561.

On February 24, at 18:47 UTC, Pakistan Telecom started to originate a /24 covering address space assigned to YouTube. We can see that the route spread rapidly through the part of the Internet observed by the RIS. Within roughly a minute the route is visible at most of the autonomous systems shown in this animation, and traffic starts to flow towards Pakistan.

Eighty minutes later, at 20:07, YouTube starts to announce the same route itself in order to route the traffic back to its own servers.

Another ten minutes later, around 20:17 UTC, routing policy changes occur that move traffic for this route away from Pakistan Telecom's upstream, PCCW, AS 3491. Another 20 minutes later, Pakistan Telecom's autonomous system number is prepended to the routes announced by AS 3491 in order to make them less preferable.

Another eight minutes later, the unauthorised announcements are no longer propagated and both routing and traffic revert to YouTube, at around 21:03 UTC. These animations and many other tools are publicly available at ris.ripe.net.

There are also prototypes of new services, such as the RIS dashboard shown here. Please pay us a visit on these pages; we are very interested to hear your comments.

(Video presentation)

Well, I hope you enjoyed it. So, what happened? I think the animation is quite effective, though it contains a lot of technical details, so I'll go through this event again just to highlight the major steps of what happened.

So, let's start with the normal behaviour. Usually, YouTube announces a /22 on the Internet and all the happy users can easily access its website. On 24 February 2008, at 6 p.m. UTC, Pakistan Telecom started announcing a /24 partially overlapping the YouTube prefix. Due to longest prefix match, most of the users on the Internet selected this route as the route towards YouTube, so most of the traffic was redirected towards Pakistan Telecom. It's worth noting that all the announcements of this hijacked route passed through just one of Pakistan Telecom's upstream providers, PCCW.
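
To make the longest prefix match point concrete, here is a small Python illustration using the prefixes widely reported for this event (YouTube's /22 and the hijacked /24); the destination address is simply one inside the /24.

    import ipaddress

    routes = {
        ipaddress.ip_network("208.65.152.0/22"): "YouTube, AS 36561",
        ipaddress.ip_network("208.65.153.0/24"): "Pakistan Telecom, AS 17557",
    }

    dst = ipaddress.ip_address("208.65.153.238")        # an address inside the hijacked /24
    matches = [net for net in routes if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)  # the longest (most specific) prefix wins
    print(best, "->", routes[best])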

So, what could YouTube do about that? It started playing the same game, announcing a /24, so now there are two routes for the same prefix on the Internet. What happened? The Internet split: some users selected the Pakistan Telecom announcement and some users selected the YouTube announcement. This way YouTube regained about 40 percent of the users, but of course the situation was not fixed yet.

What else? Going in the same direction, announcing something even more specific: /25s. But as we know, /25s are usually not propagated by autonomous systems, so the effect of this was really quite limited as a solution.

The next step was the prepending of the Pakistan Telecom autonomous system number in the announcement. Now the route is less preferable because it's longer than before. A few minutes later, we cannot really say, at least using the RIS data, which of two possible scenarios occurred. The first one is that Pakistan Telecom stopped announcing the hijacked prefix. The second possible scenario, and it seems that everybody agrees with this one, is that PCCW actually filtered this route out.
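
A hedged sketch of why the prepending helps: with all other attributes equal, BGP prefers the shorter AS path, so repeating Pakistan Telecom's AS number in the path announced via PCCW makes it lose against YouTube's own announcement. The intermediate AS in the legitimate path is a made-up placeholder.

    def prefer(path_a, path_b):
        # Simplified tie-break: with equal local preference, the shorter AS path wins.
        return path_a if len(path_a) <= len(path_b) else path_b

    hijacked = [3491, 17557, 17557, 17557]   # PCCW prepending Pakistan Telecom's AS number
    legitimate = [64496, 36561]              # some transit AS (placeholder), then YouTube
    print("preferred path:", prefer(hijacked, legitimate))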

So, again, a few minutes later, the situation is back to normal. Let's also note that a few days later YouTube stopped announcing both the /24 and the /25s.

So, did this event influence the visibility of the YouTube prefixes overall? Not really. If you look at the /22, it is visible in almost the same way as before the event. And as you can see, during the time period of the event until around the end of February, we could also see both the /24 and the /25s, and as I told you before, the propagation, the visibility, of the /25s was quite limited.

So, what lessons can we learn from this event? If I am a customer and my prefix is hijacked, what should I do? How do I react to the problem? You can announce the very same prefix; this mitigates the problem but it doesn't solve it. Plus, remember, after the event is finished you should really stop announcing the new prefixes, let's say the /24s, announced this way. Moreover, announcing anything more specific than a /24 just doesn't help much. On the other hand, collaborating with your upstream provider is much better.

To prevent the problem, unfortunately the current routing system doesn't really provide that much. On the other hand, for ISPs there is something else: how to react to the problem. There are some best practices, like having procedures to help customers, peers and upstream providers, but first of all, you can really prevent it with route filtering.
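
A minimal sketch of the route filtering idea: an upstream only accepts announcements covered by the prefixes registered for that customer. The registered prefix here is a documentation prefix used as a stand-in, not any real registration.

    import ipaddress

    registered_for_customer = [ipaddress.ip_network("203.0.113.0/24")]

    def accept(announcement):
        net = ipaddress.ip_network(announcement)
        return any(net.subnet_of(reg) for reg in registered_for_customer)

    print(accept("203.0.113.0/24"))   # True: the customer's own prefix
    print(accept("208.65.153.0/24"))  # False: someone else's space, so the hijack is dropped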

So, this is kind of the conclusion of my presentation, but there are some further analyses to be presented by Luca, so please ask your questions afterwards.

(Applause)

LUCA CITTADINI: Okay. Good afternoon, thank you for the introduction. I am from Roma Tre University and I will introduce you to a new extension of BGPlay that we partially used to carry out the analysis of the YouTube hijacking attack.

So, most of you are already familiar with the BGPlay user interface and how it works, so I will not go into the details. One thing I want to stress is that the visualisation presented by BGPlay actually helps the user to filter out the noise within the BGP data collected by the Routing Information Service and really understand what's going on in the network.

So, the visualisation acts as a noise filter, but what do you do if you want to analyse a specific BGP update and get more information about it? This is exactly the problem we faced when we were analysing the YouTube hijacking attack, and this is why we developed the extension which we call BGPath. While you are browsing through the routing updates with BGPlay, you can select a specific one and analyse it with BGPath by clicking on a new button. And this is how the update you selected would look.

On the top panel there is some information about the update, just like in the standard BGPlay system. I want you to focus on the central frame. Here, the update is depicted: the old path is drawn using a dashed stroke and the new path is drawn using solid lines. This update was taken as an example because it is one of the updates during the hijacking attack; in particular, the red AS in the centre of the window is the YouTube AS, 36561, and the one in the bottom right corner is Pakistan Telecom. So the aim of our tool is to provide an easy to understand visualisation of the perspective of the selected peer that received this particular update.

So, let's take a closer look by zooming in. One of the main features of this tool is that the ASes are not placed randomly in the window; they are placed according to a hierarchy. This hierarchy captures, at least to some extent, the customer-provider relationships. This way the user can understand the context of the information, which was not so clear in the BGPlay screen that I showed you before. For instance, in this running example, we can see that the AARNet AS in the bottom left corner actually switched from the YouTube route to the route announced by its upstream provider, NTT Communications, which was just forwarding the route coming from Pakistan Telecom. And again, the layout tells us that AARNet is preferring to pass through an upstream provider rather than using the direct peering that it has with the YouTube AS, and this was not completely evident from the BGPlay screenshot.

One of the other features of our tool is the ability to visualise the evolution that the AS path experienced over time, and this is done with a path plot.

In this plot you can select multiple prefixes and plot them on the same chart with different colours. For instance, these two lines represent the hijacked /24 prefix, which is drawn with the red line, and the more specific /25 that YouTube started announcing at a certain point, which is represented by the green line.

On the Y axis you have the different AS paths that the prefix went through, and time is on the X axis, of course. So, if we focus on the left part of the chart, we can see that the attack is ongoing, because the hijacked /24 prefix is seen as announced by AS 17557, which is Pakistan Telecom. With this visualisation we can actually visually appreciate the countermoves that YouTube made to counter the hijack. First, YouTube started announcing the same hijacked /24 prefix itself, and this makes the red line shift from the old path to a new one in which the YouTube AS, AS 36561, is correctly seen as the originator. The second countermove taken by YouTube was to announce a new pair of more specific /25 prefixes, all of a sudden at 20:18 UTC, and since these prefixes did not exist in the tables beforehand, the previous path is represented by a dotted line.

Being able to visualise the routing history of a prefix enables the user to assess the impact over time of the event he is investigating. For instance, while we were developing BGPath we found in the network some short-lived events, like the one I am showing you, some events that recurred in time, signalling periodic behaviour, and even long-lasting prefix instability.

The last thing I can talk to you about is the possibility of tracking the number of prefixes on each inter-domain link over time. This feature is accessed by clicking on the buttons in the top right corner of BGPath. So this plot shows the rank, that is, the number of prefixes carried by an inter-domain link. We are able to compute the rank using two different perspectives: a local one, which represents a single collector peer, and a global one, which is an aggregated perspective that considers all the collector peers at the same time.

By inspecting the evolution of the rank metric over time, the user is able to spot some unusual link usage patterns, like the ones I am showing you: for instance, backup links that are only used when the primary ones fail, link failures and so on. If these plots feel familiar, it's only because Mark showed similar ones before this talk.

So, let's summarise what we talked about.

We developed this new extension to BGPlay, which we called BGPath. The three main features of this tool are: a layered layout that helps the user grasp the context of a specific event or a specific BGP update; the ability to visually see the path evolution that a set of prefixes experienced, which helps the user assess the impact over time of the event he is investigating; and the plots of the rank metric, that is, the number of prefixes routed through an inter-domain link, which help the user understand how that link was used over time.

A prototype implementation of this extension is publicly available at the URL shown on the slide, and we plan to develop it further. We are adding it to a real-time update processing system, and we are planning to develop some other metrics, with particular interest in prefix instability and link usage.

I just want to mention that this work, this project, was carried out as part of a continued cooperation with the RIPE NCC.

So, this concludes my talk. Thank you for your attention, and please ask your questions after Daniel's talk. Thank you.

(Applause)

DANIEL KARRENBERG: My name is Daniel, from the RIPE NCC. I am going to present some work that was mainly done by Rene Wilhelm and Antony Antony, but I actually instigated it and helped doing the work, so I can answer questions about it.

How did this come about? There is actually a programme committee for this meeting, so if you are wondering how the talks get found and the speakers get dragged onto the podium even if they don't want to, and these kinds of things: there is a group of people who actually try to make this programme as worthwhile for you as possible. They consist of the RIPE working group chair people and some longstanding friends, and they are all volunteers. And we have a mailing list, and what happened on that mailing list when I proposed these talks was that [unclear] said, yes, this is a nice talk, but we should have some discussion, because when people hear these talks they might be tempted to just turn all the aggregated announcements into /24s to prevent some hijacking going on.

And he said, "This [unclear] me", and that if we did that the routing table would grow by several orders of magnitude. So I think we might have some of this discussion here, maybe some of it in the Routing Working Group. I am looking for guidance later.

A very short while later, Philip Smith said, well, you know, this hasn't been happening for years; what are you afraid of? Why do you think half of the BGP table is /24s? At which point I sort of half decided to look at whether there are any trends, because this hijacking stuff is getting more and more press and it might be that one could see an increase in /24 announcements and things like that. After a little while, Joe actually said it might be a good idea to do some research here, because we have been repeating that all this /24 stuff is going on, maybe because of traffic engineering and because of this and that and the other, but we don't really know, so we might actually want to do some solid research, even if the results are not immediate or easy. And this prompted this talk. I'm not going to present the end of this research, but I am going to share our first results and a little bit of a quick look at what's happening here.

And by the way, also as information: for the last few meetings, Joe has been the person coordinating all of this, so if you like the programme, be sure to go and talk to him afterwards. It's all volunteer work and positive reactions are always appreciated.

And constructive criticism of course.

So, I said this is work in progress. This is an artist's impression of me working and thinking.

So what did we do? Well, I am going to show you a number of colourful graphs. They are all based on Internet routing data from the RIS. And since we are talking about a routing phenomenon, namely prefix lengths, relative prefix lengths, it's better to talk about routing and not about traffic.

We look at five route collectors of the RIS only, so not the whole RIS. That gives us more than 100 peers, so it gives us quite a good view. And we look at the time from around 2001 to the present. That's not arbitrary, because the RIS only started in late 2000, so it's basically all the RIS data that we are looking at.

And we also decided to disregard local routes, that is, routes that we only see from a few peers. Some RIS peers give us routing tables that include some of their local internal stuff and so on; we disregard that. We only want to look at widely propagated routes in the Internet.

So, this is the first one. As I said, all the X axes run from late 2000 to the present time, and the Y axis here is the number of routes seen with a specific prefix length. You can see that they range from /16 to /24. The /16s are at the bottom, in the brownish colour, and the /24s are what comes out pink on the projector. There is a little blue residue there, which is prefixes that are actually shorter than /16.

And indeed, just from looking at it, Philip's observation is absolutely correct. It's not only correct now, but it's correct over time, in fact over roughly the last seven years: roughly half of the global BGP table is made up of /24s. That's interesting. It was also surprising to me, I have to admit, because I hadn't looked lately at what this distribution is.

If we plot this in a relative way, so it's not the absolute numbers but normalised to one at the top, we can see a little bit more of the relative trends. And we see again that actually Philip's estimate was slightly low, because it's more than half; consistently more than half of the prefixes that are seen are /24s.

And there are some trends here, like the relative number of /16s going down, but there is not really any huge trend. In particular, there is no trend of seeing more /24s. So if everybody was afraid of hijacking and was deaggregating, we would see that here, and we don't. We actually see that the relative number of /24s is decreasing slightly.

The next thing is that we looked at what we call least specific routes. Those are routes with prefixes that have no less specific route in the routing table, so they are not covered by another route; you need these routes to actually get there. What this means is that the /24s that you see here are ones that are not part of a larger aggregate that's being announced. That was surprising to me. So there are a lot of /24s, and if you look at it relatively, we see that slightly less than half of them are not part of a larger aggregate block. That suggests that the traffic engineering technique of multihoming by announcing a short prefix covering a lot of address space plus a number of longer prefixes for traffic engineering or multihoming purposes may occur, but it doesn't explain this large number of /24 routes. So there are actually a lot of /24s out there that are announced on their own, if you wish.
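
For illustration, here is a minimal sketch (not the RIS analysis code) of the two measurements described above: counting announced prefixes by length and finding the least specific ones, i.e. those with no covering prefix also in the table. The sample prefixes are arbitrary.

    import ipaddress
    from collections import Counter

    prefixes = [ipaddress.ip_network(p) for p in (
        "192.0.2.0/24", "198.51.100.0/22", "198.51.100.0/24", "203.0.113.0/24",
    )]

    print(Counter(p.prefixlen for p in prefixes))   # distribution by prefix length

    table = set(prefixes)
    least_specific = [p for p in prefixes
                      if not any(q != p and p.subnet_of(q) for q in table)]
    # 198.51.100.0/24 is covered by the /22; the other three prefixes are least specific.
    print(least_specific)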

If we look at the absolute numbers again, I am just jumping back, the least specific routes are actually about half the number of routes in the routing table, so it's not just a small amount; it's a significant amount.

Okay, what we have done then is a few other quick look-and-see analyses.

This is an analysis again looking only at the /24s, so this is not the whole routing table now but only the /24s; there are currently more than 120,000 of them. And we looked at combining this information with allocation data, so with the data that the RIPE NCC has about under which policies this address space was allocated. And it's not just the RIPE NCC, it's all the RIRs: there is something called stats files, which the RIRs have been publishing, which show whether a prefix was assigned or allocated.

And we assume that assigned means PI, so it's under the provider independent scheme, and allocated means PA, so it's under the provider aggregatable regime. And what we see here is that apparently the provider aggregatable /24s are increasing faster than the provider independent /24s. So the absolute amount of /24s cannot be explained by the idea that people get PI address space and announce it in small chunks. That cannot be a major factor here, we saw that immediately.
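By way of illustration, here is a rough Python sketch of such a join against the pipe-separated delegated-stats format the RIRs publish (registry|cc|type|start|count|date|status); the stats lines and prefixes below are invented, and the real analysis of course runs over full RIS dumps and all RIR files.

# Sketch: classify announced /24s as PI ("assigned") or PA ("allocated")
# by matching them against RIR delegated stats entries.
import ipaddress

stats_lines = [
    "ripencc|NL|ipv4|193.0.0.0|65536|19930901|allocated",  # hypothetical PA block
    "ripencc|CH|ipv4|198.51.100.0|256|20050301|assigned",  # hypothetical PI block
]

def build_blocks(lines):
    blocks = []
    for line in lines:
        registry, cc, afi, start, count, date, status = line.split("|")[:7]
        if afi != "ipv4":
            continue
        first = int(ipaddress.ip_address(start))
        blocks.append((first, first + int(count) - 1, status))
    return blocks

def classify(prefix, blocks):
    addr = int(ipaddress.ip_network(prefix).network_address)
    for first, last, status in blocks:
        if first <= addr <= last:
            return "PI" if status == "assigned" else "PA"
    return "unknown"

blocks = build_blocks(stats_lines)
print(classify("193.0.10.0/24", blocks))    # PA
print(classify("198.51.100.0/24", blocks))  # PI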

Another relatively interesting one is that we looked at the same /24s by year of allocation, so by when the address space was actually given out by the RIRs. And what we see here is that newer space is taken up more quickly. What we see down here is 1983 to 1999, so all of it pre this century. It is interesting to note that even address space that was allocated more than seven or eight years ago is still slowly finding its way into the routing table; it's still increasing, so people find these address blocks and are routing them more and more. So this is not static; you see them being used. The other thing that one can see quite graphically is that, for instance, for the 2000 allocations it took quite a while before they reached this level, whereas if you look up here at the 2006 and 2005 ones, they got into the routing table much more quickly than the slightly older ones.

So it looks like the time from allocation to actual visibility in the routing table is decreasing.

AUDIENCE: Can I make one comment on that slide. It would be really interesting if you broke out that green section into basically pre-CIDR, the CIDR transition period, and then post-CIDR.

It would be interesting but the source data doesn't provide that. That's true. The source data that we are using isn't tagged with that, but we are actually looking

DANIEL KARRENBERG: You could do it by date, because you know that pre-CIDR was anything before '92. We know that, but we don't have the allocation dates in this particular data before 2000, so we are actually trying to reconstruct this.

AUDIENCE: It would be great to see what percentage of total is old style swamp space. Thanks.

DANIEL KARRENBERG: Okay. So, this is all I want to show right now, all I have to show right now. We are looking a little bit further into this, trying to understand whether these routes with the long prefixes, the /24s, are somewhat different from the routes with the aggregates, the shorter prefixes, and maybe see how they are different. But that work isn't finished yet; it is not presentable yet and is still going on. So I'd just like to spend one minute to tell you how this was done and to ask for a few suggestions.

So, behind the scenes of all this is actually not just a new plot but a whole new machinery for analysis of data concerned with IP address space and autonomous systems, so Internet number resources. We have made a database and a machinery that makes it easier than previously to do these kinds of analyses. We have RIS data in there from the start of the RIS to the present, from 5 route collectors, and actually, to give a hint at the complexity of this, to make these plots we evaluated about seven times ten to the twelfth data points. That's billions to the Europeans and trillions to the Americans. So quite a lot of data points needed to be evaluated. And we also threw into this machinery all sorts of other data we could find about Internet number resources: RIR stats files, actually the whole RIPE database with all the history that we have at the moment is in there, and some other stuff.

The main point of this is that it's easy to design, implement and rerun analyses like this. It's not ad hoc work any more, or not so much ad hoc work any more.

Anyway, this is work in progress. I just wanted to share the first results. We will actually publish results shortly when we have a little bit more to show. We have new machinery for further analysis in place. And if people want to join Vince in making good suggestions for further analysis, particularly analysis that would inform us about why the prefix lengths are distributed like they are in the global routing table, why there are so many /24s, that would be interesting. And I am quite emphatically joining Joe in saying we don't want speculation and guesses, we want some solid empirical data to base conclusions on.

Okay. As usual, the credits:

This is the crew who actually built the machinery. There are six of us and you see us here doing one of the design meetings; I think at this point we were in four different places in three different time zones. But the people who actually did most of the analysis and the graphing were these. So I am just the messenger. And that concludes this part of the presentation, and now there is time for questions about our presentation.

So we have a couple of minutes of questions.

MICHAEL DILLON: Michael Dillon, BT: You said that you analysed the routes and there were no covering prefixes for most of those /24s. Did you compare that to the actual RIR allocation size? For instance, did somebody get a /22 from the RIR and then announce four /24s?

DANIEL KARRENBERG: The answer to this is: these are the ones that don't have covering routes, the least specific routes. If you compare those with those, you have at the end of the period about 250,000 total here and about 120,000 total there. This is purely routing; it's taking the routing table, and this graph only shows the ones that don't have less specific ones. If you look at this one, this speaks a little bit more to it, but only for /24s, so it's not exactly answering your question. The green ones are PI, and I would assume that they are probably /24s; the red ones are PA, they are probably bigger ones, but we could look

MICHAEL DILLON: I would suggest you do look at it, because I think people will interpret your statement about not having a covering prefix as saying people are not doing traffic engineering. Whereas you are doing traffic engineering if you get a /22 from the RIR and then only announce four /24s. So the RIR allocation size would be interesting.

DANIEL KARRENBERG: That's a different kind of traffic engineering but that's a good suggestion.

RANDY BUSH: It's not only traffic engineering, it's false security, or semi-false security, at the expense of all of us. What that gentleman said is dead on; you really need to go look at the allocation data to see what is being announced in terms of allocations, because that tells you whether people are fragmenting or not; that's the only way you are going to know.

DANIEL KARRENBERG: That point is well taken. That's one thing we will do. Unfortunately we can only do it for the RIPE NCC allocations, because we don't have the data for the other ones.

RANDY BUSH: Those data are available. I think the registries might be able to cooperate to that extent.

DANIEL KARRENBERG: I would certainly like that. I would certainly like that.

CHAIR: For those of you listening remotely that was Randy Bush, so Marco.

AUDIENCE: Just maybe a small suggestion on the distribution graph: to plot in the minimum allocation sizes, at least for the RIPE region. You would probably see that, since the RIPE NCC started allocating /21s, the number of /21s should increase a bit.

DANIEL KARRENBERG: You can barely see it here. I have another graph which I didn't show for time reasons that actually shows this, and it will be in the publication, yes.

CHAIR: I think that's it. Let's thank the speakers.

(Applause)

The next will be Anja Feldmann.

ANJA FELDMANN: I want to report on some joint work that I have been doing together with Vinay Aggarwal and [unclear], two of my Ph.D. students, and [unclear] at Munich. What is the context of all of this? The context of course is that peer-to-peer traffic, at least according to some statistics, is quite substantial. And I am actually curious to hear what your experience is in terms of how much P2P traffic you have in your network.

What people say, when you look at the data from about '93 to 2006, is that initially it was all email and FTP traffic. Then the web came along in '95, and then at some later time, around 2000, P2P began to show up, and it is now taking up about 60 percent according to this statistic here.

Some other people from Germany claim that P2P traffic now accounts for about 74 percent, and there are other statistics that are equally frightening. Of course some other people claim no, P2P traffic is still in the order of, let's say, 20 to 30 percent. But whatever it is, it is a substantial amount of traffic, and wouldn't it be nice to have some kind of control over this kind of traffic.

Now, why? Let's take a look at it from an ISP perspective. After all we are here at RIPE.

What is good about it? Well, users love P2P, they want it, and of course if you are developing a new application, P2P is easy because you don't need any kind of support from the network. You can actually do it all yourself, and a lot of people have built improvements on the application side as P2P just because they can, whereas actually getting innovation into the network is quite hard. So once you have more P2P applications it forces a demand on everybody. Everybody has been jumping on the P2P bandwagon. Nowadays, whatever you do, everything wants to be P2P: peer-to-peer TV, P2P this, P2P that.

Now of course peer-to-peer also comes with a price. Peer-to-peer systems form an overlay at the application layer, which means they are in many cases doing routing, and we have a Routing Working Group for a reason; routing is not a simple task. And of course it all ignores what we actually do know about the network, its topology; it actually redoes everything. So you get inefficient paths, much longer routes, and everything is screwed up. And of course when you want to do traffic engineering and you have an overlay network, whatever you do on the traffic engineering side may be undone by the P2P overlay, because you have two control loops that are competing against each other, so that's actually fairly nasty.

So, what can one do about that? You are in a dilemma. You cannot just say, okay, let's forbid peer-to-peer traffic. Some have tried to cut it or to block certain connections, but they have been getting into a little bit of trouble there. I don't want to go down the road of net neutrality, but it's somewhat related to some of these aspects. So you cannot quite forbid it, you have to live with it, but wouldn't it be nice to actually take advantage of some of these aspects.

So how to go about this:

Well, what we have is the underlay, which has multiple ASes with a certain topology. Then you have the overlay, which is setting up its own kind of topology. But wouldn't it be nice if this overlay topology could take the underlay topology into consideration and leave some of the traffic in the network instead of crossing all the network boundaries.

Well, right now we have random or round-trip-time based peer selection, and these basically ignore all the knowledge that one could in principle have about the underlay. Why is that the case? Because this information is not available. Try to look up what the topology of a certain AS is. Even ask somebody here and you probably will not get the correct answer.

So, what is our idea here? Well, the ISP actually does know its network. It knows where the nodes are, it knows the bandwidth, it knows the geographical location, it knows its service classes, it knows its routing policy and the distances to its peers. So it has a lot of information that the peer-to-peer network otherwise has to infer. So why not give out some hint of this information, not in detail, so that the peer-to-peer network doesn't actually have to infer it.

So what is our idea? Our idea is that any ISP should offer an oracle server that actually has this information and that can then help the peer-to-peer network to build neighbourhoods, or to decide, if it has multiple peers from which it can download, which would be the right node to actually download the information from.

Now, what is the benefit of it? Is there a benefit? Well, I want to convince you in the rest of the talk that actually both the peer-to-peer network and the ISP can benefit from it. But before going down that road, I just want to point out that this is related in some sense to the proposal made by the P4P group, which is also about having a common interface between the application and the network so that they can communicate and you can have better performance for both systems. Their example is a modified iTracker for BitTorrent, whereas we have concentrated on just the plain peer-to-peer protocol.

Now, let's go into what we are really proposing. What is this oracle?

Well, the oracle, to the peer-to-peer user, is nothing else than a server that they can query with a set of possible peers, a set of possible IP addresses from which I could connect or download. Now this oracle has a map; it actually knows this information, and so it can make a decision: is it better to go to this node here, or to C, D or E, and it does not give out the exact information but just ranks these various IP addresses.

So what does this mean? The insight is that the ISP actually does know its network, and it can offer a service here, a network-based enabler, just a service like a DNS server, to which the peer-to-peer client can hand the IP addresses it could connect to and which then ranks them. By just ranking these IP addresses you have multiple possible ways of doing it: you can rank according to whether an address is within your own AS, or whether it has better bandwidth. So you can do traffic engineering as well as keep traffic within the network.

So, how does this work? We have our AS information here. We have a new node that is joining this peer-to-peer network, and in the first step it may have a set of possible peers, like peer 1, peer 2, 3, 4, that are connected at different locations. It can then ask the oracle of its network: hey, which one would you prefer me to connect to? It gets back an ordered list, and then this peer-to-peer node, as long as it likes the answers that come back, can pick peer 1, and peer 1 is actually nicely local and close to it, so it will probably connect to this one here.
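To make the ranking step concrete, here is a minimal Python sketch of one possible preference function, assuming the ISP simply prefers addresses inside its own AS and then a higher access bandwidth class; the ASN, prefixes and bandwidth classes are invented for illustration, and this is not the authors' implementation.

# Sketch of the oracle's ranking step: prefer peers inside the ISP's own AS,
# then peers with a higher (assumed known) access bandwidth class.
import ipaddress

MY_ASN = 64500  # hypothetical ISP ASN

PREFIX_INFO = {  # hypothetical prefix -> (origin ASN, bandwidth class)
    ipaddress.ip_network("192.0.2.0/24"):    (64500, 3),  # own customers, fast
    ipaddress.ip_network("198.51.100.0/24"): (64500, 1),  # own customers, slow
    ipaddress.ip_network("203.0.113.0/24"):  (64501, 3),  # external AS
}

def lookup(ip):
    addr = ipaddress.ip_address(ip)
    for net, info in PREFIX_INFO.items():
        if addr in net:
            return info
    return (None, 0)

def rank(candidates):
    def key(ip):
        asn, bw = lookup(ip)
        return (0 if asn == MY_ASN else 1, -bw)  # own AS first, then bandwidth
    return sorted(candidates, key=key)

print(rank(["203.0.113.7", "198.51.100.9", "192.0.2.5"]))
# -> the peer in the ISP's own AS with the best access class comes first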

Now, the result of this is that you, as a service provider, actually get back some control over the kind of topology that the peer-to-peer network will build.

So, instead of actually having this complicated structure here, you might have a structure that actually much more resembles the underlying network over here. And we will see evaluations of this concept in a moment.

So the idea is: That because you have this oracle, you can actually go ahead and localise the topology and also your traffic.

So, the idea is to have a very simple service, namely taking a list of IP addresses, ranking them according to your preferences and relaying that back. This is very simple; it can be run as a web server or as a UDP service, and the implementation shouldn't be difficult.
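As a toy illustration of the UDP variant, a request could simply carry a comma-separated list of candidate IPs and the reply the same list reordered; this wire format is an assumption made for the sketch, not the protocol described in the talk, and rank() stands for a preference function like the one sketched above.

# Minimal sketch of the oracle as a one-shot UDP request/response service.
import socket

def serve_once(rank, host="127.0.0.1", port=9999):
    # receive one query (comma-separated IPs) and answer with the ranked list
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((host, port))
    data, client = srv.recvfrom(4096)
    candidates = data.decode().split(",")
    srv.sendto(",".join(rank(candidates)).encode(), client)
    srv.close()

def query_oracle(candidates, host="127.0.0.1", port=9999):
    # client side: send the candidate list, get it back best-first
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(",".join(candidates).encode(), (host, port))
    reply, _ = sock.recvfrom(4096)
    sock.close()
    return reply.decode().split(",")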

The benefit for the peer-to-peer system is that it doesn't have to do random measurements any more; it doesn't have to worry about whether it should download from here or there, and in principle it should get better performance. And for the ISP, well, if you can localise the traffic, you don't have to pay the peering cost, you don't have to pay the upstream cost; that should be a benefit.

Of course some people may say: but hang on, now I get to see all of these IP addresses, those are the actual guys that are doing peer-to-peer. But hang on, there are lots of legal peer-to-peer applications, so just because somebody contacts the oracle doesn't mean they are doing something illegal, and you are not offering any content; it's just a redirection service.

So, how do you evaluate such a system? Well, you want to kind of study what is the impact of the topology, the impact on congestion, kind of find out, is the peer-to-peer user actually benefiting from this. Because if the peer-to-peer user isn't benefiting, then why should the peer-to-peer user actually use such a system? Just to make it easier for the ISP, probably not. And users have their own dynamic.

What we did was a sensitivity study, as well as a simulator, as well as some additional studies, using different ISP and peer-to-peer topologies to check that this is not just a random effect, looking at lots of churn and content availability, figuring out how the query patterns change, asking whether this only works when everything is beneficial or also in the worst case, and evaluating the effects on the end user.

Now, how about it?

Well, let's first take a look at whether the localisation of the topology actually works. So let's just look at the topology after you use this kind of oracle service to keep your neighbourhoods within the AS. Now, this is a fairly large simulation, and here you have the graph that is the outcome when you do random picking of your peers. You see one big block, no structure.

Here, you see what happens if you actually try to keep the connections within the AS, and you start to see structure. You start to see the various different ASes here. So it's actually able to keep most of the connections within the AS and only a few go outside. Now, you wonder: didn't the topology become completely disconnected? And no, it doesn't, because you usually still have a few edges that go remotely, so your connectivity is maintained. So nothing to worry about here.

Now, whenever you want to actually do a evaluation on the end user performance, you actually want to look at simulations because otherwise it's hard to predict what the end user performance would have been.

For that purpose we actually use SSFNet, because it has more built in, for instance BGP with the whole set of different rules, and the possibility of doing a full simulation.

Now, the drawback of it is that you cannot do too large simulations, so we couldn't work with such a topology here; that one was done in a different simulation setup. So here we are limited to about 700 peers and a restricted size in terms of the topology. But we can do real performance studies with all effects. The peer-to-peer system that we use is a Gnutella based one, so it does content search via [unclear] and content exchange via HTTP.

Now, does this mean that the results are limited to Gnutella? No, because a lot of other systems work in a similar way, and so the insights that we are gaining here should be transferable to those systems as well.

With regard to the topologies, we varied them: we modelled one according to Germany, one according to the setup in the US, and some which are taken by basically going to a world-like topology where you have a tiered structure, a set of tier-1 ASes and a set of tier-2 and tier-3 ASes that are interconnected. All of these consist of between 12 and 316 ASes and about 700 peers that are distributed among them. Now, how do you do this distribution? By looking at population density; for example, you know how many people are living in New York versus Minnesota, so you can scale those accordingly.

Also, with regard to the world topologies, we did a sensitivity study and varied the number of peers that we had within each one of them. Now, once you do that, you have a set of different topologies, but we still need a set of different user patterns.

Now, what are the things that are actually influencing the performance? Well, there is the question of churn: how long are these users going to be online, when are they going offline? That determines how your graph is going to look, how it develops, and whether you can actually find the content. And so we use various different distributions, some which are heavy tailed and close to the observed behaviour, and some other ones just to look at worst case behaviour, like uniform and Poisson. And it's a similar thing for the content in terms of availability: do you have many free riders, or are most peers sharing the content as you would hope, which is of course not the case. So we do optimistic and pessimistic scenarios.

And once you do that, you can find out and come down to the bottom line, namely how much of your content is going to stay within the AS and how much is going outside. And we have these various different graphs here. On the X axis we have the different scenarios, like the Germany scenario, the US scenario, the world 1, world 2, world 3 scenarios, each time for the most realistic case of churn distribution and content distribution.

Now, the unbiased case is the one without the oracle, and you have something like between 5 and about 25 to 30 percent of your traffic staying within the AS. Everything else is going outside.

Now, what about with the oracle? Well, instead of being down here at about 30 percent, you are close to 80 percent. Instead of being at about 21 percent here, you go up to almost 60. Similar gaps for all the other topologies. So the good news is, the content now stays within the ISP's network; you can control where it is going to go, and it's not going outside any more. The effects are nice because the content is available at different places, you can actually download it from different places, so why not tell the user to download it from inside your network.

This is also consistent with some results by Telefonica, who tried to use a similar kind of setup for a BBC news feed. Now of course the question is: does this all depend only on these specific distributions, or are these results consistent for other kinds of distributions? So we changed the file size distribution, and we also changed the session lengths between uniform, Pareto, all these various distributions. This was the unbiased case for one of the topologies, and these are the results for the one with the oracle. These were the results that we were just looking at, and this is what is happening for the other distributions. So the results are just getting better, even nicer than that. So the content stays within your network.

What about the user experience? Well, the bottom line is, the mean download time is reduced by about 1 to 3 seconds. Now, how do you read this plot? This is basically a box plot where, for every one of these topologies, you have your maximum value, your minimum value, the mean value here, and the two quartiles. Now, if you do the comparison between the one without the oracle and the one with the oracle, you see a reduction of the mean download time from about 7 to about 5 seconds. So that's a two second reduction, which is actually substantial. And a similar gap is visible for all of these as well, and it's not just true for the median, it's also true for the quantiles, so that's the good news.

How about the different kinds of setups? Well, if you change your distributions, then, yes, this was the one without the oracle; with the oracle the results are again significantly lower, and now you see the difference much better of course, because this scale only goes from 8 to 5, so in some sense I have magnified this part over here, the mean values.

But the take-away here is: the users benefit as well as the ISP, which also benefits.

Now, what we are claiming here is that this kind of oracle service is actually something that is simple and easy to implement, and the evaluation shows that the overlay graph stays nice: the maximum distance is almost the same and the topology structure remains connected. But you actually do get reduced AS distance in terms of where your traffic goes; it actually stays within the AS, and you can even bias it more towards higher speed links if you want to.

Also, the traffic congestion actually reduces. You can show, with a certain theoretical analysis, that with this kind of selection mechanism you are getting close to the optimal distribution of the traffic within the network that you could hope for. So the congestion analysis is also very beneficial here.

So, the benefits of such a concept are that you can have a very simple service, namely a service that just needs to rank IP addresses according to a preferred ordering, and with that one can, in principle, regain control over how peer-to-peer traffic flows through one's network. And the benefit for the users is that they also get improved performance. So that's actually quite nice.

Now, what is upcoming? We are working on an open source implementation of this kind of oracle server, so that you can give it topology information and, if you want, a utilisation feed, and it will then do the ranking of the IP addresses. The idea is to have a UDP based system, just like the DNS is, so that the overhead is minimal, and we are also planning on having software patches available for Gnutella, BitTorrent, [unclear] and some peer-to-peer TV which is based on one of the other software systems here.

Now, this should be available hopefully by the end of the year and I would be very, very happy to talk with any one of you about how to actually deploy such a service within anybody's network.

Now, the website where you can find all of this information is this one down here. There are also first steps within the IETF with regard to standardising some of these peer-to-peer application interactions with the ISPs, and there will be an IETF workshop on May 28th on the topic of peer-to-peer infrastructure; we are planning to be present there as well.

Now, of course, once you have such a service, one can even think of taking this one step further. One could have the various different ISPs collaborating, and once you do that, you can actually do some of the things that researchers have been dreaming of in the past, namely something like a global coordinate system. But that is for later; let's first control this peer-to-peer traffic. And with that I want to thank you for your attention, and I am open for questions. Thank you.

(Applause)

Thanks. Questions?

AUDIENCE: A couple of observations. One of the things you mentioned is that the data isn't available; we have seen various examples today that it might take some work, but there is a lot of topology information available.

ANJA FELDMANN: I didn't want to say there is no topology information available. But for the peer-to-peer systems as they are today, it's very, very difficult to infer the live topology at any given point so that they can make an optimal decision, optimal from their perspective. And yes

AUDIENCE: I understand, but going further, the one thing that's currently mostly missing from the publicly available information is the one thing that you are interested in, the cost factor. Now, you already said people are not very willing to share that out of competition concerns, but the other thing, putting on my evil hat: if I start publishing cost information together with topology information, and I am running a [unclear], I know exactly where it hurts. Either somebody is paying too much, or somebody loses because their cost factor goes up. So I know specifically where to target.

ANJA FELDMANN: I didn't say publish this information. You just need to run this oracle server within your own network, which has this topology information available.

AUDIENCE: Yeah, and I may also have a little somebody in your network, and a lot of end hosts, which can access that information.

ANJA FELDMANN: But this one here can do any kind of ranking at any point. You can randomise it; you just need to bias a certain number of the [unclear] according to where you want the traffic to be, and that's where you gain your benefit. And yes, there is always a tradeoff between revealing some information to steer the traffic and revealing no information and not being able to steer the traffic.

AUDIENCE: One comment regarding your, well, legal issues: I don't think it would be the first system that's not actually providing content and is [unclear] being taken down due to copyright violations.

AUDIENCE: I am not sure that I followed all the details of your evaluation, but I was wondering if you could talk a little bit about how this would apply to BitTorrent, and whether you looked at the distribution of users versus the distribution of the bandwidth those users have, and what would happen in your system when you have large files and people need lots of pieces of those large files and most of the available bandwidth is provided by, you know, a few users in Sweden and everyone in the US is downloading from them.

ANJA FELDMANN: Well, in principle, whenever you have a system where the content is only available at a few places, you have pretty much no way of steering the traffic away from those locations. But as most of the content luckily is heavy tailed, so you have a few sets of information that are extremely popular and make up almost all of your demand, it is usually available in quite a number of different locations. Now, with regard to whether it is possible to steer the content to the various different locations with BitTorrent, the Telefonica results say yes, you can do it, and also the P4P group has had a modified iTracker which basically takes into account where your IP address is located when giving you possible peers to connect to; they are taking the same kind of information and the same principle of cooperation and gaining a similar kind of performance boost with regard to keeping the content inside your network. So we haven't done it with our system, but I think in principle the same thing applies, because the tracker has many more IP addresses that are possible. You can rank those; that can be done by the tracker or by the client itself, and just the way you are going to do it needs to be handled, but it can be done.

PAUL FRANCIS: Paul Francis from Cornell. A couple of things. First off, I was wondering what would happen if you have a Gnutella system and it makes a query, and say that query returns ten addresses, and then it learns another one; does it now query for 11 addresses, or?

ANJA FELDMANN: Well, I didn't say anything about how exactly you are going to do your peer-to-peer system modification; it really depends on what peer-to-peer system you have. In principle, the more IP addresses you give to the oracle, the better the choices that can be made. So your benefit will be larger the more choice it has. And I didn't say you need to limit it.

AUDIENCE: You know, when I read the P4P thing, I thought it was incredibly complex, and I wondered why they don't just have something similar and give a pair of addresses and the thing replies and says good, bad or middle.

ANJA FELDMANN: That's basically what we are doing here.

AUDIENCE: Not even rank them. Just give an A, B or C answer and...

ANJA FELDMANN: You know, any specific oracle implementation can do it its own way. You can just group them into best, medium, bad, and then leave it up to the client.

AUDIENCE: Sure, but that's still a different interface from what you described.

ANJA FELDMANN: Easy enough to do.

AUDIENCE: I am a little distressed that P4P is doing its thing and you describe something which is a parallel activity.

ANJA FELDMANN: The idea of the workshop, the one that I just mentioned, is to find out what the various components of these different proposals are, to unify them and to move ahead in that regard.

CHAIR: I am going to close the line. The three people at the mikes can ask their questions.

AUDIENCE: My question is: typical peer-to-peer clients actually also evaluate round trip time measurements, and these tend to follow network behaviour very, very closely in real time. How would you convince me, as an implementer of one of these things, to take your data over the RTT measurements, and how frequently would you say I need to go back to you?

ANJA FELDMANN: So, one thing is, an RTT measurement doesn't say anything about the throughput. This system can actually tell you something about the available bandwidth, and it also knows what kind of access bandwidth the peer on the other end will have, because you know what kind of DSL it is, or whether it's a university, or whatever it is.

AUDIENCE: It's not that convincing because I know the throughput immediately when I try it and the

ANJA FELDMANN: But you first have to try it.

AUDIENCE: You have to first try it but that doesn't take all that much and it doesn't generate all that much traffic actually if I do it right.

And the other thing is that the experience shows that actually the RTT is a reasonable predictor for bandwidth in many situations.

ANJA FELDMANN: The other benefit depends on what kind of peer-to-peer system you are designing. Of course a lot of service providers are now thinking about running their own services as peer-to-peer services, and there you can actually build in the [unclear] to this kind of system upfront. Now, there will always be some peer-to-peer systems that will not use this system, and that's okay.

AUDIENCE: I am just wondering

ANJA FELDMANN: Basically, your benefit is that you don't have to do all the implementation of the round trip time measurements and all of that, so this thing here is about being a good citizen, because otherwise there might be some people that are not going to allow you to continue on like it is. And I think there are some benefits to it; a download time reduction in the order of 30 percent is not so bad. But, yes, these weren't systems that were actually taking RTT into account, I know that.

AUDIENCE: That was going to be my next question.

ANJA FELDMANN: I know where the limits are.

AUDIENCE: So, you have of course been looking at the P4P code and the various interfaces. I didn't see you go into any details on the actual interface to this oracle, so my questions are two:

The first one is: currently, the actual data that you are collecting is something that other people are collecting as well, because this data on the topology, the routing information, is very, very valuable for the peer-to-peer network, valuable enough for people to put up probes themselves, and also some of these peer-to-peer networks are actually selling the data they are collecting on the market, so access to these servers is actually pretty interesting. So the first question is whether you have thought about the actual business models around this data that you are collecting. And the second one is regarding the interface itself, which currently, as far as I have seen, is a SOAP-based protocol used to access these systems which you call oracles; is that something that you presume the IETF can pick up and create the actual specification of the interface to these oracle systems?

ANJA FELDMANN: I think it would make a lot of sense to actually standardise the protocol with which you can access such servers. Now, I believe that such a service doesn't have to be run as SOAP or something like that. It could be a very, very simple UDP based protocol which just has a list of possible IP addresses in it; that's one way

AUDIENCE: But the problem is the security, because the data that you have in that system is highly valuable.

ANJA FELDMANN: Hang on. Are we talking about how to access this service or the database that this oracle server needs in order to actually satisfy these queries?

AUDIENCE: It ends up being both, because if the data in the database is valuable, then you need to secure the connection to the database as well.

ANJA FELDMANN: Well, this server here will have to have some kind of information about this topology. Now, yes, you will have to make sure that it will not release this topology information, but that can be handled; that's a typical server problem. The protocol with which to access it is just: rank these peers, these IP addresses are good and these IP addresses are bad. That is a very simple request-response kind of interface that doesn't need state and won't reveal the topology information itself.

AUDIENCE: Last question: what would be the timing on policy changes, and is there any push system towards the clients so that long-standing sessions can actually be rerouted based on new topology information?

ANJA FELDMANN: Our idea is not to reroute existing sessions, but whenever you want to download a new file, whenever you want to pick a new neighbour, you query the system again, and I believe that just due to the amount of churn that is inherent to peer-to-peer systems, the existing peer-to-peer topology will follow this kind of setup very nicely. Now, you can of course add dynamics to it, but we are not yet at that stage; this may be something for future work.

AUDIENCE: Is there at least some form of time delay or whatever, so once I query, do I build up a lifetime database of where my peers are, or should I discard the information?

ANJA FELDMANN: We will have to see how it all works. But right now we imagine that you could cache this information for a short time period. Of course you don't want to have too much in it, you don't want to overload the system, but some of these things I don't know yet. It will also depend on how many users are going to actually use such a system. Sorry, no answer on that one yet.

CHAIR: Okay. Thank you.

(Applause)

CHAIR: The last talk of the afternoon is Künzler.

FREDDIE KÜNZLER: Everybody who was in Florida at the peering forum could now move to the bar already, because we give basically the same presentation again with some updates.

My name is Freddie Künzler and this is Thomas Billeter, VP Business Development. Myself, I am a network architect at the peer-to-peer IP television company, and Thomas will give a brief introduction first; I come to the technical stuff later.

THOMAS BILLETER: Thank you Freddie, and thank you for the chance to be here tonight. Freddie and I share this speech, I am taking care more of the business side of the presentation and Freddie will take care of the technical details.

So, anyway, I think that first of all I should say thanks to you all, because I guess you're the guys that make this wonderful best effort ecosystem, the Internet, work. You make it work in a way that allows me as a TV consumer to see TV on my PC, and most of all it also allows our employer to build a new business.

The reason we are showing this picture is basically to make a statement: Zattoo is a combination of technology know-how, brought in by a professor [unclear] of the University of Michigan, and a business person; the two have been friends for 20 years and had the chance to reunite after he and his Ph.D. team had worked on the technology for more or less seven years. The company was started in 2005 by these two people and we are in the meantime a team of 50, spread between the US and Zurich in Switzerland.

We have spent so far in excess of 15 million dollars to build our business, and most of this money has gone, not surprisingly, to the content providers.

So, what does Zattoo do? We do nothing other than bring a plain vanilla TV experience to the PC. We call ourselves a virtual cable company: we are using the Internet as the cable system, we are using your laptop as the set-top box, and we are borrowing the screen of your PC to provide, again, a very simple TV experience.

Our screen does, at the end of the day, look very much like a TV screen. The software is peer-to-peer based; you can download it from our website and there you go, you can watch TV.

Currently, this is a free offering and it's like the main line-up of a cable TV service. We will add a subscription or pay offering later this year.

And a free offering obviously is mainly funded by advertising. I will spend some time on our advertising product as well.

We are often asked how we compare to other P2P players and other media players that are active in the Internet video industry, and you can probably look at this world in two dimensions. One is basically the length of the offering: are we talking about clips and short sequences, or a full 24/7 programme? And the other dimension is: is it a live offering, or is it rather an archived, time-shifted offering? And if you look at the two dimensions, you basically have a category winner, a gorilla in the market, for each of those.

YouTube obviously is more in the short-sequence and archived-content kind of game; MLB, which is Major League Baseball, is really streaming live sport content but only for the duration of a game. Joost is a peer-to-peer company that basically does its own programming; they are acquiring content rights, shows and events and different sorts of things and packaging them together into a new programme. And at the end of it, what Zattoo does is retransmit classical TV channels; we are offering a 24/7 and live TV experience.

We started our business in Europe mainly due to the content regulation that allows us to do our business as a cable re-transmitter. We are currently active in some seven or eight countries: Belgium, Denmark, France, Germany, Norway, Spain, Switzerland, the UK, and hopefully we will open a number of additional countries in the next couple of weeks, so for the Euro 2008 football championships we will have most of the qualified countries online.

North America is planned for the second part of the year and also in Asia we are doing some scouting.

So, regarding our users: one of the things that our founders were told in the very beginning, when they tried to build our business, was that Internet users are different, they will never watch TV on the Internet. The Internet is an on-demand economy, so why would you want to watch live TV on the Internet? And we are really finding out that our users love our product; they are using live TV on their PC very much the way they are watching TV on the main screen.

But with a big difference, and this big difference is really what is relevant to the advertising community. We are attracting a very young population of TV viewers, the ones that the classical TV stations were losing because these guys were no longer really using a TV set to consume video content. In the bracket 28 to 34 we have 60 percent of our viewer base. The classical TV channels would probably have the same kind of proportion, but more in the 46 to 65 bracket.

So, why are the users watching, to start with? The interesting thing is that a large part of the users are really multitaskers: they are watching TV, as I probably would, while they do email, while they surf the Internet. So it's kind of a great opportunity, also for guys like me, to have a look at a football match or a political debate or the news while doing some other work as well. So this is the 29 percent.

This is, by the way, a survey done with 10 thousand users, so it has to be at least a little bit relevant. 22 percent are the guys that disappeared from the classical TV space because they don't own a TV set, while the last two categories are more the guys that use it either as a second screen, when the first one is used by somebody else in the family, or because they are using the laptop in a different room, in a different place from the one the TV set is in.

By the way, we have, today, roughly 2 million subscribers, or at least registered customers in Europe, and we hope to basically get to 5 million by the end of the year.

All right, this becomes a bit more technical. We have to distinguish countries because every country we mentioned gets a different line-up of stations they can see, and we do this with GeoIP information. Actually, the IP address we are using here with the WLAN is meant to be in The Netherlands, for obvious reasons because RIPE is there, but we have tried to move it to Germany now, virtually of course, so you all can register on zattoo.com. I hope it works; otherwise try again tonight in the hotel.

So we use MaxMind.com as our GeoIP information source.
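Roughly, the line-up selection might look like the following sketch; the prefix-to-country table and channel line-ups are invented here, whereas a real deployment would of course query a GeoIP database such as MaxMind's rather than a hard-coded table.

# Sketch: pick the channel line-up based on the country an IP geolocates to.
import ipaddress

GEO_TABLE = {  # invented prefix -> country mapping
    ipaddress.ip_network("192.0.2.0/24"):    "CH",
    ipaddress.ip_network("198.51.100.0/24"): "DE",
}

LINEUPS = {  # invented line-ups per licensed country
    "CH": ["channel-a", "channel-b"],
    "DE": ["channel-c", "channel-d"],
}

def lineup_for(ip):
    addr = ipaddress.ip_address(ip)
    for net, country in GEO_TABLE.items():
        if addr in net:
            return LINEUPS.get(country, [])
    return []  # not a Zattoo-enabled country

print(lineup_for("198.51.100.42"))  # the German line-up in this toy example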

Just one very important notion: TV rights are typically cleared on a country by country basis, and as a user, regardless of being Swiss or German, you will be given the TV line-up that we have licensed for the specific country you are in. So, as a registered Swiss user, when I come to Germany I would just be offered the German line-up of our service.

So, Zattoo works with the three major operating systems: Mac, Linux, Windows. We use MPEG-4: H.264 for video and AAC for audio. And following the previous presentation by [unclear]: yes, our peer-to-peer algorithm already has some intelligence such as ASN awareness; we try to get the neighbouring peers within the same ASN.
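A rough sketch of what such ASN-aware neighbour selection can look like: map each candidate peer to an origin AS via longest-prefix match over a prefix-to-AS table (which could be built from public BGP data) and put peers in the requester's own AS first. The table and AS numbers below are invented, and this is not Zattoo's actual algorithm.

# Sketch: prefer candidate peers whose origin AS matches the requester's AS.
import ipaddress

PREFIX_TO_AS = {  # invented prefix -> origin AS table
    ipaddress.ip_network("192.0.2.0/24"):    64500,
    ipaddress.ip_network("198.51.0.0/16"):   64501,
    ipaddress.ip_network("198.51.100.0/24"): 64500,  # more specific wins
}

def origin_as(ip):
    addr = ipaddress.ip_address(ip)
    matches = [(net.prefixlen, asn) for net, asn in PREFIX_TO_AS.items() if addr in net]
    return max(matches)[1] if matches else None  # longest prefix match

def prefer_same_as(my_ip, candidates):
    mine = origin_as(my_ip)
    return sorted(candidates, key=lambda ip: origin_as(ip) != mine)

print(prefer_same_as("192.0.2.10", ["198.51.7.1", "198.51.100.2"]))
# -> the same-AS peer (198.51.100.2) is listed first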

In a perfect world, peer-to-peer would work perfectly, of course. We would see a 100 percent P2P ratio and the seeding server's 500 kilobit stream would be required only once; all clients would organise themselves as P2P. But the Internet is not perfect. We see a lot of oversubscription, overbooking; everybody does it, no one admits it. Then we also have the common residential broadband offerings, which have low upstream bandwidth, typically less than 500 kilobits, and then we of course also see latency, jitter and packet loss.

You will know that, so we have to subsidise the missing bandwidth with repeater servers.

We have already deployed a bunch of servers, some 500 machines operational, typically installed next to the big Internet exchanges in Amsterdam, Frankfurt and so on. We can install a lot more machines in the next few weeks. And we currently see some 15 gigabits of traffic and expect about 50 gigabits in a month, during the soccer matches of Euro 2008.

Here is a little side note: I would like to point you to a white paper from Bill Norton of Equinix, it's about two years old, "The next wave of massive disruption to the US peering ecosystem", if you are interested in the commercial aspects of CDNs and peer-to-peer.

So, why is the infrastructure dispersed? I mean, as mentioned, we have a lot of bandwidth and it's growing fast, and usually the commercially interesting points are also the places where the big Internet exchanges are. Peering of course is a factor, as we all know. We do love peering, so if we don't peer yet, I mean, ask me. And we also generally do hot potato routing, but we are currently in the process of building a 10 gig ring across Europe to reach the peerings more efficiently.

A factor we also see is that we have kind of a bad traffic pattern. We have a four hour peak with a factor of 1:4, 1:5 compared to the average, and this applies to one time zone. I mean, Spain is actually in the same time zone but they have a bit of a different TV behaviour; they watch TV in the afternoon and then late at night. Switzerland has its peak at 9:30, and I think this applies also to Germany and most other European countries.

The traffic pattern during the week: we see the 25th, which was actually a Sunday, no, the 24th and 25th, Sunday and Monday, and then the usage goes down over the week until the next Sunday. So this is the pattern of the TV consumption.

As we are real time and have high peaks, we need to overprovision a lot, because when we don't reach the 5 hundred kilobits at the end user, I mean, then we lose the user because then the experience is gone, so we need decent networks.

Then we also have high peak loads during major sports events. Thomas mentioned already Euro 2008; Formula 1, tennis, it depends of course on the country: if Roger Federer is about to win, then the Swiss will watch; if Rafael Nadal is better, then we have more Spanish users.

Zattoo is partly carried over Init7. This is also a commercial factor. Init7 is a commercial provider in Switzerland that started back in 2000, end of 1999. Init7 has some international footprint today, with about 1,200 peering sessions with some 750 partners. One cannot build this overnight, and newcomers to an exchange usually have a hard time until they get a decent number of peerings.

We try to be a good citizen. We use local peerings; we try to bring the traffic to wherever you need it, if possible. On the client side, we don't use all the resources of your computer, CPU, memory and so on, and we also don't use all your upstream bandwidth, because Zattoo is meant to be an application you run while you are surfing the web, while you are chatting, while you are emailing, as Thomas mentioned already. And Zattoo doesn't use any resources once the programme is shut down.

This becomes a big issue; we heard this in the presentation a minute ago, ISPs suffer from a lot of traffic. I mean, this is nothing new to you, so we try to make the load as easy as possible.

So, I hand over to Thomas now, again, for the business side.

THOMAS BILLETER: So, we like to believe that we are a sustainable business and not just one that is waiting to be acquired by a large player. And the way we think that we can make money, and are demonstrating it in Switzerland where we started, is really through advertising, and our advertising product is really a very simple one. As you might recognise, there is always buffering time when you start streaming on the Internet, so we have five to seven seconds of time when you switch from one channel to the other that can be used for advertising. These are like five seconds of glory for the advertiser, where they have the undivided attention of the consumer, who has just selected a new channel. He is waiting for new content to come; he will not walk away, he will really see the ad. This ad does not compete with any other content at any other point in time. So we obviously have very, very good performance metrics on the ads we serve. Today these are mainly animated [unclear].

Just, again, for advertisers it is important to know how many people they can reach within the given time a campaign is active. This is an example coming from our original country, Switzerland, where we serve roughly 6 million contacts with consumers per month, and if we look at our statistics, you would typically reach around 200,000 during a campaign. This is a good number for a small country like Switzerland; obviously in larger countries these numbers will be significantly higher.

Freddie has mentioned that we are trying to be good citizens in terms of the way we use the technology and the resources in the network. We also try to be good citizens in the whole legal system, in the whole TV ecosystem, and we don't want to be seen, or to look at ourselves, as disrupting existing businesses, so we do try to partner up with the incumbents of the TV space, and these tend to be the ISPs; we try to partner up on the customer relations side. Some ISPs, because they want to provide an experience to their broadband or access users, try to propose services like Zattoo as a good reason to switch to them. In this respect, our survey has demonstrated that roughly 10 percent of our users have switched or upgraded their broadband service because they wanted to use a service like Zattoo, so this is obviously generating some additional value, some additional revenue, for the ISPs.

Also, we would like to make it an effective service, so we partner up with ISPs to optimise the infrastructure side, and in particular we will have some first instances where we combine the P2P capability with the ability to take a multicast video signal provided by incumbent telcos, typically as part of their IPTV service.

Last but not least, some of the ISPs that have a high end IPTV service have some exclusive content, typically sport, so the ability to partner up with them will again give us the opportunity to offer our users some additional content experience.

Now, the broadcasters are always believed to be the losers in this switch-to-the-Internet trend; Zattoo really helps them regain some of the audience they were losing, so we extend the reach they have, and in these terms we believe we are not really competing with the channel operators. We can add interactivity to their services, to their product, not only in the normal show or content part but also in the advertising, and so we are starting some partnerships with broadcasters and their advertising partners to provide interactive advertising.

Last but not least, because advertising is really our bread and butter to make it a sustainable value proposition to users, we need to partner up with advertisers and advertising platforms, and we are doing this again on a market by market basis. Typically we look at the TV channels that have a high market share, and these are typically already represented by advertising agencies, so we tend to partner up with those.

Here is just a list of advertisers that have already been on Zattoo. A lot of them you will recognise; some of them are local brands.

Well, the biggest constraint we see on developing Zattoo is the rights for rebroadcasting TV, and there was a quote in Spiegel recently: if only we could show what would be possible if we were allowed to. I think this is a good summary of this presentation. I think this IPTV development will be huge in the next few months and years, and actually I look forward to it, but it's also a big challenge.

Please sign up today at zattoo.com. I hope it works; otherwise try again from the hotel. There is a German IP address here even if you are still in The Netherlands, so you at least get an idea of the system if you are not yet in a Zattoo-enabled country.

So, we come to questions now.

While there are questions, maybe we can watch Zattoo a bit.

(Applause)

So we have a couple of minutes for questions.

AUDIENCE: A question as a user: one of my personal habits of watching TV is not watching so much of it, but I also like to watch it when I am travelling, and I understand, and experience, that that's not possible because of the rights situation. You might get a little bit devious here, just a suggestion, by offering a sort of relay service. If I have a computer somewhere at home that could relay Zattoo to me while I am travelling, I might actually be inclined to leave that relay running for a long time, providing nice P2P for you, and you might want to get your lawyers to see whether you can do that. If you can do that, it's a good idea.

FREDDIE KÜNZLER: I will then probably create a second company that does a slingbox for Zattoo, just under a different brand, but you have a valid point.

AUDIENCE: I don't want a sling box, I already have one.

FREDDIE KÜNZLER: Obviously a software slingbox. But anyway, I think the rights situation is one of the very intriguing ones, because the Internet now offers an ecosystem that has a totally different capability from the one all of the content deals were made for, and this industry will probably take a couple of years to adjust to that. But obviously we receive a lot of emails from people that would be willing to pay hundreds of dollars to get their cultural content wherever they are in the world.

AUDIENCE: My point was not about the Slingbox. My point was that if you provide the Slingbox, it could actually be a Zattoo client, so it's part of the P2P thing. If somebody else provides the Slingbox, you have no benefit.

FREDDIE KÜNZLER: You are perfectly right, but again I think these are a couple of avenues we might consider at the moment that we are well established and do not have the whole industry fighting us. But thanks for the suggestion.

AUDIENCE: I am just curious, what percentage of content do you actually deliver peer-to-peer?

FREDDIE KÜNZLER: It varies. Actually I don't have the exact figures, because it changes with every release we are rolling out. But if I say less than 50 percent, is that an answer for you?

AUDIENCE: Is that a correct answer?

SPEAKER: This is a correct answer.

AUDIENCE: Less than 20 percent?

THOMAS BILLETER: No, it's more than 20 percent.

FREDDIE KÜNZLER: It depends on your network topology. If you are sitting in your office LAN and everybody watches the same football game, whatever, then you see more than a hundred percent, because you are sharing within the campus LAN.
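To illustrate the sharing arithmetic behind that answer, a minimal sketch with purely illustrative numbers, rather than Zattoo's real accounting, could look like this:

    # Minimal sketch of the per-peer sharing ratio: how much a viewer gives
    # back to other peers relative to what it consumes (1.0 == 100 percent).
    # All figures are illustrative, not Zattoo's real accounting.

    def sharing_ratio(uploaded_mb: float, downloaded_mb: float) -> float:
        return uploaded_mb / downloaded_mb

    # Home DSL viewer: the upstream is thinner than the stream, so the peer
    # can never give back a full copy of what it watches.
    print(sharing_ratio(uploaded_mb=120, downloaded_mb=400))      # 0.3 -> 30 percent

    # Office LAN during a football game: one peer pulls the stream once and
    # re-serves it to nine colleagues over the fast local network.
    print(sharing_ratio(uploaded_mb=9 * 400, downloaded_mb=400))  # 9.0 -> 900 percent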

AUDIENCE: A question that connects this one to the previous talk. If ISPs were to offer an oracle service telling you where they wanted their users to get your content from, would you substitute that for your own measurements?

FREDDIE KÜNZLER: Well, I cannot answer that, because this question must be answered by the development team. But I think we will point them to the presentation we have just seen a minute ago.

I would like to give a business answer to your question. Obviously I don't know the intricacies of the technology challenge, but we definitely want to partner with ISPs. The fact that we are starting to use multicast to deliver the content with one larger telco in Europe is just an indication of the fact that we are willing to put some technology work into making it a better experience and a more efficient delivery.
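For readers who did not see the previous talk: the question refers to an ISP-operated oracle that ranks candidate peers on the ISP's behalf. A hypothetical sketch of how a client could fold such a ranking into its peer selection follows; the interface, the names and the weighting are assumptions for illustration and are not Zattoo's actual implementation.

    # Hypothetical sketch of ISP-guided peer selection as raised in the question.
    # The oracle format and the ranking logic are assumptions, not Zattoo code.
    from typing import Dict, List, Optional

    def rank_peers(candidates: List[str],
                   measured_rtt_ms: Dict[str, float],
                   isp_preference: Optional[Dict[str, int]] = None) -> List[str]:
        """Prefer the ISP's ranking when one is offered, falling back to the
        client's own latency measurements otherwise."""
        if isp_preference:
            # Lower value = the ISP would rather traffic came from this peer,
            # e.g. because it is on-net or reachable over cheap peering.
            return sorted(candidates,
                          key=lambda p: (isp_preference.get(p, 999),
                                         measured_rtt_ms.get(p, float("inf"))))
        return sorted(candidates, key=lambda p: measured_rtt_ms.get(p, float("inf")))

    peers = ["peer-a", "peer-b", "peer-c"]
    rtt = {"peer-a": 35.0, "peer-b": 12.0, "peer-c": 80.0}
    oracle = {"peer-c": 0, "peer-a": 1}           # ISP prefers the on-net peer-c
    print(rank_peers(peers, rtt, oracle))         # ['peer-c', 'peer-a', 'peer-b']
    print(rank_peers(peers, rtt))                 # ['peer-b', 'peer-a', 'peer-c']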

AUDIENCE: I am thinking of a different approach to supporting the nomadic customer than the one that Daniel suggested, and I am wondering whether it makes sense either, and that is the same approach that my SIP [unclear] provider gives me: I just register with his proxy and it doesn't matter where in the world I am, I still have a Dublin phone number.

FREDDIE KÜNZLER: That again is something that, through the identification, would already be there. The problem is that this industry really clears the rights not on the fact that you are a Swiss citizen or a German citizen, but on the fact that you are currently either in Germany or in Korea or wherever you happen to be, so it's not really your identification that counts but where you are.

Put it this way: we would love to bring you your favourite channel wherever you are in the world, but we are not allowed to.

We are working on it.

AUDIENCE: My question is very simple. As far as I understood, you are using closed-source software, and for me as a user [unclear] the problem is that I never know what it does; for example, tracing the Skype peer-to-peer client may show you many interesting things which are not related to it at all. This is question number one: how can you guarantee to users that your software will not do anything illegal?

The next question depends on the first. What if somebody hacks your protocol... for video and writes a client which does not include your adverts, or simply filters the adverts out of your client? What will your actions be?

FREDDIE KÜNZLER: Well, if you don't install Flash, you don't see adverts. But then it's your own fault that you don't get to see a lot of things.

Regarding the DRM system we have implemented, we are required to have some DRM mechanism. There have been some attempts to hack and pull streams from our servers, from the MPlayer people, but it was immediately detected. They couldn't do it any further because we prevented it. It's not because we don't like people from the open source scene; we have a lot of open source parts in our software, in the whole system, but for legal reasons we cannot make it open.

AUDIENCE: Understood. Was it prevented administratively or technically?

FREDDIE KÜNZLER: There are technical barriers in DRM systems, so you get re-authenticated all the time while you watch, and if you don't have a valid login, you cannot watch TV. It's that simple.
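The mechanism described there, continuous re-authentication against a valid login, can be pictured with the following sketch; the token lifetime and the class layout are assumptions for illustration, since the real Zattoo DRM protocol is proprietary.

    # Illustrative sketch only: a short-lived session token that must be renewed
    # while watching. The real Zattoo DRM protocol is proprietary and will differ.
    import time

    TOKEN_LIFETIME_S = 60          # assumed value, not Zattoo's real interval

    class Session:
        def __init__(self, valid_login: bool):
            self.valid_login = valid_login
            self.token_expiry = 0.0   # no token yet

        def reauthenticate(self) -> bool:
            """Renew the token; fails as soon as the login stops being valid."""
            if not self.valid_login:
                return False
            self.token_expiry = time.time() + TOKEN_LIFETIME_S
            return True

        def can_watch(self) -> bool:
            """Checked continuously by the player; playback stops when this fails."""
            if time.time() >= self.token_expiry:
                return self.reauthenticate()
            return True

    s = Session(valid_login=True)
    print(s.can_watch())         # True: a fresh token is issued
    s.valid_login = False        # e.g. the account is revoked server-side
    s.token_expiry = 0.0
    print(s.can_watch())         # False: renewal fails, so no more TV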

Again, perhaps I would like to add a business connotation to the question, one obviously being advertising at Zattoo. A lot of our users recognise that the only way for us to provide a service that they want to use is really to get advertising revenues, and they have repeatedly stated to us that they are fine with the ads that we are showing, particularly as they are non-intrusive, as opposed to what you see elsewhere on the Internet where you have all sorts of banners covering your content. We are really providing this one five-second ad while you have to wait anyway. So [unclear] people are not really disturbed by the advertising we provide.

AUDIENCE: One question is about the five-second ad. Do you have any feeling of how that changes users' behaviour? Do people switch channels less?

FREDDIE KÜNZLER: You have a very valid point. Five seconds is already a low motivation to do the classical zapping that you would otherwise do. If you don't get an ad, you get five seconds of our Zattoo logo, and if you do get the ad, you get the five-second ad. So far our measurements have not shown any change in behaviour. Again, we are not a zapping kind of value proposition; typically a session for a Zattoo user is three channels, not more than that.

AUDIENCE: The other question is about user behaviour in different countries. Do you have any feeling on whether that's because of the users or because of the content? If you gave Swiss content to Spanish users, would they become more Swiss?

FREDDIE KÜNZLER: Interesting question. But again, to our surprise, the content consumption patterns that we see in every market are really a hundred percent what you would expect in the local market on normal TV. Our users look at the same shows, the same news, the same sports as other folks in the same local market would. We don't get very diverging usage profiles. So I would expect that if we were to offer Swiss content to Spanish users, it would just not be looked at. So, unfortunately, we cannot make Europe Swiss.

AUDIENCE: Last question: you have a peer-to-peer network with 700 servers, and the number is going up all the time. You were asked a question earlier about the proportion of peer-to-peer traffic. I am not sure I understood whether that proportion was going up or down.

FREDDIE KÜNZLER: I think the limiting factor of our peer-to-peer ratio is the upstream bandwidth of access connections. So when the Internet delivers more than 500 kbit/s upstream bandwidth as standard, our peering rate will go up dramatically.
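The arithmetic implied by that answer can be written down roughly as follows; only the 500 kbit/s upstream figure comes from the answer itself, while the stream bitrate and the simplifications (ignoring server seeding and uneven connections) are illustrative assumptions.

    # Back-of-the-envelope bound on the peer-to-peer share of a live stream:
    # each viewer consumes stream_kbps but can contribute at most upstream_kbps,
    # so peers collectively cannot supply more than upstream/stream of the demand.
    # Server seeding and uneven access links are ignored for simplicity.

    def max_p2p_share(upstream_kbps: float, stream_kbps: float) -> float:
        return min(upstream_kbps / stream_kbps, 1.0)

    print(max_p2p_share(upstream_kbps=128, stream_kbps=500))   # 0.256 -> about 26%
    print(max_p2p_share(upstream_kbps=500, stream_kbps=500))   # 1.0   -> the ceiling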

AUDIENCE: But, at the same time, your network is developing?

FREDDIE KÜNZLER: Right.

AUDIENCE: And my question is whether, along with those developments, the peer-to-peer proportion of traffic is going up or down?

SPEAKER: It is going down in this area, where we are kind of [unclear] a lot of low-speed connectivity being added. Typically the [unclear] that come in the very first months are the guys that have a very high or a very fast access connection anyway, so those are some of the observations.

And another point is probably that this is really independent of what kind of content comes and how well it is distributed, so we are not seeing a general pattern but a lot of localised peaks that have a different quality. So we are trying to find out ourselves how it will go over time.

AUDIENCE: Thank you.

FREDDIE KÜNZLER: To add to this, we look forward to launching in Sweden, because they have a lot of [unclear] at home already deployed, and this usually comes along with symmetrical bandwidth. So hopefully this will also increase the peer-to-peer ratio.

(Applause)

CHAIR: Thank you, that concludes today's session.

See you all tomorrow.

(The session then concluded)