[ncc-services-wg] FW: [ipv6-wg] Publication of default assignment sizes for end-user network ?
michael.dillon at bt.com michael.dillon at bt.com
Wed Oct 28 14:40:15 CET 2009
You might want to have a look at this idea as a way to kill two birds with one stone.

-----Original Message-----
From: ipv6-wg-admin at ripe.net [mailto:ipv6-wg-admin at ripe.net] On Behalf Of michael.dillon at bt.com
Sent: 28 October 2009 13:38
To: ipv6-wg at ripe.net
Subject: RE: [ipv6-wg] Publication of default assignment sizes for end-user network ?

> Now this does not forbid you to register individual assignments into
> the public database, but doing so poses various problems. Not only the
> sheer number of assignments can be a problem, but also keeping that
> registration up-to-date will have serious impact on your day to day
> operations and especially with residential users there might be
> privacy issues as well.

If we had a distributed database protocol that extended the RIPE database into the LIRs, then this would not be an issue. I'm thinking of something like DNS.

For instance, suppose I need to look up data on three /48s in fdb8:e914::/32. I would first send a lookup request to RIPE. The RIPE db would tell me that all data for fdb8:e914::/32 is held on a server at fdb8:e914::dbdb. I would cache that information and send my first lookup request to that distributed server. When preparing the second request, for fdb8:e914:c20f::/48, I would notice that the /32 is already in my cache and query the cached server directly. This is roughly how DNS lookups work, with the result that the detailed data does not have to be kept in one central location. I would like to see the RIPE db move in the same direction.

At the same time, I would like to see the spec opened up a bit so that the distributed server operators would be free to add additional attributes to existing objects, and additional objects, so that the same distributed lookup mechanism could also be used for geographic or language identification.

--Michael Dillon
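The referral-and-cache flow described above can be sketched as follows. This is only an illustration of the caching logic: `query_ripe` and `query_server` are hypothetical stand-ins for whatever wire protocol such a distributed RIPE db would actually use, and the object format is invented.

```python
# Sketch of a DNS-style referral lookup with a local cache.
# query_ripe() and query_server() are hypothetical placeholders,
# not real RIPE database APIs.
import ipaddress

# Cache: covering prefix -> address of the LIR's database server.
referral_cache = {}

def query_ripe(prefix):
    # Hypothetical: RIPE answers with a referral, i.e. the covering
    # allocation and the server that holds its detailed data.
    return (ipaddress.ip_network("fdb8:e914::/32"), "fdb8:e914::dbdb")

def query_server(server, prefix):
    # Hypothetical: ask the LIR's distributed server for the object.
    return {"inet6num": str(prefix), "source": server}

def lookup(prefix_str):
    prefix = ipaddress.ip_network(prefix_str)
    # 1. Check the cache for a covering prefix.
    for covering, server in referral_cache.items():
        if prefix.subnet_of(covering):
            return query_server(server, prefix)
    # 2. Cache miss: ask RIPE, which replies with a referral.
    covering, server = query_ripe(prefix)
    referral_cache[covering] = server
    # 3. Follow the referral to the LIR's server.
    return query_server(server, prefix)

# The first lookup goes via RIPE; the second /48 hits the cache
# and goes straight to the LIR's server.
print(lookup("fdb8:e914:1::/48")["source"])
print(lookup("fdb8:e914:c20f::/48")["source"])
```

As in DNS, the central registry only has to answer referral queries for whole allocations; per-assignment data stays with the LIR that maintains it.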