Policy Statement on Address Space Allocations
Dennis Ferguson dennis at Ipsilon.COM
Fri Jan 26 01:49:48 CET 1996
Yakov,
>> It's the other way round: SPRINT should tell his customers he can't
>> guarantee 100% global Internet connectivity because he disagrees with
>> the current address allocation policy of the IANA/InterNIC/RIPE NCC/AP-NIC.
>
> Would you assume that anyone whose address allocation follow
> "the current address allocation policy of the IANA/InterNIC/RIPE NCC/AP-NIC"
> is guaranteed 100% global Internet connectivity ?
It is pretty hard to guarantee 100% of anything, but my guess is that
"IANA/InterNIC/RIPE NCC/AP-NIC" would be willing to follow any sort of
guidelines for how they should allocate address space such that ISPs
would endeavour to do their best to route the resulting address topology.
Even if this means always allocating in /17 units.
The problem is that no one has ever bothered to set meaningful engineering
goals for maximizing the life of IPv4. As far as I know the only targets
we have are the entirely qualitative, warm-and-fuzzy ones of "smaller"
forwarding tables and "better" efficiency of address utilization. So
everyone goes off and implements their own policies to make things "smaller"
and "better", and we make proposals for making things even "smaller" and
"better" still, without the faintest idea of how to quantify "smaller" and
"better" in a consistent fashion. So what end result could we have other
than what we got, which is people doing conflicting things, but
perfectly justifiable things based on their own notions of "smaller" or
"better", and then pointing fingers of blame at each other when the
end result is broken, or doesn't conform to their own personal notions
of "smaller" and "better"?
And I don't think the inability to guarantee 100% of anything justifies
ignoring the problem, or not dealing with it as the operational engineering
issue that it is. It is broken, and an inability to make 100% guarantees
doesn't constrain one from making an attempt to engineer a fix, where
"engineer" should mean you have some ability to quantitatively evaluate
the outcome.
We've got a basic conflict between "smaller" and "better", whose resolution
will require (in the absence of really good renumbering technology)
constraining our insistence on efficient address utilization by measuring
the effect this has on routing tables. We need to get some quantitative
goals assigned to this so we can measure what is "good" and "bad". I'd
(again) suggest the following:
(1) Let's try to make a realistic estimate of what the end state for the
IPv4 address space should be. I.e. how many (or few) routes should we
be aiming at being able to carry by the time the address space is entirely
allocated. Let's come to some consensus about what this number should
be (call it 200,000 routes for the sake of current argument), document
it, and have everyone include it in RFIs for future routers so that
the goal, whatever it is, is clearly defined for hardware vendors (who
can then complain if the target is unreasonable, so one can adjust
it down accordingly).
(2) Let's then look at the amount of extant global routing information from
each class-A-sized space, keeping each one to the average required to
meet the end-state estimate (about 900 routes per class-A to hit the
200,000 mark). We then squeeze down on existing blocks to fit into this
(by imposing filtering, or whatever, if necessary). We also suggest
that when any new block a registry is allocating from hits the magic
route limit they quit trying to fill in spaces and go on to another
class-A sized block (getting more efficiency out of the current block
is useless anyway if we're not going to be able to route it). We don't
worry about the length of individual prefixes, just the total routes
for the block, as this gives registries some flexibility in accommodating
both small and large providers and sites.
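The per-block arithmetic in (2) can be sketched in a few lines. This is a hypothetical illustration only: the 200,000-route end-state target is the number assumed "for the sake of current argument" above, and the figure of roughly 224 allocatable class-A-sized blocks is an assumption introduced here (256 minus reserved and multicast space), as is the helper name block_within_budget.

```python
# Hypothetical route-budget arithmetic for point (2).
# Assumptions (not from the original text, except the 200,000 target):
# roughly 224 class-A-sized (/8) blocks usable for unicast allocation.

END_STATE_ROUTES = 200_000   # assumed end-state routing table target
CLASS_A_BLOCKS = 224         # illustrative count of allocatable /8s

# Average number of globally visible routes each /8 may contribute
# if the end-state target is to be met (close to the ~900 figure
# mentioned in the text).
budget_per_block = END_STATE_ROUTES // CLASS_A_BLOCKS

def block_within_budget(routes_in_block: int,
                        budget: int = budget_per_block) -> bool:
    """True if a class-A-sized block's announced routes fit the budget.

    The test is on the total number of routes from the block, not on
    individual prefix lengths, matching the flexibility the text asks
    for in accommodating both small and large providers.
    """
    return routes_in_block <= budget

print(budget_per_block)          # 892
print(block_within_budget(900))  # False: block needs squeezing down
print(block_within_budget(500))  # True
```

A registry applying this rule would stop filling in a block once its announced-route count hits the budget and move on to the next class-A-sized block, as point (2) suggests.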
(3) Then we can spend IETF meetings looking at charts with numbers which
have real targets to compare them to, and bashing those with measurably
bad numbers, and applauding those with measurably good numbers, and
figuring out what to do if the numbers we picked look unachievable, or
if we need new technology to do better. And otherwise actually
engineering the problem, rather than just talking about how to get it
"better" and "smaller".
And I mostly think that the IETF group into which this issue fits is not
doing its job unless it can get us to a point where address allocation
people and ISPs aren't issuing disclaimers about each other's behaviour,
and instead have got some mutually agreed upon, and verifiable, goals
and targets which everyone works towards.
Dennis Ferguson