[At-Large] I: [ALAC-Announce] ICANN News Alert -- Notice of Preliminary Determination To Grant Registrar Data Retention Waiver Request for Ascio Technologies, Inc. Danmark - filial af Ascio Technologies, Inc. USA

Karl Auerbach karl at cavebear.com
Sun Dec 20 00:34:45 UTC 2015



On 12/19/2015 04:31 AM, Christian de Larrinaga wrote:

> Karl Auerbach wrote:

>> I've never heard that claim before.  I've run experiments with DNS and
>> found surprisingly few limits on how far it can expand.  (For example,
>> in one experiment [more than a decade ago] we ran Bind with tens of
>> millions of top level domains and then ran query traffic [in which we
>> mixed a fair amount of absent names to make it more real-life.])
>>
>
> I'm intrigued. Was this done to establish evidence that a flattening of
> the hierarchy would not be a technical problem?

The DNS hierarchy of today is extraordinarily flat - almost all of the 
fanout of the DNS tree occurs at the third level or deeper.  The number 
of root servers is fairly constrained by UDP packet size limits to about 
13.  The root zone fans out to only a few hundred - now moving to maybe 
a couple of thousand (most of them sparsely populated) TLDs.  The vast 
majority of name queries pass through the [com, net, org, in-addr.arpa] 
branches before the real spreading of DNS occurs.  [I suspect that the 
.be and .ly branches get a fair amount of traffic - but they are 
themselves pretty flatly arranged.]
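
(If you want to see that flatness for yourself, a few lines of dnspython 
- nothing to do with our experiment, and the root server address is 
simply that of a.root-servers.net - will show a root server handing back 
a referral into the .com branch for an ordinary name:)

    import dns.flags
    import dns.message
    import dns.query

    # Ask a root server for an ordinary name, with recursion disabled.
    q = dns.message.make_query("www.example.com", "A")
    q.flags &= ~dns.flags.RD            # root servers don't recurse anyway
    r = dns.query.udp(q, "198.41.0.4", timeout=5)   # a.root-servers.net

    # The answer section is empty; the authority section carries the
    # delegation - the NS records for "com." - i.e. the first real fanout.
    for rrset in r.authority:
        print(rrset)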

Back to our experiment:

ICANN kept making Chicken Little noises about how the sky would fall if 
the DNS root were to exceed a couple of hundred TLDs, and that therefore 
utter care and decades of study would be needed.

I (and a couple of others) said "that's rubbish".  So we took a fairly 
vanilla, but reasonably powerful, PC of the era running Linux, and 
stuffed as much memory into it as we could.

We wrote a script that took the .com zone of that era (several tens of 
millions of names if I remember right) [don't ask how we got it, I don't 
remember].  The script turned it into a root zone file with delegations 
to non-existent machines.  We loaded it into BIND, waited a bit for the 
file to be digested, and then began testing.

(We also generated several synthetic root zones of various sizes, filled 
with names of various lengths built from random character sequences.)
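
(The idea is simple enough that a few lines of Python convey it.  This 
is not the original script - that is long gone - just a rough sketch 
under assumed file names, counts, and addresses: every label, whether 
taken from a real zone or made of random characters, becomes a TLD 
delegated to a name server that does not really exist.)

    import random
    import string

    # Sketch only: emit a huge "root" zone in which every label becomes a
    # TLD delegated to a name server that never answers (the glue points
    # at a documentation address, 192.0.2.1, so the delegation goes
    # nowhere).  File name, counts, and label lengths are illustrative.
    def write_fake_root(path, labels):
        with open(path, "w") as f:
            f.write(". 86400 IN SOA ns.fake. hostmaster.fake. 1 7200 3600 604800 86400\n")
            f.write(". 86400 IN NS ns.fake.\n")
            f.write("ns.fake. 86400 IN A 192.0.2.1\n")
            for label in labels:
                f.write("%s. 86400 IN NS ns.%s.\n" % (label, label))
                f.write("ns.%s. 86400 IN A 192.0.2.1\n" % label)   # glue

    def random_labels(count, min_len=3, max_len=20):
        # Random-character TLDs of assorted lengths, as in the synthetic zones.
        for _ in range(count):
            n = random.randint(min_len, max_len)
            yield "".join(random.choice(string.ascii_lowercase) for _ in range(n))

    write_fake_root("fakeroot.zone", random_labels(1000000))

Point BIND at the result with something like zone "." { type master; 
file "fakeroot.zone"; }; and recursion turned off, and you have a pseudo 
root to beat on.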

We generated queries to that pseudo root server.  Since recursion was 
disabled (as it is disabled on all real root servers) the fact that the 
delegations went nowhere was not particularly relevant.

The queries were not simple one-at-a-time queries.  We overlapped 
queries and mixed in a good blend of missing names.
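
(Again, not our actual harness - just a rough sketch of the idea in 
Python, using dnspython and a thread pool: overlapped, non-recursive 
queries against the pseudo root, with a tunable fraction of names that 
are not in the zone.  The server address, label list, and counts are 
placeholders.)

    import random
    import string
    from concurrent.futures import ThreadPoolExecutor

    import dns.exception
    import dns.flags
    import dns.message
    import dns.query

    SERVER = "127.0.0.1"                  # machine running the pseudo root
    EXISTING = ["example", "foo", "bar"]  # labels known to be in the zone (placeholder)
    MISS_RATE = 0.3                       # fraction of queries for absent names

    def one_query(_):
        if random.random() < MISS_RATE:
            label = "".join(random.choice(string.ascii_lowercase) for _ in range(12))
        else:
            label = random.choice(EXISTING)
        q = dns.message.make_query(label + ".", "NS")
        q.flags &= ~dns.flags.RD          # no recursion, just like a real root
        try:
            return dns.query.udp(q, SERVER, timeout=2).rcode()
        except dns.exception.Timeout:
            return None

    # Overlap the queries rather than sending them one at a time.
    with ThreadPoolExecutor(max_workers=50) as pool:
        rcodes = list(pool.map(one_query, range(100000)))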

We were surprised how well it ran.  It pretty much demonstrated that the 
ICANN theory that the DNS would go "boom" was a bogeyman.  It 
demonstrated that ICANN could allocate ten new TLDs a day - only a few 
thousand a year, a tiny fraction of the tens of millions of delegations 
we had just served - and still be well within the technological limits 
of DNS resolvers based on decade-old hardware.

Our experiment was simple, and it did not involve zone transfers or 
notifications or things like that.  But at least we did something 
concrete rather than merely waving hands.

I told the ICANN board about these experiments, but in typical ICANN 
fashion there was no interest in following up with other actual 
experiments to ascertain whether there was an actual basis for ICANN's 
fears of DNS expansion.

It wasn't until a decade later that ICANN participated in the 
one-day-in-the-life-of-the-internet data capture and analysis experiment.

(I had also suggested that ICANN undertake to induce the creation of a 
DNS early-warning monitoring system - and even lined up a worldwide 
array of no-cost servers to run the monitors on - and also a system of 
DNS-in-a-box DVDs that could be disseminated so that people in disaster 
areas could start to bring back their local communications while they 
waited for the world to dig its way back in.  [I've lived in several 
areas that were hit by disasters, so I've had practical experience with 
this sort of thing.]  But those proposals got zero traction in ICANN.)

BTW, in later years ICANN did get more technically involved - ICANN's 
role in internationalized domain names and DNSSEC has been good.

> ...The DNS has been taken over by those
> using it as a pseudo business registration service.

I agree that ICANN imposed a very simple-minded business model onto DNS 
right from the outset.

And ICANN has never reviewed those decisions from 1998; it has even 
ossified some of them into legal granite - such as the gifting in 
perpetuity of .com/.net to Verisign in order to get Verisign to drop a 
lawsuit.  Even worse is the gifting of fiat registry fees to Verisign 
and others, with never an inquiry into the actual costs of providing 
those registry services.  By my calculations that ICANN gift is costing 
internet users over a billion dollars a year, every year, in excessive, 
unverified, unaudited registry fees.


> Incidentally I am not knocking the work that Jon Postel and Paul
> Mockapetris started back in 82 ish and many others have done some
> amazing work on DNS which we all depend on today.  But it seems to have
> gone as far as it can.

I rather disagree that DNS is running out of steam.  It is a very 
successful design that has great scaling properties.  And the decision 
of the root server operators to deploy anycast technology (a decision 
that they made on their own despite ICANN's silence) was perhaps one of 
the great unheralded tectonic advances to the internet's resiliency.

A couple of years back there was a multi-month-long workshop on cloud 
computing - a lot of big names/internet pioneers were at the meetings at 
SRI and Google and elsewhere - and we quickly zoomed in on naming as a 
real issue: How does one name cloud things that move and divide and join 
(especially when third parties may have persistent transaction 
relationships with specific instances of those cloud things)?  I was 
intrigued by attribute-based lookup systems, such as IF-MAP.  But at the 
bottom of those systems were often good old DNS names.

		--karl--

