[At-Large] I: [ALAC-Announce] ICANN News Alert -- Notice of Preliminary Determination To Grant Registrar Data Retention Waiver Request for Ascio Technologies, Inc. Danmark - filial af Ascio Technologies, Inc. USA

Christian de Larrinaga cdel at firsthand.net
Tue Dec 22 17:27:16 UTC 2015

Comments inline (I've left your post in full).

Karl Auerbach wrote:
> On 12/19/2015 04:31 AM, Christian de Larrinaga wrote:
>> Karl Auerbach wrote:
>>> I've never heard that claim before.  I've run experiments with DNS and
>>> found surprisingly few limits on how far it can expand.  (For example,
>>> in one experiment [more than a decade ago] we ran Bind with tens of
>>> millions of top level domains and then ran query traffic [in which we
>>> mixed a fair amount of absent names to make it more real-life.])
>> I'm intrigued. Was this done to establish evidence that a flattening of
>> the hierarchy would not be a technical problem?
> The DNS hierarchy of today is extraordinarily flat - almost all of the
> fanout of the DNS tree occurs at the third level or deeper.  The number
> of root servers is fairly constrained by UDP packet size limits to about 13. 
> The root zone fans out to only a few hundred - now moving to maybe a
> couple of thousand (most of them sparsely populated) TLDs.  The vast majority
> of name queries pass through the [com, net, org, in-addr.arpa]
> branches before the real spreading of DNS occurs.  [I suspect that the
> .be and .ly branches get a fair amount of traffic - but they are
> themselves pretty flatly arranged.]
> Back to our experiment:
> ICANN kept making Chicken Little noises about how the sky would fall
> if the DNS root were to exceed a couple of hundred TLDs and thus utter
> care and decades of study would be needed.
> I (and a couple of others) said "that's rubbish".  So we took a fairly
> vanilla, but reasonably powerful, PC of the era running Linux, and
> stuffed as much memory into it as we could.
> We wrote a script that took the .com zone of that era (several tens of
> millions of names if I remember right) [don't ask how we got it, I
> don't remember].  The script turned it into a root zone file with
> delegations to non-existent machines.  We loaded it into BIND, waited
> a bit for the file to be digested, then began testing.
> (We also generated several synthetic root zones of various sizes in
> which we generated names of various lengths using random character
> sequences.)
> We generated queries to that pseudo root server.  Since recursion was
> disabled (as it is disabled on all real root servers) the fact that
> the delegations went nowhere was not particularly relevant.
> The queries were not simple one-at-a-time queries.  We overlapped
> queries and mixed in a good blend of missing names.
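[cdel: For anyone on the list curious what such a test harness might look like, here is a rough sketch of my own - not Karl's actual script. Every name, count, and file-layout detail below is invented for illustration: a toy root zone with delegations to nowhere, plus a query mix with a good blend of absent names.]

```python
import random
import string

def make_synthetic_root_zone(num_tlds, seed=42):
    """Generate a toy root zone file: one NS delegation per invented TLD,
    pointing at a nameserver that goes nowhere, plus a glue A record.
    With recursion disabled, the server only ever hands back referrals,
    so the dead delegations don't matter."""
    rng = random.Random(seed)
    lines = [
        "$TTL 86400",
        ". IN SOA a.root. hostmaster.root. ( 1 7200 3600 1209600 3600 )",
        ". IN NS a.root.",
    ]
    seen = set()
    while len(seen) < num_tlds:
        tld = "".join(rng.choice(string.ascii_lowercase)
                      for _ in range(rng.randint(3, 12)))
        if tld in seen:
            continue
        seen.add(tld)
        lines.append(f"{tld}. IN NS ns1.{tld}.")
        lines.append(f"ns1.{tld}. IN A 192.0.2.1")  # TEST-NET glue address
    return "\n".join(lines) + "\n", sorted(seen)

zone_text, tlds = make_synthetic_root_zone(1000)
# Query mix: half names that exist, half guaranteed-absent names
# (the hyphenated prefix can't collide with the all-letter TLDs).
queries = [t + "." for t in tlds[:500]] + \
          ["no-such-" + t + "." for t in tlds[:500]]
```

Scaling num_tlds into the tens of millions, writing zone_text to disk, and pointing overlapped queries at the server is essentially the shape of the experiment Karl describes.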
> We were surprised how well it ran.  It pretty much demonstrated that
> the ICANN theory that the DNS would go "boom" was a bogeyman.  It
> demonstrated that ICANN could allocate ten new TLDs a day and still
> be well within the technological limits of DNS resolvers based on
> decade-old hardware.
> Our experiment was simple, and it did not involve zone transfers or
> notifications or things like that.  But at least we did something
> concrete rather than merely waving hands.
> I told the ICANN board about these experiments, but in typical ICANN
> fashion there was no interest in following up with other actual
> experiments to ascertain whether there was an actual basis for ICANN's
> fears of DNS expansion.
> It wasn't until a decade later that ICANN participated in the
> one-day-in-the-life-of-the-internet data capture and analysis experiment.
> (I had also suggested that ICANN undertake to induce the creation of a
> DNS early-warning monitoring system - and even lined up a worldwide
> array of no-cost servers to run the monitors on - and also a system of
> DNS-in-a-box DVDs that could be disseminated so that people in
> disaster areas could start to bring back their local communications
> while they waited for the world to dig its way back in [I've lived in
> several areas that were hit by disasters, so I've had practical
> experience with this sort of thing.]  But those proposals got zero
> traction in ICANN.)
> BTW, in later years ICANN did get more technically involved - ICANN's
> work on internationalized domain names and DNSSEC has been good.
>> ...The DNS has been taken over by those
>> using it as a pseudo business registration service.
> I agree that ICANN imposed a very simple-minded business model onto
> DNS right from the outset.
> And ICANN has never reviewed those decisions from 1998 and even
> ossified some of that into legal granite - such as the gifting in
> perpetuity of .com/.net to Verisign in order to get Verisign to drop a
> lawsuit.  Even worse is the gifting of fiat registry fees to Verisign
> and others with never an inquiry as to the actual costs of providing
> those registry services.  By my calculations that ICANN gift is
> costing internet users over a billion dollars a year, every year, in
> excessive, unverified, unaudited registry fees.
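[cdel: Karl's figure is easy to sanity-check with a back-of-envelope. The registration count and fee below are my own rough assumptions for that era, and the per-name cost is purely illustrative - precisely the unaudited number Karl says nobody has ever inquired into:]

```python
# Back-of-envelope check on the "over a billion a year" claim.
# All inputs are assumptions for illustration, not audited figures.
com_net_registrations = 130_000_000   # rough .com + .net base, circa 2015
avg_registry_fee = 7.85               # approx. .com registry fee in USD
assumed_cost_to_provide = 1.00        # hypothetical per-name cost (unverified)

annual_fees = com_net_registrations * avg_registry_fee
excess = com_net_registrations * (avg_registry_fee - assumed_cost_to_provide)
print(f"fees collected: ${annual_fees/1e9:.2f}B/yr, "
      f"possible excess: ${excess/1e9:.2f}B/yr")
```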
I remember those arguments. I think there was some validity to them in
the sense that nobody had really tried to run a flattened hierarchy, so
I think your test was the correct approach. As you say, it should have
stimulated things. But running into a wall after providing empirical
data is something I've experienced as well. I ran a domain spring clean
in 2001 for .uk; it can be found on the Wayback Machine.

A few key findings:

- The registrar/registry players were against an independent look at
the quality of how zones were being managed.
- DNS suffers entropy as records go out of date. How WHOIS is supposed
to keep up when the DNS itself can't should be better understood.
- Managing DNS servers such as BIND takes effort and regular updates to
deal with vulnerabilities.  A publicly accessible DNS server is a
highly visible target.

That said, the DNS as a technology has scaled well and, as you imply,
could have scaled further. The question is why we do not see every user
and edge point running their own domains or DNS under the ICANN-managed
domain industry.

I expect one reason is that running DNS servers is non-trivial (entropy,
software updates, dependency hell, etc.).

Another candidate is that the policy behind the DNS implies heavy
overhead, loss of privacy, and cost, including an ongoing commitment to
name your devices and services in the public DNS.

So it is not surprising that people use URLs tagged onto third-party
DNS or, increasingly, private name spaces and registries outside the
DNS entirely, and then hook into a convenient domain for managing
peering interfaces and so on.

>> Incidentally I am not knocking the work that Jon Postel and Paul
>> Mockapetris started back in 82 ish and many others have done some
>> amazing work on DNS which we all depend on today.  But it seems to have
>> gone as far as it can.
> I rather disagree that DNS is running out of steam.  It is a very
> successful design that has great scaling properties.  And the decision
> of the root server operators to deploy anycast technology (a decision
> that they made on their own despite ICANN's silence) was perhaps one
> of the great unheralded tectonic advances to the internet's resiliency.
> A couple of years back there was a multi-month long workshop on cloud
> computing - a lot of big names/internet pioneers were at the meetings
> at SRI and Google and elsewhere - we quickly zoomed into naming as a
> real issue: How does one name cloud things that move and divide and
> join (especially when third parties may have persistent transaction
> relationships with specific instances of those cloud things)?  I was
> intrigued by attribute-based lookup systems, such as IF-MAP.  But at the
> bottom of those systems often were good old DNS names.
>         --karl--
Yes, that is the situation I've seen too. About a decade ago I
participated in establishing a 250-million-endpoint private ENUM
service for an early VoIP and SIP trunking application service. As a
form of infrastructure ENUM (iENUM) it used DNS technology, but it was
not visible to the ICANN DNS and peered with other networks privately,
outside the ICANN DNS.
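[For list members unfamiliar with ENUM: the mapping itself is plain DNS machinery - strip the E.164 number to its digits, reverse them, dot-separate, and append a suffix: e164.arpa in the public tree, or a private suffix in a deployment like the one above. A minimal sketch; the private suffix below is just a placeholder:]

```python
def enum_domain(e164_number, suffix="e164.arpa"):
    """Map an E.164 number to its ENUM domain name (RFC 6116 style):
    keep only the digits, reverse them, join with dots, append suffix."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

# Public tree:
print(enum_domain("+44 20 7946 0999"))
# → 9.9.9.0.6.4.9.7.0.2.4.4.e164.arpa
# A private ENUM deployment resolves the same shape of name under its
# own apex, invisible to the public DNS:
print(enum_domain("+44 20 7946 0999", suffix="enum.example.net"))
```

The resulting name is then queried for NAPTR records that point at SIP URIs; in a private iENUM the tree simply lives on servers only the peering partners can reach.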

Using the DNS as a technology is potent as it has a known code base and
significant experience exists to manage it as a distributed service.

But between charging so much for domains and imposing such a high
policy cost, the ICANN community has largely lost the business case for
its variant of the DNS as a way of naming the Internet edge.

I think that observation is relevant for ALAC and other policy fora to
put in their pipe: policy for an ICANN DNS that engages all users and
devices is rather different in scope from one that engages only hosts
and intermediaries, and the joins between the two need careful handling.

Merry Xmas!


--

