[At-Large] ICANN Blog : Relying on ICANN Community-Developed Processes for a Safe, Secure Internet

Karl Auerbach karl at cavebear.com
Fri Jan 7 08:02:12 UTC 2022


On 1/6/22 21:54, bzs at theworld.com wrote:
> The Orange Book (which defined A/B/C security) was mostly about
> compartmentalization, how to keep people with different levels of
> clearance, or just no "need to know", away from each other on shared
> mainframes. With an eye towards the possibility that some of those
> people on the inside might be hostile actors.

My department wrote the original drafts of the Orange Book.

It is amazingly hard to specify, with the mathematical precision we 
needed for formal software verification, that one security domain may 
not send information to a lower classification domain. That was called 
the *-property ("star property").

We wrote some other documents regarding expansion of the Orange Book 
ideas into the realm of networks.  Those papers were physical paper and 
have sunk into the infinite maw of the US gov't - not quite classified 
but rather in some forgotten banker's box or file cabinet at some 
forgotten location.

One paper I really wish I could find is one where I worked on methods 
of debugging a tightly secured operating system.  We were working with 
capability architectures at the time so we even had the hardware working 
against debugging.

You and I have both had a long-time interest in network tools (and 
dies), so I suspect we share the satisfaction of being in the right 
place at the right time with the right tools.

> It was obsolesced by computers becoming cheap enough that you just
> didn't share resources between disjoint departments, and other
> factors.

One could only wish that tendency had persisted.  These days we tend to 
live in a world where data is held in big shared storage.  Those big 
storage places have, of late, often exposed their goods due to 
weaknesses in their software, weaknesses in internet access protocols, 
or, rather frequently, procedural failures ranging from people 
responding to phishing attacks to things like DNS resolvers that give 
back misleading answers.

A lot of security is knowing when to open the door of access and knowing 
when to slam it shut and push the alarm button.  That often requires 
firm credentials of the "subject" trying to make the access.  We've been 
reluctant on the internet to create what amounts to a lord-of-identity.  
That goes against the libertarian bent of many people.

But even when identity is asserted we often do a weak job of 
authenticating it.  For example, TLS connections often validate the full 
credential chain of only one end of the connection, not both.
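
Doing it in both directions is possible - mutual TLS - but it has to be 
asked for explicitly.  A rough sketch using Python's ssl module (the file 
names are placeholders):

    import ssl

    # Server-side context that demands and verifies a client certificate,
    # so both ends of the connection present a credential chain.
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="trusted-client-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid cert

Most deployments never set that last line, which is exactly the one-sided 
validation I'm describing.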

What we found, and this was even more important on networks than inside 
a single machine, was that the chain of access mattered.  If A wanted 
resource X, it was often important that A make its request of B, which 
in turn could access X.  Capability systems were a great tool for this, 
but they don't work well across networks.
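
To give a flavor of that kind of mediated access, here is a toy sketch 
(the names are mine, nothing standard): A never holds X directly, only a 
narrow handle that B chose to grant, so every access flows through that 
grant.

    # A holds only a capability minted by B; the capability, not A's
    # identity, determines which operations on X are possible.
    class Capability:
        def __init__(self, resource, allowed_ops):
            self._resource = resource
            self._allowed = frozenset(allowed_ops)

        def invoke(self, op, *args):
            if op not in self._allowed:
                raise PermissionError(f"capability does not permit {op!r}")
            return getattr(self._resource, op)(*args)

    # B, which owns X, might hand A a read-only capability:
    #   cap = Capability(X, {"read"})
    #   cap.invoke("read")        # allowed
    #   cap.invoke("write", ...)  # raises PermissionError

Across a network the hard part is making that handle unforgeable and 
revocable, which is why the approach never translated cleanly.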

And then we come to my main point - how do we fix things when security 
walls are present and working?  How does Joe Repairman prove that he can 
come in and hook up to the crown jewels of the data flowing in a network 
or stored somewhere?  How does Joe Repairman do that over a network?

I spent much of the 1980's keeping the networks of a very large bank 
alive.  When I started we had a mere $20 billion flowing through our nets 
every night.  That's a pittance by today's standards.  I often had to 
get deep access (sometimes in a physical sense - some of our gear was 
buried deep under the streets of San Francisco).  We had locks and 
barriers out the wazoo.  And encryption and procedures.  But we had 
means by which I could get in, turn off encryption, fix things, and get 
out.  Things have gotten far more reliable in the intervening years, 
which is good, because today that kind of job would be nearly impossible 
to do, or at least impossible to do as quickly, due to the layers of 
security that have grown.

I've run into problems dealing with distributed attacks.  When I tried 
to chain back through the various carriers, especially with carriers 
across an ocean from me, I often found myself unable to obtain the 
needed data because those providers didn't know me from Adam or just 
didn't want to help.

Most of us have done that school experiment where we boil up a batch of 
sterile agar, put it into a sterile petri dish, cough on it, cap it, and 
then watch it for a few weeks.  It begins with massive growth and then 
it starts to poison itself.

The internet is like that petri dish - it's grown like crazy but it is 
starting to pollute itself, whether with hostile attacks, failing gear, 
or just bad software.  The internet so far has had enough excess 
resources that we could sort of push those problems off to the side.  
But that era of averting our eyes is ending; we are going to have to 
become much more serious about finding problems and fixing them with 
dispatch.

     --karl--



