Mailing List Archive



Re: [tlug] What would happen to the Internet if the US fell off the map



On Fri, 3 Aug 2007 15:14:20 +0900
"Josh Glover" <jmglov@example.com> wrote:

> On 02/08/07, Attila Kinali <attila@example.com> wrote:
> > That's what you think. The internet isn't as redundant as you'd imagine.
> 
> Yes, that is what I think, and unless you give me evidence to the
> contrary, I think my knowledge of Internet redundancy is at least
> equivalent to your own.

Giving evidence is difficult, because the exact structure of the
internet is not completely known, and even less well understood. Every
half year or so another study publishes new results on how the
internet is actually structured, disproving some of the older studies.
 
> Please ask Steve Turnbull how fast the Internet can rebuild itself.
> His experience is local, but the protocols scale.

A local rebuild is far easier than a global one. For one thing,
short-haul lines are easier to replace, and for another, they
can be shared much more easily.

As an example: not too long ago, a construction worker cut
half of the fibers connecting eastern and western Switzerland
(yes, he cut through half a meter of concrete and steel), which
meant that half of the ISPs couldn't connect all their customers
to the internet (those who were on the "wrong side" of the cut).
What the ISPs did then was route all traffic through the
remaining half of the fibers until an emergency connection
could be established, which took around two days. After that,
nobody felt the cut anymore.

Of course, this was only possible because most of the fibers
were either not fully used or completely idle. There was some
increase in lag and some packet drop, but nothing serious.
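To make the point concrete, here is a minimal sketch with invented
numbers (link counts, capacities and utilization are all assumptions,
not data about the actual Swiss cut): half the links die, traffic is
rerouted over the survivors, and that only works because average
utilization was well below 50%.

```python
# Hypothetical numbers: why a 50% fiber cut can be survivable.
links_total = 10          # parallel fibers on the route (assumed)
capacity_per_link = 10.0  # Gbit/s each (assumed)
utilization = 0.35        # average load before the cut (assumed)

offered = links_total * capacity_per_link * utilization  # traffic to carry
surviving = (links_total // 2) * capacity_per_link       # capacity left

print(f"offered {offered:.0f} Gbit/s vs surviving {surviving:.0f} Gbit/s")
print("survivable" if offered <= surviving else "congestion collapse")
```

With 35% utilization the surviving 50 Gbit/s absorbs the 35 Gbit/s of
rerouted traffic; push utilization past 50% and the same cut becomes a
collapse.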

Now let us take this a bit further and see what would happen if
the same thing occurred on a more global scale.

Let's cut one or two of the connections going from New York to Europe.
Would that change anything? No. The ISPs would simply reroute the
traffic through the other connections between New York and Europe.
You'd hardly feel it. Maybe a little more delay. Packet loss
shouldn't happen, because of the oversizing.

OK, now let's shut down all connections between New York and Europe.
What will happen now is that the traffic will have to be rerouted
through the connections further south (IIRC mostly Florida). But
these lines are not as big as the ones from New York. They will be
horribly swamped. This will result in extremely fast growth of the
queues in the routers. Delay will rise. Packet loss will be
considerable to huge, because the queue size of the routers is
fairly limited. Even the throttling of TCP won't help here, because
the source of the traffic isn't one point, but millions of
uncoordinated hosts. Even if each of them sends only one packet,
all of them together make up a huge amount of data.
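A toy simulation shows how fast this goes wrong. All numbers here are
illustrative assumptions, not measurements: once arrivals exceed what
the link can drain, the router buffer fills within milliseconds and
everything beyond it is dropped.

```python
# Toy model of a router queue under sustained overload (assumed numbers).
capacity_pps = 1_000_000   # packets/s the downstream link drains (assumed)
arrival_pps  = 5_000_000   # packets/s offered after the reroute (5x overload)
queue_limit  = 100_000     # router buffer size in packets (assumed)

queue = 0.0
dropped = 0.0
dt = 0.001                 # simulate in 1 ms steps
for _ in range(100):       # the first 100 ms after the cut
    queue += (arrival_pps - capacity_pps) * dt   # net queue growth
    if queue > queue_limit:
        dropped += queue - queue_limit           # buffer overflow: drops
        queue = queue_limit

fill_ms = queue_limit / (arrival_pps - capacity_pps) * 1000
print(f"buffer full after ~{fill_ms:.0f} ms")
print(f"loss rate over the first 100 ms: {dropped / (arrival_pps * 0.1):.0%}")
```

At a mere 5x overload the buffer is gone in about 25 ms and the steady
state loss rate settles near the overload ratio; the transatlantic
scenario above is far worse than 5x.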

Because this situation would be unbearable, the ISPs would be
forced to filter the traffic going through their connections
(which they have to do anyway, to some extent). So they'd first
drop all traffic that is unnecessary, like filesharing tools
etc. The internet as a general network wouldn't work flawlessly
anymore, but at least mail and web would still somehow function.

<side note>
I am not touching here on the technical difficulties of filtering
high-bandwidth traffic. As an exercise, think about how
you would filter 1Gbit/s, 10Gbit/s, 40Gbit/s and 80Gbit/s,
given today's technology (PCI 64bit@66MHz: ~4Gbit/s, shared; PCI-e:
2Gbit/s per lane, bidirectional, not shared).
</side note>
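Some back-of-envelope numbers for that exercise: how many packets per
second must be inspected at each line rate, and how many cycles of a
single CPU core that leaves per packet. The 500-byte average packet
size and the 3 GHz clock are assumptions for illustration.

```python
# Packets/s and per-packet CPU budget at various line rates (assumed sizes).
avg_packet_bits = 500 * 8         # assume ~500-byte average packets
cpu_hz = 3_000_000_000            # one 3 GHz core (assumed)

for gbits in (1, 10, 40, 80):
    pps = gbits * 1e9 / avg_packet_bits        # packets per second
    cycles_per_packet = cpu_hz / pps           # cycles available per packet
    print(f"{gbits:>2} Gbit/s: {pps / 1e6:5.2f} Mpps, "
          f"~{cycles_per_packet:.0f} CPU cycles per packet")
```

At 10 Gbit/s that is already 2.5 million packets per second, leaving
roughly 1200 cycles per packet for classification, memory access and
I/O combined; at 80 Gbit/s it shrinks to about 150.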

> Please read the aforementioned Cory Doctorow story, and see if you
> agree with his assessment; I certainly think it sounds reasonable.

I've read it. And I have to somewhat disagree. Although the big
data centers and internet exchange points will continue to work,
most of the end points will be disconnected immediately, or very soon
(i.e. less than a day) after a power outage (only the very big data
centers have backup systems big enough to ride out a power outage
longer than a few hours). Besides that, the connectivity of the end
nodes (also known as the last mile) is usually copper with repeaters
etc., which do not have any UPS. Even long-haul fibers need repeaters
which, depending on the construction, need either power at the end
of the fiber (optical amplifiers with end-point power injection)
or power directly at the point where the repeater is. So even
if the data centers are still functional, their connections might not
be anymore.

> > First, let us evaluate what the USA provides in terms of communication.
> > Network wise, it's the connection between east asia, australia, europe
> > and the rest of americas.
> 
> But there are other pipes, which would be swamped at first, then come
> back up as their owners route around them. Underwater long-haul pipes
> would be spliced (at great cost, but who cares? the Internet is now
> *vital* to most business and government functions around the world).

You have a slight misunderstanding of the order of magnitude of
the size difference we are talking about here. Europe<->USA has an
aggregate bandwidth in the two to three figure Tbit/s range. The other
connections out of Europe are well below a Tbit/s, if not in the lower
Gbit/s range.

What you call "swamped at first" is called "non-functional" in
networking terms. As in the example above, TCP will simply fail,
because there are so many sources competing for the same resource
that the congestion control algorithms would break down. And the
error recovery of TCP would make it even worse: packets whose ACKs
have not been received will be sent again and again, creating even
more traffic that has to be routed.
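The retransmission spiral is easy to quantify under a simplified model:
if a fraction p of packets is lost, every lost packet is resent, and the
resends are lost with the same probability, then each packet is
transmitted 1/(1-p) times on average (a geometric series). This is an
idealized sketch; real TCP also backs off, but with millions of
uncoordinated senders that helps far less than in the single-flow case.

```python
# Expected transmissions per packet as loss rate grows (simplified model).
for p in (0.1, 0.5, 0.9, 0.99):
    amplification = 1 / (1 - p)   # sum of the geometric retransmission series
    print(f"loss {p:.0%}: each packet sent ~{amplification:.1f}x on average")
```

So at 90% loss the sources collectively offer roughly ten times the
original load, digging the congested link an ever deeper hole.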

Filtering, if possible at all, will be horribly difficult, as even
just mail would create more traffic than the connections can handle.
And filtering by AS is also not an option, as we want to provide a
fully functional internet to everyone.

> > And even if there were, they would be only very small connections,
> > a few Gbit/s at most, as they only have to accommodate the small
> > traffic into these internet-wise rather underdeveloped countries,
> > and they would become non-functional as soon as the routes switched.
> 
> Yes, but control would return. Small pipes can be rate-limited, and
> the rate limits would bubble back up the pipes to local ISPs. The Net
> would slow to a 28.8 modem-like crawl for a little while, but it would
> be back.

Every pipe is rate-limited. And no, the problem is not that one
or two lines would operate slightly above their capacity and have
to drop some packets. We are talking here about at least two orders
of magnitude of overload. All our current congestion control systems
fail in that scenario. And by fail, I mean that they will not only
not work anymore, but will make the situation even worse than it
already is.
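Plugging in the rough bandwidth figures quoted earlier shows where the
two orders of magnitude come from. Both capacities are assumptions
taken from the discussion above, not measured values.

```python
# Scale of the overload if the New York-Europe links disappear (assumed).
lost_capacity_tbps = 20.0   # New York-Europe aggregate, Tbit/s (assumed)
remaining_tbps = 0.2        # alternative routes out of Europe (assumed)

overload = lost_capacity_tbps / remaining_tbps
print(f"overload factor: ~{overload:.0f}x, i.e. about two orders of magnitude")
```

Compare that to the 5x toy overload above, which was already enough to
exhaust router buffers in tens of milliseconds.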

> > The result will be that the East Asia/Australia/Pacific area will
> > be cut off from Europe/Africa/West Asia.
> 
> Nah, I don't buy this; see above.

Did my explanation convince you now?

> > DNS should be pretty much fine, as the root DNS servers are
> > well spread over the world, and even the non-country TLDs are
> > not as USA-centric as they used to be.
> 
> Good. I am glad to hear this has been fixed.

At most, the administrative database would be gone. It would be a
major pain to reconstruct it, but at least the DNS system would not
fail. It would just be frozen in its current state.

> > The problem is rather with the
> > small services like archive.org which do not have the financial
> > backing to support a global infrastructure.
> 
> But you forget about mirroring. :)

Yes and no. I know how much hassle it is to get a mirror
set up (I'm the admin of a webserver for an OSS project);
I don't want to know how much more work it would be to get
something like archive.org mirrored.
 
> > Not at all. DARPAnet was a lot more redundant than the internet
> > today.
> 
> No. You are flat wrong.

On what basis do you claim that?

> > Most routes these days are manually or half-manually assigned.
> 
> Right, and thus they can be manually re-assigned (though I would not
> agree with your use of the word "most").

They are. Most connections between ISPs are either paid for or
made using peering agreements. Either way, there is a contract
involved specifying how much traffic, what traffic, and which
origins and destinations are allowed to pass through the pipe.
While Tier-1 and Tier-2 providers mostly use BGP with a few
additional rules in their routers, Tier-3 providers mostly use
hand-written rules, with some additions using BGP for failover.
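A toy sketch of the two regimes just described: a hand-written static
rule table is consulted first (Tier-3 style), with BGP-learned routes
as fallback, simplified here to "shortest AS path wins". All prefixes,
next-hop names and AS numbers are invented for illustration; real BGP
policy involves many more tie-breakers (local preference, MED, etc.).

```python
# Toy route selection: static rules first, then shortest-AS-path BGP routes.
static_rules = {                      # manually assigned next hops (invented)
    "203.0.113.0/24": "upstream-A",
}
bgp_routes = {                        # prefix -> {next_hop: AS path} (invented)
    "203.0.113.0/24": {"upstream-B": [65001, 65010, 65020]},
    "198.51.100.0/24": {"upstream-A": [65002, 65020],
                        "upstream-B": [65003, 65004, 65020]},
}

def next_hop(prefix):
    if prefix in static_rules:        # the manual rule takes precedence
        return static_rules[prefix]
    paths = bgp_routes.get(prefix, {})
    if not paths:
        return None                   # no route at all
    # BGP tie-breaking reduced to "shortest AS path wins"
    return min(paths, key=lambda hop: len(paths[hop]))

print(next_hop("203.0.113.0/24"))     # static rule wins: upstream-A
print(next_hop("198.51.100.0/24"))    # shorter AS path: upstream-A
```

The point of the sketch: when the static entries stop matching reality
(e.g. after a major cut), someone has to rewrite them by hand before
traffic moves again, which is exactly why global rerouting is slow.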

> > Thank god for the
> > oversizing made during the .com bubble, otherwise we'd have daily
> > problems with flooded backbones. (side note: there are hardly any
> > satellite connections used these days, as their delay is too high)
> 
> Again, the Internet has become to important to too many people to run
> into serious scaling problems. If you build it, they will come...

The internet scales well. It's still not optimal and could do better
(most restrictions on scalability come from commercial constraints),
but in general we have not had any big issues so far. The internet
is also quite fail-safe, as long as only localized failures happen.
But it is definitely not fail-safe against any large-scale event.
For reference, see the effect of SQL Slammer on the internet's
end-to-end connectivity for non-worm traffic.

> I still think the Internet would be fine without the US. Not right
> away, of course, but the Internet can recover quite nicely from all
> manner of catastrophes.

Only after high-bandwidth connections are built to replace
the ones that were lost.

				Attila Kinali

-- 
Praised are the Fountains of Shelieth, the silver harp of the waters,
But blest in my name forever this stream that stanched my thirst!
                         -- Deed of Morred


