Re: [tlug] What would happen to the Internet if the US fell off the map
- Date: Sat, 4 Aug 2007 16:25:24 +0900
- From: "Josh Glover" <jmglov@example.com>
- Subject: Re: [tlug] What would happen to the Internet if the US fell off the map
- References: <d8fcc0800708012132m59e42e7w6309cb156f9d9bb7@mail.gmail.com> <20070802094017.8ce7b1f7.attila@kinali.ch> <d8fcc0800708022314y5b6c5ff2r94bd63aab01ad4ef@mail.gmail.com> <20070803112611.db36a85f.attila@kinali.ch>
On 03/08/07, Attila Kinali <attila@example.com> wrote:
> On Fri, 3 Aug 2007 15:14:20 +0900
> "Josh Glover" <jmglov@example.com> wrote:
>
> > On 02/08/07, Attila Kinali <attila@example.com> wrote:
> > > That's what you think. The internet isn't as redundant as you'd imagine.
> >
> > Yes, that is what I think, and unless you give me evidence to the
> > contrary, I think my knowledge of Internet redundancy is at least
> > equivalent to your own.
>
> Giving evidence is difficult, because the exact structure of the
> internet is not completely known and even less understood.

Sorry, I probably took offence where it was not intended (as was pointed
out to me in an off-list reply). I felt that your tone was condescending,
and let my feathers get a bit ruffled.

Here is what I meant: if you have worked for a major ISP or have some kind
of special insight, you really should say so. Otherwise, we have to treat
your opinion as equivalent to anyone else's: just a semi-informed guess.

I keep using Steve as an example, but when he posts on an economic topic,
we all know that he is a Professor of Economics at Tsukuba-dai, and that
informs how we treat his opinion. If Steve were new to this list and
posted about an economic topic, I would expect him to mention that he is
an expert. If Curt or Zev posts about Japanese mobile phones, I expect
them to mention that they've spent the last N years working with them
professionally. When Keith or Mauro or I talk about an Amazon issue, we
should state our affiliation. This is full disclosure, and it helps others
judge how much weight to assign to what we say.

Please do the same; I don't know you or what you do for a living, but you
obviously have more knowledge about the Internet than the average
lay-geek. Please tell us roughly how you came by it (e.g. are you a
hobbyist? a security researcher? do you work for an ISP? for the
government? etc.).
> About every half year there is another study that publishes new results
> on how the internet is actually structured, disproving a few of
> the older studies.

I know this, and I base my argument solely on my knowledge of how the
Internet was *designed*, because the implementation varies wildly.

> Local rebuild is far easier than global rebuild. For one thing,
> short haul lines are easier to replace, and for another, you
> can share them a lot more easily.

Of course. But my argument is simple:

1. TCP/IP and the commonly used routing protocols were *designed* to
   handle nodes dropping off the map.
2. New pipes can be laid extremely quickly when necessary.
3. The Internet is of sufficient importance that governments and large
   corporations will spend whatever it takes to repair it in case of a
   catastrophe.

This argument cares not a whit for how the Internet is currently
implemented, and I will be the first to admit that beyond the very
high-level overview, I am not conversant with the details. By way of
example, I know how DNS works, but I did not know, until you or someone
else pointed it out in this thread, that the root servers are now
geographically dispersed. Last I heard, and this is knowledge from the
last time network security was part of my day job (around 2002), this
was not the case. However, not knowing where the DNS servers live does
not impair my understanding of the protocols that make DNS work.

> As an example: not too long ago, a construction worker cut
> half of the fibers connecting east and west Switzerland [...]
> What the ISPs did then was to route all traffic through the
> remaining half of the fibers until an emergency connection
> could be established, which took around two days. After that,
> nobody felt anything from the cut anymore.

Does this example not support my argument?

> Now let us think this a bit further and see what would happen if
> this had been on a more global scale.

Yes, let's. :)
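My first point above, that the routing layer was designed to route around
failed nodes, can be illustrated with a toy sketch. The topology and names
here are invented for illustration, and real routers run BGP or OSPF
rather than a bare breadth-first search, but the principle is the same:
remove a node and the surviving links still yield a path.

```python
from collections import deque

def shortest_path(links, src, dst):
    """BFS shortest path over an undirected link list; None if unreachable."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A toy topology with two transatlantic-style routes.
links = [("tokyo", "ny"), ("ny", "london"),
         ("tokyo", "la"), ("la", "miami"), ("miami", "london")]

print(shortest_path(links, "tokyo", "london"))  # ['tokyo', 'ny', 'london']

# Now "New York falls off the map": drop every link touching it.
survivors = [l for l in links if "ny" not in l]
print(shortest_path(survivors, "tokyo", "london"))  # reroutes via Miami
```

The network keeps functioning with the longer southern path; whether the
surviving links have the *capacity* for the rerouted traffic is exactly
the separate question argued below.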
> Ok, now let's shut down all connections between New York and Europe.
> What will now happen is that the traffic will have to be rerouted
> through the connections further south (IIRC mostly Florida). But
> these lines are not as big as the ones from New York. They will be
> horribly swamped. This will result in an extremely fast growth of the
> queues in the routers. Delay will rise. Packet loss will be
> considerable to huge, because the queue size of the routers is fairly
> limited. Even the throttling of TCP won't help here, because the
> source of the traffic isn't one point but millions of uncoordinated
> hosts. Even if each of them sends only one packet, all of them
> together will make up a huge amount of data.

I agree with all of this. But why do you assume that the ISPs are
sitting on their hands? No halfway decent network admin will rely on the
clients' TCP stacks playing nice; they'll start changing routing
policies and choking way back on the throttles so their queues don't
overflow.

> Because this situation would not be bearable, the ISPs would be
> forced to filter the traffic going through their connection
> (which they have to do anyway to some extent). So they'd first
> drop all traffic that is unnecessary, like filesharing tools
> etc. The internet as a general network wouldn't work flawlessly
> anymore, but at least mail and web would still somehow function.

OK, now we are on the same page.

> <side note>
> I do not touch here the technical difficulties of filtering
> high bandwidth traffic. As an exercise, think about how
> you would filter 1Gbit/s, 10Gbit/s, 40Gbit/s and 80Gbit/s,
> given today's technology (PCI 64@example.com: ~4Gbit/s shared; PCI-e:
> 2Gbit/s/lane bidirectional, not shared).
> </side note>

Mate, this problem is solved by the backbone operators, because they
don't give a damn. They make sure the traffic that is important to them
gets through, leaving others to fend for themselves.
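The queue-growth scenario quoted above can be sketched with a toy
tail-drop queue. All the numbers are invented for illustration (real
routers use much larger queues, and active queue management like RED/ECN
rather than pure tail drop), but the shape of the result holds: once the
arrival rate exceeds the link rate, the excess is simply dropped.

```python
def simulate_queue(arrivals, service_rate, queue_limit):
    """Toy tail-drop FIFO router queue.

    Each tick: packets arrive, up to service_rate are forwarded, and
    anything still queued beyond queue_limit is dropped off the tail.
    Returns (delivered, dropped)."""
    backlog = delivered = dropped = 0
    for arriving in arrivals:
        backlog += arriving
        sent = min(backlog, service_rate)
        delivered += sent
        backlog -= sent
        if backlog > queue_limit:          # queue overflows: tail drop
            dropped += backlog - queue_limit
            backlog = queue_limit
    return delivered, dropped

# Normal load: 100 units/tick through a link that can forward 150.
print(simulate_queue([100] * 10, service_rate=150, queue_limit=50))  # (1000, 0)

# Transatlantic cut: rerouted traffic triples the load on the same link.
print(simulate_queue([300] * 10, service_rate=150, queue_limit=50))  # (1500, 1450)
```

The toy model deliberately ignores the senders reacting to the loss;
whether TCP backoff plus ISP filtering tames this overload or makes it
worse is precisely the point being argued in this thread.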
This is the beauty of a best-effort network: you can change the
definition of best effort to match the situation, and the network does
not crash; it routes around the trouble *somehow*.

> > Please read the aforementioned Cory Doctorow story, and see if you
> > agree with his assessment; I certainly think it sounds reasonable.
>
> I've read it. And I have to somewhat disagree.

Then I guess we will have to agree to disagree, because I certainly read
nothing in that story that shattered my willing suspension of disbelief.
I did not read the story with a critical eye towards his portrayal of
the 'Net, 'tis true, but I am pretty touchy about bullshit tech in my
fiction. (Just ask my wife, who occasionally has to ask me--with the
patience of a saint--to stop pausing the goddamn movie every five
seconds to explain why this or that tech reference was bullshit. It got
really bad when we watched the first season of "24" on DVD. :)

Actually, Attila, if you have the time, it would be pretty cool to work
up an annotated version of "WSRtE", where you point out the bits you
consider questionable. Then other experts (i.e. not me) could respond. I
seriously think it would be a cool project for TLUG, and Cory Doctorow
would surely get a kick out of it (he's a very cool, down-to-earth guy
who would welcome interesting discussion of his piece). Our wiki is
waiting. :)

> You have a slight misunderstanding of the order of magnitude of the
> size difference we are talking about here. Europe<->USA has an
> aggregate bandwidth in the two to three figure Tbit/s range. The
> other connections from Europe out are well below a Tbit/s, if not in
> the lower Gbit/s range.

I know this; if not the exact numbers, I am well aware of the
respective orders of magnitude.

> What you call "swamped at first" is called "non-functional" in
> networking terms.
> As in the example above, TCP will simply fail because there are so
> many sources competing for the same resource that the congestion
> control algorithms would fail.

Again, TCP congestion control is not the only thing at work here.

> And the error recovery system of TCP would make it even worse:
> packets whose ACK has not been received will be sent again and
> again, creating even more traffic that has to be routed.

But the ISPs can simply start dropping these packets. And TCP is not as
dumb as you make it out to be here; exponential backoff is designed to
prevent the spiralling cluster-fuck that you describe above.

> And filtering by AS is also not an option, as we want to provide a
> fully functional internet to everyone.

No we don't! In the case of a catastrophe, I would expect all network
owners to go into "fuck you, I gotta get mine" mode. And this is
actually good for the 'Net as a whole, because it will keep the overall
traffic down.

You know that most TCP/IP stacks will shut down an interface
automatically if it encounters a certain duration of high packet loss
(where "high" and the duration are completely arbitrary), right? ISPs
cannot, should not, and will not rely on this, but the majority of
TCP/IP clients out there are pretty well behaved.

> Every pipe is rate limited.

I know. And moreover, that rate limit can be adjusted downwards from
the realistic maximum if need be.

> All our current congestion control systems fail in that scenario. And
> by fail, I mean that they will not only not work anymore, but will
> make the situation even worse than it already is.

OK. This I simply refuse to believe. You may be entirely correct, but
you are going to have to give me specifics in order to convince me.
Which congestion control systems are you referring to? How will they
fail? How will the spiral get out of hand?

> Did my explanation convince you now?

Nope. :)

> At most the administrative data base would be gone.
> It would be a major pain to reconstruct it, but at least the DNS
> system would not fail. It would just be frozen in its current state.

That is acceptable, isn't it?

> Yes and no. I know how much hassle it is to have a mirror
> set up (I'm admin of a webserver of an OSS project);
> I don't want to know how much more work it is to get
> something like archive.org mirrored.

I know, but these mirrors do exist. That is my point.

> > > Not at all. DARPAnet was a lot more redundant than the internet
> > > today.
> >
> > No. You are flat wrong.
>
> On what basis do you claim that?

My knowledge of the history of the Internet. And I am all but certain
that what you are calling "DARPAnet" was actually named "ARPANET". The
DoD group that funded the early research was called DARPA, but the net
itself was simply ARPANET. On what basis do you make your original
statement?

> The internet scales well. It's still not optimal and could do better
> (most restrictions in scalability come from commercial restrictions),
> but in the general sense, we have not had any big issues until now.
> The internet is also quite fail-safe, given that only localized
> failures happen. But it is definitely not fail-safe in any
> large-scale event.

Again, I disagree. But I don't know what your definition of
"large-scale event" is. Maybe you should state it clearly. For the
record, when I say "large-scale event" I am talking about a catastrophe
similar to Cory Doctorow's fictional one, or something like the US
getting attacked by UFOs as in "Independence Day" (examples carefully
chosen and T-word carefully avoided in hopes that the roving eye of A
Certain Country's Department of "Heroic" Sadomasochism will not focus
on me).

> For reference, see the effect of SQL Slammer on the internet
> end-to-end connectivity of non-worm traffic.

Yeah, the Internet lived to tell the tale, right?

> > I still think the Internet would be fine without the US.
> > Not right away, of course, but the Internet can recover quite
> > nicely from all manner of catastrophes.
>
> Only after high bandwidth connections have been built to replace the
> ones that were lost.

So we agree after all! :)

The crux of my argument is that the protocols that make the Internet
work were designed with nuclear annihilation in mind, and that the
Internet is so important that building high-bandwidth links to replace
ones that went dark would be almost immediate. Weeks at most. Not
months.

--
Cheers,
Josh
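The backoff behaviour disputed above can be sketched in a few lines: a
TCP-like sender doubles its retransmission timeout (RTO) after each
unacknowledged attempt, so under sustained loss its retries spread out
exponentially rather than flooding the link. This toy model assumes a
1-second initial RTO; real stacks derive the RTO from measured
round-trip times (RFC 6298) and also cap the number of retries.

```python
def retransmit_times(initial_rto, max_tries):
    """Seconds (after the first send) at which a TCP-like sender would
    retransmit an unacknowledged segment, doubling the RTO each time."""
    times, t, rto = [], 0.0, initial_rto
    for _ in range(max_tries):
        t += rto          # wait out the current timeout...
        times.append(t)   # ...then retransmit
        rto *= 2          # exponential backoff: next wait is twice as long
    return times

# Six retries starting from a 1 s RTO span a full minute:
print(retransmit_times(1.0, 6))  # [1.0, 3.0, 7.0, 15.0, 31.0, 63.0]
```

Whether this per-sender thinning is enough when millions of hosts back
off simultaneously is exactly the open question of the thread.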
- Follow-Ups:
- Re: [tlug] What would happen to the Internet if the US fell off the map
- From: Stephen J. Turnbull
- Re: [tlug] What would happen to the Internet if the US fell off the map
- From: Attila Kinali
- References:
- [tlug] What would happen to the Internet if the US fell off the map
- From: Josh Glover
- Re: [tlug] What would happen to the Internet if the US fell off the map
- From: Attila Kinali
- Re: [tlug] What would happen to the Internet if the US fell off the map
- From: Josh Glover
- Re: [tlug] What would happen to the Internet if the US fell off the map
- From: Attila Kinali