
Re: [tlug] Is Japan closer to U.S. or ...?



Well, to get small things out of the way first:

On 2014-12-23 11:05 +0000 (Tue), Darren Cook wrote:

> P.S. As some early holiday entertainment, "tar czf a.tar *" and I just
> saved the world. Phew. :-)
>    http://xkcd.com/1168/

I don't see what's so hard about this in the age of GNU tools.
"tar --help" will work just fine. :-)

On 2014-12-23 11:05 +0000 (Tue), Darren Cook wrote:

> Ilya's book is worth a cover-to-cover read, but this section covers the
> importance of latency:
> 
> http://chimera.labs.oreilly.com/books/1230000000545/ch10.html#LATENCY_BOTTLENECK

I've had a quick glance at this book, and it looks to be excellent.
It does have issues: I'm pretty sure I heard the word "bufferbloat"
long before 2010, and I'm definitely sure that the problems caused by
over-large buffers have been well known since the 90s. But that's kind
of a quibble, given how much good information is there that's not well
known outside of the professional network engineering community.

On 2014-12-24 11:16 +0900 (Wed), Stephen J. Turnbull wrote:

> My point is that (in another version of the Knuth quote) "97% of
> the time" the bottleneck is elsewhere and effort on optimization is
> wasted.

While this is true within computers, when it comes to services delivered
over wide-area networks, 97% of the time the biggest bottleneck
is the latency. This even applies to bulk data transfer operations if
the latency-bandwidth product is high (which is not unusual). As I
wrote to someone else on this topic, the conditions under which tens of
milliseconds of difference in latency do not make a difference are rare.
For this to be unimportant, you need two specific conditions:

    a) you are doing a bulk data transfer taking a non-trivial
    amount of time, and

    b) the bandwidth-delay product is within the size that the
    particular version of TCP in use by both ends can handle.

b) can vary considerably depending on the extensions available to both
sides, e.g., the TCP large window extensions.

Two good places to start when looking at the latter are [1] and [2].

[1]: https://en.wikipedia.org/wiki/Bandwidth-delay_product
[2]: http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/
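To make the arithmetic concrete, here's a back-of-the-envelope sketch of
the bandwidth-delay product calculation from [1]. The bandwidth and RTT
figures are hypothetical, chosen only for illustration:

```python
# Back-of-the-envelope bandwidth-delay product (BDP) calculation.
# The link speed and RTT below are illustrative, not measurements.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """BDP: the number of bytes that must be 'in flight' to keep
    the pipe full."""
    return bandwidth_bps / 8 * rtt_s

# E.g. a 100 Mbit/s path with a 130 ms RTT:
bdp = bdp_bytes(100e6, 0.130)
print(f"BDP: {bdp / 1024:.0f} KiB")               # ~1587 KiB

# Without the large window extensions, TCP's receive window tops out
# at 64 KiB, which caps throughput no matter how fast the link is:
cap_bps = 64 * 1024 * 8 / 0.130
print(f"Throughput cap: {cap_bps / 1e6:.1f} Mbit/s")  # ~4.0 Mbit/s
```

So on a long fat pipe, a sender limited to a 64 KiB window wastes more
than 95% of the available bandwidth just waiting on ACKs.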

But there's more to it than this; in the case of most web pages,
condition a) does not hold. Rather than a single large data transfer,
most web pages do many (usually dozens, not unusually even hundreds)
of HTTP requests. Persistent HTTP connections[3] help considerably
with this, since they not only avoid a great number of TCP 3-way
handshakes, but also let the connections (as they should) rapidly
progress through the TCP slow-start process and ramp up to full
bandwidth. That said, even HTTP/1.1 clients and servers still have
the option not to support pipelining[4], which brings RTT back into
the equation again. And, of course, there's also potentially the
head-of-line (HOL) blocking problem, as mentioned in [4] below.

[3]: https://en.wikipedia.org/wiki/HTTP_persistent_connection
[4]: https://en.wikipedia.org/wiki/HTTP_pipelining
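A crude model shows why per-request RTTs dominate here. This deliberately
ignores parallel connections, pipelining, and transfer time (the request
count and RTTs are made up), so treat it as an upper-bound sketch rather
than a real page-load predictor:

```python
# Crude model: RTT cost of loading a page of many small objects.
# One RTT per request/response; a non-persistent setup pays an extra
# RTT per request for the TCP 3-way handshake. Transfer time,
# parallelism, and pipelining are deliberately ignored.

def load_time_s(n_requests: int, rtt_s: float, persistent: bool) -> float:
    handshakes = 1 if persistent else n_requests
    return (handshakes + n_requests) * rtt_s

for rtt_ms in (30, 130):
    rtt = rtt_ms / 1000
    print(f"{rtt_ms:3d} ms RTT, 60 requests: "
          f"non-persistent {load_time_s(60, rtt, False):5.2f} s, "
          f"persistent {load_time_s(60, rtt, True):5.2f} s")
```

Even with persistent connections, 60 sequential requests at a 130 ms RTT
cost several seconds in round trips alone, which is exactly why the page
design tricks below matter.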

A lot of work goes into cleverly designing web pages to optimize their
load time, including especially where and when things are loaded on the
page. This is why we have tools such as the Network tab in Google
Chrome's Developer Tools pane (which will visually show you precisely
where you're seeing the effects of connection latency) and Google's
PageSpeed Module[5], which helps you choose and apply optimizations that
reduce the effect of latency throughout the entire system.

[5]: https://developers.google.com/speed/pagespeed/module

Beyond that there's a lot more to almost any application than just TCP;
as we've seen in the HTTP example above, depending on their design and
how they're used, higher-level protocols running over TCP can still
suffer from longer RTTs even if TCP is able to handle the link well.

And then we come to UDP, which is what is typically used for games and
the like. Even in a fairly well-optimized game such as World of Tanks, a
difference between a 100 ms and 130 ms RTT can easily be noticed by an
experienced player. I have several hundred hours on their US west coast
server and a couple of thousand hours on their Singapore server, and
easily notice the difference in responsiveness (and the difference in
responsiveness on a single server when similar changes happen over the
lifetime of a battle).

> But I have some experience with papers by financial economists, and
> don't think they know anywhere near as much as they claim. Their
> statistical techniques are mathematically sophisticated but the claims
> of applicability to real data are never substantiated by successful
> prediction in a valid sample.

Well, financial markets are a weird thing, because they are actually
influenced by the things people write about them. My favourite example
of this is the way option prices changed after Black and Scholes
published their famous equation. Prior to that, pricing of options
did not follow their equation very well at all; once they published it,
traders started using it to price their options, and within a few years
all the options markets had fallen into line with the Black-Scholes
equation.
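For the curious, the textbook Black-Scholes formula for a European call
is short enough to sketch in a few lines (the spot, strike, rate, and
volatility figures in the example are arbitrary):

```python
# Textbook Black-Scholes price for a European call option.
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    N = NormalDist().cdf  # standard normal CDF
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# An at-the-money call, one year out, 5% rate, 20% volatility:
print(f"{bs_call(100, 100, 1.0, 0.05, 0.2):.2f}")  # ~10.45
```

The remarkable thing, per the story above, is that traders adopting
this formula is what made market prices conform to it.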

But you're correct that attempting market predictions in general is
nonsense. The basic issue, as I see it, is that the markets are already
predicting various economic realities, and when you're trying to predict
the market, you're trying to predict what prediction it's going to make,
which strikes me as just silly.

> Cleverness is not a virtue in the conduct of science, unfortunately.
> I'd be better served with half the cleverness and twice the
> diligence...

I would too, but being clever is at least more fun than being diligent. :-)

On 2014-12-26 01:09 +0900 (Fri), Stephen J. Turnbull wrote:

> The point was that I don't feel qualified to talk about the ins and
> outs of TCP/IP performance tuning, but I was very bothered by the form
> of the original request (which is closer, for arbitrary metrics on a
> delay-prone network in the sphere).

As a network engineer, I found the original request pretty reasonable.
Yes, we could probably give better advice if we knew the particular
application, but given that both the U.S. and Singapore links from
Japan have gobs of bandwidth, I find it very difficult to think of any
application that would run as well from a well-connected U.S. server as
it would if it were on a well-connected server in Singapore. Basically,
if you've got to serve Japan from one of the two, you're very, very
unlikely to go wrong by picking a good hosting service in Singapore.

And tangentially related to this, I've recently been reading Arthur C.
Clarke's book _How the World Was One_, which is a fascinating history of
the rise of global communications.

cjs
-- 
Curt Sampson         <cjs@example.com>         +81 90 7737 2974

To iterate is human, to recurse divine.
    - L Peter Deutsch

