Mailing List Archive: tlug
Re: [tlug] Re: Talk about fast HTTP
- Date: Sat, 18 Oct 2008 10:43:33 +0900
- From: "Bruno Raoult" <braoult@example.com>
- Subject: Re: [tlug] Re: Talk about fast HTTP
- References: <48F850AA.705@bebear.net> <13249286.664581224234776140.JavaMail.root@gla1-mail1> <20081017230648.GD1501@lucky.cynic.net>
On Sat, Oct 18, 2008 at 8:06 AM, Curt Sampson <cjs@example.com> wrote:

> On 2008-10-17 10:12 +0100 (Fri), Sach Jobb wrote:
>> Sorry, I wasn't aware of the fact that you were soliciting responses.
>> This is actually quite interesting to me, since my company is
>> consistently tuning its web applications for performance.
>
> I wonder if it's worth having a panel discussion after the talk.
> It sounds as if there may be several of us who design and build
> high-performance HTTP systems who might have useful experience to share.

This could be interesting if you drop the word "HTTP". I believe that most companies are looking for networking performance (I really mean system and software tuning here, not the physical limits of the hardware or the lines), whatever protocol runs on top of TCP.

> As an example, one of Starling's clients has a site that consistently
> saturates 5 Gbps of connectivity for several hours every day. (Yes,
> that's the equivalent of five GigE links, or fifty 100Base-T links.) One
> of the interesting things that came out of this is that bandwidth from
> the disk is often the system bottleneck, and you may need to construct a
> system that looks rather different from what you would expect in order
> to deal with this.

Sorry, I don't really understand what you mean: what exactly do they saturate?

On the same subject, everybody who has worked on grids has met similar issues, when thousands of CPUs (I mean 5K+) work together on a single calculation (for instance, to calculate VaR in banks). There are always a few bottlenecks, which come, in order:

1) sending data from the data source to the nodes: the database, the underlying disks, the network, etc.

2) the necessary exchanges between the calculating nodes (node X's and node Y's results will be used by the waiting node Z, which has already received its results from node T).

Mostly for hardware and development cost reasons, few companies have jumped to newer technologies such as InfiniBand. But they know that RDMA would of course solve the inter-node data-exchange latency, and, generally speaking, InfiniBand should remove the network bandwidth and latency bottlenecks. That would leave only a few issues, including data-source performance (your "HTTP"). This is exactly what we could discuss here, and it obviously (mostly?) includes design, as you say.

br.

--
2 + 2 = 5, for very large values of 2.
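To put the 5 Gbps figure above in perspective, here is a quick back-of-envelope check (plain Python; the link speeds are the standard ones, and the per-hour figure is an assumption that ignores protocol overhead and page-cache hits):

    # Back-of-envelope check of the bandwidth figures quoted above.
    site_bw_gbps = 5.0            # sustained traffic of the site in question
    gige_gbps = 1.0               # one Gigabit Ethernet link
    fast_eth_gbps = 0.1           # one 100Base-T link

    print(site_bw_gbps / gige_gbps)      # 5.0  -> five GigE links
    print(site_bw_gbps / fast_eth_gbps)  # 50.0 -> fifty 100Base-T links

    # What the disks must deliver to keep those links full for an hour,
    # assuming no protocol overhead and nothing served from the page cache:
    tb_per_hour = site_bw_gbps * 1e9 / 8 * 3600 / 1e12
    print(f"{tb_per_hour:.2f} TB/hour")  # 2.25 TB/hour off the disks

At roughly 2.25 TB per hour of sustained reads, it is easy to see how the disks, rather than the network, become the limiting component.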
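On the disk-to-network path itself, the classic Linux technique is sendfile(2), which keeps the copy entirely kernel-side. A minimal sketch using Python's os.sendfile wrapper; serve_file is a hypothetical helper, and the connected socket is assumed to come from an accept() loop elsewhere:

    import os
    import socket

    def serve_file(conn: socket.socket, path: str) -> None:
        """Stream a file to a connected socket without copying it through
        userspace: the data moves disk -> page cache -> NIC, all in-kernel."""
        with open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            offset = 0
            while offset < size:
                # sendfile(2) returns the number of bytes actually sent.
                sent = os.sendfile(conn.fileno(), f.fileno(),
                                   offset, size - offset)
                if sent == 0:  # peer closed the connection
                    break
                offset += sent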
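And for the two grid bottlenecks listed above, a toy model makes the ordering concrete; every constant here is an illustrative assumption, not a measurement:

    # Toy model of the two grid bottlenecks; all numbers are assumptions.
    nodes = 5000                  # "5K+" CPUs, as above
    input_per_node_MB = 50        # data each node pulls from the source
    source_bw_Mbps = 1000         # aggregate bandwidth out of the data source

    # Bottleneck 1: fan-out is serialized on the data source's bandwidth.
    fan_out_s = nodes * input_per_node_MB * 8 / source_bw_Mbps
    print(f"fan-out from source: {fan_out_s:.0f} s")   # 2000 s here

    # Bottleneck 2: inter-node exchanges are dominated by per-message
    # latency when the results themselves are small (node Z waiting on
    # X, Y and T).
    msgs_per_node = 1000          # small messages per node per step
    eth_latency_s = 100e-6        # ~100 us round trip on GigE
    rdma_latency_s = 5e-6         # a few us with RDMA over InfiniBand
    print(f"exchange, GigE: {msgs_per_node * eth_latency_s * 1e3:.0f} ms/step")
    print(f"exchange, RDMA: {msgs_per_node * rdma_latency_s * 1e3:.0f} ms/step")

With numbers in this range, the data source dominates wall-clock time, and the per-message latency gap is exactly where RDMA over InfiniBand pays off, matching the ordering given in the message above.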
- Follow-Ups:
  - Re: [tlug] Re: Talk about fast HTTP (From: Curt Sampson)
  - Re: [tlug] Re: Talk about fast HTTP (From: Sach Jobb)
- References:
  - [tlug] Re: Talk about fast HTTP (From: Edward Middleton)
  - Re: [tlug] Re: Talk about fast HTTP (From: Sach Jobb)
  - Re: [tlug] Re: Talk about fast HTTP (From: Curt Sampson)