
Re: IDE vs SCSI for RAID



"Stephen J. Turnbull" wrote:

> Yeah, well, my ex-wife works for Quantum, that would explain _any_
> reliability problems.  ;-)

I had a similar thing to say about Fujitsu. That's why I avoid almost
everything from them.

> But you're comparing apples and oranges, here.  It's ambiguous the way
> you put it, but I gather your SCSIs were not RAID'ed.  Of course an
> older set of ordinary SCSI drives is going to compare badly with a RAID
> array built from new drives.  Nor do you specify what variety of SCSI,
> etc.  And how about the file systems?  There's a good chance that you
> switched from ext2 to Reiser or something like that (you don't say so
> I assume not, but that would make a difference too).

No, the setups were pretty much the same apart from SCSI vs IDE. The SCSIs (68-pin
Wide) were also on RAID-5. The Quantums were just a year old when the
first one developed bad sectors. That one was replaced by a new Quantum (big
mistake, I know) of the same size, but when two more died within weeks, we
realised it was time to dump the system.
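
For what it's worth, a periodic check along these lines would probably have
flagged the failing drives a bit earlier. Just a sketch; it assumes
smartmontools is installed (it wasn't part of our setup), and the device
names are only examples:

  # A degraded array shows up as something like [UU_] instead of [UUU]
  cat /proc/mdstat

  # Ask the drive itself for its error/defect counters
  # (use the right device name, e.g. /dev/hda for IDE, /dev/sda for SCSI)
  smartctl -a /dev/sda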

I was planning to switch to ReiserFS, but after hearing bad things about RAID-5
and Reiser together, I left it on ext2. Might do so in the future. I guess the main
problem was Quantum and had nothing to do with SCSI, but the cost is
far higher for high-end SCSI systems. That would not have been a
hurdle in the first place if we were supporting an airline reservation system. :-)
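
If I do make the switch later, the rough plan would be something like this
(sketch only; the /data mount point is just an example):

  # back up the data first, then rebuild the filesystem on the md device
  umount /data
  mkreiserfs /dev/md4
  mount -t reiserfs /dev/md4 /data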

> Also, how are you measuring the access times?  What really matters is
> does throughput hold up when you're thrashing?  This is where IDE
> tends to fall down.  If all that space is serving a couple score
> humans running MS Office, the file server is not going to thrash.  If
> it's trying to support an airline reservation system, you're gonna
> have problems.

Output from 'hdparm -tT /dev/md4' on the IDE server (at a busy time):

/dev/md4:
 Timing buffer-cache reads:   128 MB in  0.88 seconds =145.45 MB/sec
 Timing buffered disk reads:  64 MB in  1.66 seconds = 38.55 MB/sec

The buffered disk reads were about 24 MB/sec on the SCSI setup.
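
Of course hdparm only times a single sequential reader, so it says little
about what happens when the array is being thrashed. Something like this
(rough sketch only) gives a better picture of concurrent throughput:

  # Four readers pulling 256 MB each from different parts of the array;
  # total MB divided by the elapsed time gives a rough concurrent figure.
  for i in 1 2 3 4; do
      dd if=/dev/md4 of=/dev/null bs=1024k count=256 skip=$((i * 1024)) &
  done
  time wait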

> My point is not that you can't do a good job at a lower price with
> IDE.  It's that it depends on how you use the system.  Me, I would
> gold-plate the bus, the RAM, and the disk controller/drive combo.  CPU
> speed etc is not as important for most applications, in particular not
> for servers which are normally I/O bound.  In I/O bound applications,
> SCSI definitely has an edge.

Right about that. I do wonder how busy the server really gets, though, since
it's only on a 100 Mbps LAN. That's nowhere near the 38 MB/sec the disks can
churn out.
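
Quick sanity check on that, wire speed only, before any Ethernet/TCP overhead:

  # 100 Mbit/s divided by 8 bits per byte = theoretical ceiling in MB/sec
  echo 'scale=1; 100 / 8' | bc
  12.5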
