tlug Mailing List Archive

RE: [tlug] Backup to IDE HD instead of Tape
- Date: Thu, 18 Sep 2003 10:15:57 +0200
- From: patrick.niessen@example.com
- Subject: RE: [tlug] Backup to IDE HD instead of Tape
> -----Original Message-----
> From: James Cluff [mailto:jc@example.com]
> Sent: Thursday, September 18, 2003 2:41 PM
> To: tlug@example.com
> Subject: RE: [tlug] Backup to IDE HD instead of Tape
>
> > On Wednesday, July 9, 2003, at 03:30 PM, patrick.niessen@example.com wrote:
> >
> > > I utilise all the latest technology available in Akiba (250GB SATA
> > > drives & 3ware SATA RAID), but with a budget of +-300,000 yen still
> > > well below the price of a DDS4, Ultrium or a Tape Library.
> >
> > I saw those SATA drives at Com/3 on DIY, IIRC. Takaiyo!
> > Did you consider going with the 120GB disks until the price comes down
> > on the 250s? I figured that by the time they fill the 120GB, the 250GB
> > will be about 10,000 yen.
>
> > I considered the 200GB ones, as the 250GB were not stocked well. The
> > Compaq case is pretty cramped, so I needed to go for the biggest ones
> > to get 500GB with RAID 5. The controller can accommodate up to 8
> > drives, so if I change the case I can add more storage later. The idea
> > is to be able to pass off the additional HDs as "replacement/upgrade"
> > so that they do not go into the asset register but are instead booked
> > as "accessories" in the accounts. That way I can buy more of them,
> > spread over this year and next. I definitely need a hot and a cold
> > spare, but will wait a while for strategic reasons ;-)
>
> I hung onto this message, originally posted months ago, because I found
> it interesting. I was wondering, Patrick: how fast can you back up onto
> this system?

Well, it's actually very fast. The file server backup is incremental, using rsync. After synchronisation I copy the whole backup with "cp -al", so in effect only changed files are copied; for all the others, hard links are created.
Wed Sep 17 20:00:01 JST 2003
Wed Sep 17 20:00:01 JST 2003
Wed Sep 17 20:00:02 JST 2003
Wed Sep 17 20:00:38 JST 2003
Wed Sep 17 20:01:21 JST 2003
Wed Sep 17 20:06:01 JST 2003
Wed Sep 17 20:06:21 JST 2003
Wed Sep 17 20:07:15 JST 2003
Wed Sep 17 20:27:29 JST 2003
Wed Sep 17 20:27:32 JST 2003
Wed Sep 17 20:27:33 JST 2003
Wed Sep 17 20:27:34 JST 2003
Wed Sep 17 20:27:36 JST 2003
Wed Sep 17 20:27:36 JST 2003

This is a typical log for a backup of all servers, around 20GB in total. 3.8GB are written directly (not logged here) from the MS Exchange server to a file using Samba.

Other interesting stats:

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/root   3.7G  279M   3.3G    8%  /
/dev/sda5   3.7G  1.1G   2.5G   29%  /usr
/dev/sda6   460G   65G   396G   14%  /srv
none         46M     0    46M    0%  /dev/shm

The monthly backup is to remote tape using xfsdump, which takes rather long, due to the xfsdump integrity checks and the ssh tunnelling, I suppose. The job started at 8:00 in the morning and finished at 12:35, so about 4 1/2 hours.

xfsdump -L 2003-8_Monthly -s backup/Weekly.1 - /srv | ssh uljptyo0004 "cat > /dev/st0"

Date: Thu, 18 Sep 2003 12:35:50 +0900 (JST)
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 2.2.4 (dump format 3.0) - Running single-threaded
xfsdump: level 0 dump of uljptyo0005:/srv
xfsdump: dump date: Thu Sep 18 08:00:01 2003
xfsdump: session id: dc199224-a94f-4830-b91f-478bdff44cff
xfsdump: session label: "2003-8_Monthly"
xfsdump: ino map phase 1: parsing subtree selections
xfsdump: ino map phase 2: constructing initial dump list
xfsdump: ino map phase 3: pruning unneeded subtrees
xfsdump: ino map phase 4: estimating dump size
xfsdump: ino map phase 5: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 14155065024 bytes
xfsdump: /var/lib/xfsdump/inventory created
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 14069129312 bytes
xfsdump: dump size (non-dir files) : 14038333016 bytes
xfsdump: dump complete: 16482 seconds elapsed
xfsdump: Dump Status: SUCCESS

I have to continue my write-up again, and upload the daily / weekly and monthly scripts (although they are very simple and bad, with no error checking).

Patrick
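For scale, the xfsdump summary above (14069129312 bytes in 16482 seconds) works out to well under 1 MB/s sustained, which is why the monthly job takes four and a half hours:

```shell
# Bytes per second, taken straight from the xfsdump summary figures.
echo $(( 14069129312 / 16482 ))   # prints 853605, i.e. roughly 0.85 MB/s
```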