
[tlug] dd block size
- Date: Wed, 15 Aug 2007 22:22:22 -0400
- From: jep200404 <jep200404@example.com>
- Subject: [tlug] dd block size
- References: <d8fcc0800707162334w4c694ba2yd2b9b296e7964f94@mail.gmail.com> <469C673B.20104@stoicviking.net> <d8fcc0800707170018q1382f7a3me9d151ecc213aed5@mail.gmail.com> <200707172022.56133.tlug@extellisys.net> <20070717163913.109ec888.attila@kinali.ch> <d8fcc0800707171616r4d4b8d44i17d88d220b80a05d@mail.gmail.com>
Josh Glover wrote:
> Is it necessary to set the blocksize at all with dd?
Usually not, but there are exceptions.
It depends on what you are doing I/O with.
Even when it is not _necessary_ to set the blocksize,
doing so can significantly affect I/O speed.
In Jacob Hopkins' COLUG presentation about recording compressed
digitized video, he used dd bs=64k to record the video,
and mentioned that he was _required_ to use that block size.
I don't recall what the bad consequences of not using a 64k block
size were.
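Purely for illustration (the device and file names here are guesses,
not from his talk), that kind of invocation looks like:

    # Hypothetical example: capture from a video device with a fixed
    # 64k block size; /dev/video0 and capture.mpg are assumed names.
    dd if=/dev/video0 of=capture.mpg bs=64k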
When reading from or writing to the vast majority of hard drives,
the driver makes it work pretty much regardless of the block size
specified by dd.
Even if the driver can work with any block size,
the block size can have a significant effect on the speed.
Someone mentioned this recently on TLUG.
Here's a summary of the time needed to wipe a 20GB drive. [1]
Block Size    Time
----------    -------------
1 megabyte     5853 seconds
512 bytes     12968 seconds
1 byte        60308 seconds   (just for fun!)
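To put those in proportion, a quick bc check of the table: the 512 byte
wipe took a bit over twice as long as the 1 megabyte wipe, and the
1 byte wipe roughly ten times as long.

    $ bc
    scale=1
    12968/5853
    2.2
    60308/5853
    10.3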
I don't know what behavior is _guaranteed_ to happen
at the end of a drive when large block sizes are used that do
not evenly divide into the size of the disk. In this case,
it looks like nothing bad happened. I.e., all 20020396032
bytes transferred, even when I used the 1 megabyte block
size which doesn't divide evenly into the drive size.
[jep@example.com ~]$ bc
39102336*512
20020396032
scale=10
./(1024^2)
19092.9375000000
[jep@example.com ~]$
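(That `.' in bc stands for the previously printed value.)
Carrying the arithmetic one step further: 19092 whole 1 megabyte blocks
cover 20019412992 bytes, which leaves a 983040 byte (1920 sector) tail,
and that tail evidently got written anyway.

    $ bc
    19092*1024^2
    20019412992
    20020396032-19092*1024^2
    983040
    983040/512
    1920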
When I care, I pick a block size that divides evenly into
/proc/ide/hda/capacity (times 512). That capacity
has plenty of prime factors,
39102336: 2 2 2 2 2 2 2 3 3 7 13 373
which leaves many convenient block sizes to choose from.
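A quick shell check along those lines (just a sketch; the 516096 byte
candidate is one cylinder, 1008 sectors of 512 bytes, per the fdisk
output in [1]):

    # Sketch: does a candidate block size divide the drive evenly?
    # A remainder of 0 means it fits exactly.
    capacity=$(cat /proc/ide/hda/capacity)   # drive size in 512 byte sectors
    echo $(( capacity * 512 % 516096 ))      # prints 0 for this drive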
Mere arithmetic convenience can be another reason to give dd
a block size other than 512 bytes. When reading ISO-9660 images
from CDs and DVDs, I always specify a 2k block size. [2]
Since mkisofs reports the size of the image it generates as a number
of 2k blocks, using 2k blocks lets me use that number as is,
without multiplying by 4 to convert it to
512 byte blocks. Likewise, I just use the block size
and count reported by isoinfo, without converting to
512 byte blocks. [3]
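A sketch of one way to wire that up (assuming isoinfo -d prints its
usual "Logical block size is:" and "Volume size is:" lines; adjust the
device and output names to taste):

    # Sketch: take the block size and block count straight from
    # isoinfo -d and hand them to dd, no 512 byte block conversion.
    bs=$(isoinfo -d -i /dev/cdrom | awk -F': ' '/^Logical block size is/ {print $2}')
    count=$(isoinfo -d -i /dev/cdrom | awk -F': ' '/^Volume size is/ {print $2}')
    dd if=/dev/cdrom bs=$bs count=$count of=disc.iso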
Jim
----------------------------------------------------------
[1]
root@example.com cat /proc/ide/hda/capacity
39102336
root@example.com fdisk -l /dev/hda
Disk /dev/hda: 20.0 GB, 20020396032 bytes
16 heads, 63 sectors/track, 38792 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Disk /dev/hda doesn't contain a valid partition table
root@example.com date;time dd if=/dev/zero bs=1M of=/dev/hda;date
Tue Aug 14 22:17:22 EDT 2007
dd: writing `/dev/hda': No space left on device
19093+0 records in
19092+0 records out
20020396032 bytes transferred in 5853.172711 seconds (3420435 bytes/sec)
real 97m33.175s
user 0m0.142s
sys 5m0.982s
Tue Aug 14 23:55:00 EDT 2007
root@example.com date;time dd if=/dev/zero bs=512 of=/dev/hda;date
Tue Aug 14 23:55:01 EDT 2007
dd: writing `/dev/hda': No space left on device
39102337+0 records in
39102336+0 records out
20020396032 bytes transferred in 12968.859289 seconds (1543728 bytes/sec)
real 216m8.861s
user 0m20.598s
sys 3m32.826s
Wed Aug 15 03:31:18 EDT 2007
root@example.com date;time dd if=/dev/zero bs=1 of=/dev/hda;date
Wed Aug 15 03:31:18 EDT 2007
dd: writing `/dev/hda': No space left on device
20020396033+0 records in
20020396032+0 records out
20020396032 bytes transferred in 60308.103305 seconds (331969 bytes/sec)
real 1005m8.105s
user 166m7.907s
sys 628m9.755s
Wed Aug 15 20:16:30 EDT 2007
root@example.com
[jep@example.com ~]$
----------------------------------------------------------
[2] I'm not worrying about the rare CD-ROMs with 512 byte sectors
like Sun and some others used. Hopefully, those were of
hard drive-ish filesystems, and not ISO-9660 filesystems.
[3] Hmmm. I wonder what block size isoinfo reports for those
rare 512 byte sectored CD-ROMs.