Backing up a disk with dd while saving space

The problem with dd is that it copies the whole disk. In reality, the disk might only hold 10 GB of data, but the dump file still has to be the size of the disk, let's say 100 GB.

So, how do we get a dump file that is only around 10 GB in size?

The answer is simple: compressing a zero-filled file is very efficient (it compresses down to almost nothing).
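If you want to see this for yourself, here is a quick sketch (the 1 GB figure is just an example); a gigabyte of zeros gzips down to roughly a megabyte:

# a sketch: compress 1 GB of zeros and count the compressed bytes
dd if=/dev/zero bs=1M count=1024 | gzip -c | wc -c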

So, first we create a zero fill file with the following command. I recommend you stop the fill while there is still a bit of free space left on the disk, especially if the disk has a database running that might need to insert rows, so stop the running fill with Ctrl+C before you actually fill the whole disk:

cat /dev/zero > zero3.fill;sync;sleep 1;sync;
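If you would rather not babysit the fill and hit Ctrl+C yourself, here is a rough sketch that leaves about 2 GB free automatically. It assumes GNU df and that the disk you are blanking is mounted at /mnt/data; adjust the path and the headroom to taste:

# a sketch: fill free space with zeros but stop roughly 2 GB short of full
FREE_MB=$(df -m --output=avail /mnt/data | tail -1)
dd if=/dev/zero of=/mnt/data/zero3.fill bs=1M count=$((FREE_MB - 2048))
sync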

At this point, you can either delete the zero fill file or leave it; it will not make a difference in the dump size. Deleting it is recommended, but it won't change much either way.

Notes
sync flushes any remaining buffers in memory to the hard drive.
If the fill stops for any reason, keep the file already written and make a second one, and a third, and whatever it takes; do not delete the existing ones. Just make sure almost all of your disk's free space ends up occupied by zero fill files (you can verify with df, as in the sketch below).
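A quick way to check, again assuming the disk is mounted at /mnt/data:

# a sketch: confirm that almost no free space remains before you start the dump
df -h /mnt/data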

Now, to dd and compress on the fly (so that you won't need much space on the target drive).

If you want to monitor the dump, you can use pv

dd if=/dev/sdb | pv -s SIZEOFDRIVEINBYTES | pigz --fast > /targetdrive/diskimage.img.gz
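SIZEOFDRIVEINBYTES is the exact size of the source drive in bytes. A quick sketch of one way to get it, assuming util-linux's blockdev is available (it is on most distributions):

# a sketch: feed the drive's exact byte size straight into pv
SIZE=$(blockdev --getsize64 /dev/sdb)
dd if=/dev/sdb | pv -s "$SIZE" | pigz --fast > /targetdrive/diskimage.img.gz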

Or, if you like, you can use parallel bzip2 instead; in this example the source is a 2 TB hard drive:

dd if=/dev/sda | pv -s 2000398934016 | pbzip2 --best > /somefolder/thefile.img.bz2

Without the monitoring:

dd if=/dev/sdb | pigz --fast > /targetdrive/diskimage.img.gz

Now, to dump this image back to a hard drive

Note that using pigz for the decompression in this situation is not recommended. Something along the lines of this:

DO NOT USE this one; use the one with gunzip below:
pigz -d /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdd

will decompress the file in place on disk rather than streaming it through the pipe, so the recommended way to do it on the fly is with gunzip. This is also true because there is no benefit from parallel gzip while decompressing:

gunzip -c /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdb
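SIZEOFIMAGEINBYTES is the uncompressed size, i.e. the size of the original drive (don't rely on gzip -l for this; its uncompressed-size field wraps around at 4 GB). A rough sketch, assuming the drive you are restoring to is the same size as the original:

# a sketch: use the target drive's byte size as pv's progress estimate
SIZE=$(blockdev --getsize64 /dev/sdb)
gunzip -c /hds/www/vzhost.img.gz | pv -s "$SIZE" | dd of=/dev/sdb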

Or, if you prefer pigz, add -c (--stdout) so it streams to the pipe instead of decompressing in place:

pigz -dc /hds/www/vzhost.img.gz | dd of=/dev/sdd

My records
The following are irrelevant to you; they are strictly for my records.

mount -t ext4 /dev/sdb1 /hds

dd if=/dev/sdc | pv -s 1610612736000 | pigz --fast > /hds/www/vzhost.img.gz

One that covers dumping only part of a disk

Assume I want to copy the first 120 GB of a large drive where my Windows partition lives; I want it compressed and I want the free space cleared.

First, in Windows, use SDELETE to zero the empty space:

sdelete -z c:

Now, connect the disk to a Linux machine and dump the first part of it:

dd if=/dev/sdb bs=512 count=235000000 | pigz --fast > /hds/usb1/diskimage.img.gz
dd if=/dev/sdb bs=512 count=235000000 | pbzip2 > /hds/usb1/diskimage.img.bz2

If it is an Advanced Format drive (4K sectors), you would probably do
dd if=/dev/sdb of=/hds/usb1/firstpartofdisk.img bs=4096 count=29000000

or something like that
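The counts above are just the target size divided by the block size; a small sketch of that arithmetic (120 GB is the example figure here):

# a sketch: work out dd's count from the size you want and the sector size
BYTES=$((120 * 1000 * 1000 * 1000))
echo $((BYTES / 512))    # about 234 million 512-byte sectors
echo $((BYTES / 4096))   # about 29 million 4096-byte sectors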

Now, if we have a disk image with the extension .bin.gz and we want to extract it to a different directory, we can do it as follows:

gunzip -c /pathto/my_disk.bin.gz > /targetdir/my_disk.bin
