Backing up a disk with DD while saving space

The problem with dd is that it copies the whole disk. In reality, the disk could hold only 10GB of data, but the dump file still has to be the size of the disk, let's say 100GB.

So, how do we get a dump file that is only around 10GB in size?

The answer is simple: compressing a zero-filled file is very efficient (it compresses down to almost nothing).

So, first we create a zero fill file with the following command. I recommend you stop the fill while there is still a bit of space left on the disk, especially if the disk has a running database that might need to insert rows; stop the running fill with Ctrl+C before you actually fill the whole disk

cat /dev/zero > zero3.fill;sync;sleep 1;sync;

At this point, you can either delete the zero3.fill file or leave it; it makes no difference in the dump size. Deleting it is still recommended for tidiness.
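If you decide to delete it:

rm -f zero3.fill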

Notes
sync flushes any remaining buffers in memory to the hard drive
If the process stops for any reason, keep the file that was already written and make a second one, a third, and however many it takes; do not delete the existing ones, just make sure almost all of your disk's free space ends up occupied by zero fill files (see the example below)
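For example, if the first fill was interrupted, just keep going with new file names, then check that almost no free space remains:

cat /dev/zero > zero2.fill;sync;sleep 1;sync;
df -h .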

Now, to dd and compress on the fly (so that you won't need much space on the target drive)

If you want to monitor the dump, you can use pv

dd if=/dev/sdb | pv -s SIZEOFDRIVEINBYTES | pigz --fast > /targetdrive/diskimage.img.gz
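If you don't know the drive size in bytes, blockdev (part of util-linux) can report it, and you can substitute it straight into the command:

blockdev --getsize64 /dev/sdb

dd if=/dev/sdb | pv -s $(blockdev --getsize64 /dev/sdb) | pigz --fast > /targetdrive/diskimage.img.gz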

Or, if you like, you can use parallel bzip2 (pbzip2) like so; in this example the source is a 2TB hard drive

dd if=/dev/sda | pv -s 2000398934016 | pbzip2 --best > /somefolder/thefile.img.bz2

Without the monitoring

dd if=/dev/sdb | pigz --fast > /targetdrive/diskimage.img.gz

Now, to dump this image back to a hard drive

Note that using pigz -d for the decompression in this situation is not recommended; something along the lines of this

DO NOT USE this one, use the one with gunzip below
pigz -d /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdd

will not do what you want: without the -c switch, pigz decompresses the file in place (replacing the .gz file with the decompressed image) and sends nothing through the pipe. The recommended way to do it on the fly is with gunzip, and this costs nothing, since there is little benefit from parallel gzip while decompressing anyway

gunzip -c /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdb

Or with pigz, provided you add the -c switch so the decompressed data goes to the pipe rather than to a file:

pigz -dc /hds/www/vzhost.img.gz | dd of=/dev/sdd

My records
The following is irrelevant to you; it is strictly for my records

mount -t ext4 /dev/sdb1 /hds

dd if=/dev/sdc | pv -s 1610612736000 | pigz --fast > /hds/www/vzhost.img.gz

One that covers dumping only part of a disk

Assume I want to copy the first 120GB of a large drive where my Windows partition lives. I want it compressed, and I want the free space cleared.

First, in Windows, use SDELETE to zero the empty space

sdelete -z c:

Now, attach the disk to a Linux machine and dump it with one of the following (the target here is mounted at /hds/usb1)

dd if=/dev/sdb bs=512 count=235000000 | pigz --fast > /hds/usb1/diskimage.img.gz
dd if=/dev/sdb bs=512 count=235000000 | pbzip2 > /hds/usb1/diskimage.img.bz2

If it is an advanced format drive (4K sectors), you would probably do
dd if=/dev/sdb of=/hds/usb1/firstpartofdisk.img bs=4096 count=29000000

or something like that
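If you would rather not guess at the count, the partition table tells you exactly how many sectors you need; a quick sketch, assuming the source disk is /dev/sdb:

fdisk -l /dev/sdb

Take the End sector of the last partition you want to include, add 1 (sectors are counted from 0), and use that as the count with bs=512; if you use a larger bs, convert accordingly.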

Now, if we have a disk image with the extension .bin.gz and we want to extract it to a different directory, we can pipe it as follows

gunzip -c /pathto/my_disk.bin.gz > /targetdir/my_disk.bin

Shrinking Linux disks in VMware Workstation

Here is the theory behind what we are doing

1- Fill all empty space with zeros. You can do that by writing a gigantic file full of zeros to use up all the free space; the write simply fails when no space is left for the file

cat /dev/zero > zero.fill;sync;sleep 1;sync;

2- Delete the file we just made; the zeros are left behind

rm -f zero.fill

3- Shut down the VM, and go to the Windows host running VMware Workstation

4- Navigate to the directory where the .vmdk files are located.
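5- Shrink the virtual disk with the vmware-vdiskmanager tool that ships with Workstation; a sketch, assuming the disk file is called mydisk.vmdk (run the tool from the Workstation installation directory if it is not on your PATH):

vmware-vdiskmanager -k "mydisk.vmdk"

The -k switch shrinks the disk, reclaiming the zeroed blocks we just created.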

Changing the root password in an LXC container

If you forget your LXC container’s password, you can reset it from within the LXC host

1- chroot into the container's filesystem
chroot /var/lib/lxc/vm51/rootfs

2- issue the passwd command and enter the new password for the container

3- type exit to get back to the LXC host prompt
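The three steps can also be combined into a single command; a sketch, assuming the container is named vm51 and uses the default rootfs path:

chroot /var/lib/lxc/vm51/rootfs passwd root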

 

Another way is to simply fire the container up, then run

lxc-attach -n vm51

then execute the passwd command like you normally do, then the exit command

It is very important to understand that if you don't have something such as fail2ban on your server, it could be that someone has brute-forced their way into your container and changed the root password. In that case, I would strongly recommend deleting the whole container and re-creating it from scratch.

The reason is that we don't know what the attacker (if there was one) may have installed inside the system

Tar error and how to overcome it

For some reason, while I was extracting a half-terabyte tar.gz file with the following command

tar -xvf thisfile.tar.gz

I got the following errors

tar: Skipping to next header
tar: Error exit delayed from previous errors

So, it turns out that tar archives terminate with a big run of zero blocks. To tell tar not to consider that run of zeros a terminator, you use the -i switch (it goes before the f, not after, because f must be immediately followed by the file name)

So the command would look like

tar -xvif thisfile.tar.gz

It seems to have worked for me; it may or may not work for you, but this is one of the reasons you could get this error, since tar does not tell you what the exact problem is.
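For what it's worth, -i is the short form of GNU tar's --ignore-zeros, so this is equivalent and a little more self-documenting:

tar --ignore-zeros -xvf thisfile.tar.gz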

Using axel, quick example


axel -a -s 10240000 -n 5 URL

-a is the nicer single-line progress view
-s is the maximum speed in bytes per second; here 10240000 is about 10MB/s (roughly 80Mbps)
-n is the maximum number of connections

———————————–

To download a list of files
1- Put them in a text file (make sure the line endings are Unix-style \n)
2- Run a while loop from terminal

while read url; do axel -a -n3 "$url"; done < /root/download124.txt

Tar and compress a directory on the fly with multithreading

There is not much to it: the tar command piped into any compression program of your choice.

For speed, rather than using gzip, you can use pigz (to employ more processors / processor cores), or pbzip2, which is slower but compresses better

cd to the directory that contains your folder

then

tar -c mysql | pbzip2 -vc > /hds/dbdirzip.tar.bz2

for more compression
tar -c mysql | pbzip2 -vc -9 > /hds/dbdirzip.tar.bz2

for more compression and to limit CPUs to 6 instead of 8, or 3 instead of 4, or whatever you want to use, since the default is to use all cores
tar -c mysql | pbzip2 -vc -9 -p6 > /hds/dbdirzip.tar.bz2

tar cvf - mysql | pigz -9 > /hds/dbdirzip.tar.gz

Or to limit the number of processors to 6 for example
tar cvf - mysql | pigz -9 -p6 > /hds/dbdirzip.tar.gz

Now, if you want to compress a single file to a different directory

pbzip2 -cz somefile > /another/directory/compressed.bz2
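To extract one of these archives on the fly with the same parallel tools, put the decompressor on the left side of the pipe; for example, for the bz2 and gz archives created above:

pbzip2 -dc /hds/dbdirzip.tar.bz2 | tar -xf -

pigz -dc /hds/dbdirzip.tar.gz | tar -xf -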

Dynamic Round Robin DNS (DDNS with round robin support)

We have just developed an in-house application for Dynamic DNS with round robin support (for our own “validation through IP” purposes).

The application is fully functional at the minute; if it gets enough attention and proves useful to many people, we can improve the user interface and make it public.

The Dynamic DNS with round robin support takes into account that hosts which have stopped sending updates should be removed from the round robin record.

* Username and password verification
* Modifiable TTL, in sync with the frequency of IP checks
* Almost infinitely scalable system
* PHP client, and it is easy to create any other client; the PHP update script can run as a cron job
* Hosts are removed from the list when no update request is received for a user-set amount of time, and return to the list once an update request is sent again
* Super fast
* For multi-homed links, the password being per hostname (not per zone) eliminates the risk of an update request arriving through a different Ethernet adapter that is the main link of another machine
* Security through MD5 sums that change with the IP change (your passwords are never transmitted during an update)
* If 2 machines are using the same IP address, the anti_duplicate_values array will limit the round robin records to that value only once
* Dead records currently only disappear when a different host's IP changes; TODO: the change must also be reflected when another host updates

Disable the “Windows has detected a hard disk problem” message in Windows

The following are the steps to disable the error message associated with a bad hard drive, the message that Windows displays after every login. We will disable it from within Windows, without disabling S.M.A.R.T. in the BIOS.
[screenshot: the hard disk error message]

The message above reads as follows (on my computer; on yours, the disk model number and the names of the volumes will probably be different):

Windows has detected a hard disk problem.
Back up your files immediately to prevent information loss, and then contact the computer manufacturer to determine if you need to repair or replace the disk

Then, you are presented with the following two options

Start the backup process

Ask me again later
-- If the disk fails before the next warning, you could lose all of the programs and documents on the disk.

In the Show Details dialogue, you should see

Immediate steps
Because disk failure will cause you to lose all programs, files and documents on the disk, you should back up your important information immediately. Try not to use your computer until you have repaired or replaced the hard disk.
Which disk is failing
The following hard disks are reporting failure.
Disk name: TOSHIBA MK3264GSXN ATA Device
Volume: C:, D:, E: 

My advice would be

Do not disable S.M.A.R.T. in the BIOS; rather, tell Windows not to display this message. This is because, for a failing disk, you want the S.M.A.R.T. data to remain accessible to other programs so you can keep an eye on it.

To disable this error message from within windows, do the following

Click the Start button and type the word “task” in the search box; Task Scheduler should appear. Right-click it and choose Run as administrator.
Once it is open, follow the tree on your left as follows
“Task Scheduler Library” => “Microsoft” => “Windows” => “DiskDiagnostic”

As shown in the image, select the second entry, right-click it, then click Disable.

The following is the dialogue
[screenshot: the DiskDiagnostic entries in Task Scheduler]

Close Task Scheduler, and restart your computer to check whether it worked.
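If you prefer the command line over clicking through Task Scheduler, schtasks should be able to disable the same task from an administrator prompt. The task name below is an assumption based on the default layout; verify the exact name in Task Scheduler first, as it can differ between Windows versions:

schtasks /Change /TN "\Microsoft\Windows\DiskDiagnostic\Microsoft-Windows-DiskDiagnosticResolver" /Disable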

The Linux DD command

To see how far dd has come in a running copy, open a second terminal window, run top to get the ID of the dd process, then issue the command kill -USR1 xxxx (replace xxxx with the actual ID of the process). It may appear that nothing happened, but switch to the terminal dd is running in

you should see something like

1036902161+0 records in
1036902160+0 records out
530893905920 bytes (531 GB) copied, 29702.1 s, 17.9 MB/s
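If you don't feel like hunting for the PID in top, pgrep can find it for you; and on newer systems (GNU coreutils 8.24 and later), dd can print its own progress with the status=progress switch, for example:

kill -USR1 $(pgrep -x dd)

dd if=/dev/sdb status=progress | pigz --fast > /targetdrive/diskimage.img.gz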