Copying my VPS to a new disk

Submitted by gil on

I've been a customer of TornadoVPS since 2011. The VPS came with an ext3-formatted filesystem, which was the default in the Debian installer at the time. In the meantime the kernel's ext3 driver was deprecated and eventually removed, prompting me to do the in-place upgrade to ext4. However, at some point I wanted a new, clean filesystem with bigger inodes, fast_commit support, and year-2038 fixes. Here's how I copied the filesystem over to a new ext4 partition.
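For the record, the in-place ext3-to-ext4 conversion is just a couple of e2fsprogs invocations. This is a sketch demonstrated on a scratch image file; to do it for real you'd point tune2fs and e2fsck at the actual (unmounted!) device, e.g. /dev/xvda1 from a rescue system:

```shell
# Scratch ext3 filesystem standing in for the real device
truncate -s 64M /tmp/ext3.img
mkfs.ext3 -q /tmp/ext3.img
# Enable the core ext4 features (irreversible on a real filesystem)
tune2fs -O extents,uninit_bg,dir_index /tmp/ext3.img
# e2fsck rewrites the group descriptors; exit code 1 just means
# "errors were corrected", which is expected after enabling uninit_bg
e2fsck -fp /tmp/ext3.img || true
```

Note that only newly written files get extents after such a conversion, which is part of why a fresh filesystem is nicer.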

The new disk

TornadoVPS support was willing to attach a second block device to my VPS to support my migration with the understanding that I would turn the original disk over to them to be deleted when I was done copying my data over. The new block device was hotplugged into the VM and was ready to be formatted right away:

fdisk /dev/xvdb
mkfs.ext4 /dev/xvdb1
tune2fs -O fast_commit /dev/xvdb1

There are plenty of reasonable ways to partition the disk with fdisk; just don't forget to keep the partitions aligned and to leave some space at the start of the disk for grub's data. (That is, the first partition should not start right at the beginning of the disk.)

Backups

I rebooted into a rescue system to do the off-site backup of the original disk. You could use dd to dump the whole thing but there's a nifty e2image command that just dumps the blocks used by the filesystem. You can dump it over SSH easily:


ssh -i ~/.ssh/id_xyz root@the_rescue_system e2image -apr /dev/xvda1 - > backup_YYYYMMDD.img

To restore the image, you'd do it in reverse:

cat backup_YYYYMMDD.img | ssh -i ~/.ssh/id_xyz root@the_rescue_system e2image -apr - /dev/xvda1

Copying everything over

I rebooted into a system-provided rescue disk to do the copy, although you may be able to do this from the initrd shell or single-user mode. I didn't want to be running anything off of the old disk, even mounted read-only, just to be safe.


mkdir /mnt/disk1
mkdir /mnt/disk2
mount /dev/xvda1 /mnt/disk1
mount /dev/xvdb1 /mnt/disk2
time rsync -ahHAXxS --info=progress2 --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} /mnt/disk1/ /mnt/disk2

In brief, the flags to rsync are:

  • --info=progress2: instead of --verbose, this is a less spammy and more useful output for copying a large number of files.
  • --exclude: A sensible list of directories that don't need to make the jump.
  • -a: archive mode (preserve timestamps, ownership, permissions, and symlinks)
  • -h: file sizes in human-readable units
  • -H: preserve hard links
  • -A: preserve ACLs
  • -X: preserve xattrs
  • -x: don't cross file system boundaries. Unlikely to affect anything but a safe default.
  • -S: handle sparse files efficiently (recreate runs of zeroes as sparse files on the destination)
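Once the copy finishes, you can re-run the same command with -n (dry run) and itemized output to confirm nothing is left to transfer. A sketch on scratch directories; for the real check, substitute /mnt/disk1/ and /mnt/disk2 and the full flag set:

```shell
mkdir -p /tmp/src /tmp/dst
echo data > /tmp/src/file
rsync -a /tmp/src/ /tmp/dst
# Dry run with itemized changes: empty output means the trees are in sync
rsync -ain /tmp/src/ /tmp/dst
```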

Fixing up the bootloader

First, you need to grab the UUID from the new filesystem.


tune2fs -l /dev/xvdb1 | grep 'Filesystem UUID'

Update the /etc/fstab on the new disk with the new filesystem's UUID.
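If you want to script that step, something along these lines works. This is demonstrated on a scratch fstab with made-up UUIDs; on the real system you'd take NEW_UUID from the tune2fs output above and edit /mnt/disk2/etc/fstab:

```shell
# Made-up UUID standing in for the new filesystem's
NEW_UUID=2f6f0bd2-6175-4b2e-99b5-6dedb2e9fa35
printf 'UUID=11111111-2222-3333-4444-555555555555 / ext4 errors=remount-ro 0 1\n' > /tmp/fstab
# Swap whatever UUID the root line carried for the new one
sed -i "s|^UUID=[^[:space:]]*|UUID=$NEW_UUID|" /tmp/fstab
cat /tmp/fstab
```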

Next, configure the bootloader. First, set GRUB_TIMEOUT in /etc/default/grub to -1 so grub waits at the menu indefinitely and lets you choose the new disk. In the original disk's /boot/grub/grub.cfg I found the menuentry section for my current kernel, copied it, updated it with the new UUID, and renamed it. This is needed to boot from the first disk onto the second, as it is not possible to configure the VPS to boot from the second attached disk.
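For illustration, the UUID typically appears in two places in the copied menuentry: the search line and the kernel's root= parameter. Everything below is a placeholder sketch, not my actual config:

```
menuentry "Debian GNU/Linux (new disk)" {
        insmod ext2
        search --no-floppy --fs-uuid --set=root <new-uuid>
        linux  /boot/vmlinuz-<version> root=UUID=<new-uuid> ro quiet
        initrd /boot/initrd.img-<version>
}
```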

If you manage to get grub installed to the MBR of the second disk, you could instead have the first disk chainload grub via a simpler menuentry:

menuentry "Boot from second disk" {
        set root=(hd1)
        drivemap -s hd0 hd1
        chainloader +1
}

Once you're done editing the files, you need to rebuild the grub configuration on the first disk. The easiest way to get a working chroot is systemd-nspawn, though this requires the systemd-container package. It's a quick dependency to install, and you should be able to do it with the package manager shipped in the rescue system.


systemd-nspawn -D /mnt/disk1 -a --bind=/dev/xvda --bind=/dev/xvda1 --bind=/dev/xvdb --bind=/dev/xvdb1 --resolv-conf=off /bin/bash
(in the chroot) update-grub

Although you don't strictly need xvda/xvda1 bind-mounted into the container for this step, the command above shows how you'd do it. You almost certainly want --resolv-conf=off to avoid clobbering your resolv.conf. Note that you may be tempted to chroot into the second disk and run grub-install against it (/dev/xvdb); however, the bind mounts confused grub and I had no luck with that.

Finally, outside of all the chroots, unmount your disks and restart.

umount /mnt/disk1
umount /mnt/disk2
shutdown -r now

Finishing up the new system

Upon reboot you should be able to choose your new menu option and boot off of the new disk's filesystem. Check dmesg after booting for the new disk's UUID to make sure you've come up on the right system! If everything looks fine, you can install grub to the MBR of the new disk:


grub-install /dev/xvdb
update-grub

And then do a reboot to make sure everything's still working. If you added the chainloader menu option to the first disk's grub.cfg above, you can try it to make sure the MBR installation of grub is correct.
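Besides eyeballing dmesg, findmnt (from util-linux) gives a quick answer about which filesystem you're actually running from:

```shell
# Source device and UUID of whatever is mounted at /
findmnt -no SOURCE,UUID /
```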

Post-install

I think the -S flag to rsync turned my on-filesystem swap file into a sparse file. Linux didn't like that, and I had to create a new swap file and activate it.
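Rebuilding the swap file is quick. A sketch on a scratch path; the real path (/swapfile here is an assumption) should match whatever your fstab references, and the final swapon needs root:

```shell
# dd rather than fallocate/truncate so the file is fully allocated, not sparse
dd if=/dev/zero of=/tmp/swapfile bs=1M count=16 status=none
chmod 600 /tmp/swapfile
mkswap /tmp/swapfile
# For real, as root: swapon /swapfile
```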

Once everything was up and running on the new system I powered the system down and asked support to detach the old volume so I could do a clean start onto the new disk for the first time.

Conclusion

This process was easy enough that I would not be opposed to re-creating filesystems after major kernel updates, especially when they bring filesystem features or flags that are best enabled on a freshly created filesystem.