Initramfs error after image restore using dd

I need to create backups of an Ubuntu system in a way that lets me easily restore both the data and the system to a ready-to-go state. So I decided to go with dd and create whole-HDD images.



I created the image as follows:



dd if=/dev/current_drive of=/dev/backup_drive/backup.img conv=sync status=progress



The image was created without errors. After that I decided to restore the image to a new test drive:



dd if=/backup_drive/backup.img of=/dev/new_drive conv=sync status=progress



So far, so good: the restore also finished without errors.
But when I tried to boot from the new drive onto which the image was restored, I got initramfs errors:
(screenshot of the initramfs errors)



After a manual fsck the errors were fixed and I was able to boot from the new HDD. But I have repeated the restore procedure a couple of times, and each time I got problems booting.
My original system drive and the new one are absolutely identical according to



sudo fdisk -l:



/dev/sda is the new hard drive.



/dev/sdb is the original one from which the image was created.



Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf11c2eb5

Device     Boot      Start        End    Sectors   Size  Id  Type
/dev/sda1  *          2048  455024639  455022592   217G  83  Linux
/dev/sda2        455026686  488396799   33370114  15.9G   5  Extended
/dev/sda5        455026688  488396799   33370112  15.9G  82  Linux swap / Solaris


Disk /dev/sdb: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf11c2eb5

Device     Boot      Start        End    Sectors   Size  Id  Type
/dev/sdb1  *          2048  455024639  455022592   217G  83  Linux
/dev/sdb2        455026686  488396799   33370114  15.9G   5  Extended
/dev/sdb5        455026688  488396799   33370112  15.9G  82  Linux swap / Solaris


So, any ideas what I am doing wrong and why I get boot errors after restoring the image? In a real failure scenario I don't want to have to repair the replacement drive before it will boot.



Btw, the original drive is an SSD, while the new one is an HDD, if that matters.







asked Feb 8 at 20:09 – CuriousGuy

  • From the post it seems you did the dd while running the system, right?
    – Rui F Ribeiro
    Feb 8 at 20:48










  • @RuiFRibeiro Yes. When the image was created, the system was booted from the source drive (the SSD I need to make images of). When the image was restored to the new test drive, the system was again booted from the SSD.
    – CuriousGuy
    Feb 8 at 21:13














1 Answer

When you dd:



  • source device/filesystem must not be changed/mounted/written to

  • don't use conv with noerror and sync; it corrupts data

  • add bs=1M (or bs=64K) for performance reasons

Basically a successful, consistent direct disk copy can only be achieved through an independent system like a Live CD, and even then you have to be careful about filesystems getting mounted automatically without you even knowing.
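
To make that concrete, here is a minimal sketch of what a clean dd backup and restore could look like when booted from a live USB/CD. The device names (/dev/sdb as the source disk, /dev/sdc1 as the partition holding the image, /dev/sda as the replacement disk) are assumptions, not taken from the question; adjust them to your setup.

# Booted from a live USB/CD; nothing on the source drive is mounted.
sudo mount /dev/sdc1 /mnt                 # the drive that will hold the image file

# Create the image: no conv=sync, a larger block size, flush when done.
sudo dd if=/dev/sdb of=/mnt/backup.img bs=1M status=progress conv=fsync

# Later, restore the image onto the replacement drive:
sudo dd if=/mnt/backup.img of=/dev/sda bs=1M status=progress conv=fsync
sync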



Also if you expect there to be read errors (hence sync, noerror, and other conv options) it's much more reliable to go with ddrescue instead. It handles read errors properly and even has the ability to retry and resume.
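
For completeness, a rough ddrescue sketch under the same assumptions (live system, /dev/sdb as the source, image and map file on a mounted backup drive); the map file is what lets ddrescue resume and retry only the bad areas:

# First pass: copy everything that reads cleanly, skip the difficult areas.
sudo ddrescue -n /dev/sdb /mnt/backup.img /mnt/backup.map

# Second pass: retry the remaining bad areas a few times.
sudo ddrescue -r3 /dev/sdb /mnt/backup.img /mnt/backup.map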



All in all, block level copies just tend to be unreliable because it's easy to make mistakes. The only reason they are done at all is that it's the only method to produce a copy with perfect consistency (only if done right).



All other approaches are merely good enough in practice, never perfect. There is no way to make a perfect copy with running processes that keep half their data in memory and the other half on disk. You have to turn it off to get the full picture. (Or virtualize everything and freeze it.)



There are alternatives:



  • file based backups using cp, rsync, or dedicated backup programs like borg (see the rsync sketch after this list)

  • filesystem specific tools (xfsdump, btrfs send / snapshot, ...)

  • LVM snapshots (but not with btrfs)

  • Databases need special treatment and provide their own backup tools
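
As a sketch of the first bullet, a common rsync-based whole-system backup looks roughly like this (the destination path /mnt/backup/rootfs/ is an assumption; run it from a live system if you want a fully consistent copy):

# File-level copy of the root filesystem, preserving permissions, ACLs,
# extended attributes and hard links, while skipping pseudo-filesystems
# and other mount points.
sudo rsync -aAXHv \
    --exclude="/dev/*" --exclude="/proc/*" --exclude="/sys/*" \
    --exclude="/tmp/*" --exclude="/run/*" --exclude="/mnt/*" \
    --exclude="/media/*" --exclude="/lost+found" \
    / /mnt/backup/rootfs/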


If it must be a block level copy, you could also abuse the mdadm system to put a RAID 1 layer on the source drive, and use it to produce a consistent copy out of a running system by adding a target drive. The RAID keeps both sides in perfect sync, thus mostly avoiding the inconsistency issue (provided you allow the sync to finish before removing the target drive).



# RAID creation (before installing Linux)
mdadm --create /dev/md0 --level=1 --raid-devices=1 --force /dev/source

# /proc/mdstat
md0 : active raid1 sda2[3]
134306472 blocks super 1.2 [1/1] [U]

# Add the target drive.
mdadm --grow /dev/md0 --raid-devices=2 --force
mdadm --manage /dev/md0 --add --write-mostly /dev/target

# Wait for RAID resilvering.
mdadm --wait /dev/md0
sync

# Remove the target drive.
mdadm /dev/md0 --fail /dev/target
mdadm /dev/md0 --remove /dev/target
mdadm --grow /dev/md0 --raid-devices=1 --force


But that's a hack, and the copy will still appear as a filesystem that wasn't unmounted properly. It is slightly better than a power loss, since on an unexpected power loss you don't even get to run a sync first. But it is orders of magnitude better than dd on a mounted filesystem, where the state of the first half of the image can be hours behind that of the last half.



I use this method to mirror my single SSD to an HDD every week, without preventing the HDD from going into standby. Should the SSD fail, the HDD can be booted directly with little effort.



Of course, the same can be achieved with a file based copy as well.




Since you mention UUIDs: cloning drives on a block level also clones the UUIDs, which in turn can be the cause of disaster. (In the case of the RAID hack above, they are conveniently hidden behind the RAID layer.)



A file based copy to a new filesystem will have new UUIDs, but that's reasonably straightforward to solve:




  • chroot, edit /etc/fstab, update the initramfs, reinstall the bootloader (you can find the chroot method in basically every Linux wiki; a rough sketch follows this list)

  • otherwise, restore the old UUIDs by changing them with tune2fs -U <UUID>; there are similar tools for other filesystems (this requires documentation, otherwise you won't know which UUIDs you need). Again, be careful not to duplicate UUIDs; only do this if the old device is gone altogether.
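
A rough sketch of the first option on an Ubuntu system, assuming the copied root filesystem ended up on /dev/sda1 and the machine boots with GRUB (both assumptions):

# From a live system: mount the copied root and chroot into it.
sudo mount /dev/sda1 /mnt
for d in dev proc sys run; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt /bin/bash

# Inside the chroot:
blkid                    # note the UUIDs of the new partitions
nano /etc/fstab          # point / and swap at the new UUIDs
update-initramfs -u      # rebuild the initramfs with the new configuration
grub-install /dev/sda    # reinstall the bootloader on the new drive
update-grub
exit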





answered Feb 8 at 21:25, edited Feb 9 at 10:48 – frostschutz

  • My first backup attempt was to tar everything important (/etc, /bin, etc.), but I couldn't restore the system onto a fresh preinstall, because, for example, the UUIDs of the fresh installation were different from the ones previously tarred, so I wasn't able to boot the system after that. Then I decided to try dd, because if my disk fails I can just put the perfect copy on a new one and everything is fine. Or at least that is how I imagined it would be. I also wanted to make the backup process automatic.
    – CuriousGuy
    Feb 8 at 21:37










  • Now I understand that the source and target disks must not be mounted. This means an automatic backup process will not work, as I'll need a live CD to boot from to start the backup. Do you have user-friendly suggestions for making reliable and easily restorable system (not data) backups?
    – CuriousGuy
    Feb 8 at 21:40










  • Probably safe to say I don't have user-friendly suggestions. I guess most users just reinstall from scratch with a distro that comes with user-friendly installers, where that isn't a problem. Sorry about the answer, it's all over the place - for concise answers, ask more specific questions. "What could possibly have gone wrong" is a very broad question.
    – frostschutz
    Feb 8 at 22:19










  • Your answer was very helpful to me, thank you. But how come I can make live disk images of a running Windows system using software like Aeomi, which uses VSS? Is there anything similar for Linux systems (to be able to make healthy images of running hard disks)?
    – CuriousGuy
    Feb 9 at 8:10









