Restoring RAID filesystem
I ended up with my 3-disk mdadm array not being able to assemble because each of the disks had a corrupted superblock.
I tried to force the array back together by re-creating it with
mdadm --create
This got the RAID assembled, but now there is no (ext4) filesystem on it. Other indicators suggest the data is still there, but I need to repair the filesystem.
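For reference, a forced re-create of an array like this is usually spelled out explicitly so the geometry matches the original. The sketch below is only an illustration: the chunk size and metadata version come from the --examine output further down, but the device order is an assumption.
# DANGER: this rewrites the md superblocks; only run it against copies or overlays.
# --assume-clean suppresses the initial resync, which would otherwise rewrite parity.
mdadm --create /dev/md0 --level=5 --raid-devices=3 --metadata=1.2 --chunk=512 --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1
If the re-created array ends up with a different Data Offset than the original (262144 sectors here), the whole filesystem shifts and nothing will be found, so it is worth comparing mdadm --examine output from before and after.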
I tried to fix the filesystem by checking whether I could restore the superblock:
mkfs.ext4 -n /dev/md0
fsck.ext4 -b <tried_all_of_the_blocks> /dev/md0
But I get
fsck.ext4: Filesystem has unexpected block size while trying to open /dev/md0
The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem...
So I take it that there is no recognizable filesystem and therefore no valid superblocks from which to restore it.
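One detail worth noting: the error complains about the block size rather than a missing superblock, so it may help to tell fsck.ext4 the block size explicitly instead of letting it guess. A read-only sketch; 32768 and 4096 are typical backup-superblock and block-size values for a filesystem of this size, not values confirmed from this array:
# -n makes mkfs.ext4 print what it would do (including backup superblock locations) without writing
mkfs.ext4 -n /dev/md0
# -n opens the device read-only; -b selects a backup superblock, -B forces the block size
fsck.ext4 -n -b 32768 -B 4096 /dev/md0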
I have two questions:
1. Can I safely use mkfs.ext4 on the raid to regenerate the filesystem, without losing the data that appears to still be in the array?
2. Can I fix the superblock of the array using a backup from one of the individual disks?
Obviously, I do not understand many things, so I appreciate your kind response.
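One non-destructive way to test whether the data really is still there is to look for an ext4 signature directly on the array; a sketch, assuming the filesystem starts at the beginning of /dev/md0:
# probe for any known filesystem signature
blkid -p /dev/md0
# the ext4 magic 0xEF53 sits 56 bytes into the superblock, which starts at offset 1024,
# so the two bytes at offset 1080 should read '53 ef' (little-endian) if a filesystem starts here
hexdump -C -s 1080 -n 2 /dev/md0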
Here is the mdadm --examine output:
root@server:~# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/md0
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : aa70a8ec:192f1719:23bc5df4:1ddac384
Name : server:0 (local to host server)
Creation Time : Sat Oct 28 00:21:46 2017
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 4294702080 (4095.75 GiB 4397.77 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=1023 sectors
State : clean
Device UUID : bf5a4ff5:e4e3659e:99caca7c:333475f3
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Oct 28 05:48:33 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 71521ea5 - correct
Events : 3358
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : aa70a8ec:192f1719:23bc5df4:1ddac384
Name : server:0 (local to host server)
Creation Time : Sat Oct 28 00:21:46 2017
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 4294702080 (4095.75 GiB 4397.77 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=1023 sectors
State : clean
Device UUID : 91528c6d:77861852:a1a4f630:9d8eb8ab
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Oct 28 05:48:33 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 9b0ed7c - correct
Events : 3358
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : aa70a8ec:192f1719:23bc5df4:1ddac384
Name : server:0 (local to host server)
Creation Time : Sat Oct 28 00:21:46 2017
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 4294702080 (4095.75 GiB 4397.77 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=1023 sectors
State : clean
Device UUID : bdc61c9f:321a7ca6:2ed914d0:d10b96a4
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Oct 28 05:48:33 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 27a0a727 - correct
Events : 3358
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/md0.
This is all on Ubuntu Server 16.04, with mdadm metadata version 1.2.
Tags: filesystems mdadm partition-table superblock
asked Oct 28 '17 at 15:07 by fectus; edited Nov 3 '17 at 18:21 by roaima
mdraid doesn't protect against silent/filesystem corruption, only against disk failures. This is why you need backups. You might be able to salvage something from your disk using testdisk or some similar utility. For anyone to be able to provide more specific advice, you need to provide more details and a specific enough problem to answer.
– sebasth
Oct 28 '17 at 22:28
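For what it's worth, both tools this comment points at operate directly on the block device; a minimal sketch, assuming the array is assembled:
# interactive scanner for lost partitions/filesystems (prompts before writing anything)
testdisk /dev/md0
# photorec ships with testdisk and carves files without needing a valid filesystem
photorec /dev/md0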
Would you all please check my completely revised question and consider reopening it?
– fectus, Nov 3 '17 at 16:26
"Can I safely use mkfs.ext4 on the raid to regenerate the filesystem...?" NO! If you run mkfs.ext4 you will overwrite whatever is present and create an empty filesystem. DO NOT DO THIS.
– roaima, Nov 3 '17 at 17:52
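Before experimenting further with anything that writes, a common precaution is to image each member disk and work on the copies; a sketch with placeholder target paths:
# GNU ddrescue: infile, outfile, mapfile (resumable, and tolerant of bad sectors)
ddrescue /dev/sdb1 /backup/sdb1.img /backup/sdb1.map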
Ubuntu Server 16.04, mdadm 1.2
– fectus, Nov 3 '17 at 17:55
First sentence, "my 3-disk mdadm array not being able to assemble because each of the disks had corrupted superblocks". Are you talking about the RAID metadata or the filesystem superblock? The RAID metadata looks ok, and on that basis your RAID array should have assembled just fine.
– roaima, Nov 3 '17 at 18:23
1 Answer
Between the original failure and your recovery efforts, it sounds like your array is pretty well wrecked. If you're very lucky, mdadm --create put the disks back in their original order with the original layout. In that case, you can point some data-recovery software such as foremost at the array and pull some of the files out, or you can send the disks off to a data recovery company and hope they'll do a better job at it than you can.
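A minimal file-carving run with foremost might look like the following; the file types and output directory are placeholders, and the output must not be written to the array being recovered:
# -t selects the file types to carve, -i the input device, -o the output directory
foremost -t jpg,pdf,doc -i /dev/md0 -o /mnt/recovery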
answered Jan 20 at 3:22 by Mark