(Complicated) RAID hard drive issue
I have a (complicated) RAID hard drive issue and was wondering if anyone could help me:
I have a Synology DS413j with 4 hard drives. Drive 3 failed, so I decided to replace all of them. The official support article says to replace them one by one, and, being the dumb person I am, I replaced drive 1 first. I then decided to copy the files onto an external hard drive, create a new volume with the new drives, and copy the files back on, but drive 3 kept crashing the whole thing, and with drive 1 not initialized and SHR's one-drive fault tolerance, I couldn't pull it off even after multiple attempts. I have now removed all 4 old drives and plan to create a new volume with the new drives, then attach the old drives and transfer the data over that way. When I look at one of the SHR-formatted drives in gparted it looks like this:
/dev/sdc1 - Linux-Raid-Member (2,5GB)
/dev/sdc2 - Linux-Raid-Member (2,5GB)
Unallocated Space
/dev/sdc3 - Extended (2,0TB)
Unallocated Space
/dev/sdc5 - Linux-Raid-Member (2,0TB)
Unallocated Space
fdisk -l says:
Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x48da305a
Device     Boot    Start         End     Sectors  Size Id Type
/dev/sde1            256     4980735     4980480  2.4G fd Linux raid autodetect
/dev/sde2        4980736     9175039     4194304    2G fd Linux raid autodetect
/dev/sde3        9437184  3907015007  3897577824  1.8T  f W95 Ext'd (LBA)
/dev/sde5        9453280  3907015007  3897561728  1.8T fd Linux raid autodetect
sdc1 & sdc2 are probably the system partitions (which have been corrupted on two of the drives); sdc5 holds the data.
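One way to check that guess (a minimal sketch, assuming the old drives are attached to a Linux machine; /dev/sdc is taken from the gparted output above, anything else would need adjusting) is to read the md superblock on each RAID member:
# Print the md superblock of each Linux-raid member; the output shows
# which array each partition belongs to, its RAID level and its device role.
mdadm --examine /dev/sdc1 /dev/sdc2 /dev/sdc5
# Show any arrays the kernel has already (partially) assembled.
cat /proc/mdstat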
Any ideas on how to tackle this? I need to get the server running promptly, as I am running low on drives to store files on.
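For what it's worth, a read-only recovery attempt on a plain Linux box might look roughly like this. It is only a sketch: the md device name, the extra drive names, the use of --run and the LVM names are assumptions (Synology commonly keeps the data array on the members' fifth partitions and, depending on the DSM setup, may put LVM on top), not details taken from the output above.
# Try to assemble the data array read-only from the old drives' data
# partitions (device names are illustrative; adjust them to your system).
mdadm --assemble --readonly /dev/md2 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# If mdadm refuses because a member is missing, --run starts the array
# degraded; --force rewrites event counts and should be a last resort.
# If the volume uses LVM on top of md, activate it and look for the
# logical volume (vg1000/lv is a common Synology naming, not a given).
vgscan
vgchange -ay
# Mount the filesystem read-only and copy the data off.
mkdir -p /mnt/recovery
mount -o ro /dev/vg1000/lv /mnt/recovery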
linux partition raid mdadm synology
asked Jun 22 at 6:07 by laol12, edited Jun 22 at 17:13
"The official support article says to replace them one by one" -- I expect that the official support article expects that you know you only need to replace the failing drive. Additionally the output offdisk -l
is probably more useful than what you've pasted.
â wurtel
Jun 22 at 10:09
I wanted to replace all the drives, so according to the support article I need to replace them one by one. Will add fdisk -l soon.
– laol12, Jun 22 at 16:58