Recovering a RAID 6

I'm attempting to recover a 7-drive RAID 6 array from a failed Thecus NAS. I've moved the drives into an Ubuntu machine I set up and can access the data; the problem is that transfer rates off the RAID are painfully slow (roughly 500 KB/s to 1.2 MB/s).



I've discovered that the array appears to be degraded (one drive is missing), and I'm guessing that's the root of the problem. Running mdadm --detail /dev/md0 gives the following:



/dev/md0:
        Version : 1.2
  Creation Time : Tue May 7 15:39:33 2013
     Raid Level : raid6
     Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
  Used Dev Size : 2927622144 (2792.00 GiB 2997.89 GB)
   Raid Devices : 7
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu Feb 8 08:02:27 2018
          State : clean, degraded
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : N7700PRO:0
           UUID : 7169575c:a8d508eb:dea20994:ee2351ef
         Events : 64278

    Number   Major   Minor   RaidDevice   State
       7       8      130         0       active sync   /dev/sdi2
       2       0        0         2       removed
       2       8       82         2       active sync   /dev/sdf2
       3       8       34         3       active sync   /dev/sdc2
       4       8       50         4       active sync   /dev/sdd2
       5       8        2         5       active sync   /dev/sda2
       6       8       18         6       active sync   /dev/sdb2


I've got a spare drive on hand for the failed drive, but I'm not entirely sure how to add it to the array and repair it. I've pulled the bad drive out of the system and plugged the spare in its place, but mdadm --detail gives the same results as with the original drive in place.



I believe the command to add a drive is just



mdadm --add /dev/md0 <new_disk>


However, I'm unsure how to determine the path for the new disk, since it isn't appearing in the list. I also wasn't seeing anything in the disk utility matching the /dev/sdX2 format that would lend any clues for the command.
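
As an aside (a hedged suggestion, assuming a stock Ubuntu install, not from the original post): a blank, newly inserted disk is usually easiest to spot by listing every block device with its size, model, and serial number, then looking for the one with no partitions beneath it:

lsblk -o NAME,SIZE,MODEL,SERIAL
ls -l /dev/disk/by-id/ | grep -v -- -part

The serial number reported there can also be matched against the label printed on the new drive itself.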



All of the SATA ports on the motherboard are occupied at this point, and I'm wondering whether that could be part of the issue as well. I'm not really sure, but here are the details of the machine as it sits:



  • 7 × 3 TB WD Red (RAID drives)

  • 1 × 2 TB WD Green (OS)

  • Asus Sabertooth 990FX R2.0

  • 16 GB DDR3

  • AMD FX-8350

  • AMD Radeon HD 7870

  • XFX 850 W PSU

Output from ls /dev/sd? (some investigation suggests that the new drive is /dev/sdg):



/dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi
/dev/sdb /dev/sdd /dev/sdf /dev/sdh


Output from mount | awk '$3=="/"':



/dev/sdh1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)


Let me know if there is any further information you need; I appreciate any and all assistance with this.







asked Feb 8 at 14:07 by zroberts; edited Feb 8 at 16:45 by roaima


1 Answer

Disk devices are named /dev/sdXN, where X is a letter in the range [a-z] and N is a number in the range [1-9]. Each whole disk is represented as /dev/sdX, so that's what you need to use to find the new disk. The N is the partition (slice) number; your RAID expects to use partition 2 on each disk, so you need to find out what the existing disks' partition layout is and replicate it onto the new disk. Finally, add the new partition to your RAID and let it rebuild.




1. Identify the new disk

   You have said that it's /dev/sdg.
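
   As a sanity check (a hedged suggestion, not part of the original answer), confirm that /dev/sdg really is the new, empty disk before writing anything to it; on a fresh drive these should report no valid partition table and no md superblock:

   sgdisk --print /dev/sdg
   mdadm --examine /dev/sdg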




2. Replicate the disk partition table

   It must be GPT, because you are using 3 TB disks (MBR works only for disks up to 2 TB). We will replicate the partition table from /dev/sda onto the new disk /dev/sdg, remembering to generate new GUIDs along the way:

   sgdisk --replicate=/dev/sdg /dev/sda
   sgdisk --randomize-guids /dev/sdg

   If you don't have sgdisk installed, you can find it in the gdisk package (Debian, Ubuntu, CentOS, etc.).
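
   To confirm the copy took (a quick optional check, not in the original answer), print both partition tables and compare; the start and end sectors of partition 2 should match exactly:

   sgdisk --print /dev/sda
   sgdisk --print /dev/sdg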




3. Add the newly partitioned disk into the RAID array

   mdadm --add /dev/md0 /dev/sdg2

   Don't forget to let it rebuild (see cat /proc/mdstat for status details).
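
   A convenient way to watch the rebuild is to poll mdstat:

   watch -n 10 cat /proc/mdstat

   If the resync crawls, the kernel's minimum md rebuild speed can be raised (an optional tuning step, not from the original answer; the value is in KiB/s per disk and defaults vary by kernel):

   sysctl -w dev.raid.speed_limit_min=50000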



I would strongly recommend that you read the man pages for sgdisk and mdadm to make sure that the commands I have suggested will indeed do what I have described and what you expect. If you lose a second disk from your RAID 6 array while it is degraded, you won't have any redundancy left.
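
Once the rebuild completes, it may also be worth recording the array in mdadm's configuration so it assembles consistently at boot (a suggestion beyond the original answer; /etc/mdadm/mdadm.conf is the Debian/Ubuntu path):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u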






answered Feb 8 at 17:18 by roaima (accepted)
• You are a Rock star my friend. Thank you so much. – zroberts, Feb 8 at 17:41









