Why did the system revert to old data after a RAID 1 rebuild? [closed]

To begin, there is a very similar question already posted:

"Old data on Linux raid 1 disks"

My situation looks close to that one, but I see some critical differences.



Very short description:

After rebuilding the broken RAID with the same disks/partitions, the data has reverted back to the day the RAID broke.



  • /dev/md0 -> the RAID array

  • /dev/sdc5 -> first RAID partition

  • /dev/sdd5 -> second RAID partition, the one that failed

System Setup



I'm running an Ubuntu 16.04 installation on a RAID 1 array built from two partitions:
/dev/sdc5 + /dev/sdd5 = /dev/md0
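

For context, this is the kind of check I use to see the array and its members. These are generic mdadm/proc queries, not saved output from my machine:

    # Kernel's view of all md arrays and their member devices
    cat /proc/mdstat

    # Detailed state of the mirror: which members are active, degraded or syncing
    sudo mdadm --detail /dev/md0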



Probable scenario for failure



My chassis has a hardware switch for switching between drives: a physical dual-boot button.
Only one of my drives, /dev/sdd, was connected to the controller card for this hardware switch.



At some point this button was probably toggled (I didn't realize at the time that I even had this function), and thus my RAID became degraded.



This was most probably July 23rd.



Fast forward 12 days to August 4th.
After my retailer pointed out to me that there was no error with my physical disk, I went home and started doing some re-wiring inside my chassis.



The first thing I did was remove the hardware dual-boot controller, so now all my drives are connected directly to the motherboard.



This resulted in the disk (/dev/sdd) showing up in the system again.
The system booted up fine on /dev/md0, running on /dev/sdc5 alone, and was still up to date.
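

In hindsight, this is the point where I could have compared the two members' metadata before touching anything. A sketch of that check (generic commands, I did not capture this output at the time):

    # Per-member superblock metadata: the event counter and update time show
    # which member md considers the most recent
    sudo mdadm --examine /dev/sdc5 | grep -E 'Events|Update Time|Array State'
    sudo mdadm --examine /dev/sdd5 | grep -E 'Events|Update Time|Array State'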



Steps leading up to old data



  1. I re-added (or actually just added, since disk 2 had been completely removed) /dev/sdd5 to the RAID. The commands I used are sketched after this list.

    • This went smoothly and the RAID started to rebuild and sync.

    • I did not run update-grub, mostly because I didn't realise I needed to.

    • Then I rebooted the system.

    • On reboot the system dropped me into a BusyBox shell.


  2. I removed /dev/sdd5 from /dev/md0 and rebooted again.

    • This resulted in the system booting again.

    • At this point I don't think I checked whether the data was correct; I only assumed it was.


  3. I added the partition /dev/sdd5 to the RAID /dev/md0 again.

    • The RAID rebuilt and synced.


  4. I ran update-grub this time and then rebooted again.

    • The system rebooted "correctly".
      This is the point where I noticed that something was off: apt-get had a lot of updates waiting for me even though I had just updated.
      When later examining my logs I concluded that the failure/degradation of the RAID happened on July 23rd, and that I "fixed" the problem on August 4th.

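For reference, the add/remove steps above were ordinary mdadm operations, roughly along these lines (I no longer have the exact shell history, so treat this as a sketch):

    # Steps 1 and 3: add the returned partition back into the mirror (starts a resync)
    sudo mdadm --add /dev/md0 /dev/sdd5

    # Watch the resync progress
    cat /proc/mdstat

    # Step 2: take the partition out again (mark it failed, then remove it)
    sudo mdadm /dev/md0 --fail /dev/sdd5 --remove /dev/sdd5

    # Step 4: regenerate the GRUB configuration once the array is healthy
    sudo update-grub
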

Question



So to my actual questions:



  1. Can anyone explain to me what happened? Why and how did the RAID produce the old data? Shouldn't the RAID have synced from /dev/sdc5 to /dev/sdd5 instead of the other way around?

  2. Do you think that there is any chance of fixing it at this point? That is, recovering the data written between July 23rd and August 4th?
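

In case it helps with question 1, these are the places where I would expect the assembly decision to show up. The paths are the standard Ubuntu ones, and I have not yet gone through them myself:

    # Array definition the initramfs uses when assembling at boot
    cat /etc/mdadm/mdadm.conf

    # Kernel messages about md/raid1 assembly on the current boot
    dmesg | grep -i -E 'md0|raid1'

    # Ubuntu keeps rotated kernel logs from earlier boots
    grep -i -E 'md0|raid1' /var/log/kern.log /var/log/kern.log.1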









closed as unclear what you're asking by Rui F Ribeiro, msp9011, schily, Thomas, jimmij Aug 7 at 19:42


Tags: linux ubuntu raid1






asked Aug 7 at 9:32 by aliex, edited Aug 7 at 20:35
