MDADM can't automount existing RAID1 array

I recently reformatted my system and cannot get my RAID 1 array to come up on boot. I would appreciate some help figuring out why. I'm using Ubuntu 14 with mdadm. It's a very simple RAID 1 array consisting of two drives, sda and sdb. I tried to edit fstab to mount the RAID, but it fails at boot and asks me to press a key to skip mounting.
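
The fstab entry I tried looks roughly like this (the mount point and filesystem type here are placeholders rather than the exact line; the real entry points at the array's big data partition):

/dev/md126p2   /mnt/storage   ext4   defaults   0   2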



Here's my mdadm.conf file:



# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=4e85cd11:34b3cd40:f263b2be:616ef7fb
ARRAY /dev/md/Storage(RAID) container=4e85cd11:34b3cd40:f263b2be:616ef7fb member=0 UUID=c26e7f0d:a956fc70:db00c387:58e7a99a

# This file was auto-generated on Sun, 10 Mar 2019 16:31:50 -0500
# by mkconf $Id$
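
If it helps, ARRAY lines can also be regenerated directly from the disk superblocks; this is the command I would use for that (output omitted here):

sudo mdadm --examine --scan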


Output from mdadm --examine /dev/sda:



/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : 96f17bdd
Family : 96f17bdd
Generation : 000bd2b7
Attributes : All supported
UUID : 4e85cd11:34b3cd40:f263b2be:616ef7fb
Checksum : 7eee6b4f correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk00 Serial : ZAD04VWC
State : active
Id : 00000004
Usable Size : 11721040136 (5589.03 GiB 6001.17 GB)

[Storage(RAID)]:
UUID : c26e7f0d:a956fc70:db00c387:58e7a99a
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 11721039872 (5589.03 GiB 6001.17 GB)
Per Dev Size : 11721040136 (5589.03 GiB 6001.17 GB)
Sector Offset : 0
Num Stripes : 45785312
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

Disk01 Serial : ZAD05L4F
State : active
Id : 00000006
Usable Size : 11721040136 (5589.03 GiB 6001.17 GB)


Output from mdadm --examine /dev/sdb:



/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : 96f17bdd
Family : 96f17bdd
Generation : 000bd2b7
Attributes : All supported
UUID : 4e85cd11:34b3cd40:f263b2be:616ef7fb
Checksum : 7eee6b4f correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk01 Serial : ZAD05L4F
State : active
Id : 00000006
Usable Size : 11721040136 (5589.03 GiB 6001.17 GB)

[Storage(RAID)]:
UUID : c26e7f0d:a956fc70:db00c387:58e7a99a
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Array Size : 11721039872 (5589.03 GiB 6001.17 GB)
Per Dev Size : 11721040136 (5589.03 GiB 6001.17 GB)
Sector Offset : 0
Num Stripes : 45785312
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

Disk00 Serial : ZAD04VWC
State : active
Id : 00000004
Usable Size : 11721040136 (5589.03 GiB 6001.17 GB)


lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT gives the following:



NAME          SIZE FSTYPE TYPE  MOUNTPOINT
sda           5.5T        disk
└─md126       5.5T        raid1
  ├─md126p1   128M        md
  └─md126p2   5.5T        md
sdb           5.5T        disk
└─md126       5.5T        raid1
  ├─md126p1   128M        md
  └─md126p2   5.5T        md
sr0          1024M        rom
nvme0n1       477G        disk
├─nvme0n1p1   499M        part
├─nvme0n1p2    99M        part  /boot/efi
├─nvme0n1p3    16M        part
├─nvme0n1p4 237.4G        part
├─nvme0n1p5   470M        part
├─nvme0n1p6 232.9G        part  /
└─nvme0n1p9   1.9G        part  [SWAP]
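
For reference, what I ultimately want mounted automatically at boot is the big data partition on the array, i.e. something equivalent to this (the mount point is just an example path):

sudo mount /dev/md126p2 /mnt/storage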


If I run sudo mdadm --verbose --assemble --scan after boot, here is the output:



mdadm: looking for devices for further assembly
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no RAID superblock on /dev/nvme0n1p9
mdadm: no RAID superblock on /dev/nvme0n1p6
mdadm: no RAID superblock on /dev/nvme0n1p5
mdadm: no RAID superblock on /dev/nvme0n1p4
mdadm: no RAID superblock on /dev/nvme0n1p3
mdadm: no RAID superblock on /dev/nvme0n1p2
mdadm: no RAID superblock on /dev/nvme0n1p1
mdadm: no RAID superblock on /dev/nvme0n1
mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
mdadm: added /dev/sda to /dev/md/imsm0 as -1
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for further assembly
mdadm: looking for devices for further assembly
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sda is busy - skipping
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no recogniseable superblock on /dev/nvme0n1p9
mdadm: no recogniseable superblock on /dev/nvme0n1p6
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p5
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p4
mdadm: no recogniseable superblock on /dev/nvme0n1p3
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p2
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: looking in container /dev/md127
mdadm: found match on member /md127/0 in /dev/md127
mdadm: Started /dev/md/Storage(RAID)_0 with 2 devices
mdadm: looking for devices for further assembly
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sda is busy - skipping
mdadm: looking in container /dev/md127
mdadm: member /md127/0 in /dev/md127 is already assembled
mdadm: looking for devices for further assembly
mdadm: Cannot assemble mbr metadata on /dev/md/Storage_RAID__0p2
mdadm: no recogniseable superblock on /dev/md/Storage_RAID__0p1
mdadm: Cannot assemble mbr metadata on /dev/md126
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sda is busy - skipping
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no recogniseable superblock on /dev/nvme0n1p9
mdadm: no recogniseable superblock on /dev/nvme0n1p6
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p5
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p4
mdadm: no recogniseable superblock on /dev/nvme0n1p3
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p2
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: looking in container /dev/md127
mdadm: member /md127/0 in /dev/md127 is already assembled


I've made sure the initramfs is updated whenever I change mdadm.conf. Once I run the --assemble --scan command, the arrays do come up and I can mount them. If I then run cat /proc/mdstat I get:



md127 : inactive sdb[1](S) sda[0](S)
5032 blocks super external:imsm

md126 : active (auto-read-only) raid1 sda[1] sdb[0]
5860519936 blocks super external:/md127/0 [2/2] [UU]
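
The initramfs updates I mentioned are done with the standard Ubuntu initramfs-tools commands, for what it's worth:

sudo update-initramfs -u
# sanity check that the rebuilt initramfs actually contains an mdadm.conf:
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm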


But if I run mdadm --detail --scan to generate lines to add to mdadm.conf, I get this:



ARRAY /dev/md/imsm0 metadata=imsm UUID=4e85cd11:34b3cd40:f263b2be:616ef7fb
mdadm: cannot open /dev/md/Storage(RAID)_0: No such file or directory
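
The intent was to capture that scan output and append it to the config, roughly like this (run as root; shown only to illustrate what I'm trying to do):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u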


I need this array to mount at boot, but I have no idea what is stopping that from happening.










Tags: ubuntu raid mdadm






asked Mar 10 at 22:16 by ccac3



















