RAID1 unmounted during boot as degraded but can be mounted fine manually

I'm running Fedora 26 Server Edition and I have two external USB drives, each with a single partition, which I have combined into a RAID1 array. My /etc/fstab file has this line for automounting the array:



UUID=B0C4-A677 /mnt/backup-raid exfat uid=strwrsdbz,gid=strwrsdbz,umask=022,windows_names,locale=en.utf8,nobootwait,nofail 0 2
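
For reference, one way to inspect how systemd translated that fstab entry and what this boot logged for just that mount is shown below (a minimal diagnostic sketch; the unit name is simply the systemd-escaped form of the mount point above):

unit=$(systemd-escape -p --suffix=mount /mnt/backup-raid)   # -> mnt-backup\x2draid.mount
systemctl cat "$unit"       # the mount unit systemd-fstab-generator built from /etc/fstab
journalctl -b -u "$unit"    # this boot's journal entries for that mount unit only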


However, once booting has finished, the array at /mnt/backup-raid is not mounted. If I check the journal logs, I see:



Oct 28 21:32:07 hostname systemd[1]: Started File System Check on /dev/disk/by-uuid/B0C4-A677.
Oct 28 21:32:07 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-byx2duuid-B0C4x2dA677 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:07 hostname kernel: audit: type=1130 audit(1509240727.851:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-byx2duuid-B0C4x2dA677 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:08 hostname systemd[1]: Mounting /mnt/backup-raid...
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/c.
Oct 28 21:32:08 hostname ntfs-3g[702]: Version 2017.3.23 integrated FUSE 28
Oct 28 21:32:08 hostname ntfs-3g[702]: Mounted /dev/sda1 (Read-Write, label "", NTFS 3.1)
Oct 28 21:32:08 hostname ntfs-3g[702]: Cmdline options: rw,uid=1000,gid=1000,umask=022,windows_names,locale=en.utf8
Oct 28 21:32:08 hostname ntfs-3g[702]: Mount options: rw,allow_other,nonempty,relatime,default_permissions,fsname=/dev/sda1,blkdev,blksize=4096
Oct 28 21:32:08 hostname ntfs-3g[702]: Global ownership and permissions enforced, configuration type 7
Oct 28 21:32:08 hostname lvm[599]: 3 logical volume(s) in volume group "fedora" now active
Oct 28 21:32:08 hostname systemd[1]: Started LVM2 PV scan on device 8:5.
Oct 28 21:32:08 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:5 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:08 hostname kernel: audit: type=1130 audit(1509240728.594:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:5 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:08 hostname systemd[1]: Found device /dev/mapper/fedora-home.
Oct 28 21:32:08 hostname systemd[1]: Mounting /home...
Oct 28 21:32:08 hostname kernel: XFS (dm-2): Mounting V4 Filesystem
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/igel1.
Oct 28 21:32:08 hostname systemd-fsck[666]: /dev/sda3: clean, 376/128016 files, 291819/512000 blocks
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/igel2.
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/backup-raid.


*snip*



Oct 28 21:32:33 hostname systemd[1]: Created slice system-mdadmx2dlastx2dresort.slice.
Oct 28 21:32:33 hostname systemd[1]: Starting Activate md array even though degraded...
Oct 28 21:32:33 hostname systemd[1]: Unmounting /mnt/backup-raid...
Oct 28 21:32:34 hostname systemd[1]: Started Activate md array even though degraded.
Oct 28 21:32:34 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=mdadm-last-resort@md0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:34 hostname audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=mdadm-last-resort@md0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:34 hostname kernel: md0:
Oct 28 21:32:34 hostname systemd[1]: Unmounted /mnt/backup-raid.


So it looks like the array gets mounted in that first log block, but later it gets unmounted because it is being detected as degraded. Once booting has finished, though, I can run sudo mount -a and the array mounts without issue. The contents appear correctly in /mnt/backup-raid, and checking /proc/mdstat shows



Personalities : [raid1]
md0 : active raid1 sdc2[0] sdb2[2]
      485345344 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

unused devices: <none>


so everything looks healthy. In case it helps, my /etc/mdadm.conf contains



ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=hostname:backup-raid UUID=6c8bf3df:c4147eb1:4c3f88d8:e94d1dbc devices=/dev/sdb2,/dev/sdc2
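
To compare that ARRAY line against what the kernel actually assembled, two standard mdadm commands can be run as root (a minimal check; only the /dev/md0 name is assumed):

sudo mdadm --detail /dev/md0    # members, UUID, and whether the array is considered degraded
sudo mdadm --examine --scan     # ARRAY lines for every array mdadm can see, to compare UUIDs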


I found this email thread, which appeared to be dealing with a similar situation, but it looks to me like it just went silent. I'm sorry if the answer is in that thread and I missed it; it gets a bit too dense for me to follow.







asked Oct 29 '17 at 23:01 by strwrsdbz

  • another systemd bug?
    – Ipor Sircer
    Oct 29 '17 at 23:11










  • It might be the drives are slow to spin up, or that the USB subsystem gets reinitialized in between the two RAID mount checks, making mdadm think the RAID got borked. Check for USB messages between the sections you pasted.
    – Mioriin
    Oct 30 '17 at 7:11










  • There are no messages with USB in them between the two journal blocks I included above. I gave the entries between the two blocks a skim by eye too and nothing obvious popped out as drives having problems or re-initializing.
    – strwrsdbz
    Oct 31 '17 at 3:06
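
Following up on the USB hypothesis from the comments above, one quick way to scan the current boot's kernel messages for USB resets or disk reconnects is the line below (a minimal sketch using standard journalctl/grep options; sdb and sdc are the member disks shown in /proc/mdstat):

sudo journalctl -b -k | grep -iE 'usb|sd[bc]'    # kernel messages from this boot, USB and member-disk lines only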
















1 Answer

The 'Errorneous detection of degraded array' systemd-devel thread is about a race condition between udev and the mdadm-last-resort timer/service. The Conflicts=sys-devices-virtual-block-%i.device line in those units is what then triggers the unmount of the previously mounted filesystem.
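
A quick way to see that Conflicts= relationship on an affected machine before changing anything is to print the units as systemd resolves them and grep for the conflicting directive (a diagnostic sketch; the md0 instance name comes from the question):

# systemctl cat mdadm-last-resort@md0.timer mdadm-last-resort@md0.service
# grep -n '^Conflicts=' /usr/lib/systemd/system/mdadm-last-resort@.*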



The thread also mentions a workaround that should fix your issue: replace the Conflicts=... line with a ConditionPathExists=... line:



# cp /usr/lib/systemd/system/mdadm-last-resort@.* /etc/systemd/system/
# sed -i 's@^Conflicts=sys-devices-virtual-block-%i.device@ConditionPathExists=/sys/devices/virtual/block/%i@' \
      /etc/systemd/system/mdadm-last-resort@.*
# shutdown -r now


Note that a drop-in under /etc/systemd/system/.../override.conf doesn't work for removing the Conflicts= line.
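
After copying and editing the units, reloading and printing the instance shows which copy systemd will actually load (a verification sketch; the md0 instance name comes from the question, and the first line of the systemctl cat output is the path of the loaded file):

# systemctl daemon-reload
# systemctl cat mdadm-last-resort@md0.service | head -n 2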



You can subscribe to the related upstream systemd issue, 'Need a uni-directional version of "Conflicts"', to get notified about changes regarding the underlying problem.



See also my Fedora 27 bug report, where this issue manifests itself as /boot/efi not being mounted when it is placed on a RAID-1 mirror.






answered Apr 8 at 15:09 by maxschlepzig

  • For the record, I stopped using that computer before this answer was posted, so I can't test the suggestions.
    – strwrsdbz
    May 13 at 16:47















