RAID1 is read-only after upgrade to Ubuntu 17.10

I'm stumped. I had a perfectly functioning RAID1 setup on 16.10. After upgrading to 17.10, it auto-magically detected the array and re-created md0. All my files are fine, but when I mount md0, it says that the array is read-only:



cat /proc/mdstat 
Personalities : [raid1]
md0 : active (read-only) raid1 dm-0[0] dm-1[1]
5860390464 blocks super 1.2 [2/2] [UU]
bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jul 9 23:54:40 2016
Raid Level : raid1
Array Size : 5860390464 (5588.90 GiB 6001.04 GB)
Used Dev Size : 5860390464 (5588.90 GiB 6001.04 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Nov 4 23:16:18 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : x6:0 (local to host x6)
UUID : baaccfeb:860781dd:eda253ba:6a08916f
Events : 11596

Number Major Minor RaidDevice State
0 253 0 0 active sync /dev/dm-0
1 253 1 1 active sync /dev/dm-1


There are no errors in /var/log/kern.log or dmesg.



I can stop it and re-assemble it, to no effect:



sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan
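
(As an illustrative sketch only: assembling from the named member partitions instead of scanning would rule out the scan picking up the wrong devices. The /dev/sdb1 and /dev/sdc1 names are taken from the parted output further down, not from a command I actually ran:)

sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1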


I do not understand why it worked perfectly before, but now the array is read-only for no reason I can detect. And this is the same array that auto-magically re-assembled when I upgraded from 16.04 to 16.10.



Researching this, I found a post suggesting the cause could be /sys being mounted read-only, which mine indeed appears to be:



ls -ld /sys
dr-xr-xr-x 13 root root 0 Nov 5 22:28 /sys


But neither of the following commands fixes it; /sys stays read-only:



sudo mount -o remount,rw /sys
sudo mount -o remount,rw -t sysfs sysfs /sys
ls -ld /sys
dr-xr-xr-x 13 root root 0 Nov 5 22:29 /sys
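
(For reference, the dr-xr-xr-x permissions on the /sys directory are normal even when sysfs is writable; a minimal way to check how the filesystem itself is mounted, assuming the standard util-linux tools, is:)

mount | grep sysfs
findmnt -o TARGET,OPTIONS /sys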


Can anyone provide some insight into what I am missing?



Edited to include /etc/mdadm/mdadm.conf:



# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=baaccfeb:860781dd:eda253ba:6a08916f name=x6:0

# This configuration was auto-generated on Sun, 05 Nov 2017 15:37:16 -0800 by mkconf
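
(As the header above notes, the initramfs copy of this file needs refreshing after any edit; a minimal sketch of that step, together with a check that the ARRAY line still matches what mdadm detects:)

sudo mdadm --detail --scan
sudo update-initramfs -u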


The device mapper files, which appear to be writable:



ls -l /dev/dm-*
brw-rw---- 1 root disk 253, 0 Nov 5 16:28 /dev/dm-0
brw-rw---- 1 root disk 253, 1 Nov 5 16:28 /dev/dm-1


And something else Ubuntu or Debian has changed: I have no idea what these osprober files are doing here; I thought os-prober was only used at installation time:



ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Nov 5 15:34 control
lrwxrwxrwx 1 root root 7 Nov 5 16:28 osprober-linux-sdb1 -> ../dm-0
lrwxrwxrwx 1 root root 7 Nov 5 16:28 osprober-linux-sdc1 -> ../dm-1
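
(If os-prober really is what creates these mappings at boot, which is speculation on my part, one way to keep it from probing these disks would be to disable it in GRUB's configuration; GRUB_DISABLE_OS_PROBER is a standard GRUB setting, but whether it is the right fix here is an assumption:)

# in /etc/default/grub
GRUB_DISABLE_OS_PROBER=true

sudo update-grub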


parted info:



sudo parted -l
Model: ATA SanDisk Ultra II (scsi)
Disk /dev/sda: 960GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 1049kB 81.9GB 81.9GB ext4
2 81.9GB 131GB 49.2GB linux-swap(v1)
3 131GB 131GB 99.6MB fat32 boot, esp
4 131GB 960GB 829GB ext4


Model: ATA WDC WD60EZRZ-00R (scsi)
Disk /dev/sdb: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 1049kB 6001GB 6001GB raid


Model: ATA WDC WD60EZRZ-00R (scsi)
Disk /dev/sdc: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 1049kB 6001GB 6001GB raid


Error: /dev/mapper/osprober-linux-sdc1: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/osprober-linux-sdc1: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Error: /dev/mapper/osprober-linux-sdb1: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/osprober-linux-sdb1: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number Start End Size File system Flags
1 0.00B 6001GB 6001GB ext4


Device mapper info:



$ sudo dmsetup table
osprober-linux-sdc1: 0 11721043087 linear 8:33 0
osprober-linux-sdb1: 0 11721043087 linear 8:17 0

$ sudo dmsetup info
Name: osprober-linux-sdc1
State: ACTIVE (READ-ONLY)
Read Ahead: 256
Tables present: LIVE
Open count: 1
Event number: 0
Major, minor: 253, 1
Number of targets: 1

Name: osprober-linux-sdb1
State: ACTIVE (READ-ONLY)
Read Ahead: 256
Tables present: LIVE
Open count: 1
Event number: 0
Major, minor: 253, 0
Number of targets: 1


strace output from an attempt to set the array to read-write (with some context):



openat(AT_FDCWD, "/dev/md0", O_RDONLY) = 3
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
ioctl(3, RAID_VERSION, 0x7fffb3813574) = 0
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
ioctl(3, RAID_VERSION, 0x7fffb38134c4) = 0
ioctl(3, RAID_VERSION, 0x7fffb38114bc) = 0
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
readlink("/sys/dev/block/9:0", "../../devices/virtual/block/md0", 199) = 31
openat(AT_FDCWD, "/sys/block/md0/md/metadata_version", O_RDONLY) = 4
read(4, "1.2\n", 4096) = 4
close(4) = 0
openat(AT_FDCWD, "/sys/block/md0/md/level", O_RDONLY) = 4
read(4, "raid1\n", 4096) = 6
close(4) = 0
ioctl(3, GET_ARRAY_INFO, 0x7fffb3813580) = 0
ioctl(3, RESTART_ARRAY_RW, 0) = -1 EROFS (Read-only file system)
write(2, "mdadm: failed to set writable fo"..., 66mdadm: failed to set writable for /dev/md0: Read-only file system
) = 66






asked Nov 6 '17 at 6:37 – user3399292 (edited Nov 6 '17 at 20:32)

  • What does mount | grep /sys tell you about read-only or read-write for the /sys filesystem? (Ignore the permissions on the /sys directory itself.)
    – roaima
    Nov 6 '17 at 21:31














1 Answer
This won’t explain why your array ended up in read-only mode, but



mdadm --readwrite /dev/md0


should return it to normal. In your case it doesn’t, and the reason isn’t entirely obvious: if the constituent devices are themselves read-only, the RAID array is read-only as well (which matches both the behaviour you’re seeing and the code path taken when you try to re-enable read-write).



The dmsetup table information gives a strong hint as to what’s going on: osprober (I imagine, given the device names) is finding the real RAID components and, for some reason, creating device-mapper devices on top of them, and those are what end up being picked up and used for the RAID device. Since the only device-mapper devices are the two osprober ones, the simplest solution is to stop the RAID device, stop the DM devices, and re-scan the RAID array so that the underlying component devices are used. To stop the DM devices, run



dmsetup remove_all


as root.
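
Putting the steps together, a minimal sketch of the whole recovery, run as root and assuming no other device-mapper devices are in use (remove_all tears down every mapping it can):

mdadm --stop /dev/md0
dmsetup remove_all
mdadm --assemble --scan
mdadm --readwrite /dev/md0   # only needed if the array still comes up read-only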






answered Nov 6 '17 at 7:00 – Stephen Kitt (edited Nov 7 '17 at 19:17)