UUID in fstab: in which cases must we not configure a UUID in fstab?

Discussion: we have Red Hat Linux machines, and my question is about the UUID configuration in the /etc/fstab file, and about the cases in which using a UUID puts the OS at risk.



As I understand it, we MUST NOT use a UUID in /etc/fstab when using software RAID 1.



Why? Because the RAID volume itself and the first element of the mirror will appear to have the same file system UUID. If the mirror breaks, or the md device isn't started at boot for any other reason, the system will mount some random underlying disk instead, clobbering your mirror.
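For illustration (my addition, not part of the original question), these are the two styles of fstab entry under discussion; the UUID value and mount point are hypothetical:

    # mount by file system UUID (hypothetical value) - resolves to whichever
    # block device currently exposes that file system superblock:
    UUID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9  /data  ext4  defaults  0 2

    # mount by md device node - only resolvable once the array is assembled:
    /dev/md0  /data  ext4  defaults  0 2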



So my question is:



What are the RAID levels (numbers) with which we must not use a UUID in fstab?



Info about RAID levels: https://en.wikipedia.org/wiki/Standard_RAID_levels







asked Jan 3 at 19:37 by yael
edited Jan 4 at 16:41 by Jeff Schaller




















          2 Answers

















Accepted answer:










We'll just go ahead and test this on Arch Linux with mdadm. But first of all, this shouldn't matter for partition-based arrays, because there the member partitions have their own UUIDs; so in theory it could only apply to whole-disk members.



TL;DR: This isn't a real problem, even with old metadata blocks. It might have been a bug in older software, I don't know, but it doesn't affect a modern Arch Linux.



# uname -sr
Linux 4.14.7-1-ARCH

# modprobe raid1

# mdadm --create --verbose /dev/md0 --metadata 0.9 --level=mirror --raid-devices=2 /dev/sdb /dev/sdd
mdadm: size set to 102336K
mdadm: array /dev/md0 started.

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[1] sdb[0]
      102336 blocks [2/2] [UU]

unused devices: <none>

# mdadm --detail --scan >> /etc/mdadm.conf

# fdisk /dev/md0
# lsblk /dev/md0
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb         8:16   0  100M  0 disk
└─md0       9:0    0  100M  0 raid1
  └─md0p1 259:0    0 98.9M  0 md
sdd         8:48   0  100M  0 disk
└─md0       9:0    0  100M  0 raid1
  └─md0p1 259:0    0 98.9M  0 md
md0         9:0    0  100M  0 raid1
└─md0p1   259:0    0 98.9M  0 md


          mdstat -> [UU]



# blkid /dev/md0
/dev/md0: PTUUID="d49d8666-e580-8244-8c82-2bc325157e66" PTTYPE="gpt"
# blkid /dev/sdd
/dev/sdd: UUID="b3d82551-0226-6687-8279-b6dd6ad00d98" TYPE="linux_raid_member"
# blkid /dev/sdb
/dev/sdb: UUID="b3d82551-0226-6687-8279-b6dd6ad00d98" TYPE="linux_raid_member"

# mkfs.ext4 /dev/md0p1
mke2fs 1.43.7 (16-Oct-2017)
Creating filesystem with 101292 1k blocks and 25376 inodes
Filesystem UUID: 652bcf77-fe47-416e-952c-bb0a76a78407
Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

# mount /dev/md0p1 /mnt

# lsblk -o NAME,UUID,MOUNTPOINT /dev/sdb /dev/sdd
NAME      UUID                                 MOUNTPOINT
sdb       b3d82551-0226-6687-8279-b6dd6ad00d98
└─md0
  └─md0p1 652bcf77-fe47-416e-952c-bb0a76a78407 /mnt
sdd       b3d82551-0226-6687-8279-b6dd6ad00d98
└─md0
  └─md0p1 652bcf77-fe47-416e-952c-bb0a76a78407 /mnt


So far so good. Not only does this correctly identify the member devices as raid members, but the two device-level UUIDs match. lsblk shows both as part of the same container device md0 and lists the same mount point. It does NOT list any normal partition containers on sdd or sdb. Note that the md0 device itself does NOT have a UUID; only its members have one, and it's actually the same UUID.



          #echo "UUID=652bcf77-fe47-416e-952c-bbOa76a78407 /mnt ext4 rw,relatime,data=ordered 0 2" >> /etc/fstab
          umount /mnt
          mount /mnt
          cd /mnt
          fallocate -l 50MiB data


          mdstat -> [UU]



Noting that fstab now asks for the file system UUID that lives on the array, let's try running the system without the md array assembled.



# cd
# umount /mnt
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# lsblk /dev/sdb /dev/sdd
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  100M  0 disk
sdd    8:48   0  100M  0 disk


Now the system correctly treats these as raw disks: they have no partition table, so they are not containers. However, if we ask what they are:



# blkid /dev/sdd
/dev/sdd: UUID="b3d82551-0226-6687-8279-b6dd6ad00d98" TYPE="linux_raid_member"


          It's still a linux_raid_member and if we try to mount it:



# mount /dev/sdd /mnt
mount: /mnt: unknown filesystem type 'linux_raid_member'.


          How about:



# mount /mnt
mount: /mnt: can't find UUID=652bcf77-fe47-416e-952c-bb0a76a78407


And that makes sense, because sdd is NOT a container and therefore no file systems on it are probed. However, if I run:



# mdadm --assemble --scan && mount /mnt
mdadm: /dev/md0 has been started with 2 drives.


          And if I stop it again and remove mdadm.conf:



# umount /mnt && mdadm --stop /dev/md0
# modprobe -r raid1
# rm /etc/mdadm.conf
# modprobe raid1
# mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.


Also note that my configured md0 device name is no longer taking effect, and the array is being created at /dev/md/0 automatically. Now let's reboot and see what systemd/Linux does with fstab.



# mdadm --stop /dev/md/0
mdadm: stopped /dev/md/0
# systemctl reboot

# dmesg | grep md0
[   14.550231] md/raid1:md0: active with 2 out of 2 mirrors
[   14.550261] md0: detected capacity change from 0 to 104792064
[   14.836905] md0: p1
[   16.909057] EXT4-fs (md0p1): mounted filesystem with ordered data mode. Opts: data=ordered

# lsblk /dev/md0
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
md0       9:0    0  100M  0 raid1
└─md0p1 259:0    0 98.9M  0 md    /mnt


And again with the raid=noautodetect kernel parameter, which also simulates versions of Linux that would not autodetect all raids and all superblock/metadata versions. It still mounts the raid, because I asked for it in fstab and the raid1 module was force-loaded. So let's try again with the module blacklisted via modprobe.blacklist=raid1:
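(For reference, and my addition rather than part of the original test log: one way to set these kernel parameters persistently on a GRUB-based system such as Arch; paths and commands vary by distribution:)

    # in /etc/default/grub -- assumed GRUB setup, adjust for your system:
    GRUB_CMDLINE_LINUX="raid=noautodetect modprobe.blacklist=raid1"

    # regenerate the GRUB configuration (Arch path shown):
    grub-mkconfig -o /boot/grub/grub.cfg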



(screenshot omitted)



Okay, so what's going on?



(screenshot omitted)



So Linux knows it's a raid-type device even when it has no raid support loaded. When trying to mount the member directly, it correctly detects that it's a raid device; and when mounting via fstab, it doesn't find the UUID, despite that UUID being present in the file system's superblock.



And again, with no information in fstab or mdadm.conf:



# mount /dev/sdd /mnt
mount: /mnt: unknown filesystem type 'linux_raid_member'.


I think the gist of this is that Linux's probing is smart. Besides that, tools like fdisk warn that there is extra information stuffed in the partition table area. You would have to be trying really hard to mistakenly mount one of the member disks by the file system's UUID.
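(A side note of mine, not in the original answer: the on-disk signatures that drive this detection can be listed explicitly. A minimal sketch; the exact output columns vary by util-linux version:)

    # list magic signatures without erasing anything (wipefs only wipes with -a):
    wipefs /dev/sdd

    # low-level superblock probing that bypasses the blkid cache:
    blkid -p /dev/sdd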






answered Jan 4 at 0:48 by jdwolf, edited Jan 4 at 10:13 by ilkkachu
          • Excellent point, it makes sense that blkid (used by udev to scan for UUIDs) works this way. I'm quite curious why sdb1 / sdd1 don't show up after a reboot though; they are instead a result of the kernel partition parser. github.com/torvalds/linux/blob/v4.14/block/partitions/…
            – sourcejedi
            Jan 4 at 9:16










          • I don't understand the first paragraph though; I think this should apply equally to MD in both a whole-disk or a partition.
            – sourcejedi
            Jan 4 at 9:19










          • @sourcejedi There could in theory be some side effect or bug related to this. But when using partition members each partition has its own partition UUID.
            – jdwolf
            Jan 4 at 21:07











          • @sourcejedi So either way, with or without raid awareness, it's still contained as a slave. The idea being that if, as a whole disk, the system would just look past the raid metadata and find a file system UUID, this would apply to whole disks; but if you use partitions, then not only is the device a slave of a holder, the partition table also marks the partition as a raid device. At the very least, if this were an issue, it should show up with whole disks before it showed up with partitions.
            – jdwolf
            Jan 4 at 21:14


















Second answer:













As per the earlier answers, you can use a UUID in fstab with any RAID "level" with no concern, either by:



          • not using mdadm metadata v0.9 or v1.0 (use v1.1 or v1.2 instead; those formats put the md superblock at or near the start of the device, so a bare member no longer begins with the array's file system superblock)

          • using the UUID associated with the MD array itself, instead of the file system's (see the sketch after this list). Of course this requires reconfiguration if you move the FS off the software raid and onto a different device, but you probably don't have reason to be concerned about that.

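A minimal sketch of the second option (my addition; as the comments below note, the usual way to spell this in fstab is the /dev/disk/by-id/md-uuid-* symlink rather than UUID=..., and it assumes the file system sits directly on the array device; the UUID shown reuses the array UUID from the accepted answer as a hypothetical example):

    # show the array's own UUID, distinct from any file system UUID:
    mdadm --detail /dev/md0 | grep UUID
    #        UUID : b3d82551:02266687:8279b6dd:6ad00d98

    # fstab entry referencing the array via the udev-created symlink:
    /dev/disk/by-id/md-uuid-b3d82551:02266687:8279b6dd:6ad00d98  /mnt  ext4  defaults  0 2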






answered Jan 3 at 20:04 by sourcejedi
          • Can you actually use the UUID of the MD device in fstab? I don't know, but I am a bit surprised if it works.
            – ilkkachu
            Jan 3 at 20:45










          • @ilkkachu I think I misread one of the answers. Nevertheless, it should be possible to use the symlink /dev/disk/by-id/md-uuid-.... At a guess, it cannot be used with UUID=... though.
            – sourcejedi
            Jan 3 at 21:09











          • I'm not entirely convinced this is an actual problem. Someone should do a real test of this instead of interpreting an answer that's interpreting documentation.
            – jdwolf
            Jan 3 at 22:04









