multipathing and mdadm hide /dev/sdX devices.

We have a Linux server that uses multipathing, and we are seeing what I think is a race condition between multipathing and mdadm.

If we build the RAID on PowerPath devices like /dev/mapper/mpathab.. then after a reboot the RAID is either degraded or assembled from /dev/sdX devices, so one way or the other it does not keep its initial configuration.

We installed EMC PowerPath, as the SAN is a VNX, and created the RAID like this:



mdadm --verbose --create /dev/md0 --level=mirror --raid-devices=2 /dev/emcpowera /dev/emcpowerb


But after a reboot, this is the status of the RAID:



# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 11 15:14:47 2018
Raid Level : raid1
Array Size : 419298304 (399.87 GiB 429.36 GB)
Used Dev Size : 419298304 (399.87 GiB 429.36 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Jun 12 15:25:02 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : unknown

Name : cjlnwp01:0 (local to host cjlnwp01)
UUID : d2779403:bd8d370b:bdea907e:bb0e3c72
Events : 567

Number Major Minor RaidDevice State
0 65 0 0 active sync /dev/sdq
1 8 160 1 active sync /dev/sdk


It looks like mdadm on reboot just takes the first devices it finds?



How can I make sure that a device that is part of a multipath map does not also appear as separate /dev/sdX devices?



As set up, devices sdc through sdq in the lsblk output below should not appear:



sdc 8:32 0 400G 0 disk
sde 8:64 0 400G 0 disk
sdg 8:96 0 400G 0 disk
sdi 8:128 0 400G 0 disk
sdk 8:160 0 400G 0 disk
sdm 8:192 0 400G 0 disk
sdo 8:224 0 400G 0 disk
sdq 65:0 0 400G 0 disk
emcpowera 120:0 0 400G 0 disk
└─md0 9:0 0 399.9G 0 raid1
emcpowerb 120:16 0 400G 0 disk
└─md0 9:0 0 399.9G 0 raid1


Is there some sort of race condition between mdadm and multipathing that could be fixed by adding ordering dependencies in systemd?
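What I had in mind is something along these lines (only a sketch: the unit names are assumptions, PowerPath's init unit may be called something else on OEL, and if the array is actually assembled by udev or inside the initramfs rather than by a service, unit ordering alone would not help):

# /etc/systemd/system/mdmonitor.service.d/wait-for-multipath.conf
# Hypothetical drop-in: delay md handling until the
# multipath/PowerPath devices have been set up.
[Unit]
After=multipathd.service
Wants=multipathd.service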



For the record, the OS is OEL 7.5 on an HPE ProLiant DL380 G9 server.







asked Jun 13 at 14:33 by danidar




















1 Answer



















You can use the DEVICE entry in mdadm.conf to make it consider only specific device names and ignore everything else. By default, mdadm accepts every block device listed in /proc/partitions.



          DEVICE /dev/emc*
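On RHEL-family systems the array may already be assembled from the initramfs, which carries its own copy of mdadm.conf, so after editing the file you would likely also want to rebuild the initramfs, and you can check the filter by re-assembling. Roughly (a sketch against a stock OEL 7 install):

# re-assemble using only devices matched by the DEVICE line
mdadm --stop /dev/md0
mdadm --assemble --scan --verbose

# embed the updated /etc/mdadm.conf into the initramfs
dracut --force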


          Unfortunately this can only be considered a lousy workaround. It's still a huge mess, as there are many cases that might end up using the wrong device.



          This is also an issue you encounter with loop devices:



          # losetup --find --show /dev/sdi2
          /dev/loop4


Now /dev/loop4 and /dev/sdi2 are identical, which also means they share the same UUID:



          # blkid /dev/sdi2 /dev/loop4
          /dev/sdi2: UUID="0a73725c-7e29-4171-be5d-be31d56bf8fe" TYPE="ext2"
          /dev/loop4: UUID="0a73725c-7e29-4171-be5d-be31d56bf8fe" TYPE="ext2"


          Now, which device should be used when you mount UUID=0a73725c-7e29-4171-be5d-be31d56bf8fe?



          # mount UUID=0a73725c-7e29-4171-be5d-be31d56bf8fe /mnt/tmp
          # df -h /mnt/tmp
          Filesystem Size Used Avail Use% Mounted on
          /dev/loop4 2.0G 490M 1.4G 26% /mnt/tmp


          It ended up picking the loop device in this case, which may or may not have been my intention.



Duplicating devices like this is a huge problem: suddenly, UUIDs that are supposed to be unique are not unique any more, and the wrong device may end up being used.



LVM also struggles with this issue; it is described in some detail here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/duplicate_pv_multipath



Unfortunately that documentation does not offer a proper solution either; it simply suggests a device-filter workaround.
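For comparison, the LVM filter workaround would look roughly like this in /etc/lvm/lvm.conf (a sketch; I am assuming PowerPath device names here, and the exact regex depends on your setup):

devices {
    # accept only PowerPath pseudo-devices, reject all other block devices
    global_filter = [ "a|^/dev/emcpower|", "r|.*|" ]
}

It has the same weakness as the mdadm DEVICE line: it hides the duplicates instead of removing them.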




          For a proper solution, it would be best to completely avoid having two different block devices representing the same data. Usually this involves putting data at an offset. I do not know if multipath has an offset feature by default.



          With partition tables, mdadm, LUKS, LVM, you usually get the offset for free since these have a header at the start of their parent device, and the child block devices they provide are offset from that.



Thus on /dev/sdx you only see the partition table, on /dev/sdx1 only the mdadm header, on /dev/md1 only the LUKS header, on /dev/mapper/cryptomd1 only the LVM header, and on /dev/VG/LV only the filesystem, because each of these devices is offset from its parent's data.
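You can see these offsets directly with mdadm (a sketch; device name taken from the example above, and the exact values vary by metadata version and mdadm release):

# for 1.2 metadata the superblock sits 8 sectors (4 KiB) into the
# member, and the array data starts at "Data Offset", so reading
# the md device itself never shows its own superblock
mdadm --examine /dev/sdx1 | grep -E 'Super Offset|Data Offset'
    Data Offset : 2048 sectors
   Super Offset : 8 sectors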



If you did the same for your multipath setup, the mdadm metadata would be visible only on /dev/emcpowera, not on /dev/sdk, and there would be no way for the latter to be mistakenly assembled into a RAID.






answered Jun 13 at 16:02, edited Jun 13 at 16:11 by frostschutz























• How do I create the offset? Creating a partition and then the md device on it still puts the Linux RAID header on all the /dev/sdX devices; it even adds the partition there. See below:
/dev/sdq1: UUID="434fb10c-0010-abcd-09cf-3408ba0def44" UUID_SUB="55a35449-e8b0-9200-6623-2d5be220fe82" LABEL="" TYPE="linux_raid_member"
/dev/emcpowera1: UUID="434fb10c-0010-abcd-09cf-3408ba0def44" UUID_SUB="55a35449-e8b0-9200-6623-2d5be220fe82" LABEL="" TYPE="linux_raid_member"
/dev/emcpowerb1: UUID="434fb10c-0010-abcd-09cf-3408ba0def44" UUID_SUB="8ac3f534-8685-f18b-7ef1-52cbc7422a20" LABEL="" TYPE="linux_raid_member"
– danidar, Jun 13 at 16:39










