I/O queue on an LVM device vs. I/O queue on the underlying device(s)

dm-1 / alan_dell_2016-swap is an LVM logical volume, physically stored on partition sda7 of the disk sda.



NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0 465.8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot/efi
├─sda2                        8:2    0   128M  0 part
├─sda3                        8:3    0  50.5G  0 part
├─sda4                        8:4    0   450M  0 part
├─sda5                        8:5    0   7.6G  0 part
├─sda6                        8:6    0     1G  0 part /boot
└─sda7                        8:7    0 371.4G  0 part
  ├─alan_dell_2016-fedora   253:0    0    40G  0 lvm  /
  ├─alan_dell_2016-swap     253:1    0     2G  0 lvm  [SWAP]
  └─alan_dell_2016-home     253:2    0   318G  0 lvm  /home


What does it mean for I/O to be queued on dm-1, as opposed to sda?



Does the queue on dm-1 feed into the queue on sda? Or are they two separate queues, which the system somehow arbitrates between (and if so, how)? Or is there really only one queue, and the system just reports separate statistics to show which LV (if any) the I/O was issued on?



I ask because I have seen the LVM device's queue grow longer than that of the underlying device.



Note that nr_requests (the maximum queue length?) of the LVM device is the same as that of the underlying device, and nr_requests on the LVM device cannot be changed. Also, the sysfs attribute queue/scheduler on the LVM device just shows none; on a physical device it looks like noop deadline [cfq].
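For reference, here is how I read those attributes from sysfs. This is just a sketch: the helper name is my own, and the device names sda / dm-1 come from the lsblk output above (adjust for your system).

```shell
# Print a block device's queue attribute, or a marker if the sysfs path
# is absent (e.g. the device does not exist on this machine).
sysfs_queue_attr() {
  cat "/sys/block/$1/queue/$2" 2>/dev/null || echo "(unavailable)"
}

for dev in sda dm-1; do
  echo "$dev nr_requests: $(sysfs_queue_attr "$dev" nr_requests)"
  echo "$dev scheduler:   $(sysfs_queue_attr "$dev" scheduler)"
done
```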



EDIT: I found a partial answer regarding the existence of queue/scheduler on LVM here. Apparently there is a type of device-mapper target for multipath, where the I/O scheduler is attached to the dm device (and the scheduler on the underlying devices has no effect). But for an LVM logical volume, the scheduler is attached to the underlying device only. This tells us the I/O scheduler is not aware of the stacked devices, but it does not really explain what the reported queue lengths mean; if anything it makes them more mysterious. It says that some dm devices are "request-based", meaning they do not have a queue of their own. LVM logical volumes are not request-based, so they do have a queue, but that queue is apparently not scheduled, and for some reason its length cannot be changed?



My kernel version is 4.19.2-200.fc28.x86_64. sda (and dm-1) are single-queue devices; they do not use the new multi-queue block layer.
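One way to confirm the single-queue claim, assuming the sysfs layout of kernels from this era: a blk-mq device exposes a /sys/block/&lt;dev&gt;/mq directory, while legacy single-queue devices do not. A sketch (the helper name is my own):

```shell
# Report whether a device uses the multi-queue (blk-mq) or legacy block
# layer, based on the presence of the mq directory in sysfs.
queue_mode() {
  if [ -d "/sys/block/$1/mq" ]; then echo blk-mq; else echo legacy; fi
}

for dev in sda dm-1; do
  echo "$dev: $(queue_mode "$dev")"
done
```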



Extreme example (atop output):



LVM | ll_2016-swap | busy 59% | read 24328 | write 175735 | KiB/r 4 | KiB/w 4 | MBr/s 0.2 | MBw/s 1.1 |avq 684.13| avio 1.76 ms



DSK | sda | busy 93% | read 88967 | write 45808 | KiB/r 81 | KiB/w 152 | MBr/s 11.8 | MBw/s 11.4 |avq 96.50| avio 4.12 ms



Slightly less extreme example, output from iostat -d -x -y during cp of a large file:




Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sda 123.00 55.00 26932.00 17812.00 16.00 307.00 11.51 84.81 16.23 106.96 7.31 218.96 323.85 5.62 100.00
dm-0 23.00 40.00 200.00 212.00 0.00 0.00 0.00 0.00 36.09 45.98 2.63 8.70 5.30 13.44 84.70
dm-1 12.00 304.00 48.00 1216.00 0.00 0.00 0.00 0.00 35.42 146.51 44.96 4.00 4.00 1.26 39.90
dm-2 102.00 10.00 26112.00 18432.00 0.00 0.00 0.00 0.00 16.25 324.40 4.59 256.00 1843.20 8.93 100.00
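As a sanity check on those numbers (my own arithmetic, not part of the original output): by Little's law, aqu-sz should roughly equal rate times mean wait, summed over reads and writes. For the dm-1 row this holds almost exactly:

```shell
# Little's law check on the dm-1 row above:
# aqu-sz ~= r/s * r_await + w/s * w_await (awaits are in ms, so divide by 1000)
awk 'BEGIN { printf "%.2f\n", (12.00*35.42 + 304.00*146.51)/1000 }'
```

This prints 44.96, matching the reported aqu-sz of 44.96. For sda the same arithmetic only agrees approximately (7.88 vs the reported 7.31), presumably because iostat derives aqu-sz from the kernel's weighted time-in-queue counter rather than from the await fields.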









Tags: linux linux-kernel lvm device-mapper iostat

asked Nov 25 at 15:31 by sourcejedi (edited 13 hours ago)