Reading iostat utilization with ZFS zvols

First off, I asked this question 5 days ago over on Server Fault. I hope I'm not doing anything wrong by bringing it over here to Unix & Linux Stack Exchange. I have also asked this question on three other sites not related to Stack Exchange, with no answers. I plan on updating each site with an answer, if I can just get it answered. Here we go.



I am having a hard time understanding the output of iostat -x, specifically with regard to ZFS zvols. I'm running Proxmox 4.4, fully updated, and encountering generally poor I/O performance.



While troubleshooting the sluggish performance, I was watching iostat -x 1 and saw this sort of utilization reading almost constantly.



Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00   77.00  115.00   308.00   640.00     9.88     2.02   10.33    9.92   10.61   3.58  68.80
sdb               0.00     0.00   81.00  116.00   324.00   644.00     9.83     1.32    6.72    6.42    6.93   2.50  49.20
...
sde               0.00     0.00   77.00  117.00   308.00   640.00     9.77     1.16    6.25    5.25    6.91   2.35  45.60
sdf               0.00     0.00   78.00  116.00   312.00   640.00     9.81     1.25    6.45    5.64    7.00   2.47  48.00
...
zd32              0.00     0.00    0.00  197.00     0.00   788.00     8.00     1.09    5.54    0.00    5.54   5.06  99.60


What confuses me is that the utilization percentage for zd32, my VM's zvol, is at nearly 100%, while the underlying storage devices sit at roughly 50% utilization.



My question is: Shouldn't the zvol utilization reflect the utilization of the underlying storage devices?



For reference, there are other VMs on this system, but this troubleshooting was done after hours, so they were idle; this one VM, running Windows updates, was the only busy one. The zpool is a RAID-Z2 of 7200 RPM SATA disks, so it's not exactly built for incredible speed. I'm just wondering about the utilization right now.
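In case it helps anyone reproduce this, iostat accepts an explicit device list, so the zvol and its backing disks can be watched side by side (the device names here are just the ones from my output above; adjust to match your system):

iostat -x sda sdb sde sdf zd32 1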







asked Apr 2 at 2:27 by user246270




















1 Answer






Here are some hints. Yes, it should, because a ZFS volume is created on a zpool, which in turn sits on storage devices. If that storage is shared with other consumers, they can affect the ZFS pools and volumes.



Unfortunately, I do not know Proxmox, but %util usually shows the percentage of time the device had a non-empty queue of requests, and avgqu-sz is the average number of requests in that queue. Both of these values also depend on the type and model of the storage, which may support quite a deep queue, so a high %util may or may not be a bad symptom. Therefore, first of all it's better to look at await, r/s, w/s, rkB/s and wkB/s to see whether the volume has a real workload and real performance issues.
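As a rough sketch of where %util comes from (this assumes the usual Linux /proc/diskstats layout, where field 13 is the milliseconds spent doing I/O, and borrows the zd32 name from the question):

# Sample the busy-time counter twice, one second apart; the delta in ms
# over a 1000 ms interval, divided by 10, approximates iostat's %util.
t1=$(awk '$3 == "zd32" { print $13 }' /proc/diskstats)
sleep 1
t2=$(awk '$3 == "zd32" { print $13 }' /proc/diskstats)
echo "zd32 %util ~ $(( (t2 - t1) / 10 ))%"

A device can therefore show near 100% while still accepting more requests; busy does not necessarily mean saturated.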



There is a special command, zpool iostat, for monitoring zpool statistics.
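For example (rpool here is only a guess at the Proxmox default pool name; substitute the name shown by zpool list):

# Per-vdev bandwidth and operation counts, refreshed every second
zpool iostat -v rpool 1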






answered Apr 11 at 14:34 by Mikhail Zakharov (accepted)






















                     
