IO Wait is consistently touching high values around 60-70% during load run
























I am stuck on an iowait-related problem: the server I am monitoring shows a very high iowait value during my load run (50%-70%). I generated this data with the sar command. The ideal value should be below 8%-9%, as the server has 12 cores (1/12 ≈ 0.08); I read this somewhere and based my assumption on it.



What can be done to rectify this high iowait, and how does it relate to other factors on the server that I could check to improve performance?
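Assuming the data came from sysstat's sar, one quick way to isolate the bad intervals is to run awk over the captured `sar -u` output. The file contents below are hypothetical values, not taken from the actual report:

```shell
# Hypothetical excerpt of `sar -u` output captured during the load run.
cat <<'EOF' > sar_cpu.txt
12:00:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
12:00:11        all     12.03      0.00      4.51     62.10      0.00     21.36
12:00:21        all     10.87      0.00      3.98      8.02      0.00     77.13
EOF
# Column 6 is %iowait; flag every interval where it exceeds 30%.
awk 'NR > 1 && $6 > 30 { print $1, "iowait=" $6 "%" }' sar_cpu.txt
# prints: 12:00:11 iowait=62.10%
```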





























  • Could you please provide the physical disk model? ls /dev/disk/by-id/ or something. Honestly, I haven't heard of a disk with cores and I am very curious. – mikst, Aug 21 at 9:36

  • Sorry, I rectified the mistake: the server has 12 cores. – M. Rafi, Aug 21 at 9:48














Tags: disk, io, sar






asked Aug 21 at 9:10 by M. Rafi, edited Aug 21 at 9:47







2 Answers






























The more powerful the CPU, the higher the iowait can go, not the other way around.

In general, the following can help reduce iowait:

  1. Optimising application code where possible/applicable; for example, a suboptimal database query can force the DBMS to execute an inefficient plan and cause excessive disk load.

  2. Adding more RAM if your load is read-heavy.

  3. Making the storage subsystem faster: faster disks, faster RAID, a faster storage controller, write-back caching. That's a science of its own.

Sorry, with a generic question like that, there is only a generic answer.
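Before picking a remedy from a list like the one above, it helps to confirm which device is actually saturated. A minimal sketch, assuming you have `iostat -x` output captured during the run (the sample values below are hypothetical):

```shell
# Hypothetical excerpt of `iostat -x` output captured during the load run.
cat <<'EOF' > iostat_x.txt
Device   r/s    w/s   rkB/s   wkB/s  await  %util
sda     12.0  340.5   96.0   5448.0  41.20  97.30
sdb      0.5    1.2    4.0      9.6   0.85   0.40
EOF
# Flag devices whose utilisation exceeds 80%: a high %util together with a
# high await usually means the device itself is the bottleneck.
awk 'NR > 1 && $7 > 80 { print $1, "saturated: %util=" $7 ", await=" $6 "ms" }' iostat_x.txt
# prints: sda saturated: %util=97.30, await=41.20ms
```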






– mikst, answered Aug 21 at 9:59









  • Thank you @mikst, I'll dig deep into the inputs you have given and come back with more specific questions. – M. Rafi, Aug 21 at 10:26































Allow me to reveal a small Linux secret: there are no reliable iowait statistics in Linux. That is the plain truth. From PROC(5) we read:

iowait (since Linux 2.5.41)

(5) Time waiting for I/O to complete. This value is not reliable, for
the following reasons:

  1. The CPU will not wait for I/O to complete; iowait is the time that a task is waiting for I/O to complete. When a CPU goes into idle
    state for outstanding task I/O, another task will be scheduled on this
    CPU.

  2. On a multi-core CPU, the task waiting for I/O to complete is not running on any CPU, so the iowait of each CPU is difficult to
    calculate.

  3. The value in this field may decrease in certain conditions.

So my suggestion is to forget about iowait measurements in Linux.
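Point 3 of that man-page excerpt has a practical consequence: a monitoring tool that naively subtracts successive iowait readings from /proc/stat can produce a negative delta. A tiny sketch with hypothetical tick values:

```shell
# Field 6 of the aggregate "cpu" line in /proc/stat is the iowait counter,
# in clock ticks. The values below are hypothetical snapshots.
t1=5217   # iowait ticks at the first sample
t2=5203   # iowait ticks ten seconds later: lower, as PROC(5) warns it may be
echo $(( t2 - t1 ))   # a naive delta goes negative, prints: -14
```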






– Bob, answered Aug 21 at 10:42, edited Aug 21 at 10:59




















