What exactly is iodepth in fio? [on hold]

Are the iodepth of fio and the queue depth of storage the same? Then how is it possible to control queue depth with an iodepth parameter from the fio command? Will that create parallel jobs? But then again, there is also an option to run jobs in parallel (won't that be redundant or conflicting?)



I am struggling to understand how fio controls its workloads (especially this iodepth). Can someone please explain the iodepth parameter in detail?



UPDATE #1



I also asked this question on the Flexible I/O Tester mailing list. This is the answer that I received there.




Hi,




On 28 July 2018 at 14:26, Jeevan Patnaik wrote:
Hi,



Is Iodepth of fio and queue depth of storage both same? Then, how is
it possible to control queue depth with an iodepth parameter from fio




fio's iodepth and the I/O depth your OS achieves when submitting I/O down to
storage are linked, but they most certainly do not have to be the same, and
the relationship depends heavily on your operating system, storage, the fio
ioengine used, and fio parameters. Basically, fio submits I/O to your
operating system in a particular way. Depending on how the I/O was submitted,
the OS can choose to submit it further down in a more optimal or different
fashion (e.g. by batching requests together, breaking requests that are too
big into smaller pieces, delaying I/O, etc.). Additionally, as stated in the
HOWTO, iodepth only affects asynchronous ioengines (and note that text
includes warnings about the need to use direct=1 on Linux).




command? Will that be creating parallel jobs, but then again there is
an option to run jobs in parallel also (will that not be a trivial or
conflicting?)




I'm going to give a brief summary but note I'm not trying to cover
caching/readahead/plugging/block device layers (e.g. RAID/LVM) etc:



A synchronous fio I/O engine submits a single I/O to the OS, waits for
it to be "acknowledged" as having been received and then sends another
I/O etc.



If an fio I/O engine is able to submit I/O to the OS in a truly
asynchronous fashion (see link above) then the key is that it does NOT
have to wait for earlier I/O to be "acknowledged" before submitting
new I/O. If the iodepth is only 1, it has to behave in a fashion
similar to a synchronous I/O engine. However, suppose a job
specifies an iodepth of 32. In that case, up to 32 I/Os can be
outstanding before fio will wait to submit any more
I/O (just what the watermarks are and how much is submitted at a time
is controlled by the iodepth_batch_* options). This can be more
efficient and achieve higher throughput, but often comes at a cost
of higher latency.



fio will not create parallel fio jobs just because of iodepth BUT
using parallel fio jobs is another way of increasing the amount of
simultaneous I/O being submitted at any given time (by using different
threads/processes) and using both on the same device will act in
tandem (so if you have two fio jobs submitting asynchronous I/O at an
iodepth of 16 each, your OS could actually be receiving 32 I/Os at
any given time). There can be reasons for combining the two (e.g. you
have multiple devices and they are so fast that one CPU can't keep up
even when submitting I/O asynchronously).




I am struggling to understand how fio is controlling it's workloads
(especially about this iodepth). Can someone please explain the iodepth
parameter in detail.




I will note you've also asked this question over on stackexchange
(What exactly is iodepth in fio?). You may want to link to
https://www.spinics.net/lists/fio/msg07190.html from there to help
others who may have a similar question...
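To make the mailing-list advice concrete, here is a minimal fio job file sketch (my own illustration, not part of the list reply; the file name is a placeholder) for an asynchronous engine on Linux:

```ini
# depth32.fio -- sketch: iodepth only matters with an async engine
[global]
# libaio is asynchronous; with synchronous engines iodepth>1 has no effect
ioengine=libaio
# direct=1 is needed on Linux, since buffered I/O is not async with libaio
direct=1
rw=randread
bs=4k
size=256M
runtime=30
time_based=1

[depth32]
# placeholder test file -- point this at something safe to read
filename=/tmp/fio.testfile
# keep up to 32 I/Os in flight; how they are batched is tuned
# by the iodepth_batch_* options
iodepth=32
```

Running it as `fio depth32.fio` and checking the "IO depths" line of the output shows what depth was actually achieved.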








put on hold as off-topic by Jeff Schaller, schily, Rui F Ribeiro, Jesse_b, Romeo Ninov Aug 4 at 16:14


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question has been posted on multiple sites. Cross-posting is strongly discouraged; see the help center and community FAQ for more information." – Jeff Schaller, schily, Rui F Ribeiro, Jesse_b, Romeo Ninov












  • This question was also asked (and seemingly answered) over on the fio mailing list - spinics.net/lists/fio/msg07190.html ...
    – Anon
    Aug 1 at 6:36










  • ...and asked again over on serverfault.com/questions/923487/… ...
    – Anon
    Aug 3 at 22:46










  • The OP asked this Q in the Flexible I/O Tester forum and got the most complete A'er you're going to find on this topic - spinics.net/lists/fio/msg07191.html.
    – slm♦
    2 days ago
















edited 2 days ago by slm♦

asked Jul 28 at 12:22 by Jeevan Patnaik




2 Answers
will that not be a trivial?




Assume direct IO, as required for iodepth= to work.



A sequential job with iodepth=2 will submit two sequential IO requests at a time.



A sequential job with numjobs=2 will have two threads, each submitting sequential IO.



These are different IO patterns. The latter will generate 2x the bandwidth across the IO bus, even if the physical IO reduces back to 1x due to device caches. (I suspect the two jobs would tend to remain in lockstep due to device caches, unless you used multiple files and a randomized file_service_type=.) If the IOs are synchronous writes (sync=1), the physical IO would not be reduced at all, unless the device is doing an unusual amount of optimization (perhaps a de-duplicating SSD controller).
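As a sketch (hypothetical file names, assuming libaio with direct I/O as above), the two patterns this answer contrasts could be written as one job file with two sections:

```ini
# depth-vs-jobs.fio -- contrast iodepth=2 with numjobs=2
[global]
ioengine=libaio
direct=1
rw=read
bs=128k
size=256M
filename=/tmp/fio.testfile

[one-submitter-depth-2]
# a single job keeping two sequential I/Os in flight at once
iodepth=2
# wait for this job to finish before starting the next section
stonewall

[two-submitters]
# two threads, each submitting sequential I/O at the default depth of 1
numjobs=2
stonewall
```

Comparing the per-job bandwidth and depth statistics in the output shows how the two ways of adding parallelism differ.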






answered Jul 29 at 7:33 by sourcejedi
Per the fio documentation (HOWTO):




    .. option:: iodepth=int



    Number of I/O units to keep in flight against the file. Note that
    increasing iodepth beyond 1 will not affect synchronous ioengines (except
    for small degrees when :option:verify_async is in use). Even async
    engines may impose OS restrictions causing the desired depth not to be
    achieved. This may happen on Linux when using libaio and not setting
    :option:direct=1, since buffered I/O is not async on that OS. Keep an
    eye on the I/O depth distribution in the fio output to verify that the
    achieved depth is as expected. Default: 1.




The tutorial titled Fio Output Explained has this example:




    Fio has an iodepth setting that controls how many IOs it issues to the OS at any given time. This is entirely application-side, meaning it is not the same thing as the device's IO queue. In this case, iodepth was set to 1 so the IO depth was always 1 100% of the time.



     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%


submit and complete represent the number of IOs submitted at a time by fio and the number completed at a time. In the case of the thrashing test used to generate this output, iodepth was at its default value of 1, so 100% of IOs were submitted one at a time, placing the results in the 1-4 bucket. Basically these only matter if iodepth is greater than 1.
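As a quick experiment (my own sketch; the file name is a placeholder), a two-stage job file makes the shift in the reported depth distribution visible:

```ini
# depth-dist.fio -- compare the reported IO depth distribution
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
size=64M
filename=/tmp/fio.testfile

[depth1]
# expect the output's "IO depths" line to show 1=100.0%
iodepth=1
stonewall

[depth16]
# the distribution should now cluster in the 16 bucket
iodepth=16
stonewall
```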







    • (I think specifically the intent was that io_submit() returns -EAGAIN when the queue is full, but that was only implemented recently, and before then it would end up blocking. lwn.net/Articles/724198 )
      – sourcejedi
      Jul 28 at 21:35










    • @sourcejedi The numjobs option also does the same thing, right? It allows jobs to be submitted in parallel. If these parallel jobs can also create the same queue depth at the storage level, why iodepth again?
      – Jeevan Patnaik
      Jul 29 at 6:51

















    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    1
    down vote














    will that not be a trivial?




    Assume direct IO, as required for iodepth= to work.



    A sequential job with iodepth=2 will submit two sequential IO requests at a time.



    A sequential job with numjobs=2 will have two threads, each submitting sequential IO.



    These are different IO patterns. The latter will generate 2x the bandwidth across the IO bus, even if the physical IO reduces back to 1x due to device caches. (I suspect the two jobs would tend to remain in lockstep due to device caches, unless you used multiple files and a randomized file_service_type=). If the IOs are synchronous writes (sync=true), the physical IO would not be reduced at all, unless the device is doing an unusual amount of optimization (perhaps a de-duplicating SSD controller).






    share|improve this answer

























      up vote
      1
      down vote














      will that not be a trivial?




      Assume direct IO, as required for iodepth= to work.



      A sequential job with iodepth=2 will submit two sequential IO requests at a time.



      A sequential job with numjobs=2 will have two threads, each submitting sequential IO.



      These are different IO patterns. The latter will generate 2x the bandwidth across the IO bus, even if the physical IO reduces back to 1x due to device caches. (I suspect the two jobs would tend to remain in lockstep due to device caches, unless you used multiple files and a randomized file_service_type=). If the IOs are synchronous writes (sync=true), the physical IO would not be reduced at all, unless the device is doing an unusual amount of optimization (perhaps a de-duplicating SSD controller).






      share|improve this answer























        up vote
        1
        down vote










        up vote
        1
        down vote










        will that not be a trivial?




        Assume direct IO, as required for iodepth= to work.



        A sequential job with iodepth=2 will submit two sequential IO requests at a time.



        A sequential job with numjobs=2 will have two threads, each submitting sequential IO.



        These are different IO patterns. The latter will generate 2x the bandwidth across the IO bus, even if the physical IO reduces back to 1x due to device caches. (I suspect the two jobs would tend to remain in lockstep due to device caches, unless you used multiple files and a randomized file_service_type=). If the IOs are synchronous writes (sync=true), the physical IO would not be reduced at all, unless the device is doing an unusual amount of optimization (perhaps a de-duplicating SSD controller).






        share|improve this answer














        will that not be a trivial?




        Assume direct IO, as required for iodepth= to work.



        A sequential job with iodepth=2 will submit two sequential IO requests at a time.



        A sequential job with numjobs=2 will have two threads, each submitting sequential IO.



        These are different IO patterns. The latter will generate 2x the bandwidth across the IO bus, even if the physical IO reduces back to 1x due to device caches. (I suspect the two jobs would tend to remain in lockstep due to device caches, unless you used multiple files and a randomized file_service_type=). If the IOs are synchronous writes (sync=true), the physical IO would not be reduced at all, unless the device is doing an unusual amount of optimization (perhaps a de-duplicating SSD controller).







        share|improve this answer













        share|improve this answer



        share|improve this answer











        answered Jul 29 at 7:33









        sourcejedi

        18k22375




        18k22375






















            up vote
            0
            down vote













            Per the Linux kernel docs:




            .. option:: iodepth=int



            Number of I/O units to keep in flight against the file. Note that
            increasing iodepth beyond 1 will not affect synchronous ioengines (except
            for small degrees when :option:verify_async is in use). Even async
            engines may impose OS restrictions causing the desired depth not to be
            achieved. This may happen on Linux when using libaio and not setting
            :option:direct=1, since buffered I/O is not async on that OS. Keep an
            eye on the I/O depth distribution in the fio output to verify that the
            achieved depth is as expected. Default: 1.




            This tutorial titled: Fio Output Explained had this example:




            Fio has an iodepth setting that controls how many IOs it issues to the OS at any given time. This is entirely application-side, meaning it is not the same thing as the device's IO queue. In this case, iodepth was set to 1 so the IO depth was always 1 100% of the time.



             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
            complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%


            submit and complete represent the number of submitted IOs at a time by fio and the number completed at a time. In the case of the thrashing test used to generate this output, the iodepth is at the default value of 1, so 100% of IOs were submitted 1 at a time placing the results in the 1-4 bucket. Basically these only matter if iodepth is greater than 1.







            share|improve this answer























            • (I think specifically the intent was that io_submit() returns -EAGAIN when the queue is full, but that was only implemented recently, and before then it would end up blocking. lwn.net/Articles/724198 )
              – sourcejedi
              Jul 28 at 21:35










            • @sourcejedi numjobs option also does the samething right? It allows jobs to be submitted in parallel. If these parallel jobs can also create the same queue depth at the storage level, why iodepth again?
              – Jeevan Patnaik
              Jul 29 at 6:51














            up vote
            0
            down vote













            Per the Linux kernel docs:




            .. option:: iodepth=int



            Number of I/O units to keep in flight against the file. Note that
            increasing iodepth beyond 1 will not affect synchronous ioengines (except
            for small degrees when :option:verify_async is in use). Even async
            engines may impose OS restrictions causing the desired depth not to be
            achieved. This may happen on Linux when using libaio and not setting
            :option:direct=1, since buffered I/O is not async on that OS. Keep an
            eye on the I/O depth distribution in the fio output to verify that the
            achieved depth is as expected. Default: 1.




            This tutorial titled: Fio Output Explained had this example:




            Fio has an iodepth setting that controls how many IOs it issues to the OS at any given time. This is entirely application-side, meaning it is not the same thing as the device's IO queue. In this case, iodepth was set to 1 so the IO depth was always 1 100% of the time.



             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
            complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%


            submit and complete represent the number of submitted IOs at a time by fio and the number completed at a time. In the case of the thrashing test used to generate this output, the iodepth is at the default value of 1, so 100% of IOs were submitted 1 at a time placing the results in the 1-4 bucket. Basically these only matter if iodepth is greater than 1.







            share|improve this answer























            • (I think specifically the intent was that io_submit() returns -EAGAIN when the queue is full, but that was only implemented recently, and before then it would end up blocking. lwn.net/Articles/724198 )
              – sourcejedi
              Jul 28 at 21:35










            • @sourcejedi numjobs option also does the samething right? It allows jobs to be submitted in parallel. If these parallel jobs can also create the same queue depth at the storage level, why iodepth again?
              – Jeevan Patnaik
              Jul 29 at 6:51












            up vote
            0
            down vote










            up vote
            0
            down vote

















            edited Jul 28 at 21:07


























            answered Jul 28 at 20:58









            slm♦

            232k65479649
















