Why do diskspd and fio generate odd numbers on Linux vs. Windows?
UPDATE
Thanks to Anon's answer, I figured out that the file system was at fault: I had been using NTFS. The following are the results using FAT32 instead.
Windows:
diskspd64 -b128K -d5 -o32 -t1 -W0 -Sh -w0 cdm
508, 518, 520, 513, 513
fio --name=dontknow --ioengine=windowsaio --thread --size=1024m --bs=128k --time_based=1 --runtime=5s --iodepth=32 --numjobs=1 --rw=read --direct=1 --buffered=0 --startdelay=0s --filename=cdm
557, 557, 557, 558, 556
Linux:
diskspd -b128K -d5 -o32 -t1 -W0 -Sh -w0 cdm
529, 528, 529, 529, 529
fio --name=dontknow --ioengine=libaio --thread --size=1024m --bs=128k --time_based=1 --runtime=5s --iodepth=32 --numjobs=1 --rw=read --direct=1 --buffered=0 --startdelay=0s --filename=cdm
560, 560, 560, 560, 559
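(Side note for anyone reproducing these five-run figures: a small shell loop is one way to collect them. This is just a sketch; it assumes fio's --minimal terse output, where field 7 should be the read bandwidth in KiB/s for terse v3 - check against your fio version.)

# sketch: repeat the read test 5 times and print the bandwidth per run
for i in 1 2 3 4 5; do
    fio --name=dontknow --ioengine=libaio --thread --size=1024m --bs=128k \
        --time_based=1 --runtime=5s --iodepth=32 --numjobs=1 --rw=read \
        --direct=1 --buffered=0 --startdelay=0s --filename=cdm --minimal \
      | awk -F';' -v run="$i" '{printf "run %s: %.0f MiB/s\n", run, $7/1024}'
done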
ORIGINAL QUESTION
Reading the same input file on the same drive, these are the resulting read speeds in MB/s (I ran each command 5 times). On Windows:
diskspd64 -b128k -d5 -o32 -t1 -W0 -S -w0 cdm
555, 555, 556, 556, 555
fio --name=doesntmatter --ioengine=windowsaio --thread=1 --size=1024m --bs=128k --time_based=1 --runtime=5s --iodepth=32 --numjobs=1 --rw=read --direct=1 --startdelay=0s --filename=cdm
561, 553, 562, 561, 558
And on Linux (to be precise: KDE neon useredition-20180802):
diskspd -b128K -d5 -o32 -t1 -W0 -Sh -w0 cdm
1800, 2000, 1925, 1891, 1973
fio --name=doesntmatter --ioengine=libaio --thread=1 --size=1024m --bs=128k --time_based=1 --runtime=5s --iodepth=32 --numjobs=1 --rw=read --direct=1 --startdelay=0s --filename=cdm
2637, 2826, 2593, 2770
I would also like to mention that this is a SATA SSD with an official maximum read speed of 555 MB/s, so the Windows numbers seem to be accurate.
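(For completeness, the fio parameters above can also be kept in a job file, which makes it easy to keep the Windows and Linux runs identical except for the ioengine line. This is a sketch of the same parameters, not something from the original post:)

; read.fio - same parameters as the fio command lines above
; on Windows, change ioengine=libaio to ioengine=windowsaio
[doesntmatter]
ioengine=libaio
thread
size=1024m
bs=128k
time_based=1
runtime=5s
iodepth=32
numjobs=1
rw=read
direct=1
startdelay=0s
filename=cdm

Run it with: fio read.fio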
linux windows hard-disk performance benchmark
asked Aug 8 at 17:36, edited Aug 8 at 23:35 – AndyO
Weird! iotop should measure the bandwidth of uncached IOs; I wonder if it shows more realistic figures than the statistics at the end of the fio run.
– sourcejedi, Aug 8 at 18:22
It doesn't make sense for me to use other tools unless they can be fine-tuned like diskspd and fio and produce similar results. Somebody needs to finally develop a cross-platform alternative to CrystalDiskMark, and that somebody might be me. That is - IF someone can answer my question...
– AndyO, Aug 8 at 19:24
Sorry, I meant to try and illuminate the surprising results you're getting, not to suggest an alternative to fio.
– sourcejedi, Aug 8 at 20:03
Ideally I would use sync=1, or a longer test, to avoid caching effects. But direct IO should still have to traverse the SATA bus, limited to an absolute max of 600 MB/s, so I don't think that explains it.
– sourcejedi, Aug 8 at 20:18
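(For reference, that 600 MB/s ceiling comes from SATA III's 6 Gbit/s line rate with 8b/10b encoding: 6 × 10⁹ bit/s × 8/10, divided by 8 bits per byte, is 600 MB/s.)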
It's all good - I simply explained why that unfortunately doesn't help me. :)
– AndyO, Aug 8 at 20:55
1 Answer
Unfortunately there's not enough information here to answer your question for certain. It usually helps to see the full fio output from your run, and to know which version of fio you're running, because the output reports things like the queue depths actually achieved and how busy Linux thought the disk was during the run (e.g. latencies close to 0 are almost always a sign that caching is taking place).
It could be that the filesystem the file is on doesn't support direct=1 with the options being used. It could be that your file was entirely cached for some reason and you're just reading back from the cache (watch out for this when file sizes are dramatically smaller than your total RAM). It could be that, because you didn't write to your file, it's sparse/empty and not really "there" (try doing a full pass of writes before you read it back)...
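(A quick way to check the cache and sparse-file possibilities from a shell - a sketch using standard tools, run against the test file on Linux:)

# does the file actually have blocks allocated, or is it sparse?
stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' cdm

# does this filesystem accept O_DIRECT at all? dd fails with
# "Invalid argument" if direct I/O is rejected for these parameters
dd if=cdm of=/dev/null bs=128k iflag=direct count=8

# drop the page cache (as root) so reads cannot be served from RAM
sync && echo 3 > /proc/sys/vm/drop_caches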
PS: thread doesn't need to take a value (see http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-thread).
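In other words, the --thread=1 in the original command line can simply be --thread, as in the updated runs above:

fio --name=doesntmatter --ioengine=libaio --thread --size=1024m --bs=128k --time_based=1 --runtime=5s --iodepth=32 --numjobs=1 --rw=read --direct=1 --startdelay=0s --filename=cdm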
answered Aug 8 at 21:32, edited Aug 9 at 8:37 – Anon (accepted)
Thank you very much! It was the file system! Wish I would've realized this before I blindly ordered a new drive... (Probably would've anyway, though.)
– AndyO, Aug 8 at 23:36
You're welcome! Thanks for diligently updating your question for the next person...
– Anon, Aug 9 at 8:39