Comparing I/O schedulers

I intend to compare three different I/O schedulers ("noop", "cfq", and "deadline") for both random reads and writes. So far, the only meaningful cases I have identified are the following:



  • read: with caching and with syncing

  • read: without caching and with syncing

  • read: without caching and without syncing

  • read: with caching and without syncing

(all of the above, both sequentially and non-sequentially)



and



  • write: with caching

  • write: without caching

(I suppose sequential and non-sequential are pertinent here as well; see the sketch after this list for how I plan to toggle caching and syncing)
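
For concreteness, here is a minimal sketch of how I currently plan to toggle the scheduler, the page cache, and syncing, using dd and sysfs. The device name sda, the file testfile, and the sizes are placeholders, and this assumes a kernel where the single-queue schedulers (noop/deadline/cfq) are available for the device:

    # Select the scheduler under test for the device.
    echo deadline | sudo tee /sys/block/sda/queue/scheduler

    # "Without caching": flush dirty pages, drop the page cache,
    # and/or bypass it entirely with O_DIRECT.
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    dd if=testfile of=/dev/null bs=4k iflag=direct     # read, bypassing the cache

    # "With syncing" on writes: fsync once before dd reports its
    # timing, or use O_SYNC to sync after every block.
    dd if=/dev/zero of=testfile bs=4k count=25600 conv=fsync
    dd if=/dev/zero of=testfile bs=4k count=25600 oflag=sync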



I plan to conduct both single- and multi-threaded write tests, for example along the lines of the fio sketch below.
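
A minimal sketch of the multi-threaded case with fio; the job name, size, and numjobs value are placeholders I would tune, and as far as I can tell fio also accepts an ioscheduler option as an alternative to setting the scheduler via sysfs:

    # Four concurrent random writers, bypassing the page cache.
    fio --name=randwrite-test --rw=randwrite --bs=4k --size=256m \
        --numjobs=4 --direct=1 --ioengine=libaio --group_reporting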



Q1:



Are there any other meaningful cases to test that I have missed?



Q2:



When writing, should I expect any meaningful difference between writing random data and writing some character repeatedly?
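
In other words, would the two payloads below behave differently? On devices without transparent compression or deduplication I would not expect the scheduler to care, but I am not certain:

    dd if=/dev/zero    of=testfile bs=4k count=25600 oflag=direct   # repeated bytes
    dd if=/dev/urandom of=testfile bs=4k count=25600 oflag=direct   # random data
    # /dev/urandom itself can be CPU-bound; pre-generating a random
    # file once and reading from that avoids benchmarking the RNG.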



Q3:



What are some interesting block sizes to study? How many blocks should I read/write? Is there any point in varying the number of blocks read/written during the benchmark, or is it better to use a consistent size for each case? That is, benchmarking using



  • blocksize = 512

  • blocksize = 1024
    ...

or is it more interesting to see what happens when the first read is some number of blocks and the next read is some other number?
Should I also try block sizes that are not powers of two?
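
If it helps, the kind of sweep I have in mind is simply the following (the sizes and the file are placeholders):

    for bs in 512 1024 4096 65536 1048576; do
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
        dd if=testfile of=/dev/null bs=$bs iflag=direct 2>&1 | tail -n 1
    done

One constraint I am aware of: O_DIRECT generally requires the block size to be a multiple of the device's logical sector size, which limits how odd the sizes can get.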




Of course, the answers to most of these questions (except Q1) can be determined simply by running more tests. I am merely trying to avoid unnecessary benchmarks so that I can focus on the data that is relevant; there are simply so many combinations of tests that could be run.










Tags: performance, io, scheduling, benchmark, read-write






asked Oct 9 '17 at 11:27 by Filip Allberg











  • Testing I/O schedulers is about disk access contention between processes. Doing that in a meaningful way is hard; the main factor is going to be how you create random processes to generate the load, rather than what you write. Measuring scheduling delays rather than disk access is also going to be a challenge. – Satō Katsura, Oct 9 '17 at 12:01

  • So, not only the number of processes, but also having processes that attempt to read/write with different amounts of fervour? – Filip Allberg, Oct 9 '17 at 12:04















