Comparing I/O schedulers
I intend to compare three different I/O schedulers: "noop", "cfq", and "deadline". I plan to compare them for both random reads and writes. So far, the only meaningful cases I have identified are the following:
- read: with caching and with syncing
- read: without caching and with syncing
- read: without caching and without syncing
- read: with caching and without syncing
(all of the above both sequentially and non-sequentially)
and
- write: with caching
- write: without caching
(I suppose sequential and non-sequential are pertinent here as well)
I plan to conduct write-tests that are both single- and multi-threaded.
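To make these cases concrete, here is a minimal sketch of one such run using fio on Linux. The device name /dev/sdX and all job parameters are placeholders, and the sysfs path assumes a kernel where the legacy "noop"/"cfq"/"deadline" schedulers are available:

    # Select the scheduler under test (applies per block device)
    echo deadline | sudo tee /sys/block/sdX/queue/scheduler

    # Empty the page cache before any "without caching" case
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Random reads bypassing the page cache (--direct=1 = "without caching");
    # drop --direct=1 and skip the flush above for the "with caching" variants
    fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
        --runtime=60 --time_based --direct=1

    # Random writes, forcing an fsync after every write ("with syncing");
    # note this destroys data on the target device
    fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
        --runtime=60 --time_based --fsync=1

Using --rw=read or --rw=write instead gives the sequential variants, and --numjobs covers the multi-threaded write tests.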
Q1:
Are there any other meaningful cases to test that I have missed?
Q2:
When writing, should I expect any meaningful difference between writing random data and writing the same character repeatedly?
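One way to make this difference visible is simply to generate both payloads and compare (a rough sketch with dd; "testfile" is a placeholder). On plain rotating disks the payload should hardly matter, but compressing or deduplicating drives may treat the two very differently, and /dev/urandom can make the test CPU-bound rather than disk-bound:

    # Repeated character: every block is identical (all zeroes here)
    dd if=/dev/zero of=testfile bs=4k count=100000 oflag=direct

    # Random payload: beware that reading /dev/urandom may itself
    # become the bottleneck rather than the disk
    dd if=/dev/urandom of=testfile bs=4k count=100000 oflag=direct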
Q3:
What are some interesting block sizes to study? How many blocks should I write/read? Would there be a point to varying the number of blocks read/written during the benchmarking, or is it better to use a consistent size for each case? That is, benchmarking using
- blocksize = 512
- blocksize = 1024
...
or is it more interesting to see what happens when the first read is one number of blocks and the next read is some other number?
Should I try block sizes that are not powers of two?
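One way to keep this open-ended is to script the sweep so that adding a size costs nothing (a sketch; the sizes and device are illustrative). Note that with direct I/O the block size must remain a multiple of the device's logical sector size (typically 512 bytes), so 1536 below is a valid non-power-of-two size while, say, 1000 would be rejected:

    # Sweep block sizes, including a non-power-of-two one (1536 = 3 * 512)
    for bs in 512 1024 1536 4096 65536; do
        fio --name="randread-bs$bs" --filename=/dev/sdX --rw=randread \
            --bs="$bs" --runtime=30 --time_based --direct=1 \
            --output="randread-bs$bs.json" --output-format=json
    done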
Of course, the answers to most of these questions (all except Q1) could be determined by simply running more tests. I am merely trying to avoid unnecessary benchmarks so that I can focus on the data that is actually relevant; there are simply so many combinations of tests that could be run.
performance io scheduling benchmark read-write
asked Oct 9 '17 at 11:27
Filip Allberg
Testing I/O schedulers is about disk access contention between processes. Doing that in a meaningful way is hard; the main factor is going to be how you create random processes to generate the load, rather than what you write. Measuring scheduling delays rather than disk access times is also going to be a challenge.
– Satō Katsura
Oct 9 '17 at 12:01
So, not only the number of processes but also having processes that attempt to write/read with different amounts of fervour?
– Filip Allberg
Oct 9 '17 at 12:04
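Following up on that comment: a rough sketch of generating such contention with fio is to run dissimilar jobs in a single invocation and compare the reported completion-latency percentiles rather than raw throughput (the job names and parameters below are illustrative):

    # A streaming reader competing with four random writers on one device;
    # options before the first --name are global, the rest are per job
    fio --filename=/dev/sdX --runtime=60 --time_based --direct=1 \
        --name=streamer --rw=read --bs=1M \
        --name=writers --rw=randwrite --bs=4k --numjobs=4

The clat percentiles in fio's output then give a view of how long requests sat in the queue under each scheduler.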