disk %util reaches 100% constantly when avgrq-sz is small
On my system running kernel 4.9.86 I have noticed weird behaviour with my disk (a 5400 rpm HDD): %util stays at 100% constantly for quite some time (5 minutes or so), and while this happens I see that avgrq-sz is 8 (sectors). avgqu-sz and await are also very high, causing many processes to go into D state (including the jbd2 thread). I have also noticed kbdirty going high at the same time (658 MB in this case, whereas it is usually only a few KB). Am I hitting disk saturation?
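The kbdirty figure sar reports comes straight from the kernel's own counters, so a quick way to watch it during one of these stalls (a minimal sketch, no sar needed) is:

```shell
# Dirty: pages waiting to be written back; Writeback: pages currently in
# flight to disk. These are the counters behind sar's kbdirty column.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```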
SAR Memory Usage:
======================================
Linux 4.9.86 01/07/19 _x86_64_ (32 CPU)
11:29:20 kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
11:29:21 80270488 52009236 39.32 354368 17373312 15789156 7.92 10257860 15388656 658488
Average: 80270488 52009236 39.32 354368 17373312 15789156 7.92 10257860 15388656 658488
SAR IO Usage:
======================================
Linux 4.9.86 01/07/19 _x86_64_ (32 CPU)
11:29:22 tps rtps wtps bread/s bwrtn/s
11:29:23 351.00 0.00 351.00 0.00 2808.00
Average: 351.00 0.00 351.00 0.00 2808.00
SAR Device IO activity:
======================================
Linux 4.9.86 01/07/19 _x86_64_ (32 CPU)
11:29:23 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
11:29:24 loop5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:29:24 sda 285.00 0.00 2280.00 8.00 143.51 510.94 3.51 100.00
11:29:24 vault 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
Average: loop5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sda 285.00 0.00 2280.00 8.00 143.51 510.94 3.51 100.00
Average: vault 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
SAR Queue and Load avg:
======================================
Linux 4.9.86 01/07/19 _x86_64_ (32 CPU)
11:29:25 runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
11:29:26 0 1043 3.39 2.30 2.15 2
Average: 0 1043 3.39 2.30 2.15 2
The file system is mounted as ext3 (using the ext4 driver) with data=ordered,barrier=0 and journaling enabled.
RAID configuration:
Model: SAS2008 Firmware Version: 9.00.00.00 RAID Level: RAID1
Tags: filesystems ext3 journaling
asked Jan 8 at 19:44 by PKB85
edited Jan 8 at 20:10 by guntbert
1 Answer
This looks like what you'd expect to see if a process were sending a lot of non-sequential (random) small writes. You'd have a relatively small average request size (8, which probably means 8 × 512-byte sectors = 4K, so the minimum for normal writes). Having more dirty buffers is also consistent: it means the writes have been passed off to the kernel, and the kernel is working on writing them to disk. 285 tps is pretty darn nice performance for a magnetic disk.
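The arithmetic behind that request size, taken straight from the sar sample above (wr_sec/s = 2280, tps = 285):

```shell
# avgrq-sz is reported in 512-byte sectors.
echo $((2280 / 285))   # sectors per request: wr_sec/s ÷ tps → 8
echo $((8 * 512))      # bytes per request: 8 sectors × 512 B → 4096 (4 KiB)
```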
You need to investigate which programs are writing to disk and see whether they're exhibiting abnormal behavior, or whether they can be configured to spread their writes out better (e.g., if it's a database, the dirty-page writeback speed is often configurable).
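One way to identify the writers (a sketch; pidstat ships in the same sysstat package that provides sar, and the kernel exposes per-process I/O counters under /proc):

```shell
# Per-process disk I/O, one-second samples (skipped if pidstat isn't installed):
command -v pidstat >/dev/null && pidstat -d 1 2

# Cumulative I/O counters for a single process (here, this shell itself);
# write_bytes is what has actually been sent toward the block layer.
cat /proc/self/io
```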
ext3 isn't really recommended for, well, anything. ext4 is a conservative choice for a replacement (though with barrier=0, you're clearly not too concerned about that); XFS is another good choice (and still very reliable). But I doubt that'll really help here. SSDs, though, will certainly give far higher IOPS.
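If you do consider switching, it's worth first confirming which driver and mount options are actually in effect, for example:

```shell
# Driver (FSTYPE) and effective options for the root filesystem
# (findmnt is part of util-linux; skipped if absent):
command -v findmnt >/dev/null && findmnt -no SOURCE,FSTYPE,OPTIONS /

# /proc/mounts shows the same for every mount; data=ordered or
# data=writeback appears in the options field when set:
grep ' / ' /proc/mounts
```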
Thanks. The issue is more frequent after a kernel upgrade from 3.x to 4.9.86. I have noticed one weird thing after this upgrade: the mount point was in writeback mode on the 3.x kernel (using the ext3 driver), which changed to ordered mode (using the ext4 driver, but with the fs mounted as ext3). Can this be a factor?
– PKB85
Jan 9 at 16:52
answered Jan 8 at 21:15 by derobert
Thanks for contributing an answer to Unix & Linux Stack Exchange!