Linux GUI becomes very unresponsive when doing heavy disk I/O - what to tune?
I have a device `/dev/mydisk` that is based on a stack of functionality: a LUKS-encrypted, software RAID-1. From time to time, I back up the contents of `/dev/mydisk` to an external USB disk, which is itself LUKS-encrypted. A couple of hundred GiB need to be transferred. The operation is not a simple `dd` but a recursive `cp` (I still need to switch to `rsync`).
A while after the backup starts, the interactivity of the whole system declines tremendously. The KDE interface is choked to death, apparently waiting for memory requests to be granted. A wait time of 2 minutes for the prompt is not unusual. Waiting for network I/O likewise demands a lot of patience. This is similar to what happens when `baloo` kicks in and decides to unzip every zip and index every file's content for purposes unknown: the system becomes a swamp canoe.
It seems that the kernel gives all the RAM to the copying processes and is loath to hand it back to give interactive processes a chance. RAM is not shabby: 23 GiB. There is also 11 GiB of swap space, just in case, but only a few MiB of it are occupied at any time.
Is it possible to make sure interactive processes get their RAM in preference to the copying processes? If so, how?
Version information:
- This is a Fedora 29 (4.19.15-300.fc29.x86_64) system, but I know I had this issue on earlier Fedora systems, too.
- The KDE version is based on "KDE Frameworks: 5.53.0".
Update
Thanks to everyone for the answers so far!
Once one knows what to search for, one finds some things.
What I have hauled in:
- 2018-10: U&LSE entry apparently exactly about my problem: System lags when doing large R/W operations on external disks. As the questioner uses `dd`, the remedy is to use the flag `oflag=direct` to bypass the page cache (see the sketch after this list).
- 2018-11: U&LSE relatively general question about slowdown-on-write: Why were “USB-stick stall” problems reported in 2013? Why wasn't this problem solved by the existing “No-I/O dirty throttling” code?. This is rather confusing, and we have to wrestle with rumours and phenomena.
- 2013-11: Jonathan Corbet at LWN.net: The pernicious USB-stick stall problem. This is the article that "reported the problem in 2013". However, an answer to the 2018-11 question says that this article is wrong and based on incorrect premises.
- 2011-08: U&LSE entry about how to forcefully clear the page cache, which may bring responsiveness back: Setting /proc/sys/vm/drop_caches to clear cache
- 2016-01: U&LSE entry about how to restrict the size of the buffer cache: Restrict size of buffer cache in Linux
- Discussions about I/O schedulers and writeback throttling.
- 2018-10: U&LSE question on this: Is “writeback throttling” a solution to the “USB-stick stall problem”?
- 2016-04: Jonathan Corbet at LWN.net: Toward less-annoying background writeback.
- I'm also thinking about: 2017-05: Improving Linux System Performance with I/O Scheduler Tuning, 2009-06: Selecting a Linux I/O Scheduler
Why aren't there expert systems handling the I/O tuning by now..?
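To make the two recurring knobs from the list above concrete, here is a minimal sketch. The device name /dev/sdX and the byte values are placeholders, not tested recommendations:
# (a) For dd-style block copies only: bypass the page cache entirely.
dd if=/dev/mydisk of=/dev/sdX bs=64M oflag=direct status=progress

# (b) For cp/rsync-style copies: cap the dirty page cache so writeback
# starts early and writers are throttled before the cache eats all RAM.
sudo sysctl vm.dirty_background_bytes=67108864   # begin async writeback at 64 MiB
sudo sysctl vm.dirty_bytes=268435456             # block writers at 256 MiB dirty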
linux performance cache
asked Jan 21 at 8:50, edited Jan 27 at 0:55 – David Tonhofer
For reference, please specify the version of either KDE or the OS you are using. gnome-shell currently has an issue where it calls fsync() on the main thread, which can hang the entire GUI for tens of seconds. Obviously it would be nice if fsync() didn't do this, but gnome-shell should not be doing it in the first place, and it may be fixed in some later versions (and some parts of the code are already deliberately avoiding it). gitlab.gnome.org/GNOME/gnome-shell/issues/815 . So IMO it would be useful to note the version of KDE you are using here.
– sourcejedi
Jan 21 at 10:52
Thanks @sourcejedi, info added. KDE also shows the "stop the world" phenomenon even when the system is basically idle but needs to do some internal configuration. But that is likely not related to the described slowdown.
– David Tonhofer
Jan 21 at 11:03
I noticed similar symptoms on my system, and dropping caches with `echo 3 > /proc/sys/vm/drop_caches` restores interactivity (and improves disk transfer rate). Your problem may or may not be related. I haven't yet found out the reason for this behaviour.
– dirkt
Jan 21 at 13:38
@dirkt `3` drops both buffer cache and reclaimable dentries and inodes. If you have a lot of dentries and inodes cached, dropping them will increase the limit for dirty buffer cache. unix.stackexchange.com/a/480999/29483 That's one possible mechanism.
– sourcejedi
Jan 21 at 15:28
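For reference, a minimal sketch of the cache drop dirkt describes — running sync first so dirty pages are written back and the subsequent drop can actually reclaim them (needs root):
sync                                          # write dirty pages back first
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page cache plus dentries/inodes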
2 Answers
I'd `nice -n 19` the backup process (it gives it low CPU priority), and maybe also `ionice -c 3` (idle I/O class).
rsync will also be a major improvement (it won't copy the couple of hundred GiB each time).
For instance, my backup scripts look like this:
SOURCE=/whatever/precious/directory
DESTINATION=/media/some_usb_drive/backup
# EXCLUDEFILE (defined elsewhere) lists patterns to skip; see below.
nice -n 19 rsync --verbose --archive --compress --delete --force --recursive --links --safe-links --rsh ssh --exclude-from=$EXCLUDEFILE $SOURCE $DESTINATION
# or, additionally lowering the I/O priority:
nice -n 19 ionice -c 3 rsync --verbose --archive --compress --delete --force --recursive --links --safe-links --rsh ssh --exclude-from=$EXCLUDEFILE $SOURCE $DESTINATION
(`--exclude-from` is used to avoid the .cache directories, .o files, etc.)
answered Jan 21 at 10:35, edited Jan 21 at 10:50 – Demi-Lune
`ionice` only affects reads; it has no effect on buffered writes. unix.stackexchange.com/questions/480862/…
– sourcejedi
Jan 21 at 10:48
Thanks for pointing out ionice. I've updated the example. I had abandoned it long ago because `nice` gave good enough results.
– Demi-Lune
Jan 21 at 10:48
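One caveat worth adding: the `ionice` scheduling classes are only honoured by I/O schedulers that implement them (CFQ, and its successor BFQ); under other schedulers the call succeeds but changes nothing. A quick way to check, with sdX as a placeholder for the backup disk:
# The bracketed entry is the active scheduler for the device.
cat /sys/block/sdX/queue/scheduler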
`nocache`
I searched, and although the documentation does not mention it, `nocache` should work correctly for writes. It will run slower when copying small files, though, because it requires an `fdatasync()` call on each file.
(The impact of large numbers of `fdatasync()` / `fsync()` calls can be reduced using Linux-specific features. See note "[1]" about how `dpkg` works, in this related answer about IO and cache effects. However, this would require changing `nocache` to defer `close()`, and that could have unwanted side effects in some situations :-(.)
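For illustration, a minimal invocation (the paths are hypothetical): `nocache` preloads a shim around the wrapped command and uses `posix_fadvise(..., POSIX_FADV_DONTNEED)` so the copied pages are not kept in the page cache:
# Copy the backup without letting it pile up in the page cache.
nocache cp -a /mnt/mydisk/. /run/media/me/usb-backup/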
An alternative idea is to run your copy process in a cgroup, possibly using `systemd-run`, and set a limit on memory consumption. The cgroup memory controller controls cache as well as process memory. However, I can't find any good examples for the `systemd-run` command on our Unix & Linux site.
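For lack of a canonical example, here is a hedged sketch. The property names are real systemd settings, but the values are guesses and the paths hypothetical; `MemoryHigh`/`MemoryMax` need the unified (v2) cgroup hierarchy, while on a cgroup-v1 setup such as a default Fedora 29, `MemoryLimit` is the rough equivalent:
# Run the copy in a transient scope whose memory ceiling also bounds
# the page cache charged to it by the cgroup memory controller.
sudo systemd-run --scope -p MemoryHigh=1G -p MemoryMax=2G \
    rsync -a /mnt/mydisk/ /run/media/me/usb-backup/
# cgroup v1 equivalent:
#   sudo systemd-run --scope -p MemoryLimit=2G rsync -a ...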
answered Jan 21 at 10:23, edited Jan 21 at 12:34 – sourcejedi
add a comment |
Thanks for contributing an answer to Unix & Linux Stack Exchange!
- Please be sure to answer the question. Provide details and share your research!
But avoid …
- Asking for help, clarification, or responding to other answers.
- Making statements based on opinion; back them up with references or personal experience.
To learn more, see our tips on writing great answers.
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f495734%2flinux-gui-becomes-very-unresponsive-when-doing-heavy-disk-i-o-what-to-tune%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
For reference, please specify the version either of KDE or the OS you are using. gnome-shell currently has an issue where it calls fsync() on the main thread, which can hang the entire GUI for tens of seconds. Obviously it would be nice if fsync() didn't do this, but gnome-shell should not be doing it in the first place, and it may be fixed in some later versions (and some parts of the code are already deliberately avoiding it). gitlab.gnome.org/GNOME/gnome-shell/issues/815 . So IMO it would be useful to note the version of KDE you are using here.
– sourcejedi
Jan 21 at 10:52
Thanks @sourcejedi Added info. KDE has the "stop the world" phenomenon also even where the system is basically idle but it needs to do some internal configuration. But that is not likely related to the described slowdown.
– David Tonhofer
Jan 21 at 11:03
1
I noticed similar symptoms on my system, and dropping caches with
echo 3 > /proc/sys/vm/drop_caches
restores interactivity (and improves disk transfer rate). Your problem may or may not be related. I haven't yet found out the reason for this behaviour.– dirkt
Jan 21 at 13:38
@dirkt
3
drops both buffer cache and reclaimable dentries and inodes. If you have a lot of dentries and inodes cached, dropping them will increase the limit for dirty buffer cache. unix.stackexchange.com/a/480999/29483 That's one possible mechanism.– sourcejedi
Jan 21 at 15:28