Limit write cache size for a given device?

Is there a way to limit how much data Linux will cache in the write cache for a given block device before it begins flushing that data to disk?



My server has a large amount of RAM (64GB or so). I find that whenever I write large files to spinning media (i.e. slow media) or to slow USB flash drives, Linux will cache gigabytes of dirty data in RAM. For operations that write only a small amount of data, this is fine, and Linux eventually flushes the cache to disk.



However, when I'm doing big operations on large files, the write caching and subsequent flushing make the user interface feel very erratic. Take the simple example of using pv to pipe a large file from a RAM disk to stable storage on a slow USB flash drive. pv pulls in well over 1GB of data at an absurd rate, then slows down to the device's actual write speed, and then, once the write appears finished, takes well over a minute more to flush the cache.
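
For concreteness, the kind of command I mean (paths made up for illustration):

    # Copy a large file from a RAM disk to a slow USB stick, with progress
    pv /mnt/ramdisk/bigfile.img > /mnt/usb/bigfile.img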



Other applications appear to do a huge amount of writing almost instantly, then stall completely for a minute, then appear to write another burst instantly, then stall again. Since the spinning disks are quite slow, the lag in flushing the cache is sometimes long enough to trigger the kernel's hung-task warning about a process being blocked for more than 120 seconds. Using ffmpeg to remux a file, transferring a VMware disk image, and similar "bulk" write operations on large, sequential files all tend to cause this effect.



I would like to be able to tell Linux "on device /dev/sdX, cache at most, say, 64MB of data before beginning to flush". In the pv example above, that would mean at most 64MB of data remains unwritten after the UI indicates the write has completed, and running sync would only have to flush out that 64MB. (Right now, a pv followed by a sync can hang sync for over two minutes, simply because Linux cached gigabytes of data and is flushing it to a device that can only write single digits or tens of megabytes per second.)
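
The closest knobs I know of are the system-wide dirty-page limits, but those apply to every device at once. A sketch, with illustrative values:

    # Start background flushing once 16MB of data is dirty,
    # and block writers outright at 64MB. Affects ALL devices.
    sysctl -w vm.dirty_background_bytes=$((16*1024*1024))
    sysctl -w vm.dirty_bytes=$((64*1024*1024))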



I don't necessarily want to limit the write cache system-wide, because e.g. my boot drive, which is on an SSD, definitely benefits from the cache, and typical operations involving very small writes benefit from it too. The problem only shows up when large files are written more or less sequentially.
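
The only per-device knob I'm aware of is the writeback setting under /sys/class/bdi, and as far as I can tell it is a percentage of the global dirty threshold rather than a hard byte limit, so it may not do exactly what I want. A sketch, with a hypothetical device:

    # 8:16 is the major:minor of /dev/sdb (check with lsblk)
    # Cap this device's share of the global dirty threshold at 1%
    echo 1 > /sys/class/bdi/8:16/max_ratio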



EDIT: My preferred distro is Arch Linux, but I'm pretty good at adapting instructions for other distros to work with my setup.







asked Mar 6 at 23:32 by fdmillion

  • I don't know of any parameter that can do what you want - the page cache is the page cache as far as I'm aware. Maybe bypassing the page cache using dd ... oflag=direct ... as the final process in your pipe solves your issues? There's no real point in using the write cache when you're not going to read the data before it gets flushed from the cache.
    – Andrew Henle
    Mar 7 at 11:04
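
    A sketch of what that suggestion could look like in the pv example above (paths hypothetical):

        # oflag=direct bypasses the page cache for the output file;
        # iflag=fullblock avoids short reads from the pipe
        pv /mnt/ramdisk/bigfile.img | dd of=/mnt/usb/bigfile.img oflag=direct iflag=fullblock bs=4M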






  • That works for simple pipes, but won't work for something like, say, ffmpeg, which does its own internal writes. I found that I can mount ext4 filesystems with the sync option (-o sync), but that ends up being unbearably slow when doing a copy on the same volume.
    – fdmillion
    Mar 7 at 14:11
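
    A sketch of the mount option mentioned (device name hypothetical):

        # Make all writes to this filesystem synchronous (no write caching);
        # safe, but painfully slow for workloads with many small writes
        mount -o sync /dev/sdb1 /mnt/usb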










  • You can use pipes with ffmpeg: ffmpeg.org/ffmpeg-all.html#pipe
    – Andrew Henle
    Mar 7 at 15:44
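
    For illustration, that could be combined with the direct-I/O suggestion above (filenames hypothetical; -f is needed because the output is a pipe rather than a named file):

        # Remux to stdout, then write the result with O_DIRECT to bypass the cache
        ffmpeg -i input.mkv -c copy -f matroska pipe:1 |
            dd of=/mnt/usb/output.mkv oflag=direct iflag=fullblock bs=4M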