How to record the maximum size of a folder?

Packages are unpacked and compiled on a test system in /tmp/test.



I need to get the maximum size the directory had at any moment during all these steps.



At the moment I help myself by recording the size with



du -sch /tmp/test >> /tmp/size.txt 


in a loop. But this is a very dirty workaround and not precise: the machine may be busy writing to /tmp/test and du can miss the peak size. Is there an elegant solution?
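A minimal sketch of such a loop (using du -s -B1 so the values are plain byte counts; the one-second interval and the paths are only examples, and a peak that appears and disappears between two samples is still missed):

#!/bin/bash
# Poll the directory size once per second and log every new peak.
# du -s -B1 reports disk usage in bytes; the interval is arbitrary.
max=0
while sleep 1; do
    size=$(du -s -B1 /tmp/test 2>/dev/null | cut -f1)
    if [ "${size:-0}" -gt "$max" ]; then
        max=$size
        echo "$(date +%s) $max" >> /tmp/size.txt
    fi
done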



The available file systems are ext or btrfs, if that helps.



One reader asked for usage examples:



  • When preparing packages for Gentoo Linux, I need to know how much space is needed during compilation. For some packages like Firefox, Boost or LibreOffice it is very important that the package verifies that enough free space is available.


  • I wrote scripts which create many temporary files. It would be interesting to monitor the folder size.


Update: In the meantime I found sysdig, which looks promising for this task, but I have not managed to get a folder size out of it yet.







asked Nov 19 '17 at 23:50 by Jonas Stein, edited Jan 7 at 20:32






















  • you should tell us why you need this and what the hard requirements are. is the process lengthy or easily repeatable? how important is accuracy (what if the result is a little bit bigger than necessary? what if it is much bigger?). the choice of file system also could make a difference in the result (block size differences, efficient packing of small files in some fs). as it is, this is not a well written question.
    – user601
    Jan 7 at 18:41











  • @hop I think it should be technically possible to measure it quite precisely and I am sure many do this already. I just do not know the right tools and commands. One has to capture the data from the kernel, not with a tool like du or df.
    – Jonas Stein
    Jan 7 at 20:38















2 Answers

















One possibility might be to monitor file-system events and run some logging command on file creation and deletion. There are tools that would facilitate this approach, such as inotify, fswatch, or the Linux audit framework. You could either log the total disk space after each event or just log the change in disk space and then use the logs to calculate the maximum size. See, for example, the following SuperUser post (a rough sketch of this approach follows the link):



  • Continuously detect new file(s) with inotify-tools within multiple directories recursively
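
For instance, with inotify-tools the first option (re-measuring the tree after every event) could look roughly like the sketch below. The event list is a guess at what matters during a build, and re-running du on every event will be expensive on a very busy tree, so treat it as an illustration rather than a finished tool:

#!/bin/bash
# Watch /tmp/test recursively and remember the largest size seen.
# Requires inotify-tools; note that -r sets one watch per subdirectory.
dir=/tmp/test
max=0
while read -r _event; do
    size=$(du -s -B1 "$dir" 2>/dev/null | cut -f1)
    if [ "${size:-0}" -gt "$max" ]; then
        max=$size
        echo "$(date +%s) peak so far: $max bytes"
    fi
done < <(inotifywait -m -r -q -e create,delete,move,close_write --format '%w%f' "$dir")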

A different approach comes from the following post:



  • How can I monitor disk I/O in a particular directory?

There the suggestion is to mount the directory in question on its own partition and then run iostat on that partition. That should allow you to continuously log I/O events for that directory.
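
A related variant (an untested sketch, with placeholder sizes and paths): once /tmp/test is a dedicated mount, its usage can be sampled with plain df, which only queries the file system's counters instead of walking the tree, so even a very short sampling interval stays cheap:

# Back /tmp/test with its own loopback file system (2 GiB here, only an example).
dd if=/dev/zero of=/tmp/test.img bs=1M count=2048
mkfs.ext4 -F -q /tmp/test.img
sudo mount -o loop /tmp/test.img /tmp/test

# Sample the used space of the dedicated mount and keep the peak.
max=0
while sleep 0.2; do
    used=$(df --output=used -B1 /tmp/test | tail -n 1)
    if [ "${used:-0}" -gt "$max" ]; then
        max=$used
        echo "peak so far: $max bytes"
    fi
done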



For further discussion on monitoring disk IO you might refer to the following post:



  • How can I monitor disk io?





answered Nov 20 '17 at 0:25 by igal




















  • i second the idea to mount a separate file system on /tmp/test. even df might be (fast) enough for the OP's purposes with that trick. an additional option would be to hack together a fuse file system that monitors the size.
    – user601
    Jan 7 at 18:37










  • @hop df gets slow if there are many files. And it sums only one snapshot, not the real maximum.
    – Jonas Stein
    Jan 7 at 20:40

















Schedule a script to run every minute with cron:



*/1 * * * * /path/to/script


The script should be the following



#!/bin/bash
# Append the current size of /tmp/ (first line of the du output) to a log file.
du -sch /tmp/ | sed -n '1p' >> outputfile


After some time, sort it and get the highest size.
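
For example (assuming the human-readable sizes are in the first column of outputfile; sort -h understands suffixes like K, M and G):

# The largest recorded size ends up on the last line.
sort -h outputfile | tail -n 1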






answered Nov 20 '17 at 2:33 by Praveen Kumar BS, edited Jan 7 at 18:30 by grg


















  • Doesn't this suffer from the same shortcoming that the OP is specifically trying to overcome? I think @jonas-stein is looking for a way to continuously monitor file-size.
    – igal
    Nov 20 '17 at 2:36










