How to record the maximum size of a folder?
Packages are unpacked and compiled on a test system in /tmp/test. I need to get the maximum size the directory had at any moment during all these steps.
At the moment I help myself by recording the size with du -sch /tmp/test >> /tmp/size.txt in a loop. But this is a dirty workaround and it is not precise: if the computer is very busy in /tmp/test, the du run can miss the peak size. Is there an elegant solution?
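For reference, the loop is essentially this (simplified; the one-second interval is arbitrary):
while true; do
    du -sch /tmp/test >> /tmp/size.txt
    sleep 1
done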
The available file systems are ext or btrfs, if that helps.
One reader asked for usage examples:
When preparing packages for Gentoo Linux, I need to know how much space is needed during compilation. For some packages like Firefox, Boost or LibreOffice it is very important that the package verifies that enough free space is available.
I wrote scripts which create many temporary files. It would be interesting to monitor the folder size.
Update: In the meantime I found sysdig, which looks promising for this task, but I have not managed to get a folder size out of it yet.
files linux-kernel directory monitoring size
edited Jan 7 at 20:32
asked Nov 19 '17 at 23:50
Jonas Stein
You should tell us why you need this and what the hard requirements are. Is the process lengthy or easily repeatable? How important is accuracy (what if the result is a little bigger than necessary? what if it is much bigger)? The choice of file system could also make a difference in the result (block size differences, efficient packing of small files in some file systems). As it is, this is not a well-written question.
– user601
Jan 7 at 18:41
@hop I think it should be technically possible to measure it quite precisely and I am sure many do this already. I just do not know the right tools and commands. One has to capture the data from the kernel, not with a tool like du or df.
– Jonas Stein
Jan 7 at 20:38
2 Answers
One possibility might be to monitor file-system events and run some logging command on file creation and file deletion. There are some tools that would facilitate this approach, such as inotify, fswatch, or the Linux audit framework. You could either log the total disk space after each event, or just log the change in disk space and then use the logs to calculate the maximum size. A sketch follows the link below. See, for example, the following SuperUser post:
- Continuously detect new file(s) with inotify-tools within multiple directories recursively
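For instance, a minimal sketch of the event-driven variant, assuming inotify-tools is installed (the event list, the /tmp/test path, and the use of du -sb for byte counts are choices for illustration, not the only option):
#!/bin/bash
# Re-measure the directory after every relevant event and report new peaks.
# Recursive watches on a large build tree can be expensive, and du itself
# still takes time, so very short-lived peaks can still slip through.
max=0
inotifywait -m -r -e create,delete,moved_to,moved_from,close_write /tmp/test |
while read -r _event; do
    size=$(du -sb /tmp/test | cut -f1)
    if [ "$size" -gt "$max" ]; then
        max=$size
        echo "new peak: $max bytes"
    fi
done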
A different approach comes from the following post:
- How can I monitor disk I/O in a particular directory?
There the suggestion is made to mount the directory in question on its own partition and then run iostat on that partition. That should allow you to continuously log IO events for that directory. With a dedicated file system you can also poll df instead of du, as sketched below.
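Once the directory is a mount point of its own, df answers from the file system's counters instead of walking the directory tree, so polling it tightly is cheap. A rough sketch, assuming GNU coreutils (for df --output) and an arbitrary 0.1-second interval:
#!/bin/bash
# Poll the used-space counter of the dedicated file system and track the peak.
# Note: df reports whole-filesystem usage including metadata, so this is an
# upper bound on the directory contents.
max=0
while sleep 0.1; do
    used=$(df --output=used -B1 /tmp/test | awk 'NR==2 {print $1}')
    if [ "$used" -gt "$max" ]; then
        max=$used
        echo "new peak: $max bytes"
    fi
done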
For further discussion on monitoring disk IO you might refer to the following post:
- How can I monitor disk io?
answered Nov 20 '17 at 0:25
igal
I second the idea to mount a separate file system on /tmp/test. Even df might be fast enough for the OP's purposes with that trick. An additional option would be to hack together a FUSE file system that monitors the size.
– user601
Jan 7 at 18:37
@hop df gets slow if there are many files. And it sums only one snapshot, not the real maximum.
– Jonas Stein
Jan 7 at 20:40
Schedule a script to run every minute from cron:
*/1 * * * * /path/to/script
The script should be the following:
#!/bin/bash
# Keep only the summary line for /tmp/ (the "total" line from -c is dropped)
du -sch /tmp/ | sed -n '1p' >> outputfile
After some time, sort it and get the highest size.
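For example, with GNU sort the human-readable sizes written by du -h can be compared directly:
sort -h outputfile | tail -n 1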
edited Jan 7 at 18:30
grg
answered Nov 20 '17 at 2:33
Praveen Kumar BS
Doesn't this suffer from the same shortcoming that the OP is specifically trying to overcome? I think @jonas-stein is looking for a way to continuously monitor file-size.
– igal
Nov 20 '17 at 2:36