Way to instantly fill up/use up lots of disk space?

38 votes, 8 favorites

On a Linux VM I would like to test the Nagios monitoring more deeply than just switching off the VM or disconnecting the virtual NIC; I would like to trigger a disk-space alarm by occupying several percent of the free space for a short period of time.

I know that I could just use

dd if=/dev/zero of=/tmp/hd-fillup.zeros bs=1G count=50

or something like that... but this takes time, loads the system, and takes time again when removing the test files with rm.

Is there a quick (almost instant) way to fill up a partition that does not load down the system or take a lot of time? I'm thinking of something that allocates space but does not actually "fill" it.










  • sorry, forgot to mention that its a >> ext3 filesystem.
    – Axel Werner, Jun 29 '16 at 12:17

  • You need to upgrade it to ext4 to support fallocate.
    – Rui F Ribeiro, Jun 29 '16 at 12:22

  • Zip bomb always works
    – galois, Jun 29 '16 at 18:53

  • @jaska Make it an answer. It was the very first idea I got when reading the title...
    – Crowley, Jun 30 '16 at 9:35

  • Why don't you use /dev/full? (Assuming it exists). Try echo 'test' > /dev/full on Debian.
    – Ismael Miguel, Jun 30 '16 at 15:53















linux filesystems hard-disk disk-usage






asked Jun 29 '16 at 12:04 by Axel Werner, edited Feb 11 at 21:42 by Rui F Ribeiro


4 Answers
54 votes (accepted)

The fastest way to create a file on a Linux system is with fallocate:

fallocate -l 50G file

From the man page:

  fallocate is used to manipulate the allocated disk space for a file,
  either to deallocate or preallocate it.

  For filesystems which support the fallocate system call, preallocation
  is done quickly by allocating blocks and marking them as uninitialized,
  requiring no IO to the data blocks. This is much faster than creating a
  file by filling it with zeros.

  Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0),
  Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).
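As a sketch of the full test cycle (file name and sizes here are illustrative; adjust to your partition and alarm threshold):

```shell
# Allocate a large file almost instantly -- no data blocks are written.
fallocate -l 100M /tmp/hd-fillup.test

# df now reports the space as used, which is what the monitor checks.
df -h /tmp

# Cleanup is equally fast: removing the file just frees the extents.
rm /tmp/hd-fillup.test
```

Both the allocation and the cleanup are metadata-only operations, so neither step generates the sustained I/O that the dd approach does.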







answered Jun 29 '16 at 12:17 by Rui F Ribeiro, edited Feb 3 at 1:15

  • Why are you running it through sudo?
    – gerrit, Jun 29 '16 at 23:31

  • @gerrit Added that point to the answer.
    – Rui F Ribeiro, Jun 30 '16 at 6:53

  • "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
    – T.J. Crowder, Jun 30 '16 at 7:48

  • +1 although the OP explicitly mentioned that his filesystem is ext3.
    – syneticon-dj, Jun 30 '16 at 7:59

  • @RuiFRibeiro, thanks! On SLES 11 SP4 I've been able to create a file and format it with ext4, but was unable to mount it in RW mode. Later I found a kernel message in /var/log/messages saying ext4 is supported only as read-only. :/
    – Axel Werner, Jul 1 '16 at 11:40

















13 votes

Other alternatives include:



  1. to change the alarm thresholds to something near or below the current usage, or

  2. to create a very small test partition with limited inodes, size, or other attributes.

Being able to test things such as running into the root reserved percentage, if any, may also be handy.
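The small-test-partition idea can be sketched with a loopback mount. This is only an outline -- the paths and sizes are illustrative, mkfs.ext4 (e2fsprogs) is assumed to be installed, and the mount step itself requires root:

```shell
# Create and format a small backing file (no root needed for these two).
truncate -s 64M /tmp/testfs.img
mkfs.ext4 -F -q /tmp/testfs.img 2>/dev/null || true

# The loopback mount needs root, so only attempt it when running as root.
if [ "$(id -u)" -eq 0 ]; then
    mkdir -p /mnt/testfs
    mount -o loop /tmp/testfs.img /mnt/testfs
    # Filling the tiny filesystem is instant and leaves the real disks alone.
    fallocate -l 60M /mnt/testfs/fill
    umount /mnt/testfs
fi
rm /tmp/testfs.img
</mnt/dev/null 2>&1 || true
```

Point the monitoring check at the loopback mount point and you can exercise the alarm repeatedly without risking the production filesystem.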






answered Jun 29 '16 at 14:05 by thrig, edited Jun 29 '16 at 15:46 by cat

  • root reserved percentage normally is 10% unless you tweak it - it ends up too big a waste of space on big partitions/modern disks. When defining alarms, you'd better take it into account.
    – Rui F Ribeiro, Jun 29 '16 at 16:59

  • +1 for the first point. A hundred times true. Why should I actually create something on a machine's disk? What if something (like a coredump, or a batch job generating big temporary files...) happens at the time of my testing and the whole disk accidentally gets eaten up?
    – Fiisch, Jun 29 '16 at 18:30

  • @Fiisch - Why? To make sure your alerting threshold is correct and that you're not doing something like accidentally setting the inode free percentage instead of the disk space free percentage (which I've seen done before). If something fails because you filled up a disk to the alerting threshold, then your alerting threshold is too low - the whole point of alerting is that it's supposed to alert you before things start to break.
    – Johnny, Jun 29 '16 at 20:06

  • Cat, good point. But no solution for me. I don't have control over the VM configuration (can't alter partitions or virtual disks), nor control over the Nagios server.
    – Axel Werner, Jun 30 '16 at 6:37

  • @AxelWerner Can you loopback-mount a file as a "fake" partition? That would still allow you to test without seriously affecting anything. Format it with one of the supported filesystems and you can play around with fallocate too.
    – Tonny, Jun 30 '16 at 14:09

















9 votes

  1. fallocate -l 50G big_file

  2. truncate -s 50G big_file

  3. dd of=bigfile bs=1 seek=50G count=0

All three of these can fill up a partition quickly.

If you like to use dd, you can usually try it with seek. Just set seek=file_size_what_you_need and set count=0. That tells the system there is a file of the size you set, but the system does not actually write it out. And used this way, you can even create a file that is bigger than the partition.

Example, on an ext4 partition with less than 3G available: use dd to create a 5T file which exists only as metadata, requiring virtually no block space.

df -h . ; dd of=biggerfile bs=1 seek=5000G count=0 ; ls -log biggerfile ; df -h .

Output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda9        42G   37G  2.8G  94% /home
0+0 records in
0+0 records out
0 bytes copied, 4.9296e-05 s, 0.0 kB/s
-rw-rw-r-- 1 5368709120000 Jun 29 13:13 biggerfile
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda9        42G   37G  2.8G  94% /home
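A quick way to see why the truncate/dd variants may not trip a disk-space alarm: they create sparse files, whose apparent size (ls) and actual disk usage (du, and hence df) differ. A small illustration, with illustrative sizes and file names:

```shell
# Sparse file: big apparent size, (almost) no blocks actually allocated.
dd of=/tmp/sparse.test bs=1 seek=100M count=0 2>/dev/null
ls -l /tmp/sparse.test     # apparent size: 104857600 bytes
du -k /tmp/sparse.test     # actual usage: ~0 KB

# fallocate, by contrast, really reserves the blocks, so df moves.
fallocate -l 100M /tmp/alloc.test
du -k /tmp/alloc.test      # actual usage: ~102400 KB

rm /tmp/sparse.test /tmp/alloc.test
```

So for forcing a monitoring alarm, only the fallocate variant is reliable; the other two fool ls but not df.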





  • can you add some more information to your answer?
    – cat, Jun 29 '16 at 15:18

  • I just added more thinking to a finished question, for people who come to the same question another way. Ignore it if you are not one of them.
    – Se ven, Jun 29 '16 at 15:50

  • This count=0 method is quite interesting, I've added an example.
    – agc, Jun 29 '16 at 16:47

  • Note that the dd example above may well allocate a sparse file. In that case the file size is 50G, but it's actually only using a block (or not even that), and so the disk is not getting full. YMMV.
    – MAP, Jun 29 '16 at 19:49

  • I tested your suggestion on my ext3 filesystem. It did not work as expected. truncate and dd did create a file with a large file size, but "df -h" did not recognize it; it still shows the same free HD space.
    – Axel Werner, Jun 30 '16 at 6:53

















0 votes

You could also take advantage of the stress-ng tool, which is available on a wide range of Linux-based systems:



stress-ng --fallocate 4 --fallocate-bytes 70% --timeout 1m --metrics --verify --times
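For reference, a rough manual equivalent of what that invocation does (allocate about 70% of the free space, hold it briefly, then release it) -- the /tmp target, file name, and hold time below are illustrative, and df --output requires GNU coreutils:

```shell
# Compute ~70% of the available space on the target filesystem, in KB.
avail_kb=$(df --output=avail -k /tmp | tail -n 1)
target_kb=$(( avail_kb * 70 / 100 ))

# Allocate it, hold long enough for the check interval (increase as needed),
# then clean up.
fallocate -l "${target_kb}K" /tmp/fill.test
sleep 5
rm /tmp/fill.test
```

stress-ng adds the --verify pass and --metrics reporting on top of this basic allocate/hold/release cycle, and spreads the work across the four workers requested by --fallocate 4.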




answered by theosophe74 (new contributor)


















    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "106"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f292843%2fway-to-instantly-fill-up-use-up-lots-of-disk-space%23new-answer', 'question_page');

    );

    Post as a guest






























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    54
    down vote



    accepted










    The fastest way to create a file in a Linux system is using fallocate:



    fallocate -l 50G file 


    From man:




    fallocate is used to manipulate the allocated disk space for a
    file,
    either to deallocate or preallocate it.

    For filesystems which support
    the fallocate system call, preallocation is done quickly by allocating
    blocks and marking them as uninitialized, requiring no IO to the data
    blocks. This is much faster than creating a file by filling it with
    zeros.

    Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0),
    Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).







    share|improve this answer


















    • 1




      Why are you running it through sudo?
      – gerrit
      Jun 29 '16 at 23:31






    • 1




      @gerrit Added that point to the answer.
      – Rui F Ribeiro
      Jun 30 '16 at 6:53







    • 3




      "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
      – T.J. Crowder
      Jun 30 '16 at 7:48







    • 1




      +1 although the OP explicitly mentioned that his filesystem is ext3.
      – syneticon-dj
      Jun 30 '16 at 7:59






    • 1




      @RuiFRibeiro, thanks! for sles11sp4 ive been able to create a file, format it with ext4, but where unable to mount it in RW mode. later i found a kernel message in /var/log/messages that said, ext4 is supported only as read-only. :/
      – Axel Werner
      Jul 1 '16 at 11:40














    up vote
    54
    down vote



    accepted










    The fastest way to create a file in a Linux system is using fallocate:



    fallocate -l 50G file 


    From man:




    fallocate is used to manipulate the allocated disk space for a
    file,
    either to deallocate or preallocate it.

    For filesystems which support
    the fallocate system call, preallocation is done quickly by allocating
    blocks and marking them as uninitialized, requiring no IO to the data
    blocks. This is much faster than creating a file by filling it with
    zeros.

    Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0),
    Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).







    share|improve this answer


















    • 1




      Why are you running it through sudo?
      – gerrit
      Jun 29 '16 at 23:31






    • 1




      @gerrit Added that point to the answer.
      – Rui F Ribeiro
      Jun 30 '16 at 6:53







    • 3




      "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
      – T.J. Crowder
      Jun 30 '16 at 7:48







    • 1




      +1 although the OP explicitly mentioned that his filesystem is ext3.
      – syneticon-dj
      Jun 30 '16 at 7:59






    • 1




      @RuiFRibeiro, thanks! for sles11sp4 ive been able to create a file, format it with ext4, but where unable to mount it in RW mode. later i found a kernel message in /var/log/messages that said, ext4 is supported only as read-only. :/
      – Axel Werner
      Jul 1 '16 at 11:40












    up vote
    54
    down vote



    accepted







    up vote
    54
    down vote



    accepted






    The fastest way to create a file in a Linux system is using fallocate:



    fallocate -l 50G file 


    From man:




    fallocate is used to manipulate the allocated disk space for a
    file,
    either to deallocate or preallocate it.

    For filesystems which support
    the fallocate system call, preallocation is done quickly by allocating
    blocks and marking them as uninitialized, requiring no IO to the data
    blocks. This is much faster than creating a file by filling it with
    zeros.

    Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0),
    Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).







    share|improve this answer














    The fastest way to create a file in a Linux system is using fallocate:



    fallocate -l 50G file 


    From man:




    fallocate is used to manipulate the allocated disk space for a
    file,
    either to deallocate or preallocate it.

    For filesystems which support
    the fallocate system call, preallocation is done quickly by allocating
    blocks and marking them as uninitialized, requiring no IO to the data
    blocks. This is much faster than creating a file by filling it with
    zeros.

    Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0),
    Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).








    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Feb 3 at 1:15

























    answered Jun 29 '16 at 12:17









    Rui F Ribeiro

    37.9k1475122




    37.9k1475122







    • 1




      Why are you running it through sudo?
      – gerrit
      Jun 29 '16 at 23:31






    • 1




      @gerrit Added that point to the answer.
      – Rui F Ribeiro
      Jun 30 '16 at 6:53







    • 3




      "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
      – T.J. Crowder
      Jun 30 '16 at 7:48







    • 1




      +1 although the OP explicitly mentioned that his filesystem is ext3.
      – syneticon-dj
      Jun 30 '16 at 7:59






    • 1




      @RuiFRibeiro, thanks! for sles11sp4 ive been able to create a file, format it with ext4, but where unable to mount it in RW mode. later i found a kernel message in /var/log/messages that said, ext4 is supported only as read-only. :/
      – Axel Werner
      Jul 1 '16 at 11:40












    • 1




      Why are you running it through sudo?
      – gerrit
      Jun 29 '16 at 23:31






    • 1




      @gerrit Added that point to the answer.
      – Rui F Ribeiro
      Jun 30 '16 at 6:53







    • 3




      "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
      – T.J. Crowder
      Jun 30 '16 at 7:48







    • 1




      +1 although the OP explicitly mentioned that his filesystem is ext3.
      – syneticon-dj
      Jun 30 '16 at 7:59






    • 1




      @RuiFRibeiro, thanks! for sles11sp4 ive been able to create a file, format it with ext4, but where unable to mount it in RW mode. later i found a kernel message in /var/log/messages that said, ext4 is supported only as read-only. :/
      – Axel Werner
      Jul 1 '16 at 11:40







    1




    1




    Why are you running it through sudo?
    – gerrit
    Jun 29 '16 at 23:31




    Why are you running it through sudo?
    – gerrit
    Jun 29 '16 at 23:31




    1




    1




    @gerrit Added that point to the answer.
    – Rui F Ribeiro
    Jun 30 '16 at 6:53





    @gerrit Added that point to the answer.
    – Rui F Ribeiro
    Jun 30 '16 at 6:53





    3




    3




    "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
    – T.J. Crowder
    Jun 30 '16 at 7:48





    "fallocate needs root privileges" Not on my system (Linux Mint 17.3, downstream of Ubuntu, thus Debian). (ext4 file system)
    – T.J. Crowder
    Jun 30 '16 at 7:48





    1




    1




    +1 although the OP explicitly mentioned that his filesystem is ext3.
    – syneticon-dj
    Jun 30 '16 at 7:59




    +1 although the OP explicitly mentioned that his filesystem is ext3.
    – syneticon-dj
    Jun 30 '16 at 7:59




    1




    1




    @RuiFRibeiro, thanks! for sles11sp4 ive been able to create a file, format it with ext4, but where unable to mount it in RW mode. later i found a kernel message in /var/log/messages that said, ext4 is supported only as read-only. :/
    – Axel Werner
    Jul 1 '16 at 11:40




    @RuiFRibeiro, thanks! for sles11sp4 ive been able to create a file, format it with ext4, but where unable to mount it in RW mode. later i found a kernel message in /var/log/messages that said, ext4 is supported only as read-only. :/
    – Axel Werner
    Jul 1 '16 at 11:40












    up vote
    13
    down vote













    Other alternatives include:



    1. to change the alarm thresholds to something near or below the current usage, or

    2. to create a very small test partition with limited inodes, size, or other attributes.

    Being able to test things such as running into the root reserved percentage, if any, may also be handy.






    share|improve this answer






















    • root reserved percentage normally is 10% unless you tweak it - it ends up a too big waste of system in big partitions/modern disks. When defining alarms, you´d better already take it in account.
      – Rui F Ribeiro
      Jun 29 '16 at 16:59











    • +1 for the first thing. a hundred times true. why the hell should I actually create something on a machine disk? what if something (like coredump, batch job generating big temporary files, ...) happens at the time of my testing and whole disk gets accidentally eaten up?
      – Fiisch
      Jun 29 '16 at 18:30







    • 1




      @Fisch - Why? To make sure your alerting threshold is correct and that you're not doing something like accidentally setting the inode free percentage instead of the disk space free percentage (which I've seen done before). If something fails because you filled up a disk to the alerting threshold, then your alerting threshold is too low - the whole point of alerting is that it's supposed to alert you before things start to break.
      – Johnny
      Jun 29 '16 at 20:06










    • Cat, good point. But no solution for me. I dont have controll over the VM configuration (cant alter partitions or virtual disks), nor have controll over the NAGIOS Server.
      – Axel Werner
      Jun 30 '16 at 6:37






    • 2




      @AxelWerner Can you loopback-mount a file as "fake" partitiion? That still would allow you to test without seriously affecting anything. Format it with one of the supported filesystems and and you can play around with fallocate too.
      – Tonny
      Jun 30 '16 at 14:09














    up vote
    13
    down vote













    Other alternatives include:



    1. to change the alarm thresholds to something near or below the current usage, or

    2. to create a very small test partition with limited inodes, size, or other attributes.

    Being able to test things such as running into the root reserved percentage, if any, may also be handy.






    share|improve this answer






















    • root reserved percentage normally is 10% unless you tweak it - it ends up a too big waste of system in big partitions/modern disks. When defining alarms, you´d better already take it in account.
      – Rui F Ribeiro
      Jun 29 '16 at 16:59











    • +1 for the first thing. a hundred times true. why the hell should I actually create something on a machine disk? what if something (like coredump, batch job generating big temporary files, ...) happens at the time of my testing and whole disk gets accidentally eaten up?
      – Fiisch
      Jun 29 '16 at 18:30







    • 1




      @Fisch - Why? To make sure your alerting threshold is correct and that you're not doing something like accidentally setting the inode free percentage instead of the disk space free percentage (which I've seen done before). If something fails because you filled up a disk to the alerting threshold, then your alerting threshold is too low - the whole point of alerting is that it's supposed to alert you before things start to break.
      – Johnny
      Jun 29 '16 at 20:06










    • Cat, good point. But no solution for me. I dont have controll over the VM configuration (cant alter partitions or virtual disks), nor have controll over the NAGIOS Server.
      – Axel Werner
      Jun 30 '16 at 6:37






    • 2




      @AxelWerner Can you loopback-mount a file as "fake" partitiion? That still would allow you to test without seriously affecting anything. Format it with one of the supported filesystems and and you can play around with fallocate too.
      – Tonny
      Jun 30 '16 at 14:09












    up vote
    13
    down vote










    up vote
    13
    down vote









    Other alternatives include:



    1. to change the alarm thresholds to something near or below the current usage, or

    2. to create a very small test partition with limited inodes, size, or other attributes.

    Being able to test things such as running into the root reserved percentage, if any, may also be handy.






    share|improve this answer














    Other alternatives include:



    1. to change the alarm thresholds to something near or below the current usage, or

    2. to create a very small test partition with limited inodes, size, or other attributes.

    Being able to test things such as running into the root reserved percentage, if any, may also be handy.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Jun 29 '16 at 15:46









    cat

    1,67621135




    1,67621135










    answered Jun 29 '16 at 14:05









    thrig

    23.2k12854




    23.2k12854











    • root reserved percentage normally is 10% unless you tweak it - it ends up a too big waste of system in big partitions/modern disks. When defining alarms, you´d better already take it in account.
      – Rui F Ribeiro
      Jun 29 '16 at 16:59











    • +1 for the first thing. a hundred times true. why the hell should I actually create something on a machine disk? what if something (like coredump, batch job generating big temporary files, ...) happens at the time of my testing and whole disk gets accidentally eaten up?
      – Fiisch
      Jun 29 '16 at 18:30







    • 1




      @Fisch - Why? To make sure your alerting threshold is correct and that you're not doing something like accidentally setting the inode free percentage instead of the disk space free percentage (which I've seen done before). If something fails because you filled up a disk to the alerting threshold, then your alerting threshold is too low - the whole point of alerting is that it's supposed to alert you before things start to break.
      – Johnny
      Jun 29 '16 at 20:06










    • Cat, good point. But no solution for me. I dont have controll over the VM configuration (cant alter partitions or virtual disks), nor have controll over the NAGIOS Server.
      – Axel Werner
      Jun 30 '16 at 6:37






    • 2




      @AxelWerner Can you loopback-mount a file as "fake" partitiion? That still would allow you to test without seriously affecting anything. Format it with one of the supported filesystems and and you can play around with fallocate too.
      – Tonny
      Jun 30 '16 at 14:09
















    • root reserved percentage normally is 10% unless you tweak it - it ends up a too big waste of system in big partitions/modern disks. When defining alarms, you´d better already take it in account.
      – Rui F Ribeiro
      Jun 29 '16 at 16:59











    • +1 for the first thing. a hundred times true. why the hell should I actually create something on a machine disk? what if something (like coredump, batch job generating big temporary files, ...) happens at the time of my testing and whole disk gets accidentally eaten up?
      – Fiisch
      Jun 29 '16 at 18:30







    • 1




      @Fisch - Why? To make sure your alerting threshold is correct and that you're not doing something like accidentally setting the inode free percentage instead of the disk space free percentage (which I've seen done before). If something fails because you filled up a disk to the alerting threshold, then your alerting threshold is too low - the whole point of alerting is that it's supposed to alert you before things start to break.
      – Johnny
      Jun 29 '16 at 20:06










    • Cat, good point. But no solution for me. I dont have controll over the VM configuration (cant alter partitions or virtual disks), nor have controll over the NAGIOS Server.
      – Axel Werner
      Jun 30 '16 at 6:37






    • 2




      @AxelWerner Can you loopback-mount a file as "fake" partitiion? That still would allow you to test without seriously affecting anything. Format it with one of the supported filesystems and and you can play around with fallocate too.
      – Tonny
      Jun 30 '16 at 14:09

    up vote
    9
    down vote













    1. fallocate -l 50G big_file


    2. truncate -s 50G big_file


    3. dd of=bigfile bs=1 seek=50G count=0


    All three commands create a large file almost instantly, but only fallocate actually reserves disk blocks. truncate and dd with count=0 create sparse files: ls reports the full size, while df still shows the same free space.



    If you like to use dd, try it with seek: just set seek=file_size_what_you_need and count=0. That tells the system a file of that size exists without actually writing any data, and used this way you can even create a file which is bigger than the partition size.
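The sparse-file behaviour can be checked by comparing ls against du (a quick sketch; the filename is illustrative):

```shell
# Create a sparse 1 GiB file without writing any data, then compare
# the apparent size (ls) with the blocks actually allocated (du).
dd of=/tmp/sparse.test bs=1 seek=1G count=0 2>/dev/null
ls -lh /tmp/sparse.test   # apparent size: 1.0G
du -h /tmp/sparse.test    # actual usage: ~0
rm /tmp/sparse.test
```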




    Example, on an ext4 partition with less than 3G available. Use dd to create a 5T file which exists as metadata -- requiring virtually no block space.



    df -h . ; dd of=biggerfile bs=1 seek=5000G count=0 ; ls -log biggerfile ; df -h .


    Output:



    Filesystem Size Used Avail Use% Mounted on
    /dev/sda9 42G 37G 2.8G 94% /home
    0+0 records in
    0+0 records out
    0 bytes copied, 4.9296e-05 s, 0.0 kB/s
    -rw-rw-r-- 1 5368709120000 Jun 29 13:13 biggerfile
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda9 42G 37G 2.8G 94% /home
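For the actual Nagios test, fallocate is the variant that really consumes blocks and moves the df numbers. A minimal test cycle might look like this (the size and path are assumptions; pick values matching your alarm threshold):

```shell
#!/bin/sh
# Allocate real blocks instantly, let the monitoring check fire,
# then free them instantly with rm. Adjust size/path to your setup.
df -h /tmp
fallocate -l 100M /tmp/nagios-fill.test   # near-instant: blocks reserved, nothing written
df -h /tmp                                # Avail should drop accordingly
# ... leave the file in place until Nagios has polled and alerted ...
rm /tmp/nagios-fill.test                  # cleanup is also instant
df -h /tmp
```

Because no data is ever written, both allocation and cleanup complete in well under a second, unlike the dd if=/dev/zero approach from the question.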























    • 1




      can you add some more information to your answer?
      – cat
      Jun 29 '16 at 15:18










    • I just added more detail to an answered question, for people who find it while searching for the same thing; ignore it if that's not you.
      – Se ven
      Jun 29 '16 at 15:50










    • This count=0 method is quite interesting, I've added an example.
      – agc
      Jun 29 '16 at 16:47







    • 6




      Note that the dd example above may well allocate a sparse file. In that case the file size is 50G, it's actually only using a block (or not even) and so the disk is not getting full. YMMV.
      – MAP
      Jun 29 '16 at 19:49






    • 2




      I tested your suggestion on my ext3 filesystem. it did not work as expected. truncate and dd did create a file with a large file size, but "df -h" did not recognize it. still shows the same free hd space.
      – Axel Werner
      Jun 30 '16 at 6:53














    answered Jun 29 '16 at 14:33 – Se ven
    edited Jun 29 '16 at 21:36 – agc
    up vote
    0
    down vote













    You could also take advantage of the stress-ng tool, which is available on a wide range of Linux-based systems:



    stress-ng --fallocate 4 --fallocate-bytes 70% --timeout 1m --metrics --verify --times




        answered 8 mins ago – theosophe74







             
