Efficiently delete large directory containing thousands of files

We have an issue with a folder becoming unwieldy with hundreds of thousands of tiny files.



There are so many files that performing rm -rf returns an error and instead what we need to do is something like:



find /path/to/folder -name "filenamestart*" -type f -exec rm -f {} \;



This works but is very slow and constantly fails from running out of memory.



Is there a better way to do this? Ideally I would like to remove the entire directory without caring about the contents inside it.











Tags: linux command-line files rm
Asked Apr 26 '12 at 7:50 by Toby; edited Nov 5 '16 at 17:23 by Wildcard. Score: 136, favorited 68 times.

  • 13




    rm -rf * in the folder probably fails because of too many arguments; but what about rm -rf folder/ if you want to remove the entire directory anyways?
    – sr_
    Apr 26 '12 at 8:01






  • 4




    Instead of deleting it manually, I suggest having the folder on a separate partition and simply unmount && format && remount.
    – bbaja42
    Apr 26 '12 at 11:22






  • 4




    Just out of curiosity - how many files does it take to break rm -rf?
    – jw013
    Apr 26 '12 at 11:37






  • 6




    You should probably rename the question to something more accurate, like "Efficiently delete large directory containing thousands of files." In order to delete a directory and its contents, recursion is necessary by definition. You could manually unlink just the directory inode itself (probably requires root privileges), unmount the file system, and run fsck on it to reclaim the unused disk blocks, but that approach seems risky and may not be any faster. In addition, the file system check might involve recursively traversing the file system tree anyways.
    – jw013
    Apr 26 '12 at 13:27







  • 3




    Once I had a ccache file tree so huge, and rm was taking so long (and making the entire system sluggish), it was considerably faster to copy all other files off the filesystem, format, and copy them back. Ever since then I give such massive small file trees their own dedicated filesystem, so you can mkfs directly instead of rm.
    – frostschutz
    Jun 15 '13 at 11:43














15 Answers

Answer by stevendaniels, Jun 17 '13 (166 votes):

Using rsync is surprisingly fast and simple.



mkdir empty_dir
rsync -a --delete empty_dir/ yourdirectory/


@sarath's answer mentioned another fast choice: Perl! Its benchmarks are faster than rsync -a --delete.



cd yourdirectory
perl -e 'for(<*>){((stat)[9]<(unlink))}'


Sources:



  1. https://stackoverflow.com/questions/1795370/unix-fast-remove-directory-for-cleaning-up-daily-builds

  2. http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
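
As a follow-up sketch for the rsync approach (not part of the original answer): rsync's -n/--dry-run flag lets you preview what would be deleted, and the leftover empty directories can be removed afterwards.

mkdir empty_dir
# preview what would be deleted, without removing anything
rsync -a -n --delete empty_dir/ yourdirectory/
# actually delete the contents
rsync -a --delete empty_dir/ yourdirectory/
# remove the now-empty directories themselves
rmdir yourdirectory empty_dir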





  • 3




    Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
    – John Powell
    Aug 21 '14 at 19:41






  • 16




    rsync can be faster than plain rm, because it guarantees the deletes happen in the correct order, so less btree recomputation is needed. See this answer: serverfault.com/a/328305/105902
    – Marki555
    Jun 29 '15 at 12:45






  • 5




    Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
    – Abhinav
    Oct 6 '15 at 15:43






  • 1




    Notes: add the -P option to rsync for more progress display; also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can run the rsync command a first time with the -n option to do a dry run.
    – Drasill
    Oct 23 '15 at 15:39






  • 1




    -a equals -rlptgoD, but for deletion only -rd is necessary
    – Koen.
    Mar 19 '16 at 14:36


















Answer by Toby, Apr 26 '12 (37 votes):

Someone on Twitter suggested using -delete instead of -exec rm -f {} \;



This has improved the efficiency of the command, though it still uses recursion to go through everything; a sketch of the full command is below.
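
A minimal sketch of the resulting command, assuming GNU find (which supports -delete):

find /path/to/folder -name "filenamestart*" -type f -delete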






  • 10




    This is non-standard. GNU find has -delete; other find implementations may have it too.
    – enzotib
    Apr 26 '12 at 9:11






  • 12




    -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
    – jw013
    Apr 26 '12 at 11:37










  • GNU is the de facto standard.
    – RonJohn
    Mar 3 at 17:38


















Answer by digital_infinity, Apr 26 '12 (17 votes):

What about something like:
find /path/to/folder -name "filenamestart*" -type f -print0 | xargs -0rn 20 rm -f



You can limit the number of files deleted per rm invocation by changing the argument to the -n parameter. File names with blanks are handled correctly as well.






  • 1




    You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
    – Useless
    Apr 26 '12 at 13:41










  • Yes, you are right. Here is a note from man xargs: (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So the -n option is for cases where xargs cannot determine the command-line buffer size or where the executed command has its own limits.
    – digital_infinity
    Apr 26 '12 at 13:50



















Answer by Izkata, Apr 26 '12 (11 votes):

Expanding on one of the comments, I do not think you're doing what you think you're doing.



First I created a huge amount of files, to simulate your situation:



$ mkdir foo
$ cd foo/
$ for X in $(seq 1 1000); do touch {1..1000}_$X; done


Then I tried what I expected to fail, and what it sounds like you're doing in the question:



$ rm -r foo/*
bash: /bin/rm: Argument list too long


But this does work:



$ rm -r foo/
$ ls foo
ls: cannot access foo: No such file or directory





  • 5




    This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
    – erik
    Apr 9 '14 at 13:01



















Answer by MZAweb, Aug 31 '13 (10 votes):

A clever trick:



rsync -a --delete empty/ your_folder/


It's super CPU intensive, but really really fast. See https://web.archive.org/web/20130929001850/http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html






  • It's not so fast, because it reads the directory contents inefficiently. See this answer for a 10x faster solution and explanation: serverfault.com/a/328305/105902
    – Marki555
    Jun 29 '15 at 12:46






  • 2




    @Marki555: in the Edit of the question it is reported 60 seconds for rsync -a --delete vs 43 for lsdent. The ratio 10x was for time ls -1 | wc -l vs time ./dentls bigfolder >out.txt (that is a partially fair comparison because of > file vs wc -l).
    – Hastur
    Jan 21 '16 at 9:30










  • The problem there is that NONE of the commands over there actually DO the desired traversal operation for deletion. The code they give? DOES NOT WORK as described by Marki555.
    – Svartalf
    Sep 10 at 16:05


















Answer (6 votes):

I had the opportunity to test -delete as compared to -exec rm {} \; and for me -delete was the answer to this problem.



Using -delete deleted the files in a folder of 400,000 files at least 1,000 times faster than rm.



The 'How to delete large number of files in linux' article suggests it is about three times faster, but in my test the difference was much more dramatic.
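
For reference, a sketch of the two forms being compared (paths are placeholders; run each against its own copy of a test directory):

time find /path/to/copy1 -type f -exec rm -f {} \;   # spawns one rm process per file
time find /path/to/copy2 -type f -delete             # unlinks from within find itself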






  • 3




    Using find -exec executes the rm command for every file separately, which is why it is so slow.
    – Marki555
    Jun 26 '15 at 21:43


















Answer (3 votes):

There are a couple of methods that can be used to delete a large number of files in Linux. You can use find with the -delete option, which is faster than the -exec option. You can also use Perl's unlink, or even rsync; a sketch of each is below.
How to delete large number of files in linux
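
A sketch of the three approaches mentioned (paths are placeholders, and the Perl and rsync one-liners are illustrative assumptions rather than quotes from the linked article):

# GNU find with -delete
find /path/to/folder -type f -delete

# Perl unlink, run from inside the directory
cd /path/to/folder && perl -e 'unlink for <*>'

# rsync from an empty directory
mkdir /tmp/empty
rsync -a --delete /tmp/empty/ /path/to/folder/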







Answer (2 votes):

    About the -delete option above: I'm using it to remove a large number (1M+ est) files in a temp folder that I created and inadvertently forgot to cleanup nightly. I filled my disk/partition accidentally, and nothing else could remove them but the find . command. It is slow, at first I was using:



    find . -ls -exec rm {} \;


    But that was taking an EXTREME amount of time. It started after about 15 mins to remove some of the files, but my guess is that it was removing less than 10 or so per second after it finally started. So, I tried the:



    find . -delete


    instead, and I'm letting it run right now. It appears to be running faster, though it's EXTREMELY taxing on the CPU which the other command was not. It's been running for like an hour now and I think I'm getting space back on my drive and the partition gradually "slimming down" but it's still taking a very long time. I seriously doubt it's running 1,000 times faster than the other. As in all things, I just wanted to point out the tradeoff in space vs. time. If you have the CPU bandwidth to spare (we do) then run the latter. It's got my CPU running (uptime reports):



    10:59:17 up 539 days, 21:21, 3 users, load average: 22.98, 24.10, 22.87


    And I've seen the load average go over 30.00 which is not good for a busy system, but for ours which is normally lightly loaded, it's OK for a couple hours. I've checked most other things on the system and they're still responsive so we are OK for now.






    • If you're going to use exec, you almost certainly want to not use -ls and to do find . -type f -exec rm '{}' + instead; + is faster because it will give as many arguments to rm as it can handle at once.
      – xenoterracide
      Jan 3 '14 at 17:48










    • I think you should go ahead and edit this into its own answer… it's really too long for a comment. Also, it sounds like your filesystem has fairly expensive deletes, curious which one it is? You can run that find … -delete through nice or ionice, that may help. So might changing some mount options to less-crash-safe settings. (And, of course, depending on what else is on the filesystem, the quickest way to delete everything is often mkfs.)
      – derobert
      Jan 4 '14 at 7:24







    • 2




      Load average is not always CPU, it's just a measure of the number of blocked processes over time. Processes can block on disk I/O, which is likely what is happening here.
      – Score_Under
      Jul 14 '14 at 12:47










    • Also note that load average does not account for number of logical CPUs. So loadavg 1 for single-core machine is the same as loadavg 64 on 64-core system - meaning each CPU is busy 100% of time.
      – Marki555
      Jun 29 '15 at 12:49


















Answer (1 vote):

    Deleting really large directories needs a different approach, as I learned from this site: you'll need to utilize ionice. It ensures (with -c3) that deletes will only be performed when the system has idle I/O time for them. Your system's load will not rise too high and everything stays responsive (though my CPU time for find was quite high, at about 50%).



    find <dir> -type f -exec ionice -c3 rm {} \;
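
    A sketch of two related variants, assuming GNU find (the + form batches arguments so fewer rm processes are forked; -delete avoids spawning rm entirely while keeping the idle I/O class):

    find <dir> -type f -exec ionice -c3 rm {} +
    ionice -c3 find <dir> -type f -delete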





    • 5




      using + instead of ; would make this faster as it passes more arguments to rm at once, less forking
      – xenoterracide
      Jan 3 '14 at 17:50


















Answer (1 vote):

    Consider keeping such a directory with a large number of files on its own Btrfs subvolume, and simply deleting the whole subvolume.



    Alternatively, you can create a filesystem image file, mount it over that directory, and later just unmount it and delete the image file to remove everything at once really fast. Sketches of both ideas follow.
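
    A sketch of both ideas (the paths, size, and filesystem type are illustrative assumptions, not from the original answer):

    # Btrfs: give the busy directory its own subvolume, then drop it in one operation
    btrfs subvolume create /data/scratch
    # ... fill /data/scratch with files ...
    btrfs subvolume delete /data/scratch

    # Loopback image: everything lives inside a single file that can be deleted instantly
    truncate -s 10G /data/scratch.img
    mkfs.ext4 -F /data/scratch.img
    mount -o loop /data/scratch.img /data/scratch
    # ... fill /data/scratch with files ...
    umount /data/scratch
    rm /data/scratch.img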







Answer (1 vote):

      Assuming you have GNU parallel installed, I've used this:



      parallel rm -rf dir/{} ::: `ls -f dir/`



      and it was fast enough.
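
      As a sketch of a variant that avoids parsing ls output and is safe with odd filenames (an assumption on my part, not part of the original answer):

      find dir/ -mindepth 1 -maxdepth 1 -print0 | parallel -0 rm -rf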







Answer (0 votes):

        ls -1 | xargs rm -rf 


        should work inside the main folder






        • 1




          ls won't work because of the amount of files in the folder. This is why I had to use find, thanks though.
          – Toby
          Apr 26 '12 at 8:19






        • 4




          @Toby: Try ls -f, which disables sorting. Sorting requires that the entire directory be loaded into memory to be sorted. An unsorted ls should be able to stream its output.
          – camh
          Apr 26 '12 at 10:59






        • 1




          Does not work on filenames that contain newlines.
          – maxschlepzig
          Jan 5 '14 at 7:53










        • @camh that's true. But removing files in sorted order is faster than in unsorted (because of recalculating the btree of the directory after each deletion). See this answer for an example serverfault.com/a/328305/105902
          – Marki555
          Jun 29 '15 at 12:50











        • @maxschlepzig for such files you can use find . -print0 | xargs -0 rm, which will use the NULL char as filename separator.
          – Marki555
          Jun 29 '15 at 12:51


















Answer (0 votes):

        For Izkata's hint above:




        But this does work:



        $ rm -r foo/
        $ ls foo
        ls: cannot access foo: No such file or directory



        This almost worked - or would have worked - but I had some problems with permissions; the files were on a server, and I still don't understand where the permission issue came from. Anyway, the terminal asked for confirmation on every file. The number of files was around 20,000, so this wasn't an option. After "-r" I added the "-f" option, so the whole command was "rm -r -f foldername/". Then it seemed to work fine. I'm a novice with the terminal, but I guess this was okay, right? Thanks!







Answer (0 votes):

          Depending on how well you need to get rid of those files, I'd suggest using shred.



          $ shred -zuv folder


          If you want to purge the directory, but you can't remove it and recreate it, I suggest moving it and recreating it instantly:



          mv folder folder_del
          mkdir folder
          rm -rf folder_del


          This is faster, believe it or not, as only one inode has to be changed. Remember: you can't really parallelize this task on a multicore computer. It comes down to disk access, which is limited by the RAID or what have you.






          • 1




            shred will not work with many modern filesystems.
            – Smith John
            Jul 2 '13 at 14:47



















Answer (0 votes):

          If you have millions of files and every solution above puts your system under stress, you may try this inspiration:



          File nice_delete:



          #!/bin/bash

          # Delete the files given as arguments in small batches, pausing whenever
          # the 1-minute load average climbs above MAX_LOAD.
          MAX_LOAD=3
          FILES=("$@")
          BATCH=100

          while [ ${#FILES[@]} -gt 0 ]; do
              DEL=("${FILES[@]:0:$BATCH}")      # take the next batch
              ionice -c3 rm "${DEL[@]}"         # delete it at idle I/O priority
              echo -n "#"
              FILES=("${FILES[@]:$BATCH}")      # drop the batch from the list
              while [[ $(cat /proc/loadavg | awk '{print int($1)}') -gt $MAX_LOAD ]]; do
                  echo -n "."
                  sleep 1
              done
          done


          And now delete the files:



          find /path/to/folder -type f -exec ./nice_delete {} +


          find will create batches (see getconf ARG_MAX) of some tens of thousands of files and pass them to nice_delete, which will then create even smaller batches and sleep whenever overload is detected.
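
          Remember to make the script executable first (a usage note, not from the original answer):

          chmod +x nice_delete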






          share|improve this answer



















            protected by slm♦ Feb 18 '14 at 21:21



            Thank you for your interest in this question.
            Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).



            Would you like to answer one of these unanswered questions instead?














            15 Answers
            15






            active

            oldest

            votes








            15 Answers
            15






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            166
            down vote













            Using rsync is surprising fast and simple.



            mkdir empty_dir
            rsync -a --delete empty_dir/ yourdirectory/


            @sarath's answer mentioned another fast choice: Perl! Its benchmarks are faster than rsync -a --delete.



            cd yourdirectory
            perl -e 'for(<*>)((stat)[9]<(unlink))'


            Sources:



            1. https://stackoverflow.com/questions/1795370/unix-fast-remove-directory-for-cleaning-up-daily-builds

            2. http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux





            share|improve this answer


















            • 3




              Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
              – John Powell
              Aug 21 '14 at 19:41






            • 16




              rsync can be faster than plain rm, because it guarantees the deletes in correct order, so less btress recomputation is needed. See this answer serverfault.com/a/328305/105902
              – Marki555
              Jun 29 '15 at 12:45






            • 5




              Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
              – Abhinav
              Oct 6 '15 at 15:43






            • 1




              Notes : add -P option to rsync for some more display, also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can start the rsync command a first time with the -n option first to launch a dry run.
              – Drasill
              Oct 23 '15 at 15:39






            • 1




              -a equals -rlptgoD, but for deletion only -rd is necessary
              – Koen.
              Mar 19 '16 at 14:36














            up vote
            166
            down vote













            Using rsync is surprising fast and simple.



            mkdir empty_dir
            rsync -a --delete empty_dir/ yourdirectory/


            @sarath's answer mentioned another fast choice: Perl! Its benchmarks are faster than rsync -a --delete.



            cd yourdirectory
            perl -e 'for(<*>)((stat)[9]<(unlink))'


            Sources:



            1. https://stackoverflow.com/questions/1795370/unix-fast-remove-directory-for-cleaning-up-daily-builds

            2. http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux





            share|improve this answer


















            • 3




              Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
              – John Powell
              Aug 21 '14 at 19:41






            • 16




              rsync can be faster than plain rm, because it guarantees the deletes in correct order, so less btress recomputation is needed. See this answer serverfault.com/a/328305/105902
              – Marki555
              Jun 29 '15 at 12:45






            • 5




              Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
              – Abhinav
              Oct 6 '15 at 15:43






            • 1




              Notes : add -P option to rsync for some more display, also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can start the rsync command a first time with the -n option first to launch a dry run.
              – Drasill
              Oct 23 '15 at 15:39






            • 1




              -a equals -rlptgoD, but for deletion only -rd is necessary
              – Koen.
              Mar 19 '16 at 14:36












            up vote
            166
            down vote










            up vote
            166
            down vote









            Using rsync is surprising fast and simple.



            mkdir empty_dir
            rsync -a --delete empty_dir/ yourdirectory/


            @sarath's answer mentioned another fast choice: Perl! Its benchmarks are faster than rsync -a --delete.



            cd yourdirectory
            perl -e 'for(<*>)((stat)[9]<(unlink))'


            Sources:



            1. https://stackoverflow.com/questions/1795370/unix-fast-remove-directory-for-cleaning-up-daily-builds

            2. http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux





            share|improve this answer














            Using rsync is surprising fast and simple.



            mkdir empty_dir
            rsync -a --delete empty_dir/ yourdirectory/


            @sarath's answer mentioned another fast choice: Perl! Its benchmarks are faster than rsync -a --delete.



            cd yourdirectory
            perl -e 'for(<*>)((stat)[9]<(unlink))'


            Sources:



            1. https://stackoverflow.com/questions/1795370/unix-fast-remove-directory-for-cleaning-up-daily-builds

            2. http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux






            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited May 23 '17 at 12:40









            Community♦

            1




            1










            answered Jun 17 '13 at 7:26









            stevendaniels

            1,761174




            1,761174







            • 3




              Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
              – John Powell
              Aug 21 '14 at 19:41






            • 16




              rsync can be faster than plain rm, because it guarantees the deletes in correct order, so less btress recomputation is needed. See this answer serverfault.com/a/328305/105902
              – Marki555
              Jun 29 '15 at 12:45






            • 5




              Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
              – Abhinav
              Oct 6 '15 at 15:43






            • 1




              Notes : add -P option to rsync for some more display, also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can start the rsync command a first time with the -n option first to launch a dry run.
              – Drasill
              Oct 23 '15 at 15:39






            • 1




              -a equals -rlptgoD, but for deletion only -rd is necessary
              – Koen.
              Mar 19 '16 at 14:36












            • 3




              Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
              – John Powell
              Aug 21 '14 at 19:41






            • 16




              rsync can be faster than plain rm, because it guarantees the deletes in correct order, so less btress recomputation is needed. See this answer serverfault.com/a/328305/105902
              – Marki555
              Jun 29 '15 at 12:45






            • 5




              Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
              – Abhinav
              Oct 6 '15 at 15:43






            • 1




              Notes : add -P option to rsync for some more display, also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can start the rsync command a first time with the -n option first to launch a dry run.
              – Drasill
              Oct 23 '15 at 15:39






            • 1




              -a equals -rlptgoD, but for deletion only -rd is necessary
              – Koen.
              Mar 19 '16 at 14:36







            3




            3




            Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
            – John Powell
            Aug 21 '14 at 19:41




            Thanks, very useful. I use rsync all the time, I had no idea you could use it to delete like this. Vastly quicker than rm -rf
            – John Powell
            Aug 21 '14 at 19:41




            16




            16




            rsync can be faster than plain rm, because it guarantees the deletes in correct order, so less btress recomputation is needed. See this answer serverfault.com/a/328305/105902
            – Marki555
            Jun 29 '15 at 12:45




            rsync can be faster than plain rm, because it guarantees the deletes in correct order, so less btress recomputation is needed. See this answer serverfault.com/a/328305/105902
            – Marki555
            Jun 29 '15 at 12:45




            5




            5




            Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
            – Abhinav
            Oct 6 '15 at 15:43




            Can anyone modify the perl expression to recursively delete all directories and files inside a directory_to_be_deleted ?
            – Abhinav
            Oct 6 '15 at 15:43




            1




            1




            Notes : add -P option to rsync for some more display, also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can start the rsync command a first time with the -n option first to launch a dry run.
            – Drasill
            Oct 23 '15 at 15:39




            Notes : add -P option to rsync for some more display, also, be careful about the syntax, the trailing slashes are mandatory. Finally, you can start the rsync command a first time with the -n option first to launch a dry run.
            – Drasill
            Oct 23 '15 at 15:39




            1




            1




            -a equals -rlptgoD, but for deletion only -rd is necessary
            – Koen.
            Mar 19 '16 at 14:36




            -a equals -rlptgoD, but for deletion only -rd is necessary
            – Koen.
            Mar 19 '16 at 14:36












            up vote
            37
            down vote













            Someone on Twitter suggested using -delete instead of -exec rm -f ;



            This has improved the efficiency of the command, it still uses recursion to go through everything though.






            share|improve this answer
















            • 10




              This is non standard. GNU find have -delete, and other find maybe.
              – enzotib
              Apr 26 '12 at 9:11






            • 12




              -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
              – jw013
              Apr 26 '12 at 11:37










            • GNU is the de facto standard.
              – RonJohn
              Mar 3 at 17:38














            up vote
            37
            down vote













            Someone on Twitter suggested using -delete instead of -exec rm -f ;



            This has improved the efficiency of the command, it still uses recursion to go through everything though.






            share|improve this answer
















            • 10




              This is non standard. GNU find have -delete, and other find maybe.
              – enzotib
              Apr 26 '12 at 9:11






            • 12




              -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
              – jw013
              Apr 26 '12 at 11:37










            • GNU is the de facto standard.
              – RonJohn
              Mar 3 at 17:38












            up vote
            37
            down vote










            up vote
            37
            down vote









            Someone on Twitter suggested using -delete instead of -exec rm -f ;



            This has improved the efficiency of the command, it still uses recursion to go through everything though.






            share|improve this answer












            Someone on Twitter suggested using -delete instead of -exec rm -f ;



            This has improved the efficiency of the command, it still uses recursion to go through everything though.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Apr 26 '12 at 8:18









            Toby

            1,31031216




            1,31031216







            • 10




              This is non standard. GNU find have -delete, and other find maybe.
              – enzotib
              Apr 26 '12 at 9:11






            • 12




              -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
              – jw013
              Apr 26 '12 at 11:37










            • GNU is the de facto standard.
              – RonJohn
              Mar 3 at 17:38












            • 10




              This is non standard. GNU find have -delete, and other find maybe.
              – enzotib
              Apr 26 '12 at 9:11






            • 12




              -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
              – jw013
              Apr 26 '12 at 11:37










            • GNU is the de facto standard.
              – RonJohn
              Mar 3 at 17:38







            10




            10




            This is non standard. GNU find have -delete, and other find maybe.
            – enzotib
            Apr 26 '12 at 9:11




            This is non standard. GNU find have -delete, and other find maybe.
            – enzotib
            Apr 26 '12 at 9:11




            12




            12




            -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
            – jw013
            Apr 26 '12 at 11:37




            -delete should always be preferred to -exec rm when available, for reasons of safety and efficiency.
            – jw013
            Apr 26 '12 at 11:37












            GNU is the de facto standard.
            – RonJohn
            Mar 3 at 17:38




            GNU is the de facto standard.
            – RonJohn
            Mar 3 at 17:38










            up vote
            17
            down vote













            What about something like:
            find /path/to/folder -name "filenamestart*" -type f -print0 | xargs -0rn 20 rm -f



            You can limit number of files to delete at once by changing the argument for parameter -n. The file names with blanks are included also.






            share|improve this answer
















            • 1




              You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
              – Useless
              Apr 26 '12 at 13:41










            • Yes, you are right. Here is a note from man xargs : (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So -n option is for such cases where xargs cannot determine the CLI buffer size or if the executed command has some limits.
              – digital_infinity
              Apr 26 '12 at 13:50















            up vote
            17
            down vote













            What about something like:
            find /path/to/folder -name "filenamestart*" -type f -print0 | xargs -0rn 20 rm -f



            You can limit number of files to delete at once by changing the argument for parameter -n. The file names with blanks are included also.






            share|improve this answer
















            • 1




              You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
              – Useless
              Apr 26 '12 at 13:41










            • Yes, you are right. Here is a note from man xargs : (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So -n option is for such cases where xargs cannot determine the CLI buffer size or if the executed command has some limits.
              – digital_infinity
              Apr 26 '12 at 13:50













            up vote
            17
            down vote










            up vote
            17
            down vote









            What about something like:
            find /path/to/folder -name "filenamestart*" -type f -print0 | xargs -0rn 20 rm -f



            You can limit number of files to delete at once by changing the argument for parameter -n. The file names with blanks are included also.






            share|improve this answer












            What about something like:
            find /path/to/folder -name "filenamestart*" -type f -print0 | xargs -0rn 20 rm -f



            You can limit number of files to delete at once by changing the argument for parameter -n. The file names with blanks are included also.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Apr 26 '12 at 8:20









            digital_infinity

            6451410




            6451410







            • 1




              You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
              – Useless
              Apr 26 '12 at 13:41










            • Yes, you are right. Here is a note from man xargs : (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So -n option is for such cases where xargs cannot determine the CLI buffer size or if the executed command has some limits.
              – digital_infinity
              Apr 26 '12 at 13:50













            • 1




              You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
              – Useless
              Apr 26 '12 at 13:41










            • Yes, you are right. Here is a note from man xargs : (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So -n option is for such cases where xargs cannot determine the CLI buffer size or if the executed command has some limits.
              – digital_infinity
              Apr 26 '12 at 13:50








            1




            1




            You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
            – Useless
            Apr 26 '12 at 13:41




            You probably don't need the -n 20 bit, since xargs should limit itself to acceptable argument-list sizes anyway.
            – Useless
            Apr 26 '12 at 13:41












            Yes, you are right. Here is a note from man xargs : (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So -n option is for such cases where xargs cannot determine the CLI buffer size or if the executed command has some limits.
            – digital_infinity
            Apr 26 '12 at 13:50





            Yes, you are right. Here is a note from man xargs : (...) max-chars characters per command line (...). The largest allowed value is system-dependent, and is calculated as the argument length limit for exec. So -n option is for such cases where xargs cannot determine the CLI buffer size or if the executed command has some limits.
            – digital_infinity
            Apr 26 '12 at 13:50











            up vote
            11
            down vote













            Expanding on one of the comments, I do not think you're doing what you think you're doing.



            First I created a huge amount of files, to simulate your situation:



            $ mkdir foo
            $ cd foo/
            $ for X in $(seq 1 1000);do touch 1..1000_$X; done


            Then I tried what I expected to fail, and what it sounds like you're doing in the question:



            $ rm -r foo/*
            bash: /bin/rm: Argument list too long


            But this does work:



            $ rm -r foo/
            $ ls foo
            ls: cannot access foo: No such file or directory





            share|improve this answer
















            • 5




              This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
              – erik
              Apr 9 '14 at 13:01















            up vote
            11
            down vote













            Expanding on one of the comments, I do not think you're doing what you think you're doing.



            First I created a huge amount of files, to simulate your situation:



            $ mkdir foo
            $ cd foo/
            $ for X in $(seq 1 1000);do touch 1..1000_$X; done


            Then I tried what I expected to fail, and what it sounds like you're doing in the question:



            $ rm -r foo/*
            bash: /bin/rm: Argument list too long


            But this does work:



            $ rm -r foo/
            $ ls foo
            ls: cannot access foo: No such file or directory





            share|improve this answer
















            • 5




              This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
              – erik
              Apr 9 '14 at 13:01













            up vote
            11
            down vote










            up vote
            11
            down vote









            Expanding on one of the comments, I do not think you're doing what you think you're doing.



            First I created a huge amount of files, to simulate your situation:



            $ mkdir foo
            $ cd foo/
            $ for X in $(seq 1 1000);do touch 1..1000_$X; done


            Then I tried what I expected to fail, and what it sounds like you're doing in the question:



            $ rm -r foo/*
            bash: /bin/rm: Argument list too long


            But this does work:



            $ rm -r foo/
            $ ls foo
            ls: cannot access foo: No such file or directory





            share|improve this answer












            Expanding on one of the comments, I do not think you're doing what you think you're doing.



            First I created a huge amount of files, to simulate your situation:



            $ mkdir foo
            $ cd foo/
            $ for X in $(seq 1 1000);do touch 1..1000_$X; done


            Then I tried what I expected to fail, and what it sounds like you're doing in the question:



            $ rm -r foo/*
            bash: /bin/rm: Argument list too long


            But this does work:



            $ rm -r foo/
            $ ls foo
            ls: cannot access foo: No such file or directory






            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Apr 26 '12 at 14:04









            Izkata

            57028




            57028







            • 5




              This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
              – erik
              Apr 9 '14 at 13:01













            • 5




              This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
              – erik
              Apr 9 '14 at 13:01








            5




            5




            This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
            – erik
            Apr 9 '14 at 13:01





            This is the only solution that worked: Run rm -Rf bigdirectory several times. I had a directory with thousands of millions of subdirectories and files. I couldn’t even run ls or find or rsync in that directory, because it ran out of memory. The command rm -Rf quit many times (out of memory) only deleting part of the billions of files. But after many retries it finally did the job. Seems to be the only solution if running out of memory is the problem.
            – erik
            Apr 9 '14 at 13:01











            up vote
            10
            down vote













            A clever trick:



            rsync -a --delete empty/ your_folder/


            It's super CPU intensive, but really really fast. See https://web.archive.org/web/20130929001850/http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html






answered Aug 31 '13 at 4:13 by MZAweb, edited Oct 25 '16 at 16:09 by Qtax

• It's not so fast, because it reads the directory contents inefficiently. See this answer for a 10x faster solution and explanation serverfault.com/a/328305/105902
  – Marki555
  Jun 29 '15 at 12:46

• @Marki555: in the Edit of the question it is reported 60 seconds for rsync -a --delete vs 43 for lsdent. The ratio 10x was for time ls -1 | wc -l vs time ./dentls bigfolder >out.txt (that is a partially fair comparison because of > file vs wc -l).
  – Hastur
  Jan 21 '16 at 9:30

• The problem there is that NONE of the commands over there actually DO the desired traversal operation for deletion. The code they give? DOES NOT WORK as described by Marki555.
  – Svartalf
  Sep 10 at 16:05










up vote
6
down vote

I had the opportunity to test -delete as compared to -exec rm {} \; and for me -delete was the answer to this problem.

Using -delete deleted the files in a folder of 400,000 files at least 1,000 times faster than rm.

The 'How to delete large number of files in linux' article suggests it is about three times faster, but in my test the difference was much more dramatic.
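
For reference, a minimal sketch of the two forms being compared here (the path is a placeholder; -delete requires GNU find):

# spawns one rm process per file, which is what makes it slow
find /path/to/folder -type f -exec rm {} \;

# unlinks the files from within find itself, no extra processes
find /path/to/folder -type f -delete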






answered Jul 2 '13 at 13:17 by user2365090, edited Jul 2 '13 at 13:33 by slm♦

• Using find -exec executes the rm command for every file separately, that's why it is so slow.
  – Marki555
  Jun 26 '15 at 21:43










up vote
3
down vote

There are a couple of methods that can be used to delete a large number of files in Linux. You can use find with the -delete option, which is faster than the -exec option. Then you can use perl unlink, or even rsync.
How to delete large number of files in linux
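
A rough sketch of the three approaches mentioned (paths and the empty directory are placeholders, and the perl one-liner is just one possible form, not quoted from the linked article):

# find: unlink matching files directly (GNU find)
find /path/to/folder -type f -delete

# perl: read the directory and unlink each entry as it is seen
cd /path/to/folder && perl -e 'opendir my $d, "." or die; while (defined(my $f = readdir $d)) { unlink $f unless $f =~ /^\.\.?$/ }'

# rsync: mirror an empty directory over the target
rsync -a --delete /path/to/empty_dir/ /path/to/folder/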






answered Jun 15 '13 at 11:39 by sarath




















up vote
2
down vote

About the -delete option above: I'm using it to remove a large number (1M+ est) files in a temp folder that I created and inadvertently forgot to cleanup nightly. I filled my disk/partition accidentally, and nothing else could remove them but the find . command. It is slow; at first I was using:

find . -ls -exec rm {} \;

But that was taking an EXTREME amount of time. It started after about 15 mins to remove some of the files, but my guess is that it was removing less than 10 or so per second after it finally started. So, I tried the:

find . -delete

instead, and I'm letting it run right now. It appears to be running faster, though it's EXTREMELY taxing on the CPU which the other command was not. It's been running for like an hour now and I think I'm getting space back on my drive and the partition gradually "slimming down" but it's still taking a very long time. I seriously doubt it's running 1,000 times faster than the other. As in all things, I just wanted to point out the tradeoff in space vs. time. If you have the CPU bandwidth to spare (we do) then run the latter. It's got my CPU running (uptime reports):

10:59:17 up 539 days, 21:21, 3 users, load average: 22.98, 24.10, 22.87

And I've seen the load average go over 30.00 which is not good for a busy system, but for ours which is normally lightly loaded, it's OK for a couple hours. I've checked most other things on the system and they're still responsive so we are OK for now.
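
If you want to watch a long-running delete like this without stressing the directory again, polling the free-inode and free-space counts is a cheap way to see progress (a sketch; the mount point is a placeholder):

# inode usage drops as files are unlinked; block usage drops with it
watch -n 60 'df -i /path/to/partition; df -h /path/to/partition'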






answered Dec 31 '13 at 19:00 by Scotty, edited Jun 20 '16 at 11:20 by Pierre.Vriens

• if you're going to use exec you almost certainly want to not use -ls and do find . -type f -exec rm {} + — the {} + form is faster because it will give as many arguments to rm as it can handle at once.
  – xenoterracide
  Jan 3 '14 at 17:48

• I think you should go ahead and edit this into its own answer… it's really too long for a comment. Also, it sounds like your filesystem has fairly expensive deletes, curious which one it is? You can run that find … -delete through nice or ionice, that may help. So might changing some mount options to less-crash-safe settings. (And, of course, depending on what else is on the filesystem, the quickest way to delete everything is often mkfs.)
  – derobert
  Jan 4 '14 at 7:24

• Load average is not always CPU, it's just a measure of the number of blocked processes over time. Processes can block on disk I/O, which is likely what is happening here.
  – Score_Under
  Jul 14 '14 at 12:47

• Also note that load average does not account for the number of logical CPUs. So loadavg 1 for a single-core machine is the same as loadavg 64 on a 64-core system - meaning each CPU is busy 100% of the time.
  – Marki555
  Jun 29 '15 at 12:49










up vote
1
down vote

Deleting REALLY LARGE directories needs a different approach, as I learned from this site - you'll need to utilize ionice. It ensures (with -c3) that deletes will only be performed when the system has IO-time for it. Your system's load will not rise too high and everything stays responsive (though my CPU time for find was quite high at about 50%).

find <dir> -type f -exec ionice -c3 rm {} \;
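
Following the comment below about + versus ;, a variant that batches arguments while keeping the idle I/O class (a sketch; the rm batches spawned by find inherit the ionice class):

# run find itself in the idle I/O class; its rm children inherit it
ionice -c3 find <dir> -type f -exec rm {} +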





answered May 10 '13 at 6:51 by gamma

• using + instead of ; would make this faster as it passes more arguments to rm at once, less forking
  – xenoterracide
  Jan 3 '14 at 17:50










up vote
1
down vote

Consider using a Btrfs volume and simply delete the whole volume for such a directory with a large number of files.

Alternatively you can create an FS image file, then unmount it and delete the image file to remove everything at once really fast.
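
A minimal sketch of both ideas (the paths, the 10G size and the choice of ext4 for the image are all placeholders):

# Btrfs subvolume: deleting the subvolume drops all of its files in one operation
btrfs subvolume create /data/scratch
# ... fill /data/scratch with millions of files ...
btrfs subvolume delete /data/scratch

# loopback image: throwing away the image file discards the whole filesystem
truncate -s 10G /data/scratch.img
mkfs.ext4 -F /data/scratch.img
mount -o loop /data/scratch.img /mnt/scratch
# ... later ...
umount /mnt/scratch
rm /data/scratch.img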






answered Feb 27 '17 at 15:46 by Sergei




















up vote
1
down vote

Assuming you have GNU parallel installed, I've used this:

parallel rm -rf dir/{} ::: `ls -f dir/`

and it was fast enough.
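
A NUL-safe variant of the same idea (a sketch, not the exact command above): let find emit the names and let parallel pack them into large rm invocations:

# -print0 / -0 keep unusual file names intact; -X fills each rm command line
find dir/ -mindepth 1 -maxdepth 1 -print0 | parallel -0 -X rm -rf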






answered Oct 3 '17 at 0:41 by Nacho




















up vote
0
down vote

ls -1 | xargs rm -rf

should work inside the main folder
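
As the comments below point out, sorting and unusual file names are the weak points of this pipeline; a NUL-separated sketch avoids both (GNU tools assumed):

# -print0 / -0 survive spaces and newlines in names; -mindepth/-maxdepth 1
# restrict the list to the directory's immediate entries
find /path/to/folder -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf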






answered Apr 26 '12 at 8:17 by PsyStyle

• ls won't work because of the amount of files in the folder. This is why I had to use find, thanks though.
  – Toby
  Apr 26 '12 at 8:19

• @Toby: Try ls -f, which disables sorting. Sorting requires that the entire directory be loaded into memory to be sorted. An unsorted ls should be able to stream its output.
  – camh
  Apr 26 '12 at 10:59

• Does not work on filenames that contain newlines.
  – maxschlepzig
  Jan 5 '14 at 7:53

• @camh that's true. But removing files in sorted order is faster than in unsorted (because of recalculating the btree of the directory after each deletion). See this answer for an example serverfault.com/a/328305/105902
  – Marki555
  Jun 29 '15 at 12:50

• @maxschlepzig for such files you can use find . -print0 | xargs -0 rm, which will use the NULL char as filename separator.
  – Marki555
  Jun 29 '15 at 12:51










up vote
0
down vote

For Izkata's hint above:

But this does work:

$ rm -r foo/
$ ls foo
ls: cannot access foo: No such file or directory

This almost worked - or would have worked - but I had some permission problems; the files were on a server, but I still don't understand where this permission issue came from. Anyway, Terminal asked for confirmation on every file. The number of files was around 20 000, so this wasn't an option. After "-r" I added the option "-f", so the whole command was "rm -r -f foldername/". Then it seemed to work fine. I'm a novice with Terminal, but I guess this was okay, right? Thanks!






answered Jun 20 '13 at 5:42 by user41527




















up vote
0
down vote

Depending on how well you need to get rid of those files, I'd suggest using shred.

$ shred -zuv folder

If you want to purge the directory, but you can't remove it and recreate it, I suggest moving it and recreating it instantly.

mv folder folder_del
mkdir folder
rm -rf folder_del

This is faster, believe it or not, as only one inode has to be changed. Remember: you can't really parallelize this task on a multicore computer. It comes down to disk access, which is limited by the RAID or what have you.
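
The instant part here is the rename; the rm still takes as long as ever, so one common extension (a sketch, not part of the original answer) is to push the slow removal into the background:

# the rename frees the original path immediately
mv folder folder_del && mkdir folder
# the slow recursive delete then runs detached in the background
nohup rm -rf folder_del >/dev/null 2>&1 &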






answered Jul 2 '13 at 13:56 by polemon

• shred will not work with many modern filesystems.
  – Smith John
  Jul 2 '13 at 14:47











                                            up vote
                                            0
                                            down vote













If you have millions of files and every solution above puts your system under stress, you may try this for inspiration:



File nice_delete:



#!/bin/bash

MAX_LOAD=3        # pause deleting while the 1-minute load average is above this
FILES=("$@")
BATCH=100         # files removed per rm invocation

while [ ${#FILES[@]} -gt 0 ]; do
    DEL=("${FILES[@]:0:$BATCH}")      # next batch of file names
    ionice -c3 rm "${DEL[@]}"         # delete with "idle" I/O scheduling class
    echo -n "#"
    FILES=("${FILES[@]:$BATCH}")      # drop the batch just deleted
    while [[ $(cat /proc/loadavg | awk '{print int($1)}') -gt $MAX_LOAD ]]; do
        echo -n "."
        sleep 1
    done
done


And now delete the files:



find /path/to/folder -type f -exec ./nice_delete {} +


find will create batches (see getconf ARG_MAX) of some tens of thousands of files and pass them to nice_delete, which in turn works through even smaller batches of its own, sleeping whenever an overload is detected.
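
As a quick usage sketch (assuming the script above is saved as nice_delete in the current directory), make it executable first and, if curious, check the argument-list limit that find batches against:

chmod +x nice_delete
getconf ARG_MAX    # upper bound (in bytes) on the argument list find can pass per batch
find /path/to/folder -type f -exec ./nice_delete {} +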






answered Sep 27 at 23:35

brablc


