Tracking down where disk space has gone on Linux?

443 votes · favorited 150 times
When administering Linux systems I often find myself struggling to track down the culprit after a partition goes full. I normally use du / | sort -nr, but on a large filesystem this takes a long time before any results are returned.

Also, this is usually successful in highlighting the worst offender, but I've often found myself resorting to du without the sort in more subtle cases and then had to trawl through the output.

I'd prefer a command-line solution which relies on standard Linux commands, since I have to administer quite a few systems and installing new software is a hassle (especially when out of disk space!).










Tags: command-line, partition, disk-usage






asked Aug 28 '08 at 13:17 by Owen Fraser-Green · edited Jul 19 at 15:14 by Anderson

migrated from stackoverflow.com Apr 18 '14 at 18:36 · This question came from our site for professional and enthusiast programmers.
36 Answers






Answer (464 votes)
Try ncdu, an excellent command-line disk-usage analyser.
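For example (install command per the comments below; -x keeps the scan on one filesystem):

    sudo apt install ncdu    # Debian/Ubuntu
    sudo ncdu -x /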






answered Apr 13 '11 at 10:36 by Anthony Cartmell · edited Nov 15 '16 at 10:32 by GAD3R

• (61) This is absolutely the best tool. Thank you. – Vlad Patryshev, Jul 25 '15 at 5:22
• (3) Typically, I hate being asked to install something to solve a simple issue, but this is just great. – gwg, Jul 5 '16 at 18:58
• (8) sudo apt install ncdu on Ubuntu gets it easily. It's great. – Orion Edwards, Jul 19 '17 at 22:30
• (6) You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned. – Luke Cousins, Jul 21 '17 at 11:51
• (4) Best answer. Also: sudo ncdu -rx / should give a clean read on the biggest dirs/files ONLY on the root drive (-r = read-only, -x = stay on the same filesystem, i.e. do not traverse other filesystem mounts). – bshea, Sep 21 '17 at 15:52
Answer (313 votes)
Don't go straight to du /. Use df to find the partition that's hurting you, and then try du commands.

One I like to try is

    # U.S.
    du -h <dir> | grep -E '[0-9.]+G'
    # Others (where the decimal separator is a comma)
    du -h <dir> | grep -E '[0-9,]+G'

because it prints sizes in "human-readable form". Unless you've got really small partitions, grepping for directories in the gigabytes is a pretty good filter for what you want. This will take you some time, but unless you have quotas set up, I think that's just the way it's going to be.



As @jchavannes points out in the comments, the expression can get more precise if you're finding too many false positives. I incorporated the suggestion, which does make it better, but there are still false positives, so there are just tradeoffs (a simpler expression gives worse results; a more complex and longer expression gives better results). If you have too many little directories showing up in your output, adjust your regex accordingly. For example,

    grep -E '^\s*[0-9.]+G'

is even more accurate (no directories under 1 GB will be listed).



          If you do have quotas, you can use



          quota -v


          to find users that are hogging the disk.
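A minimal example of this workflow (mount point and sizes are illustrative):

    df -h                                # spot the partition that's nearly full, say /var
    du -h /var | grep -E '[0-9.]+G'      # then scan only that mount point for GB-sized directories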






• (2) This is very quick, simple and practical. – zzapper, Oct 29 '12 at 16:43
• (21) grep '[0-9]G' contained a lot of false positives and also omitted any decimals. This worked better for me: sudo du -h / | grep -P '^[0-9.]+G' – jchavannes, Aug 14 '14 at 6:09
• (1) In case you have really big directories, you'll want [GT] instead of just G. – Vitruvius, Mar 28 '15 at 20:20
• (1) Is there a tool that will continuously monitor disk usage across all directories (lazily) in the filesystem? Something that can be streamed to a web UI? Preferably soft-realtime information. – CMCDragonkai, May 7 '15 at 9:00
• (17) I like to use du -h | sort -hr | head – augurar, Jun 13 '16 at 18:48

















Answer (100 votes)
Both GNU du and BSD du can be depth-restricted (but POSIX du cannot!):

• GNU (Linux, …):

    du --max-depth 3

• BSD (macOS, …):

    du -d 3

This will limit the output display to depth 3. The calculated and displayed size is still the total of the full depth, of course. But despite this, restricting the display depth drastically speeds up the calculation.

Another helpful option is -h (works on both GNU and BSD but, once again, not on POSIX-only du) for "human-readable" output (i.e. using KiB, MiB, etc.).
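Combining the two, a sketch of a typical invocation (assumes GNU du and GNU sort, whose -h mode understands the size suffixes):

    du -h --max-depth 2 /var | sort -hr | head -20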






• (23) If du complains about -d, try --max-depth 5 instead. – ReactiveRaven, Jul 2 '13 at 11:25
• (8) Great answer. Seems correct for me. I suggest du -hcd 1 /directory: -h for human readable, c for total and d for depth. – Thales Ceolin, Feb 4 '14 at 1:13
• I use du -hd 1 <folder to inspect> | sort -hr | head – jonathanccalixto, Jan 10 '17 at 19:39
• du --max-depth 5 -h /* 2>&1 | grep '[0-9.]+G' | sort -hr | head to filter out "Permission denied" messages. – srghma, Sep 1 '17 at 10:36

















Answer (47 votes)
You can also run the following command using du:

    ~# du -Pshx /* 2>/dev/null

• The -s option summarizes and displays the total for each argument.
• -h prints sizes in human-readable form (MiB, GiB, etc.).
• -x stays on one filesystem (very useful).
• -P doesn't follow symlinks (which could cause files to be counted twice, for instance).

Be careful: the /root directory will not be shown; you have to run ~# du -Pshx /root 2>/dev/null to get that (once, I struggled a lot because I hadn't noticed that my /root directory had gone full).

Edit: Corrected option -P
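To rank that output, you can pipe it through GNU sort in human-readable mode (a small addition to the command above; assumes GNU coreutils):

    du -Pshx /* 2>/dev/null | sort -h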






• (1) du -Pshx .* * 2>/dev/null also includes hidden/system directories. – Mykhaylo Adamovych, Feb 15 '16 at 10:16


















Answer (27 votes)
Finding the biggest files on the filesystem is always going to take a long time. By definition you have to traverse the whole filesystem looking for big files. The only solution is probably to run a cron job on all your systems to have the file ready ahead of time.

One other thing: the x option of du is useful to keep du from following mount points into other filesystems, i.e.:

    du -x [path]

The full command I usually run is:

    sudo du -xm / | sort -rn > usage.txt

The -m means return results in megabytes, and sort -rn will sort the results largest number first. You can then open usage.txt in an editor, and the biggest folders (starting with /) will be at the top.






• (3) Thanks for pointing out the -x flag! – SamB, Jun 2 '10 at 20:55
• (1) "Finding biggest takes a long time..." -> Well, it depends, but I tend to disagree: it doesn't take that long with utilities like ncdu - at least quicker than du or find (depending on depth and arguments). – bshea, Sep 21 '17 at 15:35
• Since I prefer not to be root, I had to adapt where the file is written: sudo du -xm / | sort -rn > ~/usage.txt – Bruno, Sep 14 at 6:55

















Answer (19 votes)
I always use du -sm * | sort -n, which gives you a sorted list of how much the subdirectories of the current working directory use up, in mebibytes.

You can also try Konqueror, which has a "size view" mode, similar to what WinDirStat does on Windows: it gives you a visual representation of which files/directories use up most of your space.

Update: on more recent versions, you can also use du -sh * | sort -h, which will show human-readable file sizes and sort by those (numbers will be suffixed with K, M, G, ...).

People looking for an alternative to KDE3's Konqueror file size view might take a look at filelight, though it's not quite as nice.






• That's only Konqueror 3.x though - the file size view still hasn't been ported to KDE4. – flussence, Feb 9 '09 at 19:09
• du -sh * | sort -h works perfectly on my Linux (CentOS distro) box. Thanks! – pahariayogi, Apr 25 '17 at 14:05

















Answer (17 votes)
          I use this for the top 25 worst offenders below the current directory



          # -S to not include subdir size, sorted and limited to top 25
          du -S . | sort -nr | head -25





• This command did the trick to find a hidden folder that seemed to be increasing in size over time. Thanks! – thegreendroid, Jun 20 '13 at 2:24
• Is this in bytes? – User, Sep 17 '14 at 0:12
• By default, on my system, du -S gives a nice human-readable output. You get a plain number of bytes for small files, then a number with a KB or MB suffix for bigger files. – serg10, Sep 17 '14 at 8:48
• You could do du -Sh to get a human-readable output. – Siddhartha, Jan 26 '16 at 2:57
• @Siddhartha If you add -h, it will likely change the effect of the sort -nr command, meaning the sort will no longer work, and then the head command will also no longer work. – Clare Macrae, Dec 4 '17 at 13:00
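Following up on the last two comments: with GNU coreutils you can keep human-readable sizes and still sort correctly, because sort -h understands the K/M/G suffixes that du -h emits:

    du -Sh . | sort -hr | head -25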


















Answer (14 votes)
          At a previous company we used to have a cron job that was run overnight and identified any files over a certain size, e.g.



          find / -size +10000k



          You may want to be more selective about the directories that you are searching, and watch out for any remotely mounted drives which might go offline.
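A sketch of such a nightly job (path, schedule and log file are illustrative; GNU find's -xdev keeps the search on local filesystems, which also addresses the remote-mount caveat, as the comment below notes):

    # crontab entry: report files over ~10 MB every night at 03:00
    0 3 * * * find / -xdev -type f -size +10000k -exec ls -lh {} + > /var/log/bigfiles.txt 2>/dev/null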






• You can use the -x option of find to make sure you don't find files on other devices than the start point of your find command. This fixes the remotely mounted drives issue. – rjmunro, Jun 29 '15 at 16:29
Answer (10 votes)
          One option would be to run your du/sort command as a cron job, and output to a file, so it's already there when you need it.
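For instance, a hypothetical crontab entry that rebuilds the report nightly, reusing the du/sort pipeline from the question:

    0 2 * * * du -x / 2>/dev/null | sort -nr > /var/log/du-report.txt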






Answer (9 votes)
For the command line I think the du/sort method is the best. If you're not on a server you should take a look at Baobab - Disk usage analyzer. This program also takes some time to run, but you can easily find the subdirectory deep, deep down where all the old Linux ISOs are.






• (2) It can also scan remote folders via SSH, FTP, SMB and WebDAV. – Colonel Sponsz, Dec 2 '08 at 16:34
• This is great. Some things just work better with a GUI to visualize them, and this is one of them! I need an X-server on my server anyway for CrashPlan, so it works on that too. – timelmer, Jun 25 '16 at 20:46
Answer (9 votes)
I use

    du -ch --max-depth=2 .

and I change the max-depth to suit my needs. The c option prints totals for the folders and the h option prints the sizes in K, M, or G as appropriate. As others have said, it still scans all the directories, but it limits the output in a way that makes it easier to spot the large directories.






Answer (8 votes)
              I'm going to second xdiskusage. But I'm going to add in the note that it is actually a du frontend and can read the du output from a file. So you can run du -ax /home > ~/home-du on your server, scp the file back, and then analyze it graphically. Or pipe it through ssh.
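A sketch of the ssh variant (assuming xdiskusage reads du output from a pipe, as described above; hostname and path are illustrative):

    ssh myserver 'du -ax /home' | xdiskusage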






Answer (6 votes)
Try feeding the output of du into a simple awk script that checks to see if the size of the directory is larger than some threshold; if so, it prints it. You don't have to wait for the entire tree to be traversed before you start getting info (vs. many of the other answers).

For example, the following displays any directories that consume more than about 500 MB.

    du -kx / | awk '{ if ($1 > 500000) print $0 }'

To make the above a little more reusable, you can define a function in your .bashrc (or you could make it into a standalone script).

    dubig() {
        [ -z "$1" ] && echo "usage: dubig sizethreshMB [dir]" && return
        du -kx $2 | awk '$1>'$(($1*1024))
    }

So dubig 200 ~/ looks under the home directory (without following symlinks off device) for directories that use more than 200 MB.






• It's a pity that a dozen grep hacks are more upvoted. Oh, and du -k will make it absolutely certain that du is using KB units. – ndemou, Nov 23 '16 at 20:05
• Good idea about the -k. Edited. – Mark Borgerding, Nov 24 '16 at 11:16
• Even simpler and more robust: du -kx $2 | awk '$1>'$(($1*1024)) (if you specify only a condition, aka pattern, to awk, the default action is print $0). – dave_thompson_085, Nov 27 '16 at 11:31
• Good point @dave_thompson_085. That's true for all versions of awk I know of (net/free-BSD & GNU). @mark-borgerding, so this means that you can greatly simplify your first example to just du -kx / | awk '$1 > 500000' – ndemou, Dec 13 '16 at 9:46
• @mark-borgerding: If you have just a few kBytes left somewhere you can also keep the whole output of du like this: du -kx / | tee /tmp/du.log | awk '$1 > 500000'. This is very helpful because if your first filtering turns out to be fruitless you can try other values like awk '$1 > 200000' /tmp/du.log, or inspect the complete output like sort -nr /tmp/du.log | less, without re-scanning the whole filesystem. – ndemou, Dec 13 '16 at 9:59


















Answer (4 votes)
                I like the good old xdiskusage as a graphical alternative to du(1).






• Note this part of the question: "I'd prefer a command line solution which relies on standard Linux commands since..." – ndemou, Jul 4 '17 at 20:20

















Answer (4 votes)
I prefer to use the following to get an overview and drill down from there:

    cd /folder_to_check
    du -shx */

This will display results with human-readable output such as GB, MB. It will also prevent traversing through remote filesystems. The -s option only shows a summary of each folder found, so you can drill down further if interested in more details of a folder. Keep in mind that this solution will only show folders, so you will want to omit the / after the asterisk if you want files too.






Answer (4 votes)
Not mentioned here, but you should also check lsof in case of deleted/hanging files. I had a 5.9 GB deleted tmp file from a runaway cronjob.

https://serverfault.com/questions/207100/how-can-i-find-phantom-storage-usage helped me find the process owner of said file (cron); I was then able to go to /proc/<cron pid>/fd/<file handle>, less the file in question to see the start of the runaway output, resolve that, and then echo "" > file to clear up space and let cron gracefully close itself up.
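A sketch of that procedure as commands (the PID and fd number are illustrative):

    lsof +L1                  # list open files with link count 0, i.e. deleted but still held open
    ls -l /proc/12345/fd      # inspect the offending process's file descriptors
    : > /proc/12345/fd/4      # truncate the still-open file to reclaim the space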






Answer (2 votes)
For the command line, du (and its options) seems to be the best way. DiskHog looks like it uses du/df info from a cron job too, so Peter's suggestion is probably the best combination of simple and effective.

(FileLight and KDirStat are ideal for GUI.)






Answer (2 votes)
You can use standard tools like find and sort to analyze your disk space usage.

List directories sorted by their size:

    find / -mount -type d -exec du -s "{}" \; | sort -n

List files sorted by their size:

    find / -mount -printf "%k\t%p\n" | sort -n





• (1) I find this to be the best answer, to detect the large files in sorted order. – vimal krishna, Oct 1 '17 at 11:16

















Answer (2 votes)
From the terminal, you can get a visual representation of disk usage with dutree.

It is very fast and light because it is implemented in Rust.

    $ dutree -h
    Usage: dutree [options] <path> [<path>..]

    Options:
        -d, --depth [DEPTH]    show directories up to depth N (def 1)
        -a, --aggr [N[KMG]]    aggregate smaller than N B/KiB/MiB/GiB (def 1M)
        -s, --summary          equivalent to -da, or -d1 -a1M
        -u, --usage            report real disk usage instead of file size
        -b, --bytes            print sizes in bytes
        -f, --files-only       skip directories for a fast local overview
        -x, --exclude NAME     exclude matching files or directories
        -H, --no-hidden        exclude hidden files
        -A, --ascii            ASCII characters only, no colors
        -h, --help             show help
        -v, --version          print version number

See all the usage details on the website.






Answer (1 vote)
First, I check the size of directories, like so:

    du -sh /var/cache/*/





Answer (1 vote)
If you know that the large files have been added in the last few days (say, 3), then you can use a find command in conjunction with "ls -ltra" to discover those recently added files:

    find /some/dir -type f -mtime -3 -exec ls -lart {} \;

This will give you just the files ("-type f"), not directories; just the files with modification time within the last 3 days ("-mtime -3"); and it will execute "ls -lart" against each found file (the "-exec" part).
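You can combine this with a size filter to surface only big, recent files (GNU find; the 100 MB threshold is illustrative):

    find /some/dir -type f -mtime -3 -size +100M -exec ls -lart {} +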






Answer (1 vote)
To understand disproportionate disk space usage it's often useful to start at the root directory and walk down through some of its largest children.

We can do this by:

• saving the output of du into a file
• grepping through the result iteratively

That is:

    # sum up the size of all files and directories under the root filesystem
    du -a -h -x / > disk_usage.txt
    # display the size of root items
    grep $'\t/[^/]*$' disk_usage.txt

Now let's say /usr appears too large:

    # display the size of /usr items
    grep $'\t/usr/[^/]*$' disk_usage.txt

Now if /usr/local is suspiciously large:

    # display the size of /usr/local items
    grep $'\t/usr/local/[^/]*$' disk_usage.txt

and so on...






Answer (1 vote)
I have used this command to find files bigger than 100 MB:

    find / -size +100M -exec ls -l {} \;





Answer (1 vote)
Maybe it's worth noting that mc (Midnight Commander, a classic text-mode file manager) by default shows only the size of directory inodes (usually 4096), but with Ctrl+Space or with the Tools menu you can see the space occupied by the selected directory in a human-readable format (e.g., something like 103151M).

For instance, the picture below shows the full size of the vanilla TeX Live distributions of 2018 and 2017, while the versions of 2015 and 2016 show only the size of the inode (though they really have nearly 5 GB each).

That is, Ctrl+Space must be done one directory at a time, only for the actual directory level, but it is so fast and handy when you are navigating with mc that maybe you will not need ncdu (which indeed, for this purpose only, is better). Otherwise, you can also run ncdu from mc, without exiting mc or launching another terminal.

[screenshot: mc panel showing full directory sizes after Ctrl+Space]






Answer (0 votes)
I've had success tracking down the worst offender(s) by piping the du output in human-readable form to egrep and matching to a regular expression.

For example:

    du -h | egrep "[0-9]+G.*|[5-9][0-9][0-9]M.*"

which should give you back everything 500 MB or larger.






• Don't use grep for arithmetic operations - use awk instead: du -k | awk '$1 > 500000'. It is much easier to understand, edit and get correct on the first try. – ndemou, Jul 4 '17 at 20:25

















Answer (0 votes)
If you want speed, you can enable quotas on the filesystems you want to monitor (you need not set quotas for any user), and use a script that uses the quota command to list the disk space being used by each user. For instance:

    quota -v $user | grep $filesystem | awk '{ print $2 }'

would give you the disk usage in blocks for the particular user on the particular filesystem. You should be able to check usages in a matter of seconds this way.

To enable quotas you will need to add usrquota to the filesystem options in your /etc/fstab file and then probably reboot so that quotacheck can be run on an idle filesystem before quotaon is called.
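An illustrative /etc/fstab line with user quotas enabled (device, mount point and filesystem type are examples):

    /dev/sda1  /home  ext4  defaults,usrquota  0  2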






Answer (0 votes)
Here is a tiny app that uses deep sampling to find tumors in any disk or directory. It walks the directory tree twice, once to measure it, and the second time to print out the paths to 20 "random" bytes under the directory.

    void walk(string sDir, int iPass, int64& n, int64& n1, int64 step)
    {
        foreach(string sSubDir in sDir)
        {
            walk(sDir + "/" + sSubDir, iPass, n, n1, step);
        }
        foreach(string sFile in sDir)
        {
            string sPath = sDir + "/" + sFile;
            int64 len = File.Size(sPath);
            if (iPass == 2)
            {
                // print a path every time the running total crosses a sample point
                while (n1 <= n + len)
                {
                    print sPath;
                    n1 += step;
                }
            }
            n += len;
        }
    }

    void dscan()
    {
        int64 n = 0, n1 = 0, step = 0;
        // pass 1, measure
        walk(".", 1, n, n1, step);
        print n;
        // pass 2, print
        step = n/20; n1 = step/2; n = 0;
        walk(".", 2, n, n1, step);
        print n;
    }



The output looks like this for my Program Files directory:

     7,908,634,694
    .\ArcSoft\PhotoStudio 2000\Samples\3.jpg
    .\Common Files\Java\Update\Base Images\j2re1.4.2-b28\core1.zip
    .\Common Files\Wise Installation Wizard\WISDED53B0BB67C4244AE6AD6FD3C28D1EF_7_0_2_7.MSI
    .\Insightful\splus62\java\jre\lib\jaws.jar
    .\Intel\Compiler\Fortran\9.1\em64t\bin\tselect.exe
    .\Intel\Download\IntelFortranProCompiler91\Compiler\Itanium\Data1.cab
    .\Intel\MKL\8.0.1\em64t\bin\mkl_lapack32.dll
    .\Java\jre1.6.0\bin\client\classes.jsa
    .\Microsoft SQL Server\90\Setup Bootstrap\sqlsval.dll
    .\Microsoft Visual Studio\DF98\DOC\TAPI.CHM
    .\Microsoft Visual Studio .NET 2003\CompactFrameworkSDK\v1.0.5000\Windows CE\sqlce20sql2ksp1.exe
    .\Microsoft Visual Studio .NET 2003\SDK\v1.1\Tool Developers Guide\docs\Partition II Metadata.doc
    .\Microsoft Visual Studio .NET 2003\Visual Studio .NET Enterprise Architect 2003 - English\Logs\VSMsiLog0A34.txt
    .\Microsoft Visual Studio 8\Microsoft Visual Studio 2005 Professional Edition - ENU\Logs\VSMsiLog1A9E.txt
    .\Microsoft Visual Studio 8\SmartDevices\SDK\CompactFramework\2.0\v2.0\WindowsCE\wce500\mipsiv\NETCFv2.wce5.mipsiv.cab
    .\Microsoft Visual Studio 8\VC\ce\atlmfc\lib\armv4i\UafxcW.lib
    .\Microsoft Visual Studio 8\VC\ce\Dll\mipsii\mfc80ud.pdb
    .\Movie Maker\MUI\0409\moviemk.chm
    .\TheCompany\TheProduct\docs\TheProduct User's Guide.pdf
    .\VNI\CTT6.0\help\StatV1.pdf
     7,908,634,694

It tells me that the directory is 7.9 GB, of which



                                    • ~15% goes to the Intel Fortran compiler

                                    • ~15% goes to VS .NET 2003

                                    • ~20% goes to VS 8

                                    It is simple enough to ask if any of these can be unloaded.



                                    It also tells about file types that are distributed across the file system, but taken together represent an opportunity for space saving:



                                    • ~15% roughly goes to .cab and .MSI files

                                    • ~10% roughly goes to logging text files

                                    It shows plenty of other things in there also, that I could probably do without, like "SmartDevices" and "ce" support (~15%).



                                    It does take linear time, but it doesn't have to be done often.



                                    Examples of things it has found:



                                    • backup copies of DLLs in many saved code repositories, that don't really need to be saved

                                    • a backup copy of someone's hard drive on the server, under an obscure directory

                                    • voluminous temporary internet files

                                    • ancient doc and help files long past being needed





Answer (0 votes)
I had a similar issue, but the answers on this page weren't enough. I found the following command to be the most useful for the listing:

    du -a / | sort -n -r | head -n 20

This would show me the 20 biggest offenders. However, even though I ran this, it did not show me the real issue, because I had already deleted the file. The catch was that there was a process still running that was referencing the deleted log file... so I had to kill that process first; then the disk space showed up as free.






• Good point, but this should be a comment and not an answer by itself - this question suffers from too many answers. – ndemou, Jun 9 '17 at 10:36

















Answer (0 votes)
You can use DiskReport.net to generate an online web report of all your disks.

With many runs it will show you a history graph for all your folders, making it easy to find what has grown.






• This tool doesn't match two main points of the question: "I often find myself struggling to track down the culprit after a partition goes full" and "I'd prefer a command line solution which relies on standard Linux commands". – ndemou, Jun 9 '17 at 10:35

















Answer (0 votes)
There is a nice piece of cross-platform freeware called JDiskReport which includes a GUI to explore what's taking up all that space.

[Example screenshot: JDiskReport]

Of course, you'll need to clear up a little bit of space manually before you can download and install it, or download it to a different drive (like a USB thumbdrive).

(Copied here from the same author's answer on a duplicate question.)






                                        36 Answers
                                        36






                                        active

                                        oldest

                                        votes








                                        36 Answers
                                        36






                                        active

                                        oldest

                                        votes









                                        active

                                        oldest

                                        votes






                                        active

                                        oldest

                                        votes








                                        1 2
                                        next









                                        up vote
                                        464
                                        down vote













                                        Try ncdu, an excellent command-line disk usage analyser:






                                        share|improve this answer


















                                        • 61




                                          This is absolutely the best tool. Thank you.
                                          – Vlad Patryshev
                                          Jul 25 '15 at 5:22






                                        • 3




                                          Typically, I hate being asked to install something to solve a simple issue, but this is just great.
                                          – gwg
                                          Jul 5 '16 at 18:58






                                        • 8




                                          sudo apt install ncdu on ubuntu gets it easily. It's great
                                          – Orion Edwards
                                          Jul 19 '17 at 22:30






                                        • 6




                                          You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned.
                                          – Luke Cousins
                                          Jul 21 '17 at 11:51






                                        • 4




                                          best answer. also: sudo ncdu -rx / should give a clean read on biggest dirs/files ONLY on root area drive. (-r = read-only, -x = stay on same filesystem (meaning: do not traverse other filesystem mounts) )
                                          – bshea
                                          Sep 21 '17 at 15:52















                                        up vote
                                        464
                                        down vote













                                        Try ncdu, an excellent command-line disk usage analyser:






                                        share|improve this answer


















                                        • 61




                                          This is absolutely the best tool. Thank you.
                                          – Vlad Patryshev
                                          Jul 25 '15 at 5:22






                                        • 3




                                          Typically, I hate being asked to install something to solve a simple issue, but this is just great.
                                          – gwg
                                          Jul 5 '16 at 18:58






                                        • 8




                                          sudo apt install ncdu on ubuntu gets it easily. It's great
                                          – Orion Edwards
                                          Jul 19 '17 at 22:30






                                        • 6




                                          You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned.
                                          – Luke Cousins
                                          Jul 21 '17 at 11:51






                                        • 4




                                          best answer. also: sudo ncdu -rx / should give a clean read on biggest dirs/files ONLY on root area drive. (-r = read-only, -x = stay on same filesystem (meaning: do not traverse other filesystem mounts) )
                                          – bshea
                                          Sep 21 '17 at 15:52













                                        up vote
                                        464
                                        down vote










                                        up vote
                                        464
                                        down vote









                                        Try ncdu, an excellent command-line disk usage analyser:






                                        share|improve this answer














                                        Try ncdu, an excellent command-line disk usage analyser:







                                        share|improve this answer














                                        share|improve this answer



                                        share|improve this answer








                                        edited Nov 15 '16 at 10:32









                                        GAD3R

                                        23.2k164896




                                        23.2k164896










                                        answered Apr 13 '11 at 10:36







                                        Anthony Cartmell














                                        • 61




                                          This is absolutely the best tool. Thank you.
                                          – Vlad Patryshev
                                          Jul 25 '15 at 5:22






                                        • 3




                                          Typically, I hate being asked to install something to solve a simple issue, but this is just great.
                                          – gwg
                                          Jul 5 '16 at 18:58






                                        • 8




                                          sudo apt install ncdu on ubuntu gets it easily. It's great
                                          – Orion Edwards
                                          Jul 19 '17 at 22:30






                                        • 6




                                          You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned.
                                          – Luke Cousins
                                          Jul 21 '17 at 11:51






                                        • 4




                                          best answer. also: sudo ncdu -rx / should give a clean read on biggest dirs/files ONLY on root area drive. (-r = read-only, -x = stay on same filesystem (meaning: do not traverse other filesystem mounts) )
                                          – bshea
                                          Sep 21 '17 at 15:52













                                        • 61




                                          This is absolutely the best tool. Thank you.
                                          – Vlad Patryshev
                                          Jul 25 '15 at 5:22






                                        • 3




                                          Typically, I hate being asked to install something to solve a simple issue, but this is just great.
                                          – gwg
                                          Jul 5 '16 at 18:58






                                        • 8




                                          sudo apt install ncdu on ubuntu gets it easily. It's great
                                          – Orion Edwards
                                          Jul 19 '17 at 22:30






                                        • 6




                                          You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned.
                                          – Luke Cousins
                                          Jul 21 '17 at 11:51






                                        • 4




                                          best answer. also: sudo ncdu -rx / should give a clean read on biggest dirs/files ONLY on root area drive. (-r = read-only, -x = stay on same filesystem (meaning: do not traverse other filesystem mounts) )
                                          – bshea
                                          Sep 21 '17 at 15:52








                                        61




                                        61




                                        This is absolutely the best tool. Thank you.
                                        – Vlad Patryshev
                                        Jul 25 '15 at 5:22




                                        This is absolutely the best tool. Thank you.
                                        – Vlad Patryshev
                                        Jul 25 '15 at 5:22




                                        3




                                        3




                                        Typically, I hate being asked to install something to solve a simple issue, but this is just great.
                                        – gwg
                                        Jul 5 '16 at 18:58




                                        Typically, I hate being asked to install something to solve a simple issue, but this is just great.
                                        – gwg
                                        Jul 5 '16 at 18:58




                                        8




                                        8




                                        sudo apt install ncdu on ubuntu gets it easily. It's great
                                        – Orion Edwards
                                        Jul 19 '17 at 22:30




                                        sudo apt install ncdu on ubuntu gets it easily. It's great
                                        – Orion Edwards
                                        Jul 19 '17 at 22:30




                                        6




                                        6




                                        You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned.
                                        – Luke Cousins
                                        Jul 21 '17 at 11:51




                                        You quite probably know which filesystem is short of space. In which case you can use ncdu -x to only count files and directories on the same filesystem as the directory being scanned.
                                        – Luke Cousins
                                        Jul 21 '17 at 11:51




                                        4




                                        4




                                        best answer. also: sudo ncdu -rx / should give a clean read on biggest dirs/files ONLY on root area drive. (-r = read-only, -x = stay on same filesystem (meaning: do not traverse other filesystem mounts) )
                                        – bshea
                                        Sep 21 '17 at 15:52





                                        best answer. also: sudo ncdu -rx / should give a clean read on biggest dirs/files ONLY on root area drive. (-r = read-only, -x = stay on same filesystem (meaning: do not traverse other filesystem mounts) )
                                        – bshea
                                        Sep 21 '17 at 15:52













                                        up vote
                                        313
                                        down vote













                                        Don't go straight to du /. Use df to find the partition that's hurting you, and then try du commands.



                                        One I like to try is



                                        # U.S.
                                        du -h <dir> | grep '[0-9.]+G'
                                        # Others
                                        du -h <dir> | grep '[0-9,]+G'


                                        because it prints sizes in "human readable form". Unless you've got really small partitions, grepping for directories in the gigabytes is a pretty good filter for what you want. This will take you some time, but unless you have quotas set up, I think that's just the way it's going to be.



                                        As @jchavannes points out in the comments, the expression can get more precise if you're finding too many false positives. I incorporated the suggestion, which does make it better, but there are still false positives, so there are just tradeoffs (simpler expr, worse results; more complex and longer expr, better results). If you have too many little directories showing up in your output, adjust your regex accordingly. For example,



                                        grep '^s*[0-9.]+G'


                                        is even more accurate (no < 1GB directories will be listed).



                                        If you do have quotas, you can use



                                        quota -v


                                        to find users that are hogging the disk.






                                        share|improve this answer


















                                        • 2




                                          This is very quick, simple and practical
                                          – zzapper
                                          Oct 29 '12 at 16:43






                                        • 21




                                          grep '[0-9]G' contained a lot of false positives and also omitted any decimals. This worked better for me: sudo du -h / | grep -P '^[0-9.]+G'
                                          – jchavannes
                                          Aug 14 '14 at 6:09







                                        • 1




                                          In case you have really big directories, you'll want [GT] instead of just G
                                          – Vitruvius
                                          Mar 28 '15 at 20:20






                                        • 1




                                          Is there a tool that will continuously monitor disk usage across all directories (lazily) in the filesystem? Something that can be streamed to a web UI? Preferably soft-realtime information.
                                          – CMCDragonkai
                                          May 7 '15 at 9:00






                                        • 17




                                          I like to use du -h | sort -hr | head
                                          – augurar
                                          Jun 13 '16 at 18:48














                                        up vote
                                        313
                                        down vote













                                        Don't go straight to du /. Use df to find the partition that's hurting you, and then try du commands.



                                        One I like to try is



                                        # U.S.
                                        du -h <dir> | grep '[0-9.]+G'
                                        # Others
                                        du -h <dir> | grep '[0-9,]+G'


                                        because it prints sizes in "human readable form". Unless you've got really small partitions, grepping for directories in the gigabytes is a pretty good filter for what you want. This will take you some time, but unless you have quotas set up, I think that's just the way it's going to be.



                                        As @jchavannes points out in the comments, the expression can get more precise if you're finding too many false positives. I incorporated the suggestion, which does make it better, but there are still false positives, so there are just tradeoffs (simpler expr, worse results; more complex and longer expr, better results). If you have too many little directories showing up in your output, adjust your regex accordingly. For example,



                                        grep '^s*[0-9.]+G'


                                        is even more accurate (no < 1GB directories will be listed).



                                        If you do have quotas, you can use



                                        quota -v


                                        to find users that are hogging the disk.






                                        share|improve this answer


















• 2
  This is very quick, simple and practical
  – zzapper Oct 29 '12 at 16:43

• 21
  grep '[0-9]G' contained a lot of false positives and also omitted any decimals. This worked better for me: sudo du -h / | grep -P '^[0-9.]+G'
  – jchavannes Aug 14 '14 at 6:09

• 1
  In case you have really big directories, you'll want [GT] instead of just G
  – Vitruvius Mar 28 '15 at 20:20

• 1
  Is there a tool that will continuously monitor disk usage across all directories (lazily) in the filesystem? Something that can be streamed to a web UI? Preferably soft-realtime information.
  – CMCDragonkai May 7 '15 at 9:00

• 17
  I like to use du -h | sort -hr | head
  – augurar Jun 13 '16 at 18:48












                                        up vote
                                        100
                                        down vote













Both GNU du and BSD du can be depth-restricted (but POSIX du cannot!):

• GNU (Linux, …):

  du --max-depth 3

• BSD (macOS, …):

  du -d 3

This will limit the output display to depth 3. The calculated and displayed size is still the total of the full depth, of course. But despite this, restricting the display depth drastically speeds up the calculation.

Another helpful option is -h (works on both GNU and BSD but, once again, not on POSIX-only du) for “human-readable” output (i.e. using KiB, MiB etc.).
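Combining the two, a quick sketch for spotting the biggest top-level consumers on a partition (GNU tools; /var is just an illustrative path):

du -h --max-depth=1 /var | sort -hr | head

sort -h understands the human-readable K/M/G suffixes, so the largest directories end up at the top.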






answered Aug 28 '08 at 13:19 by Konrad Rudolph (edited Oct 2 at 9:24)
• 23
  if du complains about -d try --max-depth 5 instead.
  – ReactiveRaven Jul 2 '13 at 11:25

• 8
  Great answer. Seems correct for me. I suggest du -hcd 1 /directory. -h for human readable, c for total and d for depth.
  – Thales Ceolin Feb 4 '14 at 1:13

• I use du -hd 1 <folder to inspect> | sort -hr | head
  – jonathanccalixto Jan 10 '17 at 19:39

• du --max-depth 5 -h /* 2>&1 | grep -E '[0-9.]+G' | sort -hr | head to filter Permission denied
  – srghma Sep 1 '17 at 10:36

















                                        up vote
                                        47
                                        down vote













You can also run the following command using du:

~# du -Pshx /* 2>/dev/null

• The -s option summarizes and displays a total for each argument.

• h prints MiB, GiB, etc.

• x = stay on one filesystem (very useful).

• P = don't follow symlinks (which could cause files to be counted twice, for instance).

Be careful: the /root directory will not be shown; you have to run ~# du -Pshx /root 2>/dev/null to get that (once I struggled for a while before noticing that my /root directory had filled up).

Edit: Corrected option -P
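To get the list ordered by size as well, a small variation (a sketch, not part of the original command; sort -h needs GNU sort):

du -Pshx /* 2>/dev/null | sort -hr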






answered May 13 '13 at 20:51 by Creasixtine (edited Nov 27 '16 at 7:51 by Adam Tyszecki)
• 1
  du -Pshx .* * 2>/dev/null + hidden/system directories
  – Mykhaylo Adamovych Feb 15 '16 at 10:16















                                        up vote
                                        27
                                        down vote













Finding the biggest files on the filesystem is always going to take a long time. By definition you have to traverse the whole filesystem looking for big files. The only solution is probably to run a cron job on all your systems to have the file ready ahead of time.

One other thing: the x option of du is useful to keep du from following mount points into other filesystems. I.e.:

du -x [path]

The full command I usually run is:

sudo du -xm / | sort -rn > usage.txt

The -m means return results in megabytes, and sort -rn will sort the results largest number first. You can then open usage.txt in an editor, and the biggest folders (starting with /) will be at the top.
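For the cron-job idea, a sketch of a nightly system crontab entry (the schedule and output path are illustrative choices, not from the original answer):

# /etc/crontab: regenerate the usage report at 03:00 every night
0 3 * * * root du -xm / | sort -rn > /var/log/disk-usage.txt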






answered Aug 28 '08 at 13:27 by rjmunro (edited Nov 1 '16 at 17:16)
• 3
  Thanks for pointing out the -x flag!
  – SamB Jun 2 '10 at 20:55

• 1
  "finding biggest takes long time.." -> Well it depends, but tend to disagree: doesn't take that long with utilities like ncdu - at least quicker than du or find (depending on depth and arguments)..
  – bshea Sep 21 '17 at 15:35

• since I prefer not to be root, I had to adapt where the file is written: sudo du -xm / | sort -rn > ~/usage.txt
  – Bruno Sep 14 at 6:55














                                        up vote
                                        19
                                        down vote













I always use du -sm * | sort -n, which gives you a sorted list of how much the subdirectories of the current working directory use up, in mebibytes.

You can also try Konqueror, which has a "size view" mode, which is similar to what WinDirStat does on Windows: it gives you a visual representation of which files/directories use up most of your space.

Update: on more recent versions, you can also use du -sh * | sort -h, which will show human-readable file sizes and sort by those (numbers will be suffixed with K, M, G, ...).

People looking for an alternative to KDE3's Konqueror file size view may take a look at filelight, though it's not quite as nice.
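If you also want hidden (dot) directories included, a bash-glob sketch (the glob is my own addition, not part of the original answer; .[!.]* matches dot entries while skipping ..):

du -sm .[!.]* * 2>/dev/null | sort -n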






answered Aug 28 '08 at 13:33 by wvdschel

• That's only Konqueror 3.x though - the file size view still hasn't been ported to KDE4.
  – flussence Feb 9 '09 at 19:09

• 'du -sh * | sort -h' works perfectly on my Linux (CentOS distro) box. Thanks!
  – pahariayogi Apr 25 '17 at 14:05














                                        up vote
                                        17
                                        down vote













I use this for the top 25 worst offenders below the current directory:

# -S to not include subdirectory sizes; sorted and limited to the top 25
du -S . | sort -nr | head -25
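A human-readable variant that still sorts correctly (a sketch; GNU sort's -h understands the K/M/G suffixes that du -h emits, which plain sort -nr does not):

du -Sh . | sort -hr | head -25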





answered Jul 8 '10 at 8:58 by serg10



• This command did the trick to find a hidden folder that seemed to be increasing in size over time. Thanks!
  – thegreendroid Jun 20 '13 at 2:24

• Is this in bytes?
  – User Sep 17 '14 at 0:12

• By default, on my system, 'du -S' gives a nice human readable output. You get a plain number of bytes for small files, then a number with a 'KB' or 'MB' suffix for bigger files.
  – serg10 Sep 17 '14 at 8:48

• You could do du -Sh to get a human readable output.
  – Siddhartha Jan 26 '16 at 2:57

• @Siddhartha If you add -h, it will likely change the effect of the sort -nr command - meaning the sort will no longer work, and then the head command will also no longer work
  – Clare Macrae Dec 4 '17 at 13:00















up vote
14
down vote

At a previous company we used to have a cron job that was run overnight and identified any files over a certain size, e.g.

find / -size +10000k

You may want to be more selective about the directories that you are searching, and watch out for any remotely mounted drives which might go offline.
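A minimal sketch of what such a crontab entry could look like (the 03:00 schedule and the /var/log/bigfiles.log path are illustrative assumptions, not part of the original answer); find's -xdev keeps the search on the starting filesystem, which also addresses the remote-mount concern:

# m h dom mon dow  command
0 3 * * * find / -xdev -type f -size +10000k > /var/log/bigfiles.log 2>/dev/null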






answered Sep 6 '08 at 15:01
Andrew Whitehouse

• You can use the -x option of find to make sure you don't find files on other devices than the start point of your find command. This fixes the remotely mounted drives issue.
  – rjmunro
  Jun 29 '15 at 16:29

up vote
10
down vote

One option would be to run your du/sort command as a cron job, and output to a file, so it's already there when you need it.
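A minimal sketch of that idea (the 02:30 schedule and the /var/log/du-report.txt path are assumptions); -x keeps du on one filesystem, and the sorted report is ready whenever you need it:

# Nightly at 02:30: one full scan, report ready by morning
30 2 * * * du -x / 2>/dev/null | sort -nr > /var/log/du-report.txt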






answered Aug 28 '08 at 13:20
Peter Hilton

up vote
9
down vote

For the commandline I think the du/sort method is the best. If you're not on a server you should take a look at Baobab - Disk usage analyzer. This program also takes some time to run, but you can easily find the sub directory deep, deep down where all the old Linux ISOs are.






answered Aug 28 '08 at 19:20
Peter Stuifzand

• 2
  It can also scan remote folders via SSH, FTP, SMB and WebDAV.
  – Colonel Sponsz
  Dec 2 '08 at 16:34

• This is great. Some things just work better with a GUI to visualize them, and this is one of them! I need an X-server on my server anyways for CrashPlan, so it works on that too.
  – timelmer
  Jun 25 '16 at 20:46

up vote
9
down vote

I use

du -ch --max-depth=2 .

and I change the max-depth to suit my needs. The "c" option prints a grand total at the end and the "h" option prints the sizes in K, M, or G as appropriate. As others have said, it still scans all the directories, but it limits the output in a way that makes the large directories easier to spot.
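If your sort is GNU sort, its -h flag understands the suffixes that du -h emits, so the human-readable output can still be ranked; a sketch:

# Largest entries first, human-readable sizes intact (GNU sort)
du -ch --max-depth=2 . | sort -hr | head -20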






answered Aug 29 '08 at 20:22
John Meagher

up vote
8
down vote

I'm going to second xdiskusage. But I'm going to add in the note that it is actually a du frontend and can read the du output from a file. So you can run du -ax /home > ~/home-du on your server, scp the file back, and then analyze it graphically. Or pipe it through ssh.
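A sketch of the workflow the answer describes, assuming a remote host named "server" (xdiskusage then reads the saved du output, as noted above):

ssh server 'du -ax /home' > home-du   # collect the du output remotely
xdiskusage home-du                    # browse it graphically on the workstation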






answered Dec 2 '08 at 16:20
derobert

up vote
6
down vote

Try feeding the output of du into a simple awk script that checks whether the size of the directory is larger than some threshold, and prints it if so. You don't have to wait for the entire tree to be traversed before you start getting info (unlike many of the other answers).

For example, the following displays any directories that consume more than about 500 MB.

du -kx / | awk '{ if ($1 > 500000) print $0 }'

To make the above a little more reusable, you can define a function in your .bashrc (or you could make it into a standalone script):

dubig() {
    [ -z "$1" ] && echo "usage: dubig sizethreshMB [dir]" && return
    du -kx $2 | awk '$1>'$(($1*1024))
}

So dubig 200 ~/ looks under the home directory (without following symlinks off device) for directories that use more than 200 MB.






edited Dec 13 '16 at 12:04
answered Aug 28 '08 at 13:50
Mark Borgerding

• It's a pity that a dozen of grep hacks are more upvoted. Oh and du -k will make it absolutely certain that du is using KB units
  – ndemou
  Nov 23 '16 at 20:05

• Good idea about the -k. Edited.
  – Mark Borgerding
  Nov 24 '16 at 11:16

• Even simpler and more robust: du -kx $2 | awk '$1>'$(($1*1024)) (if you specify only a condition aka pattern to awk the default action is print $0)
  – dave_thompson_085
  Nov 27 '16 at 11:31

• Good point @date_thompson_085. That's true for all versions of awk I know of (net/free-BSD & GNU). @mark-borgerding so this means that you can greatly simplify your first example to just du -kx / | awk '$1 > 500000'
  – ndemou
  Dec 13 '16 at 9:46

• @mark-borgerding: If you have just a few kBytes left somewhere you can also keep the whole output of du like this du -kx / | tee /tmp/du.log | awk '$1 > 500000'. This is very helpful because if your first filtering turns out to be fruitless you can try other values like this awk '$1 > 200000' /tmp/du.log or inspect the complete output like this sort -nr /tmp/du.log|less without re-scanning the whole filesystem
  – ndemou
  Dec 13 '16 at 9:59

up vote
4
down vote

I like the good old xdiskusage as a graphical alternative to du(1).






answered Aug 28 '08 at 14:46
asjo

• Note this part of the question: "I'd prefer a command line solution which relies on standard Linux commands since..."
  – ndemou
  Jul 4 '17 at 20:20

up vote
4
down vote

I prefer to use the following to get an overview and drill down from there...

cd /folder_to_check
du -shx */

This will display results with human readable output such as GB, MB. It will also prevent traversing through remote filesystems. The -s option shows only a summary for each folder found, so you can drill down further if interested in more details of a folder. Keep in mind that this solution will only show folders, so you will want to omit the / after the asterisk if you want files too.
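If you want files and hidden entries in the overview as well, a sketch (the .[!.]* glob is an assumption that covers most dot-names, and the stderr redirect just hides permission noise):

du -shx -- * .[!.]* 2>/dev/null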






answered Feb 8 '13 at 13:28
cmevoli

up vote
4
down vote

Not mentioned here, but you should also check lsof in case of deleted/hanging files. I had a 5.9GB deleted tmp file from a runaway cronjob.

https://serverfault.com/questions/207100/how-can-i-find-phantom-storage-usage helped me find the process that owned said file (cron). I was then able to go to /proc/<cron pid>/fd/<file handle>, less the file in question to see the start of the runaway output, resolve that, and then echo "" > file to clear up the space and let cron gracefully close itself up.
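A quick way to spot such files is lsof's +L1 option, which lists open files whose link count is below one, i.e. deleted but still held open (the /var argument here is just an example mount point):

lsof +L1        # all deleted-but-open files
lsof +L1 /var   # limited to files on one filesystem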






edited Apr 13 '17 at 12:13
Community♦
answered Jun 15 '13 at 17:59
David

up vote
2
down vote

For the command line, du (and its options) seems to be the best way. DiskHog looks like it uses du/df info from a cron job too, so Peter's suggestion is probably the best combination of simple and effective.

(FileLight and KDirStat are ideal for GUI.)






answered Aug 28 '08 at 13:24 by hometoast (edited May 23 '17 at 12:40 by Community♦)




















                                                                                        up vote
                                                                                        2
                                                                                        down vote













You can use standard tools like find and sort to analyze your disk space usage.

List directories sorted by their size:

find / -mount -type d -exec du -s {} \; | sort -n

List files sorted by their size:

find / -mount -printf "%k\t%p\n" | sort -n
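A small usage sketch building on those two commands (GNU find and sort assumed): silence permission errors and keep only the twenty largest entries.

# twenty largest directories on this filesystem (sizes in KiB, largest last)
find / -mount -type d -exec du -s {} \; 2>/dev/null | sort -n | tail -20

# twenty largest files (note the added -type f)
find / -mount -type f -printf "%k\t%p\n" 2>/dev/null | sort -n | tail -20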





answered Oct 6 '16 at 8:53 by scai
• I find this to be the best answer for finding the largest items in sorted order. – vimal krishna Oct 1 '17 at 11:16
























                                                                                        up vote
                                                                                        2
                                                                                        down vote













From the terminal you can get a visual representation of disk usage with dutree.

It is very fast and light because it is implemented in Rust.

$ dutree -h
Usage: dutree [options] <path> [<path>..]

Options:
    -d, --depth [DEPTH]    show directories up to depth N (def 1)
    -a, --aggr [N[KMG]]    aggregate smaller than N B/KiB/MiB/GiB (def 1M)
    -s, --summary          equivalent to -da, or -d1 -a1M
    -u, --usage            report real disk usage instead of file size
    -b, --bytes            print sizes in bytes
    -f, --files-only       skip directories for a fast local overview
    -x, --exclude NAME     exclude matching files or directories
    -H, --no-hidden        exclude hidden files
    -A, --ascii            ASCII characters only, no colors
    -h, --help             show help
    -v, --version          print version number

See the website for full usage details.
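For instance, combining the options from the help text above (a sketch; dutree must of course be installed first):

# one level deep under /, aggregating entries smaller than 1 MiB
dutree -d 1 /

# real disk usage of /var, two levels deep, excluding anything named "cache"
dutree -u -d 2 -x cache /var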






answered May 3 at 9:17 by nachoparker




















                                                                                                up vote
                                                                                                1
                                                                                                down vote













First, I check the size of the directories, like so:

du -sh /var/cache/*/
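A slight extension (assuming GNU sort, for its -h flag): sort the human-readable sizes so the biggest directory lands at the bottom.

du -sh /var/cache/*/ | sort -h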





answered Sep 8 '08 at 19:12 by hendry




















                                                                                                        up vote
                                                                                                        1
                                                                                                        down vote













If you know that the large files were added in the last few days (say, 3), you can use find in conjunction with "ls -ltra" to discover those recently added files:

find /some/dir -type f -mtime -3 -exec ls -lart {} \;

This gives you just the files ("-type f"), not directories; just the files modified within the last 3 days ("-mtime -3"); and executes "ls -lart" against each file found (the "-exec" part).
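A variant sketch of the same idea (GNU find assumed): print size and path instead of running ls, then sort so the largest recent files come first.

find /some/dir -type f -mtime -3 -printf "%s\t%p\n" | sort -rn | head -20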






answered Mar 12 '09 at 14:48 by kirillka












                                                                                                                up vote
                                                                                                                1
                                                                                                                down vote













To understand disproportionate disk space usage, it's often useful to start at the root directory and walk down through its largest children.

We can do this by

• saving the output of du into a file

• grepping through the result iteratively

That is:

# sum up the size of all files and directories under the root filesystem
du -a -h -x / > disk_usage.txt
# display the size of root items
grep $'\t/[^/]*$' disk_usage.txt

Now let's say /usr appears too large:

# display the size of /usr items
grep $'\t/usr/[^/]*$' disk_usage.txt

Now if /usr/local is suspiciously large:

# display the size of /usr/local items
grep $'\t/usr/local/[^/]*$' disk_usage.txt

and so on...
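Since disk_usage.txt stores human-readable sizes, GNU sort's -h flag can also rank the saved snapshot directly, as a quick sketch:

# twenty biggest entries recorded in the snapshot
sort -rh disk_usage.txt | head -20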






answered Dec 17 '09 at 4:44 by Alex Jasmin




















                                                                                                                        up vote
                                                                                                                        1
                                                                                                                        down vote













I have used this command to find files bigger than 100 MB:

find / -size +100M -exec ls -l {} \;
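A slightly friendlier variant (GNU find assumed): stay on one filesystem, batch the results into a single ls invocation, and print human-readable sizes.

find / -xdev -size +100M -exec ls -lh {} + 2>/dev/null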





answered Jul 23 '13 at 13:09 by Petro Franko








                                                                                                                                up vote
                                                                                                                                1
                                                                                                                                down vote













It may be worth noting that mc (Midnight Commander, a classic text-mode file manager) by default shows only the size of the directory inode (usually 4096), but with Ctrl+Space, or via the Tools menu, you can see the space occupied by the selected directory in a human-readable format (e.g., something like 103151M).

For instance, the screenshot below shows the full size of the vanilla TeX Live distributions of 2018 and 2017, while the 2015 and 2016 versions show only the size of the inode (though each really occupies nearly 5 GB).

Admittedly, Ctrl+Space must be applied one directory at a time, at the current level only, but it is so fast and handy while navigating in mc that you may not need ncdu (which is, for this single purpose, better). Alternatively, you can run ncdu from within mc, without leaving mc or opening another terminal.

[screenshot of mc showing computed directory sizes omitted]






                                                                                                                                share|improve this answer
























                                                                                                                                  up vote
                                                                                                                                  1
                                                                                                                                  down vote













                                                                                                                                  Maybe worth to note that mc (Midnight Commander, a classic text-mode file manager) by default show only the size of the directory inodes (usually 4096) but with CtrlSpace or with menu Tools you can see the space occupied by the selected directory in a human readable format (e.g., some like 103151M).



                                                                                                                                  For instance, the picture below show the full size of the vanilla TeX Live distributions of 2018 and 2017, while the versions of 2015 and 2016 show only the size of the inode (but they have really near of 5 Gb each).



                                                                                                                                  That is, CtrlSpace must be done one for one, only for the actual directory level, but it is so fast and handy when you are navigating with mc that maybe you will not need ncdu (that indeed, only for this purpose is better). Otherwise, you can also run ncdu from mc. without exit from mc or launch another terminal.



                                                                                                                                  mwe






answered Sep 30 at 21:16 by Fran




















I've had success tracking down the worst offender(s) by piping du output in human-readable form to egrep and matching against a regular expression.



                                                                                                                                        For example:



                                                                                                                                        du -h | egrep "[0-9]+G.*|[5-9][0-9][0-9]M.*"


which should return everything 500 MB or larger.
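A slightly stricter variant, assuming du -h prints the size at the start of each line (as GNU du does), anchors the pattern there so that size-like strings inside path names cannot match:

du -h . | egrep '^([0-9.,]+G|[5-9][0-9][0-9]M)'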






answered Aug 28 '08 at 14:53 by Justin Standard


















• Don't use grep for arithmetic operations -- use awk instead: du -k | awk '$1 > 500000'. It is much easier to understand, edit and get correct on the first try. – ndemou, Jul 4 '17 at 20:25
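A sketch building on that comment (assuming du -m, supported by both GNU and BSD du, to report sizes in whole megabytes): apply the numeric threshold in awk and list the largest directories first:

# keep anything over 500 MB, largest first
du -m . 2>/dev/null | awk '$1 > 500' | sort -nr | head -20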
















                                                                                                                                        If you want speed, you can enable quotas on the filesystems you want to monitor (you need not set quotas for any user), and use a script that uses the quota command to list the disk space being used by each user. For instance:



quota -v $user | grep $filesystem | awk '{ print $2 }'


would give you the disk usage in blocks for that user on that filesystem. You should be able to check usage in a matter of seconds this way.



To enable quotas you will need to add usrquota to the filesystem options in your /etc/fstab file and then probably reboot so that quotacheck can be run on an idle filesystem before quotaon is called.
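As a sketch of what that looks like (the device, mount point and filesystem type below are hypothetical placeholders, not from the answer):

# /etc/fstab -- only the usrquota option matters here
/dev/sda2   /home   ext4   defaults,usrquota   0   2

# then, typically, to build the quota files and switch quotas on:
quotacheck -cum /home
quotaon /home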






answered Sep 17 '08 at 14:35 by Steve Baker



























Here is a tiny app that uses deep sampling to find the "tumors" in any disk or directory. It walks the directory tree twice: once to measure it, and a second time to print out the paths to 20 "random" bytes spread evenly under the directory.



void walk(string sDir, int iPass, int64& n, int64& n1, int64 step)
{
    foreach(string sSubDir in sDir)
        walk(sDir + "/" + sSubDir, iPass, n, n1, step);
    foreach(string sFile in sDir)
    {
        string sPath = sDir + "/" + sFile;
        int64 len = File.Size(sPath);
        if (iPass == 2)
            while(n1 <= n+len)
            {
                print sPath;
                n1 += step;
            }
        n += len;
    }
}

void dscan()
{
    int64 n = 0, n1 = 0, step = 0;
    // pass 1, measure
    walk(".", 1, n, n1, step);
    print n;
    // pass 2, print
    step = n/20; n1 = step/2; n = 0;
    walk(".", 2, n, n1, step);
    print n;
}



                                                                                                                                                The output looks like this for my Program Files directory:



 7,908,634,694
.\ArcSoft\PhotoStudio 2000\Samples\3.jpg
.\Common Files\Java\Update\Base Images\j2re1.4.2-b28\core1.zip
.\Common Files\Wise Installation Wizard\WISDED53B0BB67C4244AE6AD6FD3C28D1EF_7_0_2_7.MSI
.\Insightful\splus62\java\jre\lib\jaws.jar
.\Intel\Compiler\Fortran\9.1\em64t\bin\tselect.exe
.\Intel\Download\IntelFortranProCompiler91\Compiler\Itanium\Data1.cab
.\Intel\MKL\8.0.1\em64t\bin\mkl_lapack32.dll
.\Java\jre1.6.0\bin\client\classes.jsa
.\Microsoft SQL Server\90\Setup Bootstrap\sqlsval.dll
.\Microsoft Visual Studio\DF98\DOC\TAPI.CHM
.\Microsoft Visual Studio .NET 2003\CompactFrameworkSDK\v1.0.5000\Windows CE\sqlce20sql2ksp1.exe
.\Microsoft Visual Studio .NET 2003\SDK\v1.1\Tool Developers Guide\docs\Partition II Metadata.doc
.\Microsoft Visual Studio .NET 2003\Visual Studio .NET Enterprise Architect 2003 - English\Logs\VSMsiLog0A34.txt
.\Microsoft Visual Studio 8\Microsoft Visual Studio 2005 Professional Edition - ENU\Logs\VSMsiLog1A9E.txt
.\Microsoft Visual Studio 8\SmartDevices\SDK\CompactFramework\2.0\v2.0\WindowsCE\wce500\mipsiv\NETCFv2.wce5.mipsiv.cab
.\Microsoft Visual Studio 8\VC\ce\atlmfc\lib\armv4i\UafxcW.lib
.\Microsoft Visual Studio 8\VC\ce\Dll\mipsii\mfc80ud.pdb
.\Movie Maker\MUI\0409\moviemk.chm
.\TheCompany\TheProduct\docs\TheProduct User's Guide.pdf
.\VNI\CTT6.0\help\StatV1.pdf
7,908,634,694


It tells me that the directory is 7.9 GB, of which



                                                                                                                                                • ~15% goes to the Intel Fortran compiler

                                                                                                                                                • ~15% goes to VS .NET 2003

                                                                                                                                                • ~20% goes to VS 8

                                                                                                                                                It is simple enough to ask if any of these can be unloaded.



It also points out file types that are distributed across the file system but, taken together, represent an opportunity for space saving:



• roughly 15% goes to .cab and .MSI files

• roughly 10% goes to log text files

It also shows plenty of other things in there that I could probably do without, like "SmartDevices" and "ce" support (~15%).



                                                                                                                                                It does take linear time, but it doesn't have to be done often.



                                                                                                                                                Examples of things it has found:



• backup copies of DLLs in many saved code repositories that don't really need to be saved

                                                                                                                                                • a backup copy of someone's hard drive on the server, under an obscure directory

                                                                                                                                                • voluminous temporary internet files

                                                                                                                                                • ancient doc and help files long past being needed
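On Linux, a rough shell equivalent of the same sampling idea (a sketch, assuming GNU find for -printf; it buffers the file list in awk instead of walking the tree twice):

find . -type f -printf '%s %p\n' 2>/dev/null | awk '
    # remember each file size and path; accumulate the grand total
    { sz[NR] = $1; path[NR] = substr($0, index($0, " ") + 1); total += $1 }
    END {
        if (total == 0) exit
        step = total / 20; target = step / 2; n = 0
        # print the file containing every (total/20)-th byte
        for (i = 1; i <= NR; i++) {
            n += sz[i]
            while (target <= n) { print path[i]; target += step }
        }
        printf "total: %d bytes\n", total
    }'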





                                                                                                                                                share|improve this answer
























                                                                                                                                                  up vote
                                                                                                                                                  0
                                                                                                                                                  down vote













                                                                                                                                                  Here is a tiny app that uses deep sampling to find tumors in any disk or directory. It walks the directory tree twice, once to measure it, and the second time to print out the paths to 20 "random" bytes under the directory.



                                                                                                                                                  void walk(string sDir, int iPass, int64& n, int64& n1, int64 step)
                                                                                                                                                  foreach(string sSubDir in sDir)
                                                                                                                                                  walk(sDir + "/" + sSubDir, iPass, n, n1, step);

                                                                                                                                                  foreach(string sFile in sDir)
                                                                                                                                                  string sPath = sDir + "/" + sFile;
                                                                                                                                                  int64 len = File.Size(sPath);
                                                                                                                                                  if (iPass == 2)
                                                                                                                                                  while(n1 <= n+len)
                                                                                                                                                  print sPath;
                                                                                                                                                  n1 += step;


                                                                                                                                                  n += len;



                                                                                                                                                  void dscan()
                                                                                                                                                  int64 n = 0, n1 = 0, step = 0;
                                                                                                                                                  // pass 1, measure
                                                                                                                                                  walk(".", 1, n, n1);
                                                                                                                                                  print n;
                                                                                                                                                  // pass 2, print
                                                                                                                                                  step = n/20; n1 = step/2; n = 0;
                                                                                                                                                  walk(".", 2, n, n1);
                                                                                                                                                  print n;



                                                                                                                                                  The output looks like this for my Program Files directory:



                                                                                                                                                   7,908,634,694
                                                                                                                                                  .ArcSoftPhotoStudio 2000Samples3.jpg
                                                                                                                                                  .Common FilesJavaUpdateBase Imagesj2re1.4.2-b28core1.zip
                                                                                                                                                  .Common FilesWise Installation WizardWISDED53B0BB67C4244AE6AD6FD3C28D1EF_7_0_2_7.MSI
                                                                                                                                                  .Insightfulsplus62javajrelibjaws.jar
                                                                                                                                                  .IntelCompilerFortran9.1em64tbintselect.exe
                                                                                                                                                  .IntelDownloadIntelFortranProCompiler91CompilerItaniumData1.cab
                                                                                                                                                  .IntelMKL8.0.1em64tbinmkl_lapack32.dll
                                                                                                                                                  .Javajre1.6.0binclientclasses.jsa
                                                                                                                                                  .Microsoft SQL Server90Setup Bootstrapsqlsval.dll
                                                                                                                                                  .Microsoft Visual StudioDF98DOCTAPI.CHM
                                                                                                                                                  .Microsoft Visual Studio .NET 2003CompactFrameworkSDKv1.0.5000Windows CEsqlce20sql2ksp1.exe
                                                                                                                                                  .Microsoft Visual Studio .NET 2003SDKv1.1Tool Developers GuidedocsPartition II Metadata.doc
                                                                                                                                                  .Microsoft Visual Studio .NET 2003Visual Studio .NET Enterprise Architect 2003 - EnglishLogsVSMsiLog0A34.txt
                                                                                                                                                  .Microsoft Visual Studio 8Microsoft Visual Studio 2005 Professional Edition - ENULogsVSMsiLog1A9E.txt
                                                                                                                                                  .Microsoft Visual Studio 8SmartDevicesSDKCompactFramework2.0v2.0WindowsCEwce500mipsivNETCFv2.wce5.mipsiv.cab
                                                                                                                                                  .Microsoft Visual Studio 8VCceatlmfclibarmv4iUafxcW.lib
                                                                                                                                                  .Microsoft Visual Studio 8VCceDllmipsiimfc80ud.pdb
                                                                                                                                                  .Movie MakerMUI409moviemk.chm
                                                                                                                                                  .TheCompanyTheProductdocsTheProduct User's Guide.pdf
                                                                                                                                                  .VNICTT6.0helpStatV1.pdf
                                                                                                                                                  7,908,634,694


                                                                                                                                                  It tells me that the directory is 7.9gb, of which



                                                                                                                                                  • ~15% goes to the Intel Fortran compiler

                                                                                                                                                  • ~15% goes to VS .NET 2003

                                                                                                                                                  • ~20% goes to VS 8

                                                                                                                                                  It is simple enough to ask if any of these can be unloaded.



                                                                                                                                                  It also tells about file types that are distributed across the file system, but taken together represent an opportunity for space saving:



                                                                                                                                                  • ~15% roughly goes to .cab and .MSI files

                                                                                                                                                  • ~10% roughly goes to logging text files

                                                                                                                                                  It shows plenty of other things in there also, that I could probably do without, like "SmartDevices" and "ce" support (~15%).



                                                                                                                                                  It does take linear time, but it doesn't have to be done often.



                                                                                                                                                  Examples of things it has found:



                                                                                                                                                  • backup copies of DLLs in many saved code repositories, that don't really need to be saved

                                                                                                                                                  • a backup copy of someone's hard drive on the server, under an obscure directory

                                                                                                                                                  • voluminous temporary internet files

                                                                                                                                                  • ancient doc and help files long past being needed





                                                                                                                                                  share|improve this answer






















                                                                                                                                                    up vote
                                                                                                                                                    0
                                                                                                                                                    down vote










                                                                                                                                                    up vote
                                                                                                                                                    0
                                                                                                                                                    down vote









                                                                                                                                                    Here is a tiny app that uses deep sampling to find tumors in any disk or directory. It walks the directory tree twice, once to measure it, and the second time to print out the paths to 20 "random" bytes under the directory.



                                                                                                                                                    void walk(string sDir, int iPass, int64& n, int64& n1, int64 step)
                                                                                                                                                    foreach(string sSubDir in sDir)
                                                                                                                                                    walk(sDir + "/" + sSubDir, iPass, n, n1, step);

                                                                                                                                                    foreach(string sFile in sDir)
                                                                                                                                                    string sPath = sDir + "/" + sFile;
                                                                                                                                                    int64 len = File.Size(sPath);
                                                                                                                                                    if (iPass == 2)
                                                                                                                                                    while(n1 <= n+len)
                                                                                                                                                    print sPath;
                                                                                                                                                    n1 += step;


                                                                                                                                                    n += len;



                                                                                                                                                    void dscan()
                                                                                                                                                    int64 n = 0, n1 = 0, step = 0;
                                                                                                                                                    // pass 1, measure
                                                                                                                                                    walk(".", 1, n, n1);
                                                                                                                                                    print n;
                                                                                                                                                    // pass 2, print
                                                                                                                                                    step = n/20; n1 = step/2; n = 0;
                                                                                                                                                    walk(".", 2, n, n1);
                                                                                                                                                    print n;



                                                                                                                                                    The output looks like this for my Program Files directory:



                                                                                                                                                     7,908,634,694
                                                                                                                                                    .ArcSoftPhotoStudio 2000Samples3.jpg
                                                                                                                                                    .Common FilesJavaUpdateBase Imagesj2re1.4.2-b28core1.zip
                                                                                                                                                    .Common FilesWise Installation WizardWISDED53B0BB67C4244AE6AD6FD3C28D1EF_7_0_2_7.MSI
                                                                                                                                                    .Insightfulsplus62javajrelibjaws.jar
                                                                                                                                                    .IntelCompilerFortran9.1em64tbintselect.exe
                                                                                                                                                    .IntelDownloadIntelFortranProCompiler91CompilerItaniumData1.cab
                                                                                                                                                    .IntelMKL8.0.1em64tbinmkl_lapack32.dll
                                                                                                                                                    .Javajre1.6.0binclientclasses.jsa
                                                                                                                                                    .Microsoft SQL Server90Setup Bootstrapsqlsval.dll
                                                                                                                                                    .Microsoft Visual StudioDF98DOCTAPI.CHM
                                                                                                                                                    .Microsoft Visual Studio .NET 2003CompactFrameworkSDKv1.0.5000Windows CEsqlce20sql2ksp1.exe
                                                                                                                                                    .Microsoft Visual Studio .NET 2003SDKv1.1Tool Developers GuidedocsPartition II Metadata.doc
                                                                                                                                                    .Microsoft Visual Studio .NET 2003Visual Studio .NET Enterprise Architect 2003 - EnglishLogsVSMsiLog0A34.txt
                                                                                                                                                    .Microsoft Visual Studio 8Microsoft Visual Studio 2005 Professional Edition - ENULogsVSMsiLog1A9E.txt
                                                                                                                                                    .Microsoft Visual Studio 8SmartDevicesSDKCompactFramework2.0v2.0WindowsCEwce500mipsivNETCFv2.wce5.mipsiv.cab
                                                                                                                                                    .Microsoft Visual Studio 8VCceatlmfclibarmv4iUafxcW.lib
                                                                                                                                                    .Microsoft Visual Studio 8VCceDllmipsiimfc80ud.pdb
                                                                                                                                                    .Movie MakerMUI409moviemk.chm
                                                                                                                                                    .TheCompanyTheProductdocsTheProduct User's Guide.pdf
                                                                                                                                                    .VNICTT6.0helpStatV1.pdf
                                                                                                                                                    7,908,634,694


                                                                                                                                                    It tells me that the directory is 7.9gb, of which



                                                                                                                                                    • ~15% goes to the Intel Fortran compiler

                                                                                                                                                    • ~15% goes to VS .NET 2003

                                                                                                                                                    • ~20% goes to VS 8

                                                                                                                                                    It is simple enough to ask if any of these can be unloaded.



                                                                                                                                                    It also tells about file types that are distributed across the file system, but taken together represent an opportunity for space saving:



                                                                                                                                                    • ~15% roughly goes to .cab and .MSI files

                                                                                                                                                    • ~10% roughly goes to logging text files

                                                                                                                                                    It shows plenty of other things in there also, that I could probably do without, like "SmartDevices" and "ce" support (~15%).



                                                                                                                                                    It does take linear time, but it doesn't have to be done often.



                                                                                                                                                    Examples of things it has found:



                                                                                                                                                    • backup copies of DLLs in many saved code repositories, that don't really need to be saved

                                                                                                                                                    • a backup copy of someone's hard drive on the server, under an obscure directory

                                                                                                                                                    • voluminous temporary internet files

                                                                                                                                                    • ancient doc and help files long past being needed



























                                                                                                                                                    answered Feb 9 '09 at 17:35







                                                                                                                                                    Mike Dunlavey



























                                                                                                                                                        up vote
                                                                                                                                                        0
                                                                                                                                                        down vote













                                                                                                                                                        I had a similar issue, but the answers on this page weren't enough. I found the following command to be the most useful for the listing:



                                                                                                                                                        du -a / | sort -n -r | head -n 20



This showed me the 20 biggest offenders. However, even after running it, it did not show me the real issue, because I had already deleted the file. The catch was that a process was still running with the deleted log file open, so the space was not released; I had to kill that process first, and only then did the disk space show up as free.
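One way to spot that situation before killing anything: on Linux, a deleted-but-open file still appears as a /proc/<pid>/fd symlink whose target ends in " (deleted)". The following is a minimal illustrative Python sketch (assuming the standard Linux /proc layout; run as root to see every process) that lists such files with the owning PID and size:

#!/usr/bin/env python3
# List files that are deleted but still held open by some process, by
# scanning the /proc/<pid>/fd symlinks. The space is not freed until the
# holding process closes the file or exits.
import glob
import os

for fd in glob.glob("/proc/[0-9]*/fd/*"):
    try:
        target = os.readlink(fd)
        if target.endswith(" (deleted)"):
            size = os.stat(fd).st_size  # stat via the fd reaches the open file
            pid = fd.split("/")[2]
            print(f"{pid}\t{size:>12}\t{target}")
    except OSError:
        continue  # the process exited or closed the fd while we scanned

The same information is available from lsof, e.g. lsof +L1, which lists open files whose on-disk link count is zero.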


















































                                                                                                                                                        answered Dec 2 '14 at 22:59









                                                                                                                                                        VenomFangs












                                                                                                                                                        • Good point but this should be a comment and not an answer by itself - this question suffers from too many answers
                                                                                                                                                          – ndemou
                                                                                                                                                          Jun 9 '17 at 10:36


























                                                                                                                                                        up vote
                                                                                                                                                        0
                                                                                                                                                        down vote













                                                                                                                                                        You can use DiskReport.net to generate an online web report of all your disks.



Run it repeatedly and it will show you a history graph of all your folders, which makes it easy to find what has grown.
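If you'd rather stay with standard commands, you can get a similar "what grew?" view by saving du snapshots between runs and diffing them. A minimal illustrative Python sketch (the snapshot file names are hypothetical, and it assumes GNU du's tab-separated size<TAB>path output in KiB):

#!/usr/bin/env python3
# Print the 20 directories that grew most between two saved "du -k" runs.
# Assumes each snapshot line is "size<TAB>path" with sizes in KiB.
import sys

def load(path):
    sizes = {}
    with open(path) as f:
        for line in f:
            size, _, name = line.rstrip("\n").partition("\t")
            if size.isdigit() and name:
                sizes[name] = int(size)
    return sizes

old, new = load(sys.argv[1]), load(sys.argv[2])
growth = sorted(((new[d] - old.get(d, 0), d) for d in new), reverse=True)
for delta, d in growth[:20]:
    print(f"{delta:>12} KiB  {d}")

Usage might look like: du -k /var > before.txt, then later du -k /var > after.txt, then run the script (saved as, say, dudiff.py) with both files as arguments.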


















































                                                                                                                                                        answered Jun 5 '15 at 21:38









                                                                                                                                                        SteeTri












                                                                                                                                                        • This tool doesn't match two main points of the question "I often find myself struggling to track down the culprit after a partition goes full" and "I'd prefer a command line solution which relies on standard Linux commands"
                                                                                                                                                          – ndemou
                                                                                                                                                          Jun 9 '17 at 10:35


























                                                                                                                                                        up vote
                                                                                                                                                        0
                                                                                                                                                        down vote













                                                                                                                                                        There is a nice piece of cross-platform freeware called JDiskReport which includes a GUI to explore what's taking up all that space.



Example screenshot:
[JDiskReport screenshot]



Of course, you'll need to free up a little space manually before you can download and install it, or download it to a different drive (like a USB thumb drive).




                                                                                                                                                        (Copied here from same-author answer on duplicate question)








































                                                                                                                                                            edited Apr 13 '17 at 12:36









                                                                                                                                                            Community♦











                                                                                                                                                            answered Oct 6 '16 at 2:50









                                                                                                                                                            WBT






















