Modification required for the perl command

I need a modification to the Perl command below:



perl -wE 'say for ((sort { -s $b <=> -s $a } </tmp/?>)[0..9]);'


Requirement:



  1. It should scan through all the sub-directories inside the target directory.

  2. List the top 10 files with their size and path.






      edited May 18 at 13:51 by αғsнιη
      asked May 18 at 10:37 by Anoop Kumar KR




          4 Answers






          This Perl script will print exactly what you need; it uses File::Find to traverse the tree recursively. I have used -f to make sure only plain files are pushed into the hash.

          The hash %files has the file path as its key and the file size as its value. It is then sorted on its values and the top 10 results are printed.



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my %files;
          my $counter = 0;

          find( \&wanted, '<target directory>' );

          for my $file ( sort { $files{$b} <=> $files{$a} } keys %files ) {
              print "$file : $files{$file}\n";
              $counter++;
              last if $counter == 10;
          }

          sub wanted {
              $files{$File::Find::name} = -s $File::Find::name if -f;
              return;
          }



          Or, simply, using an array to get it working:



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my @files;
          my $counter = 0;

          find( \&wanted, '<target directory>' );

          for my $file ( sort { -s $b <=> -s $a } @files ) {
              my $size = -s $file;
              print "$file : $size\n";
              $counter++;
              last if $counter == 10;
          }

          sub wanted {
              push @files, $File::Find::name if -f;
              return;
          }






          answered May 18 at 11:01 by Arushix, edited May 18 at 11:46
          • in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
            – mkmayank
            May 18 at 11:25










          • thanks a lot for suggestion, i have edited my answer to get the correct result
            – Arushix
            May 18 at 11:45










          • it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
            – Anoop Kumar KR
            May 18 at 14:11










          • @Arushix... any suggestion to avoid the Out of Memory message?
            – Anoop Kumar KR
            May 21 at 9:04

















          Use File::Find to recursively walk the directory tree:



          perl -MFile::Find -wE '
              find(sub { push @all, $File::Find::name }, "/tmp");
              say for (sort { -s $b <=> -s $a } @all)[0..9]'


          If there are too many files and you're getting Out of memory, print the sizes instead and use external sort and head to limit the output:



          perl -MFile::Find -wE 'find(sub { say -s $_, " $File::Find::name" }, "/tmp")' \
              | sort -nr | head -n10





          answered May 18 at 10:50 by choroba, edited May 18 at 16:59
          • thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
            – Anoop Kumar KR
            May 18 at 10:58






          • add | xargs du -h to the above perl command output
            – mkmayank
            May 18 at 11:01










          • I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
            – Kramer
            May 18 at 11:06










          • i see the size details came out are different with the perl command and with the unix find command
            – Anoop Kumar KR
            May 18 at 13:01










          • hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
            – Anoop Kumar KR
            May 18 at 13:02


















          zsh -c 'ls -ldS /tmp/**/?(DOL[1,10])'


          To list the 10 Largest single-character (?) files in /tmp and subdirs (**/), ordered by Size.



          With perl, and to avoid storing the whole file list in memory when you only want the 10 largest ones:



          perl -MFile::Find -e '
              find(
                  sub {
                      if (length == 1 && $_ ne ".") {
                          @s = sort { $b->[0] <=> $a->[0] } [-s, $File::Find::name], @s;
                          splice @s, 10;
                      }
                  },
                  "/tmp"
              ); printf "%16d %s\n", @$_ for @s'


          (the length == 1 && $_ ne "." test is there to match only single-character file names, as your /tmp/? glob suggests you want.)



          Instead of printf "%16d %s\n", @$_, you could also run ls like in the zsh solution, with exec "ls", "-ldS", map { $_->[1] } @s






          • doesnt have zsh utility...it doesnt work 4 me
            – Anoop Kumar KR
            May 18 at 13:14










          • @AnoopKumarKR, most Unix-like systems have zsh available as a package.
            – Stéphane Chazelas
            May 18 at 13:23










          • i tried to run this, it dint work. Mine is AIX.
            – Anoop Kumar KR
            May 18 at 13:28










          • @AnoopKumarKR, again, you'd need to install it first. Even IBM apparently provides zsh packages these days for AIX. Anyway, I've added a perl solution.
            – Stéphane Chazelas
            May 18 at 15:48











          • link to IBM AIX Toolbox for Linux and directly to the zsh rpm
            – Jeff Schaller
            May 18 at 20:05

















          Easier with bash, I would say:



          find $PATH_TO_PARENT_FOLDER -type f -exec du -ahx {} + | sort -rh | head -10



          Essentially we are using find to locate all files inside a folder,




          • -type either f for file or d for dir


          • -exec executes a command using the output of find, placing each find result as an argument at {}

          then executing du to calculate the size of each element found through find, adding:




          • -a, --all write counts for all files, not just directories


          • -h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G)


          • -x, --one-file-system skip directories on different file systems

          Then we pipe it to sort, from bigger to smaller, and use head to present the top 10 only.



          sort to sort the output result




          • -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G)


          • -r, --reverse reverse the result of comparisons

          head to just read the top results




          • -<N> to set the number of lines you want to see, from 1 to N
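Putting the pieces together on a scratch directory (invented demo paths; sizes chosen far enough apart that du's block-granular accounting still orders them correctly; assumes a sort with GNU-style -h support):

```shell
#!/bin/sh
# Scratch tree with two files whose on-disk sizes clearly differ.
top=$(mktemp -d)
mkdir -p "$top/a/b"
head -c 1048576 /dev/urandom > "$top/a/b/big.dat"   # ~1M on disk
head -c 512     /dev/urandom > "$top/small.dat"     # a single block

# The pipeline from the answer: du sizes every file, sort -rh orders the
# human-readable sizes descending, head keeps the top 10.
find "$top" -type f -exec du -ahx {} + | sort -rh | head -10
```

The 1M file should appear on the first line, ahead of the small one.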





          • I have a directory with more than 300000 files, find command takes around 15mins or it goes in hung state.. !! :(
            – Anoop Kumar KR
            May 18 at 11:00










          • you could combine the solution from @choroba with mine, read the FS using perl, then apply for each of the found results the commands I shared with you :)
            – Kramer
            May 18 at 11:02











          • choroba's command is failing with "Out of memory"
            – Anoop Kumar KR
            May 18 at 13:23










          • ??????????????????
            – Anoop Kumar KR
            May 22 at 8:18










          Your Answer







          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "106"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          convertImagesToLinks: false,
          noModals: false,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );








           

          draft saved


          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f444560%2fmodification-required-for-the-perl-command%23new-answer', 'question_page');

          );

          Post as a guest






























          4 Answers
          4






          active

          oldest

          votes








          4 Answers
          4






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          2
          down vote













          This perl script will print exactly what you need, it is using File::Find to traverse recursively.
          I have used -f to make sure only files are pushed into the hash



          Hash %files has filepath as the key and size as its value. Then sorted it on basis of values and printed the top 10 results



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my %files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort $files$b <=> $files$a keys%files)
          print "$file : $files$filen";
          $counter++;
          if ($counter == 10)
          last;




          sub wanted
          $files"$File::Find::name"=-s $File::Find::name if -f;
          return;




          Or simply using an array to get it working



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my @files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort -s $b <=> -s $a @files){
          my $size = -s $file;
          print "$file : $sizen"
          $counter++;
          if ($counter == 10)
          last;

          sub wanted
          push @files,$File::Find::name if -f;
          return;






          share|improve this answer























          • in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
            – mkmayank
            May 18 at 11:25










          • thanks a lot for suggestion, i have edited my answer to get the correct result
            – Arushix
            May 18 at 11:45










          • it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
            – Anoop Kumar KR
            May 18 at 14:11










          • @Arushix... any suggestion to avoid the Out of Memory message?
            – Anoop Kumar KR
            May 21 at 9:04














          up vote
          2
          down vote













          This perl script will print exactly what you need, it is using File::Find to traverse recursively.
          I have used -f to make sure only files are pushed into the hash



          Hash %files has filepath as the key and size as its value. Then sorted it on basis of values and printed the top 10 results



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my %files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort $files$b <=> $files$a keys%files)
          print "$file : $files$filen";
          $counter++;
          if ($counter == 10)
          last;




          sub wanted
          $files"$File::Find::name"=-s $File::Find::name if -f;
          return;




          Or simply using an array to get it working



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my @files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort -s $b <=> -s $a @files){
          my $size = -s $file;
          print "$file : $sizen"
          $counter++;
          if ($counter == 10)
          last;

          sub wanted
          push @files,$File::Find::name if -f;
          return;






          share|improve this answer























          • in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
            – mkmayank
            May 18 at 11:25










          • thanks a lot for suggestion, i have edited my answer to get the correct result
            – Arushix
            May 18 at 11:45










          • it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
            – Anoop Kumar KR
            May 18 at 14:11










          • @Arushix... any suggestion to avoid the Out of Memory message?
            – Anoop Kumar KR
            May 21 at 9:04












          up vote
          2
          down vote










          up vote
          2
          down vote









          This perl script will print exactly what you need, it is using File::Find to traverse recursively.
          I have used -f to make sure only files are pushed into the hash



          Hash %files has filepath as the key and size as its value. Then sorted it on basis of values and printed the top 10 results



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my %files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort $files$b <=> $files$a keys%files)
          print "$file : $files$filen";
          $counter++;
          if ($counter == 10)
          last;




          sub wanted
          $files"$File::Find::name"=-s $File::Find::name if -f;
          return;




          Or simply using an array to get it working



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my @files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort -s $b <=> -s $a @files){
          my $size = -s $file;
          print "$file : $sizen"
          $counter++;
          if ($counter == 10)
          last;

          sub wanted
          push @files,$File::Find::name if -f;
          return;






          share|improve this answer















          This perl script will print exactly what you need, it is using File::Find to traverse recursively.
          I have used -f to make sure only files are pushed into the hash



          Hash %files has filepath as the key and size as its value. Then sorted it on basis of values and printed the top 10 results



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my %files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort $files$b <=> $files$a keys%files)
          print "$file : $files$filen";
          $counter++;
          if ($counter == 10)
          last;




          sub wanted
          $files"$File::Find::name"=-s $File::Find::name if -f;
          return;




          Or simply using an array to get it working



          #!/usr/bin/perl

          use strict;
          use warnings;
          use File::Find;

          my @files;
          my $counter=0;
          find( &wanted, '<target directory>');
          for my $file ( sort -s $b <=> -s $a @files){
          my $size = -s $file;
          print "$file : $sizen"
          $counter++;
          if ($counter == 10)
          last;

          sub wanted
          push @files,$File::Find::name if -f;
          return;







          share|improve this answer















          share|improve this answer



          share|improve this answer








          edited May 18 at 11:46


























          answered May 18 at 11:01









          Arushix

          9968




          9968











          • in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
            – mkmayank
            May 18 at 11:25










          • thanks a lot for suggestion, i have edited my answer to get the correct result
            – Arushix
            May 18 at 11:45










          • it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
            – Anoop Kumar KR
            May 18 at 14:11










          • @Arushix... any suggestion to avoid the Out of Memory message?
            – Anoop Kumar KR
            May 21 at 9:04
















          • in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
            – mkmayank
            May 18 at 11:25










          • thanks a lot for suggestion, i have edited my answer to get the correct result
            – Arushix
            May 18 at 11:45










          • it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
            – Anoop Kumar KR
            May 18 at 14:11










          • @Arushix... any suggestion to avoid the Out of Memory message?
            – Anoop Kumar KR
            May 21 at 9:04















          in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
          – mkmayank
          May 18 at 11:25




          in your above code second snippet @files[0..9] won't give top sorted file names , but will only run sort on first 10 entries of @files array
          – mkmayank
          May 18 at 11:25












          thanks a lot for suggestion, i have edited my answer to get the correct result
          – Arushix
          May 18 at 11:45




          thanks a lot for suggestion, i have edited my answer to get the correct result
          – Arushix
          May 18 at 11:45












          it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
          – Anoop Kumar KR
          May 18 at 14:11




          it worked charm for my lower size directories. However for a bigger size, im getting an "Out of Memory" message
          – Anoop Kumar KR
          May 18 at 14:11












          @Arushix... any suggestion to avoid the Out of Memory message?
          – Anoop Kumar KR
          May 21 at 9:04




          @Arushix... any suggestion to avoid the Out of Memory message?
          – Anoop Kumar KR
          May 21 at 9:04












          up vote
          2
          down vote













          Use File::Find to recursively walk the directory tree:



          perl -MFile::Find -wE '
          find(sub push @all, $File::Find::name , "/tmp");
          say for (sort -s $b <=> -s $a @all)[0..9]'


          If there are too many files and you're getting Out of memory, return the sizes and use external sort and head to limit the output:



          perl -MFile::Find -wE 'find(sub say -s $_, " $File::Find::name" , "/tmp")' 
          | sort -nr | head -n10





          share|improve this answer























          • thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
            – Anoop Kumar KR
            May 18 at 10:58






          • 1




            add | xargs du -h to the above perl command output
            – mkmayank
            May 18 at 11:01










          • I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
            – Kramer
            May 18 at 11:06










          • i see the size details came out are different with the perl command and with the unix find command
            – Anoop Kumar KR
            May 18 at 13:01










          • hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
            – Anoop Kumar KR
            May 18 at 13:02















          up vote
          2
          down vote













          Use File::Find to recursively walk the directory tree:



          perl -MFile::Find -wE '
          find(sub push @all, $File::Find::name , "/tmp");
          say for (sort -s $b <=> -s $a @all)[0..9]'


          If there are too many files and you're getting Out of memory, return the sizes and use external sort and head to limit the output:



          perl -MFile::Find -wE 'find(sub say -s $_, " $File::Find::name" , "/tmp")' 
          | sort -nr | head -n10





          share|improve this answer























          • thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
            – Anoop Kumar KR
            May 18 at 10:58






          • 1




            add | xargs du -h to the above perl command output
            – mkmayank
            May 18 at 11:01










          • I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
            – Kramer
            May 18 at 11:06










          • i see the size details came out are different with the perl command and with the unix find command
            – Anoop Kumar KR
            May 18 at 13:01










          • hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
            – Anoop Kumar KR
            May 18 at 13:02













          up vote
          2
          down vote










          up vote
          2
          down vote









          Use File::Find to recursively walk the directory tree:



          perl -MFile::Find -wE '
          find(sub push @all, $File::Find::name , "/tmp");
          say for (sort -s $b <=> -s $a @all)[0..9]'


          If there are too many files and you're getting Out of memory, return the sizes and use external sort and head to limit the output:



          perl -MFile::Find -wE 'find(sub say -s $_, " $File::Find::name" , "/tmp")' 
          | sort -nr | head -n10





          share|improve this answer















          Use File::Find to recursively walk the directory tree:



          perl -MFile::Find -wE '
          find(sub push @all, $File::Find::name , "/tmp");
          say for (sort -s $b <=> -s $a @all)[0..9]'


          If there are too many files and you're getting Out of memory, return the sizes and use external sort and head to limit the output:



          perl -MFile::Find -wE 'find(sub say -s $_, " $File::Find::name" , "/tmp")' 
          | sort -nr | head -n10






          share|improve this answer















          share|improve this answer



          share|improve this answer








          edited May 18 at 16:59


























          answered May 18 at 10:50









          choroba

          24.3k33967




          24.3k33967











          • thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
            – Anoop Kumar KR
            May 18 at 10:58






          • 1




            add | xargs du -h to the above perl command output
            – mkmayank
            May 18 at 11:01










          • I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
            – Kramer
            May 18 at 11:06










          • i see the size details came out are different with the perl command and with the unix find command
            – Anoop Kumar KR
            May 18 at 13:01










          • hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
            – Anoop Kumar KR
            May 18 at 13:02

















          • thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
            – Anoop Kumar KR
            May 18 at 10:58






          • 1




            add | xargs du -h to the above perl command output
            – mkmayank
            May 18 at 11:01










          • I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
            – Kramer
            May 18 at 11:06










          • i see the size details came out are different with the perl command and with the unix find command
            – Anoop Kumar KR
            May 18 at 13:01










          • hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
            – Anoop Kumar KR
            May 18 at 13:02
















          thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
          – Anoop Kumar KR
          May 18 at 10:58




          thanks for the response. I got the file names, how can i get the size of the files too along with the file names.
          – Anoop Kumar KR
          May 18 at 10:58




          1




          1




          add | xargs du -h to the above perl command output
          – mkmayank
          May 18 at 11:01




          add | xargs du -h to the above perl command output
          – mkmayank
          May 18 at 11:01












          I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
          – Kramer
          May 18 at 11:06




          I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl
          – Kramer
          May 18 at 11:06












          i see the size details came out are different with the perl command and with the unix find command
          – Anoop Kumar KR
          May 18 at 13:01




          i see the size details came out are different with the perl command and with the unix find command
          – Anoop Kumar KR
          May 18 at 13:01












          hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
          – Anoop Kumar KR
          May 18 at 13:02





          hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm
          – Anoop Kumar KR
          May 18 at 13:02











          up vote
          1
          down vote













          zsh -c 'ls -ldS /tmp/**/?(DOL[1,10])'


          To list the 10 Largest single-character (?) files in /tmp and subdirs (**/), ordered by Size.



          With perl, and to avoid storing the whole file list in memory when you only want the 10 largest ones:



          perl -MFile::Find -e '
          find(
          sub
          if (length == 1 && $_ ne ".")
          @s = sort $b->[0] <=> $a->[0] [-s, $File::Find::name], @s;
          splice @s, 10

          , "/tmp"
          ); printf "%16d %sn", @$_ for @s'


          (the length == 1 && $_ ne "." is to match on single-byte file names like your /tmp/? suggests you want to do).



          Instead of printf "%16d %sn", @$_, you could also run ls like in the zsh solution with exec "ls", "-ldS", map $_->[1], @s






          share|improve this answer























          • doesnt have zsh utility...it doesnt work 4 me
            – Anoop Kumar KR
            May 18 at 13:14










          • @AnoopKumarKR, most Unix-like systems have zsh available as a package.
            – Stéphane Chazelas
            May 18 at 13:23










          • i tried to run this, it dint work. Mine is AIX.
            – Anoop Kumar KR
            May 18 at 13:28










          • @AnoopKumarKR, again, you'd need to install it first. Even IBM apparently provides zsh packages these days for AIX. Anyway, I've added a perl solution.
            – Stéphane Chazelas
            May 18 at 15:48











          • link to IBM AIX Toolbox for Linux and directly to the zsh rpm
            – Jeff Schaller
            May 18 at 20:05














          edited May 18 at 15:44
          answered May 18 at 12:27
          Stéphane Chazelas
          up vote
          0
          down vote













          Easier with bash, I would say:



          find "$PATH_TO_PARENT_FOLDER" -type f -exec du -ahx {} + | sort -rh | head -10



          Essentially we are using find to locate all files inside a folder,




          • -type either f for file or d for dir


          • -exec runs a command on the results of find, substituting them where {} appears; the + terminator batches many files per invocation

          then executing du to calculate the size of each file found through find, adding the options




          • -a, --all write counts for all files, not just directories


          • -h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G)


          • -x, --one-file-system skip directories on different file systems

          Then we pipe it to sort, from bigger to smaller, and we use head to present the top 10 only



          sort to sort the output result




          • -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G)


          • -r, --reverse reverse the result of comparisons

          head to just read the top results




          • -<N> to set the number of lines you want to see, from 1 to N
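As a quick sanity check, the pipeline above can be exercised on a throwaway tree (the mktemp directory and file names here are purely illustrative):

```shell
# Exercise the find | du | sort | head pipeline on a small demo tree.
dir=$(mktemp -d)
printf '%100000s' x > "$dir/big"         # ~100 KB file
mkdir "$dir/nested"
printf '%50s' x > "$dir/nested/tiny"     # 50-byte file in a subdirectory

# Largest files first, human-readable sizes, at most 10 lines.
find "$dir" -type f -exec du -ahx {} + | sort -rh | head -10

rm -rf "$dir"
```

Note that -h on both du and sort is a GNU extension; on a system such as AIX where it may be missing, something like du -ak with a plain numeric sort -rn should give the same ordering in 1 KB units.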





          • I have a directory with more than 300000 files, find command takes around 15mins or it goes in hung state.. !! :(
            – Anoop Kumar KR
            May 18 at 11:00










          • you could combine the solution from @choroba with mine, read the FS using perl, then apply for each of the found results the commands I shared with you :)
            – Kramer
            May 18 at 11:02











          • choroba's command is failing with "Out of memory"
            – Anoop Kumar KR
            May 18 at 13:23













          edited May 18 at 11:01
          answered May 18 at 10:58
          Kramer

           
