Modification required for the perl command
I need a modification to the Perl command below:

perl -wE 'say for ((sort { -s $b <=> -s $a } </tmp/?>)[0..9]);'

Requirement:
- It should scan through all the sub-directories inside the target directory.
- It should list the top 10 files with their size and path.
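For context, the </tmp/?> glob in the one-liner expands only to single-character names directly under /tmp; it neither recurses into sub-directories nor matches longer names, which is why it falls short of the requirement. One way to see exactly what the glob matches:

# prints every path the </tmp/?> glob expands to (single-character names in /tmp only)
perl -wE 'say for </tmp/?>'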
Tags: files perl aix
asked May 18 at 10:37 by Anoop Kumar KR, edited May 18 at 13:51
4 Answers
Answer by Arushix (score 2), answered May 18 at 11:01, edited May 18 at 11:46:
This Perl script will print exactly what you need. It uses File::Find to traverse the tree recursively, and I have used -f to make sure only regular files are pushed into the hash. The hash %files has the file path as the key and the size as its value; it is then sorted by value and the top 10 results are printed.
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my %files;
my $counter = 0;

find( \&wanted, '<target directory>' );
for my $file ( sort { $files{$b} <=> $files{$a} } keys %files ) {
    print "$file : $files{$file}\n";
    $counter++;
    if ( $counter == 10 ) {
        last;
    }
}

sub wanted {
    $files{"$File::Find::name"} = -s $File::Find::name if -f;
    return;
}
Or simply use an array to get it working:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my @files;
my $counter = 0;

find( \&wanted, '<target directory>' );
for my $file ( sort { -s $b <=> -s $a } @files ) {
    my $size = -s $file;
    print "$file : $size\n";
    $counter++;
    if ( $counter == 10 ) {
        last;
    }
}

sub wanted {
    push @files, $File::Find::name if -f;
    return;
}
In your above code, the second snippet's @files[0..9] won't give the top sorted file names; it will only run sort on the first 10 entries of the @files array. – mkmayank, May 18 at 11:25
Thanks a lot for the suggestion; I have edited my answer to get the correct result. – Arushix, May 18 at 11:45
It worked like a charm for my smaller directories. However, for a bigger size, I'm getting an "Out of Memory" message. – Anoop Kumar KR, May 18 at 14:11
@Arushix... any suggestion to avoid the Out of Memory message? – Anoop Kumar KR, May 21 at 9:04
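A memory-bounded variant of the array version, sketched here along the lines of the splice technique in Stéphane Chazelas's answer below, keeps at most 10 [size, path] pairs at any time instead of collecting every path ('<target directory>' is the same placeholder as above):

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my @top;    # at most 10 [size, path] pairs, largest first

find(
    sub {
        return unless -f;    # regular files only; -f also stats $_ for -s _ below
        # insert the new entry, re-sort by size, and trim back to 10,
        # so memory use stays bounded no matter how many files exist
        @top = sort { $b->[0] <=> $a->[0] } [ -s _, $File::Find::name ], @top;
        splice @top, 10 if @top > 10;
    },
    '<target directory>'
);

printf "%s : %d\n", $_->[1], $_->[0] for @top;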
Answer by choroba (score 2), answered May 18 at 10:50, edited May 18 at 16:59:
Use File::Find to recursively walk the directory tree:
perl -MFile::Find -wE '
    find(sub { push @all, $File::Find::name }, "/tmp");
    say for (sort { -s $b <=> -s $a } @all)[0..9]'
If there are too many files and you're getting Out of memory, output the sizes as you go and use the external sort and head to limit the output:
perl -MFile::Find -wE 'find(sub { say -s $_, " $File::Find::name" }, "/tmp")' \
    | sort -nr | head -n10
Thanks for the response. I got the file names; how can I get the size of the files too, along with the file names? – Anoop Kumar KR, May 18 at 10:58
Add | xargs du -h to the above perl command output. – mkmayank, May 18 at 11:01
I found this for the file size: perlmaven.com/how-to-get-the-size-of-a-file-in-perl – Kramer, May 18 at 11:06
I see the size details that come out are different with the perl command and with the unix find command. – Anoop Kumar KR, May 18 at 13:01
hpauto@st2ba1301:/tmp$ p"); say for (sort -s $b <=> -s $a @all)[0..9]' 2>/dev/null | xargs du -s < 343800 /tmp/Spectra/spectre_meltdown_fix.tar 38336 /tmp/lftp_latest/gcc-4.8.3-1.aix7.1.ppc.rpm – Anoop Kumar KR, May 18 at 13:02
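The discrepancy the last two comments describe is expected: Perl's -s returns the file size in bytes, while du reports disk usage in blocks (512-byte units by default under POSIX, which AIX follows). A quick comparison for a single file, where /tmp/somefile stands in for any existing file:

# size in bytes, as perl's -s reports it
perl -e 'print -s $ARGV[0], "\n"' /tmp/somefile

# disk usage in 512-byte blocks (POSIX default); -k switches to 1024-byte blocks
du /tmp/somefile
du -k /tmp/somefile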
Answer by Stéphane Chazelas (score 1), answered May 18 at 12:27, edited May 18 at 15:44:
zsh -c 'ls -ldS /tmp/**/?(DOL[1,10])'

This lists the 10 largest single-character (?) files in /tmp and its subdirectories (**/), ordered by size.
With perl, and to avoid storing the whole file list in memory when you only want the 10 largest ones:

perl -MFile::Find -e '
    find(
        sub {
            if (length == 1 && $_ ne ".") {
                @s = sort { $b->[0] <=> $a->[0] } [ -s, $File::Find::name ], @s;
                splice @s, 10;
            }
        }, "/tmp"
    ); printf "%16d %s\n", @$_ for @s'
(The length == 1 && $_ ne "." is there to match on single-byte file names, as your /tmp/? suggests you want to do.)
Instead of printf "%16d %s\n", @$_, you could also run ls like in the zsh solution, with exec "ls", "-ldS", map $_->[1], @s.
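If the goal is in fact the 10 largest regular files of any name under the target directory, a sketch that drops the single-character restriction while keeping the same bounded-memory splice trick (/target/dir is a placeholder):

perl -MFile::Find -e '
    find(
        sub {
            return unless -f;    # regular files only
            @s = sort { $b->[0] <=> $a->[0] } [ -s _, $File::Find::name ], @s;
            splice @s, 10;       # keep only the 10 largest seen so far
        }, "/target/dir"
    ); printf "%16d %s\n", @$_ for @s'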
Doesn't have the zsh utility... it doesn't work for me. – Anoop Kumar KR, May 18 at 13:14
@AnoopKumarKR, most Unix-like systems have zsh available as a package. – Stéphane Chazelas, May 18 at 13:23
I tried to run this; it didn't work. Mine is AIX. – Anoop Kumar KR, May 18 at 13:28
@AnoopKumarKR, again, you'd need to install it first. Even IBM apparently provides zsh packages these days for AIX. Anyway, I've added a perl solution. – Stéphane Chazelas, May 18 at 15:48
Link to the IBM AIX Toolbox for Linux and directly to the zsh rpm. – Jeff Schaller, May 18 at 20:05
Answer by Kramer (score 0), answered May 18 at 10:58, edited May 18 at 11:01:
Easier with bash, I would say:

find $PATH_TO_PARENT_FOLDER -type f -exec du -ahx {} + | sort -rh | head -10
Essentially we are using find to locate all files inside a folder:

- -type: either f for file or d for dir
- -exec: executes a command, placing each find result as an argument at {}

Then we execute du to calculate the size of each element found through find, adding:

- -a, --all: write counts for all files, not just directories
- -h, --human-readable: print sizes in human-readable format (e.g., 1K 234M 2G)
- -x, --one-file-system: skip directories on different file systems
Then we pipe it to sort from bigger to smaller, and we use head to present the top 10 only:

- sort: to sort the output
  - -h, --human-numeric-sort: compare human-readable numbers (e.g., 2K 1G)
  - -r, --reverse: reverse the result of comparisons
- head: to read only the top results
  - -<N>: set the number of lines you want to see, from 1 to N
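Note that -h for du and sort is not in POSIX and may not be available on AIX; a more portable sketch using POSIX options, reporting sizes in 1024-byte blocks ($PATH_TO_PARENT_FOLDER as above):

# du -k prints 1024-byte blocks (POSIX); sort -nr sorts numerically, descending
find "$PATH_TO_PARENT_FOLDER" -type f -exec du -k {} + | sort -nr | head -10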
I have a directory with more than 300000 files; the find command takes around 15 minutes, or it goes into a hung state! – Anoop Kumar KR, May 18 at 11:00
You could combine the solution from @choroba with mine: read the FS using perl, then apply the commands I shared with you to each of the found results. :) – Kramer, May 18 at 11:02
choroba's command is failing with "Out of memory". – Anoop Kumar KR, May 18 at 13:23