Can you diff all files in one directory? [duplicate]
This question already has an answer here: How do I do an N-way diff? (3 answers)
I have a bunch of files stored in various directories. They have been created at different times, but I need to check that their contents are the same. I can't find a way to run diff on ALL files in one directory. Is this possible, or is another CLI tool required?
Tags: command-line, files, diff
asked Oct 3 '17 at 14:08 by eekfonky; edited Oct 3 '17 at 16:22 by Jeff Schaller
marked as duplicate by muru, Archemar, Anthon, Romeo Ninov, Anthony Geoghegan, Oct 4 '17 at 8:40
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
Related question/answers on Stack Overflow: how to compare more than 2 files at a time – Mark Plotnick, Oct 3 '17 at 14:16
4 Answers
Accepted answer (4 votes) by brhfl, Oct 3 '17 at 14:25
If you don't need to see the actual differences, and only need to know whether the files differ, you can simply diff every file in the directory against any one of them with a for loop:
for i in ./*; do diff -q "$i" known-file; done
Here known-file is just any given file in the directory. If you get no output, none of the files differ; otherwise you'll get a list of the files that differ from known-file.
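If the directory may also contain subdirectories, or you want a single pass/fail verdict, a slightly more defensive variant of the same idea is sketched below. It is not part of the original answer: it picks the first regular file as the reference automatically and relies on the exit status of cmp -s, and the messages and optional directory argument are illustrative.
#!/usr/bin/env bash
# Compare every regular file in a directory against the first one found.
dir=${1:-.}                            # directory to check (defaults to the current one)
baseline=
status=0
for f in "$dir"/*; do
    [ -f "$f" ] || continue            # skip subdirectories and other non-files
    if [ -z "$baseline" ]; then
        baseline=$f                    # the first regular file becomes the reference
        continue
    fi
    # cmp -s is silent; its exit status tells us whether the contents match
    if ! cmp -s "$baseline" "$f"; then
        printf '%s differs from %s\n' "$f" "$baseline"
        status=1
    fi
done
[ "$status" -eq 0 ] && echo "All files have identical contents."
exit "$status"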
Answer (6 votes) by Kusalananda, Oct 3 '17 at 14:35
Using the standard cksum utility along with awk:
find . -type f -exec cksum {} + | awk '!ck[$1$2]++ { print $3 }'
The cksum utility will output three columns for each file in the current directory. The first is a checksum, the second is a file size, and the third is a filename.
The awk program will create an array, ck, keyed on the checksum and size. If the key does not already exist, the filename is printed.
This means that you get the filenames in the current directory that have unique checksum and size combinations. If you get more than one filename, those files have different checksums and/or sizes.
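Spelled out with comments, the same filter could be written as below. This is only an expanded sketch of the one-liner above and, like the original, it assumes filenames without embedded whitespace (awk's $3 captures only the first word of the name).
find . -type f -exec cksum {} + | awk '
    {
        key = $1 $2          # checksum and size concatenated form the key
        if (!(key in ck))    # first time this checksum+size combination is seen...
            print $3         # ...print the filename as the group representative
        ck[key]++            # remember the key for subsequent files
    }'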
Testing:
$ ls -l
total 8
-rw-r--r-- 1 kk kk 0 Oct 3 16:32 file1
-rw-r--r-- 1 kk kk 0 Oct 3 16:32 file2
-rw-r--r-- 1 kk kk 6 Oct 3 16:32 file3
-rw-r--r-- 1 kk kk 0 Oct 3 16:32 file4
-rw-r--r-- 1 kk kk 6 Oct 3 16:34 file5
$ find . -type f -exec cksum {} + | awk '!ck[$1$2]++ { print $3 }'
./file1
./file3
The files file1, file2 and file4 are all empty, but file3 and file5 have some content. The command shows that there are two sets of files: those that are the same as file1 and those that are the same as file3.
We may also see exactly what files are the same:
$ find . -type f -exec cksum {} + | awk '{ ck[$1$2] = ck[$1$2] ? ck[$1$2] OFS $3 : $3 } END { for (i in ck) print ck[i] }'
./file3 ./file5
./file1 ./file2 ./file4
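Written out with comments, that grouping program is equivalent to the following (again only an expanded sketch of the one-liner above):
find . -type f -exec cksum {} + | awk '
    {
        key = $1 $2                                  # checksum + size identifies the content
        ck[key] = (key in ck) ? ck[key] OFS $3 : $3  # append the filename to its group
    }
    END {
        for (i in ck)
            print ck[i]                              # one line per group of identical files
    }'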
Answer (1 vote) by drl, Oct 3 '17 at 20:01
Given a set of files in directory d, here are the results from four tools that look for duplicate files:
Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution : Debian 8.9 (jessie)
bash GNU bash 4.3.30
fdupes 1.51
jdupes 1.5.1 (2016-11-01)
rdfind 1.3.4
duff 0.5.2
-----
Files in directory d:
==> d/f1 <==
1
==> d/f11 <==
1
==> d/f2 <==
2
==> d/f20 <==
Now is the time
for all good men
to come to the aid
of their country.
==> d/f21 <==
Now is the time
for all good men
to come to the aid
of their country.
==> d/f22 <==
Now is the time
for all good men
to come to the aid
of their countryz
==> d/f3 <==
1
-----
Results for fdupes:
d/f1
d/f3
d/f11
d/f20
d/f21
-----
Results for jdupes:
Examining 7 files, 1 dirs (in 1 specified)
d/f1
d/f3
d/f11
d/f20
d/f21
-----
Results for rdfind:
Now scanning "d", found 7 files.
Now have 7 files in total.
Removed 0 files due to nonunique device and inode.
Now removing files with zero size from list...removed 0 files
Total size is 218 bytes or 218 b
Now sorting on size:removed 0 files due to unique sizes from list.7 files left.
Now eliminating candidates based on first bytes:removed 1 files from list.6 files left.
Now eliminating candidates based on last bytes:removed 1 files from list.5 files left.
Now eliminating candidates based on md5 checksum:removed 0 files from list.5 files left.
It seems like you have 5 files that are not unique
Totally, 74 b can be reduced.
Now making results file results.txt
-----
Results for duff:
3 files in cluster 1 (2 bytes, digest e5fa44f2b31c1fb553b6021e7360d07d5d91ff5e)
d/f1
d/f3
d/f11
2 files in cluster 2 (70 bytes, digest 7de790fbe559d66cf890671ea2ef706281a1017f)
d/f20
d/f21
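For reference, the test directory above can be recreated as follows. The file contents are taken from the listing; the tool invocations at the end are the simplest forms and are assumptions, since the answer does not show the exact commands used.
# Recreate the sample files shown in the listing above
mkdir -p d
printf '1\n' > d/f1
printf '1\n' > d/f11
printf '2\n' > d/f2
printf 'Now is the time\nfor all good men\nto come to the aid\nof their country.\n' > d/f20
cp d/f20 d/f21
printf 'Now is the time\nfor all good men\nto come to the aid\nof their countryz\n' > d/f22
printf '1\n' > d/f3

# Simplest invocations of the four duplicate finders (assumed, not from the answer)
fdupes d
jdupes d
rdfind d
duff d/*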
Best wishes ... cheers, drl
Answer (0 votes) by Jaroslav Kucera, Oct 3 '17 at 14:19
You may also try the GUI tool Meld:
meld dir1 dir2
or
meld dir1 dir2 dir3
https://meldmerge.org/help/command-line.html
Thanks, but it has to be CLI. Maybe I'll just write a script to put filenames into an array and try it that way. I was hoping for a simpler solution though! lol – eekfonky, Oct 3 '17 at 14:20
OK, I've also found this: stackoverflow.com/questions/16787916/… It's probably what you're looking for. – Jaroslav Kucera, Oct 3 '17 at 14:27
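For completeness, here is a minimal sketch of the array-based script eekfonky mentions above; the structure is an illustrative assumption, essentially the accepted answer's loop with the filenames collected into an array first.
#!/usr/bin/env bash
# Collect the filenames into an array, then diff each one against the first.
# Assumes a flat directory of regular files.
files=( ./* )                    # every entry in the current directory
first=${files[0]}                # the first entry becomes the reference
for f in "${files[@]:1}"; do     # loop over the remaining entries
    diff -q "$first" "$f"        # prints a line only when contents differ
done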