Splitting into many .ZIP files using 7-Zip
If I have a 100 GB folder and I ZIP it into split volumes, is there a difference in how much disk space is consumed if I split it into 100 .ZIP files at 1 GB each or 10 .ZIP files at 10 GB each?
Do 100 .ZIP files at 1 GB each take up more space than 10 .ZIP files at 10 GB each?
disk-space 7-zip
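For reference, the volume size is set with 7-Zip's -v switch, so the two layouts being compared can be produced like this (a minimal sketch; myfolder stands in for the 100 GB folder):
mkdir -p split-1g split-10g
7z a -tzip -v1g  split-1g/archive.zip  myfolder    # 1 GB volumes: archive.zip.001, archive.zip.002, …
7z a -tzip -v10g split-10g/archive.zip myfolder    # 10 GB volumes
du -s split-1g split-10g                           # compare the on-disk totals of the two sets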
asked 13 hours ago by Upvotes All Downvoted Posts; edited 12 mins ago by Twisty Impersonator
And you can't find out because? – Dave, 8 hours ago
Why can't you just try it? – Peter Mortensen, 7 hours ago
2 Answers
Let's find out!
100 MB files (27 pieces):
7z a -tzip -v100M ./100m/archive ./kali-linux-xfce-2018.2-amd64.iso
$ du ./100m/
2677884 ./100m/
10 MB files (262 pieces):
7z a -tzip -v10M ./10m/archive ./kali-linux-xfce-2018.2-amd64.iso
$ du ./10m/
2677908 ./10m
Results: The 10 MB split archive takes up an extra 24 KB. So yes, there is a difference: the 100 files at 1 GB each will take up more space than the 10 files at 10 GB each.
The difference seems to be negligible though. I would go for whichever is more convenient for you.
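As a back-of-the-envelope check (not part of the measurement above), the two du totals, which are in KB, give:
a=2677884; b=2677908                     # du totals for the 100 MB and 10 MB splits, in KB
echo "extra space: $((b - a)) KB"        # 24 KB
awk -v a="$a" -v b="$b" 'BEGIN { printf "relative overhead: %.4f%%\n", (b - a) * 100 / a }'    # ~0.0009%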
answered 11 hours ago by Layne Bernardo (new contributor); edited 7 hours ago by Peter Mortensen (accepted answer)
du doesn't output the size in bytes by default (unless your 270M of files turned into 2,677,908 bytes). It does display the on-disk size of files, which may be different than the actual data size (maybe applicable for uploading or storing on other filesystems). – Xen2050, 10 hours ago
You are correct, it's actually outputting in KB. I've edited the answer to correct this discrepancy. The original file is a Kali Linux ISO, it is ~2.6 GB. You have a good point about the on-disk size vs actual data size, I was specifically thinking about on-disk size because it accounts for the overhead of having additional files, but you're right that it would be different depending on what you're actually doing with the archives. – Layne Bernardo, 10 hours ago
Sorry, I crossed with your largely similar answer while I was double-checking the run strings. – AFH, 10 hours ago
Zip file max size is 4GB. – pbies, 8 hours ago
Re "The difference seems to be negligible": What is it in %? – Peter Mortensen, 7 hours ago
Every file has a file-system overhead of unused space in its last allocation unit (cluster) after the end of file. But this is eliminated if the split size is a multiple of the allocation-unit size (not necessarily true of my example below).
There may be extra bytes used by the extra directory entries, but these will not matter unless the directory now occupies an extra allocation unit.
The split files are identical in content to those created by a binary splitter program with the same split size.
I verified this on Linux by using the 7-Zip GUI on a 7+ MB file, giving 8 split files of 1 MB each (File.7z.00?), then created a single, full archive (Full.7z), which I split with:
7z a -v1000000 File File                                       # Create split volumes File.7z.001 … File.7z.008 (the 7+ MB source file is named File here)
7z a Full File                                                 # Create the full archive Full.7z from the same source file
split -b 1000000 -a 3 --numeric-suffixes=1 Full.7z Full.7z.    # Split the full archive into Full.7z.001 …
for f in {001..008}; do cmp Full.7z.$f File.7z.$f; done        # Compare each split piece with the corresponding 7z volume
To test on another OS you may need to download or write an appropriate splitter program.
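If you want to see the allocation slack described above on any single piece, here is a quick sketch (assuming GNU coreutils, with File.7z.001 being the first volume from the commands above):
stat -c '%s bytes of data, %b blocks of %B bytes allocated' File.7z.001
du -b  File.7z.001     # apparent (data) size, in bytes
du -B1 File.7z.001     # space actually allocated on disk, in bytes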
answered 10 hours ago by AFH; edited 10 mins ago by Twisty Impersonator