Transferring large (8 GB) files over ssh
(25 votes)
I tried it with scp, but it says "Negative file size":

>scp matlab.iso xxx@xxx:/matlab.iso
matlab.iso: Negative file size

I also tried SFTP; it worked fine until 2 GB of the file had transferred, then stopped:

sftp> put matlab.iso
Uploading matlab.iso to /home/x/matlab.iso
matlab.iso -298% 2021MB -16651.-8KB/s 00:5d
o_upload: offset < 0

Any idea what could be wrong? Don't SCP and SFTP support files larger than 2 GB? If so, how can I transfer bigger files over SSH?

The destination file system is ext4, and the Linux distribution is CentOS 6.5. The filesystem already holds (accessible) large files of up to 100 GB.

Tags: scp sftp large-files
5
Looks like a variable size overrun. But AFAIK scp/sftp has no size limit. What is the destination file system? Does it support LARGEFILES?
– Milind Dumbare, Mar 16 '15 at 17:09

1
What about the sftp and scp applications themselves? You can find this out by running the file command against their binaries.
– mdpc, Mar 16 '15 at 18:54

1
@shepherd - yes.
– mdpc, Mar 16 '15 at 19:12

2
32-bit applications can access large files if they're compiled with -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64. But if you're running a 64-bit 6.5 system, it'd probably be easier to have the admins install openssh-5.3p1-94.el6_6.1.x86_64 and openssh-server-5.3p1-94.el6_6.1.x86_64 from the standard repos.
– Mark Plotnick, Mar 16 '15 at 21:02

1
lol at software using signed integers for file size
– Lightness Races in Orbit, Mar 17 '15 at 17:04
asked Mar 16 '15 at 16:59 by eimrek; edited Aug 26 at 14:42 by Jeff Schaller
3 Answers
Accepted answer (6 votes)
The original problem (based on the comments on the question) was that the scp executable on the 64-bit system was a 32-bit application. A 32-bit application that isn't compiled with large-file support uses signed 32-bit file offsets, which are limited to 2^31 − 1 bytes, just under 2 GiB; beyond that the offset wraps around to a negative value, which is exactly what the "Negative file size" and "offset < 0" errors show.

You can tell whether scp is 32-bit by running the file command against it:

file `which scp`

On most modern systems it will be 64-bit, so no truncation occurs:

$ file `which scp`
/usr/bin/scp: ELF 64-bit LSB shared object, x86-64 ...

A 32-bit application can still support large files, but it has to be compiled with large-file support, which apparently wasn't the case here. The simplest fix is to use a standard 64-bit distribution where applications are compiled as 64-bit by default.
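The arithmetic behind the failure is easy to check in the shell (a quick sketch; the final `file` check assumes scp is on your PATH):

```shell
# Signed 32-bit file offsets top out at 2^31 - 1 bytes, just under 2 GiB:
echo $(( (1 << 31) - 1 ))    # 2147483647
# An 8 GiB ISO is roughly four times that limit, so the offset wraps negative:
echo $(( 8 * 1024 * 1024 * 1024 / ((1 << 31) - 1) ))    # 4
# Check whether the installed scp is a 32- or 64-bit binary:
file "$(which scp)"
```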
Answer (32 votes)
rsync is very well suited to transferring large files over SSH because it can resume transfers that were interrupted. Since it uses checksums to detect identical file blocks, the resume feature is quite robust.

It is somewhat surprising that your sftp/scp versions do not seem to support large files; even with 32-bit binaries, LFS support should be pretty standard nowadays.
4
Given that a large part of the file is already transferred, rsync is a good idea now. Use the -P option to both get progress indication and instruct the receiver to keep an incomplete file in case the transfer is interrupted again.
– Simon Richter, Mar 17 '15 at 0:02
Answer (20 votes)
I'm not sure about the file size limits of SCP and SFTP, but you can try working around the problem with split:

split -b 1G matlab.iso

This creates 1 GiB pieces which, by default, are named xaa, xab, xac, and so on. You can then use scp to transfer the pieces:

scp xa* xxx@xxx:

Then, on the remote system, recreate the original file with cat:

cat xa* > matlab.iso

Of course, the penalties for this workaround are the time taken by the split and cat operations, as well as the extra disk space needed on both the local and remote systems.
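After reassembling the pieces, it is worth verifying the result with a checksum on both ends (not part of the original answer, but a cheap safeguard):

```shell
# Locally, before splitting:
sha256sum matlab.iso
# On the remote host, after cat has rebuilt the file:
sha256sum matlab.iso
# The two digests must be identical; a mismatch means a piece was
# corrupted or lost in transit.
```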
1
Good idea. I already transferred the file with a USB drive, but this would probably have been more convenient. Not as convenient as getting scp and sftp to work correctly, though.
– eimrek, Mar 16 '15 at 17:46
Accepted answer posted Mar 23 '15 at 21:25 by arielf
rsync answer posted Mar 16 '15 at 20:38 by maxschlepzig
split answer posted Mar 16 '15 at 17:23 by spinup