Transferring large (8 GB) files over ssh

I tried it with SCP, but it says "Negative file size".



>scp matlab.iso xxx@xxx:/matlab.iso
matlab.iso: Negative file size


I also tried SFTP; it worked fine until 2 GB of the file had transferred, then it stopped:



sftp> put matlab.iso
Uploading matlab.iso to /home/x/matlab.iso
matlab.iso -298% 2021MB -16651.-8KB/s 00:5d
o_upload: offset < 0


Any idea what could be wrong? Don't SCP and SFTP support files larger than 2 GB? If not, how can I transfer bigger files over SSH?



The destination file system is ext4 and the Linux distribution is CentOS 6.5. The filesystem already holds accessible large files (up to 100 GB).

scp sftp large-files

asked Mar 16 '15 at 16:59 by eimrek · edited Aug 26 at 14:42 by Jeff Schaller · 25 votes

  • Looks like an overflow of a size variable. But AFAIK scp/sftp have no size limit. What is the destination file system? Does it support LARGEFILES? – Milind Dumbare, Mar 16 '15 at 17:09

  • What about the sftp and scp applications themselves? You can find this out by running the file command against their binaries. – mdpc, Mar 16 '15 at 18:54

  • @shepherd - yes. – mdpc, Mar 16 '15 at 19:12

  • 32-bit applications can access large files if they're compiled with -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 (see the sketch after these comments). But if you're running a 64-bit 6.5 system, it'd probably be easier to have the admins install openssh-5.3p1-94.el6_6.1.x86_64 and openssh-server-5.3p1-94.el6_6.1.x86_64 from the standard repos. – Mark Plotnick, Mar 16 '15 at 21:02

  • lol at software using signed integers for file size – Lightness Races in Orbit, Mar 17 '15 at 17:04
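
As a minimal sketch of the compile-time flags from Mark Plotnick's comment (myprog.c is a hypothetical source file, not something from this thread):

gcc -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 \
    -o myprog myprog.c
# With _FILE_OFFSET_BITS=64, off_t is 64 bits even in a 32-bit build,
# so file offsets beyond 2 GB no longer overflow.
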
3 Answers

Answer (accepted, 6 votes)

The original problem (based on the comments above) was that the scp executable on the 64-bit system was a 32-bit application. A 32-bit application that isn't compiled with large-file support uses a signed 32-bit file offset (off_t), so offsets are limited to 2^31 - 1 bytes, roughly 2 GB. That matches the symptoms: the transfer died right around 2 GB, and the offset wrapped negative ("Negative file size", "offset < 0").



You can tell whether scp is a 32-bit binary by running the file command on it:



file `which scp`


On most modern systems it will be 64-bit, in which case no offset truncation occurs:



$ file `which scp`
/usr/bin/scp: ELF 64-bit LSB shared object, x86-64 ...


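For contrast, a 32-bit build would report something like this (illustrative output, not captured from the asker's system):

$ file /usr/bin/scp
/usr/bin/scp: ELF 32-bit LSB executable, Intel 80386 ...
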
A 32-bit application can still support large files, but it has to be compiled with large-file support, which apparently wasn't done in this case.



The simplest fix is probably to use a standard 64-bit distribution, where applications are compiled as 64-bit by default.

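A quick way to check for this kind of mismatch (a sketch; the example output assumes a 64-bit kernel):

uname -m           # x86_64 here means a 64-bit kernel
file `which scp`   # the binary should be 64-bit too, or 32-bit built with LFS
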
– arielf, answered Mar 23 '15 at 21:25


Answer (32 votes)

Rsync is well suited to transferring large files over ssh because it can resume transfers that were interrupted for any reason. Since it uses hash functions to detect equal file blocks, the resume feature is quite robust.



It is kind of surprising that your sftp/scp versions do not seem to support large files; even with 32-bit binaries, LFS support should be pretty standard nowadays.


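A minimal sketch of such a transfer (user, host, and destination path are placeholders, not from the original question):

# -a preserves permissions and timestamps; -P is --partial --progress:
# keep partially transferred files and show progress.
# Re-running the same command resumes an interrupted transfer.
rsync -aP matlab.iso xxx@xxx:/home/x/
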
– maxschlepzig, answered Mar 16 '15 at 20:38

  • Given that a large part of the file has already been transferred, rsync is a good idea now. Use the -P option to both get a progress indication and instruct the receiver to keep an incomplete file in case the transfer is interrupted again. – Simon Richter, Mar 17 '15 at 0:02

Answer (20 votes)

    I'm not sure about the file size limits of SCP and SFTP, but you might try working around the problem with split:



    split -b 1G matlab.iso


This creates 1 GiB pieces which, by default, are named xaa, xab, xac, and so on. You can then use scp to transfer them:



    scp xa* xxx@xxx:


Then, on the remote system, recreate the original file with cat:



    cat xa* > matlab.iso


    Of course, the penalties for this workaround are the time taken in the split and cat operations, as well as the extra disk space needed on the local and remote systems.



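One optional safeguard, not part of the original answer: verify the reassembled file with a checksum on each end and compare the results.

md5sum matlab.iso    # run locally and on the remote host; the sums should match
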
– spinup, answered Mar 16 '15 at 17:23

  • Good idea. I already transferred the file with a USB drive, but this would probably have been more convenient. Not as convenient as getting scp and sftp to work correctly, though. – eimrek, Mar 16 '15 at 17:46