How to efficiently encrypt a backup via gpg

I have the following problem. I currently need to store my backup on a cloud solution like Dropbox, since my local NAS is broken. That's why I have to encrypt my backup. I'm using rsnapshot to generate it.



On the NAS I didn't encrypt it, so I'm not experienced with this. What I've done is zip the latest backup and simply encrypt it via gpg. However, it's still encrypting. My backup is around 50 GB, and I've never encrypted such a big file. Is there a way to encrypt big files like this more efficiently, or am I doing something wrong?
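For reference, a minimal sketch of the kind of workflow described above; the file names, paths, and cipher choice are illustrative placeholders, not taken from the question:

    # archive the latest rsnapshot snapshot, then encrypt the archive
    tar -czf backup.tar.gz /path/to/rsnapshot/daily.0
    gpg --symmetric --cipher-algo AES256 backup.tar.gz   # writes backup.tar.gz.gpg

    # or stream straight into gpg to skip the intermediate archive on disk
    tar -cz /path/to/rsnapshot/daily.0 | gpg --symmetric --cipher-algo AES256 -o backup.tar.gz.gpg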







asked May 8 at 19:08 by math















  • There are basically two ways you will probably want to look at doing this in the future. Rather than making one giant archive and encrypting that, it might be faster (or at least more manageable) to encrypt the individual files and archive those (see the sketch after these comments). Alternately, look into encrypting the entire filesystem itself, but that would not apply to something like Dropbox or AWS S3.
    – DopeGhoti, May 8 at 19:12










  • I would also suggest finding another encryption solution, for example a combination of LVM snapshots with dm-crypt. That way you have fully encrypted filesystem backups.
    – Lucas Ramage, May 8 at 19:33
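A minimal sketch of the per-file approach from the first comment, using GNU find and tar; the paths and the key ID BACKUPKEY are placeholders, not taken from the comments:

    # encrypt each file individually; gpg writes an encrypted copy next to it as <name>.gpg
    find /path/to/rsnapshot/daily.0 -type f ! -name '*.gpg' \
        -exec gpg --encrypt --recipient BACKUPKEY {} \;

    # archive only the encrypted copies for upload
    find /path/to/rsnapshot/daily.0 -type f -name '*.gpg' \
        | tar -cf backup-encrypted.tar -T -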














1 Answer















The time it takes to encrypt is proportional to the size of the data, plus some constant overhead. You can't save time for the whole operation by splitting the data, except by taking advantage of multiple cores so that it takes the same CPU time overall (or very slightly more) but less wall-clock time. Splitting can of course be advantageous if you later want to access part of the data.
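If wall-clock time is what matters, one way to use several cores is to split the archive and run several gpg processes in parallel. A rough sketch, assuming GNU split and xargs and a placeholder key ID BACKUPKEY (none of these names come from the answer):

    # split the archive into 1 GB chunks: backup.tar.gz.aa, backup.tar.gz.ab, ...
    split -b 1G backup.tar.gz backup.tar.gz.

    # encrypt up to 4 chunks at a time; each chunk produces <chunk>.gpg
    ls backup.tar.gz.?? | xargs -P 4 -n 1 gpg --encrypt --recipient BACKUPKEY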



GnuPG compresses data before encrypting it. If the data is already compressed, this won't do anything useful and may slow the process down a little.
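Since the backup here is already a zip archive, it may help slightly to tell gpg to skip its own compression; a minimal sketch (the file name is a placeholder):

    # already-compressed input: disable GnuPG's compression layer
    gpg --symmetric --compress-algo none backup.zip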



I recommend duplicity to make encrypted backups. It takes care of both collecting files and calling GPG and it knows how to do incremental backups. It splits the data into multiple volumes, so it can save wall-clock time by encrypting one volume while it's collecting files for the next one.
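A minimal duplicity invocation might look like the following; the key ID, volume size, source path, and target URL are placeholders (duplicity supports many storage backends, and the right URL depends on where the backup is going):

    # full (first run) or incremental (later runs) backup, encrypted to the given key,
    # split into 250 MB volumes
    duplicity --encrypt-key BACKUPKEY --volsize 250 \
        /path/to/data file:///mnt/offsite-staging

    # restore later with:
    # duplicity restore file:///mnt/offsite-staging /path/to/restore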



The first time you back up 50GB is going to be slow regardless. If you have AES acceleration on your hardware, it helps (as long as you make sure that GPG is using AES — GnuPG used CAST-5 by default before version 2.1, but it uses your public key's preferences and that should default to AES even in GPG 1.4 or 2.0).
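To make sure AES is actually used, the cipher can be forced per run or set as a preference in ~/.gnupg/gpg.conf; a sketch with illustrative settings:

    # one-off: force AES-256 for a symmetric run
    gpg --symmetric --cipher-algo AES256 backup.tar.gz

    # or persistently, in ~/.gnupg/gpg.conf:
    #   personal-cipher-preferences AES256 AES192 AES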






answered May 8 at 19:59 by Gilles (accepted)




















