How to efficiently encrypt a backup via GPG
I have the following problem: I currently need to store my backup on a cloud solution like Dropbox, because my local NAS is broken, and that's why I have to encrypt the backup. I'm using rsnapshot to generate it.
On the NAS I didn't encrypt anything, so I'm not experienced with this. What I've done is zip the latest backup and simply encrypt it via gpg. However, it's still encrypting. My backup is around 50 GB, and I've never encrypted such a big file before. Is there a way to encrypt files this big more efficiently, or am I doing something wrong?
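For reference, the workflow described above would look roughly like this. This is only a sketch: the paths are made up, and the question does not say whether symmetric or public-key encryption was used (symmetric is assumed here).

    # Zip the most recent rsnapshot snapshot (path is illustrative)
    zip -r backup.zip /path/to/rsnapshot/daily.0
    # Encrypt the archive with a passphrase (symmetric encryption assumed)
    gpg --symmetric --output backup.zip.gpg backup.zip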
gpg rsnapshot
asked May 8 at 19:08 by math (1,017)
There are basically two ways you will probably want to look at doing this in the future. Rather than making one giant archive and encrypting that, it might be faster (or at least more manageable) to encrypt the individual files and archive those. Alternatively, look into encrypting the entire filesystem itself, but that would not apply to something like Dropbox or AWS S3.
– DopeGhoti, May 8 at 19:12
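A minimal sketch of the per-file approach from this comment, assuming public-key encryption to a key you own (the path and recipient address are made up):

    # Encrypt each file individually; gpg writes FILE.gpg next to each FILE
    find /path/to/backup -type f ! -name '*.gpg' \
        -exec gpg --encrypt --recipient you@example.com {} \;
    # Archive only the encrypted copies
    find /path/to/backup -type f -name '*.gpg' -print0 \
        | tar -cf backup-encrypted.tar --null --files-from=-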
I would also suggest finding another encryption solution, for example a combination of LVM snapshots with dm-crypt. That way you get fully encrypted filesystem backups.
– Lucas Ramage, May 8 at 19:33
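A rough sketch of the dm-crypt idea, using a LUKS container file as the backup target. All names, sizes, and mount points are made up, the LVM snapshot step is omitted, and on older cryptsetup versions the container file may first need to be attached via losetup.

    # Create and format an encrypted container file (luksFormat will prompt)
    truncate -s 60G backup.luks
    cryptsetup luksFormat backup.luks
    cryptsetup open backup.luks backup_crypt
    mkfs.ext4 /dev/mapper/backup_crypt
    # Mount it, copy the snapshot in, then close it again
    mount /dev/mapper/backup_crypt /mnt/backup
    rsync -a /path/to/rsnapshot/daily.0/ /mnt/backup/
    umount /mnt/backup
    cryptsetup close backup_crypt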
1 Answer
Accepted answer (score 3)
The time it takes to encrypt is proportional to the size of the data, plus some constant overhead. You can't save time for the whole operation by splitting the data, except by taking advantage of multiple cores so that it takes the same CPU time overall (or very slightly more) but less wall-clock time. Splitting can of course be advantageous if you later want to access only part of the data.
GnuPG compresses data before encrypting it. If the data is already compressed, this won't do anything useful and may slow the process down a little.
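If the data is already a zip archive, GnuPG's own compression can be switched off explicitly. A minimal example, with an illustrative filename and symmetric encryption assumed:

    # Skip GnuPG's compression step since the zip is already compressed
    gpg --symmetric --compress-algo none --output backup.zip.gpg backup.zip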
I recommend duplicity to make encrypted backups. It takes care of both collecting files and calling GPG and it knows how to do incremental backups. It splits the data into multiple volumes, so it can save wall-clock time by encrypting one volume while it's collecting files for the next one.
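A minimal duplicity run might look like this; the key ID and paths are placeholders, and a local file:// target is shown rather than an actual cloud backend:

    # The first run makes a full backup; later runs are incremental automatically
    duplicity --encrypt-key ABCD1234 /path/to/rsnapshot/daily.0 file:///mnt/staging
    # Restoring swaps source and destination
    duplicity restore file:///mnt/staging /path/to/restore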
The first time you back up 50 GB is going to be slow regardless. If you have AES acceleration on your hardware, it helps, as long as you make sure that GPG is using AES: GnuPG used CAST-5 by default before version 2.1, but it follows your public key's preferences, and those should default to AES even in GPG 1.4 or 2.0.
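One way to check for hardware AES support on Linux and to force AES explicitly. The cipher override is shown for illustration only (symmetric mode assumed); with current defaults it should not be necessary.

    # Non-empty output means the CPU advertises AES-NI
    grep -m1 -o aes /proc/cpuinfo
    # Pin the symmetric cipher to AES-256
    gpg --symmetric --cipher-algo AES256 --output backup.zip.gpg backup.zip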
answered May 8 at 19:59 by Gilles (503k)