Does ZFS scrub update compression and copies on existing data?
I know that ZFS properties like copies and compression affect only newly written data.
However, I wonder whether a scrub would update existing data as well.
Let's say I created a pool and set compression=lz4 and copies=2 before writing 1TB of files.
Later I decide I no longer need the ditto blocks, and I would also like to switch to a different compression algorithm.
If I now set copies=1 and compression=gzip-9, is there a way to apply these settings to data that is already written to the pool?
Would a scrub do that for me?
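For reference, the scenario described above would look something like this (the dataset name tank/data is a placeholder):

```shell
# Initial settings, applied before writing the 1 TB of files:
zfs set compression=lz4 tank/data
zfs set copies=2 tank/data
# ... write data ...
# Later change; these take effect only for blocks written afterwards:
zfs set compression=gzip-9 tank/data
zfs set copies=1 tank/data
```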
linux zfs
asked Feb 17 at 22:24
unfa
add a comment |Â
add a comment |Â
1 Answer
No.
Changing dataset properties like compression and copies affects only files written after the change. To apply changes like this to existing files, you would need to copy them and mv them over the original. This will, of course, break any connection to any prior snapshots of the same filename (and also to any hard links to the file, as the inode will be different).
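As a minimal sketch of that copy-and-replace step (the mktemp file stands in for a real file on the ZFS dataset, e.g. /tank/data/bigfile):

```shell
#!/bin/sh
# Rewrite a file so its blocks are re-allocated under the dataset's
# *current* compression/copies settings. The copy is written fresh,
# so it picks up the new properties; the mv then replaces the original.
# WARNING: this breaks hard links and block sharing with snapshots.
f=$(mktemp)                      # stand-in for a file on the ZFS dataset
printf 'example data\n' > "$f"

cp -p -- "$f" "$f.tmp"           # copy is written with the new properties
mv -- "$f.tmp" "$f"              # replace the original in one rename
cat "$f"                         # → example data
rm -f "$f"
```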
Alternatively, to apply such changes to an entire pool or dataset, you could zfs send a snapshot to a different pool (e.g. a backup pool), destroy the dataset on the original pool (or destroy the pool and re-create it), and then zfs send it back. Note: you cannot do this with zfs send's -R (--replicate) option, because that also turns on send's -p (--props) option, which would re-apply the old property values on receive. See man zfs and search for zfs send for more details.
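A sketch of that round trip, with tank/data and backup as placeholder names (note the plain send, without -R or -p):

```shell
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive backup/data   # copy out, properties not preserved
zfs destroy -r tank/data                               # remove the original dataset
zfs set compression=gzip-9 tank                        # new settings on the parent,
zfs set copies=1 tank                                  # inherited by the received dataset
zfs send backup/data@migrate | zfs receive tank/data   # copy back; all blocks rewritten
```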
A scrub (zpool scrub) checks the existing data on a pool and rewrites any corrupted copies if there is sufficient redundancy to have a good copy that matches the checksum; if not, it just warns of the uncorrectable error. It never re-compresses data or changes the number of ditto blocks.
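Note that scrubs are run with the zpool command, not zfs; a typical invocation, with tank as a placeholder pool name:

```shell
zpool scrub tank        # read and verify every block, repairing from redundancy where possible
zpool status -v tank    # shows scrub progress and lists any unrecoverable errors
```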
edited Feb 19 at 0:04
answered Feb 18 at 1:48
cas