How to distinguish ZFS pool from mounts?

I'd like to zero-fill all the free space in the partitions inside a FreeBSD virtual machine.



The virtual machine contains a 512K freebsd-boot slice (hope I got the terminology right here), followed by a 2.0G freebsd-swap slice and a 254G freebsd-zfs slice.



# gpart show da0
=>        34  536870845  da0  GPT  (256G)
          34          6       - free -  (3.0K)
          40       1024    1  freebsd-boot  (512K)
        1064        984       - free -  (492K)
        2048    4194304    2  freebsd-swap  (2.0G)
     4196352  532672512    3  freebsd-zfs  (254G)
   536868864       2015       - free -  (1.0M)


It's the layout the FreeBSD 10.2 installer creates by default when you pick the "root on ZFS" option.



In the past, with UFS, I would simply use mount -t ufs to list all UFS mounts and create a zero-filled file on each of them until no space was left.
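
For reference, by "create a zero-filled file" I mean something along these lines, where the mount point and file name are just placeholders:

dd if=/dev/zero of=/some/ufs/mountpoint/zero.bin bs=1m
rm /some/ufs/mountpoint/zero.bin

dd is expected to stop with "No space left on device"; at that point the free space has been overwritten with zeros and the file can be removed.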



However, with ZFS I am no longer sure. Now I get:



# mount -t zfs
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)


which doesn't tell me anything beyond the names, and as I understand it those names are merely a convention (and relying on conventions is bad style). Also, repeating the zero-fill operation on each of those datasets seems a bit silly.



Would it then be sufficient to find all ZFS pools (zpool list -pH | cut -f 1) and look for those names in the list that mount -t zfs gives me, i.e. ignoring the child datasets belonging to each pool?



In short, would it be sufficient to fill the free space on the mount points listed by the following (using Bash, but it likely also works with Zsh):



mount -t zfs | awk '$1 ~ /^'"$(zpool list -pH | cut -f 1)"'$/ {print $3}'


or does the fact that ZFS has those datasets change which parts I need to zero-fill before compacting the virtual machine's disk on the host side?
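
To make that concrete, what I have in mind is a loop roughly like this (an untested sketch; it assumes the pipeline above yields exactly the pools' own mount points, and the file name zero.bin is arbitrary):

for mnt in $(mount -t zfs | awk '$1 ~ /^'"$(zpool list -pH | cut -f 1)"'$/ {print $3}'); do
    dd if=/dev/zero of="$mnt/zero.bin" bs=1m
    sync
    rm "$mnt/zero.bin"
done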




Output when listing the ZFS pools and the mounts:



# mount -t zfs
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
# zpool list -pH
zroot 270582939648 1668632576 268914307072 - 0% 0 1.00x ONLINE -









freebsd zfs

asked Dec 3 '15 at 17:35, edited Dec 3 '15 at 22:17
– 0xC0000022L

  • One complication would be that each of those zfs filesystems may have different quotas or reservations - so zero-filling just one of the filesystems with a quota smaller than the total zpool won't zero-fill the entire disk. Check with zfs get -t filesystem quota.
    – cas
    Dec 3 '15 at 20:51










  • @cas: but wouldn't the root dataset (/zroot in my above example) be a good candidate despite quotas in the child datasets? Of course, if that one has a quota, all bets are off.
    – 0xC0000022L
    Dec 3 '15 at 22:18










  • Yep, should be OK unless it has a quota set.
    – cas
    Dec 3 '15 at 22:26










  • Note that creating files with all-zeroes content won't necessarily write all zeroes to disk, because ZFS supports transparent compression. Check zfs get -r compression zroot. If it isn't off everywhere, then writing huge sequences of zeroes will actually write something else to storage.
    – a CVn
    Jul 21 '16 at 6:25










  • It seems that, as long as compression is off, filling the available space with an all-zeros file on any filesystem that has no quota set should write zeros to all of the empty space.
    – airhuff
    Jan 17 '17 at 6:53
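
Combining the checks suggested in the comments above, a single command along these lines (assuming the pool is named zroot, as in the question) should reveal any quotas or compression settings that would interfere with zero-filling:

zfs get -r -t filesystem quota,compression zroot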














2 Answers

zpool list will give you a list of the ZFS pools, and zfs list will give you a list of all the ZFS datasets.
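
For use in a script, both commands accept -H (skip the header line) and -o (select columns), for example:

zpool list -H -o name                  # just the pool names
zfs list -H -o name,mountpoint         # every dataset and its mount point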






answered Feb 2 '17 at 23:01
– sleepyweasel








































    You're making this more complicated than it needs to be.



    You just need to zero-fill each pool, not each filesystem! (The latter doesn't make a lot of sense in ZFS.)



    So just iterate over all pools (via zpool list). For each pool, do the following (see the sketch below the list):



    • Create a new ZFS filesystem with disabled(!) compression

    • Create a new file in that filesystem, filled with zeros

    • Sync

    • Destroy that ZFS filesystem

    Note that the above algorithm works correctly even in the special case where a pool doesn't contain any filesystem (either no filesystem yet, or no filesystem any more).
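
    A minimal shell sketch of those steps (untested; the dataset name zerofill and the mount point /mnt/zerofill are arbitrary placeholders):

    for pool in $(zpool list -H -o name); do
        zfs create -o compression=off -o mountpoint=/mnt/zerofill "$pool/zerofill"
        dd if=/dev/zero of=/mnt/zerofill/zero.bin bs=1m
        sync
        zfs destroy "$pool/zerofill"
    done

    dd is expected to fail with "No space left on device" on each pool; that is exactly the point at which the pool's free space has been overwritten with zeros.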






    answered Dec 4 at 21:12
    – vog



















