ZFS magically vanishing available storage space

OK, this is driving me nuts. I'm running ZFS on a system, and my root partition has been shrinking with seemingly no explanation. I have now run out of space and can't find where it all went.



A simple df shows the following (you can ignore the TV filesystems; I'm only concerned with the root):



[root@SV02 /]# df -h
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/opus-2457409-2017-11-07-release
90G 6.6G 35G 16% /
/devices 0K 0K 0K 0% /devices
/dev 0K 0K 0K 0% /dev
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 11G 404K 11G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
bootfs 0K 0K 0K 0% /system/boot
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
41G 6.6G 35G 16% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 11G 4K 11G 1% /tmp
swap 11G 60K 11G 1% /var/run
TV05 168T 41K 11T 1% /TV05
TV05/Media05 168T 155T 11T 94% /TV05/Media05
TV05/OpenDrives 168T 1.1T 11T 9% /TV05/OpenDrives
TV08 54T 53K 2.2T 1% /TV08
TV08/Media08 54T 24T 383G 99% /TV08/Media08
TV08/MxSFX 54T 26T 383G 99% /TV08/MxSFX
TV08/RMedia04 54T 51K 383G 1% /TV08/RMedia04
rpool/export 90G 32K 35G 1% /export
rpool/export/home 90G 86K 35G 1% /export/home
rpool/export/home/open
90G 404M 35G 2% /export/home/open
rpool 90G 43K 35G 1% /rpool


The used/available figures don't seem consistent: df reports the root filesystem as 90G with only 6.6G used, yet only 35G available.



Trying to track it down:



[root@SV02 /]# du -sh *
0K bin
10M boot
10M core
2.0M dev
430K devices
58M etc
405M export
0K home
141M kernel
48M lib
2K media
24K mnt
3K Mounts
0K net
1.5G opt
174M platform
3.6G proc
4K rmdisk
10M root
23K rpool
1.9M sbin
2K scripts
5.1M system
12K tmp
1.5G usr
3.1G var
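
One caveat I noticed: the du above descends into /proc, /export and the other mounted filesystems, so it doesn't cleanly measure what lives on the root dataset itself. A rough sketch of how I'd get a total for the root filesystem alone, assuming this du supports -d to stay on one filesystem as Solaris-derived du does (GNU du uses -x for the same thing):

# Total of everything that actually lives on the root filesystem,
# without descending into /proc, /export, the TV pools, etc.
du -dsh /

# GNU du spells the same option -x:
# du -xsh /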


OK, nothing there. I thought a snapshot might be eating the space, but that doesn't seem to be the case:



[root@SV02 /]# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
TV05 156T 11.4T 41.2K /TV05
TV05/Media05 155T 11.4T 155T /TV05/Media05
TV05/OpenDrives 1.09T 11.4T 1.09T /TV05/OpenDrives
TV08 52.1T 2.16T 53.1K /TV08
TV08/Media08 24.4T 385G 24.4T /TV08/Media08
TV08/MxSFX 25.9T 385G 25.9T /TV08/MxSFX
TV08/RMedia04 51.5K 385G 51.5K /TV08/RMedia04
rpool 55.5G 34.6G 43.5K /rpool
rpool/ROOT 38.6G 34.6G 31K legacy
rpool/ROOT/C_Backup1 1.24M 34.6G 5.61G /
rpool/ROOT/C_backup2 1.33M 34.6G 5.61G /
rpool/ROOT/napp-it-0.8l3 3.00M 34.6G 2.10G /
rpool/ROOT/napp-it-0.9e1 1.66M 34.6G 5.61G /
rpool/ROOT/nfsv4 54K 34.6G 2.11G /
rpool/ROOT/openindiana 18.3M 34.6G 2.02G /
rpool/ROOT/opus-2457044-2015-01-31-install 65.4M 34.6G 31.6G /a
rpool/ROOT/opus-2457044-2015-01-31-preinstall 1K 34.6G 5.61G /
rpool/ROOT/opus-2457044-2015-05-19-pre15 45K 34.6G 6.32G /
rpool/ROOT/opus-2457044-2016-10-03-backup 52K 34.6G 31.3G /
rpool/ROOT/opus-2457409-2016-10-04-install 15.5M 34.6G 31.7G /a
rpool/ROOT/opus-2457409-2016-10-04-preinstall 52K 34.6G 31.5G /
rpool/ROOT/opus-2457409-2017-11-07-release 38.5G 34.6G 6.57G /
rpool/ROOT/opus-2457409-2017-11-07-release@install 4.40M - 1.56G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-02-25-22:01:25 6.64M - 1.58G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-02-25-22:05:33 84.6M - 2.02G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-03-19-23:25:59 58.0M - 2.11G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-04-12-17:55:40 0 - 2.10G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-04-12-17:55:48 0 - 2.10G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-04-12-18:13:10 1.44M - 2.10G -
rpool/ROOT/opus-2457409-2017-11-07-release@2014-04-12-18:14:31 2.17M - 2.12G -
rpool/ROOT/opus-2457409-2017-11-07-release@2015-01-31-21:55:47 15.3M - 5.61G -
rpool/ROOT/opus-2457409-2017-11-07-release@2015-01-31-22:34:52 9.18M - 5.61G -
rpool/ROOT/opus-2457409-2017-11-07-release@2015-01-31-22:43:54 362K - 5.61G -
rpool/ROOT/opus-2457409-2017-11-07-release@2015-01-31-22:44:05 362K - 5.61G -
rpool/ROOT/opus-2457409-2017-11-07-release@2015-05-19-21:49:36 227M - 6.32G -
rpool/ROOT/opus-2457409-2017-11-07-release@2016-10-04-16:11:30 3.15M - 31.3G -
rpool/ROOT/opus-2457409-2017-11-07-release@2016-10-04-16:34:30 936K - 31.5G -
rpool/ROOT/opus-2457409-2017-11-07-release@2016-10-04-16:34:42 986K - 31.5G -
rpool/ROOT/opus-2457409-2017-11-07-release@2016-10-04-16:53:00 112M - 31.8G -
rpool/ROOT/pre_napp-it-0.8l3 35K 34.6G 1.58G /
rpool/ROOT/pre_napp-it-0.9e1 71K 34.6G 2.10G /
rpool/ROOT/sv02-4-10-14 70K 34.6G 2.10G /
rpool/ROOT/sv02-4-10-14-v2 4.17M 34.6G 2.12G /
rpool/dump 8.00G 34.6G 8.00G -
rpool/export 404M 34.6G 32K /export
rpool/export/home 404M 34.6G 86.5K /export/home
rpool/export/home/open 404M 34.6G 404M /export/home/open
rpool/swap 8.50G 43.0G 132M -


There are 34 gigs available, but where the rest of the space went, I do not know. I'm at a loss, and unfortunately I don't have a good enough grasp of ZFS to troubleshoot this properly. It persists across reboots, and I haven't deleted any large files recently, so I don't think it's a process holding open a deleted file... Save me, you're my only hope!
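
In case it helps, this is the kind of per-dataset breakdown I was planning to gather next; it's only a sketch, assuming this ZFS version supports the zfs list -o space shorthand and the usedby* properties:

# Split USED for every dataset into snapshot / dataset / refreservation / children components
zfs list -o space -r rpool

# The same figures as explicit properties, just for the boot environments
zfs get -r usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation rpool/ROOT

# Pool-level size, allocated and free space for comparison
zpool list rpool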









solaris zfs storage




