df -h results differ from vgdisplay / lvdisplay

I'm trying to remove a disk that was added to the VM, since the extra space is no longer needed. However, almost all of the space available in the VG appears to be allocated.



I have managed to shrink the root filesystem from roughly 1.5 TB down to about 6 G
with resize2fs, but lvdisplay still reports the old size.
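For context, resize2fs only shrinks the filesystem; the LV underneath keeps its old size until it is reduced separately. A minimal sketch of the usual sequence (sizes here are examples; the root filesystem cannot be shrunk while mounted, so this has to run from a live/rescue environment):

```shell
# Shrink the filesystem first, then the LV on top of it.
e2fsck -f /dev/zoneminder-vg/root          # mandatory check before an offline resize
resize2fs /dev/zoneminder-vg/root 100G     # shrink the ext4 filesystem to 100 GiB
lvreduce -L 100G /dev/zoneminder-vg/root   # shrink the LV to match
resize2fs /dev/zoneminder-vg/root          # optionally re-grow the fs to fill the LV exactly
```

The critical rule is that lvreduce must never go below the filesystem size, otherwise data at the end of the LV is cut off.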



Here is the output of some commands.



root@zoneminder:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root zoneminder-vg -wi-ao---- 1.52t
swap_1 zoneminder-vg -wi-ao---- 976.00m
root@zoneminder:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 zoneminder-vg lvm2 a-- 900.00g 0
/dev/sda5 zoneminder-vg lvm2 a-- 699.52g 46.57g
root@zoneminder:~# vgs
VG #PV #LV #SN Attr VSize VFree
zoneminder-vg 2 2 0 wz--n- 1.56t 46.57g


df -h



Filesystem Size Used Avail Use% Mounted on
udev 7.9G 0 7.9G 0% /dev
tmpfs 1.6G 8.9M 1.6G 1% /run
/dev/mapper/zoneminder--vg-root 5.6G 4.9G 431M 92% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda1 472M 108M 340M 25% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/1000


pvdisplay



 --- Physical volume ---
PV Name /dev/sda5
VG Name zoneminder-vg
PV Size 699.52 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 179077
Free PE 11922
Allocated PE 167155
PV UUID SVGqoc-SQ42-tDzp-Qc7H-n90f-1g9n-x0eLWe

--- Physical volume ---
PV Name /dev/sda3
VG Name zoneminder-vg
PV Size 900.00 GiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 230400
Free PE 0
Allocated PE 230400
PV UUID Cdanv0-2pLJ-Yp2n-3zsl-JvjH-72QS-Ciwhaj


lvdisplay



 --- Logical volume ---
LV Path /dev/zoneminder-vg/root
LV Name root
VG Name zoneminder-vg
LV UUID poThtY-v96W-e2Ai-nan7-ckqn-aeBm-T0Kqji
LV Write Access read/write
LV Creation host, time zoneminder, 2018-08-01 22:22:13 +0200
LV Status available
# open 1
LV Size 1.52 TiB
Current LE 397311
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

--- Logical volume ---
LV Path /dev/zoneminder-vg/swap_1
LV Name swap_1
VG Name zoneminder-vg
LV UUID SXQ36r-5Kum-Z3Wa-m9DE-CBVb-h9Wx-kmctKT
LV Write Access read/write
LV Creation host, time zoneminder, 2018-08-01 22:22:13 +0200
LV Status available
# open 2
LV Size 976.00 MiB
Current LE 244
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1


vgdisplay



 --- Volume group ---
VG Name zoneminder-vg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 1.56 TiB
PE Size 4.00 MiB
Total PE 409477

Alloc PE / Size 397555 / 1.52 TiB
Free PE / Size 11922 / 46.57 GiB

VG UUID lTo8U0-dIL9-Yye3-RVYk-rJu6-w6WQ-zIpL8f


How can I remove /dev/sda3 without touching any data on the LVM?
And then shrink the volume group back down to 100 G or so?
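For the /dev/sda3 part specifically: once no LV extents remain on it (pvmove can relocate them, provided there are enough free extents elsewhere in the VG), the standard sequence to evacuate and drop a PV is:

```shell
pvmove /dev/sda3                   # migrate any allocated extents off this PV
vgreduce zoneminder-vg /dev/sda3   # remove the now-empty PV from the volume group
pvremove /dev/sda3                 # wipe the LVM label from the partition
```

After that, the partition itself can be deleted from the partition table and the virtual disk shrunk on the hypervisor side.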



fdisk -l



Disk /dev/sda: 1.6 TiB, 1717986918400 bytes, 3355443200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x21880f4a

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 999423 997376 487M 83 Linux
/dev/sda2 1001470 1468004351 1467002882 699.5G 5 Extended
/dev/sda3 1468004352 3355443199 1887438848 900G 8e Linux LVM
/dev/sda5 1001472 1468004351 1467002880 699.5G 8e Linux LVM

Partition table entries are not in disk order.


Disk /dev/mapper/zoneminder--vg-root: 1.5 TiB, 1666443116544 bytes, 3254771712 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/zoneminder--vg-swap_1: 976 MiB, 1023410176 bytes, 1998848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@zoneminder:~#


Update #1:



@telcoM: Thank you for your kind and detailed answer.

All the actions you mentioned were done.
I also found that the data was, of course, spread across the disk, so I had to physically move the extents as well:



root@zoneminder:~# pvresize --setphysicalvolumesize 101G -v /dev/sda5
Using physical volume(s) on command line.
Archiving volume group "zoneminder-vg" metadata (seqno 9).
/dev/sda5: Pretending size is 211812352 not 1467002880 sectors.
Resizing volume "/dev/sda5" to 211812352 sectors.
Resizing physical volume /dev/sda5 from 0 to 25855 extents.
/dev/sda5: cannot resize to 25855 extents as later ones are allocated.
0 physical volume(s) resized / 1 physical volume(s) not resized


root@zoneminder:~# pvs -v --segments /dev/sda5
Using physical volume(s) on command line.
Wiping cache of LVM-capable devices
PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
/dev/sda5 zoneminder-vg lvm2 a-- 699.52g 598.57g 0 25600 root 0 linear /dev/sda5:0-25599
/dev/sda5 zoneminder-vg lvm2 a-- 699.52g 598.57g 25600 141311 0 free
/dev/sda5 zoneminder-vg lvm2 a-- 699.52g 598.57g 166911 244 swap_1 0 linear /dev/sda5:166911-167154
/dev/sda5 zoneminder-vg lvm2 a-- 699.52g 598.57g 167155 11922 0 free

root@zoneminder:~# sudo pvmove --alloc anywhere /dev/sda5:166911-167154 /dev/sda5:25601-25845
/dev/sda5: Moved: 0.4%
/dev/sda5: Moved: 100.0%

root@zoneminder:~# pvresize --setphysicalvolumesize 101G -v /dev/sda5
Using physical volume(s) on command line.
Archiving volume group "zoneminder-vg" metadata (seqno 12).
/dev/sda5: Pretending size is 211812352 not 1467002880 sectors.
Resizing volume "/dev/sda5" to 211812352 sectors.
Resizing physical volume /dev/sda5 from 0 to 25855 extents.
Updating physical volume "/dev/sda5"
Creating volume group backup "/etc/lvm/backup/zoneminder-vg" (seqno 13).
Physical volume "/dev/sda5" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

root@zoneminder:~# pvs -v --segments /dev/sda5
Using physical volume(s) on command line.
Wiping cache of LVM-capable devices
PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
/dev/sda5 zoneminder-vg lvm2 a-- 101.00g 44.00m 0 25600 root 0 linear /dev/sda5:0-25599
/dev/sda5 zoneminder-vg lvm2 a-- 101.00g 44.00m 25600 1 0 free
/dev/sda5 zoneminder-vg lvm2 a-- 101.00g 44.00m 25601 244 swap_1 0 linear /dev/sda5:25601-25844
/dev/sda5 zoneminder-vg lvm2 a-- 101.00g 44.00m 25845 10 0 free
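As a sanity check, the numbers in this transcript are internally consistent: 211812352 sectors of 512 bytes is exactly 101 GiB, and the 25855 usable extents minus root (25600 PE) and swap_1 (244 PE) leave 11 free extents, i.e. the 44 MiB of PFree shown above. This pure arithmetic is safe to run anywhere:

```shell
# Verify the sizes LVM reported (no devices touched).
echo $(( 211812352 * 512 ))            # bytes in 211812352 sectors
echo $(( 101 * 1024 * 1024 * 1024 ))   # bytes in 101 GiB -- same value
echo $(( (25855 - 25600 - 244) * 4 ))  # leftover space in MiB -> 44
```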


Actually what I'm trying to achieve is the following (post #18):
https://communities.vmware.com/message/2723540#2723540



I'm now stuck: /dev/sda3 has not been removed (it still shows up in fdisk -l), and the size of /dev/sda5 is still 700 G.
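This is expected: pvresize only shrinks LVM's view of the PV, it never rewrites the partition table. The partitions themselves still have to be shrunk, e.g. with parted from a live CD. A sketch, with hypothetical example boundaries (the 104GiB end point is an assumption that comfortably covers the 101 GiB PV plus the ~489 MiB partition start; check your own layout before running anything like this):

```shell
# Example only -- verify boundaries against your own fdisk/pvs output first!
parted /dev/sda rm 3                    # delete the evacuated LVM partition sda3
parted /dev/sda resizepart 5 104GiB     # shrink the logical partition holding the PV
parted /dev/sda resizepart 2 104GiB     # shrink the surrounding extended partition sda2
```

The logical partition (sda5) must be shrunk before the extended partition (sda2) that contains it, and the new end must stay beyond the shrunken PV or the PV data is destroyed.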



root@zoneminder:~# fdisk -l
Disk /dev/sda: 1.6 TiB, 1717986918400 bytes, 3355443200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x21880f4a

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 999423 997376 487M 83 Linux
/dev/sda2 1001470 1468004351 1467002882 699.5G 5 Extended
/dev/sda3 1468004352 3355443199 1887438848 900G 8e Linux LVM
/dev/sda5 1001472 1468004351 1467002880 699.5G 8e Linux LVM

Partition table entries are not in disk order.


Disk /dev/mapper/zoneminder--vg-root: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/zoneminder--vg-swap_1: 976 MiB, 1023410176 bytes, 1998848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


UPDATE #2:



Using a live CD, I've managed to use GParted and parted to remove the partition and to resize /dev/sda2, the extended partition that contains /dev/sda5 with the LVM PV.



Because of the snapshots I've made in ESXi, I need to redo all the steps, since the nesting of the VMDKs turns out to be a bit nasty.



It takes a while until the original VM has been copied (1.6 TB). After that I will post a full write-up of all the steps so that someone else can use it in the future.










      /dev/sda5 zoneminder-vg lvm2 a-- 101.00g 44.00m 25845 10 0 free
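      The numbers in the pvresize transcript are internally consistent; a quick check, using plain arithmetic on the values shown above:

```shell
# 101 GiB expressed in 512-byte sectors matches pvresize's "Pretending size":
GIB=101
SECTORS=$(( GIB * 1024 * 1024 * 2 ))
echo "$SECTORS"            # -> 211812352
# With a 4 MiB PE size, one extent is 8192 sectors. Integer division gives
# 25856, and the PV ends up one extent smaller (25855, as pvresize reported)
# because the LVM metadata area occupies the start of the PV:
echo $(( SECTORS / 8192 )) # -> 25856
```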


      Actually, what I'm trying to achieve is the following (post #18):
      https://communities.vmware.com/message/2723540#2723540



      I'm now stuck: /dev/sda3 has not been removed (it still shows up in fdisk -l), and the size of /dev/sda5 is still 700G.



      root@zoneminder:~# fdisk -l
      Disk /dev/sda: 1.6 TiB, 1717986918400 bytes, 3355443200 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x21880f4a

      Device Boot Start End Sectors Size Id Type
      /dev/sda1 * 2048 999423 997376 487M 83 Linux
      /dev/sda2 1001470 1468004351 1467002882 699.5G 5 Extended
      /dev/sda3 1468004352 3355443199 1887438848 900G 8e Linux LVM
      /dev/sda5 1001472 1468004351 1467002880 699.5G 8e Linux LVM

      Partition table entries are not in disk order.


      Disk /dev/mapper/zoneminder--vg-root: 100 GiB, 107374182400 bytes, 209715200 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes


      Disk /dev/mapper/zoneminder--vg-swap_1: 976 MiB, 1023410176 bytes, 1998848 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes


      Update #2:



      I've managed, via a live CD, to use GParted and parted to remove the disk and resize /dev/sda2, inside which /dev/sda5 holds the LVM PV.



      Because of the snapshots I've made in ESXi, I need to redo all the steps, as the nesting of the vmdk's is a bit nasty, it seems.



      It takes a while until the original VM has been copied (1.6 TB). After this I will post a full dump of all the steps, so that someone else can use it in the future.







      lvm data pv






      edited Feb 20 at 14:08 by Serdar Ebeng
      asked Feb 19 at 15:35 by Serdar Ebeng
          2 Answers
































          Great, you've successfully shrunk the filesystem inside the root LV.



          The next step is to shrink the LV to match the new size of the filesystem. For the sake of safety, you may want to leave a bit of slack, as accidentally cutting off too much would be a far worse problem.



          Filesystem shrinking operations are always a bit more risky than extensions, especially if you are not yet quite familiar with them. So back up anything you might need, just in case something goes wrong.



          Then use tune2fs -l to get the exact block count from the filesystem:



          tune2fs -l /dev/mapper/zoneminder--vg-root | grep "Block "
          Block count: NNNNNNNNN
          Block size: XXXX


          Multiply these two numbers together to get the exact size of the filesystem, then divide by 1024 to get binary kilobytes, and again to get binary megabytes. Add one to protect against rounding errors (note the backslash: expr's * must be escaped so the shell doesn't expand it):

          expr NNNNNNNNN \* XXXX / 1024 / 1024 + 1
          SSSSSS
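          With hypothetical numbers plugged in (a filesystem of 1441792 blocks of 4096 bytes, not taken from this system), the calculation looks like this:

```shell
BLOCK_COUNT=1441792   # hypothetical "Block count" from tune2fs -l
BLOCK_SIZE=4096       # hypothetical "Block size"
SIZE_MIB=$(( BLOCK_COUNT * BLOCK_SIZE / 1024 / 1024 + 1 ))
echo "$SIZE_MIB"      # -> 5633, so lvreduce -L 5633m would be safe here
```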


          Now, shrink the LV:



          lvreduce -L SSSSSS /dev/mapper/zoneminder--vg-root


          Now you should have plenty of free space in your Zoneminder VG. Use pvs to confirm whether /dev/sda3 is now completely unused.



          If, in the pvs output, the PFree value is not equal to PSize for /dev/sda3, there are still some parts of the root LV on that PV, and you'll need to move them out of there. pvmove can easily do that. If /dev/sda3 is already completely free, you can skip this step.
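          The PSize/PFree comparison can also be scripted. The sketch below parses hypothetical pvs output (on a live system you would capture it with pvs --noheadings --units m -o pv_name,pv_size,pv_free; the sample values here are made up):

```shell
# Hypothetical capture of: pvs --noheadings --units m -o pv_name,pv_size,pv_free
sample='/dev/sda3 921600.00m 921600.00m
/dev/sda5 716308.00m 47687.00m'
# A PV is completely free when its PFree equals its PSize:
echo "$sample" | awk '$2 == $3 { print $1 " is empty" }'   # -> /dev/sda3 is empty
```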



          pvmove /dev/sda3


          This essentially says "make sda3 empty by moving all the LV data that's still in it to other PVs belonging to this same VG."



          pvmove works by mirroring a piece of the data to be moved to its new location, then "removing the mirror" from the old side. So if pvmove gets interrupted by a system crash, it's not catastrophic. Just run pvmove with no parameters to continue from where it was.



          Now, the sda3 PV should be completely empty. Remove it from the VG:



          vgreduce zoneminder-vg /dev/sda3


          At this point, /dev/sda3 will be an unattached, completely free LVM PV. You can wipe the PVID from it if you wish:



          pvremove /dev/sda3


          Now you'll be free to reuse the /dev/sda3 partition any way you like.
          (If you plan to do something that causes the partition to be overwritten anyway, the pvremove command won't be strictly necessary.)



          Now, if you want to extend the root LV to 100 GiB, here are the steps:



          lvextend -L 100G /dev/mapper/zoneminder--vg-root
          resize2fs /dev/mapper/zoneminder--vg-root


          And you're done.



          Note that I didn't say "unmount the filesystem" or "reboot the system" at any point here. It isn't necessary.






          answered Feb 19 at 18:17 by telcoM























          • thank you!! see my updated question please

            – Serdar Ebeng
            Feb 20 at 11:33











          • Think of it as layers: at the bottom, there is the physical disk. On top of it, there is the partitioning layer, then inside the partition, there is the LVM layer on top of the partition layer. And the filesystem is on top of LVM. You've now successfully manipulated the filesystem and LVM layers to make sda3 empty and unused; now you're free to use fdisk or other partitioning tool to either change the type of the partition to something else, or to delete the entire partition and possibly put something else in its place.

            – telcoM
            Feb 20 at 13:10











          • Since sda3 is physically located after sda5 on the disk, you can now just delete it. And as you also resized the sda5 PV to 101G, you can also shrink that partition to 101G (or preferably to slightly more than that, to account for rounding errors). Once that's successfully done, you should also resize sda2 as the extended partition acts as an "envelope" for the "logical" partitions (sda5 and above). Then you can tell VMware to take away the extra disk space.

            – telcoM
            Feb 20 at 13:19











          • You know, even after a lot of reading I still had no clear picture of how it all fits together. Practising it myself, with your key words, has helped me get further toward the finish line :)

            – Serdar Ebeng
            Feb 20 at 14:02

































          The LVM system has no idea of how you use your logical volumes. Even if you shrink the filesystem inside an LV, that does not change the size of the LV, nor the free space in your VG and PV (all space allocated to an LV is considered in use).



          If you have shrunk your filesystem, you can shrink the LV with the lvreduce command, but you should be extra careful not to reduce it too small to fit your filesystem, or you may lose data.
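          That caution can be turned into a small pre-flight guard before running lvreduce; the values and variable names below are hypothetical:

```shell
FS_MIB=5633      # hypothetical filesystem size in MiB (from the tune2fs numbers)
TARGET_MIB=6000  # hypothetical size you plan to pass to lvreduce -L
if [ "$TARGET_MIB" -ge "$FS_MIB" ]; then
    echo "safe: lvreduce -L ${TARGET_MIB}m"
else
    echo "refusing: target smaller than the filesystem"
fi
```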



          Once you have reduced your logical volume, you can move the used space from one PV to another with the pvmove command, and then remove the emptied PV from the VG with the vgreduce command.




























            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            1














            Great, you've successfully shrunk the filesystem inside the root LV.



            The next step is to shrink the LV to match the new size of the filesystem. For the sake of safety, you may want to leave a bit of slack, as accidentally cutting off too much would be a far worse problem.



            Filesystem shrinking operations are always bit more risky than extensions, especially if you are not yet quite familiar with them. So backup anything you might need, just in case something goes wrong.



            Then use tune2fs -l to get the exact block count from the filesystem:



            tune2fs -l /dev/mapper/zoneminder--vg-root | grep "Block "
            Block count: NNNNNNNNN
            Block size: XXXX


            Multiply these two numbers together to get the exact size of the filesystem, then divide by 1024 to get binary kilobytes, and again to get binary megabytes. Add one to protect against rounding errors:



            expr NNNNNNNNN * XXXX / 1024 / 1024 + 1
            SSSSSS


            Now, shrink the LV:



            lvreduce -L SSSSSS /dev/mapper/zoneminder--vg-root


            Now you should have plenty of free space in your Zoneminder VG. Use the pvs to confirm whether /dev/sda3 is now completely unused or not:



            If, in the pvs output, the PFree value is not equal to PSize for /dev/sda3, there are still some parts of the root LV on that PV, and you'll need to move them out of there. pvmove can easily do that. If /dev/sda3 is now completely fbree, you can skip this step.



            pvmove /dev/sda3


            This essentially says "make sda3 empty by moving all the LV data that's still in ot to other PVs belonging to this same VG."



            pvmove works by mirroring a piece of the data to be moved to its new location, then "removing the mirror" from the old side. So if pvmove gets interrupted by a system crash, it's not catastrophic. Just run pvmove with no parameters to continue from where it was.



            Now, the sda3 PV should be completely empty. Remove it from the VG:



            vgreduce zoneminder-vg /dev/sda3


            At this point, /dev/sda3 will be an unattached, completely free LVM PV. You can wipe the PVID from it if you wish:



            pvremove /dev/sda3


            Now you'll be free to reuse the /dev/sda3 partition any way you like.
            (If you plan to do something that causes the partition to be overwritten anyway, the pvremove command won't be strictly necessary.)



            Now, if you want to extend the root LV to 100 GiB, here are the steps:



            lvextend -L 100G /dev/mapper/zoneminder--vg-root
            resize2fs /dev/mapper/zoneminder--vg-root


            And you're done.



            Note that I didn't say "unmount the filesystem" or "reboot the system" at any point here. It isn't necessary.






            share|improve this answer























            • thank you!! see my updated question please

              – Serdar Ebeng
              Feb 20 at 11:33











            • Think of it as layers: at the bottom, there is the physical disk. On top of it, there is the partitioning layer, then inside the partition, there is the LVM layer on top of the partition layer. And the filesystem is on top of LVM. You've now successfully manipulated the filesystem and LVM layers to make sda3 empty and unused; now you're free to use fdisk or other partitioning tool to either change the type of the partition to something else, or to delete the entire partition and possibly put something else in its place.

              – telcoM
              Feb 20 at 13:10











            • Since sda3 is physically located after sda5 on the disk, you can now just delete it. And as you also resized the sda5 PV to 101G, you can also shrink that partition to 101G (or preferably to slightly more than that, to account for rounding errors). Once that's successfully done, you should also resize sda2 as the extended partition acts as an "envelope" for the "logical" partitions (sda5 and above). Then you can tell VMware to take away the extra disk space.

              – telcoM
              Feb 20 at 13:19











            • You know after a lot of reading there is still no perception of how it is being built. I've come to see more and more, with practising it myself and your key-words has helped me with getting further to the finish line :)

              – Serdar Ebeng
              Feb 20 at 14:02
















            1














            Great, you've successfully shrunk the filesystem inside the root LV.



            The next step is to shrink the LV to match the new size of the filesystem. For the sake of safety, you may want to leave a bit of slack, as accidentally cutting off too much would be a far worse problem.



            Filesystem shrinking operations are always bit more risky than extensions, especially if you are not yet quite familiar with them. So backup anything you might need, just in case something goes wrong.



            Then use tune2fs -l to get the exact block count from the filesystem:



            tune2fs -l /dev/mapper/zoneminder--vg-root | grep "Block "
            Block count: NNNNNNNNN
            Block size: XXXX


            Multiply these two numbers together to get the exact size of the filesystem, then divide by 1024 to get binary kilobytes, and again to get binary megabytes. Add one to protect against rounding errors:



            expr NNNNNNNNN * XXXX / 1024 / 1024 + 1
            SSSSSS


            Now, shrink the LV:



            lvreduce -L SSSSSS /dev/mapper/zoneminder--vg-root


            Now you should have plenty of free space in your Zoneminder VG. Use the pvs to confirm whether /dev/sda3 is now completely unused or not:



            If, in the pvs output, the PFree value is not equal to PSize for /dev/sda3, there are still some parts of the root LV on that PV, and you'll need to move them out of there. pvmove can easily do that. If /dev/sda3 is now completely fbree, you can skip this step.



            pvmove /dev/sda3


            This essentially says "make sda3 empty by moving all the LV data that's still in ot to other PVs belonging to this same VG."



            pvmove works by mirroring a piece of the data to be moved to its new location, then "removing the mirror" from the old side. So if pvmove gets interrupted by a system crash, it's not catastrophic. Just run pvmove with no parameters to continue from where it was.



            Now, the sda3 PV should be completely empty. Remove it from the VG:



            vgreduce zoneminder-vg /dev/sda3


            At this point, /dev/sda3 will be an unattached, completely free LVM PV. You can wipe the PVID from it if you wish:



            pvremove /dev/sda3


            Now you'll be free to reuse the /dev/sda3 partition any way you like.
            (If you plan to do something that causes the partition to be overwritten anyway, the pvremove command won't be strictly necessary.)



            Now, if you want to extend the root LV to 100 GiB, here are the steps:



            lvextend -L 100G /dev/mapper/zoneminder--vg-root
            resize2fs /dev/mapper/zoneminder--vg-root


            And you're done.



            Note that I didn't say "unmount the filesystem" or "reboot the system" at any point here. It isn't necessary.






            share|improve this answer























            • thank you!! see my updated question please

              – Serdar Ebeng
              Feb 20 at 11:33











            • Think of it as layers: at the bottom, there is the physical disk. On top of it, there is the partitioning layer, then inside the partition, there is the LVM layer on top of the partition layer. And the filesystem is on top of LVM. You've now successfully manipulated the filesystem and LVM layers to make sda3 empty and unused; now you're free to use fdisk or other partitioning tool to either change the type of the partition to something else, or to delete the entire partition and possibly put something else in its place.

              – telcoM
              Feb 20 at 13:10











            • Since sda3 is physically located after sda5 on the disk, you can now just delete it. And as you also resized the sda5 PV to 101G, you can also shrink that partition to 101G (or preferably to slightly more than that, to account for rounding errors). Once that's successfully done, you should also resize sda2 as the extended partition acts as an "envelope" for the "logical" partitions (sda5 and above). Then you can tell VMware to take away the extra disk space.

              – telcoM
              Feb 20 at 13:19











            • You know after a lot of reading there is still no perception of how it is being built. I've come to see more and more, with practising it myself and your key-words has helped me with getting further to the finish line :)

              – Serdar Ebeng
              Feb 20 at 14:02














            1












            1








            1







            Great, you've successfully shrunk the filesystem inside the root LV.



            The next step is to shrink the LV to match the new size of the filesystem. For the sake of safety, you may want to leave a bit of slack, as accidentally cutting off too much would be a far worse problem.



            Filesystem shrinking operations are always bit more risky than extensions, especially if you are not yet quite familiar with them. So backup anything you might need, just in case something goes wrong.



            Then use tune2fs -l to get the exact block count from the filesystem:



            tune2fs -l /dev/mapper/zoneminder--vg-root | grep "Block "
            Block count: NNNNNNNNN
            Block size: XXXX


            Multiply these two numbers together to get the exact size of the filesystem, then divide by 1024 to get binary kilobytes, and again to get binary megabytes. Add one to protect against rounding errors:



            expr NNNNNNNNN * XXXX / 1024 / 1024 + 1
            SSSSSS


            Now, shrink the LV:



            lvreduce -L SSSSSS /dev/mapper/zoneminder--vg-root


            Now you should have plenty of free space in your Zoneminder VG. Use the pvs to confirm whether /dev/sda3 is now completely unused or not:



            If, in the pvs output, the PFree value is not equal to PSize for /dev/sda3, there are still some parts of the root LV on that PV, and you'll need to move them out of there. pvmove can easily do that. If /dev/sda3 is now completely fbree, you can skip this step.



            pvmove /dev/sda3


            This essentially says "make sda3 empty by moving all the LV data that's still in ot to other PVs belonging to this same VG."



            pvmove works by mirroring a piece of the data to be moved to its new location, then "removing the mirror" from the old side. So if pvmove gets interrupted by a system crash, it's not catastrophic. Just run pvmove with no parameters to continue from where it was.



            Now, the sda3 PV should be completely empty. Remove it from the VG:



            vgreduce zoneminder-vg /dev/sda3


            At this point, /dev/sda3 will be an unattached, completely free LVM PV. You can wipe the PVID from it if you wish:



            pvremove /dev/sda3


            Now you'll be free to reuse the /dev/sda3 partition any way you like.
            (If you plan to do something that causes the partition to be overwritten anyway, the pvremove command won't be strictly necessary.)



            Now, if you want to extend the root LV to 100 GiB, here are the steps:



            lvextend -L 100G /dev/mapper/zoneminder--vg-root
            resize2fs /dev/mapper/zoneminder--vg-root


            And you're done.



            Note that I didn't say "unmount the filesystem" or "reboot the system" at any point here. It isn't necessary.






            share|improve this answer













            Great, you've successfully shrunk the filesystem inside the root LV.



            The next step is to shrink the LV to match the new size of the filesystem. For the sake of safety, you may want to leave a bit of slack, as accidentally cutting off too much would be a far worse problem.



            Filesystem shrinking operations are always bit more risky than extensions, especially if you are not yet quite familiar with them. So backup anything you might need, just in case something goes wrong.



            Then use tune2fs -l to get the exact block count from the filesystem:



            tune2fs -l /dev/mapper/zoneminder--vg-root | grep "Block "
            Block count: NNNNNNNNN
            Block size: XXXX


            Multiply these two numbers together to get the exact size of the filesystem, then divide by 1024 to get binary kilobytes, and again to get binary megabytes. Add one to protect against rounding errors:



            expr NNNNNNNNN * XXXX / 1024 / 1024 + 1
            SSSSSS


            Now, shrink the LV:



            lvreduce -L SSSSSS /dev/mapper/zoneminder--vg-root


            Now you should have plenty of free space in your Zoneminder VG. Use the pvs to confirm whether /dev/sda3 is now completely unused or not:



            If, in the pvs output, the PFree value is not equal to PSize for /dev/sda3, there are still some parts of the root LV on that PV, and you'll need to move them out of there. pvmove can easily do that. If /dev/sda3 is now completely fbree, you can skip this step.



            pvmove /dev/sda3


            This essentially says "make sda3 empty by moving all the LV data that's still in ot to other PVs belonging to this same VG."



            pvmove works by mirroring a piece of the data to be moved to its new location, then "removing the mirror" from the old side. So if pvmove gets interrupted by a system crash, it's not catastrophic. Just run pvmove with no parameters to continue from where it was.



            Now, the sda3 PV should be completely empty. Remove it from the VG:



            vgreduce zoneminder-vg /dev/sda3


            At this point, /dev/sda3 will be an unattached, completely free LVM PV. You can wipe the PVID from it if you wish:



            pvremove /dev/sda3


            Now you'll be free to reuse the /dev/sda3 partition any way you like.
            (If you plan to do something that causes the partition to be overwritten anyway, the pvremove command won't be strictly necessary.)



            Now, if you want to extend the root LV to 100 GiB, here are the steps:



            lvextend -L 100G /dev/mapper/zoneminder--vg-root
            resize2fs /dev/mapper/zoneminder--vg-root


            And you're done.



            Note that I didn't say "unmount the filesystem" or "reboot the system" at any point here. It isn't necessary.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Feb 19 at 18:17









            telcoMtelcoM

            19.1k12348




            19.1k12348












            • thank you!! see my updated question please

              – Serdar Ebeng
              Feb 20 at 11:33











            • Think of it as layers: at the bottom, there is the physical disk. On top of it, there is the partitioning layer, then inside the partition, there is the LVM layer on top of the partition layer. And the filesystem is on top of LVM. You've now successfully manipulated the filesystem and LVM layers to make sda3 empty and unused; now you're free to use fdisk or other partitioning tool to either change the type of the partition to something else, or to delete the entire partition and possibly put something else in its place.

              – telcoM
              Feb 20 at 13:10











            • Since sda3 is physically located after sda5 on the disk, you can now just delete it. And as you also resized the sda5 PV to 101G, you can also shrink that partition to 101G (or preferably to slightly more than that, to account for rounding errors). Once that's successfully done, you should also resize sda2 as the extended partition acts as an "envelope" for the "logical" partitions (sda5 and above). Then you can tell VMware to take away the extra disk space.

              – telcoM
              Feb 20 at 13:19











            • You know after a lot of reading there is still no perception of how it is being built. I've come to see more and more, with practising it myself and your key-words has helped me with getting further to the finish line :)

              – Serdar Ebeng
              Feb 20 at 14:02


















            • thank you!! see my updated question please

              – Serdar Ebeng
              Feb 20 at 11:33











            • Think of it as layers: at the bottom, there is the physical disk. On top of it, there is the partitioning layer, then inside the partition, there is the LVM layer on top of the partition layer. And the filesystem is on top of LVM. You've now successfully manipulated the filesystem and LVM layers to make sda3 empty and unused; now you're free to use fdisk or other partitioning tool to either change the type of the partition to something else, or to delete the entire partition and possibly put something else in its place.

              – telcoM
              Feb 20 at 13:10











            • Since sda3 is physically located after sda5 on the disk, you can now just delete it. And as you also resized the sda5 PV to 101G, you can also shrink that partition to 101G (or preferably to slightly more than that, to account for rounding errors). Once that's successfully done, you should also resize sda2 as the extended partition acts as an "envelope" for the "logical" partitions (sda5 and above). Then you can tell VMware to take away the extra disk space.

              – telcoM
              Feb 20 at 13:19











            • You know after a lot of reading there is still no perception of how it is being built. I've come to see more and more, with practising it myself and your key-words has helped me with getting further to the finish line :)

              – Serdar Ebeng
              Feb 20 at 14:02

















            thank you!! see my updated question please

            – Serdar Ebeng
            Feb 20 at 11:33





            thank you!! see my updated question please

            – Serdar Ebeng
            Feb 20 at 11:33













            Think of it as layers: at the bottom, there is the physical disk. On top of it, there is the partitioning layer, then inside the partition, there is the LVM layer on top of the partition layer. And the filesystem is on top of LVM. You've now successfully manipulated the filesystem and LVM layers to make sda3 empty and unused; now you're free to use fdisk or other partitioning tool to either change the type of the partition to something else, or to delete the entire partition and possibly put something else in its place.

            – telcoM
            Feb 20 at 13:10



























































The LVM system has no idea how you use your logical volumes. Even if you shrink the filesystem inside an LV, that does not change the size of the LV, nor the free space in your VG and PVs (all space allocated to an LV is considered in use).



Once the filesystem is reduced, you can shrink the LV with the lvreduce command, but be extra careful not to make it too small to hold the filesystem, or you may lose data.



Once you have reduced the logical volume, you can move the used extents from one PV to another with the pvmove command, and then remove the now-empty PV from the VG with the vgreduce command.
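The three commands above can be lined up as a dry-run sketch. This is only an illustration under the assumptions in the thread (the zoneminder-vg/root names and the 101G target from the comments, with the filesystem already resized to well below that); the plan is printed rather than executed, since lvreduce destroys data if the new size is smaller than the filesystem.

```shell
# Dry-run plan for the LVM layer: shrink the LV, migrate any extents
# still on /dev/sda3 over to /dev/sda5, then drop sda3 from the VG.
# Names are taken from the question; nothing here is executed.
lvm_plan="lvreduce --size 101G /dev/zoneminder-vg/root
pvmove /dev/sda3 /dev/sda5
vgreduce zoneminder-vg /dev/sda3"
printf '%s\n' "$lvm_plan"
```

After vgreduce succeeds, `pvs` should no longer list /dev/sda3, and the partition itself can then be dealt with at the partitioning layer.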




































































                answered Feb 19 at 17:57









user2233709
































































































