Command 'pvs' says PV device not found, but LVs are mapped and mounted

I had a problem with my system (a faulty internal power cable). When I got the system back up and running, with the arrays rebuilding, etc., I found a situation where the pvs command (and vgs and lvs) reports No device found for PV <UUID>, but the logical volumes on the supposedly missing physical volume can still be successfully mounted, as their DM devices exist and are mapped in /dev/mapper.



The PV device is an md-raid RAID10 array, which seems fine, apart from not appearing in the pvs output.



I assume this is a problem with some internal tables being out of sync. How do I get things mapped correctly (without a reboot, which, I assume, would fix it)?




Update:



A reboot did NOT fix the problem. I believe the issue is due to the configuration of the 'missing' PV (/dev/md99) as a RAID10 far-2 array built from a 750GB disk (/dev/sdk) and a RAID0 array (/dev/md90) built from a 250GB disk (/dev/sdh) and a 500GB disk (/dev/sdl). It seems from the output of pvscan -vvv that the lvm2 signature is found on /dev/sdh, but not on /dev/md99.



 Asking lvmetad for VG f1bpcw-oavs-1SlJ-0Gxf-4YZI-AiMD-WGAErL (name unknown)
Setting response to OK
Setting response to OK
Setting name to b
Setting metadata/format to lvm2
Metadata cache has no info for vgname: "b"
Setting id to AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN
Setting format to lvm2
Setting device to 2160
Setting dev_size to 1464383488
Setting label_sector to 1
Opened /dev/sdh RO O_DIRECT
/dev/sdh: size is 488397168 sectors
/dev/sdh: block size is 4096 bytes
/dev/sdh: physical block size is 512 bytes
Closed /dev/sdh
/dev/sdh: size is 488397168 sectors
Opened /dev/sdh RO O_DIRECT
/dev/sdh: block size is 4096 bytes
/dev/sdh: physical block size is 512 bytes
Closed /dev/sdh
/dev/sdh: Skipping md component device
No device found for PV AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN.
Allocated VG b at 0x7fdeb00419f0.
Couldn't find device with uuid AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN.
Freeing VG b at 0x7fdeb00419f0.


The only reference to /dev/md99, which should be the PV, is when it's added to the device cache.




Update 2:



Stopping lvm2-lvmetad and repeating the pvscan confirms that the issue is that the system is confused about which PVs to use, as it is finding two with the same UUID:



 Using /dev/sdh
Opened /dev/sdh RO O_DIRECT
/dev/sdh: block size is 4096 bytes
/dev/sdh: physical block size is 512 bytes
/dev/sdh: lvm2 label detected at sector 1
Found duplicate PV AzKyTe5Ut4dxgqtxEc7V9vBkm5mOeMBN: using /dev/sdh not /dev/md99
/dev/sdh: PV header extension version 1 found
Incorrect metadata area header checksum on /dev/sdh at offset 4096
Closed /dev/sdh
Opened /dev/sdh RO O_DIRECT
/dev/sdh: block size is 4096 bytes
/dev/sdh: physical block size is 512 bytes
Incorrect metadata area header checksum on /dev/sdh at offset 4096
Closed /dev/sdh
Opened /dev/sdh RO O_DIRECT
/dev/sdh: block size is 4096 bytes
/dev/sdh: physical block size is 512 bytes
Closed /dev/sdh
Incorrect metadata area header checksum on /dev/sdh at offset 4096
Telling lvmetad to store PV /dev/sdh (AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN)
Setting response to OK


Since this configuration was only meant to be temporary, I think I'd do better to rearrange my disk usage.



Unless anyone can tell me how to explicitly override the order in which pvscan views devices?
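One way to force an order, sketched here as an untested possibility (the device names are the ones from the updates above), would be a global_filter in /etc/lvm/lvm.conf that rejects the component devices, so that only /dev/md99 is ever considered as a PV:

```
# /etc/lvm/lvm.conf fragment (sketch, untested): hide the md component
# devices from LVM so only the top-level array /dev/md99 is scanned.
devices {
    global_filter = [ "r|^/dev/sdh$|", "r|^/dev/md90$|", "a|.*|" ]
}
```

After editing, a `pvscan --cache` (or a restart of lvm2-lvmetad) would be needed for lvmetad to pick up the change.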










  • Have you tried running another pvscan after the MD array comes up? Just in case it's a timing issue.
    – Bratchley
    May 17 '15 at 22:44










  • That would involve a reboot, which would probably fix it. I was hoping to just fix the current state, e.g. by restarting lvmetad (which didn't fix it).
    – StarNamer
    May 18 '15 at 10:40











  • pvscan is a command that you can run as root after it's booted. If this works we can work on resolving the timing issue.
    – Bratchley
    May 18 '15 at 11:30










  • I've been using LVM for about 5 years; I know what pvscan is. My point was that I wanted to avoid rebooting the machine since that will almost certainly fix the problem. What I wanted was some way to get LVM back to a consistent state without rebooting. I get the impression this isn't possible. Since LVs in the 'missing' VG can be mounted, I've taken the precaution of copying the data to a new set of LVs in a new VG and will probably reboot later this evening. I expect this will fix the issue, but won't provide info as to why it happened in the first place and if an online fix can be done.
    – StarNamer
    May 18 '15 at 15:12










  • @Bratchley, by the way, if you read my comment to derobert, you'd have realised that I've already tried a pvscan (several times).
    – StarNamer
    May 18 '15 at 15:16














lvm md






edited May 19 '15 at 6:20

























asked May 17 '15 at 14:55









StarNamer

3 Answers

















The first things to check are your filter and global_filter options in /etc/lvm/lvm.conf. Make sure you aren't filtering out the devices your PVs reside on.



The cache is set with the cache_dir option in the same file; on my Debian box it defaults to /run/lvm. The cache (if any) should be in that directory. If obtain_device_list_from_udev is set, I believe no cache is used.



Finally, check if use_lvmetad is set. If so, you may need to restart the LVM metadata daemon.
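To make those checks concrete, here is a small self-contained sketch; it greps a sample fragment written to /tmp as a stand-in for the real /etc/lvm/lvm.conf (the values shown mirror the ones reported in the comments, not any particular system):

```shell
# Write a sample fragment mimicking the relevant /etc/lvm/lvm.conf keys,
# then grep for the settings to inspect. On a real system, grep
# /etc/lvm/lvm.conf directly instead of this sample file.
cat > /tmp/lvm.conf.sample <<'EOF'
devices {
    obtain_device_list_from_udev = 1
    # filter = [ "a|.*|" ]
}
global {
    use_lvmetad = 1
}
EOF

# Show any filter lines and whether lvmetad/udev scanning is enabled
grep -nE 'filter|use_lvmetad|obtain_device_list_from_udev' /tmp/lvm.conf.sample
```

If use_lvmetad is 1, restarting the daemon (`systemctl restart lvm2-lvmetad`) followed by `pvscan --cache` repopulates its device cache.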






answered May 17 '15 at 15:30
– derobert
  • No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finds a duplicate UUID on one of the component devices of the RAID device which makes up the RAID10 array, but the checksum is wrong. It then goes back to saying 'No device found...'.
    – StarNamer
    May 17 '15 at 18:31




















accepted










The problem appears to be pvscan getting confused over seeing the same UUID on both a component device of the RAID array and the RAID array itself. I assume this is avoided normally by recognising that the device is a direct component. In my case, I had created a situation where the device was not directly a component of the RAID device which should be the PV.



My solution was to back up the LVs, force the array to be degraded, and then reconfigure the disks so as not to use the multilevel RAID. Note that after another reboot the device lettering had changed: 500GB = /dev/sdi, 250GB = /dev/sdj, 750GB = /dev/sdk.



# mdadm /dev/md99 --fail --force /dev/md90
# mdadm /dev/md99 --remove failed
# mdadm --stop /dev/md90
# wipefs -a /dev/sdi /dev/sdj # wipe components
# systemctl stop lvm2-lvmetad
# pvscan -vvv
# pvs
..... /dev/md99 is now correctly reported as the PV for VG b
# fdisk /dev/sdi
...... Create 2 partitions of equal size, i.e. 250GB
# fdisk /dev/sdj
...... Create a single 250GB partition
# mdadm /dev/md91 --create -lraid5 -n3 /dev/sdi1 /dev/sdj1 missing
# mdadm /dev/md92 --create -lraid1 -n2 /dev/sdi2 missing
# pvcreate /dev/md91 /dev/md92
# vgextend b /dev/md91 /dev/md92
# pvmove /dev/md99
# vgreduce b /dev/md99
# pvremove /dev/md99
# mdadm --stop /dev/md99
# wipefs -a /dev/sdk
# fdisk /dev/sdk
..... Create 3 250GB partitions
# mdadm /dev/md91 --add /dev/sdk1
# mdadm /dev/md92 --add /dev/sdk2


Moral of the story:



Do not introduce too many levels of indirection into the filesystem!






    As root, these commands will fix it:



    pvscan --cache ;
    pvscan





    share




















      Your Answer







      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "106"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      convertImagesToLinks: false,
      noModals: false,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













       

      draft saved


      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f203925%2fcommand-pvs-says-pv-device-not-found-but-lvs-are-mapped-and-mounted%23new-answer', 'question_page');

      );

      Post as a guest






























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      2
      down vote













      The first thing to check are your filter and global_filter options in /etc/lvm/lvm.conf. Make sure you aren't filtering out the devices your PVs reside on.



      The cache is set with the cache_dir option in the same file; on my Debian box it defaults to /run/lvm. The cache (if any) should be in that directory. If obtain_device_list_from_udev is set, I believe no cache is used.



      Finally, check if use_lvmetad is set. If so, you may need to restart the LVM metadata daemon.






      share|improve this answer




















      • No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finda duplicate UUD of one of the component devices of the RAID device which make up the RAID10 aary, but the checksum is wrong. It then goes back to saying 'No device found...'.
        – StarNamer
        May 17 '15 at 18:31














      up vote
      2
      down vote













      The first thing to check are your filter and global_filter options in /etc/lvm/lvm.conf. Make sure you aren't filtering out the devices your PVs reside on.



      The cache is set with the cache_dir option in the same file; on my Debian box it defaults to /run/lvm. The cache (if any) should be in that directory. If obtain_device_list_from_udev is set, I believe no cache is used.



      Finally, check if use_lvmetad is set. If so, you may need to restart the LVM metadata daemon.






      share|improve this answer




















      • No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finda duplicate UUD of one of the component devices of the RAID device which make up the RAID10 aary, but the checksum is wrong. It then goes back to saying 'No device found...'.
        – StarNamer
        May 17 '15 at 18:31












      up vote
      2
      down vote










      up vote
      2
      down vote









      The first thing to check are your filter and global_filter options in /etc/lvm/lvm.conf. Make sure you aren't filtering out the devices your PVs reside on.



      The cache is set with the cache_dir option in the same file; on my Debian box it defaults to /run/lvm. The cache (if any) should be in that directory. If obtain_device_list_from_udev is set, I believe no cache is used.



      Finally, check if use_lvmetad is set. If so, you may need to restart the LVM metadata daemon.






      share|improve this answer












      The first thing to check are your filter and global_filter options in /etc/lvm/lvm.conf. Make sure you aren't filtering out the devices your PVs reside on.



      The cache is set with the cache_dir option in the same file; on my Debian box it defaults to /run/lvm. The cache (if any) should be in that directory. If obtain_device_list_from_udev is set, I believe no cache is used.



      Finally, check if use_lvmetad is set. If so, you may need to restart the LVM metadata daemon.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered May 17 '15 at 15:30









      derobert

      69.9k8151207




      69.9k8151207











      • No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finda duplicate UUD of one of the component devices of the RAID device which make up the RAID10 aary, but the checksum is wrong. It then goes back to saying 'No device found...'.
        – StarNamer
        May 17 '15 at 18:31
















      • No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finda duplicate UUD of one of the component devices of the RAID device which make up the RAID10 aary, but the checksum is wrong. It then goes back to saying 'No device found...'.
        – StarNamer
        May 17 '15 at 18:31















      No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finda duplicate UUD of one of the component devices of the RAID device which make up the RAID10 aary, but the checksum is wrong. It then goes back to saying 'No device found...'.
      – StarNamer
      May 17 '15 at 18:31




      No filters and obtain_device_list_from_udev is 1. Stopping lvmetad and then doing a pvscan finda duplicate UUD of one of the component devices of the RAID device which make up the RAID10 aary, but the checksum is wrong. It then goes back to saying 'No device found...'.
      – StarNamer
      May 17 '15 at 18:31












      up vote
      1
      down vote



      accepted










      The problem appears to be pvscan getting confused over seeing the same UUID on both a component device of the RAID array and the RAID array itself. I assume this is avoided normally by recognising that the device is a direct component. In my case, I had created a situation where the device was not directly a component of the RAID device which should be the PV.



      My solution was to backup the LVs, force the array to be degraded and then reconfigure the disks so as not to use the multilevel RAID. Note that after another reboot the device lettering had changed. 500Gb = /dev/sdi, 250Gb = /dev/sdj, 750Gb = /dev/sdk



      # mdadm /dev/md99 --fail --force /dev/md90
      # mdadm /dev/md99 --remove failed
      # mdadm --stop /dev/md90
      # wipefs -a /dev/sdi /dev/sdj # wipe components
      # systemctl stop lvm2-lvmetad
      # pvscan -vvv
      # pvs
      ..... /dev/md99 is now correctly reported as the PV for VG b
      # fdisk /dev/sdi
      ...... Create 2 partitions of equal size, i.e. 250GB each
      # fdisk /dev/sdj
      ...... Create a single 250GB partition
      # mdadm --create /dev/md91 -l raid5 -n 3 /dev/sdi1 /dev/sdj1 missing
      # mdadm --create /dev/md92 -l raid1 -n 2 /dev/sdi2 missing
      # pvcreate /dev/md91 /dev/md92
      # vgextend b /dev/md91 /dev/md92
      # pvmove /dev/md99
      # vgreduce b /dev/md99
      # pvremove /dev/md99
      # mdadm --stop /dev/md99
      # wipefs -a /dev/sdk
      # fdisk /dev/sdk
      ..... Create 3 250GB partitions
      # mdadm /dev/md91 --add /dev/sdk1
      # mdadm /dev/md92 --add /dev/sdk2


      Moral of the story:



      Do not introduce too many levels of indirection into the storage stack!






          answered May 19 '15 at 22:38 – StarNamer




















              up vote
              0
              down vote













              As root, these commands should fix it:



              pvscan --cache
              pvscan
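              This assumes the lvmetad daemon is actually in use: pvscan --cache tells lvmetad to rescan the devices, so it has no effect otherwise. On LVM2 releases that still ship lvmetad (it was removed in LVM2 2.03), that is controlled by a setting in /etc/lvm/lvm.conf:

```
# /etc/lvm/lvm.conf -- 'pvscan --cache' only matters when lvmetad is enabled
global {
    use_lvmetad = 1
}
```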





                  answered 3 mins ago – user125971



























                       
