/dev/md127 refuses to stop. No open files
So I'm trying to stop /dev/md127 on my Ubuntu 12.10 box. It was set up as RAID1, but I'm trying to move everything (well, rename it) to md0. I read that renaming isn't possible, so I'm trying to remove the drives and put them into a new array as md0. I've been able to remove one drive (sdb) using --fail and --remove, but sdc isn't responding, nor will md127 respond to --stop --force.
I've run fuser and lsof, and neither shows anything using md127. I was running LVM on top of md127, but I've unmounted the LVs and run lvchange -an and vgchange -an on the volume group.
I'm at a loss for what to try next. For those wondering why I want to rename/move at all: I'm a little OCD about things like that.
If it's relevant, here are the exact commands I've used (the stop/fail/remove commands have been tried multiple times):
mdadm --stop --force /dev/md127 # this failed - "mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?"
fuser /dev/md127 # no output
lsof /dev/md127 # no output
mdadm --fail /dev/md127 /dev/sdb # succeeded
mdadm --remove /dev/md127 /dev/sdb # succeeded
mdadm --fail /dev/md127 /dev/sdc # this failed - "device or resource busy"
mdadm --remove /dev/md127 /dev/sdc # this failed - "device or resource busy"
lvchange -an vg_Name
vgchange -an vg_Name
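In case it helps with diagnosis, the array can also be inspected through /proc and sysfs even when fuser and lsof are silent (the holders path below is the usual sysfs layout, so treat this as a sketch):
cat /proc/mdstat # check whether md127 is still listed as an active array
ls /sys/block/md127/holders/ # any dm-* entry here means device-mapper/LVM is still holding the array
dmsetup ls # list the device-mapper nodes that may still map onto md127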
Tags: lvm, mdadm, ubuntu
asked Oct 28 '12 at 18:08 by David Young
What is printed when you run mount? – sparticvs, Oct 29 '12 at 0:38
4 Answers
If all you're trying to do is change the device number, add the array to your config file with the device number of your choice using the following command:
echo "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=$(blkid -s UUID -o value /dev/md127) devices=/dev/sdb,/dev/sdc" >> /etc/mdadm.conf
Once you've put your raid in /etc/mdadm.conf, just reboot and the raid should automatically reassemble using the device number you've specified. This has the added benefit of ensuring that your raid will be built with the same device name at every boot.
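On Ubuntu the packaged config file normally lives at /etc/mdadm/mdadm.conf rather than /etc/mdadm.conf, and the array definition is also copied into the initramfs, so a sketch of the same idea for that layout (the paths are an assumption about the stock Debian/Ubuntu packaging) is:
mdadm --detail --scan # prints an ARRAY line containing the array's own UUID
# append that line to /etc/mdadm/mdadm.conf, changing the device path to /dev/md0
update-initramfs -u # rebuild the initramfs so the new device number is picked up at boot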
answered Apr 16 '13 at 19:08 by smokes2345 (last edited Mar 22 '16 at 13:02)
Can you please paste the output of the following commands?
mdadm -D /dev/md127
mdadm -E /dev/sdc
cat /proc/mdstat
Please note that it is possible to "rename" the RAID; how to do it depends on the superblock version your array is using.
To rename a superblock 0.90 array, use: mdadm -A /dev/md0 -U super-minor -u <uuid of the array>
To rename a superblock 1.X array, use: mdadm -A /dev/md0 -U name -N <new name> -u <uuid of the array>
As I didn't understand it, can you please explain why you want to rename it? The node name md127 comes from your initramfs scripts, which start assembling arrays at md127 and count down. As far as I know you can change the preferred minor number, but the initramfs scripts will still start assembling at node 127 regardless of that minor number.
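For the 1.X case, a rough end-to-end sequence (keeping the placeholders above; the UUID comes from the array's own metadata) would be:
mdadm --detail /dev/md127 # note the UUID: line; that value is what -u expects
mdadm --stop /dev/md127 # the array has to be stopped before it can be re-assembled under a new name
mdadm -A /dev/md0 -U name -N <new name> -u <uuid of the array>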
answered Jan 4 '13 at 20:48 by teissler (last edited Jun 20 '16 at 10:48 by Pierre.Vriens)
If you're using LVM on top of mdadm, sometimes LVM does not delete the Device Mapper devices when deactivating the volume group. You can delete them manually:
- Ensure there's nothing in the output of sudo vgdisplay.
- Look in /dev/mapper/. Aside from the control file, there should be a Device Mapper device named after your volume group, e.g. VolGroupArray-name.
- Run sudo dmsetup remove VolGroupArray-name (substituting VolGroupArray-name with the name of the Device Mapper device).
- You should now be able to run sudo mdadm --stop /dev/md0 (or whatever the name of the mdadm device is).
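Put together, a rough sketch of the whole sequence (vg_Name and md127 are the names from the question, and the leftover mapping name is hypothetical; substitute your own):
sudo vgchange -an vg_Name # deactivate the volume group
sudo dmsetup ls # see which device-mapper nodes are still present
sudo dmsetup remove vg_Name-lv_root # hypothetical leftover mapping; repeat for each entry listed (except control)
sudo mdadm --stop /dev/md127 # should now succeed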
answered Sep 20 '17 at 9:00 by Vladimir Panteleev
After much Googling, this is what I needed. Thanks! – fukawi2, Nov 27 '17 at 11:07
I hate to open up an old thread, but I had this problem. I had two SATA drives mirrored in CentOS 6.5 and upgraded to 7.5, where my 3Ware controller was no longer supported.
NOTE: I had a 3Ware RAID controller, but I used mdadm to make a software RAID on 6.5, so I never had a hardware RAID built.
So while I was at the computer store getting a new PCI SATA controller, I decided to add another drive and go to a RAID 5 setup. I could not run mkfs on the volume; it said another process had it in use, and I couldn't stop it or remove it.
While trying everything I could think of, I got this message:
mdadm --fail /dev/sda
mdadm: /dev/sda does not appear to be an md device
[root@TomNAS1 ~]# mdadm /dev/md5
/dev/md5: 3725.78GiB raid5 3 devices, 1 spare. Use mdadm --detail for more detail.
/dev/md5: device 0 in 2 device undetected raid1 /dev/md/2_0. Use mdadm --examine for more detail.
So I did:
mdadm --examine /dev/md5
/dev/md5:
Magic : a92b4efc
Version : 0.90.00
UUID : ffd28566:a7b7ad42:b26b218f:452df0ca
Creation Time : Wed Dec 8 12:52:37 2010
Raid Level : raid1
Used Dev Size : 1951311040 (1860.92 GiB 1998.14 GB)
Array Size : 1951311040 (1860.92 GiB 1998.14 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Update Time : Mon Jul 2 12:39:31 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 59b0bc94 - correct
Events : 1111864
Number Major Minor RaidDevice State
this 0 8 19 0 active sync
0 0 8 19 0 active sync
1 1 8 3 1 active sync
Notice the Raid Level of raid1 (I still had some superblocks with the old RAID info), but I still couldn't delete it.
I finally did:
mdadm --stop --scan
[root@TomNAS1 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
[root@TomNAS1 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
Using the --scan option instead of /dev/md5 finally did it. I was then able to remove it, zero the superblocks, and recreate it:
[root@TomNAS1 ~]# mdadm --remove /dev/md5
mdadm: error opening /dev/md5: No such file or directory
[root@TomNAS1 ~]# mdadm --zero-superblock /dev/sda
[root@TomNAS1 ~]# mdadm --zero-superblock /dev/sdb
[root@TomNAS1 ~]# mdadm --zero-superblock /dev/sdd
[root@TomNAS1 ~]# mdadm -E /dev/md5
mdadm: cannot open /dev/md5: No such file or directory
[root@TomNAS1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
└─sda1 8:1 0 1.8T 0 part
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
sdc 8:32 0 298.1G 0 disk
├─sdc1 8:33 0 1G 0 part /boot
└─sdc2 8:34 0 297G 0 part
  ├─centos-root 253:0 0 283G 0 lvm /
  ├─centos-swap 253:1 0 4G 0 lvm [SWAP]
  └─centos-dev_shm 253:2 0 10G 0 lvm
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sr0 11:0 1 1024M 0 rom
[root@TomNAS1 ~]# mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@TomNAS1 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[3] sdb1[1] sda1[0]
3906762752 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.0% (475180/1953381376) finish=684.9min speed=47519K/sec
bitmap: 0/15 pages [0KB], 65536KB chunk
unused devices: <none>
[root@TomNAS1 ~]# mkfs.ext4 /dev/md5
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
244178944 inodes, 976690688 blocks
48834534 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3124756480
29807 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@TomNAS1 ~]#
HAPPY DAYS ARE HERE AGAIN! Hope this helps someone else running into a similar issue!
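Boiled down, the part that actually broke the deadlock was (the device names are simply the ones on my box; adjust to yours):
mdadm --stop --scan # stop every running md array instead of naming a single device node
mdadm --zero-superblock /dev/sda # wipe the stale RAID metadata; repeated for /dev/sdb and /dev/sdd
mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1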
answered 13 mins ago by Joseph Mulkey (new contributor)