MDADM RAID SET HDD MEMBER

Does anyone know if there is a .conf file or other text file that lists all the HDD UUIDs or identification IDs of the members of an mdadm RAID? Let me know if you have any idea where I can find it. (I am using CentOS 6.5 with a partition-level mdadm RAID.)
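
For reference, mdadm itself can report these identifiers even when no config file lists them. A minimal sketch (the device name in the last command is just an example member taken from the output below):

# One ARRAY line per running array, including its UUID:
mdadm --detail --scan

# Scan all block devices for md superblocks and list their members:
mdadm --examine --scan

# Dump the md superblock of a single member partition:
mdadm --examine /dev/sdb1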



Why do I need it? Because my RAID10 did not automatically resync when I reattached one of its HDD members (a disk I had removed to simulate a failure).
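
A sketch of re-adding the pulled member by hand, assuming /dev/sdd1 is the disk that was removed (mdadm generally will not re-add a member automatically once it has been marked removed):

# Fast path: re-attach the old member if its superblock/event count still matches:
mdadm --manage /dev/md127 --re-add /dev/sdd1

# If --re-add is refused, add it as a new member and let it fully resync:
mdadm --manage /dev/md127 --add /dev/sdd1

# Monitor the resync:
cat /proc/mdstat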



Thanks in advance for any responses!



cat /proc/mdstat



Personalities : [raid10]
md127 : active raid10 sdd1[3] sda1[0] sdb1[1] sdc1[2]
      62925824 blocks 512K chunks 2 near-copies [4/4] [UUUU]


blkid



/dev/sda1: UUID="f7d9afe6-6af8-40bd-ac13-b1e73e47358b" TYPE="xfs"
/dev/sdb1: UUID="36575b92-c1ec-c445-b0ce-cbb5108c28e3" TYPE="linux_raid_member"
/dev/sdd1: UUID="36575b92-c1ec-c445-b0ce-cbb5108c28e3" TYPE="linux_raid_member"
/dev/sdc1: UUID="36575b92-c1ec-c445-b0ce-cbb5108c28e3" TYPE="linux_raid_member"
/dev/md127: UUID="f7d9afe6-6af8-40bd-ac13-b1e73e47358b" TYPE="xfs"
/dev/sde1: UUID="eaaba714-c055-42b4-a9bc-2a0ff02e8ad4" TYPE="ext2"
/dev/sde2: UUID="d46a11a4-ed95-4711-9f90-41e2b2e5f81e" TYPE="swap"
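
Note that the blkid output above already answers part of the question: the member partitions all report TYPE="linux_raid_member" and share one UUID, which is the dashed form of the array UUID 36575b92:c1ecc445:b0cecbb5:108c28e3. To list only the members:

# Show just the md member partitions and their shared array UUID:
blkid -t TYPE=linux_raid_member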


mdadm --detail /dev/md127



/dev/md127:
        Version : 0.90
  Creation Time : Fri Sep 28 16:12:54 2018
     Raid Level : raid10
     Array Size : 62925824 (60.01 GiB 64.44 GB)
  Used Dev Size : 31462912 (30.01 GiB 32.22 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 127
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 11:09:52 2018
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           UUID : 36575b92:c1ecc445:b0cecbb5:108c28e3 (local to host HIROSHI)
         Events : 0.214

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1


cat /etc/mdadm.conf



ARRAY /dev/md/store metadata=1.2 name=HIROSHI:store UUID=36bee2f9:32f79bba:16330a1c:3cae6f1b
MAILADDR root@localhost
ARRAY /dev/md0 metadata=1.2 name=HIROSHI:data UUID=f70e9f93:d701fc56:7d790db4:efed16dd
MAILADDR root@localhost
ARRAY /dev/md0 metadata=1.2 name=HIROSHI:data UUID=a52488c5:068f47d6:121c88b0:f2d7e0db
MAILADDR root@localhost
ARRAY /dev/md0 metadata=1.2 name=HIROSHI:data UUID=a52488c5:068f47d6:121c88b0:f2d7e0db
MAILADDR root@HIROSHI
ARRAY /dev/md0 metadata=1.2 name=HIROSHI:data UUID=a52488c5:068f47d6:121c88b0:f2d7e0db
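
Worth noting: none of the ARRAY lines above carry the UUID that mdadm --detail reports for /dev/md127 (36575b92:c1ecc445:b0cecbb5:108c28e3), and the file holds duplicate /dev/md0 entries, which may be why assembly does not behave as expected. One common cleanup, sketched here (back up the file first and verify the result before rebooting):

# Keep a copy of the old config, then regenerate ARRAY lines from the live arrays:
cp /etc/mdadm.conf /etc/mdadm.conf.bak
mdadm --detail --scan > /etc/mdadm.conf
echo "MAILADDR root@localhost" >> /etc/mdadm.conf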

centos raid mdadm software-raid

asked Oct 1 at 3:24
demz merven (12)

  • Your RAID10 is complete, what is the problem?
    – wurtel
    Oct 1 at 8:43