Raid partitioning - how do I add a new partition?
I already have a RAID 1 setup. It's 4 TB, and I want to split it into four partitions: boot, swap, home, and root.
Normally you would just use fdisk, but apparently that can't be used here because of RAID complexities I don't understand.
I know this is easily achievable in Windows and other operating systems, which have GUIs to click through the process.
How do I achieve this easily in Linux?
(Using Arch, if that matters.)
Tags: partition, raid, raid1
Not an answer to your question, but if you are using software raid, an idea is to create four partitions on each drive, and then create four different raid arrays. On Linux, you can create three instead of four combining root and home, and use LVM to further subdivide the third array. This gives you more flexibility. You can then put grub on each and boot off of any of them if you have a drive failure.
– Steve Brandli
Feb 9 at 0:49
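The comment above can be sketched as a command sequence. This is a hedged outline, not a tested recipe: the device names (/dev/sda, /dev/sdb), partition sizes, and volume-group name are all assumptions, and every command here is destructive and needs root.

```shell
# Partition both member disks identically, then build one md array
# per partition pair, as the comment suggests.
# ASSUMPTIONS: disks are /dev/sda and /dev/sdb; sizes are examples only.
sgdisk -n 1:0:+512M -t 1:fd00 /dev/sda   # boot
sgdisk -n 2:0:+8G   -t 2:fd00 /dev/sda   # swap
sgdisk -n 3:0:0     -t 3:fd00 /dev/sda   # rest: root + home via LVM
sgdisk -R=/dev/sdb /dev/sda              # replicate the table to the second disk
sgdisk -G /dev/sdb                       # give the copy fresh GUIDs

# One RAID 1 array per partition pair.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# LVM on the third array, subdividing it into root and home.
pvcreate /dev/md2
vgcreate vg0 /dev/md2                    # "vg0" is an assumed name
lvcreate -L 100G -n root vg0
lvcreate -l 100%FREE -n home vg0
```

With this layout a drive failure leaves every array degraded but usable, and GRUB installed on each disk lets you boot from either one.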
You should be able to use fdisk just fine on your RAID device. If it's Linux's software RAID, the resulting device may be called something like /dev/md0. If you are in a different situation, please provide more context.
– dhag
Feb 9 at 3:28
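For what the comment above describes, a minimal sketch looks like this. It assumes a modern kernel where md arrays are themselves partitionable block devices, and that the array really is /dev/md0; partitions on an md device show up with a `p` suffix.

```shell
# Put a fresh GPT label on the array and carve one partition out of it.
# ASSUMPTION: the array is /dev/md0 and holds no data you want to keep.
parted -s /dev/md0 mklabel gpt
parted -s /dev/md0 mkpart primary ext4 1MiB 100%
lsblk /dev/md0    # the new partition should appear as /dev/md0p1
```

Note that a pre-existing, unexpected partition table on the array (for example, leftover metadata from a previous install) can make fdisk show "random" partitions like the ones the asker saw; wiping the label as above starts from a clean slate.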
I tried that, but I believe mdadm changes something at this level: fdisk showed it split into 4 seemingly random partitions that did not add up to 4 TB. I did not create those, and lsblk consistently shows /dev/md0 as 4 TB, so I figure it's an artifact of the RAID.
– chase
Feb 9 at 3:39
I ended up nuking my system out of rage. Unfortunately it now will not boot, due to bootloader/UEFI problems.
– chase
Feb 9 at 3:40
If you are this impatient and easily frustrated, have you ever stopped to consider that the problem may be you, and not "... mdadm changes something at this level I believe ..."?
– cas
Feb 9 at 5:26
asked Feb 9 at 0:27 by chase
1 Answer
An easy partition manager is gparted. However, according to the Software-RAID HOWTO: "RAID devices cannot be partitioned, like ordinary disks can."
When using software RAID, consider partitioning the disks first, before creating your RAID devices.
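Before deciding between the two approaches (repartitioning the existing array versus rebuilding with partition-first arrays), it helps to inspect what is actually there. A short, read-only sketch, assuming the array is named /dev/md0:

```shell
# Read-only inspection of the current RAID layout; nothing is modified.
cat /proc/mdstat                     # which md arrays exist and their member disks
mdadm --detail /dev/md0              # RAID level, array size, component devices
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # full block-device tree, including any
                                     # partitions already sitting on the array
```

If `mdadm --detail` reports the full 4 TB but lsblk shows odd sub-partitions, that points to a stale partition table on the array rather than anything mdadm did.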
answered Feb 9 at 3:39 (edited Feb 9 at 3:56) by Angelo