How many arrays can be created with mdadm?

For experimental purposes, I need as many disks in a single system as possible.
Since I have only six spare disks, I decided to partition each of them into 128 GPT partitions and create a single-device RAID0 array on every partition.
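
For reference, roughly how the partitioning could be scripted (a sketch only; the sgdisk invocations, the 100 MiB partition size, and the disk names are illustrative assumptions, not the exact commands I used):

# run as root: carve each spare disk into 128 GPT partitions
# (128 is the default GPT partition-table limit)
for disk in /dev/sd{b,c,d,e,f,g}; do
    sgdisk --zap-all "$disk"
    for n in $(seq 1 128); do
        sgdisk -n "$n::+100M" "$disk"
    done
done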



The problem is that mdadm created only 512 arrays, /dev/md0 through /dev/md511, and I cannot create any additional ones.



When I attempt to create the 513th array, I get an error:





% mdadm -C /dev/md512 -l raid0 -n 1 /dev/sdd128 --force



mdadm: unexpected failure opening /dev/md512





Is this a designed limitation, and is there any way to bypass it?

Tags: hard-disk mdadm

asked Sep 16 at 16:14 by sotona, edited Sep 16 at 20:23

  • Possible duplicate of What is the disk size limit for mdadm? – Ipor Sircer, Sep 16 at 18:20
  • @IporSircer, no, the max number of arrays is different than the max number of component drives in an array, or the max size of an array or component device. – ilkkachu, Sep 16 at 20:36
  • @sotona, do you need them to be MD disks? Why not just use those 768 partitions by themselves? If you need more, you might want to check LVM. I'd at least assume it can create more than 512 logical volumes, though I haven't tried... – ilkkachu, Sep 16 at 20:39
  • @ilkkachu in fact I need those partitions to appear as some sort of separate disks – sotona, Sep 16 at 20:55

2 Answers
Accepted answer (score 2) – answered Sep 16 at 19:11 by telcoM, edited Sep 17 at 8:18 by Stephen Kitt

You have hit the maximum limit of /dev/md* arrays on a single Linux system.



This is related to traditional Unix device major & minor numbers.
Originally, the MD RAID driver was assigned major block device number 9 (defined in /usr/include/linux/raid/md_u.h as MD_MAJOR), and that allowed a set of 256 minor device numbers, and thus 256 unique RAID array devices. (The canonical list for device number allocation is included in the documentation that comes with the kernel source package.)
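
For example, you can verify the major and minor numbers assigned to existing arrays on a running system (commands only; the actual numbers depend on your setup):

# the number before the comma in ls -l output is the major (9 for the
# first 256 arrays), the one after it is the minor
ls -l /dev/md0 /dev/md255
# stat prints the same information explicitly (values are in hex)
stat -c '%n: major 0x%t minor 0x%T' /dev/md0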



This eventually proved insufficient, and a mechanism was developed to use one additional major number (known in kernel code as mdp_major) when more than 256 RAID arrays are needed. You can find the code handling this in the kernel source file .../drivers/md/md.c. The extra mdp_major major device number is allocated dynamically from the dynamic major device number range (234..254, starting from the top and allocating downwards).
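
Both registered majors can also be seen in /proc/devices; the extra one is registered under the name mdp and gets whatever number the kernel allocated from the dynamic range (exact name and number may vary by kernel version):

# look for the static "md" major (9) and the dynamically allocated "mdp" one
grep -E ' (md|mdp)$' /proc/devices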



To use more than 512 MD RAID arrays on a single host, this mechanism would need to be rewritten to allocate more than one dynamic major number when required.

Second answer (score 0) – answered Sep 17 at 11:45 by sotona

There is quite a dirty workaround (it works with recent kernels from the 3.10.0-862.11.6 branch):



# echo md512 > /sys/module/md_mod/parameters/new_array
# mdadm -C /dev/md512 -l raid0 -n 1 /dev/sdd128 --force
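
As a sketch, the same trick can be put in a loop to pre-register a whole batch of extra array names and create the remaining single-device arrays (the partition glob and the starting device number are illustrative, not taken from the question):

# run as root: register md512, md513, ... via new_array, then create a
# one-device RAID0 array on each remaining partition
i=512
for part in /dev/sd[ef][0-9]*; do
    echo "md$i" > /sys/module/md_mod/parameters/new_array
    mdadm -C "/dev/md$i" -l raid0 -n 1 "$part" --force
    i=$((i + 1))
done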




