Low-end hardware RAID vs Software RAID [on hold]

I want to build a low-end 6TB RAID 1 archive on an old PC.



MB: Intel D2500HN (64-bit)
CPU: Intel Atom D2500
RAM: 4 GB DDR3 533 MHz
PSU: Chinese 500 W
No GPU
1x 1 Gbps Ethernet
2x SATA II ports
1x PCI slot
4x USB 2.0


I want to build a RAID 1 archive on Linux (CentOS 7, I think; then I will install everything I need, probably ownCloud or something similar). I will use it on my home local network.



Is a $10-20 PCI RAID controller better, or software RAID?



If software RAID is better, which should I choose on CentOS? Is it better to put the system on an external USB drive and use the two disks on the SATA connectors, or should I put the system on one disk and then create the RAID?



If I were to do a 3-disk RAID 5, should I choose a hardware RAID PCI card or simply a PCI SATA controller?










raid centos7 software-raid hardware-raid raid1

asked Oct 4 at 10:50 by Igor Z. · edited Oct 4 at 23:05 by Patrick Mevzek
put on hold as off-topic by kasperd, Gerald Schneider, womble♦ Oct 5 at 2:09


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "Questions on Server Fault must be about managing information technology systems in a business environment. Home and end-user computing questions may be asked on Super User, and questions about development, testing and development tools may be asked on Stack Overflow." – Gerald Schneider, womble
If this question can be reworded to fit the rules in the help center, please edit the question.








  • Please don't do R5, it's dangerous. – Chopper3, Oct 4 at 11:55

  • Hasn't this question been answered before? E.g. serverfault.com/questions/214/raid-software-vs-hardware – Tom, Oct 4 at 12:08

  • This is a question about opinions; you will find a lot of people rooting for software and a lot of people rooting for hardware. In my opinion it depends. Linux software RAID is well established and has proved its worth over and over again, but it creates a very light overhead (which is negligible, especially in RAID 1). RAID 5 should not be used if you value your data, because of UREs; see youtube.com/watch?v=A2OxG2UjiV4. Rule of thumb: if you use RAID 1 and have the choice between cheap hardware RAID and software RAID, go for software. – Broco, Oct 4 at 12:20

  • @Tom These answers are ~9 years old and the HW/SW RAID situation has changed quite a bit, I think. OP: in your case I'd mirror the disks in software RAID 1, including the CentOS installation. – Lenniey, Oct 4 at 12:22

  • People always claim that hardware RAID saves on CPU usage. But the CPU usage required to copy data around is almost zero. I cannot imagine CPU usage being an issue in software RAID. – usr, Oct 4 at 21:29














4 Answers

















Accepted answer (score 39) – shodanshok, answered Oct 4 at 12:35, edited Oct 4 at 16:08
A $10-20 "hardware" RAID card is nothing more than an opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it.

A $200 RAID card offers proper hardware support (i.e. a RoC running another opaque binary blob, which is better and does not run on the main host CPU). I suggest staying away from these cards as well because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.

A $300-400 RAID card offering a power-loss-protected writeback cache is worth buying, but not for a small, Atom-based PC/NAS.

In short: I strongly suggest you use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup, but with an Atom CPU and only 4 GB of RAM, do not expect high performance.

For more information, read here.





  • Thanks, I will use mdadm. Do you advise putting the system on an external USB drive and using the two disks for storage, or should I install the system on one disk and then create the RAID by adding the other disk? Thanks – Igor Z., Oct 4 at 13:15

  • @IgorZ. It's not clear to me how you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4K random write performance). – shodanshok, Oct 4 at 16:05

  • RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's a RoC? – ilkkachu, Oct 4 at 19:09

  • POC. Proof of concept? – BaronSamedi1958, Oct 4 at 19:43

  • RoC means RAID-on-Chip. Basically, a marketing term for an embedded system running a RAID-oriented OS with hardware offload for parity calculation. – shodanshok, Oct 4 at 20:11
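Following up on the layout discussed above (OS on a USB drive, both SATA disks dedicated to the data mirror), a hedged sketch of making the array persist across reboots on CentOS 7. The mount point and the placeholder UUID are assumptions:

    # Record the array so it is assembled automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf

    # Mount it at boot; the filesystem UUID is more robust than /dev/md0
    blkid /dev/md0
    echo 'UUID=<uuid-from-blkid>  /srv/archive  xfs  defaults  0 0' >> /etc/fstab

    # Optional for a pure data array: rebuild the initramfs so early boot knows about it
    dracut -f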


















Answer (score 11) – BaronSamedi1958, answered Oct 4 at 13:51
Go ZFS. Seriously. It's so much better than hardware RAID, and the reason is simple: it uses variable-size stripes, so its parity modes (RAID-Z1 and RAID-Z2, the RAID 5 and RAID 6 equivalents) perform at RAID 10 level while still being extremely cost-efficient. Plus, you can use flash caches (ZIL, L2ARC, etc.) running on a dedicated set of PCIe lanes.

https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/

There's ZFS on Linux, ZoL:

https://zfsonlinux.org/
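As a rough sketch of the mirrored setup under ZFS on Linux (not taken from the answer itself; the pool name "archive" and the device names are assumptions):

    # After installing ZFS on Linux (e.g. from the zfsonlinux.org repository for CentOS 7):
    zpool create archive mirror /dev/sdb /dev/sdc

    # The pool is mounted at /archive by default; lz4 compression is cheap and usually worth enabling
    zfs set compression=lz4 archive
    zpool status archive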






  • I would normally agree wholeheartedly here, but he only has 4 GiB of RAM, so ZFS may not perform optimally... – Josh, Oct 4 at 14:41

  • +1. Anyway, RAID-Z is known for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS performance of a single disk. Have a look here. – shodanshok, Oct 4 at 16:11

  • Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed? – Mark, Oct 4 at 19:52

  • Agreed, ZFS is the best choice for archival data. Performance always depends on the stripe size and the size of the blocks written to disk, so it is complicated to calculate but very easy to optimize :) Moreover, ZFS was not designed for virtualization or intensive IO workloads. – Strepsils, 2 days ago

  • @Mark: That's for deduplicated capacity. – BaronSamedi1958, 2 days ago
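Given the RAM concerns raised above, one common mitigation on ZFS on Linux is capping the ARC via a module parameter. A hedged example; the 1 GiB cap is an arbitrary assumption for a 4 GB machine, not a recommendation from the thread:

    # Cap the ARC at 1 GiB (value in bytes), leaving the rest of the RAM to the OS and apps
    echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
    # Takes effect after reloading the zfs module or rebooting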


















Answer (score 3) – hildred, answered Oct 4 at 17:23
Here is another argument for software RAID on a cheap system.

Stuff breaks. You know this; that is why you are using RAID. But RAID controllers also break, as do RAM, processors, power supplies and everything else, including software. For most failures it is simple enough to replace the damaged component with an equivalent or better one: blow a 100 W power supply, grab a 150 W one and get going. It is similar with most components. With hardware RAID, however, there are three exceptions to this pattern: the RAID controller, the hard drives, and the motherboard (or other upstream component, if the controller is not an expansion card).

Let's look at the RAID card. Most RAID cards are poorly documented and incompatible with each other. You cannot replace a card from company xyz with one from abc, as they store data differently (assuming you can figure out who made the card to begin with). The solution is to keep a spare RAID card, exactly identical to the production one.

Hard drives are not as bad as RAID cards, but because the RAID card has physical connectors to the drives you must use compatible drives, and significantly larger drives may cause problems. Significant care is needed when ordering replacement drives.

Motherboards are typically more difficult than drives but less so than RAID cards. In most cases just verifying that compatible slots are available is sufficient, but bootable RAID arrays can be no end of headaches. The way to avoid this problem is an external enclosure, but that is not cheap.

All these problems can be solved by throwing money at them, but for a cheap system that is not desirable. Software RAID, on the other hand, is immune to most (though not quite all) of these issues because it can use any block device.

The one drawback of software RAID on a cheap system is booting. As far as I know the only bootloader that supports RAID is GRUB, and it only supports RAID 1, which means your /boot must be stored on RAID 1. That is not a problem as long as you are only using RAID 1, and only a minor problem in most other cases. However, GRUB itself (specifically the first-stage boot block) cannot be stored on the RAID. This can be managed by putting a spare copy on the other drives.
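A hedged sketch of the layout this answer (and the comment below) describes: a small partition on each disk mirrored for /boot, the rest mirrored for data, and the bootloader installed on every member disk so either one can boot. Partition and device names are assumptions:

    # Small first partition on each disk mirrored for /boot, second partition mirrored for the rest
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # / (or data)

    # Install the first-stage bootloader on every member disk, not just the first,
    # so the machine can still boot if either drive dies (CentOS 7 uses grub2-install)
    grub2-install /dev/sda
    grub2-install /dev/sdb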






  • The way I set up my bootable RAID 1 was to create a /boot partition on each drive and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install on each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well. – nstenz, Oct 4 at 18:15

  • @nstenz, you described my setup almost exactly. The data partition got RAID 6 and LVM, and boot got RAID 1. – hildred, Oct 4 at 20:57

















Answer (score 1) – dgould (new contributor)
  1. As others have said, there's no benefit to hardware RAID, and various downsides. My main reasons for preferring software RAID are that it's simpler and more portable (and thus more likely to allow a successful recovery from various failure scenarios).

  2. (Also as others have said) 3-disk RAID 5 is a really bad RAID scheme -- it's almost the worst of all worlds, with very little benefit. It is sort of a compromise between RAID 0 and RAID 1, and slightly better than either of those, but that's about the only good thing to say about it. RAID has moved on to much better schemes, like RAID 6.

  3. My advice (hardware):

    • Get a 4-port SATA card for that PCI slot, bringing you to six total SATA ports -- one for a boot drive and five for data drives. I see one for ~$15, advertised as hardware RAID, but you can just ignore those features and use it as plain SATA.

    • Get a small SSD for the boot drive. I know there's still the perception that "SSDs are too expensive", but it's barely true anymore, and not at all at the small end -- 120 GB is way more than you'll need for this boot drive, and you can get one for ~$25.

    • An optional but really nice addition (if your PC case has 3x 5.25" drive bays) is a drive bay converter: you can turn three 5.25" (optical) drive bays into five hot-swappable front-loading 3.5" (HDD) bays, so you won't have to take the machine apart (or even shut it down) to swap drives. (Search for "backplane 5 in 3".)

    • Use 5x whatever-size HDDs in RAID 6 (dual redundancy, 3x the drive size of usable space); see the sketch after this list.

  4. My advice (software): Look at OpenMediaVault for the OS / file-server software. It's an "appliance distro" perfect for exactly this kind of use -- Debian-based (actually a Linux port of the BSD-based FreeNAS) with everything pre-configured for a NAS server. It makes setting up and managing software RAID (as well as LVM, network shares, etc.) really simple.
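A minimal sketch of the five-drive RAID 6 suggested in point 3, done with plain mdadm rather than through the OpenMediaVault UI; the device names /dev/sdb through /dev/sdf are assumptions:

    # Five drives in RAID 6: any two may fail, and the capacity of three drives is usable
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]

    # Watch the initial sync, then format and mount as usual
    cat /proc/mdstat
    mkfs.xfs /dev/md0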






























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    39
    down vote



    accepted










    A 10-20$ "hardware" RAID card is nothing more than a opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it.



    A 200$ RAID card offer proper hardware support (ie: a RoC running another opaque, binary blob which is better and does not run on the main host CPU). I suggest to stay away from these cards also because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.



    A 300/400$ RAID card offering a powerloss-protected writeback cache is worth buying, but not for small, Atom-based PC/NAS.



    In short: I strongly suggest you to use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup but, with an Atom CPU and only 4 GB RAM, do not expect high performance.



    For other information, read here






    share|improve this answer






















    • Thanks, i will use mdadm, do you advice to put the system on an external usb and two disks used as memories or should I install the system and then create the raid adding the disk? Thanks
      – Igor Z.
      Oct 4 at 13:15











    • @IgorZ.It's not clear to me how do you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4k random write performance).
      – shodanshok
      Oct 4 at 16:05










    • RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
      – ilkkachu
      Oct 4 at 19:09










    • POC. Proof of Concept?
      – BaronSamedi1958
      Oct 4 at 19:43






    • 1




      RoC means Raid on Chip. Basically, a marketing term to identify an embedded system running a RAID-related OS with hardware offload for parity calculation.
      – shodanshok
      Oct 4 at 20:11















    up vote
    39
    down vote



    accepted










    A 10-20$ "hardware" RAID card is nothing more than a opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it.



    A 200$ RAID card offer proper hardware support (ie: a RoC running another opaque, binary blob which is better and does not run on the main host CPU). I suggest to stay away from these cards also because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.



    A 300/400$ RAID card offering a powerloss-protected writeback cache is worth buying, but not for small, Atom-based PC/NAS.



    In short: I strongly suggest you to use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup but, with an Atom CPU and only 4 GB RAM, do not expect high performance.



    For other information, read here






    share|improve this answer






















    • Thanks, i will use mdadm, do you advice to put the system on an external usb and two disks used as memories or should I install the system and then create the raid adding the disk? Thanks
      – Igor Z.
      Oct 4 at 13:15











    • @IgorZ.It's not clear to me how do you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4k random write performance).
      – shodanshok
      Oct 4 at 16:05










    • RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
      – ilkkachu
      Oct 4 at 19:09










    • POC. Proof of Concept?
      – BaronSamedi1958
      Oct 4 at 19:43






    • 1




      RoC means Raid on Chip. Basically, a marketing term to identify an embedded system running a RAID-related OS with hardware offload for parity calculation.
      – shodanshok
      Oct 4 at 20:11













    up vote
    39
    down vote



    accepted







    up vote
    39
    down vote



    accepted






    A 10-20$ "hardware" RAID card is nothing more than a opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it.



    A 200$ RAID card offer proper hardware support (ie: a RoC running another opaque, binary blob which is better and does not run on the main host CPU). I suggest to stay away from these cards also because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.



    A 300/400$ RAID card offering a powerloss-protected writeback cache is worth buying, but not for small, Atom-based PC/NAS.



    In short: I strongly suggest you to use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup but, with an Atom CPU and only 4 GB RAM, do not expect high performance.



    For other information, read here






    share|improve this answer














    A 10-20$ "hardware" RAID card is nothing more than a opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it.



    A 200$ RAID card offer proper hardware support (ie: a RoC running another opaque, binary blob which is better and does not run on the main host CPU). I suggest to stay away from these cards also because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.



    A 300/400$ RAID card offering a powerloss-protected writeback cache is worth buying, but not for small, Atom-based PC/NAS.



    In short: I strongly suggest you to use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup but, with an Atom CPU and only 4 GB RAM, do not expect high performance.



    For other information, read here







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Oct 4 at 16:08

























    answered Oct 4 at 12:35









    shodanshok

    24.1k34079




    24.1k34079











    • Thanks, i will use mdadm, do you advice to put the system on an external usb and two disks used as memories or should I install the system and then create the raid adding the disk? Thanks
      – Igor Z.
      Oct 4 at 13:15











    • @IgorZ.It's not clear to me how do you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4k random write performance).
      – shodanshok
      Oct 4 at 16:05










    • RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
      – ilkkachu
      Oct 4 at 19:09










    • POC. Proof of Concept?
      – BaronSamedi1958
      Oct 4 at 19:43






    • 1




      RoC means Raid on Chip. Basically, a marketing term to identify an embedded system running a RAID-related OS with hardware offload for parity calculation.
      – shodanshok
      Oct 4 at 20:11

















    • Thanks, i will use mdadm, do you advice to put the system on an external usb and two disks used as memories or should I install the system and then create the raid adding the disk? Thanks
      – Igor Z.
      Oct 4 at 13:15











    • @IgorZ.It's not clear to me how do you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4k random write performance).
      – shodanshok
      Oct 4 at 16:05










    • RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
      – ilkkachu
      Oct 4 at 19:09










    • POC. Proof of Concept?
      – BaronSamedi1958
      Oct 4 at 19:43






    • 1




      RoC means Raid on Chip. Basically, a marketing term to identify an embedded system running a RAID-related OS with hardware offload for parity calculation.
      – shodanshok
      Oct 4 at 20:11
















    Thanks, i will use mdadm, do you advice to put the system on an external usb and two disks used as memories or should I install the system and then create the raid adding the disk? Thanks
    – Igor Z.
    Oct 4 at 13:15





    Thanks, i will use mdadm, do you advice to put the system on an external usb and two disks used as memories or should I install the system and then create the raid adding the disk? Thanks
    – Igor Z.
    Oct 4 at 13:15













    @IgorZ.It's not clear to me how do you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4k random write performance).
    – shodanshok
    Oct 4 at 16:05




    @IgorZ.It's not clear to me how do you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4k random write performance).
    – shodanshok
    Oct 4 at 16:05












    RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
    – ilkkachu
    Oct 4 at 19:09




    RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
    – ilkkachu
    Oct 4 at 19:09












    POC. Proof of Concept?
    – BaronSamedi1958
    Oct 4 at 19:43




    POC. Proof of Concept?
    – BaronSamedi1958
    Oct 4 at 19:43




    1




    1




    RoC means Raid on Chip. Basically, a marketing term to identify an embedded system running a RAID-related OS with hardware offload for parity calculation.
    – shodanshok
    Oct 4 at 20:11





    RoC means Raid on Chip. Basically, a marketing term to identify an embedded system running a RAID-related OS with hardware offload for parity calculation.
    – shodanshok
    Oct 4 at 20:11













    up vote
    11
    down vote













    Go ZFS. Seriously. It's so much better compared to hardware RAID, and reason is simple: It uses variable size strips so parity RAID modes (Z1 & Z2, RAID5 & RAID6) equivalents are performing @ RAID10 level still being extremely cost-efficient. + you can use flash cache (ZIL, L2ARC etc) running @ dedicated set of PCIe lanes.



    https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/



    There's ZFS on Linux, ZoL.



    https://zfsonlinux.org/






    share|improve this answer
















    • 3




      I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
      – Josh
      Oct 4 at 14:41






    • 1




      +1. Anyway, ZRAID is know for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS peformance of a single disk. Give a look here
      – shodanshok
      Oct 4 at 16:11











    • Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
      – Mark
      Oct 4 at 19:52






    • 2




      Agreed. ZFS is the best choice for the archival data. The performance always depends on the stripe size and the size of the blocks which will be written to the disk, so it is complicated to calculate it, but very easy to optimize the performance :) Moreover, ZFS was not designed for virtualization or intensive IO workload.
      – Strepsils
      2 days ago






    • 1




      2Mark: That’s for deduplicated capacity.
      – BaronSamedi1958
      2 days ago















    up vote
    11
    down vote













    Go ZFS. Seriously. It's so much better compared to hardware RAID, and reason is simple: It uses variable size strips so parity RAID modes (Z1 & Z2, RAID5 & RAID6) equivalents are performing @ RAID10 level still being extremely cost-efficient. + you can use flash cache (ZIL, L2ARC etc) running @ dedicated set of PCIe lanes.



    https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/



    There's ZFS on Linux, ZoL.



    https://zfsonlinux.org/






    share|improve this answer
















    • 3




      I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
      – Josh
      Oct 4 at 14:41






    • 1




      +1. Anyway, ZRAID is know for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS peformance of a single disk. Give a look here
      – shodanshok
      Oct 4 at 16:11











    • Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
      – Mark
      Oct 4 at 19:52






    • 2




      Agreed. ZFS is the best choice for the archival data. The performance always depends on the stripe size and the size of the blocks which will be written to the disk, so it is complicated to calculate it, but very easy to optimize the performance :) Moreover, ZFS was not designed for virtualization or intensive IO workload.
      – Strepsils
      2 days ago






    • 1




      2Mark: That’s for deduplicated capacity.
      – BaronSamedi1958
      2 days ago













    up vote
    11
    down vote










    up vote
    11
    down vote









    Go ZFS. Seriously. It's so much better compared to hardware RAID, and reason is simple: It uses variable size strips so parity RAID modes (Z1 & Z2, RAID5 & RAID6) equivalents are performing @ RAID10 level still being extremely cost-efficient. + you can use flash cache (ZIL, L2ARC etc) running @ dedicated set of PCIe lanes.



    https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/



    There's ZFS on Linux, ZoL.



    https://zfsonlinux.org/






    share|improve this answer












    Go ZFS. Seriously. It's so much better compared to hardware RAID, and reason is simple: It uses variable size strips so parity RAID modes (Z1 & Z2, RAID5 & RAID6) equivalents are performing @ RAID10 level still being extremely cost-efficient. + you can use flash cache (ZIL, L2ARC etc) running @ dedicated set of PCIe lanes.



    https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/



    There's ZFS on Linux, ZoL.



    https://zfsonlinux.org/







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Oct 4 at 13:51









    BaronSamedi1958

    6,8431927




    6,8431927







    • 3




      I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
      – Josh
      Oct 4 at 14:41






    • 1




      +1. Anyway, ZRAID is know for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS peformance of a single disk. Give a look here
      – shodanshok
      Oct 4 at 16:11











    • Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
      – Mark
      Oct 4 at 19:52






    • 2




      Agreed. ZFS is the best choice for the archival data. The performance always depends on the stripe size and the size of the blocks which will be written to the disk, so it is complicated to calculate it, but very easy to optimize the performance :) Moreover, ZFS was not designed for virtualization or intensive IO workload.
      – Strepsils
      2 days ago






    • 1




      2Mark: That’s for deduplicated capacity.
      – BaronSamedi1958
      2 days ago













    • 3




      I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
      – Josh
      Oct 4 at 14:41






    • 1




      +1. Anyway, ZRAID is know for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS peformance of a single disk. Give a look here
      – shodanshok
      Oct 4 at 16:11











    • Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
      – Mark
      Oct 4 at 19:52






    • 2




      Agreed. ZFS is the best choice for the archival data. The performance always depends on the stripe size and the size of the blocks which will be written to the disk, so it is complicated to calculate it, but very easy to optimize the performance :) Moreover, ZFS was not designed for virtualization or intensive IO workload.
      – Strepsils
      2 days ago






    • 1




      2Mark: That’s for deduplicated capacity.
      – BaronSamedi1958
      2 days ago








    3




    3




    I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
    – Josh
    Oct 4 at 14:41




    I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
    – Josh
    Oct 4 at 14:41




    1




    1




    +1. Anyway, ZRAID is know for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS peformance of a single disk. Give a look here
    – shodanshok
    Oct 4 at 16:11





    +1. Anyway, ZRAID is know for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS peformance of a single disk. Give a look here
    – shodanshok
    Oct 4 at 16:11













    Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
    – Mark
    Oct 4 at 19:52




    Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
    – Mark
    Oct 4 at 19:52




    2




    2




    Agreed. ZFS is the best choice for the archival data. The performance always depends on the stripe size and the size of the blocks which will be written to the disk, so it is complicated to calculate it, but very easy to optimize the performance :) Moreover, ZFS was not designed for virtualization or intensive IO workload.
    – Strepsils
    2 days ago




    Agreed. ZFS is the best choice for the archival data. The performance always depends on the stripe size and the size of the blocks which will be written to the disk, so it is complicated to calculate it, but very easy to optimize the performance :) Moreover, ZFS was not designed for virtualization or intensive IO workload.
    – Strepsils
    2 days ago




    1




    1




    2Mark: That’s for deduplicated capacity.
    – BaronSamedi1958
    2 days ago





    2Mark: That’s for deduplicated capacity.
    – BaronSamedi1958
    2 days ago











    up vote
    3
    down vote













    Here is another argument for software on a cheap system.



    Stuff breaks, you know this that is why you are using raid, but raid controllers also break, as does ram, processor, power-supply and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better. Blow a 100w power-supply, grab a 150w one and get going. Similar with most components. However with a hardware raid there are now three exceptions to this pattern: raid controller, hard drives, and motherboard (or other upstream if not an expansion card).



    Let's look at the raid card. Most raid cards are poorly documented, and incompatible. You cannot replace a card by company xyz with one by abc, as they store data differently (assuming you can figure out who made the card to begin with). The solution to this is to have a spare raid card, exactly identical to the production one.



    Hard drives are not as bad as raid cards, but as raid cards have physical connectors to the drives you must use compatible drives and significantly larger drives may cause problems. Significant care is needed in ordering replacement drives.



    Motherboards are typically more difficult than drives but less than raid cards. In most cases just verifying that compatible slots are available is sufficient but bootable raids may be no end of headaches. The way to avoid this problem is external enclosures, but this is not cheap.



    All these problems can be solved by throwing money at the problem, but for a cheap system this is not desirable. Software raids on the other hand are immune to most (but not quite all) of these issues because it can use any block device.



    The one drawback to software raid on a cheap system is booting. As far as I know the only bootloader that supports raid is grub and it only supports raid 1 which means your /boot must be stored on raid 1 which is not a problem as long as you are only using raid 1 and only a minor problem in most other cases. However grub itself (specifically the first stage boot block) cannot be stored on the raid. This can be managed by putting a spare copy on the other drives.






    share|improve this answer




















    • The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
      – nstenz
      Oct 4 at 18:15










    • @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
      – hildred
      Oct 4 at 20:57














    up vote
    3
    down vote













    Here is another argument for software on a cheap system.



    Stuff breaks, you know this that is why you are using raid, but raid controllers also break, as does ram, processor, power-supply and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better. Blow a 100w power-supply, grab a 150w one and get going. Similar with most components. However with a hardware raid there are now three exceptions to this pattern: raid controller, hard drives, and motherboard (or other upstream if not an expansion card).



    Let's look at the raid card. Most raid cards are poorly documented, and incompatible. You cannot replace a card by company xyz with one by abc, as they store data differently (assuming you can figure out who made the card to begin with). The solution to this is to have a spare raid card, exactly identical to the production one.



    Hard drives are not as bad as raid cards, but as raid cards have physical connectors to the drives you must use compatible drives and significantly larger drives may cause problems. Significant care is needed in ordering replacement drives.



    Motherboards are typically more difficult than drives but less than raid cards. In most cases just verifying that compatible slots are available is sufficient but bootable raids may be no end of headaches. The way to avoid this problem is external enclosures, but this is not cheap.



    All these problems can be solved by throwing money at the problem, but for a cheap system this is not desirable. Software raids on the other hand are immune to most (but not quite all) of these issues because it can use any block device.



    The one drawback to software raid on a cheap system is booting. As far as I know the only bootloader that supports raid is grub and it only supports raid 1 which means your /boot must be stored on raid 1 which is not a problem as long as you are only using raid 1 and only a minor problem in most other cases. However grub itself (specifically the first stage boot block) cannot be stored on the raid. This can be managed by putting a spare copy on the other drives.






    share|improve this answer




















    • The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
      – nstenz
      Oct 4 at 18:15










    • @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
      – hildred
      Oct 4 at 20:57












    up vote
    3
    down vote










    up vote
    3
    down vote









    Here is another argument for software on a cheap system.



    Stuff breaks, you know this that is why you are using raid, but raid controllers also break, as does ram, processor, power-supply and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better. Blow a 100w power-supply, grab a 150w one and get going. Similar with most components. However with a hardware raid there are now three exceptions to this pattern: raid controller, hard drives, and motherboard (or other upstream if not an expansion card).



    Let's look at the raid card. Most raid cards are poorly documented, and incompatible. You cannot replace a card by company xyz with one by abc, as they store data differently (assuming you can figure out who made the card to begin with). The solution to this is to have a spare raid card, exactly identical to the production one.



    Hard drives are not as bad as raid cards, but as raid cards have physical connectors to the drives you must use compatible drives and significantly larger drives may cause problems. Significant care is needed in ordering replacement drives.



    Motherboards are typically more difficult than drives but less than raid cards. In most cases just verifying that compatible slots are available is sufficient but bootable raids may be no end of headaches. The way to avoid this problem is external enclosures, but this is not cheap.



    All these problems can be solved by throwing money at the problem, but for a cheap system this is not desirable. Software raids on the other hand are immune to most (but not quite all) of these issues because it can use any block device.



    The one drawback to software raid on a cheap system is booting. As far as I know the only bootloader that supports raid is grub and it only supports raid 1 which means your /boot must be stored on raid 1 which is not a problem as long as you are only using raid 1 and only a minor problem in most other cases. However grub itself (specifically the first stage boot block) cannot be stored on the raid. This can be managed by putting a spare copy on the other drives.






    share|improve this answer












    Here is another argument for software on a cheap system.



    Stuff breaks, you know this that is why you are using raid, but raid controllers also break, as does ram, processor, power-supply and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better. Blow a 100w power-supply, grab a 150w one and get going. Similar with most components. However with a hardware raid there are now three exceptions to this pattern: raid controller, hard drives, and motherboard (or other upstream if not an expansion card).



    Let's look at the raid card. Most raid cards are poorly documented, and incompatible. You cannot replace a card by company xyz with one by abc, as they store data differently (assuming you can figure out who made the card to begin with). The solution to this is to have a spare raid card, exactly identical to the production one.



    Hard drives are not as bad as raid cards, but as raid cards have physical connectors to the drives you must use compatible drives and significantly larger drives may cause problems. Significant care is needed in ordering replacement drives.



    Motherboards are typically more difficult than drives but less than raid cards. In most cases just verifying that compatible slots are available is sufficient but bootable raids may be no end of headaches. The way to avoid this problem is external enclosures, but this is not cheap.



    All these problems can be solved by throwing money at the problem, but for a cheap system this is not desirable. Software raids on the other hand are immune to most (but not quite all) of these issues because it can use any block device.



    The one drawback to software raid on a cheap system is booting. As far as I know the only bootloader that supports raid is grub and it only supports raid 1 which means your /boot must be stored on raid 1 which is not a problem as long as you are only using raid 1 and only a minor problem in most other cases. However grub itself (specifically the first stage boot block) cannot be stored on the raid. This can be managed by putting a spare copy on the other drives.







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Oct 4 at 17:23









    hildred

    196128




    196128











    • The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
      – nstenz
      Oct 4 at 18:15










    • @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
      – hildred
      Oct 4 at 20:57
















    • The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
      – nstenz
      Oct 4 at 18:15










    • @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
      – hildred
      Oct 4 at 20:57















    The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
    – nstenz
    Oct 4 at 18:15




    The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
    – nstenz
    Oct 4 at 18:15












    @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
    – hildred
    Oct 4 at 20:57




    @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
    – hildred
    Oct 4 at 20:57










    up vote
    1
    down vote













    1. As others have said, there's no benefit to hardware RAID, and various downsides. My main reasons for preferring software RAID is that it's simpler and more portable (and thus more likely to actually have a successful recovery from various failure scenarios).


    2. (Also as others have said) 3 disk RAID 5 is a really bad RAID scheme -- it's almost the worst of all worlds, with very little benefit. Sort of a compromise between RAID 0 and RAID 1, and slightly better than either of those, but that's about the only good thing to say about it. RAID has moved on to much better schemes, like RAID 6.



    3. My advice (hardware):



      • Get a 4-port SATA card for that PCI slot, bringing you to six total SATA ports -- one for a boot drive, and five for data drives. I see one for ~$15, advertised as hardware RAID, but you can just ignore those features and use it as plain SATA.


      • Get a small SSD for the boot drive. I know there's still the perception that "SSDs are too expensive", but it's barely true anymore, and not at all on the small end -- 120GB is way more than you'll need for this boot drive, and you can get one for ~$25.


      • An optional but really nice addition (if your PC case has 3x 5.25" drive bays) is to get a drive bay converter: you can turn three 5.25" (optical) drive bays into 5 hot-swappable front-loading 3.5" (HDD) bays, so you won't have to take the machine apart (or even shut it down) to swap drives. (Search for "backplane 5 in 3".)


      • Use five HDDs of whatever size in RAID 6 (dual redundancy: any two drives can fail, and usable space is 3x a single drive). A minimal mdadm sketch follows this list.



    4. My advice (software): Look at OpenMediaVault for the OS / file-server software. It's an "appliance distro" suited to exactly this kind of use -- Debian-based (started by a former FreeNAS developer as a Linux counterpart to the BSD-based FreeNAS) with everything pre-configured for a NAS server. It makes setting up and managing software RAID (plus LVM, network shares, etc.) really simple.
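
    For reference, a minimal mdadm sketch of the 5-drive RAID 6 layout from point 3 (drive names and the mount point are placeholders; OpenMediaVault would do the equivalent through its web UI):

        # Placeholder device names for the five data drives
        mdadm --create /dev/md0 --level=6 --raid-devices=5 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

        mkfs.ext4 /dev/md0                 # XFS would be an equally reasonable choice
        mkdir -p /srv/archive
        mount /dev/md0 /srv/archive

        # Persist the array and the mount across reboots
        # (on Debian/OMV the config file is /etc/mdadm/mdadm.conf)
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        echo '/dev/md0 /srv/archive ext4 defaults 0 2' >> /etc/fstab

    Afterwards, cat /proc/mdstat shows the array and the progress of its initial sync.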






    share|improve this answer








    New contributor




    dgould is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
    Check out our Code of Conduct.





















        answered Oct 4 at 19:05









        dgould

        111



