Write access time slow on RAID1

I'm running MongoDB on my personal computer. I noticed that performance is much slower when the data are on my software RAID1 array of two recent spinning hard drives than when they are on an older spinning hard drive without RAID.



Old drive, no RAID



Single operations:



> var d = new Date(); db.test.createIndex( { "test": 1 } ); print(new Date - d + 'ms');
251ms
> var d = new Date(); db.test.createIndex( { "test": "2dsphere" } ); print(new Date - d + 'ms');
83ms
> var d = new Date(); db.dropDatabase(); print(new Date - d + 'ms');
71ms


Whole test suite: 250s



Recent drives, RAID1



Single operations:



> var d = new Date(); db.test.createIndex( { "test": 1 } ); print(new Date - d + 'ms');
1220ms
> var d = new Date(); db.test.createIndex( { "test": "2dsphere" } ); print(new Date - d + 'ms');
597ms
> var d = new Date(); db.dropDatabase(); print(new Date - d + 'ms');
671ms
> var d = new Date(); db.dropDatabase(); print(new Date - d + 'ms');
1ms


Whole test suite: 700s



Configuration files



In case it would be useful (I doubt it):



/etc/fstab



UUID=d719f337-d835-4688-baf2-3e29f147ff15 / ext4 errors=remount-ro 0 1
# /home was on /dev/md0p3 during installation
UUID=def01643-c71e-47df-9dc8-67096243aee6 /home ext4 defaults 0 2
# swap was on /dev/md0p1 during installation
UUID=d43319a8-92fb-437d-b576-ef964276cde none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0

UUID="dd8b1f05-c65b-42e1-a45e-0ef421faf1df" /mnt/bak ext4 defaults,errors=remount-ro 0 1


/etc/mdadm/mdadm.conf



# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=3a0f91ae:51c48198:3d1e26ed:118a1938 name=bouzin:0

# This configuration was auto-generated on Sun, 24 Jan 2016 18:00:55 +0100 by mkconf


Question



From what I've read, write access on the RAID1 should be roughly equal to the write access on a single equivalent drive.



  • Could this 5400/7200 rpm factor explain the order-of-magnitude difference in the tests above?


  • Could it be better without the RAID?


  • Any interesting test/benchmark I could run? For now, I only have Mongo shell tests, but they seem to point to the RAID, or the drives, rather than to Mongo itself. Is there some application-agnostic test I could run to identify anything?


  • Could anything be wrong or suboptimal with the RAID configuration?
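One application-agnostic starting point is timing a raw sequential write with dd on each drive. This is only a sketch: the directory list is a placeholder (it defaults to a throwaway temp dir so the script runs anywhere; on the machine in question it might be something like DIRS="/home /mnt/bak").

```shell
#!/bin/sh
# DIRS lists one directory per drive under test. The default below is a
# placeholder temp dir; override it with the real mount points.
DIRS="${DIRS:-$(mktemp -d)}"
for dir in $DIRS; do
    echo "== $dir =="
    # conv=fdatasync forces the data to disk before dd exits, so the
    # rate dd reports reflects the drive rather than the page cache.
    dd if=/dev/zero of="$dir/ddtest" bs=1M count=64 conv=fdatasync
    rm -f "$dir/ddtest"
done
```

dd prints the elapsed time and transfer rate on stderr when it finishes, so the two drives can be compared directly.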


EDIT:



An important detail: I mixed 7200 rpm and 5400 rpm drives.










  • I can't confirm it without full past and current mount info, but based on what I see here I would surmise that previously you had swap on a separate drive, while currently your swap partition shares a physical drive with your RAID array, resulting in a measurable reduction in performance. This conjecture is based only on the comment in your fstab that reads # swap was on /dev/md0p1 during installation.

    – Elder Geek
    Jan 30 '17 at 22:38












  • Clearly since a 5400 RPM drive spins at 75% of the speed of a 7200 RPM drive that will have an impact. By itself though, it doesn't explain the magnitude of the difference you are seeing. There may be other variables such as interface speed, seek speed and the aforementioned swap location.

    – Elder Geek
    Jan 30 '17 at 22:45












  • There is no before/after. Both the RAID and the 3rd drive are mounted on the machine. Indeed, from what I remember, I assembled the two drives in a RAID then created a logical volume in there, in which I put three partitions: /, /home and swap. I added the third drive afterwards and it has only one partition, no swap. (I'm no RAID/LVM expert, more info here: unix.stackexchange.com/questions/256878/…)

    – Jérôme
    Jan 30 '17 at 22:46












  • So it could be partly the RPM difference, partly the swap issue? I don't get the swap issue, though. What if I don't use swap at that moment? Is there still an impact?

    – Jérôme
    Jan 30 '17 at 22:48






  • Please check the drives' health with smartctl and look at the Current_Pending_Sector attribute.

    – Khirgiy Mikhail
    Jan 31 '17 at 5:36















performance software-raid raid1

asked Jan 30 '17 at 22:30, edited Jan 31 '17 at 8:28 — Jérôme
2 Answers
RAID1 will be slower than a single drive even when the drive specs are equal.



The reason is that while RAID1 improves reliability by performing every write on both drives, that same duplication reduces write performance.



RAID0 splits writes between 2 drives which improves performance by sharing the load but reduces reliability for the same reason.



RAID5 is a happy medium that results in better performance than a single drive as well as increased reliability as the failure of a drive won't make the array inoperable (although it will slow down substantially under these conditions).



Regardless of your benchmarking method, to obtain accurate results you should run several tests and average them, and do so in single-user mode while the system is not running other tasks; anything else will skew your results, likely producing higher iowait times than anticipated.
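The repeat-and-average idea can be sketched as a small shell loop. The dd write below is just a stand-in for whatever operation is being measured, and TARGET is a placeholder directory on the drive under test.

```shell
#!/bin/sh
# Time the same write five times and report the mean in milliseconds.
TARGET="${TARGET:-$(mktemp -d)}"       # placeholder: override with a real path
total_ms=0
for i in 1 2 3 4 5; do
    start=$(date +%s%N)                # GNU date: nanoseconds since the epoch
    dd if=/dev/zero of="$TARGET/run" bs=1M count=16 conv=fdatasync 2>/dev/null
    end=$(date +%s%N)
    total_ms=$(( total_ms + (end - start) / 1000000 ))
    rm -f "$TARGET/run"
done
echo "mean over 5 runs: $(( total_ms / 5 ))ms"
```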



Another simple form of benchmarking is to run dd with a sample file of a specific size. Say you had (or created) a sourcefile of random data, X GB in size; you could then run time dd if=sourcefile of=targetfile.



By using dd's bs= parameter you can run the test with different block sizes (see man dd), which might be useful for tuning depending on your needs/environment.
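That block-size sweep might look like the following sketch. It works in a throwaway temp dir here; on real hardware, point it at the filesystem under test. The file names and sizes are placeholders.

```shell
#!/bin/sh
# Create a random source file once, then copy it with several block
# sizes; dd reports elapsed time and MB/s on stderr after each run.
cd "$(mktemp -d)"                                   # placeholder working dir
dd if=/dev/urandom of=sourcefile bs=1M count=32 2>/dev/null

for bs in 4k 64k 1M; do
    echo "bs=$bs:"
    # conv=fdatasync flushes to disk before dd exits so the figures
    # measure the drive, not the page cache.
    dd if=sourcefile of=targetfile bs="$bs" conv=fdatasync
    rm -f targetfile
done
```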






  • Please do yourself a favor, and don't recommend RAID5. By current standards, RAID6 is acceptable.

    – Vlastimil
    Jan 31 '17 at 2:41











  • @Vlastimil You make a valid point regarding RAID6 (sadly it requires a minimum of a 100% increase in the number of drives required over the RAID1 that the OP is using where RAID5 only requires a minimum of 50% increase in the number of drives.) Perhaps I was incorrect in my assumption that someone using RAID1 might have budgetary considerations. Furthermore, RAID 6 rebuilds work the array a lot harder during the rebuild; the whole point of RAID is to maintain availability in the presence of a drive failure. The extra work during the rebuild reduces availability. I see both sides.

    – Elder Geek
    Jan 31 '17 at 3:51











  • I read that write performance equals that of the worst disk, but that probably only applies to hardware RAID, not software.

    – Jérôme
    Jan 31 '17 at 8:28











  • @Jérôme That sounds correct to me. (Applies to hardware RAID)

    – Elder Geek
    Jan 31 '17 at 18:49


















For writes, RAID1 will be, at maximum, as fast as the slowest drive in the array.



Even if you had 3 drives in RAID1, where two of them were enterprise SSDs and one a consumer HDD, you would get the write speed of that HDD.



For those who have never used or seen RAID1 on 3 or more drives, here is a Wikipedia excerpt (link):




RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks







  • RAID1 on 3 drives? I've never seen a mirror use 3 drives. Are you perhaps referring to RAID1E?

    – Elder Geek
    Jan 31 '17 at 3:56











  • @ElderGeek No, I am not. You can mirror data across as many drives as you wish.

    – Vlastimil
    Jan 31 '17 at 4:08











  • "RAID1 will be, at maximum, as fast as the slowest drive in the array." Yes, I know. Both drives in the array are identical: WD Red 1 TB NAS, 5400 RPM. I can't remember why I picked 5400 RPM; energy consumption, perhaps. Besides, picking identical disks for a RAID makes it likely that both disks break at the same time. Using a RAID was probably useless, or at least overkill, on this home desktop anyway. From further experience, a daily rsync totally suits my needs.

    – Jérôme
    Jan 31 '17 at 8:26











  • @Jérôme I can assure you it is not overkill, I use RAID6 for home purposes. It is a matter how valuable the data are to you. If there are just a bunch of movies, you can download them anytime, right, but having e.g. 10TB of movies would be actually quite exhausting to replace.

    – Vlastimil
    Jan 31 '17 at 8:53











  • Interesting. I'm curious why you would mirror data over 3 drives in a RAID 1 configuration rather than utilize the same drives in a RAID 5 array resulting in double the storage and less stress on the drives.

    – Elder Geek
    Jan 31 '17 at 18:55










Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "106"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f341315%2fwrite-access-time-slow-on-raid1%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









0














RAID1 will be slower than a single drive even the drive specs are equal.



The reason for this is while RAID1 improves reliability by performing every write to both drives, this same action reduces performance



RAID0 splits writes between 2 drives which improves performance by sharing the load but reduces reliability for the same reason.



RAID5 is a happy medium that results in better performance than a single drive as well as increased reliability as the failure of a drive won't make the array inoperable (although it will slow down substantially under these conditions).



Regardless of your method of benchmarking, to obtain accurate benchmarks you should run several tests and average the results and do so in single user mode when the system is not running other tasks as anything else will skew your results, likely resulting in higher iowait times than anticipated.



Another simple form of benchmarking would be to run dd with a sample file of a specific size. say you had (or created) a sourcefile of random data of X GB in size. you could then run time dd if=sourcefile of=target file



by using dd's bs= parameter you could run the test with different block sizes (see man dd) which might be useful for tuning depending on your needs/environment.






share|improve this answer




















  • 1





    Please do yourself a favor, and don't recommend RAID5. By current standards, RAID6 is acceptable.

    – Vlastimil
    Jan 31 '17 at 2:41











  • @Vlastimil You make a valid point regarding RAID6 (sadly it requires a minimum of a 100% increase in the number of drives required over the RAID1 that the OP is using where RAID5 only requires a minimum of 50% increase in the number of drives.) Perhaps I was incorrect in my assumption that someone using RAID1 might have budgetary considerations. Furthermore, RAID 6 rebuilds work the array a lot harder during the rebuild; the whole point of RAID is to maintain availability in the presence of a drive failure. The extra work during the rebuild reduces availability. I see both sides.

    – Elder Geek
    Jan 31 '17 at 3:51











  • I read write performance is equal to worst disk but that probably only applies to hardware RAID, not software.

    – Jérôme
    Jan 31 '17 at 8:28











  • @Jérôme That sounds correct to me. (Applies to hardware RAID)

    – Elder Geek
    Jan 31 '17 at 18:49















0














RAID1 will be slower than a single drive even the drive specs are equal.



The reason for this is while RAID1 improves reliability by performing every write to both drives, this same action reduces performance



RAID0 splits writes between 2 drives which improves performance by sharing the load but reduces reliability for the same reason.



RAID5 is a happy medium that results in better performance than a single drive as well as increased reliability as the failure of a drive won't make the array inoperable (although it will slow down substantially under these conditions).



Regardless of your method of benchmarking, to obtain accurate benchmarks you should run several tests and average the results and do so in single user mode when the system is not running other tasks as anything else will skew your results, likely resulting in higher iowait times than anticipated.



Another simple form of benchmarking would be to run dd with a sample file of a specific size. say you had (or created) a sourcefile of random data of X GB in size. you could then run time dd if=sourcefile of=target file



by using dd's bs= parameter you could run the test with different block sizes (see man dd) which might be useful for tuning depending on your needs/environment.






share|improve this answer




















  • 1





    Please do yourself a favor, and don't recommend RAID5. By current standards, RAID6 is acceptable.

    – Vlastimil
    Jan 31 '17 at 2:41











  • @Vlastimil You make a valid point regarding RAID6 (sadly it requires a minimum of a 100% increase in the number of drives required over the RAID1 that the OP is using where RAID5 only requires a minimum of 50% increase in the number of drives.) Perhaps I was incorrect in my assumption that someone using RAID1 might have budgetary considerations. Furthermore, RAID 6 rebuilds work the array a lot harder during the rebuild; the whole point of RAID is to maintain availability in the presence of a drive failure. The extra work during the rebuild reduces availability. I see both sides.

    – Elder Geek
    Jan 31 '17 at 3:51











  • I read write performance is equal to worst disk but that probably only applies to hardware RAID, not software.

    – Jérôme
    Jan 31 '17 at 8:28











  • @Jérôme That sounds correct to me. (Applies to hardware RAID)

    – Elder Geek
    Jan 31 '17 at 18:49














edited Jan 31 '17 at 3:58
answered Jan 30 '17 at 23:09
Elder Geek







RAID1 will be, at maximum, as fast as the slowest drive in the array.



Even if you had 3 drives in RAID1, where two of them are enterprise SSDs and one is a consumer HDD, you would get the speed of that HDD.



For those who have never used or seen RAID1 on 3 or more drives, here is a wiki excerpt (link):




RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks







  • RAID1 on 3 drives? I've never seen a mirror use 3 drives. Are you perhaps referring to RAID1E?

    – Elder Geek
    Jan 31 '17 at 3:56

  • @ElderGeek No, I am not. You can have mirrored data across as many drives as you wish.

    – Vlastimil
    Jan 31 '17 at 4:08

  • "RAID1 will be, at maximum, as fast as the slowest drive in the array." Yes, I know. Both drives in the array are identical: WD Red 1 TB NAS, 5400 RPM. I can't remember why I picked 5400 RPM; energy consumption, perhaps. Besides, picking identical disks for a RAID makes it likely that both disks break at the same time. Using RAID was probably useless, or at least overkill, on this home desktop anyway. From further experience, a daily rsync totally suits my needs.

    – Jérôme
    Jan 31 '17 at 8:26

  • @Jérôme I can assure you it is not overkill; I use RAID6 for home purposes. It is a matter of how valuable the data are to you. If it's just a bunch of movies you can download them again anytime, but having e.g. 10 TB of movies would actually be quite exhausting to replace.

    – Vlastimil
    Jan 31 '17 at 8:53

  • Interesting. I'm curious why you would mirror data over 3 drives in a RAID1 configuration rather than use the same drives in a RAID5 array, resulting in double the storage and less stress on the drives.

    – Elder Geek
    Jan 31 '17 at 18:55















edited Jan 31 '17 at 4:23
answered Jan 31 '17 at 2:50
Vlastimil


















