How to specify multiple schedulers on the kernel boot command line?

We have systems with both spinning mechanical disks and NVMe storage. We want to reduce the CPU overhead for I/O by taking any I/O scheduler out of the way, and we want to specify this on the Linux boot command line, i.e. in GRUB_CMDLINE_LINUX in the file /etc/default/grub.



  • For mechanical disks, we can append elevator=noop to the command line. This corresponds to the noop value in /sys/block/sda/queue/scheduler.

  • For NVMe storage, we instead use none in /sys/block/nvme0n1/queue/scheduler, which presumably (we could not confirm) can be specified at boot time by appending elevator=none.

This becomes a two-part question:



  1. Is elevator=none the correct value to use for NVMe storage in GRUB_CMDLINE_LINUX?

  2. Can both values be specified in GRUB_CMDLINE_LINUX?

If the second is correct, I'm guessing that elevator=noop will apply correctly to the spinning disks while the NVMe driver gracefully ignores it, and that elevator=none will apply correctly to the NVMe disks while the spinning-disk driver gracefully ignores that.
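For reference, here is how we inspect and change the scheduler at runtime through sysfs (device names are examples from our systems; the bracketed entry in the output is the active scheduler):

```shell
# Show the available schedulers per device; the one in [brackets] is active.
cat /sys/block/sda/queue/scheduler        # rotational disk
cat /sys/block/nvme0n1/queue/scheduler    # NVMe namespace

# Change the scheduler at runtime (root required; does not persist across reboots):
echo noop | sudo tee /sys/block/sda/queue/scheduler
```

What we are after is the boot-time equivalent of these per-device writes.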










  • Thanks -- You may as well put that up as the answer. And the answer to the "headline" question is "you don't".
    – Rich
    Sep 10 at 17:11














grub2 io kernel-parameters nvme






edited Sep 10 at 18:40 by don_crissti









asked Sep 7 at 22:11 by Rich










1 Answer

















I/O schedulers are assigned globally at boot time.

Even if you use multiple elevator=[value] assignments, only the last one will take effect.

To set per-device schedulers automatically and persistently, you can use udev rules, systemd services, or configuration and performance-tuning tools such as tuned.

As to your other question: yes, elevator=none is the correct value to use for NVMe storage.
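A udev-rules sketch of the per-device approach might look like this (the filename and kernel-name match patterns are assumptions; adjust them to your devices):

```
# /etc/udev/rules.d/60-io-scheduler.rules  -- hypothetical filename
# Rotational SCSI/SATA disks: use the noop elevator
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="noop"
# NVMe namespaces: no scheduler at all
ACTION=="add|change", KERNEL=="nvme[0-9]*n[0-9]*", ATTR{queue/scheduler}="none"
```

After creating the file, `udevadm control --reload` followed by `udevadm trigger` applies it without a reboot; because udev fires on device add/change events, the rule also covers disks hot-plugged later.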






  • I like the simplicity of the systemd approach, very similar to rc.local, but those only work for existing attached devices. The udev method will also work for removable disks.
    – Rich
    Sep 12 at 0:32






  • @Rich - yup, udev rules rule (pun intended).
    – don_crissti
    Sep 12 at 1:31
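The systemd-service approach mentioned above could be sketched as a oneshot unit along these lines (the unit name and device paths are assumptions; as noted, it only covers devices already attached at boot):

```ini
# /etc/systemd/system/io-scheduler.service  -- hypothetical unit name
[Unit]
Description=Set per-device I/O schedulers
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo noop > /sys/block/sda/queue/scheduler'
ExecStart=/bin/sh -c 'echo none > /sys/block/nvme0n1/queue/scheduler'

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable io-scheduler.service`; the shell wrapper is needed because ExecStart itself performs no redirection.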










answered Sep 10 at 18:38 by don_crissti








