How can one restrict the number of CPU cores each user can use?
We have a computer whose CPU has 32 cores, and it is going to be used for running programs by a few different users. Is there any way to restrict the number of cores each user can use at any time, so that no single user can monopolize all the CPU power?
permissions users administration multi-core

asked Jan 9 at 13:50 by Reza
Not an answer, only an idea: you might want to look into setting up several virtual machines, each with only a limited number of CPUs. Each user would be confined to one of the virtual machines, and the users on that VM would be limited in CPU usage. Some virtualization software may have tools to support this.
– ghellquist, Jan 9 at 17:16
@ghellquist, you should make that an answer.
– slebetman, Jan 10 at 0:38
@ghellquist: You probably want something as lightweight as possible, like Linux containers, if you just want different users to see only some of the CPUs (e.g. so that when they start an OpenMP or other program that spawns as many threads as it sees cores, it will start an appropriate number for the cores you are actually letting each user use). Full virtualization, like KVM, has a performance cost even with hardware support such as VT-x or AMD-V: the extra levels of page tables hurt any code that takes TLB misses from touching lots of memory, even when VM exits are avoided.
– Peter Cordes, Jan 10 at 4:55
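As a concrete sketch of the container approach (assuming Docker; the image and container name here are arbitrary examples, not from the discussion):

# Confine an interactive container to cores 0-7; software inside sees only 8 CPUs.
$ docker run -it --cpuset-cpus="0-7" --name user1-box ubuntu bash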
I'm sorry, but is there even a need for this? As a multi-user system, Linux by default already implements preemptive multitasking, so the situation where a single (non-malicious) user just hogs the entire system for themselves shouldn't come up.
– Cubic, Jan 10 at 15:37
2 Answers
While this is possible, it is complicated and almost certainly a bad idea. If only one user is using the machine at the moment, restricting them to N cores is a waste of resources. A far better approach would be to run everything with nice:
NAME
nice - run a program with modified scheduling priority
SYNOPSIS
nice [OPTION] [COMMAND [ARG]...]
DESCRIPTION
Run COMMAND with an adjusted niceness, which affects process scheduling. With
no COMMAND, print the current niceness. Niceness values range from -20 (most
favorable to the process) to 19 (least favorable to the process).
This is a great tool that sets the priority of a process. If only one user is running something, they get as much CPU time as they need; but if someone else launches their own (also niced) job, the two will share the CPUs between them. That way, if your users all launch commands with nice -n 10 command, nobody will hog resources (and nobody will bring the server to its knees).
Note that a high nice value means a low priority: niceness is a measure of how nice we are being to other processes, and the nicer we are, the more we share.
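For example (a minimal sketch; the program name and PID are placeholders):

$ nice -n 10 ./long_simulation     # start a job at niceness 10
$ renice -n 15 -p 12345            # make an already-running PID 12345 even nicer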
Also note that this will not help manage memory allocation; it only affects CPU scheduling. So if multiple users launch multiple memory-intensive processes, you will still have a problem. If that is an issue, you should look into a proper queuing system such as TORQUE.

answered Jan 9 at 14:05 by terdon♦ (edited Jan 9 at 15:19)
Thanks for your answer. There are some "workload managers" such as SLURM, but they are for computers with multiple nodes. I guess it makes sense that people have not developed similar apps for single-node computers, as there is not as much demand.
– Reza, Jan 9 at 15:54
@Reza try nice; from what you describe, that's pretty much exactly what you need.
– terdon♦, Jan 9 at 16:00
@Reza: That's because the OS already does that. It automatically time-shares the available CPUs among threads/processes as needed.
– BlueRaja - Danny Pflughoeft, Jan 9 at 19:19
TL;DR: From brief research, it appears possible to restrict commands to a specific number of cores; however, in all cases you have to launch the process via a command that actually enforces the restriction.
cgroups

Linux has cgroups, which are frequently used exactly for the purpose of restricting the resources available to processes. From a very brief search, you can find an example in the Arch Wiki with a MATLAB (scientific software) configuration set in /etc/cgconfig.conf:
group matlab {
    perm {
        admin {
            uid = username;
        }
        task {
            uid = username;
        }
    }
    cpuset {
        cpuset.mems="0";
        cpuset.cpus="0-5";
    }
    memory {
        memory.limit_in_bytes = 5000000000;
    }
}
In order for such a config to take effect, you have to run the process via the cgexec command, e.g. from the same wiki page:
$ cgexec -g memory,cpuset:matlab /opt/MATLAB/2012b/bin/matlab -desktop
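Note that this assumes the configuration has already been loaded into the cgroup hierarchy, which the libcgroup tools do with cgconfigparser (an assumption based on the standard cgroup-tools workflow, not shown in the wiki excerpt above):

$ sudo cgconfigparser -l /etc/cgconfig.conf   # parse and apply /etc/cgconfig.conf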
taskset
A related question on Ask Ubuntu and "How to limit a process to one CPU core in Linux?" on the Unix & Linux site show examples of using taskset to limit the CPUs a process may run on. In the first question, it's achieved by finding all processes of a particular user:
$ ps aux | awk '/^housezet/{print $2}' | xargs -l taskset -p 0x00000001
In the other question, a process is started via taskset itself:
$ taskset -c 0 mycommand --option # start a command with the given affinity
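You can check the affinity mask a running process actually has (PID 1234 and its output are placeholders):

$ taskset -cp 1234
pid 1234's current affinity list: 0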
Conclusion
While it is certainly possible to limit processes, it seems it is not so simple to achieve for particular users. The example in the linked Ask Ubuntu post would require constantly scanning for processes belonging to each user and running taskset on each new one. A far more reasonable approach is to selectively run CPU-intensive applications via cgexec or taskset. It also makes no sense to restrict all processes to a specific number of CPUs, especially those that use parallelism and concurrency to run their tasks faster: limiting them to a few CPUs can slow down the processing. Additionally, as terdon's answer mentions, it's a waste of resources.
Running select applications via taskset or cgexec requires communicating with your users to let them know what applications they can run, or creating wrapper scripts that launch those applications via taskset or cgexec.
Additionally, consider limiting the number of processes a user or group can spawn instead of limiting the number of CPUs. This can be achieved via the /etc/security/limits.conf file.
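A hypothetical limits.conf entry capping one user at 100 processes (the username and value are placeholders):

# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
alice        hard     nproc    100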
See also
- How to limit resource usage for a given process?

answered Jan 9 at 14:32 by Sergiy Kolodyazhnyy (edited Jan 9 at 14:39)
Well, there is cgrulesengd and cgrules.conf to automatically move processes to the appropriate cgroup based on user/group, instead of relying on the users running their processes with cgexec. But it seems setting this up in Ubuntu is somewhat non-trivial.
– Hans-Jakob, Jan 9 at 15:57
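For reference, a sketch of what such a rule looks like (the user and cgroup names are placeholders matching the MATLAB example above):

# /etc/cgrules.conf
# <user>    <controllers>    <destination>
username    cpuset,memory    matlab/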
@Hans-Jakob It does look somewhat convoluted, plus it requires adding kernel flags in GRUB. For an enterprise-class machine, where you have lots of users and don't want them to crash the system, that's probably worthwhile, but for a desktop it's too much work. Thank you for linking that.
– Sergiy Kolodyazhnyy, Jan 9 at 16:02
sched_setaffinity(2) says the affinity mask is preserved across execve(2), and that a child inherits it on fork(2). So if you taskset the shell for a user (or their graphical shell for an X session), everything they start from that shell will, by default, use the same affinity mask.
– Peter Cordes, Jan 10 at 4:50
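A minimal sketch of that idea (the wrapper path and core range are assumptions): install a small wrapper as a user's login shell, so every process in the session inherits the affinity mask.

#!/bin/sh
# /usr/local/bin/limited-bash: confine this login session to cores 0-7
exec taskset -c 0-7 /bin/bash "$@"

Add the wrapper to /etc/shells, then assign it with, e.g., chsh -s /usr/local/bin/limited-bash username.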
One possible downside is programs that check how many CPUs the machine has when deciding how many threads to start; they'll have too many threads for the number of cores they'll actually get scheduled on. Did you find out if cgroups might do anything about that?
– Peter Cordes, Jan 10 at 4:51
@PeterCordes The spawning-shell idea sounds interesting; I'll need to look into that. Thanks! As for the second comment, no, I haven't researched cgroups enough at this point.
– Sergiy Kolodyazhnyy, Jan 10 at 5:09