How to create a user with limited RAM usage?

So I have 4 GB RAM + 4 GB swap. I want to create a user with limited RAM and swap: 3 GB RAM and 1 GB swap. Is such a thing possible? Is it possible to start applications with limited RAM and swap available to them without creating a separate user (and without installing any special apps - having just a default Debian/CentOS server configuration, and not using sudo)?



Update:



So I opened a terminal and typed the ulimit command: ulimit -v 1000000, which should be roughly a 976.6 MB limit. Next I ran ulimit -a and saw that the limitation was "on". Then I started a bash script that compiles and starts my app under nohup, a long-running one: nohup ./cloud-updater-linux.sh >& /dev/null &... but after some time I saw:



[screenshot: the script downloading a large library and starting to compile it]



(which would be expected if no limits were applied - it downloaded some large library and started to compile it.)



But I thought I had applied limits to the shell, and to all processes launched from it, with ulimit -v 1000000? What did I get wrong? How can I make a terminal, and all subprocesses it launches, limited in RAM usage?
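For reference, the inheritance behaviour can be checked in a throwaway shell (using the same 1000000 figure as above; this is a sketch of what happens, not a reproduction of the build script's failure):

```shell
# ulimit -v caps virtual memory per process; children inherit the cap,
# but each child gets its own allowance.  A build script that spawns
# many compiler processes can therefore still use far more memory in
# total than any single process is allowed.
bash -c '
  ulimit -v 1000000   # ~976.6 MB of virtual memory per process
  ulimit -v           # prints 1000000: the limit is inherited by this child
'
```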










  • You can't put memory restrictions on a user as a whole, only on each process. And you can't distinguish between RAM and swap usage. If you want finer control, run the user's processes in a virtual machine. – Gilles, Mar 16 '12 at 23:30

  • @Gilles Pretty sure that virtual machines just use cgroups and namespaces, or derivatives of them. – RapidWebs, Aug 15 '14 at 0:38

  • @RapidWebs No they don't. They just emulate the predefined amount of RAM, and the guest OS then decides how to allocate it to the processes. – Ruslan, Aug 10 '16 at 16:18

  • Containers (not virtual machines) use cgroups to limit memory usage. Limiting virtual memory is not a good idea; a process can use a lot of virtual memory but only a little RAM. For example, my system has 34359738367 kB of virtual memory allocated, but much less RAM. – ctrl-alt-delor, Dec 10 at 20:12















Tags: users, memory, not-root-user, limit






edited Jul 21 '16 at 20:51 by HalosGhost
asked Mar 16 '12 at 12:54 by myWallJSON







3 Answers


















ulimit is made for this.
You can set up defaults for ulimit on a per-user or per-group basis in



/etc/security/limits.conf


ulimit -v KBYTES sets the maximum virtual memory size. I don't think you can give a maximum amount of swap; it's just a limit on the amount of virtual memory the user can use.



So your limits.conf would have this line (for a maximum of about 4 GB of memory):



luser hard as 4000000


UPDATE - CGroups



The limits imposed by ulimit and limits.conf are per-process. I definitely wasn't clear on that point.



If you want to limit the total amount of memory a user uses (which is what you asked), you want to use cgroups.



In /etc/cgconfig.conf:



group memlimit {
    memory {
        memory.limit_in_bytes = 4294967296;
    }
}




This creates a cgroup that has a max memory limit of 4GiB.



In /etc/cgrules.conf:



luser memory memlimit/


This will cause all processes run by luser to run inside the memlimit cgroup created in cgconfig.conf.
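For a one-off test without editing cgrules.conf, the libcgroup userland tools (if installed) can start a single command inside the group; ./my-app below is a placeholder name, and reading the limit back assumes the cgroup v1 filesystem layout:

```shell
# Run one command inside the memlimit cgroup defined in cgconfig.conf
# (requires the libcgroup tools and root; ./my-app is a placeholder):
cgexec -g memory:memlimit ./my-app

# The limit can also be inspected through the cgroup v1 filesystem;
# 4 GiB is 4 * 1024 * 1024 * 1024 = 4294967296 bytes:
cat /sys/fs/cgroup/memory/memlimit/memory.limit_in_bytes
```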






– utopiabound, answered Mar 16 '12 at 13:09, edited Mar 19 '12 at 13:16
  • Is such a thing settable with useradd? – myWallJSON, Mar 16 '12 at 13:41

  • @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can set up a group with certain limits in limits.conf and add the user to that group. – utopiabound, Mar 16 '12 at 14:56

  • That's awesome! I didn't know you could do this! Great answer +1 – Yanick Girouard, Mar 16 '12 at 16:50

  • @utopiabound: Updated my Q with some data I got trying to use ulimit. – myWallJSON, Mar 16 '12 at 22:24

  • @f.ardelian Upgrade the kernel. Here's an article about how to do just that! – Daniel C. Sobral, Feb 11 '13 at 1:05


















You cannot cap memory usage at the user level; ulimit can do that, but only for a single process.



Even with per-user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.



Should you really want to cap resources, you need to use a resource management tool, like rcapd, used by projects and zones under Solaris.



There is something that seems to provide similar features on Linux that you might investigate: cgroups.






– jlliagre, answered Mar 17 '12 at 0:52
  • Well, I suppose setting a cap on the user's login shell or something like that could be interpreted as "setting a limit for the user", since all processes would inherit from that shell? – amn, Apr 18 '16 at 14:31

  • @amn It won't. A user might simply open a new login shell to work around such a limit. – jlliagre, Apr 18 '16 at 14:46

  • Right, that invalidates my assumption alright. – amn, Apr 18 '16 at 16:24


















cgroups are the right way to do this, as other answers have pointed out. Unfortunately there is no perfect solution to the problem, as we'll get into below. There are a bunch of different ways to set cgroup memory usage limits. How one goes about making a user's login session automatically part of a cgroup varies from system to system. Red Hat has some tools, and so does systemd.



memory.memsw.limit_in_bytes and memory.limit_in_bytes set limits including and excluding swap, respectively. The downside of memory.limit_in_bytes is that it counts files cached by the kernel on behalf of processes in the cgroup against the group's quota. Less caching means more disk access, so you're potentially giving up some performance if the system would otherwise have had some memory available.



On the other hand, memory.soft_limit_in_bytes allows the cgroup to go over-quota, but if the kernel OOM killer gets invoked then those cgroups which are over their quotas get killed first, logically. The downside of that, however, is that there are situations where some memory is needed immediately and there isn't time for the OOM killer to look around for processes to kill, in which case something might fail before the over-quota user's processes are killed.
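As a concrete sketch of these three knobs for the original 3 GB RAM / 1 GB swap request (cgroup v1 paths, needs root; "mygroup" is a made-up name, and the soft-limit value is just an illustrative choice):

```shell
# Create a cgroup and set hard, RAM+swap, and soft limits (cgroup v1).
G=/sys/fs/cgroup/memory/mygroup
mkdir -p "$G"
echo $((3 * 1024 * 1024 * 1024)) > "$G/memory.limit_in_bytes"        # 3 GiB RAM, hard cap
echo $((4 * 1024 * 1024 * 1024)) > "$G/memory.memsw.limit_in_bytes"  # 4 GiB RAM+swap, i.e. at most 1 GiB swap
echo $((2 * 1024 * 1024 * 1024)) > "$G/memory.soft_limit_in_bytes"   # reclaim target under memory pressure
```

Note that memory.memsw.limit_in_bytes must be at least memory.limit_in_bytes, and is set after it here for that reason.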



ulimit, however, is absolutely the wrong tool for this. ulimit places limits on virtual memory usage, which is almost certainly not what you want. Many real-world applications use far more virtual memory than physical memory. Most garbage-collected runtimes (Java, Go) work this way to avoid fragmentation. A trivial "hello world" program in C, if compiled with address sanitizer, can use 20 TB of virtual memory. Allocators which do not rely on sbrk, such as jemalloc (the default allocator for Rust) or tcmalloc, will also have virtual memory usage substantially in excess of their physical usage. For efficiency, many tools will mmap files, which increases virtual usage but not necessarily physical usage. All of my Chrome processes are using 2 TB of virtual memory each, on a laptop with 8 GB of physical memory. Any virtual memory quota here would either break Chrome, force Chrome to disable some security features which rely on allocating (but not using) large amounts of virtual memory, or be completely ineffective at preventing a user from abusing the system.
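The gap between virtual and physical usage is easy to demonstrate: reserving address space costs almost nothing until the pages are touched. This sketch is Linux-specific (it reads /proc) and assumes python3 is available:

```shell
# Reserve 1 GiB of address space without touching it: VmSize jumps by
# ~1 GiB while VmRSS barely moves, so a virtual-memory quota would
# penalize this essentially harmless program.
python3 - <<'EOF'
import mmap
m = mmap.mmap(-1, 1 << 30)        # 1 GiB anonymous mapping, never written
for line in open('/proc/self/status'):
    if line.startswith(('VmSize', 'VmRSS')):
        print(line.rstrip())      # compare virtual size against resident size
EOF
```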






share|improve this answer




















    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "106"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f34334%2fhow-to-create-a-user-with-limited-ram-usage%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    54














    ulimit is made for this.
    You can setup defaults for ulimit on a per user or a per group basis in



    /etc/security/limits.conf


    ulimit -v KBYTES sets max virtual memory size. I don't think you can give a max amount of swap. It's just a limit on the amount of virtual memory the user can use.



    So you limits.conf would have the line (to a maximum of 4G of memory)



    luser hard as 4000000


    UPDATE - CGroups



    The limits imposed by ulimit and limits.conf is per process. I definitely wasn't clear on that point.



    If you want to limit the total amount of memory a users uses (which is what you asked). You want to use cgroups.



    In /etc/cgconfig.conf:



    group memlimit 
    memory
    memory.limit_in_bytes = 4294967296;




    This creates a cgroup that has a max memory limit of 4GiB.



    In /etc/cgrules.conf:



    luser memory memlimit/


    This will cause all processes run by luser to be run inside the memlimit cgroups created in cgconfig.conf.






    share|improve this answer






















    • is such thing settable on useradd?
      – myWallJSON
      Mar 16 '12 at 13:41






    • 4




      @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group.
      – utopiabound
      Mar 16 '12 at 14:56






    • 1




      That's awesome! I didn't know you could do this! Great answer +1
      – Yanick Girouard
      Mar 16 '12 at 16:50






    • 1




      @utopiabound: Updated my Q with some data I got trying to use ulimit.
      – myWallJSON
      Mar 16 '12 at 22:24






    • 1




      @f.ardelian Upgrade the kernel. Here's an article about how to do just that!
      – Daniel C. Sobral
      Feb 11 '13 at 1:05















    54














    ulimit is made for this.
    You can setup defaults for ulimit on a per user or a per group basis in



    /etc/security/limits.conf


    ulimit -v KBYTES sets max virtual memory size. I don't think you can give a max amount of swap. It's just a limit on the amount of virtual memory the user can use.



    So you limits.conf would have the line (to a maximum of 4G of memory)



    luser hard as 4000000


    UPDATE - CGroups



    The limits imposed by ulimit and limits.conf is per process. I definitely wasn't clear on that point.



    If you want to limit the total amount of memory a users uses (which is what you asked). You want to use cgroups.



    In /etc/cgconfig.conf:



    group memlimit 
    memory
    memory.limit_in_bytes = 4294967296;




    This creates a cgroup that has a max memory limit of 4GiB.



    In /etc/cgrules.conf:



    luser memory memlimit/


    This will cause all processes run by luser to be run inside the memlimit cgroups created in cgconfig.conf.






    share|improve this answer






















    • is such thing settable on useradd?
      – myWallJSON
      Mar 16 '12 at 13:41






    • 4




      @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group.
      – utopiabound
      Mar 16 '12 at 14:56






    • 1




      That's awesome! I didn't know you could do this! Great answer +1
      – Yanick Girouard
      Mar 16 '12 at 16:50






    • 1




      @utopiabound: Updated my Q with some data I got trying to use ulimit.
      – myWallJSON
      Mar 16 '12 at 22:24






    • 1




      @f.ardelian Upgrade the kernel. Here's an article about how to do just that!
      – Daniel C. Sobral
      Feb 11 '13 at 1:05













    54












    54








    54






    ulimit is made for this.
    You can setup defaults for ulimit on a per user or a per group basis in



    /etc/security/limits.conf


    ulimit -v KBYTES sets max virtual memory size. I don't think you can give a max amount of swap. It's just a limit on the amount of virtual memory the user can use.



    So you limits.conf would have the line (to a maximum of 4G of memory)



    luser hard as 4000000


    UPDATE - CGroups



    The limits imposed by ulimit and limits.conf is per process. I definitely wasn't clear on that point.



    If you want to limit the total amount of memory a users uses (which is what you asked). You want to use cgroups.



    In /etc/cgconfig.conf:



    group memlimit 
    memory
    memory.limit_in_bytes = 4294967296;




    This creates a cgroup that has a max memory limit of 4GiB.



    In /etc/cgrules.conf:



    luser memory memlimit/


    This will cause all processes run by luser to be run inside the memlimit cgroups created in cgconfig.conf.






    share|improve this answer














    ulimit is made for this.
    You can setup defaults for ulimit on a per user or a per group basis in



    /etc/security/limits.conf


    ulimit -v KBYTES sets max virtual memory size. I don't think you can give a max amount of swap. It's just a limit on the amount of virtual memory the user can use.



    So you limits.conf would have the line (to a maximum of 4G of memory)



    luser hard as 4000000


    UPDATE - CGroups



    The limits imposed by ulimit and limits.conf is per process. I definitely wasn't clear on that point.



    If you want to limit the total amount of memory a users uses (which is what you asked). You want to use cgroups.



    In /etc/cgconfig.conf:



    group memlimit 
    memory
    memory.limit_in_bytes = 4294967296;




    This creates a cgroup that has a max memory limit of 4GiB.



    In /etc/cgrules.conf:



    luser memory memlimit/


    This will cause all processes run by luser to be run inside the memlimit cgroups created in cgconfig.conf.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Mar 19 '12 at 13:16

























    answered Mar 16 '12 at 13:09









    utopiabound

    2,6611518




    2,6611518











    • is such thing settable on useradd?
      – myWallJSON
      Mar 16 '12 at 13:41






    • 4




      @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group.
      – utopiabound
      Mar 16 '12 at 14:56






    • 1




      That's awesome! I didn't know you could do this! Great answer +1
      – Yanick Girouard
      Mar 16 '12 at 16:50






    • 1




      @utopiabound: Updated my Q with some data I got trying to use ulimit.
      – myWallJSON
      Mar 16 '12 at 22:24






    • 1




      @f.ardelian Upgrade the kernel. Here's an article about how to do just that!
      – Daniel C. Sobral
      Feb 11 '13 at 1:05
















    • is such thing settable on useradd?
      – myWallJSON
      Mar 16 '12 at 13:41






    • 4




      @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group.
      – utopiabound
      Mar 16 '12 at 14:56






    • 1




      That's awesome! I didn't know you could do this! Great answer +1
      – Yanick Girouard
      Mar 16 '12 at 16:50






    • 1




      @utopiabound: Updated my Q with some data I got trying to use ulimit.
      – myWallJSON
      Mar 16 '12 at 22:24






    • 1




      @f.ardelian Upgrade the kernel. Here's an article about how to do just that!
      – Daniel C. Sobral
      Feb 11 '13 at 1:05















    is such thing settable on useradd?
    – myWallJSON
    Mar 16 '12 at 13:41




    is such thing settable on useradd?
    – myWallJSON
    Mar 16 '12 at 13:41




    4




    4




    @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group.
    – utopiabound
    Mar 16 '12 at 14:56




    @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group.
    – utopiabound
    Mar 16 '12 at 14:56




    1




    1




    That's awesome! I didn't know you could do this! Great answer +1
    – Yanick Girouard
    Mar 16 '12 at 16:50




    That's awesome! I didn't know you could do this! Great answer +1
    – Yanick Girouard
    Mar 16 '12 at 16:50




    1




    1




    @utopiabound: Updated my Q with some data I got trying to use ulimit.
    – myWallJSON
    Mar 16 '12 at 22:24




    @utopiabound: Updated my Q with some data I got trying to use ulimit.
    – myWallJSON
    Mar 16 '12 at 22:24




    1




    1




    @f.ardelian Upgrade the kernel. Here's an article about how to do just that!
    – Daniel C. Sobral
    Feb 11 '13 at 1:05




    @f.ardelian Upgrade the kernel. Here's an article about how to do just that!
    – Daniel C. Sobral
    Feb 11 '13 at 1:05













    3














    You cannot cap memory usage at the user level, ulimit can do that but for a single process.



    Even with using per user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.



    Should you really want to cap resources, you need to use a resource management tool, like rcapd used by projects and zones under Solaris.



    There is something that seems to provide similar features on Linux that you might investigate: cgroups.






    share|improve this answer




















    • Well, I suppose setting a cap on users login shell or something like that could be interpreted as "setting a limit for the user", since all processes would inherit from that shell?
      – amn
      Apr 18 '16 at 14:31






    • 1




      @amn It won't. A user might simply open a new login shell to workaround such a limit.
      – jlliagre
      Apr 18 '16 at 14:46










    • Right, that invalidates my assumption alright.
      – amn
      Apr 18 '16 at 16:24















    3














    You cannot cap memory usage at the user level, ulimit can do that but for a single process.



    Even with using per user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.



    Should you really want to cap resources, you need to use a resource management tool, like rcapd used by projects and zones under Solaris.



    There is something that seems to provide similar features on Linux that you might investigate: cgroups.






    share|improve this answer




















    • Well, I suppose setting a cap on users login shell or something like that could be interpreted as "setting a limit for the user", since all processes would inherit from that shell?
      – amn
      Apr 18 '16 at 14:31






    • 1




      @amn It won't. A user might simply open a new login shell to workaround such a limit.
      – jlliagre
      Apr 18 '16 at 14:46










    • Right, that invalidates my assumption alright.
      – amn
      Apr 18 '16 at 16:24













    3












    3








    3






    You cannot cap memory usage at the user level, ulimit can do that but for a single process.



    Even with using per user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.



    Should you really want to cap resources, you need to use a resource management tool, like rcapd used by projects and zones under Solaris.



    There is something that seems to provide similar features on Linux that you might investigate: cgroups.






    share|improve this answer












    You cannot cap memory usage at the user level, ulimit can do that but for a single process.



    Even with using per user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.



    Should you really want to cap resources, you need to use a resource management tool, like rcapd used by projects and zones under Solaris.



    There is something that seems to provide similar features on Linux that you might investigate: cgroups.







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Mar 17 '12 at 0:52









    jlliagre

    46.3k783132




    46.3k783132











    • Well, I suppose setting a cap on users login shell or something like that could be interpreted as "setting a limit for the user", since all processes would inherit from that shell?
      – amn
      Apr 18 '16 at 14:31






    • 1




      @amn It won't. A user might simply open a new login shell to workaround such a limit.
      – jlliagre
      Apr 18 '16 at 14:46










    • Right, that invalidates my assumption alright.
      – amn
      Apr 18 '16 at 16:24
















    cgroups are the right way to do this, as other answers have pointed out. Unfortunately there is no perfect solution to the problem, as we'll get into below. There are several different ways to set cgroup memory usage limits, and how one makes a user's login session automatically part of a cgroup varies from system to system: Red Hat ships some tools for it, and so does systemd.
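On a systemd machine, one way to cap an entire user's session is to set memory properties on the user's slice. This is only a sketch, not a universal recipe: it assumes cgroup v2 and systemd's user-UID.slice naming, and "someuser" is a placeholder:

```shell
# Sketch (assumes systemd with cgroup v2; "someuser" is a placeholder).
# Cap the whole user slice at 3 GB of RAM plus at most 1 GB of swap.
# MemoryMax / MemorySwapMax are systemd's cgroup-v2 memory properties.
systemctl set-property user-$(id -u someuser).slice MemoryMax=3G MemorySwapMax=1G

# Inspect the result:
systemctl show user-$(id -u someuser).slice -p MemoryMax -p MemorySwapMax
```

This applies to every process in the user's session, so a fresh login shell does not escape it.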



    memory.memsw.limit_in_bytes and memory.limit_in_bytes set limits including and excluding swap, respectively. The downside of memory.limit_in_bytes is that it counts files cached by the kernel on behalf of the cgroup's processes against the group's quota. Less caching means more disk access, so you're potentially giving up some performance even when the system otherwise has memory to spare.



    On the other hand, memory.soft_limit_in_bytes allows the cgroup to go over quota, but if the kernel OOM killer is invoked, then cgroups which are over their soft limits are, logically, killed first. The downside is that some situations need memory immediately, with no time for the OOM killer to look around for processes to kill, in which case something might fail before the over-quota user's processes are reaped.
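With the older cgroup v1 memory controller, the knobs above are plain files under /sys/fs/cgroup/memory. A rough sketch, assuming a v1 mount and root privileges; the group name "limitdemo" is made up for illustration:

```shell
# Assumes the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory
# and that this runs as root. "limitdemo" is an arbitrary group name.
mkdir /sys/fs/cgroup/memory/limitdemo

# Hard cap on RAM (note: kernel page cache counts against this too)
echo $((3 * 1024**3)) > /sys/fs/cgroup/memory/limitdemo/memory.limit_in_bytes

# Hard cap on RAM + swap; must be set after, and be >=, limit_in_bytes
echo $((4 * 1024**3)) > /sys/fs/cgroup/memory/limitdemo/memory.memsw.limit_in_bytes

# Soft limit: may be exceeded, but makes the group a preferred OOM victim
echo $((2 * 1024**3)) > /sys/fs/cgroup/memory/limitdemo/memory.soft_limit_in_bytes

# Move the current shell (and all future children) into the group
echo $$ > /sys/fs/cgroup/memory/limitdemo/tasks
```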



    ulimit, however, is absolutely the wrong tool for this. ulimit -v places a limit on virtual memory usage, which is almost certainly not what you want. Many real-world applications use far more virtual memory than physical memory. Most garbage-collected runtimes (Java, Go) work this way to avoid fragmentation. A trivial "hello world" program in C, compiled with AddressSanitizer, can map 20 TB of virtual memory. Allocators which do not rely on sbrk, such as jemalloc (the default allocator for Rust) or tcmalloc, also reserve virtual memory substantially in excess of their physical usage. For efficiency, many tools mmap files, which increases virtual usage but not necessarily physical usage. Each of my Chrome processes is using 2 TB of virtual memory, on a laptop with 8 GB of physical memory. Any attempt to impose virtual memory quotas here would either break Chrome, force Chrome to disable security features which rely on allocating (but not touching) large amounts of virtual memory, or be completely ineffective at preventing a user from abusing the system.
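To see the virtual-vs-physical mismatch concretely, here is a small sketch of what ulimit -v actually does (values are in KiB; it runs in a subshell so the limit does not stick to your interactive shell):

```shell
# ulimit -v caps *virtual address space*, not resident RAM.
(
    ulimit -v 1000000   # ~1 GB cap on virtual memory, in KiB
    ulimit -v           # prints the cap now in effect: 1000000
    # From here on, any child of this subshell that tries to map or
    # allocate more address space than that fails (malloc/mmap -> ENOMEM),
    # even if the machine has plenty of free physical RAM.
)
```

This is exactly why the questioner's compile job died: the compiler's address-space reservations, not its RAM usage, blew through the cap.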






        answered Dec 10 at 19:35









        Adam Azarchs
