what is the purpose of memory overcommitment on Linux?

I know about memory overcommitment and I profoundly dislike it and usually disable it.
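For reference, disabling it on Linux comes down to switching the kernel to strict commit accounting, i.e. setting vm.overcommit_memory to 2. A minimal sketch of doing that programmatically (assuming root and the usual procfs path; running sysctl vm.overcommit_memory=2 from the shell does the same thing):

    /* Minimal sketch: switch the kernel to strict commit accounting
     * (vm.overcommit_memory = 2).  Needs root; same effect as running
     * `sysctl vm.overcommit_memory=2`. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
        if (f == NULL) {
            perror("/proc/sys/vm/overcommit_memory");
            return EXIT_FAILURE;
        }
        /* 0 = heuristic overcommit (the default), 1 = always overcommit,
         * 2 = strict accounting against swap + overcommit_ratio% of RAM */
        if (fputs("2\n", f) == EOF || fclose(f) == EOF) {
            perror("write");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }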



A well-written program could malloc (or mmap, which is often used by malloc) more memory than is available and crash when using it. Without memory overcommitment, that malloc or mmap would fail and the well-written program would catch that failure. A poorly written program (using malloc without checking for failure) would crash when using the result of a failed malloc.
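As a rough sketch of what such checks look like (malloc signals failure by returning NULL, mmap by returning MAP_FAILED):

    /* Sketch: how a careful program detects allocation failure.
     * malloc() reports failure with NULL, mmap() with MAP_FAILED. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t sz = 64UL * 1024 * 1024;          /* 64 MiB */

        void *p = malloc(sz);
        if (p == NULL) {                         /* malloc failed: handle it */
            perror("malloc");
            return EXIT_FAILURE;
        }

        void *m = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (m == MAP_FAILED) {                   /* mmap failed: handle it */
            perror("mmap");
            free(p);
            return EXIT_FAILURE;
        }

        /* ... use the memory ... */
        munmap(m, sz);
        free(p);
        return EXIT_SUCCESS;
    }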



Of course virtual address space (which gets extended by mmap, and hence by malloc) is not the same thing as RAM (RAM is a resource managed by the kernel; processes have their virtual address space initialized by execve(2) and extended by mmap & sbrk, so they don't directly consume RAM, only virtual memory).



Notice that optimizing RAM usage could be done with madvise(2) (which can give the kernel a hint, e.g. MADV_DONTNEED, that some pages are no longer needed and may be reclaimed) when really needed. Programs wanting some overcommitment could use mmap(2) with MAP_NORESERVE. My understanding of memory overcommitment is that it behaves as if every memory mapping (by execve or mmap) implicitly used MAP_NORESERVE.
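For instance, a sketch of explicitly opting in to overcommit for one large mapping; whether MAP_NORESERVE changes anything depends on the vm.overcommit_memory mode, and touching the pages later can still fail if memory runs out:

    /* Sketch: explicitly opting in to overcommit for one mapping.
     * MAP_NORESERVE asks the kernel not to reserve swap space for it;
     * whether that matters depends on the vm.overcommit_memory mode.
     * Assumes a 64-bit system (the range would not fit on 32 bits). */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1024UL * 1024 * 1024 * 1024;   /* 1 TiB of address space */

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("reserved %zu bytes of address space at %p\n", len, p);
        /* Touching pages later may still fail if memory runs out. */
        munmap(p, len);
        return 0;
    }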



My perception of it is that it is simply useful for very buggy programs. But IMHO a real developer should always check for failure of malloc, mmap and the related virtual-address-space-changing functions. And most free software programs whose source code I have studied have such a check, perhaps as some xmalloc function...
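A minimal sketch of such a conventional xmalloc wrapper (the details vary from project to project):

    /* Sketch of the conventional xmalloc wrapper: check malloc and
     * abort with a message instead of handing NULL back to the caller. */
    #include <stdio.h>
    #include <stdlib.h>

    void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL && size != 0) {
            fprintf(stderr, "xmalloc: out of memory (%zu bytes)\n", size);
            exit(EXIT_FAILURE);
        }
        return p;
    }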



Are there real-life programs, e.g. packaged in typical Linux distributions, which actually need and use memory overcommitment in a sane and useful way? I know of none!



What are the disadvantages of disabling memory overcommitment? Many older Unixes (e.g. SunOS4 and SunOS5 from the previous century) did not have it, and IMHO their malloc (and perhaps even their overall system performance, malloc-wise) was not much worse (and improvements since then are unrelated to memory overcommitment).



I believe that memory overcommitment is a misfeature for lazy programmers.







  • Certain packages for R have trouble on OpenBSD because they want large amounts of virtual memory and OpenBSD says nope. Those same packages are fine on Linux and do not cause the crazy drunk oom killer to start blasting away at the process table.
    – thrig
    May 2 at 19:15






  • @thrig that could be an answer, notably if you were more specific. What R packages? Can they still work in practice? Why do they need memory overcommit?
    – Basile Starynkevitch
    May 2 at 19:17











  • this was 5+ years ago and I could not find the offending package (or something has been fixed in the meantime) in a brief search
    – thrig
    May 2 at 22:48










  • A couple of comments on the edits you have made to the question: without memory overcommitment, a call to malloc can fail even if there is plenty of memory still available. I think it would be unrealistic to require all programs to use madvise; the kernel usually gets the hint automatically anyway, by keeping track of which pages have been recently used and which have not. The MAP_NORESERVE flag to the mmap system call only means no swap space is reserved for the mapping; it does not disable demand paging.
    – Johan Myréen
    May 3 at 7:33










  • what kind of "plenty of memory" do you refer to? Virtual memory or RAM? IMHO RAM is managed by the kernel, and application code doesn't care about it; it uses virtual address space.
    – Basile Starynkevitch
    May 3 at 7:34















edited May 3 at 8:19
asked May 2 at 16:56
Basile Starynkevitch







3 Answers






























The reason for overcommitting is to avoid underutilization of physical RAM. There is a difference between how much virtual memory a process has allocated and how much of this virtual memory has been actually mapped to physical page frames. In fact, right after a process is started, it reserves very little RAM. This is due to demand paging: the process has a virtual memory layout, but the mapping from the virtual memory address to a physical page frame isn't established until the memory is read or written.
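A small sketch of that behaviour, assuming a 64-bit Linux system: the mapping below enlarges the process's virtual size (VmSize in /proc/self/status) immediately, but its resident set (VmRSS) only grows for the pages that are actually written:

    /* Sketch: demand paging in action.  The mmap enlarges the virtual
     * size at once, but physical page frames are only assigned to the
     * pages that get written.  Assumes a 64-bit Linux system. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4UL * 1024 * 1024 * 1024;       /* 4 GiB of address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* VmSize in /proc/<pid>/status has grown by 4 GiB here,
         * while VmRSS has barely moved. */
        long page = sysconf(_SC_PAGESIZE);
        for (size_t i = 0; i < len / 16; i += (size_t)page)
            p[i] = 1;      /* touch 1/16 of the pages: only these get frames */

        pause();           /* keep the process around for inspection */
        return 0;
    }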



A program typically never uses its whole virtual memory space, and the memory areas touched vary during the run of the program. For example, mappings to page frames containing initialization code that is executed only at the start of the run can be discarded and the page frames can be used for other mappings.



The same applies to data: when a program calls malloc, it reserves a sufficiently large contiguous virtual address space for storing data. However, mappings to physical page frames are not established until the pages are actually used, if ever. Or consider the program stack: every process gets a fairly big contiguous virtual memory area set aside for the stack (typically 8 MB). A process typically uses only a fraction of this stack space; small and well-behaving programs use even less.



A Linux computer typically has a lot of heterogeneous processes running in different stages of their lifetimes. Statistically, at any point in time, they do not collectively need a mapping for every virtual page they have been assigned (or will be assigned later in the program run).



A strictly non-overcommitting scheme would create a static mapping from virtual address pages to physical RAM page frames at the moment the virtual pages are allocated. This would result in a system that can run far fewer programs concurrently, because a lot of RAM page frames would be reserved for nothing.
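Linux's own strict mode (vm.overcommit_memory=2) is milder than that: it does not statically map page frames, it merely accounts every reservation against a commit limit and refuses allocations that would push Committed_AS past CommitLimit. A small sketch that prints both counters from /proc/meminfo:

    /* Sketch: print the kernel's commit accounting from /proc/meminfo.
     * The CommitLimit value is only enforced when vm.overcommit_memory = 2. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (f == NULL) { perror("/proc/meminfo"); return 1; }

        char line[256];
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "CommitLimit:", 12) == 0 ||
                strncmp(line, "Committed_AS:", 13) == 0)
                fputs(line, stdout);

        fclose(f);
        return 0;
    }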



I don't deny that overcommitting memory has its dangers, and can lead to out-of-memory situations that are messy to deal with. It's all about finding the right compromise.






answered May 2 at 21:23
Johan Myréen





















  • Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
    – Basile Starynkevitch
    May 3 at 5:10











  • And the usual scheme would be to reserve virtual memory (by allocating it in the swap area), not RAM.
    – Basile Starynkevitch
    May 3 at 5:23






  • Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, system-wide. So be careful to distinguish virtual address space, virtual memory, and RAM.
    – Basile Starynkevitch
    May 3 at 8:13































You say this as if laziness is not considered a virtue in programming :).



Large quantities of software are optimized for simplicity and maintainability, with surviving low-memory conditions as a very low priority. It is common to treat allocation failure as fatal. Exiting the process which exhausts memory avoids a situation where there is no free memory, and the system cannot make progress without either allocating more memory, or complexity in the form of comprehensive pre-allocation.



Notice how fine the difference is between checking allocations and dying, or not checking and crashing. It would not be fair to blame overcommit on programmers simply not bothering to check whether malloc() succeeded or failed.



There is only a small amount of software which you can trust to continue "correctly" in the face of failed allocations. The kernel should generally be expected to survive. sqlite has a notoriously robust test suite which includes out-of-memory testing, specifically because it is intended to support various small embedded systems.



As a failure path not used in normal operation, handling low memory conditions correctly imposes a significant extra burden in maintenance and testing. If that effort does not yield a commensurate benefit, it can more profitably be spent elsewhere.



Where this breaks down, it is also common to have special cases for large allocations, to handle the most common causes of failure.



Allowing a certain amount of overcommit is probably best viewed in this context. It is part of the current default compromise on Linux.



Note that the idea that one should disable kernel-level overcommit and instead provide more swap than you ever want to use also has its haters. The gap in speed between RAM and rotating hard drives has grown over time, such that when the system actually uses the swap space you have allowed it, it can more often be described as "grinding to a halt".






answered May 3 at 9:09
sourcejedi









































I agree with and upvoted Johan Myréen's answer, but here are more explanations that might help you understand the issue.



You seem to confuse the swap area, i.e. on-disk space intended to store less-used RAM pages, with virtual memory. The latter is made of a combination of RAM areas and on-disk areas.



Processes reserve and use virtual memory. They have no idea where it is stored. When they need to access some data which isn't in RAM, the process (or thread) is suspended until the kernel has done the work needed for the data page to be available.



When there is demand for RAM, the kernel frees some of it by storing less-used process pages in the on-disk swap area.



When a process reserves memory (i.e. with malloc and the like), non-overcommitting OSes mark a corresponding portion of virtual memory as unavailable to anyone else. That means that when the process which made the allocation actually needs to access the reserved pages, they are guaranteed to be available.



The drawback is that this reserved memory is not usable by any other process, which prevents those processes from having their own pages paged out into it, so RAM is wasted if the reserving processes are inactive. Worse, if the sum of reservations is larger than the size of the swap area, pages of RAM will also be reserved to match the reservations despite not containing any data. This is quite a bad scenario, because you'll have RAM which is both unusable and unused. Finally, the worst-case scenario is for a huge reservation to be rejected because there is no more virtual memory (swap + RAM) available for it; the process making the reservation will usually crash.



On the other hand, overcommitting OSes like Linux bet that there will be no virtual memory shortage at any given time. They accept most memory reservations (but not unrealistic ones; this can be tuned to some extent), and in general this allows better utilization of the RAM and swap resources.



This is similar to airlines overbooking seats. It improves the occupancy rate, but some passengers may be unhappy. Fortunately, airlines just book them onto another flight and possibly compensate them, while Linux just throws the heavier passengers out of the flying plane...



To summarize, Linux reserves memory in a lazy, "best effort" way, while several other OSes do guarantee reservations.



A real-life scenario where overcommitting makes a lot of sense is when a program which uses a lot of virtual memory does a fork followed by an exec.



Let's say you have 4 GB of RAM, of which 3 GB are available for virtual memory, plus 4 GB of swap. A process reserves 4 GB but uses only 1 GB of it. There is no paging, so the system performs well. On a non-overcommitting OS, that process cannot fork, because just after the fork 4 GB more of virtual memory would need to be reserved and only 3 GB are left.



On Linux, the fork (or clone) system call will complete successfully (cheating under the covers), and after the following exec (if any) these reserved but unused 4 GB will be freed without any harm.
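A sketch of that fork-then-exec pattern, assuming overcommit is enabled (with strict accounting the fork itself can fail even though the child execs right away):

    /* Sketch of the fork-then-exec case: the parent holds a large private
     * allocation; the child execs a tiny program, so the duplicated
     * (copy-on-write) address space is never actually needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        size_t big = 2UL * 1024 * 1024 * 1024;     /* 2 GiB reservation */
        char *p = malloc(big);
        if (p == NULL) { perror("malloc"); return 1; }
        memset(p, 0x2a, big / 4);                  /* actually use part of it */

        pid_t pid = fork();     /* under strict accounting this can fail,
                                   even though the child execs immediately */
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execlp("true", "true", (char *)NULL);  /* parent's 2 GiB is
                                                      irrelevant after exec */
            perror("execlp");
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        free(p);
        return 0;
    }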





























      Your Answer







      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "106"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      convertImagesToLinks: false,
      noModals: false,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );








       

      draft saved


      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f441364%2fwhat-is-the-purpose-of-memory-overcommitment-on-linux%23new-answer', 'question_page');

      );

      Post as a guest






























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      8
      down vote













      The reason for overcommitting is to avoid underutilization of physical RAM. There is a difference between how much virtual memory a process has allocated and how much of this virtual memory has been actually mapped to physical page frames. In fact, right after a process is started, it reserves very little RAM. This is due to demand paging: the process has a virtual memory layout, but the mapping from the virtual memory address to a physical page frame isn't established until the memory is read or written.



      A program typically never uses its whole virtual memory space, and the memory areas touched varies during the run ot the program. For example, mappings to page frames containing initialization code that is executed only at the start of the run can be discarded and the page frames can be used for other mappings.



      The same applies to data: when a program calls malloc, it reserves a sufficiently large contiguous virtual address space for storing data. However, mappings to physical page frames are not esablished until the pages are actually used, if ever. Or consider the program stack: every process gets a fairly big contiguous virtual memory area set aside for the stack (typically 8 MB). A process typically uses only a fraction of this stack space; small and well-behaving programs use even less.



      A Linux computer typically has a lot of heterogeneous processes running in different stages of their lifetimes. Statistically, at any point in time, they do not collectively need a mapping for every virtual page they have been assigned (or will be assigned later in the program run).



      A strictly non-overcommitting scheme would create a static mapping from virtual address pages to physical RAM page frames at the moment the virtual pages are allocated. This would result in a system that can run far fewer programs concurrently, because a lot of RAM page frames would be reserved for nothing.



      I don't deny that overcommitting memory has its dangers, and can lead to out-of-memory situations that are messy to deal with. It's all about finding the right compromise.






      share|improve this answer





















      • Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done so by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
        – Basile Starynkevitch
        May 3 at 5:10











      • And the usual scheme would be to reserve virtual memory (by allocating it on swap area), not RAM
        – Basile Starynkevitch
        May 3 at 5:23






      • 1




        Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, whole system wise. So be careful in expliciting virtual address space, virtual memory, RAM
        – Basile Starynkevitch
        May 3 at 8:13















      up vote
      8
      down vote













      The reason for overcommitting is to avoid underutilization of physical RAM. There is a difference between how much virtual memory a process has allocated and how much of this virtual memory has been actually mapped to physical page frames. In fact, right after a process is started, it reserves very little RAM. This is due to demand paging: the process has a virtual memory layout, but the mapping from the virtual memory address to a physical page frame isn't established until the memory is read or written.



      A program typically never uses its whole virtual memory space, and the memory areas touched varies during the run ot the program. For example, mappings to page frames containing initialization code that is executed only at the start of the run can be discarded and the page frames can be used for other mappings.



      The same applies to data: when a program calls malloc, it reserves a sufficiently large contiguous virtual address space for storing data. However, mappings to physical page frames are not esablished until the pages are actually used, if ever. Or consider the program stack: every process gets a fairly big contiguous virtual memory area set aside for the stack (typically 8 MB). A process typically uses only a fraction of this stack space; small and well-behaving programs use even less.



      A Linux computer typically has a lot of heterogeneous processes running in different stages of their lifetimes. Statistically, at any point in time, they do not collectively need a mapping for every virtual page they have been assigned (or will be assigned later in the program run).



      A strictly non-overcommitting scheme would create a static mapping from virtual address pages to physical RAM page frames at the moment the virtual pages are allocated. This would result in a system that can run far fewer programs concurrently, because a lot of RAM page frames would be reserved for nothing.



      I don't deny that overcommitting memory has its dangers, and can lead to out-of-memory situations that are messy to deal with. It's all about finding the right compromise.






      share|improve this answer





















      • Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done so by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
        – Basile Starynkevitch
        May 3 at 5:10











      • And the usual scheme would be to reserve virtual memory (by allocating it on swap area), not RAM
        – Basile Starynkevitch
        May 3 at 5:23






      • 1




        Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, whole system wise. So be careful in expliciting virtual address space, virtual memory, RAM
        – Basile Starynkevitch
        May 3 at 8:13













      up vote
      8
      down vote










      up vote
      8
      down vote









      The reason for overcommitting is to avoid underutilization of physical RAM. There is a difference between how much virtual memory a process has allocated and how much of this virtual memory has been actually mapped to physical page frames. In fact, right after a process is started, it reserves very little RAM. This is due to demand paging: the process has a virtual memory layout, but the mapping from the virtual memory address to a physical page frame isn't established until the memory is read or written.



      A program typically never uses its whole virtual memory space, and the memory areas touched varies during the run ot the program. For example, mappings to page frames containing initialization code that is executed only at the start of the run can be discarded and the page frames can be used for other mappings.



      The same applies to data: when a program calls malloc, it reserves a sufficiently large contiguous virtual address space for storing data. However, mappings to physical page frames are not esablished until the pages are actually used, if ever. Or consider the program stack: every process gets a fairly big contiguous virtual memory area set aside for the stack (typically 8 MB). A process typically uses only a fraction of this stack space; small and well-behaving programs use even less.



      A Linux computer typically has a lot of heterogeneous processes running in different stages of their lifetimes. Statistically, at any point in time, they do not collectively need a mapping for every virtual page they have been assigned (or will be assigned later in the program run).



      A strictly non-overcommitting scheme would create a static mapping from virtual address pages to physical RAM page frames at the moment the virtual pages are allocated. This would result in a system that can run far fewer programs concurrently, because a lot of RAM page frames would be reserved for nothing.



      I don't deny that overcommitting memory has its dangers, and can lead to out-of-memory situations that are messy to deal with. It's all about finding the right compromise.






      share|improve this answer













      The reason for overcommitting is to avoid underutilization of physical RAM. There is a difference between how much virtual memory a process has allocated and how much of this virtual memory has been actually mapped to physical page frames. In fact, right after a process is started, it reserves very little RAM. This is due to demand paging: the process has a virtual memory layout, but the mapping from the virtual memory address to a physical page frame isn't established until the memory is read or written.



      A program typically never uses its whole virtual memory space, and the memory areas touched varies during the run ot the program. For example, mappings to page frames containing initialization code that is executed only at the start of the run can be discarded and the page frames can be used for other mappings.



      The same applies to data: when a program calls malloc, it reserves a sufficiently large contiguous virtual address space for storing data. However, mappings to physical page frames are not esablished until the pages are actually used, if ever. Or consider the program stack: every process gets a fairly big contiguous virtual memory area set aside for the stack (typically 8 MB). A process typically uses only a fraction of this stack space; small and well-behaving programs use even less.



      A Linux computer typically has a lot of heterogeneous processes running in different stages of their lifetimes. Statistically, at any point in time, they do not collectively need a mapping for every virtual page they have been assigned (or will be assigned later in the program run).



      A strictly non-overcommitting scheme would create a static mapping from virtual address pages to physical RAM page frames at the moment the virtual pages are allocated. This would result in a system that can run far fewer programs concurrently, because a lot of RAM page frames would be reserved for nothing.



      I don't deny that overcommitting memory has its dangers, and can lead to out-of-memory situations that are messy to deal with. It's all about finding the right compromise.







      share|improve this answer













      share|improve this answer



      share|improve this answer











      answered May 2 at 21:23









      Johan Myréen

      6,76711221




      6,76711221











      • Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done so by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
        – Basile Starynkevitch
        May 3 at 5:10











      • And the usual scheme would be to reserve virtual memory (by allocating it on swap area), not RAM
        – Basile Starynkevitch
        May 3 at 5:23






      • 1




        Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, whole system wise. So be careful in expliciting virtual address space, virtual memory, RAM
        – Basile Starynkevitch
        May 3 at 8:13

















      • Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done so by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
        – Basile Starynkevitch
        May 3 at 5:10











      • And the usual scheme would be to reserve virtual memory (by allocating it on swap area), not RAM
        – Basile Starynkevitch
        May 3 at 5:23






      • 1




        Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, whole system wise. So be careful in expliciting virtual address space, virtual memory, RAM
        – Basile Starynkevitch
        May 3 at 8:13
















      Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done so by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
      – Basile Starynkevitch
      May 3 at 5:10





      Avoiding underutilization of RAM might be done with madvise(2) using MADV_DONTNEED (and perhaps should be done so by applications), so I am still not very convinced. But your explanation is interesting, so thanks!
      – Basile Starynkevitch
      May 3 at 5:10













      And the usual scheme would be to reserve virtual memory (by allocating it on swap area), not RAM
      – Basile Starynkevitch
      May 3 at 5:23




      And the usual scheme would be to reserve virtual memory (by allocating it on swap area), not RAM
      – Basile Starynkevitch
      May 3 at 5:23




      1




      1




      Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, whole system wise. So be careful in expliciting virtual address space, virtual memory, RAM
      – Basile Starynkevitch
      May 3 at 8:13





      Processes have a virtual address space (in virtual memory). RAM is managed by the kernel, whole system wise. So be careful in expliciting virtual address space, virtual memory, RAM
      – Basile Starynkevitch
      May 3 at 8:13













      up vote
      3
      down vote













      You say this as if laziness is not considered a virtue in programming :).



      Large quantities of software are optimized for simplicity and maintainability, with surviving low-memory conditions as a very low priority. It is common to treat allocation failure as fatal. Exiting the process which exhausts memory avoids a situation where there is no free memory, and the system cannot make progress without either allocating more memory, or complexity in the form of comprehensive pre-allocation.



      Notice how fine the difference is between checking allocations and dying, or not checking and crashing. It would not be fair to blame overcommit on programmers simply not bothering to check whether malloc() succeeded or failed.



      There is a only a small amount of software which you can trust to continue "correctly" in the face of failed allocations. The kernel should generally be expected to survive. sqlite has a notoriously robust test which includes out of memory testing, specifically because it is intended to support various small embedded systems.



      As a failure path not used in normal operation, handling low memory conditions correctly imposes a significant extra burden in maintenance and testing. If that effort does not yield a commensurate benefit, it can more profitably be spent elsewhere.



      Where this breaks down, it is also common to have special cases for large allocations, to handle the most common causes of failure.



      Allowing a certain amount of overcommit is probably best viewed in this context. It is part of the current default compromise on Linux.



      Note the idea that one should disable kernel-level overcommit and instead provide more swap than you ever want to use, also has its haters. The gap in speed between RAM and rotating hard drives has grown over time, such that when the system actually uses the swap space you have allowed it, it can more often be described as "grinding to a halt".






      share|improve this answer

























        up vote
        3
        down vote













        You say this as if laziness is not considered a virtue in programming :).



        Large quantities of software are optimized for simplicity and maintainability, with surviving low-memory conditions as a very low priority. It is common to treat allocation failure as fatal. Exiting the process which exhausts memory avoids a situation where there is no free memory, and the system cannot make progress without either allocating more memory, or complexity in the form of comprehensive pre-allocation.



        Notice how fine the difference is between checking allocations and dying, or not checking and crashing. It would not be fair to blame overcommit on programmers simply not bothering to check whether malloc() succeeded or failed.



        There is a only a small amount of software which you can trust to continue "correctly" in the face of failed allocations. The kernel should generally be expected to survive. sqlite has a notoriously robust test which includes out of memory testing, specifically because it is intended to support various small embedded systems.



        As a failure path not used in normal operation, handling low memory conditions correctly imposes a significant extra burden in maintenance and testing. If that effort does not yield a commensurate benefit, it can more profitably be spent elsewhere.



        Where this breaks down, it is also common to have special cases for large allocations, to handle the most common causes of failure.



        Allowing a certain amount of overcommit is probably best viewed in this context. It is part of the current default compromise on Linux.



        Note the idea that one should disable kernel-level overcommit and instead provide more swap than you ever want to use, also has its haters. The gap in speed between RAM and rotating hard drives has grown over time, such that when the system actually uses the swap space you have allowed it, it can more often be described as "grinding to a halt".






        share|improve this answer























          up vote
          3
          down vote










          up vote
          3
          down vote









          You say this as if laziness is not considered a virtue in programming :).



          Large quantities of software are optimized for simplicity and maintainability, with surviving low-memory conditions as a very low priority. It is common to treat allocation failure as fatal. Exiting the process which exhausts memory avoids a situation where there is no free memory, and the system cannot make progress without either allocating more memory, or complexity in the form of comprehensive pre-allocation.



          Notice how fine the difference is between checking allocations and dying, or not checking and crashing. It would not be fair to blame overcommit on programmers simply not bothering to check whether malloc() succeeded or failed.



          There is a only a small amount of software which you can trust to continue "correctly" in the face of failed allocations. The kernel should generally be expected to survive. sqlite has a notoriously robust test which includes out of memory testing, specifically because it is intended to support various small embedded systems.



          As a failure path not used in normal operation, handling low memory conditions correctly imposes a significant extra burden in maintenance and testing. If that effort does not yield a commensurate benefit, it can more profitably be spent elsewhere.



          Where this breaks down, it is also common to have special cases for large allocations, to handle the most common causes of failure.



          Allowing a certain amount of overcommit is probably best viewed in this context. It is part of the current default compromise on Linux.



          Note the idea that one should disable kernel-level overcommit and instead provide more swap than you ever want to use, also has its haters. The gap in speed between RAM and rotating hard drives has grown over time, such that when the system actually uses the swap space you have allowed it, it can more often be described as "grinding to a halt".






          share|improve this answer













          You say this as if laziness is not considered a virtue in programming :).



          Large quantities of software are optimized for simplicity and maintainability, with surviving low-memory conditions as a very low priority. It is common to treat allocation failure as fatal. Exiting the process which exhausts memory avoids a situation where there is no free memory, and the system cannot make progress without either allocating more memory, or complexity in the form of comprehensive pre-allocation.



          Notice how fine the difference is between checking allocations and dying, or not checking and crashing. It would not be fair to blame overcommit on programmers simply not bothering to check whether malloc() succeeded or failed.



          There is a only a small amount of software which you can trust to continue "correctly" in the face of failed allocations. The kernel should generally be expected to survive. sqlite has a notoriously robust test which includes out of memory testing, specifically because it is intended to support various small embedded systems.



          As a failure path not used in normal operation, handling low memory conditions correctly imposes a significant extra burden in maintenance and testing. If that effort does not yield a commensurate benefit, it can more profitably be spent elsewhere.



          Where this breaks down, it is also common to have special cases for large allocations, to handle the most common causes of failure.



          Allowing a certain amount of overcommit is probably best viewed in this context. It is part of the current default compromise on Linux.



          Note the idea that one should disable kernel-level overcommit and instead provide more swap than you ever want to use, also has its haters. The gap in speed between RAM and rotating hard drives has grown over time, such that when the system actually uses the swap space you have allowed it, it can more often be described as "grinding to a halt".







          share|improve this answer













          share|improve this answer



          share|improve this answer











          answered May 3 at 9:09









          sourcejedi

          18.2k32475




          18.2k32475




















              up vote
              3
              down vote













              I agree with and upvoted Johan Myréen answer but here are more explanations that might help you understanding the issue.



              You seem to confuse the swap area, i.e. on disk space intended to store less used RAM and virtual memory. The latter is made of a combination of RAM areas and on disk areas.



              Processes are reserving and using virtual memory. They have no idea about where it is stored. When they need to access some data which isn't in RAM, processes (or threads) are suspended until the kernel do the job for the data page to be available.



              When there is RAM demand, the kernel free some RAM by storing less used process pages on the disk swap area.



              When a process reserves memory (i.e. malloc and the likes), non overcommiting OSes mark unused portions of the virtual memory as unavailable. That means that when the process having made the allocation will actually need to access the reserved pages, they will be guaranteed to be present.



              The drawback is given the fact that memory is not usable by any other process is preventing these processes to have their pages paginated out, so RAM is wasted if these processes are inactive. Worst, if the sum of reservations is larger than the size of the swap area, pages of RAM will also be reserved to match the reservation despite not containing any data. This is quite a bad scenario because you'll have RAM which is both unusable and unused. Finally, the worst case scenario is for a huge reservation not to be able to be accepted because there is no more virtual memory (swap + ram) available for it. The process doing the reservation will usually crash.



              On the other hand, overcommiting OSes like Linux bet there will be no virtual memory shortage at any given time. They accept most memory reservations (but not unrealistic ones, this can be more or less tuned) and in general, this allows a better utilization of the RAM and swap resources.



              This is similar to airline companies overbooking seats. This improve occupancy rate but some passengers might be unhappy. Hopefully, airlines just book them to another flight and possibly compensate them while Linux just throws the heavier passengers out of the flying plane...



              To summarize, Linux reserves memory a lazy, "best effort" way while several other OSes do guarantee reservations.



              Real cases scenario where overcommitting makes a lot of sense is when a program which uses a lot of virtual memory does a fork followed by an exec.



              Let's say you have 4 GB of RAM from which 3 GB are available for virtual memory and 4 GB of swap. There is a process reserving 4 GB but only using 1 GB out of it. There is no pagination so the system performs well. On a non overcommiting OS, that process cannot fork because just after the fork, 4 GB more of virtual memory need to be reserved, and there is only 3 GB left.



              On Linux, this fork (or clone) system call will complete successfully (but cheating under the cover) and after the following exec (if any), these reserved but unused 4 GB will be freed without any harm.






              share|improve this answer



























                up vote
                3
                down vote













                I agree with and upvoted Johan Myréen answer but here are more explanations that might help you understanding the issue.



                You seem to confuse the swap area, i.e. on disk space intended to store less used RAM and virtual memory. The latter is made of a combination of RAM areas and on disk areas.



                Processes are reserving and using virtual memory. They have no idea about where it is stored. When they need to access some data which isn't in RAM, processes (or threads) are suspended until the kernel do the job for the data page to be available.



                When there is RAM demand, the kernel free some RAM by storing less used process pages on the disk swap area.



                When a process reserves memory (i.e. malloc and the likes), non overcommiting OSes mark unused portions of the virtual memory as unavailable. That means that when the process having made the allocation will actually need to access the reserved pages, they will be guaranteed to be present.



                The drawback is given the fact that memory is not usable by any other process is preventing these processes to have their pages paginated out, so RAM is wasted if these processes are inactive. Worst, if the sum of reservations is larger than the size of the swap area, pages of RAM will also be reserved to match the reservation despite not containing any data. This is quite a bad scenario because you'll have RAM which is both unusable and unused. Finally, the worst case scenario is for a huge reservation not to be able to be accepted because there is no more virtual memory (swap + ram) available for it. The process doing the reservation will usually crash.



                On the other hand, overcommiting OSes like Linux bet there will be no virtual memory shortage at any given time. They accept most memory reservations (but not unrealistic ones, this can be more or less tuned) and in general, this allows a better utilization of the RAM and swap resources.



                This is similar to airline companies overbooking seats. This improve occupancy rate but some passengers might be unhappy. Hopefully, airlines just book them to another flight and possibly compensate them while Linux just throws the heavier passengers out of the flying plane...



                To summarize, Linux reserves memory a lazy, "best effort" way while several other OSes do guarantee reservations.



                Real cases scenario where overcommitting makes a lot of sense is when a program which uses a lot of virtual memory does a fork followed by an exec.



                Let's say you have 4 GB of RAM from which 3 GB are available for virtual memory and 4 GB of swap. There is a process reserving 4 GB but only using 1 GB out of it. There is no pagination so the system performs well. On a non overcommiting OS, that process cannot fork because just after the fork, 4 GB more of virtual memory need to be reserved, and there is only 3 GB left.



                On Linux, this fork (or clone) system call will complete successfully (but cheating under the cover) and after the following exec (if any), these reserved but unused 4 GB will be freed without any harm.






                share|improve this answer
















































                  edited May 3 at 12:16


























                  answered May 3 at 10:19









                  jlliagre

44.7k


























                       
