Tool to force use of a FILE instead of RAM

3















I'm wondering why there is no tool like a hypothetical ram2swap, comparable in spirit to cpulimit.

For example, suppose there is a program daemon.bin.

Let's create a temporary swap file (about 1 GB):

% dd if=/dev/zero of=tmp.swap bs=1M count=1000

Then launch:

% cpulimit -l 10 ram2swap -f tmp.swap daemon.bin

Instead of using RAM or the system swap, daemon.bin would allocate its memory in the tmp.swap file.

Yes, this would dramatically decrease performance, but it would be very flexible.
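(For comparison, since no ram2swap actually exists: the closest the stock kernel comes is letting you register tmp.swap as ordinary, system-wide swap. A rough sketch, assuming GNU dd and root privileges:)

% dd if=/dev/zero of=tmp.swap bs=1M count=1000   # ~1 GB file with no holes
% chmod 600 tmp.swap                             # swap files should not be readable by others
% sudo mkswap tmp.swap                           # write the swap signature
% sudo swapon tmp.swap                           # the kernel may now page anything out to it
% swapon -s                                      # verify it is active
% sudo swapoff tmp.swap                          # detach it again when done

That gives the kernel more swap in general rather than binding one process to one file, which is exactly the gap this question is about.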










linux utilities swap limit ram






asked Apr 15 '14 at 13:47









Anomalous Awe

  • 2





    not a tool, but you can increase swappiness to encourage the kernel to swap to disc more often.

    – user61786
    Apr 15 '14 at 14:00











  • Yes, swappiness (/proc/sys/vm/swappiness) is the most direct way of controlling this.

    – slm
    Apr 15 '14 at 14:09











  • You can use this command to push blocks back to RAM from swap: sudo sync; echo 3 >> /proc/sys/vm/drop_caches

    – slm
    Apr 15 '14 at 14:23











  • This is off-topic here. I'm not talking about drop_caches and trivial swappiness.

    – Anomalous Awe
    Apr 15 '14 at 14:26











  • @AnomalousAwe - not off-topic; I understand what you're asking. There is no such tool: the only way you can influence where something is located (RAM vs. swap) is through swappiness or by dropping caches.

    – slm
    Apr 15 '14 at 14:54
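(For reference, the swappiness tweak suggested in the comments looks roughly like this; the value 80 is an arbitrary example and the default of 60 may vary by distribution:)

% cat /proc/sys/vm/swappiness                              # current value, typically 60
% sudo sysctl -w vm.swappiness=80                          # higher value: the kernel swaps more eagerly
% echo 'vm.swappiness=80' | sudo tee -a /etc/sysctl.conf   # optionally make the change persistent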













2 Answers
























4














There is no such tool because it does not make sense from a single-program point of view.

One can consider CPU/HDD/RAM/swap as resources. These resources can be shared in different ways by the operating system among processes, users, contexts, etc.

In some specific situations, it makes sense to tell the operating system to enforce hard limits:

  • Don't allow this program to use more than 60% of memory.

  • Don't allow this user to use more than 20% of the CPU time if another user needs it; otherwise allow him to use 100% of the CPU.

  • Don't allow users from this group to use more than 2 cores.

  • ...

This provides real flexibility: resources are shared between the users according to the administrator's wishes, and the OS complies.

Why is manually putting a program into swap not a good idea?

  • You are basically assuming that you are better than the heuristics in the kernel. The kernel already handles swap space by itself. If a daemon has not been active for a long time and the OS is short on RAM, it will eventually be put into swap.

  • AFAIK, swap is not executable: before being executed, the contents of the swap need to be brought back into RAM. So if you were thinking of first-class programs being executed from RAM and second-class programs being executed from swap: stop now, that won't work.

  • "Yes, but that specific daemon is called twice a month. I don't need it in RAM." If that's true, the kernel will put it into swap when you run short of RAM.

  • "Why wait until RAM runs short?" Moving things in and out of swap is very expensive, especially on a plain old HDD. If your system can keep everything in RAM, it's better to let it do so. If you force something into swap, your system will be less responsive for that time. Since your other "first-class" daemons most probably do some HDD I/O too, they will also be slowed down. The same thing happens when the "second-class" daemon wakes up and needs to be brought back into RAM.

Now, why can't a user use a magic command line to put a program into swap?

  • It's not clear what should be put into swap from a userspace (non-kernel) perspective. Your program is linked to libmylib.so; should that be put into swap too? And what about libc.so?

  • What is the intended behaviour, actually? Do you want the daemon to be put directly into swap? But it will have to do some initialization work, won't it? Then it will be back in RAM as soon as it has loaded.

  • How do you know the daemon is no longer in use and can safely be put back into swap?

  • Should it be sent back into swap immediately, or should you wait a little? Otherwise, at each sleep your daemon will be put into swap, and at each wake-up it will be brought back into RAM. If your daemon updates your computer's clock, for instance, be ready for hours of swapping.

  • ...

In short, you need the kernel to handle this, and that is precisely what it does. For most needs, tweaking the swappiness is more than enough to get a responsive system.

Now, if you really want to shoot yourself in the foot, the kernel offers this "flexibility" through cgroups. You can obtain what you think you want by setting a maximum memory and a maximum memory+swap for your daemon in a cgroup. You can obtain a more reasonable result by setting a per-program swappiness. But either way, it is not a good idea, and I won't go further than this as an explanation.
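(For the record, a minimal sketch of that cgroup approach, assuming a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory and a kernel with swap accounting enabled; the group name limited-daemon is made up:)

% sudo mkdir /sys/fs/cgroup/memory/limited-daemon
% echo $((64*1024*1024)) | sudo tee /sys/fs/cgroup/memory/limited-daemon/memory.limit_in_bytes
% echo $((1024*1024*1024)) | sudo tee /sys/fs/cgroup/memory/limited-daemon/memory.memsw.limit_in_bytes
% echo $$ | sudo tee /sys/fs/cgroup/memory/limited-daemon/cgroup.procs
% ./daemon.bin

The first write caps resident memory at 64 MiB, the second caps RAM+swap at 1 GiB, and daemon.bin inherits both limits from the shell that was moved into the group, so anything it allocates beyond 64 MiB ends up in swap rather than RAM. That is roughly the behaviour the question asks for, minus the dedicated file.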






answered Apr 15 '14 at 15:28 by user21228, edited Feb 27 at 8:30 by Glorfindel
    2














    There is something kind of like what you describe: there is a feature to limit the amount of RAM used by a process (RAM, as opposed to virtual memory). The RLIMIT_RSS limit sets an upper bound on a program's resident set size, i.e. the part of that process's memory which is resident in RAM (as opposed to swapped out). However, it is not implemented on Linux.



    RLIMIT_RSS existed in some old Unix systems (BSD only?) but has been dropped from many modern systems. On Linux, it existed only in the 2.4 series (and even there it didn't fully work). On Solaris it was dropped at some point in the distant past (when SunOS switched from BSD to System V, maybe?). FreeBSD seems to still have it.
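    (A small aside: the shell interface for this limit is still around even on systems that ignore it. A sketch, assuming bash on Linux:)

    % ulimit -m 65536     # sets RLIMIT_RSS to 64 MiB (the value is in KiB)
    % ulimit -m           # the shell dutifully reports 65536...
    % ./daemon.bin        # ...but current Linux kernels do not enforce RLIMIT_RSS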



    I don't know why RLIMIT_RSS was removed. Alan Cox had this to say in 2006:




    The original mm for Linux didn't enforce it as it did not have any way to track RSS that was not computationally expensive. The current mm is probably capable of implementing RSS limits although questions then still remain about what effect this has if a user sets a low RSS limit and then causes a lot of swap thrashing.

    If you can see a way to implement it efficiently then go for it.




    Since then the topic has come up several times, but as far as I know no patch was accepted into the Linux kernel.



    One reason that enforcing a limit on physical memory usage for a process is problematic is that it's hard to define how much physical memory a process is using. OK, count its stack and heap (the part that's not swapped out). What about memory-mapped files? To measure the physical memory usage of a process, you have to count the cache used by the files that it maps. What about shared libraries and other shared mappings? If they're used by a single process then obviously they should be counted against it, but when they're used by multiple processes, it's hard to know which process is using which part.



    It doesn't make all that much sense to limit physical memory usage of a single process. Given that resource limits are inherited, each child would be allowed to use as much physical memory. It makes a little more sense to allow a limit on physical memory for a set of processes. Linux has this feature built into cgroups, a partial virtualization system where a process and its descendants run in a container which can have its own filesystem root, its own network controllers, its own resource limits, etc. A cgroup's memory usage can be limited with the memory.limit_in_bytes parameter (memory.memsw.limit_in_bytes controls the use of RAM plus swap). The documentation warns that pages shared between groups are assigned in a somewhat arbitrary way (“Shared pages are accounted on the basis of the first touch approach. The cgroup that first touches a page is accounted for the page.”). There may be other ways in which the limit isn't strictly enforced.



    The other part of what you're asking for is to have a dedicated swap file for a process. This would require considerable complexity in the kernel. Remember that pages can be shared: which swap file do you use when two processes with different swap files assigned to them share a page? On the other hand, the feature can be implemented fairly easily from the process side: map a file to memory. For shared pages, there's still a single file. As a side benefit, different areas of a process can use different “swap space”.






    answered Apr 15 '14 at 16:52 by Gilles, edited May 23 '17 at 11:33 by Community