How can I calculate the size of shared memory available to the system?

According to the RHEL documentation, the total amount of shared memory available on the system equals shmall * PAGE_SIZE.

After installing RHEL 6, the shmall kernel parameter defaults to 4294967296, which means that the total number of shared memory pages that can be used system-wide is 4294967296, and the page size is 4096 bytes. So, based on the formula, the size of shared memory is

4294967296 * 4096 / 1024 / 1024 / 1024 / 1024 = 16 TB

which is much more than the RAM (8 GB) the operating system has. How can an OS find 16 TB of shared memory to allocate?

So, is the size of /dev/shm actually equal to the size of shared memory? If not, how can I get the actual size of shared memory?
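For reference, here is a rough way to check these numbers on a live system and compare them with /dev/shm (just a sketch using standard procfs entries and coreutils; the actual values will of course differ from machine to machine):

# Theoretical SysV shared-memory ceiling described by shmall * PAGE_SIZE
shmall=$(cat /proc/sys/kernel/shmall)
page_size=$(getconf PAGE_SIZE)
echo "shmall    = $shmall pages"
echo "PAGE_SIZE = $page_size bytes"
echo "limit     = $(( shmall * page_size )) bytes"

# Size of the /dev/shm tmpfs mount, for comparison
df -h /dev/shm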

Tags: memory, shared-memory

asked Nov 2 '16 at 8:52 by user4535727, last edited Nov 2 '16 at 9:44

  • Please don't ask multiple questions in a single post. I have removed the second one, you can ask it as a separate question. Also, please edit your question and i) clarify why you say the system has less memory than what you show. What kind of memory are you referring to? How do you measure it? ii) How do you get 16TB from your formula? What you show is 16 gigabits, not 16 terabytes.
    – terdon♦
    Nov 2 '16 at 9:24

1 Answer

Your calculation is correct. shmall can be set higher than the available virtual memory. If you tried to use all of it, the allocation would fail for other reasons, not because shmall was exceeded.



By the way, there are also commands that report these IPC limits:



ipcs -l
lsipc # util-linux>=2.27
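
The raw limits can also be read straight from sysctl or procfs, for example (these are the standard SysV IPC keys):

sysctl kernel.shmmax kernel.shmall kernel.shmmni
# or, equivalently, via procfs:
cat /proc/sys/kernel/shmmax /proc/sys/kernel/shmall /proc/sys/kernel/shmmni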


Note that virtual memory itself is effectively unlimited on Linux by default: because of memory overcommit, processes can reserve more than RAM + swap. See



https://serverfault.com/questions/606185/how-does-vm-overcommit-memory-work



How the OOM killer decides which process to kill first?
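
As a quick way to inspect the overcommit configuration on a given box (standard procfs entries; the comments give the usual meaning of the values):

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (default), 1 = always overcommit, 2 = strict
cat /proc/sys/vm/overcommit_ratio    # percentage of RAM counted towards the commit limit in mode 2
grep -i commit /proc/meminfo         # CommitLimit and Committed_AS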



On the other hand, you could limit the virtual memory per process using ulimit -v, which would not affect the kernel's /proc/sys/kernel/shmall either.
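
For example, roughly like this (./myprog is only a placeholder for whatever you want to run):

# ulimit -v takes the limit in kibibytes; running it in a subshell keeps
# the limit from affecting your interactive shell.
( ulimit -v $(( 1024 * 1024 )); ./myprog )   # cap at about 1 GiB of address space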

answered Nov 2 '16 at 10:25 by rudimeier, last edited Apr 13 '17 at 12:36 by Community♦

  • The result of ipcs -l depends on the shmall setting; it does not show the real limits.
    – user4535727
    Nov 2 '16 at 11:42

  • @user4535727 Probably a bug caused by integer overflow. Maybe it's correct for smaller values: echo $(( 1024*1024*1024 )) > /proc/sys/kernel/shmall; ipcs -l
    – rudimeier
    Nov 2 '16 at 11:59