How can I calculate the size of shared memory available to the system?
According to the RHEL documentation, the total amount of shared memory available on the system equals shmall * PAGE_SIZE.
After I completed the installation of RHEL 6, the shmall kernel parameter defaulted to 4294967296, which means that the total number of shared memory pages that can be used system-wide is 4294967296, and the page size is 4096 B. So, based on the formula, the size of shared memory is
4294967296 * 4096 / 1024 / 1024 / 1024 / 1024 = 16 TB
which is much more than the RAM (8 GB) the operating system has. How can an OS find 16 TB of shared memory to allocate?
So, is the size of /dev/shm actually equal to the size of shared memory? If not, how can I get the actual size of shared memory?
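For reference, the same ceiling can be recomputed from the live kernel settings. A minimal sketch (standard procfs path and getconf; output in TB as above):
shmall=$(cat /proc/sys/kernel/shmall)   # system-wide limit, in pages
page_size=$(getconf PAGE_SIZE)          # bytes per page, 4096 here
echo "$(( shmall * page_size / 1024 / 1024 / 1024 / 1024 )) TB"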
memory shared-memory
asked Nov 2 '16 at 8:52 by user4535727 (edited Nov 2 '16 at 9:44)
Please don't ask multiple questions in a single post. I have removed the second one, you can ask it as a separate question. Also, please edit your question and i) clarify why you say the system has less memory than what you show. What kind of memory are you referring to? How do you measure it? ii) How do you get 16TB from your formula? What you show is 16 gigabits, not 16 terabytes.
– terdon♦, Nov 2 '16 at 9:24
1 Answer
Your calculation is correct. shmall can be set higher than the available virtual memory. If you tried to use all of it, the failure would not be because shmall was exceeded, but for other reasons.
BTW, there are also commands to query these IPC limits:
ipcs -l
lsipc # util-linux >= 2.27
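As a cross-check, the shared memory total that ipcs -l reports is derived from shmall rather than from installed RAM. A sketch (the exact label may vary between util-linux versions):
shmall=$(cat /proc/sys/kernel/shmall)
page_kib=$(( $(getconf PAGE_SIZE) / 1024 ))
echo "from shmall: $(( shmall * page_kib )) kbytes"
ipcs -l | grep -i 'max total shared memory'   # also reported in kbytes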
Note that even virtual memory is effectively unlimited on Linux by default; it can be overcommitted beyond RAM + swap. See:
https://serverfault.com/questions/606185/how-does-vm-overcommit-memory-work
How does the OOM killer decide which process to kill first?
On the other hand, you could limit the virtual memory per process using ulimit -v, which wouldn't affect the kernel's /proc/sys/kernel/shmall either.
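For instance (a sketch; the 1 GiB figure is arbitrary, and ulimit -v takes its value in KiB):
ulimit -v $(( 1024 * 1024 ))   # cap this shell and its children at 1 GiB of virtual memory
cat /proc/sys/kernel/shmall    # unchanged: the system-wide IPC limit is separate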
answered Nov 2 '16 at 10:25 by rudimeier (edited Apr 13 '17 at 12:36)
The result of ipcs -l depends on the shmall setting; it does not show the real limits.
– user4535727, Nov 2 '16 at 11:42
@user4535727 Probably a bug due to integer overflow. Maybe it's correct for smaller values: echo $(( 1024*1024*1024 )) > /proc/sys/kernel/shmall; ipcs -l
– rudimeier, Nov 2 '16 at 11:59