How many page faults does this program need?












Operating System Concepts says




Let’s look at a contrived but informative example. Assume that pages
are 128 words in size. Consider a C program whose function is to
initialize to 0 each element of a 128-by-128 array. The following code
is typical:



int i, j;
int[128][128] data;

for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;


Notice that the array is stored row major; that is, the array is
stored data[0][0], data[0][1], · · ·, data[0][127], data[1][0],
data[1][1], · · ·, data[127][127]. For pages of 128 words, each row
takes one page. Thus, the preceding code zeros one word in each page,
then another word in each page, and so on. If the operating system
allocates fewer than 128 frames to the entire program, then its
execution will result in 128 × 128 = 16,384 page faults.




Does the highlighted sentence mean that a page fault happens when initializing each element of the array, and that, after page replacement and initialization of the element, the page is immediately moved out of RAM again?



"the operating system allocates fewer than 128 frames to the entire program" doesn't necessarily mean that "the operating system allocates a single frame to the entire program". Then why is the text so sure that the most recent page is moved out of RAM immediately after being accessed?



Suppose the OS allocates n (fewer than 128) frames to the program.
Can it just keep "n-1" pages, i.e. rows, in RAM, and use the one remaining frame for all the page faults and replacements? Then the total number of page faults could be reduced from 128*128 to (n-1) + (128-(n-1))*128 (for example, 64 + 64*128 = 8,256 faults for n = 65, instead of 16,384).










virtual-memory

asked 10 hours ago by Tim, edited 9 hours ago by Rui F Ribeiro
  • Is this specific to Linux/UNIX, or just a general consideration?
    – mrc02_kr
    9 hours ago










  • Linux.
    – Tim
    9 hours ago














1 Answer
Then why is the text so sure that the most recent page is moved out of RAM immediately after being accessed?




Typically, it will be the page accessed least recently which will be evicted, but that does lead to the pathological behaviour described. The first time through the inner loop, the first n frames are paged in; then when page n + 1 needs to be paged in, page 1 is paged out, and so it goes, ensuring that all pages need to be paged back in every time round the loop.
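To make that argument concrete, here is a minimal simulation of my own (a sketch, not part of the original answer): it models strict LRU replacement with n frames over the textbook's access pattern, pages 0 through 127 touched in order, 128 times, and reproduces the 16,384 figure for any allocation below 128 frames.

#include <stdio.h>

#define PAGES  128   /* one page per array row in the textbook example */
#define PASSES 128   /* the outer loop touches every row once per pass */

/* Count page faults under strict LRU with n frames (n <= PAGES) for the
 * access pattern 0, 1, ..., 127, repeated 128 times. */
static int lru_faults(int n)
{
    int  held[PAGES];     /* held[f]: page in frame f, -1 = free */
    long last[PAGES];     /* last[f]: time of frame f's last access */
    long now = 0;
    int  faults = 0;

    for (int f = 0; f < n; f++) {
        held[f] = -1;
        last[f] = -1;     /* free frames look "least recently used" */
    }

    for (int pass = 0; pass < PASSES; pass++) {
        for (int page = 0; page < PAGES; page++) {
            int target = -1;

            for (int f = 0; f < n; f++)        /* already resident? */
                if (held[f] == page) { target = f; break; }

            if (target < 0) {                  /* fault: take free or LRU frame */
                faults++;
                target = 0;
                for (int f = 1; f < n; f++)
                    if (last[f] < last[target]) target = f;
                held[target] = page;
            }
            last[target] = now++;
        }
    }
    return faults;
}

int main(void)
{
    int sizes[] = { 1, 64, 127, 128 };

    for (int k = 0; k < 4; k++)
        printf("%3d frames -> %5d faults\n", sizes[k], lru_faults(sizes[k]));
    /* prints 16384 for every n < 128, and 128 for n = 128 */
    return 0;
}

Even with 127 of the 128 frames, every single access faults: by the time the loop comes back around to a page, it is always the least recently used one and has just been evicted.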



However, this scenario is really unlikely. If the system is totally starved of RAM (physical and swap), the kernel will kill a program to free some memory; given the test program’s behaviour, it’s unlikely to be the candidate. If the system is only starved of physical RAM, the kernel will swap pages out, or reduce its caches; if it swaps pages out, it’s unlikely to target the test program. In both cases, the test program will then have enough RAM to fit its working set. If you do somehow contrive to starve the test program only (e.g. by increasing its working set so it dominates the system’s memory), you’re more likely in practice to see it killed with a SIGSEGV than to see it continuously page its working set in and out. (This is fine; it’s a thought experiment in a textbook. Learn the resulting principles, but don’t necessarily try to apply the example in practice.)




Can it just keep "n-1" pages, i.e. rows, in RAM, and use the one remaining frame for all the page faults and replacements?




It could, but it would be unusual for the system to do this; how is the system to know what future memory access patterns are going to be like? Generally speaking you’ll see an LRU eviction, so the loop will exhibit pathological behaviour as described above.
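A program that does know its own access pattern can supply that knowledge itself. As a hedged illustration of my own (the answer doesn't propose this), on Linux a process can pin specific pages with mlock(2), approximating the "keep n-1 rows resident" policy from the question; it is subject to the RLIMIT_MEMLOCK limit, so the call may fail.

#include <stdio.h>
#include <sys/mman.h>

static int data[128][1024];          /* one 4 KB page per row, as below */

int main(void)
{
    /* Illustrative only: ask the kernel to keep the first 64 rows
     * (64 pages, 256 KB) resident; the other rows stay evictable. */
    if (mlock(&data[0][0], 64 * sizeof data[0]) != 0) {
        perror("mlock");             /* often fails: RLIMIT_MEMLOCK too low */
        return 1;
    }

    for (int j = 0; j < 1024; j++)
        for (int i = 0; i < 128; i++)
            data[i][j] = 0;

    munlock(&data[0][0], 64 * sizeof data[0]);
    return 0;
}

The kernel itself won't do this on the program's behalf, for exactly the reason above: it can't predict future access patterns.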



If you want to play around with this, fix the program so that each row matches the 4 KB page size used on x86 (I’m assuming Linux on 64-bit x86 here), and so that it actually compiles:



int main(int argc, char **argv)
{
    int i, j;
    int data[128][1024];

    for (j = 0; j < 1024; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;

    return 0;
}



Then run it using /usr/bin/time, which will show the number of page faults:



0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1612maxresident)k
0inputs+0outputs (0major+180minor)pagefaults 0swaps


This kind of array handling will cause more problems with cache line evictions than it will with page faults in practice.
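To see the cache effect the answer alludes to, here is a small benchmark sketch of mine (not from the original answer): it times the same zeroing loop in column-major and then row-major order over a larger array, where the gap comes from cache-line behaviour rather than paging. Compile without aggressive optimization (e.g. -O0) so the loops run as written; on older glibc, link with -lrt for clock_gettime.

#include <stdio.h>
#include <time.h>

#define R 4096
#define C 4096

static int data[R][C];               /* ~64 MB: large enough to defeat caches */

static double secs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double t0, t1, t2;

    t0 = secs();
    for (int j = 0; j < C; j++)      /* column-major: a new cache line */
        for (int i = 0; i < R; i++)  /* is touched for every element */
            data[i][j] = 0;
    t1 = secs();
    for (int i = 0; i < R; i++)      /* row-major: sequential access, */
        for (int j = 0; j < C; j++)  /* cache- and prefetch-friendly */
            data[i][j] = 1;
    t2 = secs();

    printf("column-major: %.3fs, row-major: %.3fs\n", t1 - t0, t2 - t1);
    return 0;
}

Both orders touch exactly the same pages, so the page-fault counts match; only the cache-miss rates differ, which is why the column-major version is typically several times slower.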






answered 9 hours ago by Stephen Kitt, edited 8 hours ago by Tim
  • Thanks. In the "fixed" program, how do you calculate "(0major+180minor)pagefaults"?
    – Tim
    8 hours ago










  • “Then run it using /usr/bin/time, which will show the number of page faults:” — I don’t calculate it, time tells me how many page faults really occurred. There are typically around 64 involved in loading the program and libraries, for a small program using only the C library, then the rest for the loop (so not quite 128, but still in the right ballpark).
    – Stephen Kitt
    8 hours ago











  • In what sense does it fix the problem of the original program, and why does it?
    – Tim
    7 hours ago











  • The original code snippet doesn’t compile. My version matches the underlying page size so it’s easier to correlate real page faults.
    – Stephen Kitt
    7 hours ago











  • Thanks. Why "this kind of array handling will cause more problems with cache line evictions than it will with page faults in practice"? Generally speaking, what kinds of cases will "cause more problems with cache line evictions than it will with page faults", and what will cause the opposite, and what will cause both comparably?
    – Tim
    4 hours ago









