How many page faults does this program need?
Operating System Concepts says:

Let's look at a contrived but informative example. Assume that pages are 128 words in size. Consider a C program whose function is to initialize to 0 each element of a 128-by-128 array. The following code is typical:

int i, j;
int[128][128] data;
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;

Notice that the array is stored row major; that is, the array is stored data[0][0], data[0][1], ..., data[0][127], data[1][0], data[1][1], ..., data[127][127]. For pages of 128 words, each row takes one page. Thus, the preceding code zeros one word in each page, then another word in each page, and so on. If the operating system allocates fewer than 128 frames to the entire program, then its execution will result in 128 × 128 = 16,384 page faults.

Does the highlighted sentence (the last one in the quote) mean that a page fault happens when each element of the array is initialized, and that, after page replacement and initialization of the element, the page is immediately evicted from RAM?

"The operating system allocates fewer than 128 frames to the entire program" doesn't necessarily mean that "the operating system allocates a single frame to the entire program". So why is the text so sure that the most recently used page is evicted from RAM immediately after being accessed?

Suppose the OS allocates n frames, where n is fewer than 128, to the program. Couldn't it just keep n - 1 pages (i.e. rows) in RAM, and use the one remaining frame for all the page faults and replacements? Then the total number of page faults would be reduced from 128 * 128 to (n - 1) + (128 - (n - 1)) * 128.
virtual-memory
Is this specific to Linux/UNIX, or just a general consideration? – mrc02_kr, 9 hours ago
Linux. – Tim, 9 hours ago
asked 10 hours ago by Tim, edited 9 hours ago by Rui F Ribeiro
1 Answer
Then why is the text so sure that the most recent page is moved out of RAM immediately after being accessed?

Typically it is the least recently used page that will be evicted, but that still leads to the pathological behaviour described. The first time through the inner loop, the first n frames are paged in; then, when page n + 1 needs to be paged in, page 1 is paged out, and so on, ensuring that every page has to be paged back in on each pass round the loop.

However, this scenario is really unlikely in practice. If the system is totally starved of RAM (physical and swap), the kernel will kill a program to free some memory; given the test program's behaviour, it's unlikely to be the candidate. If the system is only starved of physical RAM, the kernel will swap pages out or shrink its caches; if it swaps pages out, it's unlikely to target the test program. In both cases, the test program will then have enough RAM to fit its working set. If you do somehow contrive to starve only the test program (e.g. by increasing its working set so that it dominates the system's memory), you're more likely in practice to see it killed with a SIGSEGV than to see it continuously page its working set in and out. (This is fine; it's a thought experiment in a textbook. Learn the resulting principles; don't necessarily try to apply the example in practice.)
Can it just keep "n-1" pages i.e. rows in RAM, and use the remaining one page for all the page faults and replacements?

It could, but it would be unusual for the system to do this: how is the system supposed to know what future memory access patterns will look like? Generally speaking you'll see LRU eviction, so the loop will exhibit the pathological behaviour described above.
If you want to play around with this, fix the program so that it matches the 4 KB page size (as used on x86; I'm assuming Linux on 64-bit x86 here) and actually compiles:

int main(int argc, char **argv)
{
    int i, j;
    int data[128][1024];
    for (j = 0; j < 1024; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;
    return 0;
}
Then run it using /usr/bin/time, which will show the number of page faults:

0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1612maxresident)k
0inputs+0outputs (0major+180minor)pagefaults 0swaps
This kind of array handling will cause more problems with cache line evictions than it will with page faults in practice.
Thanks. In the "fixed" program, how do you calculate "(0major+180minor)pagefaults"? – Tim, 8 hours ago

"Then run it using /usr/bin/time, which will show the number of page faults" – I don't calculate it; time tells me how many page faults really occurred. There are typically around 64 involved in loading the program and libraries, for a small program using only the C library, and then the rest come from the loop (so not quite 128, but still in the right ballpark). – Stephen Kitt, 8 hours ago

In what sense does it fix the problem of the original program, and why does it? – Tim, 7 hours ago

The original code snippet doesn't compile. My version matches the underlying page size, so it's easier to correlate real page faults. – Stephen Kitt, 7 hours ago

Thanks. Why will "this kind of array handling cause more problems with cache line evictions than it will with page faults in practice"? Generally speaking, what kinds of cases cause more problems with cache line evictions than with page faults, what kinds cause the opposite, and what kinds cause both comparably? – Tim, 4 hours ago
answered 9 hours ago by Stephen Kitt, edited 8 hours ago by Tim