What is the benefit of providing each process with an address space?
I understand that the most basic benefit is a layer of abstraction: the process does not need to worry about how its virtual memory is mapped onto physical memory, since that is left to the OS. But apart from this, what other pros and cons can we expect from giving each process its own address space?
asked Mar 14 at 18:42
oldselflearner1959
1 Answer
Each process getting its own address space follows the model of a single program running alone on a machine, as was done in the early years of computing. On modern time-sharing operating systems, processes still see the world as if the CPU and memory were provided solely for them. (This is simplifying a bit; processes can of course be aware of other processes via IPC mechanisms.)
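To make the "each process thinks it runs alone" point concrete, here is a minimal C sketch (my illustration, not part of the original answer): after fork(), parent and child print the same virtual address for a local variable, yet the child's write is invisible to the parent, because each process resolves that address in its own private address space.

    /* Minimal sketch: the same virtual address refers to different physical
     * memory in parent and child, so modifying x in the child does not
     * affect the parent's copy. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int x = 1;
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {                /* child */
            x = 42;                    /* changes only the child's copy */
            printf("child:  &x = %p, x = %d\n", (void *)&x, x);
        } else {                       /* parent */
            wait(NULL);                /* let the child print first */
            printf("parent: &x = %p, x = %d\n", (void *)&x, x);
        }
        return EXIT_SUCCESS;
    }

Typical output shows an identical &x on both lines, with x = 42 in the child and x = 1 in the parent.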
Now, let's look at the alternative: an address space shared by all processes running on the machine. Because memory addresses are already in use by running processes when a new program is started, the program's memory segments would have to be relocated as it is loaded into memory. The memory layout of a program would probably be different every time, and the program would have to be relinked "on the fly" before it could be started.
A shared memory space could be implemented by scattering the pages belonging to different processes all over the address space. This would require an elaborate protection scheme implemented in hardware, where each memory area (page or segment) is tagged with information about which process owns it. This approach would also fragment memory, and could mean that a program cannot be loaded at all because it needs a large range of contiguous addresses and no large enough free range is available.
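As a purely illustrative sketch (again mine, not the answer's), the protection check in such a single shared address space might conceptually look like this: every page carries an owner tag, and an access is allowed only if the tag matches the accessing process. Real MMUs perform the equivalent check in hardware, per page-table entry.

    /* Hypothetical owner-tagged page table for a single shared address space.
     * Illustrative only: real hardware performs this check on every access. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                 /* assume 4 KiB pages */
    #define NUM_PAGES  (1u << 16)         /* arbitrary size for the sketch */

    struct page_info {
        int  owner_pid;                   /* process owning this page */
        bool present;                     /* is the page mapped at all? */
    };

    static struct page_info pages[NUM_PAGES];

    /* Allow the access only if the page is mapped and owned by the caller. */
    static bool access_allowed(int pid, uintptr_t addr)
    {
        size_t page = addr >> PAGE_SHIFT;
        return page < NUM_PAGES
            && pages[page].present
            && pages[page].owner_pid == pid;
    }

    int main(void)
    {
        pages[3].present = true;          /* pretend page 3 belongs to pid 7 */
        pages[3].owner_pid = 7;

        uintptr_t addr = (uintptr_t)3 << PAGE_SHIFT;
        printf("pid 7 -> %d, pid 8 -> %d\n",
               access_allowed(7, addr), access_allowed(8, addr));
        return 0;
    }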
The alternative would be to reserve a contiguous memory range for each process. This would make the protection scheme a bit simpler (just a low and a high limit per process), but it would be wasteful of the address space and would also suffer from fragmentation.
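For comparison, a sketch of the "just low and high limits" idea (my illustration): with one contiguous region per process, the protection check reduces to two comparisons, which is essentially how classic base-and-limit hardware worked.

    /* Hypothetical base-and-limit check for contiguous per-process regions. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct region {
        uintptr_t base;     /* lowest address the process may use */
        uintptr_t limit;    /* first address past its region */
    };

    static bool in_region(const struct region *r, uintptr_t addr)
    {
        return addr >= r->base && addr < r->limit;
    }

    int main(void)
    {
        struct region proc = { .base = 0x100000, .limit = 0x200000 };

        printf("0x150000 allowed: %d\n", in_region(&proc, 0x150000)); /* 1 */
        printf("0x250000 allowed: %d\n", in_region(&proc, 0x250000)); /* 0 */
        return 0;
    }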
Somewhat related to this are shared libraries, which contain code and data used by many unrelated processes. The pages of a shared library should preferably be loaded into memory unmodified (i.e. no per-process patching of absolute addresses), otherwise the physical memory frames cannot be shared between processes. On the other hand, a shared library must typically be loaded at different virtual addresses in different processes, or else assigning a fixed address to every library becomes too difficult to manage. Modern shared libraries use position-independent code and access data through indirection, so they can be mapped mostly unmodified and still run at varying virtual addresses. The first-generation Linux "a.out" shared libraries were actually simpler and were loaded at fixed locations in the address space. This required a central registry of virtual addresses reserved for each known shared library, with some extra room reserved for future growth.
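To see the "loaded at different virtual addresses" behaviour for yourself, the following small C program (my sketch, not from the answer) asks the dynamic linker via dladdr() where the shared object containing printf is mapped. On a system with position-independent libraries and address-space layout randomization, running it twice typically prints different base addresses, even though the library's code pages are shared in physical memory. Build with something like cc libbase.c -o libbase (the file name is arbitrary; older glibc also needs -ldl).

    /* Print which shared object provides printf and where it is mapped
     * in this process's address space. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        Dl_info info;

        /* dladdr() looks up the shared object containing a given address. */
        if (dladdr((void *)printf, &info) == 0) {
            fprintf(stderr, "dladdr failed (statically linked binary?)\n");
            return 1;
        }

        printf("printf comes from %s, loaded at base %p\n",
               info.dli_fname, info.dli_fbase);
        return 0;
    }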
answered Mar 15 at 18:31
Johan Myréen