Why do we still grow the stack backwards?

46 votes
When compiling C code and looking at the assembly, the stack always grows backwards, like this:


_main:
pushq %rbp
movl $5, -4(%rbp)
popq %rbp
ret


Does -4(%rbp) mean the base pointer or the stack pointer is actually moving down through memory addresses instead of up? Why is that?


I changed $5, -4(%rbp) to $5, +4(%rbp), compiled and ran the code, and there were no errors. So why do we still have to grow the memory stack backwards?










  • Note that -4(%rbp) doesn't move the base pointer at all, and +4(%rbp) couldn't possibly have worked. – Margaret Bloom, Jan 27 at 20:24

  • "why do we have to still go backwards" – what do you think would be the advantage of going forwards? Ultimately, it doesn't matter; you just have to choose one. – Bergi, Jan 27 at 20:57

  • "why do we grow the stack backwards?" – because if we didn't, someone else would ask why malloc grows the heap backwards. – slebetman, Jan 28 at 3:24

  • @MargaretBloom: Apparently on the OP's platform, the CRT startup code doesn't care if main clobbers its RBP. That's certainly possible. (And yes, writing 4(%rbp) would step on the saved RBP value.) Actually, this main never does mov %rsp, %rbp, so the memory access is relative to the caller's RBP, if that's what the OP actually tested! If this was copied from compiler output, some instructions were left out. – Peter Cordes, Jan 28 at 3:44

  • It seems to me that "backwards" or "forwards" (or "down" and "up") depends on your point of view. If you diagrammed memory as a column with low addresses on top, then growing the stack by decrementing a stack pointer would be analogous to a physical stack. – jamesdlin, Jan 29 at 8:00















Tags: c memory assembly

asked Jan 27 at 15:44 by alex
edited Jan 31 at 23:44 by Peter Mortensen
3 Answers

86 votes















Does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that?




Yes, the push instructions decrement the stack pointer and write to the stack, while the pop instructions do the reverse: they read from the stack and increment the stack pointer.



This is largely historical: on machines with limited memory, the stack was placed high and grown downwards, while the heap was placed low and grown upwards.  That leaves only one gap of "free memory", between the heap and the stack, and this gap is shared: either one can grow into it as needed.  The program therefore only runs out of memory when the stack and heap collide, leaving no free memory.



If the stack and heap both grew in the same direction, there would be two gaps, and the stack could not really grow into the heap's gap (nor vice versa).



Originally, processors had no dedicated stack handling instructions.  However, as stack support was added to the hardware, it took on this pattern of growing downward, and processors still follow this pattern today.



One could argue that on a 64-bit machine there is sufficient address space to allow multiple gaps — and as evidence, multiple gaps are necessarily the case when a process has multiple threads.  Though this is not sufficient motivation to change things around, since with multiple gap systems, the growth direction is arguably arbitrary, so tradition/compatibility tips the scale.




You'd have to change the CPU stack handling instructions in order to change the direction of the stack, or else give up on use of the dedicated pushing & popping instructions (e.g. push, pop, call, ret, others).



Note that the MIPS instruction set architecture does not have dedicated push & pop, so it is practical to grow the stack in either direction — you still might want a one-gap memory layout for a single-threaded process, but could grow the stack upwards and the heap downwards.  If you did that, however, some C varargs code might require adjustment in its source or in under-the-hood parameter passing.



(In fact, since there is no dedicated stack handling on MIPS, we could use pre- or post-increment, or pre- or post-decrement, for pushing onto the stack, as long as we used the exact reverse for popping off the stack, and assuming that the operating system respects the chosen stack-usage model.  Indeed, in some embedded systems and some educational systems, the MIPS stack grows upwards.)






answered Jan 27 at 16:01 by Erik Eidt
edited Jan 29 at 20:21 by Ghost4Man
  • It's not just push and pop on most architectures, but also the far more important interrupt handling, call, ret, and whatever else has baked-in interaction with the stack. – Deduplicator, Jan 27 at 16:46

  • ARM can have all four stack flavours. – Margaret Bloom, Jan 27 at 20:28

  • For what it's worth, I don't think "the growth direction is arbitrary" in the sense that either choice is equally good. Growing down has the property that overflowing the end of a buffer clobbers earlier stack frames, including saved return addresses. Growing up has the property that overflowing the end of a buffer clobbers only storage in the same or later call frame (if the buffer is not in the latest frame, there may be later ones), and possibly even only unused space (all assuming a guard page after the stack). So from a safety perspective, growing up seems highly preferable. – R.., Jan 28 at 2:30

  • @R..: growing up doesn't eliminate buffer-overrun exploits, because vulnerable functions are usually not leaf functions: they call other functions, placing a return address above the buffer. Leaf functions that get a pointer from their caller could become vulnerable to overwriting their own return address. E.g. if a function allocates a buffer on the stack and passes it to gets(), or does a strcpy() that doesn't get inlined, then the return in those library functions will use the overwritten return address. Currently, with downward-growing stacks, it's when their caller returns. – Peter Cordes, Jan 28 at 6:00

  • @PeterCordes: Indeed, my comment noted that stack frames at the same level as, or more recent than, the overflowed buffer are still potentially clobberable, but that's a lot less. In the case where the clobbering function is a leaf function directly called by the function whose buffer it is (e.g. strcpy), on an arch where the return address is kept in a register unless it needs to be spilled, there is no access to clobber the return address. – R.., Jan 28 at 14:40

7 votes














On your specific system, the stack starts at a high memory address and "grows" downwards toward low memory addresses. (The symmetric layout, growing from low to high, also exists.)



The fact that you changed -4 to +4 and the program still ran does not mean it is correct. The memory layout of a running program is more complex and depends on many other factors, which may explain why this extremely simple program didn't instantly crash.






    1 vote














    The stack pointer points at the boundary between allocated and unallocated stack memory. Growing it downwards means that it points at the start of the first structure in allocated stack space, with other allocated items following at larger addresses. Having pointers point to the start of allocated structures is much more common than the other way round.



    Now on many systems these days, there is a separate register for stack frames, which can be somewhat reliably unwound in order to figure out the call chain, with local-variable storage interspersed. The way this stack-frame register is set up on some architectures means that it ends up pointing behind the local-variable storage, as opposed to the stack pointer, which sits before it. Using the stack-frame register then requires negative indexing.



    Note that stack frames and their indexing are a side aspect of compiled languages, so it is the compiler's code generator that has to deal with the "unnaturalness", rather than a poor assembly-language programmer.



    So while there were good historical reasons for choosing stacks that grow downward (and some of them persist if you program in assembly language and don't bother setting up a proper stack frame), they have become less visible.






    • "Now on many systems these days, there is a separate register for stack frames" – you are behind the times. Richer debug-information formats have largely removed the need for frame pointers nowadays. – Peter Green, Jan 29 at 0:56
    protected by gnat Jan 28 at 16:13

















    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    86















    Does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that?




    Yes, the push instructions decrement the stack pointer and write to the stack, while the pop do the reverse, read from the stack and increment the stack pointer.



    This is somewhat historical in that for machines with limited memory, the stack was placed high and grown downwards, while the heap was placed low and grown upwards.  There is only one gap of "free memory" — between the heap & stack, and this gap is shared, either one can grow into the gap as individually needed.  Thus, the program only runs out of memory when the stack and heap collide leaving no free memory. 



    If the stack and heap both grow in the same direction, then there are two gaps, and the stack cannot really grow into the heap's gap (the vice versa is also problematic).



    Originally, processors had no dedicated stack handling instructions.  However, as stack support was added to the hardware, it took on this pattern of growing downward, and processors still follow this pattern today.



    One could argue that on a 64-bit machine there is sufficient address space to allow multiple gaps — and as evidence, multiple gaps are necessarily the case when a process has multiple threads.  Though this is not sufficient motivation to change things around, since with multiple gap systems, the growth direction is arguably arbitrary, so tradition/compatibility tips the scale.




    You'd have to change the CPU stack handling instructions in order to change the direction of the stack, or else give up on use of the dedicated pushing & popping instructions (e.g. push, pop, call, ret, others).



    Note that the MIPS instruction set architecture does not have dedicated push & pop, so it is practical to grow the stack in either direction — you still might want a one-gap memory layout for a single thread process, but could grow the stack upwards and the heap downwards.  If you did that, however, some C varargs code might require adjustment in source or in under-the-hood parameter passing.



    (In fact, since there is no dedicated stack handling on MIPS, we could use pre or post increment or pre or post decrement for pushing onto the stack as long as we used the exact reverse for popping off the stack, and also assuming that the operating system respects the chosen stack usage model.  Indeed, in some embedded systems and some educational systems, the MIPS stack is grown upwards.)






    share|improve this answer




















    • 32





      It's not just push and pop on most architectures, but also the far more important interrupt-handling, call, ret, and whatever else has baked-in interaction with the stack.

      – Deduplicator
      Jan 27 at 16:46







    • 3





      ARM can have all all four stack flavours.

      – Margaret Bloom
      Jan 27 at 20:28






    • 14





      For what it's worth, I don't think "the growth direction is arbitrary" in the sense that either choice is equally good. Growing down has the property that overflowing the end of a buffer clobbers earlier stack frames, including saved return addresses. Growing up has the property that overflowing the end of a buffer clobbers only storage in the same or later (if the buffer is not in the latest, there may be later ones) call frame, and possibly even only unused space (all assuming a guard page after the stack). So from a safety perspective, growing up seems highly preferable

      – R..
      Jan 28 at 2:30







    • 6





      @R..: growing up doesn't eliminate buffer overrun exploits, because vulnerable functions are usually not leaf functions: they call other functions, placing a return address above the buffer. Leaf functions that get a pointer from their caller could become vulernable to overwriting their own return address. e.g. If a function allocates a buffer on the stack and passes it to gets(), or does a strcpy() that doesn't get inlined, then the return in those library functions will use the overwritten return address. Currently with downward-growing stacks, it's when their caller returns.

      – Peter Cordes
      Jan 28 at 6:00






    • 5





      @PeterCordes: Indeed my comment noted that same-level or more recent stack frames than the overflowed buffer are still potentially clobberable, but that's a lot less. In the case where the clobbering function is a leaf function directly called by the function whose buffer it is (e.g. strcpy), on an arch where return address is kept in a register unless it needs to be spilled, there is no access to clobber the return address.

      – R..
      Jan 28 at 14:40















    86















    Does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that?




    Yes, the push instructions decrement the stack pointer and write to the stack, while the pop do the reverse, read from the stack and increment the stack pointer.



    This is somewhat historical in that for machines with limited memory, the stack was placed high and grown downwards, while the heap was placed low and grown upwards.  There is only one gap of "free memory" — between the heap & stack, and this gap is shared, either one can grow into the gap as individually needed.  Thus, the program only runs out of memory when the stack and heap collide leaving no free memory. 



    If the stack and heap both grow in the same direction, then there are two gaps, and the stack cannot really grow into the heap's gap (the vice versa is also problematic).



    Originally, processors had no dedicated stack handling instructions.  However, as stack support was added to the hardware, it took on this pattern of growing downward, and processors still follow this pattern today.



    One could argue that on a 64-bit machine there is sufficient address space to allow multiple gaps — and as evidence, multiple gaps are necessarily the case when a process has multiple threads.  Though this is not sufficient motivation to change things around, since with multiple gap systems, the growth direction is arguably arbitrary, so tradition/compatibility tips the scale.




    You'd have to change the CPU stack handling instructions in order to change the direction of the stack, or else give up on use of the dedicated pushing & popping instructions (e.g. push, pop, call, ret, others).



    Note that the MIPS instruction set architecture does not have dedicated push & pop, so it is practical to grow the stack in either direction — you still might want a one-gap memory layout for a single thread process, but could grow the stack upwards and the heap downwards.  If you did that, however, some C varargs code might require adjustment in source or in under-the-hood parameter passing.



    (In fact, since there is no dedicated stack handling on MIPS, we could use pre or post increment or pre or post decrement for pushing onto the stack as long as we used the exact reverse for popping off the stack, and also assuming that the operating system respects the chosen stack usage model.  Indeed, in some embedded systems and some educational systems, the MIPS stack is grown upwards.)






    share|improve this answer




















    • 32





      It's not just push and pop on most architectures, but also the far more important interrupt-handling, call, ret, and whatever else has baked-in interaction with the stack.

      – Deduplicator
      Jan 27 at 16:46







    • 3





      ARM can have all all four stack flavours.

      – Margaret Bloom
      Jan 27 at 20:28






    • 14





      For what it's worth, I don't think "the growth direction is arbitrary" in the sense that either choice is equally good. Growing down has the property that overflowing the end of a buffer clobbers earlier stack frames, including saved return addresses. Growing up has the property that overflowing the end of a buffer clobbers only storage in the same or later (if the buffer is not in the latest, there may be later ones) call frame, and possibly even only unused space (all assuming a guard page after the stack). So from a safety perspective, growing up seems highly preferable

      – R..
      Jan 28 at 2:30







    • 6





      @R..: growing up doesn't eliminate buffer overrun exploits, because vulnerable functions are usually not leaf functions: they call other functions, placing a return address above the buffer. Leaf functions that get a pointer from their caller could become vulernable to overwriting their own return address. e.g. If a function allocates a buffer on the stack and passes it to gets(), or does a strcpy() that doesn't get inlined, then the return in those library functions will use the overwritten return address. Currently with downward-growing stacks, it's when their caller returns.

      – Peter Cordes
      Jan 28 at 6:00






    • 5





      @PeterCordes: Indeed my comment noted that same-level or more recent stack frames than the overflowed buffer are still potentially clobberable, but that's a lot less. In the case where the clobbering function is a leaf function directly called by the function whose buffer it is (e.g. strcpy), on an arch where return address is kept in a register unless it needs to be spilled, there is no access to clobber the return address.

      – R..
      Jan 28 at 14:40













    86












    86








    86








    Does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that?




    Yes, the push instructions decrement the stack pointer and write to the stack, while the pop do the reverse, read from the stack and increment the stack pointer.



    This is somewhat historical in that for machines with limited memory, the stack was placed high and grown downwards, while the heap was placed low and grown upwards.  There is only one gap of "free memory" — between the heap & stack, and this gap is shared, either one can grow into the gap as individually needed.  Thus, the program only runs out of memory when the stack and heap collide leaving no free memory. 



    If the stack and heap both grow in the same direction, then there are two gaps, and the stack cannot really grow into the heap's gap (the vice versa is also problematic).



    Originally, processors had no dedicated stack handling instructions.  However, as stack support was added to the hardware, it took on this pattern of growing downward, and processors still follow this pattern today.



    One could argue that on a 64-bit machine there is sufficient address space to allow multiple gaps — and as evidence, multiple gaps are necessarily the case when a process has multiple threads.  Though this is not sufficient motivation to change things around, since with multiple gap systems, the growth direction is arguably arbitrary, so tradition/compatibility tips the scale.




    You'd have to change the CPU stack handling instructions in order to change the direction of the stack, or else give up on use of the dedicated pushing & popping instructions (e.g. push, pop, call, ret, others).



    Note that the MIPS instruction set architecture does not have dedicated push & pop, so it is practical to grow the stack in either direction — you still might want a one-gap memory layout for a single thread process, but could grow the stack upwards and the heap downwards.  If you did that, however, some C varargs code might require adjustment in source or in under-the-hood parameter passing.



    (In fact, since there is no dedicated stack handling on MIPS, we could use pre or post increment or pre or post decrement for pushing onto the stack as long as we used the exact reverse for popping off the stack, and also assuming that the operating system respects the chosen stack usage model.  Indeed, in some embedded systems and some educational systems, the MIPS stack is grown upwards.)






    share|improve this answer
















    Does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that?




    Yes, the push instructions decrement the stack pointer and write to the stack, while the pop do the reverse, read from the stack and increment the stack pointer.



    This is somewhat historical in that for machines with limited memory, the stack was placed high and grown downwards, while the heap was placed low and grown upwards.  There is only one gap of "free memory" — between the heap & stack, and this gap is shared, either one can grow into the gap as individually needed.  Thus, the program only runs out of memory when the stack and heap collide leaving no free memory. 



    If the stack and heap both grow in the same direction, then there are two gaps, and the stack cannot really grow into the heap's gap (the vice versa is also problematic).



    Originally, processors had no dedicated stack handling instructions.  However, as stack support was added to the hardware, it took on this pattern of growing downward, and processors still follow this pattern today.



    One could argue that on a 64-bit machine there is sufficient address space to allow multiple gaps — and as evidence, multiple gaps are necessarily the case when a process has multiple threads.  Though this is not sufficient motivation to change things around, since with multiple gap systems, the growth direction is arguably arbitrary, so tradition/compatibility tips the scale.




    You'd have to change the CPU stack handling instructions in order to change the direction of the stack, or else give up on use of the dedicated pushing & popping instructions (e.g. push, pop, call, ret, others).



    Note that the MIPS instruction set architecture does not have dedicated push & pop, so it is practical to grow the stack in either direction — you still might want a one-gap memory layout for a single thread process, but could grow the stack upwards and the heap downwards.  If you did that, however, some C varargs code might require adjustment in source or in under-the-hood parameter passing.



    (In fact, since there is no dedicated stack handling on MIPS, we could use pre or post increment or pre or post decrement for pushing onto the stack as long as we used the exact reverse for popping off the stack, and also assuming that the operating system respects the chosen stack usage model.  Indeed, in some embedded systems and some educational systems, the MIPS stack is grown upwards.)
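    That "exact reverse" requirement can be sketched the same way: a hypothetical upward-growing stack where pushing post-increments and popping pre-decrements (again a toy model in C, not MIPS code):

```c
#include <assert.h>
#include <stdint.h>

#define USTACK_WORDS 16

static uint64_t umem[USTACK_WORDS];
static uint64_t *usp = umem;  /* start at the low end */

/* push: store, then post-increment -- stack grows toward high addresses */
static void upush(uint64_t v) {
    *usp++ = v;
}

/* pop: pre-decrement, then load -- the exact mirror of the push */
static uint64_t upop(void) {
    return *--usp;
}
```

    Any of the four combinations (pre/post increment/decrement) works, as long as pop undoes precisely what push did and everything on the system agrees on the choice.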







    edited Jan 29 at 20:21 by Ghost4Man

    answered Jan 27 at 16:01 by Erik Eidt







    • It's not just push and pop on most architectures, but also the far more important interrupt-handling, call, ret, and whatever else has baked-in interaction with the stack.
      – Deduplicator, Jan 27 at 16:46

    • ARM can have all four stack flavours.
      – Margaret Bloom, Jan 27 at 20:28

    • For what it's worth, I don't think "the growth direction is arbitrary" in the sense that either choice is equally good. Growing down has the property that overflowing the end of a buffer clobbers earlier stack frames, including saved return addresses. Growing up has the property that overflowing the end of a buffer clobbers only storage in the same or later call frame (if the buffer is not in the latest, there may be later ones), and possibly even only unused space (all assuming a guard page after the stack). So from a safety perspective, growing up seems highly preferable.
      – R.., Jan 28 at 2:30

    • @R..: growing up doesn't eliminate buffer overrun exploits, because vulnerable functions are usually not leaf functions: they call other functions, placing a return address above the buffer. Leaf functions that get a pointer from their caller could become vulnerable to overwriting their own return address. E.g. if a function allocates a buffer on the stack and passes it to gets(), or does a strcpy() that doesn't get inlined, then the ret in those library functions will use the overwritten return address. Currently, with downward-growing stacks, it's when their caller returns.
      – Peter Cordes, Jan 28 at 6:00

    • @PeterCordes: Indeed, my comment noted that same-level or more recent stack frames than the overflowed buffer are still potentially clobberable, but that's a lot less. In the case where the clobbering function is a leaf function directly called by the function whose buffer it is (e.g. strcpy), on an arch where the return address is kept in a register unless it needs to be spilled, there is no access to clobber the return address.
      – R.., Jan 28 at 14:40







































    On your specific system, the stack starts from a high memory address and "grows" downwards to low memory addresses. (The symmetric case, from low to high, also exists.)



    And just because you changed -4 to +4 and it ran doesn't mean that it's correct. The memory layout of a running program is more complex and depends on many other factors, which likely contributed to the fact that this extremely simple program didn't instantly crash.






    answered Jan 27 at 15:58 by nadir



































            The stack pointer points at the boundary between allocated and unallocated stack memory. Growing it downwards means that it points at the start of the first structure in allocated stack space, with other allocated items following at larger addresses. Having pointers point to the start of allocated structures is much more common than the other way round.
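    As a rough illustration (a toy model, not real ABI code): after growing a downward stack by one frame, the stack pointer is the lowest address of that frame, so every slot in the frame sits at a non-negative offset from it:

```c
#include <assert.h>
#include <stdint.h>

#define FRAME_POOL_WORDS 32

static uint64_t fmem[FRAME_POOL_WORDS];
static uint64_t *fsp = fmem + FRAME_POOL_WORDS;

/* Grow the stack downward by `words` slots and return the new stack
   pointer, which is also the base (lowest) address of the new frame. */
static uint64_t *alloc_frame(int words) {
    fsp -= words;
    return fsp;
}
```

    The returned pointer addresses the start of the allocation, and the frame's contents lie at frame[0] through frame[words - 1], just like any other structure pointer.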



            Now on many systems these days, there is a separate register for stack frames, which can be somewhat reliably unwound in order to figure out the call chain, with local variable storage interspersed. The way this stack frame register is set up on some architectures means that it ends up pointing behind the local variable storage, as opposed to the stack pointer before it. So using this stack frame register then requires negative indexing.



            Note that stack frames and their indexing are a side aspect of compiled languages, so it is the compiler's code generator that has to deal with the "unnaturalness" rather than a poor assembly language programmer.



            So while there were good historical reasons for choosing stacks to grow downward (and some of them are retained if you program in assembly language and don't bother setting up a proper stack frame), they have become less visible.






            • "Now on many systems these days, there is a separate register for stack frames": you are behind the times. Richer debug information formats have largely removed the need for frame pointers nowadays.
              – Peter Green, Jan 29 at 0:56















            answered Jan 28 at 13:49 by user327022














            protected by gnat Jan 28 at 16:13


