Who set the 640K limit?

We all know that "640K should be enough for everyone". But who actually set this limit? The quote is often attributed to Bill Gates, but it doesn't seem like a decision for an operating system vendor to make. And does MS-DOS have some kind of 640K limit? Doesn't it just come from the hardware?



But perhaps Bill Gates was consulted on the matter? And if he was, by whom?



I feel we need to establish a timeline of key decisions made in the IBM PC memory architecture. When did Microsoft become involved with the IBM PC design? Was the 8086 processor already designed?



As far as I know, the 8086 has two magical addresses. The first is the start of the interrupt vector table, which is address zero. The vectors need to be mutable, so RAM must be attached to this address. Thus every 8086 system needs RAM at address zero, the beginning of the address space.



The second magical address is the reset vector, the location from which the 8086 starts execution. Since the boot firmware must be fixed in non-volatile memory, there must be ROM at that address. The top of the address space was chosen for this location, address 0xFFFF0 to be exact.
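
To make those two magical addresses concrete, here is a minimal, portable C sketch of the real-mode address arithmetic; the only facts assumed are the documented reset values (CS=FFFFh, IP=0000h) and the 256-entry vector table at zero.

    #include <stdio.h>

    /* 8086 real-mode addressing: physical = segment * 16 + offset.
     * Two locations are fixed by the CPU itself:
     *   - the interrupt vector table: 256 vectors x 4 bytes at 0x00000 (needs RAM)
     *   - the reset entry point: CS=0xFFFF, IP=0x0000 after reset (needs ROM)
     */
    static unsigned long physical(unsigned segment, unsigned offset)
    {
        return (unsigned long)segment * 16UL + offset;
    }

    int main(void)
    {
        printf("vector table: 0x%05lX..0x%05lX\n",
               physical(0x0000, 0x0000), physical(0x0000, 0x03FF));
        printf("reset vector: 0x%05lX\n", physical(0xFFFF, 0x0000)); /* 0xFFFF0 */
        return 0;
    }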



Was Microsoft involved in these two decisions, which must have been done at Intel? I find that hard to believe. Who at Intel chose these addresses? Was it Stephen P. Morse, the 8086 principal architect?



This leads to the biggest question at hand, the 640K limit. Who set it? Where does it come from? I know that EGA and VGA video cards have memory at that address, from 0xA0000 onwards. But didn't these cards come, like, a decade after the release of the 8086 and the first version of the IBM PC?



So, was there a 640K limit in the original IBM PC? Or did that come later? Was there something attached to the 0xA0000 address in the original PC? Some original video card that was used on the PC? Something else? Who designed that hardware and chose that it would use the memory at 0xA0000?



Having designed quite a few embedded computers already, back in the days when external logic was always needed for address decoding, I can kind of see how it could have happened. In my imagination it's like "Ok, I've got this RAM at zero and the BIOS EPROM at 0xF0000, so where should I place the video RAM...? Hmm, somewhere near the end, I think, so I can expand the main RAM... but not at the very end so I can expand the video RAM too... let's put it at 0xA0000, it's a nice round figure... we can change it at the next PCB revision anyway... Ok, ship it." But would it have happened like this?
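
Purely to illustrate that imagined decode logic (this is not the actual IBM PC circuit, just the kind of coarse decode a handful of TTL gates on the top address lines would give), a toy decoder for the layout I'm picturing might look like:

    #include <stdio.h>

    /* Hypothetical address decoder in the spirit of the scenario above --
     * not the real IBM PC logic, just a coarse decode on the high bits
     * of a 20-bit physical address. */
    typedef enum { SEL_RAM, SEL_VIDEO, SEL_ROM, SEL_NONE } chip_select;

    static chip_select decode(unsigned long addr)
    {
        if (addr < 0xA0000UL)  return SEL_RAM;    /* 0x00000..0x9FFFF: up to 640K RAM */
        if (addr < 0xC0000UL)  return SEL_VIDEO;  /* 0xA0000..0xBFFFF: video */
        if (addr >= 0xF0000UL) return SEL_ROM;    /* 0xF0000..0xFFFFF: BIOS EPROM */
        return SEL_NONE;                          /* left free for expansion */
    }

    int main(void)
    {
        printf("%d %d %d\n", decode(0x00400UL), decode(0xB8000UL), decode(0xFFFF0UL));
        return 0;
    }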



Was there some early consensus on breaking the contiguous memory address space at 0xA0000? Who chose it? Was it some clerk behind a typewriter, making history? Some engineers at a meeting, late for a lunch appointment? Maybe some guy with a soldering iron or a wire-wrap gun, hacking up the first prototype of... what?



We need to get to the bottom of this; the world needs to know!










ibm-pc memory-layout






asked Oct 1 at 12:57 by PkP

  • From the horse's mouth, Bill Gates: "I never said '640K should be enough for anybody!'" Another quote in there: "The IBM PC had 1 megabyte of logical address space. But 384K of this was assigned to special purposes, leaving 640K of memory available. That's where the now-infamous '640K barrier' came from."
    – RobIII
    Oct 1 at 15:21







  • @RobIII It wasn't that 384K was assigned to other purposes (there was actually another 64K minus 16 bytes if you turned on bit 20 of the address bus); by the '90s, a lot of that "upper memory" was put to use with EMM386! It was that DOS could only load an executable into a contiguous block of memory, and IBM chose to start video memory at A0000h.
    – Davislor
    Oct 1 at 18:58







  • The "640K limit" (a hole in contiguous RAM) was just one of a series of short-sighted, odd, and/or bad decisions involving the PC. Others include: a) IBM choosing the 8086 family over the 68000; b) IBM not buying MS-DOS outright instead of allowing Microsoft to co-own and co-market it; c) Intel choosing overlapping segmented memory pointers instead of a flat memory model; d) Intel thinking it would be an important feature of the 8086 to be able to assemble 8085 code, complicating and probably limiting the chip; e) Intel not realizing that 286 protected mode would benefit from a way to go back to real mode.
    – RichF
    Oct 1 at 21:47






  • It probably goes without saying, but Intel chose 20 bits for the address bus of the 8086, which constrained the address space to at most 1 megabyte. Beyond the address pins, there are also limitations in the instruction set that assume a 20-bit address space (the mechanism of the segment registers). Other constraints, such as IBM's board design, then shrank the potential RAM from there.
    – Erik Eidt
    Oct 1 at 22:41






  • @RichF: The overlapping pointers were a good design for programs that didn't need to handle individual objects over 64K. Every "linear" design I've seen on 16-bit platforms would require programs to either subdivide memory into 64K sections and ensure no object crossed a section boundary, or else add extra code for every access that could straddle a section boundary. Effective coding often required having more than two uncommitted data segments, but the 8088 design was much better than the 80286 design.
    – supercat
    Oct 2 at 18:49












2 Answers

There was a 640K limit on the original IBM PC, but it was the result of IBM’s design decisions, and nothing to do with Microsoft: it’s the largest contiguous amount of memory which can be provided without eating into reserved areas of memory. The IBM PC Technical Reference includes a system memory map (page 2-25):



[Figure: IBM PC system memory map]



which is detailed on subsequent pages: the system is supposed to provide between 16 and 64K of RAM on the motherboard, then up to 192K as expansion, with an additional 384K possible in the future (providing 640K RAM in total); then there’s a 16K reserved block, 112K for the video buffers (of which 16K at B0000 were used for MDA, 16K at B8000 for CGA in the IBM PC), followed by 192K reserved for a “memory expansion area”, then 16K reserved, and 48K for the base system ROM at F4000.
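
For reference, here is the same map flattened into a quick sketch; the start addresses are derived from the sizes just listed, so treat it as a summary of the description rather than a primary source.

    #include <stdio.h>

    /* IBM PC (1981) memory map, summarising the description above. */
    struct region { unsigned long start; unsigned size_kib; const char *use; };

    static const struct region pc_map[] = {
        { 0x00000UL,  64, "RAM on the motherboard (16-64K populated)" },
        { 0x10000UL, 192, "RAM expansion on the I/O channel" },
        { 0x40000UL, 384, "future RAM expansion (640K total)" },
        { 0xA0000UL,  16, "reserved" },
        { 0xA4000UL, 112, "video buffers (MDA at B0000, CGA at B8000)" },
        { 0xC0000UL, 192, "reserved memory expansion area" },
        { 0xF0000UL,  16, "reserved" },
        { 0xF4000UL,  48, "base system ROM" },
    };

    int main(void)
    {
        size_t i;
        for (i = 0; i < sizeof pc_map / sizeof pc_map[0]; i++)
            printf("0x%05lX  %3uK  %s\n",
                   pc_map[i].start, pc_map[i].size_kib, pc_map[i].use);
        return 0;
    }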



DOS itself isn’t limited to 640K. Any amount of RAM (within the 8086 memory model’s limitations, i.e. up to slightly over 1MiB) could be used. This was the case in some DOS-compatible computers: the Tandy 2000 and Apricot PC provided up to 768K, the DEC Rainbow 100 and Sirius Victor 9000 provided up to 896K, and the Siemens PC-D and PC-X provided up to 1020K; the original SCP systems on which 86-DOS was developed weren’t limited to 640K either. On PC-compatible systems with memory available at 640K, typically provided by a VGA adapter, drivers could be used to add the memory from 640K up to 736K to the memory pool, increasing the maximum runnable program size. (This worked fine for programs which only used colour text mode, or CGA graphics.) Additional memory available in separate areas above 640K could also be added as separate memory pools, but that didn’t help run larger programs.
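
The "slightly over 1MiB" figure and the 640K boundary are just arithmetic on the segment:offset scheme; a short sketch of the numbers:

    #include <stdio.h>

    /* Real-mode addressing: physical = segment * 16 + offset, so the
     * highest address a program can form is FFFF:FFFF. */
    int main(void)
    {
        unsigned long top = 0xFFFFUL * 16UL + 0xFFFFUL;           /* 0x10FFEF */
        printf("highest addressable byte: 0x%06lX\n", top);
        printf("bytes above 1 MiB:        %lu (64K - 16)\n", top - 0x100000UL + 1UL);
        printf("640K boundary:            0x%05lX\n", 0xA0000UL);
        return 0;
    }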



Note that the 640K quote is likely apocryphal.



As to why this limit was chosen, I don’t have a definitive answer, but there are a number of factors to consider:



  • the IBM PC wasn’t designed as a family of computers, at least not past the 8086 and 8088;

  • 640K was huge compared to micro-computer memory sizes at the time, both in terms of program requirements and in terms of cost;

  • the memory map was probably designed the way it was in order to provide a balanced set of expansion possibilities: a lot of memory, a decent amount of display buffers, and room for ROM expansion (in the IBM PC, there were no option ROMs; those appeared with the XT).





answered Oct 1 at 13:15 (edited Oct 1 at 15:12) by Stephen Kitt
  • Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM PC was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
    – Stephen Kitt
    Oct 1 at 13:45






  • Another example of an MS-DOS computer which is unencumbered by the 640K limit: the Rainbow 100B, which can address up to 896 kilobytes.
    – Wilson
    Oct 1 at 14:06






  • The Sirius offered 896 KiB as well. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (a monochrome text adapter which was in fact a terminal on a card, accessed via only two ports :)), 992 KiB with its monochrome card, and 960 KiB with the (never officially delivered) colour card.
    – Raffzahn
    Oct 1 at 14:45







  • @StephenKitt During the last year here on RC my answers have evolved into a kind of short-answer/long-answer combination style: a shorter one first that tries to hit the point asked as compactly as possible, and a longer one where my inner nerd can explain all the details, so I won't burst some day due to the built-up pressure to tell it :))
    – Raffzahn
    Oct 1 at 15:09






  • @Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
    – Stephen Kitt
    Oct 1 at 21:22

















Following up on the @StephenKitt answer:



CP/M put BIOS and BDOS code at the top of RAM, and IBM decided to copy that idea. Just like with CP/M systems, the plan was to raise the start of reserved memory from A0000 (640KB) to a higher value once newer chips like the 80286 arrived.



This would have worked if application programmers, like those at Lotus, had obeyed Microsoft's guidelines. However, they naturally wanted speed, and so wrote directly to memory addresses.



This, of course, meant that moving the reserved area would instantly break all those old programs, which people had paid a lot of hard-earned money for, and so the range A0000 to FFFFF got permanently baked into PCs.
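
As an illustration of what "writing directly to memory addresses" meant in practice (a sketch, not code from any actual product): fast screen updates hard-coded the CGA colour text buffer at physical B8000h, and once constants like that were baked into shipped binaries the video buffer could never be moved.

    #include <stdio.h>

    /* Hypothetical sketch: programs that bypassed DOS/BIOS computed screen
     * cell addresses like this and poked bytes straight into video memory.
     * Here we only print the addresses; touching real video RAM needs a
     * real-mode DOS environment. */
    #define CGA_TEXT_BASE 0xB8000UL      /* 80x25 colour text, 2 bytes per cell */

    static unsigned long cell_address(unsigned row, unsigned col)
    {
        return CGA_TEXT_BASE + (row * 80UL + col) * 2UL;
    }

    int main(void)
    {
        /* A binary with this constant baked in breaks the moment the
         * video buffer is relocated anywhere else in the address space. */
        printf("top-left cell:     0x%05lX\n", cell_address(0, 0));
        printf("bottom-right cell: 0x%05lX\n", cell_address(24, 79));
        return 0;
    }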






  • Could you expand on how that would have worked in practice, assuming well-behaved software?
    – Stephen Kitt
    Oct 1 at 21:24






  • @StephenKitt Good question. I don't think there is an interrupt vector which returns, say, the start of video memory. INT 12h returns the amount of contiguous memory, in KB, starting from address 0, but presumably if less than 640KB of RAM is installed this will be smaller and not tell you where video memory begins.
    – Artelius
    Oct 2 at 1:34







  • (With appropriate rules, it is possible to write real-mode programs which work fine in protected mode; Windows is proof of that.)
    – Stephen Kitt
    Oct 2 at 7:03






  • But even if DOS software had only called BIOS and DOS services, it wouldn’t have been easy to run it in protected mode on a 286 because of all the segment arithmetic which was commonly indulged in.
    – Stephen Kitt
    Oct 2 at 11:18






  • @RuiFRibeiro: Unfortunately, the DOS routines for text output were more than an order of magnitude (factor of 10!) slower than optimized functions that wrote screen memory directly. Users preferred programs that could draw a screen in under 1/10 second rather than taking more than a full second, and programmers can hardly be blamed for giving users the kind of performance they want.
    – supercat
    Oct 2 at 20:34










Your Answer







StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "648"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
convertImagesToLinks: false,
noModals: false,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













 

draft saved


draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f7817%2fwho-set-the-640k-limit%23new-answer', 'question_page');

);

Post as a guest






























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
54
down vote













There was a 640K limit on the original IBM PC, but it was the result of IBM’s design decisions, and nothing to do with Microsoft: it’s the largest contiguous amount of memory which can be provided without eating into reserved areas of memory. The IBM PC Technical Reference includes a system memory map (page 2-25):



IBM PC system memory map



which is detailed on subsequent pages: the system is supposed to provide between 16 and 64K of RAM on the motherboard, then up to 192K as expansion, with an additional 384K possible in the future (providing 640K RAM in total); then there’s a 16K reserved block, 112K for the video buffers (of which 16K at B0000 were used for MDA, 16K at B8000 for CGA in the IBM PC), followed by 192K reserved for a “memory expansion area”, then 16K reserved, and 48K for the base system ROM at F4000.



DOS itself isn’t limited to 640K. Any amount of RAM (within the 8086 memory model’s limitations, i.e. up to slightly over 1MiB) could be used. This was the case in some DOS-compatible computers: the Tandy 2000 and Apricot PC provided up to 768K, the DEC Rainbow 100 and Sirius Victor 9000 provided up to 896K, and the Siemens PC-D and PC-X provided up to 1020K; the original SCP systems on which 86-DOS was developed weren’t limited to 640K either. On PC-compatible systems with memory available at 640K, typically provided by a VGA adapter, drivers could be used to add the memory from 640K up to 736K to the memory pool, increasing the maximum runnable program size. (This worked fine for programs which only used colour text mode, or CGA graphics.) Additional memory available in separate areas above 640K could also be added as separate memory pools, but that didn’t help run larger programs.



Note that the 640K quote is likely apocryphal.



As to why this limit was chosen, I don’t have a definitive answer, but there are a number of factors to consider:



  • the IBM PC wasn’t designed as a family of computers, at least not past the 8086 and 8088;

  • 640K was huge compared to micro-computer memory sizes at the time, both in terms of program requirements and in terms of cost;

  • the memory map was probably designed the way it was in order to provide a balanced set of expansion possibilities: a lot of memory, a decent amount of display buffers, and room for ROM expansion (in the IBM PC, there were no option ROMs; those appeared with the XT).





share|improve this answer


















  • 3




    Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
    – Stephen Kitt
    Oct 1 at 13:45






  • 1




    Another example of an MS-DOS computer which is unencumbered by the 640K limit: The Rainbow 100B which can address up to 896 kilobytes
    – Wilson
    Oct 1 at 14:06






  • 2




    The Sirius did as well offer 896 KiB. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (monchrome text adapter, which in fact was terminal on a card and only accessed via two ports :)), 992 KiB with it's monocrome card and 960KiB with the (never official delivered) colour card.
    – Raffzahn
    Oct 1 at 14:45







  • 4




    @StephenKitt During the last year here on RC my answers did evolve into kind of a Short-Answer/Long-Answer combination style. Giving a shorter one first that tries to hit the point asked as compact as possible, and a longer where my inner nerd can explain all the details, so I wouldn't burst some day due the build up pressure to tell it :))
    – Raffzahn
    Oct 1 at 15:09






  • 3




    @Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
    – Stephen Kitt
    Oct 1 at 21:22














up vote
54
down vote













There was a 640K limit on the original IBM PC, but it was the result of IBM’s design decisions, and nothing to do with Microsoft: it’s the largest contiguous amount of memory which can be provided without eating into reserved areas of memory. The IBM PC Technical Reference includes a system memory map (page 2-25):



IBM PC system memory map



which is detailed on subsequent pages: the system is supposed to provide between 16 and 64K of RAM on the motherboard, then up to 192K as expansion, with an additional 384K possible in the future (providing 640K RAM in total); then there’s a 16K reserved block, 112K for the video buffers (of which 16K at B0000 were used for MDA, 16K at B8000 for CGA in the IBM PC), followed by 192K reserved for a “memory expansion area”, then 16K reserved, and 48K for the base system ROM at F4000.



DOS itself isn’t limited to 640K. Any amount of RAM (within the 8086 memory model’s limitations, i.e. up to slightly over 1MiB) could be used. This was the case in some DOS-compatible computers: the Tandy 2000 and Apricot PC provided up to 768K, the DEC Rainbow 100 and Sirius Victor 9000 provided up to 896K, and the Siemens PC-D and PC-X provided up to 1020K; the original SCP systems on which 86-DOS was developed weren’t limited to 640K either. On PC-compatible systems with memory available at 640K, typically provided by a VGA adapter, drivers could be used to add the memory from 640K up to 736K to the memory pool, increasing the maximum runnable program size. (This worked fine for programs which only used colour text mode, or CGA graphics.) Additional memory available in separate areas above 640K could also be added as separate memory pools, but that didn’t help run larger programs.



Note that the 640K quote is likely apocryphal.



As to why this limit was chosen, I don’t have a definitive answer, but there are a number of factors to consider:



  • the IBM PC wasn’t designed as a family of computers, at least not past the 8086 and 8088;

  • 640K was huge compared to micro-computer memory sizes at the time, both in terms of program requirements and in terms of cost;

  • the memory map was probably designed the way it was in order to provide a balanced set of expansion possibilities: a lot of memory, a decent amount of display buffers, and room for ROM expansion (in the IBM PC, there were no option ROMs; those appeared with the XT).





share|improve this answer


















  • 3




    Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
    – Stephen Kitt
    Oct 1 at 13:45






  • 1




    Another example of an MS-DOS computer which is unencumbered by the 640K limit: The Rainbow 100B which can address up to 896 kilobytes
    – Wilson
    Oct 1 at 14:06






  • 2




    The Sirius did as well offer 896 KiB. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (monchrome text adapter, which in fact was terminal on a card and only accessed via two ports :)), 992 KiB with it's monocrome card and 960KiB with the (never official delivered) colour card.
    – Raffzahn
    Oct 1 at 14:45







  • 4




    @StephenKitt During the last year here on RC my answers did evolve into kind of a Short-Answer/Long-Answer combination style. Giving a shorter one first that tries to hit the point asked as compact as possible, and a longer where my inner nerd can explain all the details, so I wouldn't burst some day due the build up pressure to tell it :))
    – Raffzahn
    Oct 1 at 15:09






  • 3




    @Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
    – Stephen Kitt
    Oct 1 at 21:22












up vote
54
down vote










up vote
54
down vote









There was a 640K limit on the original IBM PC, but it was the result of IBM’s design decisions, and nothing to do with Microsoft: it’s the largest contiguous amount of memory which can be provided without eating into reserved areas of memory. The IBM PC Technical Reference includes a system memory map (page 2-25):



IBM PC system memory map



which is detailed on subsequent pages: the system is supposed to provide between 16 and 64K of RAM on the motherboard, then up to 192K as expansion, with an additional 384K possible in the future (providing 640K RAM in total); then there’s a 16K reserved block, 112K for the video buffers (of which 16K at B0000 were used for MDA, 16K at B8000 for CGA in the IBM PC), followed by 192K reserved for a “memory expansion area”, then 16K reserved, and 48K for the base system ROM at F4000.



DOS itself isn’t limited to 640K. Any amount of RAM (within the 8086 memory model’s limitations, i.e. up to slightly over 1MiB) could be used. This was the case in some DOS-compatible computers: the Tandy 2000 and Apricot PC provided up to 768K, the DEC Rainbow 100 and Sirius Victor 9000 provided up to 896K, and the Siemens PC-D and PC-X provided up to 1020K; the original SCP systems on which 86-DOS was developed weren’t limited to 640K either. On PC-compatible systems with memory available at 640K, typically provided by a VGA adapter, drivers could be used to add the memory from 640K up to 736K to the memory pool, increasing the maximum runnable program size. (This worked fine for programs which only used colour text mode, or CGA graphics.) Additional memory available in separate areas above 640K could also be added as separate memory pools, but that didn’t help run larger programs.



Note that the 640K quote is likely apocryphal.



As to why this limit was chosen, I don’t have a definitive answer, but there are a number of factors to consider:



  • the IBM PC wasn’t designed as a family of computers, at least not past the 8086 and 8088;

  • 640K was huge compared to micro-computer memory sizes at the time, both in terms of program requirements and in terms of cost;

  • the memory map was probably designed the way it was in order to provide a balanced set of expansion possibilities: a lot of memory, a decent amount of display buffers, and room for ROM expansion (in the IBM PC, there were no option ROMs; those appeared with the XT).





share|improve this answer














There was a 640K limit on the original IBM PC, but it was the result of IBM’s design decisions, and nothing to do with Microsoft: it’s the largest contiguous amount of memory which can be provided without eating into reserved areas of memory. The IBM PC Technical Reference includes a system memory map (page 2-25):



IBM PC system memory map



which is detailed on subsequent pages: the system is supposed to provide between 16 and 64K of RAM on the motherboard, then up to 192K as expansion, with an additional 384K possible in the future (providing 640K RAM in total); then there’s a 16K reserved block, 112K for the video buffers (of which 16K at B0000 were used for MDA, 16K at B8000 for CGA in the IBM PC), followed by 192K reserved for a “memory expansion area”, then 16K reserved, and 48K for the base system ROM at F4000.



DOS itself isn’t limited to 640K. Any amount of RAM (within the 8086 memory model’s limitations, i.e. up to slightly over 1MiB) could be used. This was the case in some DOS-compatible computers: the Tandy 2000 and Apricot PC provided up to 768K, the DEC Rainbow 100 and Sirius Victor 9000 provided up to 896K, and the Siemens PC-D and PC-X provided up to 1020K; the original SCP systems on which 86-DOS was developed weren’t limited to 640K either. On PC-compatible systems with memory available at 640K, typically provided by a VGA adapter, drivers could be used to add the memory from 640K up to 736K to the memory pool, increasing the maximum runnable program size. (This worked fine for programs which only used colour text mode, or CGA graphics.) Additional memory available in separate areas above 640K could also be added as separate memory pools, but that didn’t help run larger programs.



Note that the 640K quote is likely apocryphal.



As to why this limit was chosen, I don’t have a definitive answer, but there are a number of factors to consider:



  • the IBM PC wasn’t designed as a family of computers, at least not past the 8086 and 8088;

  • 640K was huge compared to micro-computer memory sizes at the time, both in terms of program requirements and in terms of cost;

  • the memory map was probably designed the way it was in order to provide a balanced set of expansion possibilities: a lot of memory, a decent amount of display buffers, and room for ROM expansion (in the IBM PC, there were no option ROMs; those appeared with the XT).






share|improve this answer














share|improve this answer



share|improve this answer








edited Oct 1 at 15:12

























answered Oct 1 at 13:15









Stephen Kitt

31.2k4126149




31.2k4126149







  • 3




    Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
    – Stephen Kitt
    Oct 1 at 13:45






  • 1




    Another example of an MS-DOS computer which is unencumbered by the 640K limit: The Rainbow 100B which can address up to 896 kilobytes
    – Wilson
    Oct 1 at 14:06






  • 2




    The Sirius did as well offer 896 KiB. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (monchrome text adapter, which in fact was terminal on a card and only accessed via two ports :)), 992 KiB with it's monocrome card and 960KiB with the (never official delivered) colour card.
    – Raffzahn
    Oct 1 at 14:45







  • 4




    @StephenKitt During the last year here on RC my answers did evolve into kind of a Short-Answer/Long-Answer combination style. Giving a shorter one first that tries to hit the point asked as compact as possible, and a longer where my inner nerd can explain all the details, so I wouldn't burst some day due the build up pressure to tell it :))
    – Raffzahn
    Oct 1 at 15:09






  • 3




    @Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
    – Stephen Kitt
    Oct 1 at 21:22












  • 3




    Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
    – Stephen Kitt
    Oct 1 at 13:45






  • 1




    Another example of an MS-DOS computer which is unencumbered by the 640K limit: The Rainbow 100B which can address up to 896 kilobytes
    – Wilson
    Oct 1 at 14:06






  • 2




    The Sirius did as well offer 896 KiB. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (monchrome text adapter, which in fact was terminal on a card and only accessed via two ports :)), 992 KiB with it's monocrome card and 960KiB with the (never official delivered) colour card.
    – Raffzahn
    Oct 1 at 14:45







  • 4




    @StephenKitt During the last year here on RC my answers did evolve into kind of a Short-Answer/Long-Answer combination style. Giving a shorter one first that tries to hit the point asked as compact as possible, and a longer where my inner nerd can explain all the details, so I wouldn't burst some day due the build up pressure to tell it :))
    – Raffzahn
    Oct 1 at 15:09






  • 3




    @Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
    – Stephen Kitt
    Oct 1 at 21:22







3




3




Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
– Stephen Kitt
Oct 1 at 13:45




Note the right-hand side of the diagram: the 216K block is within the ROM address space, so it’s intended as room for additional ROM, not RAM. There was no graphics adapter at A0000 when the IBM was designed; I suspect the designers thought that 128K would be a nice, safe amount of address space to set aside for video. (If one imagined future graphics with 4 bits per pixel at MDA resolutions, 128K would provide just enough room.)
– Stephen Kitt
Oct 1 at 13:45




1




1




Another example of an MS-DOS computer which is unencumbered by the 640K limit: The Rainbow 100B which can address up to 896 kilobytes
– Wilson
Oct 1 at 14:06




Another example of an MS-DOS computer which is unencumbered by the 640K limit: The Rainbow 100B which can address up to 896 kilobytes
– Wilson
Oct 1 at 14:06




2




2




The Sirius did as well offer 896 KiB. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (monchrome text adapter, which in fact was terminal on a card and only accessed via two ports :)), 992 KiB with it's monocrome card and 960KiB with the (never official delivered) colour card.
– Raffzahn
Oct 1 at 14:45





The Sirius did as well offer 896 KiB. Not to mention the SIEMENS PC-D with 1020 KiB when using the 'Alpha-Card' (monchrome text adapter, which in fact was terminal on a card and only accessed via two ports :)), 992 KiB with it's monocrome card and 960KiB with the (never official delivered) colour card.
– Raffzahn
Oct 1 at 14:45





4




4




@StephenKitt During the last year here on RC my answers did evolve into kind of a Short-Answer/Long-Answer combination style. Giving a shorter one first that tries to hit the point asked as compact as possible, and a longer where my inner nerd can explain all the details, so I wouldn't burst some day due the build up pressure to tell it :))
– Raffzahn
Oct 1 at 15:09




@StephenKitt During the last year here on RC my answers did evolve into kind of a Short-Answer/Long-Answer combination style. Giving a shorter one first that tries to hit the point asked as compact as possible, and a longer where my inner nerd can explain all the details, so I wouldn't burst some day due the build up pressure to tell it :))
– Raffzahn
Oct 1 at 15:09




3




3




@Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
– Stephen Kitt
Oct 1 at 21:22




@Rui 64K blocks are nice to reason about, but I’m not sure it was really the main factor in designing the memory map — for one, the base model had 16K RAM, and there was only 40K of ROM (which included BASIC); even the XT and AT only had 64K ROM (including BASIC). The XT BIOS listing shows that the option ROM scan checks for a signature every 2K, from C8000 to F4000 included, i.e. over a range which doesn’t map to 64K blocks.
– Stephen Kitt
Oct 1 at 21:22










up vote
13
down vote













Following up on the @StephenKitt answer:



CP/M put BIOS and BDOS code at the top of RAM, and IBM decided to copy that idea. Just like with CP/M systems, the plan was to raise the start of reserved memory from A0000 (640KB) to a higher value once newer chips like the 80286 arrived.



This would have worked if end-user programmers like at Lotus obeyed MSFT's guidelines. However, they naturally wanted speed, and so wrote directly to memory addresses.



This, of course, meant that all old programs -- which people paid a lot of hard-earned money for -- would instantly break, and so the range A0000 to FFFFF got permanently baked into PCs.






share|improve this answer
















  • 1




    Could you expand on how that would have worked in practice, assuming well-behaved software?
    – Stephen Kitt
    Oct 1 at 21:24






  • 1




    @StephenKitt Good question. I don't think there is an interrupt vector which returns, say, the start of video memory. INT 12h returns the amount of contiguous memory, in KB, starting from address 0, but presumably if less than 640KB of RAM is installed this will be smaller and not tell you where video memory begins.
    – Artelius
    Oct 2 at 1:34
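    For illustration, a small sketch of the INT 12h query Artelius mentions, assuming a Borland-style 16-bit DOS compiler (int86 and union REGS come from <dos.h>). The BIOS returns the amount of contiguous conventional memory in KB in AX, which tells a program how much RAM is installed but not where the video buffer begins:

        /* Sketch: query conventional memory size via BIOS INT 12h. */
        #include <dos.h>
        #include <stdio.h>

        int main(void)
        {
            union REGS r;
            int86(0x12, &r, &r);                       /* AX := memory size in KB */
            printf("Contiguous conventional memory: %u KB\n", r.x.ax);
            return 0;
        }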







  • 1




    (With appropriate rules, it is possible to write real-mode programs which work fine in protected mode; Windows is proof of that.)
    – Stephen Kitt
    Oct 2 at 7:03






  • 1




    But even if DOS software had only called BIOS and DOS services, it wouldn’t have been easy to run it in protected mode on a 286 because of all the segment arithmetic which was commonly indulged in.
    – Stephen Kitt
    Oct 2 at 11:18






  • 3




    @RuiFRibeiro: Unfortunately, the DOS routines for text output were more than an order of magnitude (factor of 10!) slower than optimized functions that wrote screen memory directly. Users preferred programs that could draw a screen in under 1/10 second rather than taking more than a full second, and programmers can hardly be blamed for giving users the kind of performance they want.
    – supercat
    Oct 2 at 20:34
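    For contrast with the direct-write sketch above, this is roughly what the slower, "well-behaved" route supercat describes looked like: printing through DOS INT 21h, function 09h. Again a hedged sketch assuming a Borland-style <dos.h> (intdos, FP_OFF) and the small memory model; the service expects a '$'-terminated string.

        /* Sketch: text output via DOS INT 21h, AH=09h (print '$'-terminated string). */
        #include <dos.h>

        static const char msg[] = "Hello via the DOS print-string service\r\n$";

        int main(void)
        {
            union REGS r;
            r.h.ah = 0x09;          /* DOS function: print string           */
            r.x.dx = FP_OFF(msg);   /* DS:DX -> string (small memory model) */
            intdos(&r, &r);
            return 0;
        }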













