Were later MS-DOS versions still implemented in x86 assembly?
Recently, Microsoft published the source code of old MS-DOS versions on GitHub.
What seems odd to me is the use of x86 assembly language for everything. Assembly language would not be my first choice for implementing an operating system: by the time MS-DOS was created, the C programming language had already been invented at Bell Labs, offering a good compromise between low-level and high-level programming.
Was this assembly-language approach also used in the newest versions of MS-DOS in the 1990s?
history assembly operating-system
The MS-DOS ABI (if we can call it that) is an assembly register interface. It would be quite inconvenient to implement that in C.
– tofro (yesterday)
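To make that register interface concrete, here is a minimal hypothetical sketch (NASM syntax, not taken from the DOS source) of a complete .COM program: the service number goes in AH, arguments go in other registers, and control passes via INT 21h - there is no stack-based calling convention for a C compiler to target.

; NASM, 16-bit .COM sketch (hypothetical example, not from the DOS source):
; the DOS "ABI" is a service number in AH, arguments in registers, then INT 21h.

        org     100h
start:
        mov     dx, msg         ; DS:DX -> '$'-terminated string
        mov     ah, 09h         ; service 09h: write string to standard output
        int     21h

        mov     ax, 4C00h       ; service 4Ch: terminate, AL = return code 0
        int     21h

msg:    db      'Hello from DOS', 0Dh, 0Ah, '$'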
"The assembly language would not be my first choice for implementing an operating system." -- But someone did it anyway: a fully graphical, multitasking OS that can run from a floppy: menuetos.net. While it's not exactly the same thing as Windows/Linux/MacOS (more primitive, flickers, etc.), it's a cool proof of concept of how much space can be saved when you use only assembler.
– phyrfox (yesterday)
@tofro Linux system calls have a similar assembly register interface: mov eax, __NR_write / int 0x80 on i386, with args in ebx, ecx, edx, ... (or sysenter; or, on x86-64, syscall with arg-passing registers mostly matching the function-calling convention). The solution is either inline-asm macros or library wrapper functions (like glibc uses). E.g. write(1, "hello", 5) compiles to a library function call which (on x86-64) does something like mov eax, __NR_write / syscall / cmp rax, -4095 / jae set_errno / ret. See this Q&A.
– Peter Cordes (yesterday)
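A rough sketch of the kind of wrapper that comment describes, for x86-64 Linux (hypothetical code in NASM syntax; the real glibc wrapper differs in detail):

; Hypothetical x86-64 Linux write(2) wrapper, NASM syntax. The SysV calling
; convention already has fd/buf/count in rdi/rsi/rdx, so the wrapper only
; loads the syscall number and turns negative results into an error return.

        global  my_write
        section .text
my_write:                       ; ssize_t my_write(int fd, const void *buf, size_t n)
        mov     eax, 1          ; __NR_write on x86-64
        syscall                 ; clobbers rcx and r11
        cmp     rax, -4095      ; results in [-4095, -1] are -errno
        jae     .error
        ret
.error:
        neg     rax
        mov     [rel saved_errno], rax  ; a real wrapper would store this in errno
        mov     rax, -1
        ret

        section .bss
saved_errno:    resq    1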
"The assembly language would not be my first choice for implementing an operating system." Because you're young, and have no appreciation for (1) what it means to have to run in 16 KB of RAM, and (2) how hard it is to write a compiler that optimizes so well that it's better than hand-coded assembler.
– RonJohn (yesterday)
RAM is the issue, entirely. You don't say why assembly language wouldn't be your first choice, but my guess is that you feel it would be easier to code in C or some other higher-level language, and it would be. However, when programming resource-limited computers, ease is not the primary goal; space efficiency, speed of execution, or some combination of these two factors will be the main considerations. Consider that some computers had one kilobyte or less of RAM versus gigabytes now, and even the very last version of MS-DOS had to run nicely on 640 kB systems (the first, 16 kB!).
– Jim MacKenzie (14 hours ago)
11 Answers
Accepted answer (score 42):
C did exist when DOS was developed, but it wasn't used much outside the Unix world, and as mentioned by JdeBP, wouldn't necessarily have been considered a good language for systems programming on micros anyway – more likely candidates in the late seventies would include Forth and Pascal. SCP developed DOS in assembly for a few very pragmatic reasons:
"The last design requirement was that MS-DOS be written in assembly language. While this characteristic does help meet the need for speed and efficiency, the reason for including it is much more basic. The only 8086 software-development tools available to Seattle Computer at that time were an assembler that ran on the Z80 under CP/M and a monitor/debugger that fit into a 2K-byte EPROM (erasable programmable read-only memory). Both of these tools had been developed in house."
As you've seen from the code available for MS-DOS 1.25 and 2.11, these versions were also written in assembly language. That code was never entirely rewritten, so there never was an opportunity to redo it in a higher-level language; nor was the need ever felt, I suspect – assembly was the language of choice for system tools on PCs for a long time, and system developers were as familiar with assembly as with any other language.
Other languages were used in MS-DOS releases. BASIC was used for a number of demos of various kinds over the years, but those hardly count as "core" utilities. Microsoft's Pascal compiler (sold as IBM Pascal) was available early on, and could have been used – but it produces binaries with tell-tale memory problems which none of the MS-DOS tools exhibit, as far as I'm aware.
Some tools added in later versions were developed in C; a quick look through MS-DOS 6.22 shows that, for example, DEFRAG, FASTHELP, MEMMAKER and SCANDISK were written in C. FDISK is also a C program in 6.22; I haven't checked whether it started out in assembly and was rewritten (in early versions of DOS, it wasn't provided by Microsoft but by OEMs). As No'am Newman mentions, the OS/2 Museum page on DOS 3 lists ATTRIB.EXE as the first program provided with MS-DOS to have been written in C.
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
– JdeBP (15 hours ago)
@JdeBP do you know what languages were used in DR DOS? OpenDOS' kernel was all assembly, but its COMMAND.COM was partly written in C. I haven't looked into the rest of the system...
– Stephen Kitt (15 hours ago)
Let's not forget that MS-DOS included DEBUG, a complete IDE for working in assembly on the computer itself, which allowed editing even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.
– Gypsy Spellweaver (1 hour ago)
Answer (score 20):
From the OS/2 Museum page about DOS 3: "The new ATTRIB.EXE utility allowed the user to manipulate file attributes (Read-only, Hidden, System, etc.). It is notable for being the first DOS utility written in C (up to that point, all components in DOS were written in assembly language) and contains the string 'Zbikowski C startup Copyright 1983 (C) Microsoft Corp', clearly identifying Mark Zbikowski's handiwork."
I realise that this doesn't really answer the question, but it gives an idea of how DOS was implemented. Also remember that the operating system had to be as small as possible so as to leave room for user programs, so the resident part of COMMAND.COM would have stayed written in assembly.
Answer (score 16):
Low Memory ==> Assembly Language
In the early days every byte mattered. MS-DOS was, in many ways, an outgrowth of CP/M. CP/M had a fairly hard limit of 64K. Yes, there was some bank switching in later versions, but for practical purposes, for most of its popular lifetime, it was a 64K O/S. That included the O/S resident portion + application + user data.
MS-DOS quickly increased that to 1 Meg (but 640K in practical terms due to IBM's design decisions), and it was relatively easy to use the 8086 segmented architecture to make use of more than 64K of memory, as long as you worked in 64K chunks.
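To illustrate those 64K chunks (a hypothetical sketch, not DOS code, and the buffer address is invented): the physical address is segment × 16 + offset, so adding 1000h to a segment register advances the 64K window by exactly 64 KB.

; NASM, 16-bit real-mode sketch: zero 128 KB by treating it as two 64 KB
; chunks, one segment value at a time (hypothetical buffer at linear 20000h).

        cld
        mov     ax, 2000h       ; segment 2000h -> linear address 20000h
        mov     es, ax
        mov     dx, 2           ; number of 64 KB chunks
next_chunk:
        xor     di, di          ; start of the current 64 KB window
        xor     ax, ax          ; fill value
        mov     cx, 8000h       ; 8000h words = 64 KB
        rep     stosw           ; fill ES:0000 .. ES:FFFF
        mov     ax, es
        add     ax, 1000h       ; 1000h paragraphs = 64 KB further on
        mov     es, ax
        dec     dx
        jnz     next_chunk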
Despite the 1 Meg/640K limit, plenty of machines started out with a lot less RAM. 256K was typical. The original IBM PC motherboard could hold from 16K to 64K, though most (and all the ones I ever worked with myself) could hold from 64K to 256K. Any more RAM went on expansion cards. RAM was still rather expensive in the early days - plenty of machines did plenty of useful work with 256K (or less!), so keeping the resident O/S components to a minimum was very important, to allow for larger applications and more user data.
Can an optimizing C compiler get really close to hand-coded assembly language in memory usage? Absolutely. But they weren't there in the early days. Plus, compilers (in my mind, until Turbo Pascal came along) were big & clunky - i.e., they needed plenty of RAM and disk space and took a long time to compile/link/etc., which would make developing the core of an O/S even harder to do. MS-DOS wasn't like a strip of paper tape loaded in via a TTY to an Altair (the first Microsoft Basic), but it was small and efficient for what was needed at the time, leaving room for applications on a bootable floppy and in RAM.
COMMAND.COM (the command-line interpreter) loaded in the top 32K of RAM and could be overwritten by a large application if necessary. In the twin-floppy days, it was a PITA if that was the case - one finished up putting copies of COMMAND.COM on the data disks. DOS was so small, it wasn't worth writing in C. Even large applications like Lotus 1-2-3 were written in assembler. 1-2-3 version 3 was the first C version, and it was slower and had more bugs than version 2.
– grahamj42 (yesterday)
@grahamj42 "DOS was so small, it wasn't worth writing in C" - actually I would argue the opposite: because it was so small, it had to be assembler, to keep it as absolutely small as possible in the early days. Large applications were initially in assembler too - every cycle counts on a 4.77 MHz 8088, and every byte counts in 256K (or often less). As you move on to a 6 MHz 80286, 640K, etc., the overhead of a high-level language (both CPU cycles and bytes) becomes more acceptable. Bugs - any major rewrite has 'em :-(
– manassehkatz (yesterday)
"The original IBM PC motherboard could hold from 16K to 256K" -- a quick correction: the original motherboard held 16K-64K (1-4 banks of 16K chips). It was quickly replaced by a version that held 64K-256K (1-4 banks of 64K chips) after IBM realised that the 16K configuration wasn't selling well. See minuszerodegrees.net/5150/early/5150_early.htm for more details.
– Jules (21 hours ago)
Low Memory ==> Assembly Language or Forth // (There were options. ;)
– RichF (11 hours ago)
From my point of view, compilers are still big and clunky. I'm currently compiling GCC 6.4 on an old Pentium MMX. I expect it to finish sometime in early November.
– Mark (5 hours ago)
Answer (score 14):
MS-DOS (by which I mean the underlying IO.SYS and MSDOS.SYS files) was written in assembly through the first half of the 1990s.
In 1995 for Windows 95, which was bootstrapped by what you would call MS-DOS 7.0 (although nobody ran DOS 7.0 as a stand-alone OS), I did write a small piece of code in C and included it in the project. As far as I know, it was the first time C code appeared in either of those .SYS files (yes I know one of those SYS files became a text file and all the OS code ended up in the other one).
I remember sneaking a look at the Windows NT source code at the time to see how they had solved some issue, and I was impressed at how even their low level drivers were all written in C. For instance they used the _inp() function to read I/O ports on the ISA bus.
Nice answer. Didn't IBM release a (very different) PC-DOS 7.0?
– Davislor (yesterday)
In addition to the Basic Input/Output System and the Basic Disk Operating System, MS-DOS also comprises the command processor and the housekeeping utilities (superuser.com/questions/329442); and those are also in the Microsoft publication referred to in the question. As such, the use of C code in one of the housekeeping utilities in MS-DOS in 1984, per another answer here, does mean that MS-DOS (the whole actual operating system, not a partial subset of it) had not been wholly written in assembly for more than 10 years prior to that.
– JdeBP (16 hours ago)
Do you know any more about that mysterious problem, why MS-DOS couldn't read DR-DOS floppies?
– peterh (13 hours ago)
Answer (score 7):
Writing the operating system in C would have been really inefficient, for a number of reasons:
First, initial compilers for high-level languages used 16-bit pointers on MS-DOS, only later adding support for 32-bit pointers. Since much of the work done in the operating system needed to manage and work with a 1 MB address space, this would have been impractical without larger pointers. Programs that took advantage of compiler support for larger pointers suffered significantly on the 8086, since the hardware doesn't actually support 32-bit pointers nicely!
Second, code generation quality was poor on the 8086, in part because compilers were less mature back then, in part because of the irregular instruction set of the 8086 (regarding many things, including the above-mentioned 32-bit pointer handling), and in part because in assembly language a human programmer can simply use all the features of the processor (e.g. returning status in the carry flag along with a return value in AX, as is done with the int 21h system calls). The small register set also makes the compiler's job harder, which means it would tend to use stack memory for local variables where a programmer would have used registers.
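To make that carry-flag convention concrete, here is a hypothetical sketch (the file name is invented) of a DOS open call: the same AX register carries either a file handle or an error code, and CF tells you which - something a C compiler of the era could not express or exploit directly.

; NASM, 16-bit sketch: DOS reports status in CF and reuses AX for either the
; new file handle or the error code, depending on that flag.

        mov     dx, fname
        mov     ax, 3D00h       ; AH=3Dh: open existing file, AL=0: read-only
        int     21h
        jc      open_failed     ; CF set   -> AX is a DOS error code
        mov     [handle], ax    ; CF clear -> the same AX is the file handle
        jmp     done
open_failed:
        mov     [err], ax       ; e.g. 0002h = file not found
done:
        ret                     ; back to the PSP in a .COM program

fname:  db      'CONFIG.SYS', 0 ; ASCIIZ path (any existing file would do)
handle: dw      0
err:    dw      0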
Compilers only use a subset of a processor's instruction set, and accessing those other features would have required an extensive library, or compiler options and language extensions that were yet to come (e.g. __stdcall, and others).
As hardware has evolved it has become more friendly to compilers, and compilers have also improved dramatically.
Answer (score 5):
Aside from the historical answer, which is just "yes", you also have to keep in mind that DOS is orders of magnitude smaller than what we'd call an "OS" today.
"What is odd in my opinion is the use of x86 assembly language for everything."
On the contrary; using anything else would have been odd back then.
DOS had very few responsibilities - it handled several low-level components in a more or less static way. There was no multi-user/-tasking/-processing. No scheduler. No forking, no subprocesses, no "exec"; no virtualization of memory or processes; no concept of drivers; no modules, no extensibility. No USB, no PCI, no video functionality to speak of, no networking, no audio. Really, there was very little going on.
See the source code - the whole thing (including command-line tools, the "kernel"...) fits into a handful of assembler files; they aren't even sorted into subdirectories (as Michael Kjörling pointed out, DOS 1.0 didn't have subdirectories, but they didn't bother adding a hierarchy in later versions either).
If you count the DOS API calls, you end up at roughly 100 services for the 0x21 call, which is... not much, compared to today.
Finally, the CPUs were much simpler; there was only one mode (at least, DOS ignored the rest, if we ignore EMM386 and such).
Suffice it to say, the programmers back then were quite used to assembler; more complex software was written in assembler on a regular basis. It probably did not even occur to them to rewrite DOS in C. There simply would have been little benefit.
"They didn't even need to be sorted into subdirectories" - MS-DOS 1.x didn't even support subdirectories. That was only added in 2.0. So development of 2.0 would pretty naturally not have used subdirectories for source code organization. It's like how, these days, when building a compiler for an updated version of a programming language, any newly introduced language constructs likely don't get used in the compiler source code until the new version of the compiler is quite stable.
– Michael Kjörling (17 hours ago)
@MichaelKjörling, phew, thanks for that addition. My first contact with DOS was on a Schneider Amstrad PC, I think (no HDD but two floppies, though I cannot recall if they were 5 1/4" or already 3 1/2"; and I certainly do not recall the version of DOS). I do recall vividly how I once tried out all the DOS commands... up to and including RECOVER.COM on the boot disk. The disk certainly had no subdirectories after THAT one, and I learned the importance of having backups. :-) en.wikipedia.org/wiki/Recover_(command) Good old times.
– AnoE (15 hours ago)
Answer (score 4):
High-level languages are generally easier to work with. However, depending on what one is programming, and on one's experience, programming in assembler is not necessarily all that complex. Remove the hurdles of graphics, sound and inter-process communication, leaving just a keyboard and a text shell for user interaction, and it's pretty straightforward. Especially with a well-documented BIOS to handle the low-level text-in / text-out / disk stuff, building a well-functioning program in assembler was straightforward and not all that slow to accomplish.
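As an illustration of how thin that BIOS layer was (a hypothetical sketch, not from any shipped program), echoing the keyboard to the screen takes two interrupt calls:

; NASM, 16-bit sketch: the BIOS services for "text in" and "text out".

echo_loop:
        mov     ah, 00h         ; INT 16h, AH=00h: wait for a keystroke
        int     16h             ; returns ASCII in AL, scan code in AH

        mov     bx, 0007h       ; page 0; attribute 07h (used in graphics modes)
        mov     ah, 0Eh         ; INT 10h, AH=0Eh: teletype output of AL
        int     10h             ; prints the character and advances the cursor
        jmp     echo_loop       ; loop forever (sketch only)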
Looking backward with a 2018 mindset, yeah, it might seem strange to stick with assembler even in the later versions of DOS. It was not, though. Others have mentioned that some tools were eventually written in C. Still, most of it was already written and known to operate well. Why bother rewriting everything? Do you think any users would have cared if a box containing the newest DOS had a blurb stating, "Now fully implemented in the C language!"?
Answer (score 3):
You need to understand that C wasn't a good compromise between low-level and "high-level". The abstractions it offered were tiny, and their cost was more important on the PC than on the machines where Unix originated (even the original PDP-11/20 had more memory and faster storage than the original IBM PC). The main reason you'd choose C wasn't to get useful abstraction, but rather to improve portability (this at a time when differences between CPUs and memory models were still huge). Since the IBM PC didn't need portability, there was little benefit to using C.
Today, people tend to look at assembly programming as some stone-age-level technology (especially if you've never worked with modern assembly). But keep in mind that the high-level alternatives to assembly were languages like LISP - languages that didn't even pretend to have any relation to the hardware. C and assembly were extremely close in their capabilities, and the benefits C gave you were often outweighed by the costs. People had large amounts of experience and knowledge of the hardware and of assembly, and lots of experience designing software in assembly. Moving to C didn't save as much effort as you'd think.
Additionally, when MS-DOS was being developed, there was no C compiler for the PC (or the x86 CPUs). Writing a compiler wasn't easy (and I'm not even talking about optimizing compilers). Most of the people involved didn't have great insight into state-of-the-art computer science (which, while of great theoretical value, was pretty academic in relation to desktop computers at the time; CS tended to shun the "just get it working, somehow" mentality of commercial software). On the other hand, creating an assembler is pretty trivial - and it already gives you lots of the capabilities that early compilers for languages like C did. Do you really want to spend the effort to make a high-level compiler when what you're actually trying to do is write an OS? By the time tools like Turbo Pascal came to be, they might have been a good choice - but that was much later, and there'd be little point in rewriting the code already written.
Even with a compiler, don't forget how crappy those computers were. Compilers were bulky and slow, and using a compiler involved flipping floppies all the time. That's one of the reasons why, at the time, languages like C usually didn't improve productivity unless your software got really big - you needed just as much careful design as with assembly, and you had to rely on your own verification of the code long before it got compiled and executed. The first compiler to really break that trend was Turbo Pascal, which took very little memory and was blazing fast (while including a full-blown IDE with a debugger!) - in 1983. That's about in time for MS-DOS 3 - and indeed, around that time, some of the new tools were already written in C; but that's still no reason to rewrite the whole OS. If it works, why break it? And worse, why risk breaking all of the applications that already run just fine on MS-DOS?
The API of DOS was mostly about invoking interrupts and passing arguments (or pointers) through registers. That's an extremely simple interface that's pretty much just as easy to implement in assembly as in C (or, depending on your C compiler, much easier). Developing applications for MS-DOS required pretty much no investment beyond the computer itself, and a lot of development tools sprang up pretty quickly from other vendors (though at launch, Microsoft was still the only company that provided an OS, a programming language and applications for the PC). All the way through the MS-DOS era, people used assembly whenever small or fast code was required - compilers only slowly caught up with what assembly was capable of, and even then you usually used something like C or Pascal for most of the application, with custom assembly for the performance-critical bits.
OSes for desktop computers had one main requirement - be small. They didn't have to do much, but whatever they had to keep in memory was memory that couldn't be used by applications. MS-DOS targeted machines with 16 kiB of RAM - that didn't leave a lot of room for the OS. Diminishing that further by using code that wasn't hand-optimized would have been a pointless waste. Even later, as memory started expanding towards the 640 kiB barrier, every kiB still counted - I remember tweaking memory for days to get Doom to run with a mouse, network and sound at the same time (this was already with a 16 MiB PC, but lots of things still had to fit in those 640 kiB - including device drivers). This got even worse with CD-ROM games; one more driver to fit in. And throughout all this time, you wanted to avoid the OS as much as possible - direct memory access was king if you could afford it. So there wasn't really much demand for complicated OS features - you mostly wanted the OS to stand aside while your applications were running (on the PC, the major exception would only come with Windows 3.0).
But programming in 100% assembly was nowhere near as tedious as people imagine today (one notable example being RollerCoaster Tycoon, a huge '99 game, 100% written in assembly by one guy). Most importantly, C wasn't significantly better, especially on the PC and with one-person "teams", and it introduced a lot of design conflicts that people had to learn to deal with. People already had plenty of experience developing in assembly, and were very aware of the potential pitfalls and design challenges.
Answer (score 3):
Yes - C has been around since 1972, but there were no MS-DOS C compilers until the late 80s. To convert an entire OS from assembler to C would be a mammoth task. Even though it might be easier to maintain, it could be a lot slower.
You can see the result of such a conversion when you compare Visual Studio 2008 to VS2010. This was a full-blown conversion from C to C#. OK - it is easier to maintain from the vendor's point of view, but the new product is 24 times slower: on a netbook, 2008 loads in 5 s, 2010 takes almost 2 minutes.
Also, DOS was a 16-bit OS in a 20-bit address space. This meant that there was a lot of segmented addressing and a few memory models to choose from (Tiny, Compact, Medium, Large, Huge): not the flat addressing that you get in the 32-bit/64-bit compilers nowadays. The compilers didn't hide this from you: you had to make a conscious decision as to which memory model to use, since changing from one model to another wasn't a trivial exercise (I've done this in a past life).
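To illustrate what those memory models hide (a hypothetical sketch; the segment value is invented): a near pointer is a 16-bit offset into an implied segment, while a far pointer carries its own segment and costs an extra register load on every dereference - which is why switching models is more than a simple recompile.

; NASM, 16-bit sketch: near vs. far data pointers, the distinction that the
; Tiny/Small/Medium/Compact/Large/Huge models manage for you.

; data:
buffer:    db   0
near_ptr:  dw   buffer          ; 2 bytes: offset only, segment implied (DS)
far_ptr:   dw   buffer          ; 4 bytes: offset ...
           dw   3000h           ; ... plus an explicit segment (made-up value)

; code:
        mov     bx, [near_ptr]
        mov     al, [bx]        ; near dereference: one load, DS assumed

        les     bx, [far_ptr]   ; far dereference: load ES:BX from the pointer
        mov     al, [es:bx]     ; then access with a segment override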
Assembly to C, and C to C#, cannot really be compared, as C# is a JIT'd, GC'd, memory-safe language, whereas C is very close to assembly in the features it provides, being just easier to write and maintain. Anyway, the "entire OS" for MS-DOS is quite little code, so converting it to C, either completely or partially, wouldn't be such a large task.
– juhist (yesterday)
OK, maybe C to C# was a bad example. It is the little specialist instructions like IN and OUT, which are used a lot in device drivers, that cause no end of problems with many C compilers. I don't know what instructions are used, but stuff like REPZ often has to be hand-coded: there are no C equivalents.
– cup (yesterday)
@juhist Try actually doing some of the conversion yourself, then tell us whether it was really a large task or not. (Oh, and make sure you regression-test every edge and corner case, just to be sure your C version doesn't change the behaviour of anything, not just what the (buggy and incomplete) user documentation says is valid!)
– alephzero (yesterday)
It is an answer; it starts with "Yes".
– wizzwizz4 ♦ (yesterday)
June 1982 is post-launch by a bit over a year (and so well after key development work), but it is not by any stretch of the imagination the late 80s.
– Chris Stratton (yesterday)
Answer (score 2):
Answering the question - yes.
For the rationale: there's very little gain in rewriting functioning code (unless for example you have portability specifically in mind).
The "newest version" of any major program generally contains much of the code of the previous version, so again, why spend programmer time on rewriting existing features instead of adding new features?
Welcome to Retrocomputing! This answer could be improved by supporting evidence. For example, specifying which version your answer refers to, and giving a link to source code. Although your rationale is valid, it doesn't count as evidence.
– Dr Sheldon (yesterday)
Answer (score 0):
It's interesting to note that the direct inspiration for the initial DOS API, namely CP/M, was mostly written in PL/M rather than assembly language.
PL/M being a somewhat obscure language, and the original Digital Research source code being unavailable due to copyright and licensing reasons anyway, writing in assembly language was the most straightforward course for direct binary compatibility. In particular since the machine-dependent part of the operating system, the BIOS, was already provided by Microsoft (it was very common to write the BIOS in assembly language anyway, even for CP/M, for similar reasons).
The original CP/M structure consisted of the BIOS, the BDOS, and the CCP (basically what COMMAND.COM does), with the BIOS implementing the system-dependent parts, the BDOS implementing the available system calls on top of that, and the CCP (typically reloaded after each program run) providing a basic command-line interface.
Much of the BDOS layer was just glue code, the most important and more complex part being the file system code and implementation. There were no file ids indexing kernel-internal structures: instead the application program had to provide the room for the respective data structures. Consequently there was no limitation on concurrently open files. Also there was no file abstraction across devices: disk files used different system calls than console I/O or printers.
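DOS's own FCB functions kept that style; here is a hypothetical sketch (the file name is invented) of opening and reading a file through an FCB, where the caller supplies the 37-byte control block and no file handle exists:

; NASM, 16-bit .COM sketch: the CP/M-style FCB interface as exposed by DOS.
; The caller owns the control block; DOS fills in its bookkeeping fields.

        org     100h
start:
        mov     dx, fcb
        mov     ah, 0Fh         ; DOS: open file via the FCB at DS:DX
        int     21h
        cmp     al, 0           ; AL = 00h on success, 0FFh if not found
        jne     done

        mov     dx, fcb
        mov     ah, 14h         ; sequential read: 128 bytes into the DTA
        int     21h             ; (default DTA is at offset 80h in the PSP)
done:
        ret                     ; back to the PSP, which terminates the program

fcb:    db      0               ; drive (0 = default)
        db      'DATA    '      ; 8-character file name, space-padded (invented)
        db      'TXT'           ; 3-character extension
        times   25 db 0         ; remaining FCB fields, filled in by DOS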
Since the core of MS-DOS corresponds to just what was the BDOS in CP/M, reimplementing it in assembly language was not that much of a chore. Later versions of MS-DOS tried adding a file-handle layer, directories and pipes to the mix to look more like Unix, but partly due to the unwieldy implementation language and partly due to a lack of technical excellence, the results were far from convincing. Other things that were a mess were end-of-file handling (since CP/M only had file lengths in multiples of 128) and text line separators vs. terminal handling (CR/LF is around even to this day).
So doing the original implementation in assembly language was reasonable given the system-call history of CP/M that DOS initially tried to emulate. However, it contributed to attracting the wrong project members for moving to a Unix-like approach to system responsibilities and mechanisms. Microsoft never managed to utilize the 286's 16-bit protected mode for creating a more modern Windows variant; instead both Windows 95 and Windows NT worked with the 386's 32-bit protected mode, Windows 95 with DOS underpinnings and Windows NT with a newly developed kernel. Eventually the NT approach replaced the old DOS-based one.
NT was renowned for being "enterprise-level" and resource-consuming. Part of the reason certainly was that it had bulkier and slower code, due to not being coded principally in assembly language like the DOS-based OS cores were. That led to a rather long parallel history of DOS-based and NT-based Windows systems.
So to answer your question: later "versions of DOS" were written in higher-level languages, but it took them a long time to actually replace the assembly-language-based ones.
11 Answers
11
active
oldest
votes
11 Answers
11
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
42
down vote
accepted
C did exist when DOS was developed, but it wasnâÂÂt used much outside the Unix world, and as mentioned by JdeBP, wouldnâÂÂt necessarily have been considered a good language for systems programming on micros anyway â more likely candidates in the late seventies would include Forth and Pascal. SCP developed DOS in assembly for a few very pragmatic reasons:
The last design requirement was that MS-DOS be written in assembly language. While this characteristic does help meet the need for speed and efficiency, the reason for including it is much more basic. The only 8086 software-development tools available to Seattle Computer at that time were an assembler that ran on the Z80 under CP/M and a monitor/debugger that fit into a 2K-byte EPROM (erasable programmable read-only memory). Both of these tools had been developed in house.
As youâÂÂve seen from the code available for MS-DOS 1.25 and 2.11, these versions were also written in assembly language. That code was never entirely rewritten, so there never was the opportunity to rewrite it in a higher-level language; nor was the need ever felt, I suspect â assembly was the language of choice for system tools on PCs for a long time, and system developers were as familiar with assembly as with any other language.
Other languages were used in MS-DOS releases. BASIC was used for a number of demos of various kinds over the years; but they hardly count as âÂÂcoreâ utilities. MicrosoftâÂÂs Pascal compiler (sold as IBM Pascal) was available early on, and could have been used â but it produces binaries with tell-tale memory problems which none of the MS-DOS tools exhibit, as far as IâÂÂm aware.
Some tools added in later versions were developed in C; a quick look through MS-DOS 6.22 shows that, for example, DEFRAG
, FASTHELP
, MEMMAKER
and SCANDISK
were written in C. FDISK
is also a C program in 6.22; I havenâÂÂt checked its history if it started out in assembly and was rewritten (in early versions of DOS, it wasnâÂÂt provided by Microsoft but by OEMs). As No'am Newman mentions, the OS/2 Museum page on DOS 3 lists ATTRIB.EXE
as the first program provided with MS-DOS to have been written in C.
1
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
â JdeBP
15 hours ago
@JdeBP do you know what languages were used in DR DOS? OpenDOSâ kernel was all assembly, but itsCOMMAND.COM
was partly written in C. I havenâÂÂt looked into the rest of the system...
â Stephen Kitt
15 hours ago
1
Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer that allowed for editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.DEBUG
â Gypsy Spellweaver
1 hour ago
add a comment |Â
up vote
42
down vote
accepted
C did exist when DOS was developed, but it wasnâÂÂt used much outside the Unix world, and as mentioned by JdeBP, wouldnâÂÂt necessarily have been considered a good language for systems programming on micros anyway â more likely candidates in the late seventies would include Forth and Pascal. SCP developed DOS in assembly for a few very pragmatic reasons:
The last design requirement was that MS-DOS be written in assembly language. While this characteristic does help meet the need for speed and efficiency, the reason for including it is much more basic. The only 8086 software-development tools available to Seattle Computer at that time were an assembler that ran on the Z80 under CP/M and a monitor/debugger that fit into a 2K-byte EPROM (erasable programmable read-only memory). Both of these tools had been developed in house.
As youâÂÂve seen from the code available for MS-DOS 1.25 and 2.11, these versions were also written in assembly language. That code was never entirely rewritten, so there never was the opportunity to rewrite it in a higher-level language; nor was the need ever felt, I suspect â assembly was the language of choice for system tools on PCs for a long time, and system developers were as familiar with assembly as with any other language.
Other languages were used in MS-DOS releases. BASIC was used for a number of demos of various kinds over the years; but they hardly count as âÂÂcoreâ utilities. MicrosoftâÂÂs Pascal compiler (sold as IBM Pascal) was available early on, and could have been used â but it produces binaries with tell-tale memory problems which none of the MS-DOS tools exhibit, as far as IâÂÂm aware.
Some tools added in later versions were developed in C; a quick look through MS-DOS 6.22 shows that, for example, DEFRAG
, FASTHELP
, MEMMAKER
and SCANDISK
were written in C. FDISK
is also a C program in 6.22; I havenâÂÂt checked its history if it started out in assembly and was rewritten (in early versions of DOS, it wasnâÂÂt provided by Microsoft but by OEMs). As No'am Newman mentions, the OS/2 Museum page on DOS 3 lists ATTRIB.EXE
as the first program provided with MS-DOS to have been written in C.
1
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
â JdeBP
15 hours ago
@JdeBP do you know what languages were used in DR DOS? OpenDOSâ kernel was all assembly, but itsCOMMAND.COM
was partly written in C. I havenâÂÂt looked into the rest of the system...
â Stephen Kitt
15 hours ago
1
Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer that allowed for editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.DEBUG
â Gypsy Spellweaver
1 hour ago
add a comment |Â
up vote
42
down vote
accepted
up vote
42
down vote
accepted
C did exist when DOS was developed, but it wasnâÂÂt used much outside the Unix world, and as mentioned by JdeBP, wouldnâÂÂt necessarily have been considered a good language for systems programming on micros anyway â more likely candidates in the late seventies would include Forth and Pascal. SCP developed DOS in assembly for a few very pragmatic reasons:
The last design requirement was that MS-DOS be written in assembly language. While this characteristic does help meet the need for speed and efficiency, the reason for including it is much more basic. The only 8086 software-development tools available to Seattle Computer at that time were an assembler that ran on the Z80 under CP/M and a monitor/debugger that fit into a 2K-byte EPROM (erasable programmable read-only memory). Both of these tools had been developed in house.
As youâÂÂve seen from the code available for MS-DOS 1.25 and 2.11, these versions were also written in assembly language. That code was never entirely rewritten, so there never was the opportunity to rewrite it in a higher-level language; nor was the need ever felt, I suspect â assembly was the language of choice for system tools on PCs for a long time, and system developers were as familiar with assembly as with any other language.
Other languages were used in MS-DOS releases. BASIC was used for a number of demos of various kinds over the years; but they hardly count as âÂÂcoreâ utilities. MicrosoftâÂÂs Pascal compiler (sold as IBM Pascal) was available early on, and could have been used â but it produces binaries with tell-tale memory problems which none of the MS-DOS tools exhibit, as far as IâÂÂm aware.
Some tools added in later versions were developed in C; a quick look through MS-DOS 6.22 shows that, for example, DEFRAG
, FASTHELP
, MEMMAKER
and SCANDISK
were written in C. FDISK
is also a C program in 6.22; I havenâÂÂt checked its history if it started out in assembly and was rewritten (in early versions of DOS, it wasnâÂÂt provided by Microsoft but by OEMs). As No'am Newman mentions, the OS/2 Museum page on DOS 3 lists ATTRIB.EXE
as the first program provided with MS-DOS to have been written in C.
C did exist when DOS was developed, but it wasnâÂÂt used much outside the Unix world, and as mentioned by JdeBP, wouldnâÂÂt necessarily have been considered a good language for systems programming on micros anyway â more likely candidates in the late seventies would include Forth and Pascal. SCP developed DOS in assembly for a few very pragmatic reasons:
The last design requirement was that MS-DOS be written in assembly language. While this characteristic does help meet the need for speed and efficiency, the reason for including it is much more basic. The only 8086 software-development tools available to Seattle Computer at that time were an assembler that ran on the Z80 under CP/M and a monitor/debugger that fit into a 2K-byte EPROM (erasable programmable read-only memory). Both of these tools had been developed in house.
As youâÂÂve seen from the code available for MS-DOS 1.25 and 2.11, these versions were also written in assembly language. That code was never entirely rewritten, so there never was the opportunity to rewrite it in a higher-level language; nor was the need ever felt, I suspect â assembly was the language of choice for system tools on PCs for a long time, and system developers were as familiar with assembly as with any other language.
Other languages were used in MS-DOS releases. BASIC was used for a number of demos of various kinds over the years; but they hardly count as âÂÂcoreâ utilities. MicrosoftâÂÂs Pascal compiler (sold as IBM Pascal) was available early on, and could have been used â but it produces binaries with tell-tale memory problems which none of the MS-DOS tools exhibit, as far as IâÂÂm aware.
Some tools added in later versions were developed in C; a quick look through MS-DOS 6.22 shows that, for example, DEFRAG
, FASTHELP
, MEMMAKER
and SCANDISK
were written in C. FDISK
is also a C program in 6.22; I havenâÂÂt checked its history if it started out in assembly and was rewritten (in early versions of DOS, it wasnâÂÂt provided by Microsoft but by OEMs). As No'am Newman mentions, the OS/2 Museum page on DOS 3 lists ATTRIB.EXE
as the first program provided with MS-DOS to have been written in C.
edited 15 hours ago
answered yesterday
Stephen Kitt
31.7k4128150
31.7k4128150
1
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
â JdeBP
15 hours ago
@JdeBP do you know what languages were used in DR DOS? OpenDOSâ kernel was all assembly, but itsCOMMAND.COM
was partly written in C. I havenâÂÂt looked into the rest of the system...
â Stephen Kitt
15 hours ago
1
Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer that allowed for editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.DEBUG
â Gypsy Spellweaver
1 hour ago
add a comment |Â
1
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
â JdeBP
15 hours ago
@JdeBP do you know what languages were used in DR DOS? OpenDOSâ kernel was all assembly, but itsCOMMAND.COM
was partly written in C. I havenâÂÂt looked into the rest of the system...
â Stephen Kitt
15 hours ago
1
Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer that allowed for editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.DEBUG
â Gypsy Spellweaver
1 hour ago
1
1
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
â JdeBP
15 hours ago
It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
â JdeBP
15 hours ago
@JdeBP do you know what languages were used in DR DOS? OpenDOSâ kernel was all assembly, but its
COMMAND.COM
was partly written in C. I havenâÂÂt looked into the rest of the system...â Stephen Kitt
15 hours ago
@JdeBP do you know what languages were used in DR DOS? OpenDOSâ kernel was all assembly, but its
COMMAND.COM
was partly written in C. I havenâÂÂt looked into the rest of the system...â Stephen Kitt
15 hours ago
1
1
Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer that allowed for editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.
DEBUG
â Gypsy Spellweaver
1 hour ago
Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer that allowed for editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results.
DEBUG
â Gypsy Spellweaver
1 hour ago
add a comment |Â
up vote
20
down vote
From The OS/2 Museum page about DOS3: "The new ATTRIB.EXE utility allowed the user to manipulate file attributes (Read-only, Hidden, System, etc.). It is notable for being the first DOS utility written in C (up to that point, all components in DOS were written in assembly language) and contains the string âÂÂZbikowski C startup Copyright 1983 (C) Microsoft CorpâÂÂ, clearly identifying Mark ZbikowskiâÂÂs handiwork."
I realise that this doesn't really answer the question, but gives an idea of how DOS was implemented. Also remember that the operating system had to be as small as possible so as to leave room for user programs, so the resident part of COMMAND.COM would have stayed written in Assembly.
add a comment |Â
up vote
20
down vote
From The OS/2 Museum page about DOS3: "The new ATTRIB.EXE utility allowed the user to manipulate file attributes (Read-only, Hidden, System, etc.). It is notable for being the first DOS utility written in C (up to that point, all components in DOS were written in assembly language) and contains the string âÂÂZbikowski C startup Copyright 1983 (C) Microsoft CorpâÂÂ, clearly identifying Mark ZbikowskiâÂÂs handiwork."
I realise that this doesn't really answer the question, but gives an idea of how DOS was implemented. Also remember that the operating system had to be as small as possible so as to leave room for user programs, so the resident part of COMMAND.COM would have stayed written in Assembly.
add a comment |Â
up vote
20
down vote
up vote
20
down vote
From The OS/2 Museum page about DOS3: "The new ATTRIB.EXE utility allowed the user to manipulate file attributes (Read-only, Hidden, System, etc.). It is notable for being the first DOS utility written in C (up to that point, all components in DOS were written in assembly language) and contains the string âÂÂZbikowski C startup Copyright 1983 (C) Microsoft CorpâÂÂ, clearly identifying Mark ZbikowskiâÂÂs handiwork."
I realise that this doesn't really answer the question, but gives an idea of how DOS was implemented. Also remember that the operating system had to be as small as possible so as to leave room for user programs, so the resident part of COMMAND.COM would have stayed written in Assembly.
From The OS/2 Museum page about DOS3: "The new ATTRIB.EXE utility allowed the user to manipulate file attributes (Read-only, Hidden, System, etc.). It is notable for being the first DOS utility written in C (up to that point, all components in DOS were written in assembly language) and contains the string âÂÂZbikowski C startup Copyright 1983 (C) Microsoft CorpâÂÂ, clearly identifying Mark ZbikowskiâÂÂs handiwork."
I realise that this doesn't really answer the question, but gives an idea of how DOS was implemented. Also remember that the operating system had to be as small as possible so as to leave room for user programs, so the resident part of COMMAND.COM would have stayed written in Assembly.
answered yesterday
No'am Newman
436126
436126
add a comment |Â
add a comment |Â
up vote
16
down vote
Low Memory ==> Assembly Language
In the early days every byte mattered. MS-DOS was, in many ways, an outgrowth of CP/M. CP/M had a fairly hard limit of 64K. Yes, there were some bank switching in later versions, but for practical purposes for most its popular lifetime it was a 64K O/S. That included O/S resident portion + Application + User Data.
MS-DOS quickly increased that to 1 Meg. (but 640K in practical terms due to IBM's design decisions) and it was relatively easy to use the 8086 segmented architecture to make use of more than 64K memory, as long as you worked in 64K chunks.
Despite the 1 Meg./640K limit, plenty of machines started out with a lot less RAM. 256K was typical. The original IBM PC motherboard could hold from 16K to 64K, though most (and all the ones I ever worked with myself) could hold from 64K - 256K. Any more RAM went on expansion cards. RAM was still rather expensive in the early days - plenty of machines did plenty of useful work with 256K (or less!), so keeping the resident O/S components to a minimum was very important to allow for larger applications and more user data.
Can an optimizing C compiler get really close to hand-coded assembly language in memory usage? Absolutely. But they weren't there in the early days. Plus, compilers (in my mind, until Turbo Pascal came along) were big and clunky - i.e., they needed plenty of RAM and disk space and took a long time to compile/link/etc. - which would have made developing the core of an O/S even harder. MS-DOS wasn't like a strip of paper tape loaded in via a TTY to an Altair (the first Microsoft BASIC), but it was small and efficient for what was needed at the time, leaving room for applications on a bootable floppy and in RAM.
edited 15 hours ago
answered yesterday
manassehkatz
3
COMMAND.COM (the command-line interpreter) loaded in the top 32K of RAM and could be overwritten by a large application if necessary. In the twin-floppy days, it was a PITA if that was the case - one ended up putting copies of COMMAND.COM on the data disks. DOS was so small, it wasn't worth writing in C. Even large applications like Lotus 1-2-3 were written in assembler. 1-2-3 version 3 was the first C version, and it was slower and had more bugs than version 2.
– grahamj42
yesterday
6
@grahamj42 "DOS was so small, it wasn't worth writing in C" - actually I would argue the opposite: because it was so small, it had to be assembler to keep it as absolutely small as possible in the early days. Large applications were initially in assembler too - every cycle counts on a 4.77 MHz 8088, and every byte counts in 256K (or often less). As you move on to a 6 MHz 80286, 640K, etc., the overhead of a high-level language (both CPU cycles and bytes) becomes more acceptable. Bugs - any major rewrite has 'em :-(
– manassehkatz
yesterday
1
"the original IBM PC motherboard could hold from 16K to 256K" -- a quick correction: the original motherboard held 16K-64K (1-4 banks of 16K chips). It was quickly replaced by a version that held 64K-256K (1-4 banks of 64K chips) after IBM realised that the 16K configuration wasn't selling well. See minuszerodegrees.net/5150/early/5150_early.htm for more details.
– Jules
21 hours ago
1
Low Memory ==> Assembly Language or Forth // (There were options. ;)
– RichF
11 hours ago
1
From my point of view, compilers are still big and clunky. I'm currently compiling GCC 6.4 on an old Pentium MMX. I expect it to finish sometime in early November.
– Mark
5 hours ago
up vote
14
down vote
MS-DOS (by which I mean the underlying IO.SYS and MSDOS.SYS files) was written in assembly through the first half of the 1990s.
In 1995, for Windows 95, which was bootstrapped by what you would call MS-DOS 7.0 (although nobody ran DOS 7.0 as a stand-alone OS), I did write a small piece of code in C and included it in the project. As far as I know, it was the first time C code appeared in either of those .SYS files (yes, I know one of those SYS files became a text file and all the OS code ended up in the other one).
I remember sneaking a look at the Windows NT source code at the time to see how they had solved some issue, and I was impressed that even their low-level drivers were all written in C. For instance, they used the _inp() function to read I/O ports on the ISA bus.
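For contrast, here is roughly what such a port read looks like from C. This is only a sketch of mine, assuming a classic 16-bit DOS compiler whose <conio.h> provides _inp(); the choice of port (the keyboard controller status register at 0x64) is just an example, and direct port I/O like this won't work under a modern protected-mode OS:

    /* Sketch: read an ISA I/O port from C via _inp() (16-bit DOS compilers). */
    #include <conio.h>
    #include <stdio.h>

    int main(void)
    {
        /* In assembly this is one instruction pair: mov dx, 64h / in al, dx. */
        int status = _inp(0x64);              /* keyboard controller status port */
        printf("KBC status: %02X\n", status & 0xFF);
        return 0;
    }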
answered yesterday
skew
Nice answer. Didn't IBM release a (very different) PC-DOS 7.0?
– Davislor
yesterday
In addition to the Basic Input/Output System and the Basic Disk Operating System, MS-DOS also comprises the command processor and the housekeeping utilities (superuser.com/questions/329442), and those are also in the Microsoft publication referred to in the question. As such, the use of C code in one of the housekeeping utilities in MS-DOS in 1984, per another answer here, means that MS-DOS (the whole actual operating system, not a partial subset of it) had already not been written wholly in assembly for more than ten years by the time of the 1990s versions.
– JdeBP
16 hours ago
Do you know any more about that mysterious problem where MS-DOS couldn't read DR-DOS floppies?
– peterh
13 hours ago
up vote
7
down vote
C would have been a really inefficient choice for writing the operating system, for a number of reasons:
First, initial compilers for high-level languages used 16-bit pointers on MS-DOS, only later adding support for 32-bit (segment:offset) pointers. Since much of the work done in the operating system needed to manage and work with a 1 MB address space, this would have been impractical without larger pointers. And programs that did take advantage of compiler support for larger pointers suffered significantly on the 8086, since the hardware doesn't actually support 32-bit pointers nicely!
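To illustrate the pointer problem: a plain 16-bit pointer can only reach the current 64K segment, so 16-bit compilers added a non-standard far qualifier that carries a 16-bit segment plus a 16-bit offset. A minimal sketch of mine, assuming a Microsoft C or Turbo C style compiler; the video-memory address is just the classic example of something outside the default data segment:

    /* Sketch: "far" is a 16-bit DOS compiler extension, not standard C.
       A far pointer packs segment:offset, so it can reach the whole 1 MB
       real-mode space, at the cost of a segment register load per access. */
    int main(void)
    {
        /* Colour text screen lives at B800:0000; write 'A' to row 0, column 0.
           Casting a long constant to a far pointer puts the high word in the
           segment and the low word in the offset. */
        unsigned char far *screen = (unsigned char far *)0xB8000000UL;
        screen[0] = 'A';
        return 0;
    }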
Second, code generation quality was poor on the 8086, partly because compilers were less mature back then, partly because of the irregular instruction set of the 8086 (including the 32-bit pointer handling mentioned above), and partly because in assembly language a human programmer can simply use all the features of the processor (e.g. returning a status in the carry flag alongside a return value in AX, as the int 21h system calls do). The small register set also makes the compiler's job harder, which means it would tend to use stack memory for local variables where a programmer would have used registers.
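To see why that carry-flag convention is awkward from C: a C caller can't test CF directly, so it has to go through a register-struct shim such as the intdos() helper that the 16-bit DOS compilers shipped in <dos.h>. A minimal sketch of mine, assuming such a compiler and the small memory model (so DS already points at the data segment); the directory path is a made-up example:

    /* Sketch: calling int 21h, AH=3Bh (change directory) from C.
       Registers go in and out through union REGS; the error status that
       assembly code would read straight from CF shows up as out.x.cflag. */
    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        union REGS in, out;
        char path[] = "C:\\TMP";          /* hypothetical directory */

        in.h.ah = 0x3B;                   /* DOS function 3Bh: chdir            */
        in.x.dx = (unsigned)path;         /* DS:DX -> ASCIIZ path (small model) */
        intdos(&in, &out);

        if (out.x.cflag)                  /* CF set => error code in AX         */
            printf("chdir failed, DOS error %u\n", out.x.ax);
        else
            printf("chdir OK\n");
        return 0;
    }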
Compilers only used a subset of the processor's instruction set, and accessing the remaining features would have required an extensive library, or compiler options and language extensions that were yet to come (e.g. __stdcall and others).
As hardware has evolved it has become friendlier to compilers, and compilers themselves have improved dramatically.
answered yesterday
Erik Eidt
up vote
5
down vote
Aside from the historical answer, which is just "yes", you also have to keep in mind that DOS is orders of magnitude smaller than what we'd call an "OS" today.
What is odd in my opinion is the use of x86 assembly language for everything.
On the contrary; using anything else would have been odd back then.
DOS had very few responsibilities - it handled several low-level components in a more or less static way. There was no multi-user/-tasking/-processing. No scheduler. No forking, no subprocesses, no "exec"; no virtualization of memory or processes; no concept of drivers; no modules, no extensibility. No USB, no PCI, no video functionality to speak of, no networking, no audio. Really, there was very little going on.
See the source code - the whole thing (including command line tools, "kernel"...) fits into a handful of assembler files; they aren't even sorted into subdirectories (as Michael Kjörling pointed out, DOS 1.0 didn't have subdirectories, but they didn't bother adding a hierarchy in later versions either).
If you count the DOS API calls, you end up with roughly 100 services behind the int 0x21 call, which is... not much, compared to today.
Finally, the CPUs were much simpler; there was only one mode (at least DOS ignored the rest, if we ignore EMM386 and such).
Suffice it to say, the programmers back then were quite used to assembler; more complex software was written in assembler on a regular basis. It probably did not even occur to them to rewrite DOS in C. There simply would have been little benefit.
edited 15 hours ago
answered 20 hours ago
AnoE
1
"they didn't even need to be sorted into subdirectories" MS-DOS 1.x didn't even support subdirectories. That was only added in 2.0. So development of 2.0 would pretty naturally not have used subdirectories for source code organization. It's like how, these days, when building a compiler for an updated version of a programming language, any newly introduced language constructs likely don't get used in the compiler source code until the new version of the compiler is quite stable.
– Michael Kjörling
17 hours ago
@MichaelKjörling, phew, thanks for that addition. My first contact with DOS was on a Schneider Amstrad PC, I think (no HDD but two floppies, though I cannot recall whether they were 5 1/4" or already 3 1/2", and I certainly do not recall the version of DOS). I do recall vividly how I once tried out all the DOS commands... up to and including RECOVER.COM on the boot disk. The disk certainly had no subdirectories after THAT one, and I learned the importance of having backups. :-) en.wikipedia.org/wiki/Recover_(command) Good old times.
– AnoE
15 hours ago
up vote
4
down vote
High-level languages are generally easier to work with. However, depending on what one is programming, and on one's experience, programming in assembler is not necessarily all that complex. Remove the hurdles of graphics, sound, and inter-process communication, leaving just a keyboard and a text shell for user interaction, and it's pretty straightforward. Especially with a well-documented BIOS to handle the low-level text in / text out / disk stuff, building a well-functioning program in assembler was straightforward and not all that slow to accomplish.
Looking backward with a 2018 mindset, yeah, it might seem strange to stick with assembler even in the later versions of DOS. It was not, though. Others have mentioned that some tools were eventually written in C. Still, most of it was already written and known to operate well. Why bother rewriting everything? Do you think any users would have cared if a box containing the newest DOS had a blurb stating, "Now fully implemented in the C language!"?
answered yesterday
RichF
up vote
3
down vote
You need to understand that C wasn't a good compromise between low-level and "high-level". The abstractions it offered were tiny, and their cost mattered more on the PC than on the machines where Unix originated (even the original PDP-11/20 had more memory and faster storage than the original IBM PC). The main reason you'd choose C wasn't to get useful abstraction, but rather to improve portability (and this was at a time when the differences between CPUs and memory models were still huge). Since the IBM PC didn't need portability, there was little benefit to using C.
Today, people tend to look at assembly programming as some stone-age technology (especially if you've never worked with modern assembly). But keep in mind that the high-level alternatives to assembly were languages like LISP - languages that didn't even pretend to have any relation to the hardware. C and assembly were extremely close in their capabilities, and the benefits C gave you were often outweighed by the costs. People had large amounts of experience and knowledge of the hardware and assembly, and lots of experience designing software in assembly. Moving to C didn't save as much effort as you'd think.
Additionally, when MS-DOS was being developed, there was no C compiler for the PC (or the x86 CPUs). Writing a compiler wasn't easy (and I'm not even talking about optimizing compilers). Most of the people involved didn't have great insight into state-of-the-art computer science (which, while of great theoretical value, was pretty academic in relation to desktop computers at the time; CS tended to shun the "just get it working, somehow" mentality of commercial software). On the other hand, creating an assembler is pretty trivial - and it already gives you a lot of the capabilities that early compilers for languages like C had. Do you really want to spend the effort to make a high-level compiler when what you're actually trying to do is write an OS? By the time tools like Turbo Pascal came to be, they might have been a good choice - but that was much later, and there'd be little point in rewriting the code already written.
Even with a compiler, don't forget how crappy those computers were. Compilers were bulky and slow, and using a compiler involved flipping floppies all the time. That's one of the reasons why, at the time, languages like C usually didn't improve productivity unless your software got really big - you needed just as much careful design as with assembly, and you relied on your own verification of the code long before it got compiled and executed. The first compiler to really break that trend was Turbo Pascal, which took very little memory and was blazing fast (while including a full-blown IDE with a debugger!) - in 1983. That's about in time for MS-DOS 3 - and indeed, around that time, some of the new tools were already written in C; but that's still no reason to rewrite the whole OS. If it works, why break it? And worse, why risk breaking all of the applications that already run just fine on MS-DOS?
The API of DOS was mostly about invoking interrupts and passing arguments (or pointers) through registers. That's an extremely simple interface that's pretty much just as easy to implement in assembly as in C (or, depending on your C compiler, much easier). Developing applications for MS-DOS required pretty much no investment beyond the computer itself, and a lot of development tools sprang up pretty quickly from other vendors (though at launch, Microsoft was still the only company that provided an OS, a programming language and applications for the PC). All the way through the MS-DOS era, people used assembly whenever small or fast code was required - compilers only slowly caught up with what assembly was capable of, and it usually meant you used something like C or Pascal for most of the application, with custom assembly for the performance-critical bits.
OSes for desktop computers had one main requirement - be small. They didn't have to do much, but whatever they had to keep in memory was memory that couldn't be used by applications. MS-DOS targeted machines with 16 kiB of RAM - that didn't leave a lot of room for the OS. Diminishing that further by using code that wasn't hand-optimized would have been a pointless waste. Even later, as memory started expanding towards the 640 kiB barrier, every kiB still counted - I remember tweaking memory for days to get Doom to run with a mouse, network and sound at the same time (this was already with a 16 MiB PC, but lots of things still had to fit in those 640 kiB - including device drivers). This got even worse with CD-ROM games; one more driver to fit in. And throughout all this time, you wanted to avoid the OS as much as possible - direct memory access was king if you could afford it. So there wasn't really much of a demand for complicated OS features - you mostly wanted the OS to stand aside while your applications were running (on the PC, the major exception would only come with Windows 3.0).
But programming in 100% assembly was nowhere near as tedious as people imagine today (one notable example being RollerCoaster Tycoon, a huge '99 game, written 100% in assembly by one guy). Most importantly, C wasn't significantly better, especially on the PC and with one-person "teams", and it introduced a lot of design conflicts that people had to learn to deal with. People already had plenty of experience developing in assembly, and were very aware of the potential pitfalls and design challenges.
add a comment |Â
up vote
3
down vote
You need to understand that C wasn't a good compromise between low-level and "high-level". The abstractions it offered were tiny, and the cost of them was more important on the PC than on machines where Unix originated (even the original PDP-11/20 had more memory and faster storage than the original IBM PC). The main reason why you'd choose C wasn't to get useful abstraction, but rather to improve portability (this in a time where differences between CPUs and memory models were still huge). Since the IBM PC didn't need portability, there was little benefit to using C.
Today, people tend to look at assembly programming as some stone-age level technology (especially if you've never worked with modern assembly). But keep in mind that the high-level alternatives to assembly were languages like LISP - languages that didn't even pretend to have any relation to the hardware. C and assembly were extremely close in their capabilities, and the benefits C gave you were often outweighed by the costs. People had large amounts of experience and knowledge of the hardware and assembly, and lots of experience designing software in assembly. Moving to C didn't save as much effort as much as you'd think.
Additionally, when MS-DOS was being developed, there was no C compiler for the PC (or the x86 CPUs). Writing a compiler wasn't easy (and I'm not even talking about optimizing compilers). Most of the people involved didn't have a great insight into state-of-the-art computer science (which, while of great theoretical value, was pretty academical in relation to desktop computers at the time; CS tended to shun the "just get it working, somehow" mentality of commercial software). On the other hand, creating an assembler is pretty trivial - and already gives you lots of capabilities that early compilers for languages like C did. Do you really want to spend the effort to make a high-level compiler when what you're actually trying to do is write an OS? By the time tools like Turbo Pascal came to be, they might have been a good choice - but that was much later, and there'd be little point in rewriting the code already written.
Even with a compiler, don't forget how crappy those computers were. Compilers were bulky and slow, and using a compiler involved flipping floppies all the time. That's one of the reasons why at those times, languages like C usually didn't improve productivity unless your software got really big - you needed just as much careful design as with assembly, and rely on your own verification of the code long before it got compiled and executed. The first compiler to really break that trend was Turbo Pascal, which took very little memory and was blazing fast (while including a full-blown IDE with a debugger!) - in 1983. That's about in time for MS DOS 3 - and indeed, around that time, some of the new tools were already written in C; but that's still no reason to rewrite the whole OS. If it works, why break it? And worse, risk breaking all of the applications that already run just fine on MS-DOS?
The API of DOS was mostly about invoking interrupts and passing arguments (or pointers) through registers. That's an extremely simple interface that's pretty much just as easy to implement in assembly as in C (or depending on your C compiler, much easier). Developing applications for MS DOS required pretty much no investment beyond the computer itself, and a lot of development tools sprung up pretty quickly from other vendors (though on launch, Microsoft was still the only company that provided an OS, a programming language and applications for the PC). All the way through the MS-DOS era, people used assembly whenever small or fast code was required - compilers only slowly caught up with what assembly was capable of, though it usually meant you used something like C or Pascal for most of the application, with custom assembly for the performance critical bits.
OSes for desktop computers had one main requirement - be small. They didn't have to do much, but whatever they kept in memory was memory that couldn't be used by applications. MS-DOS targeted machines with 16 kiB of RAM - that didn't leave a lot of room for the OS. Shrinking the available memory further by using code that wasn't hand-optimized would have been a pointless waste. Even later, as memory expanded towards the 640 kiB barrier, every kiB still counted - I remember tweaking memory for days to get Doom to run with a mouse, network and sound at the same time (this was already on a 16 MiB PC, but lots of things still had to fit in those 640 kiB - including device drivers). It got even worse with CD-ROM games: one more driver to fit in. And throughout all this time, you wanted to avoid the OS as much as possible - direct memory access was king if you could afford it. So there wasn't really much demand for complicated OS features - you mostly wanted the OS to stand aside while your applications were running (on the PC, the major exception would only come with Windows 3.0).
But programming in 100% assembly was nowhere near as tedious as people imagine today (one notable example being Roller Coaster Tycoon, a huge '99 game, 100% written in assembly by one guy). Most importantly, C wasn't significantly better, especially on the PC and with one-person "teams", and introduced a lot of design conflicts that people had to learn to deal with. People already had plenty of experience developing in assembly, and were very aware of the potential pitfalls and design challenges.
answered 15 hours ago
Luaan
Yes - C has been around since 1972, but there were no MS-DOS C compilers until the late 80s. Converting an entire OS from assembler to C would be a mammoth task. Even though it might be easier to maintain, it could be a lot slower.
You can see the result of such a conversion when you compare Visual Studio 2008 to VS2010. That was a full-blown conversion from C to C#. OK - it is easier to maintain from the vendor's point of view, but the new product is 24 times slower: on a netbook, 2008 loads in 5 s, while 2010 takes almost 2 minutes.
Also, DOS was a 16-bit OS in a 20-bit address space. This meant a lot of segmented addressing and several memory models to choose from (Tiny, Small, Medium, Compact, Large, Huge): not the flat addressing that you get with 32-bit/64-bit compilers nowadays. The compilers didn't hide this from you: you had to make a conscious decision about which memory model to use, since changing from one model to another wasn't a trivial exercise (I've done this in a past life).
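As a rough illustration of what segment:offset addressing looked like from C (the far keyword, MK_FP() and the B800h text-mode segment are the illustrative assumptions here - they're compiler extensions of e.g. Turbo C, not portable C):

/* Sketch: segment:offset addressing as a 16-bit DOS C compiler exposed it.   */
/* 'far' and MK_FP() are compiler extensions (e.g. Turbo C), not standard C.  */
#include <dos.h>

int main(void)
{
    /* In the Small model an ordinary ("near") pointer is just a 16-bit       */
    /* offset into one 64 KiB data segment; which pointers default to near    */
    /* or far is exactly what Tiny/Small/Medium/Compact/Large/Huge decide.    */

    /* A far pointer carries an explicit segment:offset pair (32 bits total). */
    /* B800:0000 is the colour text-mode video buffer on a stock PC.          */
    char far *video = (char far *) MK_FP(0xB800, 0x0000);

    video[0] = 'A';      /* character in the top-left screen cell */
    video[1] = 0x07;     /* attribute: light grey on black        */

    return 0;
}

Switching such code from, say, Small to Large wasn't just a recompile: pointer sizes and pointer arithmetic change, which is why the choice of model was a real design decision.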
edited 12 hours ago
answered yesterday
cup
2
Assembly to C, and C to C#, cannot really be compared: C# is a JIT'd, GC'd, memory-safe language, whereas C is very close to assembly in the features it provides, just easier to write and maintain. Anyway, the "entire OS" for MS-DOS is quite a small amount of code, so converting it to C, either completely or partially, wouldn't be such a large task.
– juhist
yesterday
OK, maybe C to C# was a bad example. It is the little specialist instructions like IN and OUT, which are used a lot in device drivers, that cause no end of problems with many C compilers. I don't know which instructions are used, but stuff like REPZ often has to be hand-coded: there are no C equivalents.
– cup
yesterday
2
@juhist Try actually doing some of the conversion yourself, then tell us whether it was really a large task or not. (Oh, and make sure you regression-test every edge and corner case, just to be sure your C version doesn't change the behaviour of anything, not just what the (buggy and incomplete) user documentation says is valid!)
– alephzero
yesterday
1
It is an answer; it starts with "Yes".
– wizzwizz4♦
yesterday
3
June 1982 is post-launch by a bit over a year (and so well after key development work) but it is not by any stretch of imagination the late 80's
– Chris Stratton
yesterday
Answering the question - yes.
For the rationale: there's very little gain in rewriting functioning code (unless for example you have portability specifically in mind).
The "newest version" of any major program generally contains much of the code of the previous version, so again, why spend programmer time on rewriting existing features instead of adding new features?
answered yesterday
dave
1
Welcome to Retrocomputing! This answer could be improved by supporting evidence. For example, specifying which version your answer refers to, and giving a link to source code. Although your rationale is valid, it doesn't count as evidence.
– Dr Sheldon
yesterday
It's interesting to note that the direct inspiration for the initial DOS API, namely CP/M, was mostly written in PL/M rather than assembly language.
With PL/M being a somewhat obscure language, and the original Digital Research source code being unavailable for copyright and licensing reasons anyway, writing in assembly language was the most straightforward route to direct binary compatibility - particularly since the machine-dependent part of the operating system, the BIOS, was already provided by Microsoft (it was very common to write the BIOS in assembly language anyway, even for CP/M, for similar reasons).
The original CP/M structure consisted of the BIOS, the BDOS, and the CCP (basically what COMMAND.COM does), with the BIOS implementing the system-dependent parts, the BDOS implementing the available system calls on top of that, and the CCP (typically reloaded after each program run) providing a basic command-line interface.
Much of the BDOS layer was just glue code; the most complex and important part was the file system implementation. There were no file IDs indexing kernel-internal structures: instead, the application program had to provide the room for the respective data structures. Consequently there was no limit on the number of concurrently open files. There was also no file abstraction across devices: disk files used different system calls than console I/O or printers.
Since the core of MS-DOS corresponds roughly to what the BDOS was in CP/M, reimplementing it in assembly language was not that much of a chore. Later versions of MS-DOS tried adding a file handle layer, directories and pipes to the mix to look more like Unix, but partly due to the unwieldy implementation language and partly due to a lack of technical excellence, the results were far from convincing. Other things that were a mess were end-of-file handling (since CP/M only tracked file lengths in multiples of 128 bytes) and text line separators vs. terminal handling (CR/LF is still around to this day).
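To illustrate the difference between the two styles (a hedged sketch: function numbers 0Fh and 3Dh are the documented DOS calls, but the C wrapper via intdos() and the abbreviated FCB layout are only illustrative):

/* Sketch: CP/M-style FCB open vs. the handle-based open added in DOS 2.0.    */
/* Assumes a 16-bit DOS compiler with <dos.h>/<string.h>, small memory model  */
/* so data pointers are plain 16-bit offsets suitable for DX.                 */
#include <dos.h>
#include <string.h>

int main(void)
{
    union REGS r;

    /* Old style: the APPLICATION owns the File Control Block. DOS updates    */
    /* this caller-supplied block, so there is no kernel-side open-file table */
    /* and no fixed limit on open files. Layout abbreviated: byte 0 = drive   */
    /* (0 = default), bytes 1..8 = name, bytes 9..11 = extension.             */
    static unsigned char fcb[37];
    memset(fcb, 0, sizeof fcb);
    memcpy(&fcb[1], "CONFIG  ", 8);          /* blank-padded 8.3 name */
    memcpy(&fcb[9], "SYS", 3);

    r.h.ah = 0x0F;                  /* INT 21h, AH=0Fh: open file via FCB */
    r.x.dx = (unsigned) fcb;        /* DS:DX -> caller-owned FCB          */
    intdos(&r, &r);                 /* AL = 00h on success, FFh otherwise */

    /* New style (DOS 2.0+): DOS hands back a handle into its own table,  */
    /* and the path is a plain ASCIIZ string with directories allowed.    */
    r.h.ah = 0x3D;                  /* INT 21h, AH=3Dh: open existing file */
    r.h.al = 0x00;                  /* access mode: read-only              */
    r.x.dx = (unsigned) "\\CONFIG.SYS";   /* DS:DX -> ASCIIZ path          */
    intdos(&r, &r);                 /* carry clear: AX = file handle       */

    return 0;
}

The handle variant is the bolted-on file handle layer mentioned above: a small integer indexing state kept inside DOS rather than in the application.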
So doing the original implementation in assembly language was reasonable given the CP/M system-call heritage that DOS initially tried to emulate. However, it contributed to attracting the wrong kind of project members for moving to a Unix-like approach to system responsibilities and mechanisms. Microsoft never managed to use the 80286's 16-bit protected mode to create a more modern Windows variant; instead, both Windows 95 and Windows NT used the 80386's 32-bit protected mode - Windows 95 with DOS underpinnings, and Windows NT with a newly developed kernel. Eventually the NT approach replaced the old DOS-based one.
NT was renowned for being "enterprise-level" and resource-hungry. Part of the reason was certainly that its code was bulkier and slower, since it was not written principally in assembly language the way the DOS-based OS cores were. That led to a rather long parallel history of DOS-based and NT-based Windows systems.
So, to answer your question: later "versions of DOS" were written in higher-level languages, but it took them a long time to actually replace the assembly-language-based ones.
edited 11 hours ago
manassehkatz
answered 14 hours ago
user10869