Will PC-DOS run faster on 4 or 8 core modern machines?
When I run PC-DOS on my 4-core AMD Phenom chip, does it take advantage of the extra parallel CPUs? If not, is there a way to coax DOS to use all available CPUs, or does this require specific developer programming in assembly or at C compile time?
ms-dos assembly cpu
Apparently there was Multiuser DOS (not derived from MS-DOS): en.wikipedia.org/wiki/Multiuser_DOS#Multiuser_DOS
– Thorbjørn Ravn Andersen
Feb 5 at 23:29
You could certainly run DOS multiple times in parallel in VMs, assigning a dedicated CPU to each of them.
– Thomas Weller
Feb 5 at 23:31
See also: superuser.com/questions/726348/…
– traal
Feb 6 at 0:46
edited Feb 6 at 22:16
Stephen Kitt
asked Feb 5 at 12:54
jwzumwalt
5 Answers
No, DOS won't ever use any additional CPU (*1).
(Though it might run faster because the new CPUs themselves are faster.)
This is much the same way DOS doesn't take advantage of extended memory or additional instructions.
DOS is a
- Single CPU
- Single User
- Single Task
- Single Program
- Real Mode
- 8086
operating system.
Even though it got a few extensions over time to tap a bit into newer developments, like
- A20 Handler for HMA usage
- Utilities for extended memory usage like HIMEM.SYS or EMM386
- Usage of certain 286 instructions
it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.
The same goes for third-party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDOS et al.), or the furthest-reaching combination of them all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.
Of course, application programs can use additional features of a machine, whether new CPU instructions, a new graphics card, more memory or, in this case, an additional CPU - much like DOS extenders allowed protected-mode programs to run as DOS applications. The most notable is maybe DOS/4GW, which became hugely popular because it was bundled with Watcom compilers.
Then again, I do not know of any extender allowing the use of concurrent CPUs.
It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.
*1 - What to call it? CPU, core, hyperthread, socket? In ye good olde times of simple microprocessors we couldn't have cared less; nowadays it gets blurry. With multiple processing units of varying kinds it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building, talking instead about sockets and leaving open how many processing units the plugged-in IC will have - if any at all. Often the term core is used instead, but since Intel uses that as a brand name as well, it got blurred again. So for all RC.SE purposes I'll stay with the term CPU, vague as it is, and qualify where needed.
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
If by PC-DOS you mean IBM PC DOS, which was a rebrand/derivative of MS-DOS, then the answer is no - DOS will only ever support a single core. Hyper-Threading and multiple cores are simply not supported by DOS.
Making DOS use multiple cores would be a major operation. First, DOS would have to support multitasking - not task switching or cooperative multitasking, but full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these would be major undertakings; DOS's peers in the 90s took years to get reliable and efficient multiprocessing support.
It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.
If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.
AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.
– Stephen Kitt
Feb 5 at 13:24
"It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?
– Felix Palmen
Feb 5 at 14:31
Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)
– Richard Downer
Feb 5 at 17:08
@Tommy how does cooperative multitasking prevent you from using multiple cores? If a core is idle, do something on it, if not, wait until there's a yield or I/O on one... that's still not preemptive
– Felix Palmen
Feb 5 at 17:08
@FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.
– Tommy
Feb 5 at 17:21
DOS itself won't do anything to boot up the "extra" cores in a multicore system; it runs only on the boot CPU.
A program that does that is normally called an operating system. You could certainly have a program that takes over from DOS - maybe even one that saves the previous DOS state and can exit back to DOS - but nobody's written such a thing. You can probably find bootloaders that will load Linux from DOS, though. It would involve taking over the interrupt table and so on.
https://stackoverflow.com/questions/980999/what-does-multicore-assembly-language-look-like includes an answer with some details on what you'd need to do on x86 to send the inter-processor interrupts (IPIs) that bring up the other cores.
CPUs with fewer cores tend to have higher guaranteed clock speeds, because each core has a higher power budget. But CPUs that support "turbo" typically have a high single-core turbo clock they can use when other cores are idle.
Modern Intel CPUs with more cores also tend to have more L3 cache; a single core can benefit from all the L3 cache on the whole die. (Not on other CPUs in a multi-socket system, though).
If you're using a DOS extender that lets a DOS program access more than 1MiB of RAM (e.g. running in 32-bit protected mode), then it might actually benefit from having more L3 cache than the 3MiB a low-end dual-core system has.
But otherwise DOS can't use more memory than the smallest L3 caches on modern mainstream x86 CPUs. (On Skylake-X, the per-core L2 caches are 1MiB, up from 256kiB, so other than uncacheable I/O / device / VGA memory, everything would be a cache hit with ~13 cycle latency, much faster than the ~45 cycle L3 latency in a dual or quad core!)
But there's a downside to having more cores on Intel CPUs: L3 cache latency. They put a slice of L3 along with each core, and cores + L3 are connected by a ring bus. So more cores means more hops on the ring bus on average to get to the right slice. (This is reportedly even worse on Skylake-X, where a mesh connects cores. It's odd because a mesh should mean fewer hops.)
This extra latency also affects DRAM access, so single-core memory bandwidth is better on dual/quad-core desktop CPUs than on big many-core Xeons (see "Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?" on Stack Overflow). Even though the Xeon has quad-channel or 6-channel memory controllers, a single core can't saturate them, and it actually has worse bandwidth than a single core of the same clock speed on a quad-core part. Bandwidth is limited by max_concurrency / latency.
(Of course this doesn't apply to L2 cache hits, and 256kiB L2 is a good fraction of the total memory that DOS programs can use without a DOS extender. And 2, 4, or 8 MiB of L3 cache is pretty huge by DOS standards.)
AMD Ryzen is different: it uses "core clusters" in which each group of 4 cores shares an L3 cache. More total cores won't give a single core more L3 to benefit from, but within a cluster, L3 latency is fixed and pretty good.
The answers above are correct about core usage, but DOS would still be faster than on old machines because the CPU clock (MHz/GHz) is faster. This is actually problematic for many old games, because things happen faster than you can react.
If you wanted to test/play without committing to a wipe/load scenario, you could always try FreeDOS on an emulator or on an old hard drive.
Actually, the canonical example of speed causing problems wasn't, IMHO, games, but rather Turbo Pascal. There was a bug in certain versions of Turbo Pascal where executables had a timing loop at the beginning and on machines of a certain speed (I think I first saw this with Pentium, but I'm not 100% sure), they would get a runtime error due to, I think, an overflow of an integer variable. Fortunately, someone came up with a patch (I'm sure I still have it here somewhere) that bypassed that code - I would run that as part of the compile process.
– manassehkatz
Feb 5 at 15:26
@manassehkatz The first time I experienced "too fast" was Tetris that originally I had for a 286. When I installed it on a 386, I had to turn off the turbo button for it to be remotely playable. By my 486, it was unplayable :P
– UnhandledExcepSean
Feb 5 at 15:29
For the record, I had to use the Turbo Pascal patch program in order to run Turbo Pascal software on a Pentium 200 (non-MMX) if memory serves. But only in real DOS, not when running inside Windows 95. So the speed of execution that causes its startup code to attempt a divide by zero must be somewhere close below that of an unencumbered P200.
– Tommy
Feb 5 at 16:36
@manassehkatz, 18.2 Hz was 1/65536th of 1/12th of 4 times the 3.58 MHz NTSC color subcarrier frequency (the same 14.318 MHz master crystal that, divided by 3, gave the 4.77 MHz CPU clock). Fortunately, the timer could be adjusted to produce other frequencies.
– Mark
Feb 5 at 20:55
@manassehkatz There was also the problem that the computers were so slow that adding any timing methods would make the game run slow. And many systems shipped with a "Turbo" button that restricted the speed back to a fairly standard rate, so the slowdown could be handled in hardware.
– trlkly
Feb 6 at 6:07
When a PC starts, only one CPU core is launched (the so-called bootstrap processor). The OS runs on this CPU, and then starts the other cores by sending IPIs (inter-processor interrupts). To do this, the OS first switches from the PIC (interrupt controller) to APIC (advanced) mode and uses its registers.
DOS is not aware of other CPUs or the APIC, and does not know how to send IPIs.
So DOS can't even start the other cores. They simply stay turned off and do no work.
If you want to use them, you have to deal with the APIC yourself. It is not an easy task, though. See https://wiki.osdev.org/Symmetric_Multiprocessing
I bet the "solution" would be to write some kind of "driver" inside DOS that would enable access to the APIC (and, if mode switching is required, fake everything so it appears DOS is still in real mode), sort of like how the "DOS extenders" (see the answer above) allowed DOS programs to access more than 1 MB of RAM. Theoretically, if the machine had a GPU and an OpenCL "driver" for DOS, you should be able to use the GPU as a coprocessor. After all, the original 8086, the 286, and maybe the 386 allowed you to buy a "math coprocessor" with a floating-point unit inside.
– don bright
Feb 7 at 0:25
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
No, DOS won't use any additional CPU (*1) ever.
(Though it might run faster due them new CPUs being faster)
Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.
DOS is a
- Single CPU
- Single User
- Single Task
- Single Program
- Real Mode
- 8086
operating system.
Even through it got a few extensions over time to tap a bit into newer developments, like
- A20 Handler for HMA usage
- Utilities for extended memory usage like HIMEM.SYS or EMM386
- Usage of certain 286 instructions
it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.
Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.
Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.
Then again, I do not know of any extender allowing the use of concurrent CPUs.
It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.
*1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good olde times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
add a comment |
No, DOS won't use any additional CPU (*1) ever.
(Though it might run faster due them new CPUs being faster)
Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.
DOS is a
- Single CPU
- Single User
- Single Task
- Single Program
- Real Mode
- 8086
operating system.
Even through it got a few extensions over time to tap a bit into newer developments, like
- A20 Handler for HMA usage
- Utilities for extended memory usage like HIMEM.SYS or EMM386
- Usage of certain 286 instructions
it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.
Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.
Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.
Then again, I do not know of any extender allowing the use of concurrent CPUs.
It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.
*1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good olde times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
add a comment |
No, DOS won't use any additional CPU (*1) ever.
(Though it might run faster due them new CPUs being faster)
Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.
DOS is a
- Single CPU
- Single User
- Single Task
- Single Program
- Real Mode
- 8086
operating system.
Even through it got a few extensions over time to tap a bit into newer developments, like
- A20 Handler for HMA usage
- Utilities for extended memory usage like HIMEM.SYS or EMM386
- Usage of certain 286 instructions
it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.
Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.
Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.
Then again, I do not know of any extender allowing the use of concurrent CPUs.
It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.
*1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good olde times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.
No, DOS won't use any additional CPU (*1) ever.
(Though it might run faster due them new CPUs being faster)
Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.
DOS is a
- Single CPU
- Single User
- Single Task
- Single Program
- Real Mode
- 8086
operating system.
Even through it got a few extensions over time to tap a bit into newer developments, like
- A20 Handler for HMA usage
- Utilities for extended memory usage like HIMEM.SYS or EMM386
- Usage of certain 286 instructions
it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.
Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.
Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.
Then again, I do not know of any extender allowing the use of concurrent CPUs.
It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.
*1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good olde times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.
edited Feb 6 at 13:21
answered Feb 5 at 13:10
RaffzahnRaffzahn
52k6123210
52k6123210
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
add a comment |
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
Comments are not for extended discussion; this conversation has been moved to chat.
– Chenmunka♦
Feb 11 at 10:55
add a comment |
If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.
Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.
It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.
If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.
2
AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.
– Stephen Kitt
Feb 5 at 13:24
2
"It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?
– Felix Palmen
Feb 5 at 14:31
2
Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)
– Richard Downer
Feb 5 at 17:08
1
@Tommy how does cooperative multitasking prevent you from using multiple cores? If a core is idle, do something on it, if not, wait until there's a yield or I/O on one... that's still not preemptive
– Felix Palmen
Feb 5 at 17:08
2
@FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.
– Tommy
Feb 5 at 17:21
|
show 24 more comments
If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.
Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.
It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.
If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.
2
AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.
– Stephen Kitt
Feb 5 at 13:24
2
"It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?
– Felix Palmen
Feb 5 at 14:31
2
Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)
– Richard Downer
Feb 5 at 17:08
1
@Tommy how does cooperative multitasking prevent you from using multiple cores? If a core is idle, do something on it, if not, wait until there's a yield or I/O on one... that's still not preemptive
– Felix Palmen
Feb 5 at 17:08
2
@FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.
– Tommy
Feb 5 at 17:21
|
show 24 more comments
If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.
Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.
It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.
If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.
If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.
Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.
It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.
If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.
answered Feb 5 at 13:13
Richard Downer
2,290634
2
AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.
– Stephen Kitt
Feb 5 at 13:24
2
"It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?
– Felix Palmen
Feb 5 at 14:31
2
Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)
– Richard Downer
Feb 5 at 17:08
1
@Tommy how does cooperative multitasking prevent you from using multiple cores? If a core is idle, do something on it, if not, wait until there's a yield or I/O on one... that's still not preemptive
– Felix Palmen
Feb 5 at 17:08
2
@FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.
– Tommy
Feb 5 at 17:21
|
show 24 more comments
DOS itself won't do anything to boot up the "extra" cores in a multicore system; it runs only on the boot CPU.
A program that does that is normally called an operating system. You could certainly have a program that takes over from DOS. Maybe even one that saves the previous DOS state and could exit back to DOS. But nobody's written such a thing. You can probably find bootloaders that will load Linux from DOS, though. It would involve taking over the interrupt table and so on.
https://stackoverflow.com/questions/980999/what-does-multicore-assembly-language-look-like includes an answer with some details on what you'd need to do on x86 to send the inter-processor interrupts (IPIs) that bring up the other cores.
CPUs with fewer cores tend to have higher guaranteed clock speeds, because each core has a higher power budget. But CPUs that support "turbo" typically have a high single-core turbo clock they can use when other cores are idle.
Modern Intel CPUs with more cores also tend to have more L3 cache; a single core can benefit from all the L3 cache on the whole die. (Not on other CPUs in a multi-socket system, though).
If you're using a DOS extender that allows a DOS program to access more than 1MiB of RAM (e.g. running in 32-bit protected mode), then it might actually benefit from having more than the 3MiB of L3 cache that a low-end dual-core system has.
But otherwise DOS can't use more memory than the smallest L3 caches on modern mainstream x86 CPUs. (Or on Skylake-X, the per-core L2 caches are 1MiB, up from 256kiB, so other than uncacheable I/O / device / VGA memory, everything would be a cache hit with 13 cycle latency, much faster than the ~45 cycle latency L3 in a dual or quad core!)
But there's a downside to having more cores on Intel CPUs: L3 cache latency. They put a slice of L3 along with each core, and cores + L3 are connected by a ring bus. So more cores means more hops on the ring bus on average to get to the right slice. (This is reportedly even worse on Skylake-X, where a mesh connects cores. It's odd because a mesh should mean fewer hops.)
This extra latency also affects DRAM access, so single-core memory bandwidth is better on dual / quad core desktop CPUs than on big many-core Xeons (see "Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?" on Stack Overflow). Even though the Xeon has quad-channel or 6-channel memory controllers, a single core can't saturate them and actually has worse bandwidth than a single core of the same clock speed on a quad-core part. Bandwidth is limited by max_concurrency / latency.
(Of course this doesn't apply to L2 cache hits, and 256kiB L2 is a good fraction of the total memory that DOS programs can use without a DOS extender. And 2, 4, or 8 MiB of L3 cache is pretty huge by DOS standards.)
AMD Ryzen is different: it uses "core clusters" in which each group of 4 cores shares an L3 cache. More total cores won't give you more L3 that a single core can benefit from. But within a cluster, L3 latency is fixed and pretty good.
add a comment |
answered Feb 6 at 0:21
Peter Cordes
1,012510
add a comment |
The answers above are correct about core usage, but DOS would still run faster than on old machines because the CPU clock (GHz now vs. MHz then) is far higher. This is actually problematic in many old games, because things happen faster than you can react.
If you wanted to test/play without committing to a wipe/load scenario, you could always try FreeDOS on an emulator or on an old hard drive.
4
Actually, the canonical example of speed causing problems wasn't, IMHO, games, but rather Turbo Pascal. There was a bug in certain versions of Turbo Pascal where executables had a timing loop at the beginning and on machines of a certain speed (I think I first saw this with Pentium, but I'm not 100% sure), they would get a runtime error due to, I think, an overflow of an integer variable. Fortunately, someone came up with a patch (I'm sure I still have it here somewhere) that bypassed that code - I would run that as part of the compile process.
– manassehkatz
Feb 5 at 15:26
1
@manassehkatz The first time I experienced "too fast" was Tetris that originally I had for a 286. When I installed it on a 386, I had to turn off the turbo button for it to be remotely playable. By my 486, it was unplayable :P
– UnhandledExcepSean
Feb 5 at 15:29
2
For the record, I had to use the Turbo Pascal patch program in order to run Turbo Pascal software on a Pentium 200 (non-MMX) if memory serves. But only in real DOS, not when running inside Windows 95. So the speed of execution that causes its startup code to attempt a divide by zero must be somewhere close below that of an unencumbered P200.
– Tommy
Feb 5 at 16:36
1
@manassehkatz, 18.2 Hz was 1/65536th of 1/12th of 4 times the 3.58 MHz NTSC color subcarrier frequency. Fortunately, the timer could be adjusted to produce other frequencies.
– Mark
Feb 5 at 20:55
3
@manassehkatz There was also the problem that the computers were so slow that adding any timing methods would make the game run slow. And many systems shipped with a "Turbo" button that restricted the speed back to a fairly standard rate, so the slowdown could be handled in hardware.
– trlkly
Feb 6 at 6:07
|
show 1 more comment
answered Feb 5 at 15:00
UnhandledExcepSean
1211
When a PC starts, it brings up only one CPU core (the so-called bootstrap processor). The OS executes on this core, then starts the other cores by sending IPIs (inter-processor interrupts). To do this, the OS first switches the PIC (interrupt controller) to APIC (advanced) mode and uses the APIC's registers.
DOS is not aware of other CPUs or the APIC, and does not know how to send an IPI.
So DOS can't even start the other cores; they are simply left parked and do no work.
If you want to use them, you would have to deal with the APIC yourself. It is not an easy task, though. See https://wiki.osdev.org/Symmetric_Multiprocessing
I bet that the "solution" would be to write some kind of "driver" inside of DOS that would enable access to the APIC (and, if mode switching is required, fake everything and make it appear as if DOS was still in real mode), sort of like how the "DOS extenders" (see above answer) allowed DOS programs to access more than 1 MB of RAM. Theoretically, if the machine had a GPU and an OpenCL "driver" for DOS, you should be able to use the GPU as a coprocessor. After all, the original 8086, 286, and maybe 386 allowed you to buy a "Math Coprocessor" which had a floating point unit inside it.
– don bright
Feb 7 at 0:25
add a comment |
answered Feb 6 at 21:33
user996142
1211
I bet the "solution" would be to write some kind of "driver" inside DOS that enables access to the APIC (and, if mode switching were required, fakes everything so that it appears as if DOS were still in real mode), sort of like how the DOS extenders (see the answer above) allowed DOS programs to access more than 1 MB of RAM. Theoretically, if the machine had a GPU and an OpenCL "driver" for DOS, you should be able to use the GPU as a coprocessor. After all, the original 8086, the 286, and maybe the 386 allowed you to buy a "math coprocessor" with a floating-point unit inside it.
– don bright
Feb 7 at 0:25