What was the reason for the non-preemptivity of older Linux kernels?

Why did the first Linux developers choose to implement a non-preemptive kernel? Was it to save work on synchronization?



As far as I know, Linux was developed in the early '90s, when PCs had a single processor. What advantage does a non-preemptive kernel offer on such PCs, and why is that advantage reduced on multi-core processors?







asked Dec 24 '17 at 13:22 by Narden, edited Dec 24 '17 at 13:38

          3 Answers

          In the context of the Linux kernel, when people talk about pre-emption they often refer to the kernel’s ability to interrupt itself — essentially, switch tasks while running kernel code. Allowing this to happen is quite complex, which is probably the main reason it took a long time for the kernel to be made pre-emptible.



          At first most kernel code couldn’t be interrupted anyway, since it was protected by the big kernel lock. That lock was progressively eliminated from more and more kernel code, allowing multiple simultaneous calls to the kernel in parallel (which became more important as SMP systems became more common). But that still didn’t make the kernel itself pre-emptible; that took more development still, culminating in the PREEMPT_RT patch set which was eventually merged in the mainline kernel (and was capable of pre-empting the BKL anyway). Nowadays the kernel can be configured to be more or less pre-emptible, depending on the throughput and latency characteristics you’re after; see the related kernel configuration for details.
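
          As a rough illustration of what "more or less pre-emptible" looks like in practice (a minimal sketch; struct item and the loop body are made-up placeholders, while cond_resched() is the real kernel primitive): without full kernel preemption, task switches inside the kernel happen at explicit rescheduling points like the one below rather than at arbitrary instructions.

              /*
               * Illustrative sketch only: a long-running kernel-side loop with an
               * explicit rescheduling point.  "struct item" and the per-item work
               * are hypothetical; cond_resched() is the real API that offers the
               * scheduler a chance to switch tasks here when full kernel
               * preemption is not enabled.
               */
              #include <linux/types.h>
              #include <linux/sched.h>

              struct item {
                      u64 payload;                    /* placeholder field */
              };

              static void process_many_items(struct item *items, size_t n)
              {
                      size_t i;

                      for (i = 0; i < n; i++) {
                              items[i].payload++;     /* stand-in for real per-item work */
                              cond_resched();         /* explicit preemption point */
                      }
              }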



          As you can see from the explanations in the kernel configuration, pre-emption affects throughput and latency, not concurrency. On single-CPU systems, pre-emption is still useful because it allows events to be processed with shorter reaction times; however, it also results in lower throughput (since the kernel spends time switching tasks). Pre-emption allows any given CPU, in a single or multiple CPU system, to switch to another task more rapidly. The limiting factor on multi-CPU systems isn’t pre-emption, it’s locks, big or otherwise: any time code takes a lock, it means that another CPU can’t start performing the same action.
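
          A minimal sketch of that last point (the table and function names are made up; DEFINE_SPINLOCK(), spin_lock() and spin_unlock() are the real primitives): whichever CPU takes the lock first forces any other CPU calling the same function to spin until the lock is released, no matter how pre-emptible the kernel is.

              /*
               * Illustrative sketch only: a spinlock serialising an update across
               * CPUs.  "shared_table" and update_shared_table() are made-up names.
               */
              #include <linux/spinlock.h>

              static DEFINE_SPINLOCK(table_lock);
              static int shared_table[64];

              static void update_shared_table(int idx, int val)
              {
                      spin_lock(&table_lock);         /* only one CPU past this point...  */
                      shared_table[idx] = val;        /* ...so a second CPU calling this  */
                      spin_unlock(&table_lock);       /* function spins until we release  */
              }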






          – Stephen Kitt, answered Dec 24 '17 at 14:13, edited Dec 25 '17 at 11:38 (25 votes)










































            A preemptive kernel only means that there is no Big Kernel Lock.



            Linux has had preemptive multitasking (i.e. user code was preemptible) from the very beginning (as far as I know, the very first Linux 0.0.1 that Linus uploaded to the funet FTP server was already preemptively multitasking). If you ran, for example, multiple compression or compilation processes, they executed in parallel from the first moment.



            Contrast that with the then widely used Windows 3.1: there, once a task got the CPU from the "kernel", it was by default its own responsibility to decide when to give control back to the OS (or to other tasks). If a program had no special support for this (which required additional programming work), then while it was executing, all other tasks were suspended. Even most of the basic applications bundled with Windows 3.1 worked this way.



            Preemptive multitasking means that tasks cannot hold on to the CPU for as long as they like: when their time slice expires, the kernel takes the CPU away from them. Thus, in a preemptive operating system, a badly written or malfunctioning process can't freeze the OS or prevent other processes from running. Linux was always preemptive for user-space processes.
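
            A quick user-space illustration (a minimal sketch; the iteration counts are arbitrary): two CPU-bound child processes that never voluntarily yield still make progress side by side, because the scheduler preempts each one when its time slice runs out.

                /*
                 * Illustrative sketch only: two busy-looping children that never
                 * yield.  Their output interleaves even on a single CPU, because
                 * the kernel preempts each child when its time slice expires.
                 */
                #include <stdio.h>
                #include <unistd.h>
                #include <sys/wait.h>

                static void busy_child(char tag)
                {
                        unsigned long long i;

                        for (i = 1; i <= 2000000000ULL; i++) {
                                if (i % 500000000ULL == 0)
                                        printf("%c reached %llu iterations\n", tag, i);
                        }
                        _exit(0);
                }

                int main(void)
                {
                        if (fork() == 0)
                                busy_child('A');        /* child A: pure CPU work */
                        if (fork() == 0)
                                busy_child('B');        /* child B: pure CPU work */

                        wait(NULL);                     /* parent just waits for both */
                        wait(NULL);
                        return 0;
                }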



            The Big Kernel Lock means that in some cases, inside kernel space, there could still be locks preventing other processes from running the protected code. For example, you could not mount multiple filesystems concurrently: if you issued multiple mount commands, they were still executed one after the other, because mounting required taking the Big Kernel Lock.
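
            For historical flavour, a minimal sketch of what that looked like in code (do_something_global() is a made-up name; lock_kernel() and unlock_kernel() were the real helpers until the BKL was finally removed around kernel 2.6.39): code paths that had not yet been converted to fine-grained locking simply bracketed their work with the BKL.

                /*
                 * Historical sketch only (pre-2.6.39 kernels): taking and releasing
                 * the Big Kernel Lock around a code path not yet converted to
                 * fine-grained locking.  do_something_global() is a made-up name;
                 * lock_kernel()/unlock_kernel() were the real (now removed) helpers.
                 */
                #include <linux/smp_lock.h>

                static void do_something_global(void)
                {
                        lock_kernel();          /* no other task runs BKL-protected code now */
                        /* ... e.g. parts of mount(2) handling used to sit here ... */
                        unlock_kernel();        /* another mount etc. may now proceed */
                }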



            Making the kernel preemptive required eliminating this Big Kernel Lock, i.e. making mount and all the other such tasks able to run concurrently. It was a big job.



            Historically, this was made really urgent by the growing support for SMP (multi-CPU systems). At first, these really were mainboards with multiple CPUs; later, multiple CPUs ("cores") were integrated into a single chip. Today, genuinely multi-CPU mainboards are rare (they are typically found in costly server systems), and genuinely single-core systems (a single CPU with a single core) are rare as well.



            Thus, the answer to your question isn't really "what was the reason for non-preemptivity", because Linux was always preemptive. The real question is what made preemptive kernel execution truly necessary, and the answer to that is the increasing prevalence of many-CPU, many-core systems.






            – peterh, answered Dec 24 '17 at 13:40, edited Apr 5 at 16:40 by orenmn (11 votes)






















            • I didn't actually understand :( Until kernel version 2.4, only user processes were preemptive and the kernel was non-preemptive. As I answered someone before, I think the reason was to save the work on synchronization and the deadlocks that could arise with a preemptive implementation on a single-core processor. What do you think?
              – Narden
              Dec 24 '17 at 13:47











            • @Narden I don't know where you read that. Roughly until 1.3 or 2.0, only a single process could be in kernel space, even if multiple processes were running. This limitation was eliminated roughly with 2.0. Until around 2.4, there was a Big Kernel Lock (i.e. simultaneously mounting multiple filesystems didn't work).
              – peterh
              Dec 24 '17 at 13:50










            • @Narden But it is not cooperative multitasking: no process was ever required to intentionally give the CPU back to the task scheduler. Yes, the reason for the BKL was likely that doing this correctly is a heck of a lot of work: 1) locks have to be split, 2) lock-free data structures should be used where possible, 3) split locks lead to deadlocks/livelocks, which are typically particularly dirty, hard-to-fix bugs that all have to be found and fixed, 4) all the drivers have to be ported to the changes in the kernel core API.
              – peterh
              Dec 24 '17 at 13:57










            • I read it while I was searching for an answer, and it's also given as information in a course that I'm taking, named Operating Systems.
              – Narden
              Dec 24 '17 at 13:57






            • The Big Kernel Lock prevented other threads from entering the kernel when one was executing in the kernel. Only one thread was allowed, because the kernel was not designed from the start with symmetric multiprocessing in mind. A pre-emptive kernel means something different, however. Traditionally the execution context was changed only when the kernel returned to user space. In a pre-emptive kernel a thread can be pre-empted in the middle of running kernel code.
              – Johan Myréen
              Dec 24 '17 at 14:10






























            This isn't a technical answer but a historical answer to the specific question posed by the OP: "What was the reason of the non-preemptivity of older Linux kernels?"



            (I assume, as explained by @peterh in his answer and comments, that by "non-preemptivity" the OP is referring to the fact that only one user process could be inside the kernel (in an API call) at a time, to the Big Kernel Lock, or to both.)



            Linus Torvalds was interested in learning how operating systems worked, and the way he learned was to write one. His model, and base, and initial development environment was Minix, an existing OS for educational purposes (i.e., not a production OS) which was not free (as in open source, at that time - it wasn't free as in beer, either).



            So he wrote a kernel with no preemption (with the Big Kernel Lock mentioned in other answers), because that's the way you do it if you want to get your new OS up and running quickly for educational purposes: it's much, much simpler that way. A kernel that supports concurrent multiprogramming of user programs and devices is hard enough to write - it's extremely difficult to make the kernel itself concurrent.



            If he had known then how popular/useful/important Linux would become ... he would have probably done it the same way. (IMO only, I have no idea what he actually thinks.) Because you've gotta walk before you can run.



            And it stayed that way for a good long while, because a) there was a lot of other work to be done on making Linux what it is today (or even what it was then), and b) changing it would be a major, difficult undertaking (as explained in other answers).






            – davidbak, answered Dec 26 '17 at 4:34, edited Dec 26 '17 at 4:39 (3 votes)





















