Why were chips socketed in early computers?

In many early computers, many of the chips were mounted in sockets rather than soldered directly to the board. For example, this series of pictures of the Tandy CoCo 1 has a note to the effect that all the chips are socketed: http://tech.markoverholser.com/?q=node/50



I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.



But what was the purpose of doing it with all the others? I would expect that soldering all the other chips directly to the board would both save money and improve reliability by eliminating one more thing that could go wrong.










Tags: hardware, chip






asked Sep 5 at 14:20 by rwallace; edited Sep 5 at 14:53 by fadden







  • 14




    I'll let someone who has more extensive experience give an official answer. But I suspect that it has to do with complexity. Your most complex chips - and the CPU is generally at the top of the list - are most likely to (a) have problems requiring replacement (e.g., design bugs found after manufacture, which has happened a few times to Intel) and (b) are most important to keep protected from any damage during assembly - wave soldering being pretty safe but why risk the most important chip when you can put in a socket and insert the chip after the soldering is all done.
    – manassehkatz
    Sep 5 at 14:30






  • 22




    Customers of very early computers expected sockets for maintainability (at least in the "nerd market").
    – tofro
    Sep 5 at 15:50






  • 7




    This was the general common practice with DIP (and other non SMD) IC packages. It allowed for quick fixes of the boards without using desoldering stations, etc. And the manufacturing techniques were different too.
    – nbloqs
    Sep 5 at 17:16







  • 19




    You know how people rage on internet forums when Apple or HP ships a product with soldered RAM? In the '70s soldered anything would have elicited the same reaction. Only they didn't have forums, so they would have picked up actual torches and pitchforks.
    – Tobia Tesan
    Sep 6 at 10:51






  • 3




    I wonder if in the future all car parts will be welded together, and complete car replacement when any piece breaks will be the norm, and then we'll wonder, why did we join the pieces with nuts and bolts before?
    – JoL
    Sep 6 at 15:20












8 Answers

















Accepted answer, 64 votes, by Raffzahn (answered Sep 5 at 16:57; edited Sep 10 at 13:33 by Wilson):










Caveat: it might be useful to distinguish between high-volume, low-cost computers (like the CoCo mentioned) and low-volume, high-cost machines (like Intel boards, or workstations). I assume the question is mainly about the high-volume, low-cost machines.





Why were chips socketed in early computers?




There are several reasons:



  • Most important: chips were not fitted by robots back then, but by hand. Any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.


  • Second, sockets can endure more abuse (in this case heat) than chips, so inserting the chips later widens the acceptable manufacturing tolerances and lowers the failure rate (*2).


  • Next, fitting sockets instead of chips decouples production. Boards can be prepared, finished, and stocked until there are orders to fill.


  • As a result, investment can be reduced by ordering the most expensive single part(s), the chips, as late as possible (*3).


  • Similarly, this helps keep board production running even when some chips are unavailable for a while (*4).



I would expect that soldering all the other chips directly to the board would both save money and improve reliability by eliminating one more thing that could go wrong.




Yes, it does, as soon as there are automated placement machines and, more importantly, an order of sufficient size to keep the capital cost in check.




I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.




Not really, as that's something users may want, not necessarily the manufacturer.




As said before, this is focused on high-volume, low-cost machines. While all of these reasons are just as valid for high-performance, high-cost boards, labour and stock costs matter somewhat less there: labour is already rather intensive for small production runs, and stock costs aren't that big in the first place.



Further, the repair aspect wasn't much of a consideration for cheap, mass-produced machines. Commodore's cost share of a C64 was less than 150 USD for most of its life (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check, and even more so with the lower-priced CoCo. So serviceability wasn't a major concern.




*1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes, and fit and solder new ones. Except that this might have cost more than the wholesale price of such a (cheap) computer.



*2 - Wave soldering is a rather reliable process and has been around since at least 1947 (the oldest usage I know of). It does require the whole board, with all placed components, to be preheated to 130 to 160 degrees Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this could cause failures within the chips, so soldering only the sockets was a great idea.



*3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.



*4 - Commodore is well known to have run into this situation several times, for example with some batches of the C16.






  • 1




    "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
    – traal
    Sep 5 at 21:48






  • 8




    @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (though that depends of course very much on the production organization), a wrongly fitted socket would always mean a smaller one than intended, and that's much easier to detect than having the wrong chip of the right size inserted.
    – Raffzahn
    Sep 5 at 21:58






  • 8




    Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
    – fluffy
    Sep 6 at 3:37






  • 2




    @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it wouldn't matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
    – Michael Kjörling
    Sep 6 at 11:33






  • 2




    Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
    – pjc50
    Sep 6 at 14:54

















Answer, 18 votes, by cmm (answered Sep 5 at 19:51):













One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.



At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.



When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.






    Answer, 10 votes:













    There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didn’t run, e.g. didn’t pass initial power on testing or burn-in. Simply swapping out certain chips by trial-and-error was a very common debugging technique to find the bad ones, much harder to do if the chips are all soldered down.



    Some personal computer models would not run if all the parts were at their worst-case timing specs. Swapping some parts around was likely to mix in parts that were not as slow and allow the system to meet the timing necessary to run.



    Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
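    For illustration only (this sketch is mine, not part of the original answer): the kind of worst-case timing budget such CAE tools check automatically, with entirely made-up delay numbers.

        # Hypothetical worst-case timing budget for a memory read path.
        # All delays below are illustrative assumptions, not from any real datasheet.

        CLOCK_HZ = 4_000_000                  # assumed 4 MHz system clock
        cycle_ns = 1e9 / CLOCK_HZ             # 250 ns per clock cycle

        # Delays along the CPU -> address decoder -> ROM -> CPU data path, in ns
        delays_worst = {
            "cpu_address_valid": 110,         # every part at its slowest (worst-case) spec
            "address_decoder":    30,
            "rom_access":        350,
            "cpu_data_setup":     50,
        }
        # The same parts, assumed to run at 75% of their worst-case delay
        delays_typical = {name: d * 0.75 for name, d in delays_worst.items()}

        def slack_ns(delays, budget_cycles=2):
            """Timing slack: clock budget minus the sum of path delays (negative = fails)."""
            return budget_cycles * cycle_ns - sum(delays.values())

        print(f"worst-case slack: {slack_ns(delays_worst):7.1f} ns")    # -40 ns: may not run
        print(f"typical slack:    {slack_ns(delays_typical):7.1f} ns")  # +95 ns: runs fine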






    • 1




      Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket type.
      – Uwe
      Sep 9 at 10:42

















    Answer, 9 votes:













    I suspect this is down to quality engineering (QE) — in the early micro days many chips were susceptible to failures (during manufacturing) but wouldn’t be as well-tested as they are now, so the chances of getting bad batches were higher. If you finish assembling a micro and discover that one of its chips is bad, it’s easier to fix if it’s socketed.



    Another aspect is that it’s easier to solder a socket than a component, or rather it’s harder to damage a socket when soldering it than it is to damage a component. That might not be relevant when manufacturing at scale though (I’m not sure when wave soldering became common).






    • 2




      The oldest wave soldering setup I know of is from a radio production line of 1947 at Grundig's new plant in Nürnberg. It's safe to assume that similar technology was used in the US or Britain as well at the same time.
      – Raffzahn
      Sep 5 at 18:50






    • 1




      FYI, wave soldering has the problem of entire components becoming unseated during the process and requiring manual rework.
      – traal
      Sep 5 at 21:51






    • 3




      @traal That's why a tight fit between holes and pins is essential, and the reason why sockets often have flat, wave-shaped pins a tiny bit larger than the hole. It's also why ICs by default have outward-bent pins, which need to be carefully pressed in by the placing machine/handler. When released, they splay outward and keep the IC well in place during soldering.
      – Raffzahn
      Sep 5 at 22:02






    • 1




      Probably obvious once you spell it out, but what's QE?
      – Michael Kjörling
      Sep 6 at 11:35






    • 1




      @MichaelKjörling quality engineering, in this context.
      – Stephen Kitt
      Sep 6 at 11:58

















    Answer, 8 votes:













    A long time ago you could solder an entire Acorn Atom yourself. A hobbyist who was unsure about his soldering skills would use sockets, thinking that he was playing it safe. I helped diagnose a non-working board where the pin for the CPU clock made no contact with the socket. The signal on the bottom of the board was good; it took us hours to find this. The customer had used cheap sockets with contacts only on the inside, on all ICs. Bending all pins inward before pushing the ICs into the sockets fixed that. You could even buy special DIP IC inserters.



    Tin-plated ICs and gold-plated sockets (Garry/Augat) don't go well together: the gold dissolves into the tin and ultimately all contact is lost.



    The added height due to the sockets increases the inductance of the tracks a lot, leading to bad HF behavior and bad EMC. This was OK when computers were slower and there were no FCC regulations, but not now.
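    A rough back-of-the-envelope sketch of why that extra height matters (my own illustration, using the common rule of thumb of roughly 1 nH of inductance per mm of lead length; every number here is an assumption):

        import math

        # Rule of thumb: a thin lead or pin adds roughly 1 nH of inductance per mm of length.
        NH_PER_MM = 1.0

        socket_height_mm = 5.0                         # assumed extra lead length added by a DIP socket
        extra_L = socket_height_mm * NH_PER_MM * 1e-9  # extra inductance per pin, in henries

        # Added series reactance at a given frequency: X = 2*pi*f*L
        f_signal = 100e6                               # e.g. a 100 MHz harmonic of a fast logic edge
        reactance_ohm = 2 * math.pi * f_signal * extra_L

        # Ground bounce on a supply/ground pin: V = L * di/dt
        di_dt = 50e-3 / 1e-9                           # assumed 50 mA switched in 1 ns
        bounce_v = extra_L * di_dt

        print(f"extra inductance per pin:   {extra_L * 1e9:.1f} nH")
        print(f"added reactance at 100 MHz: {reactance_ohm:.1f} ohm")
        print(f"ground bounce at 50 mA/ns:  {bounce_v * 1000:.0f} mV")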



    All reasons why the consumer electronics industry uses no IC sockets anymore, except maybe for EPROMs, which must be removed for erasing and programming, and for CPUs where the customer makes the choice, but then the socket is an expensive one.






      Answer, 6 votes:













      I am not sure to what extent this was a manufacturer consideration; however, it made the machines much easier and cheaper to service. This meant customers would be more willing to "invest" what might be $1,000 to $2,000 in today's money, since they could continue using the machine for a longer period.



      As testament to this, having never touched an 8-bit machine before, I recently got my hands on two dead C64s and within twenty minutes I got one working by swapping chips.



      Certainly in other manufacturing sectors serviceability is a consideration, so I find it likely the manufacturers did consider this.






        Answer, 1 vote:













        One area that used to mandate the use of sockets (and still does, to my knowledge) is huge-footprint chips (specialized processors, CCD arrays above a certain size, array-processor logic for in-circuit AI boards, etc.), where the mechanical stresses of insertion and/or pick-and-place soldering across the face of the chip exceed a certain recommended threshold.



        In such cases, they're likely to use a ZIF-socket (Zero-Insertion-Force) where these high-dollar frequently upgraded components can be removed as needed, particularly in environments where ambient heat, vibration stresses, and radiation may cause gate degradation. It still happens, and when your board costs $1.75 and your AI Gate-Array chip costs $20K, you'd prefer to protect your investment.



        The other variable not addressed earlier is that chip failures are not always entirely understood, and diagnostics on specialty high-density chips can differ between the actual (working) environment and the prefab test environment. Soldering in these huge beasts (even when possible) usually means that the chip is destroyed during desoldering, and therefore it is no longer possible to analyze the casualties for failure modes after the fact.






          Answer, -3 votes:













          It's really quite simple: early computer motherboards were designed to accept a number of different processors. One could easily 'upgrade' the processor by simply swapping in a more powerful one.






          • 1




            Well... It often wasn't that easy. Plus, that doesn't explain the rest of the types of chips, such as signal processors (graphics / audio) etc.
            – wizzwizz4♦
            Sep 6 at 19:31






          • 2




            Drop-in replacement of processors was (and is) relatively limited. In some cases (generally more recently), there are CPUs of multiple speeds with the same bus speed. In many retro systems the bus speed and CPU speed were the same, or one was a simple multiple of the other, so you couldn't easily put in a 50% faster CPU without changing plenty of other stuff. There were a few situations where it worked, e.g. the NEC V20 to replace an Intel 8088. But most software-compatible chips were not hardware (pin) compatible, e.g. 8080 vs. Z-80.
            – manassehkatz
            Sep 7 at 4:05






          • 1




            and let's not forget the expensive 8087 math co-processor, to be plugged into its socket only if the user had a need for it, e.g. for running AutoCAD (TM).
            – StessenJ
            Sep 7 at 18:19










          8 Answers
          8






          active

          oldest

          votes








          8 Answers
          8






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          64
          down vote



          accepted










          Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.





          Why were chips socketed in early computers?




          There are several reasons:



          • Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.


          • Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)


          • Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.


          • As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).


          • Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).



          I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.




          Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.




          I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.




          Not really, as that's something users may want, not necessarily the manufacturer.




          As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.



          Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.




          *1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.



          *2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.



          *3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.



          *4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.






          share|improve this answer


















          • 1




            "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
            – traal
            Sep 5 at 21:48






          • 8




            @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
            – Raffzahn
            Sep 5 at 21:58






          • 8




            Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
            – fluffy
            Sep 6 at 3:37






          • 2




            @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
            – Michael Kjörling
            Sep 6 at 11:33






          • 2




            Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
            – pjc50
            Sep 6 at 14:54














          up vote
          64
          down vote



          accepted










          Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.





          Why were chips socketed in early computers?




          There are several reasons:



          • Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.


          • Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)


          • Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.


          • As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).


          • Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).



          I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.




          Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.




          I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.




          Not really, as that's something users may want, not necessarily the manufacturer.




          As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.



          Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.




          *1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.



          *2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.



          *3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.



          *4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.






          share|improve this answer


















          • 1




            "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
            – traal
            Sep 5 at 21:48






          • 8




            @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
            – Raffzahn
            Sep 5 at 21:58






          • 8




            Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
            – fluffy
            Sep 6 at 3:37






          • 2




            @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
            – Michael Kjörling
            Sep 6 at 11:33






          • 2




            Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
            – pjc50
            Sep 6 at 14:54












          up vote
          64
          down vote



          accepted







          up vote
          64
          down vote



          accepted






          Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.





          Why were chips socketed in early computers?




          There are several reasons:



          • Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.


          • Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)


          • Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.


          • As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).


          • Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).



          I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.




          Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.




          I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.




          Not really, as that's something users may want, not necessarily the manufacturer.




          As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.



          Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.




          *1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.



          *2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.



          *3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.



          *4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.






          share|improve this answer














          Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.





          Why were chips socketed in early computers?




          There are several reasons:



          • Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.


          • Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)


          • Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.


          • As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).


          • Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).



          I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.




          Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.




          I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.




          Not really, as that's something users may want, not necessarily the manufacturer.




          As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.



          Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.




          *1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.



          *2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.



          *3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.



          *4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Sep 10 at 13:33









          Wilson

          8,576437107




          8,576437107










          answered Sep 5 at 16:57









          Raffzahn

          35.5k478141




          35.5k478141







          • 1




            "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
            – traal
            Sep 5 at 21:48






          • 8




            @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
            – Raffzahn
            Sep 5 at 21:58






          • 8




            Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
            – fluffy
            Sep 6 at 3:37






          • 2




            @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
            – Michael Kjörling
            Sep 6 at 11:33






          • 2




            Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
            – pjc50
            Sep 6 at 14:54












          • 1




            "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
            – traal
            Sep 5 at 21:48






          • 8




            @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
            – Raffzahn
            Sep 5 at 21:58






          • 8




            Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
            – fluffy
            Sep 6 at 3:37






          • 2




            @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
            – Michael Kjörling
            Sep 6 at 11:33






          • 2




            Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
            – pjc50
            Sep 6 at 14:54







          1




          1




          "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
          – traal
          Sep 5 at 21:48




          "any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
          – traal
          Sep 5 at 21:48




          8




          8




          @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
          – Raffzahn
          Sep 5 at 21:58




          @traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
          – Raffzahn
          Sep 5 at 21:58




          8




          8




          Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
          – fluffy
          Sep 6 at 3:37




          Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
          – fluffy
          Sep 6 at 3:37




          2




          2




          @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
          – Michael Kjörling
          Sep 6 at 11:33




          @traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
          – Michael Kjörling
          Sep 6 at 11:33




          2




          2




          Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
          – pjc50
          Sep 6 at 14:54




          Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
          – pjc50
          Sep 6 at 14:54










          up vote
          18
          down vote













          One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.



          At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.



          When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.






          share|improve this answer
























              answered Sep 5 at 19:51









              cmm

              33114
























                  up vote
                  10
                  down vote













                  There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didn’t run, e.g. didn’t pass initial power on testing or burn-in. Simply swapping out certain chips by trial-and-error was a very common debugging technique to find the bad ones, much harder to do if the chips are all soldered down.



                  Some personal computer models would not run if every part happened to be at its worst-case timing spec. Swapping a few parts was likely to mix in some faster-than-worst-case parts and let the system meet the timing it needed to run.



                  Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
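                  To illustrate the worst-case timing point, here is a minimal sketch with hypothetical delay figures (nothing like a real CAE timing analysis): the delays along a critical path through several chips must fit within one clock period, so a board that happens to receive all slowest-within-spec parts can fail timing even though a board with a mix of typical parts runs fine.

                      CLOCK_PERIOD_NS = 500.0   # e.g. a 2 MHz system clock

                      # (stage, typical delay ns, worst-case delay ns) along one critical path
                      path = [
                          ("address decode",  30.0,  60.0),
                          ("DRAM access",    200.0, 300.0),
                          ("data buffer",     25.0,  45.0),
                          ("CPU setup",       60.0, 120.0),
                      ]

                      typical = sum(t for _, t, _ in path)   # 315 ns -> meets the 500 ns period
                      worst = sum(w for _, _, w in path)     # 525 ns -> misses it
                      print("typical:", typical, "OK" if typical <= CLOCK_PERIOD_NS else "FAIL")
                      print("worst:  ", worst, "OK" if worst <= CLOCK_PERIOD_NS else "FAIL")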






                  share|improve this answer


















                  edited Sep 6 at 2:20

























                  answered Sep 6 at 2:09









                  hotpaw2

                  2,666521











                  • 1




                    Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket type at all.
                    – Uwe
                    Sep 9 at 10:42






















                  up vote
                  9
                  down vote













                  I suspect this is down to quality engineering (QE) — in the early micro days many chips were susceptible to failures (during manufacturing) but wouldn’t be as well-tested as they are now, so the chances of getting bad batches were higher. If you finish assembling a micro and discover that one of its chips is bad, it’s easier to fix if it’s socketed.
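                  To put a rough number on that, here is an illustrative sketch with assumed figures (not data from this answer): with a few dozen chips per board, even a small per-chip escape rate makes it quite likely that a freshly assembled board contains at least one bad chip.

                      # P(board has at least one bad chip) = 1 - (1 - p)**n for n chips with defect rate p
                      def p_board_has_bad_chip(n_chips, per_chip_defect_rate):
                          return 1.0 - (1.0 - per_chip_defect_rate) ** n_chips

                      for p in (0.001, 0.01, 0.02):
                          print(f"p={p:.3f}: {p_board_has_bad_chip(40, p):.1%} of 40-chip boards affected")
                      # p=0.001: 3.9%   p=0.010: 33.1%   p=0.020: 55.4%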



                  Another aspect is that it’s easier to solder a socket than a component, or rather it’s harder to damage a socket when soldering it than it is to damage a component. That might not be relevant when manufacturing at scale though (I’m not sure when wave soldering became common).






                  share|improve this answer


















                  edited Sep 6 at 13:26









                  Michael Kjörling

                  1,3721825














                  answered Sep 5 at 14:33









                  Stephen Kitt

                  30.4k4125147











                  • 2




                    The oldest wave soldering setup I know of is from radio production in 1947 at Grundig's new plant in Nürnberg. It's safe to assume that similar technology was in use in the US or Britain at the same time.
                    – Raffzahn
                    Sep 5 at 18:50






                  • 1




                    FYI, wave soldering has the problem of entire components becoming unseated during the process and requiring manual rework.
                    – traal
                    Sep 5 at 21:51






                  • 3




                    @traal That's why a tight fit between holes and pins is essential - and the reason why sockets often have flat wave-solder pins a tiny bit larger than the hole. Similarly, ICs by default have outward-bent pins, which need to be carefully pressed in by the placement machine/handler. When released, they spring back outward and keep the IC firmly in place during soldering.
                    – Raffzahn
                    Sep 5 at 22:02






                  • 1




                    Probably obvious once you spell it out, but what's QE?
                    – Michael Kjörling
                    Sep 6 at 11:35






                  • 1




                    @MichaelKjörling quality engineering, in this context.
                    – Stephen Kitt
                    Sep 6 at 11:58






















                  up vote
                  8
                  down vote













                  A long time ago you could solder an entire Acorn Atom yourself. A hobbyist who was unsure about his soldering skills would use sockets, thinking that he was playing it safe. I helped diagnose a non-working board where the pin for the CPU clock made no contact with the socket. The signal on the bottom of the board was good; it took us hours to find this. The customer had used cheap sockets, with contacts only on the inside, on all ICs. Bending all pins inward before pushing the ICs into the sockets fixed that. You could even buy special DIP IC inserters.



                  Tin-plated ICs and gold-plated sockets (Garry/Augat) don't go well together: the gold dissolves into the tin and ultimately all contact is lost.



                  The added height due to the sockets increases the inductance of the tracks a lot, leading to bad HF behavior and bad EMC. This was OK when computers were slower and there were no FCC regulations, but not now.
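                  A rough back-of-the-envelope of that inductance point (the ~1 nH per mm of lead length and the f ≈ 0.35 / t_rise relation are common rules of thumb, not figures from this answer): the extra series reactance a socket adds is negligible at the slow signal edges of early home computers, but becomes significant at modern edge rates.

                      import math

                      NH_PER_MM = 1.0  # rule-of-thumb series inductance of a lead, in nH per mm

                      def added_reactance_ohms(extra_length_mm, edge_time_ns):
                          l_henry = extra_length_mm * NH_PER_MM * 1e-9
                          f_eff = 0.35 / (edge_time_ns * 1e-9)   # effective frequency of a signal edge
                          return 2 * math.pi * f_eff * l_henry

                      # Assume a socket adds roughly 5 mm of extra lead per pin:
                      print(round(added_reactance_ohms(5.0, 50.0), 2), "ohm at 50 ns edges")  # ~0.22 ohm
                      print(round(added_reactance_ohms(5.0, 2.0), 2), "ohm at 2 ns edges")    # ~5.5 ohm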



                  All reasons why the consumer electronics (CE) industry no longer uses IC sockets, except perhaps for EPROMs, which must be removed for erasing and programming, and for CPUs where the customer makes the choice - but then the socket is an expensive one.






                  share|improve this answer
























                      answered Sep 5 at 18:34









                      StessenJ

                      3551
























                          up vote
                          6
                          down vote













                          I am not sure to what extent this was a manufacturer consideration, but it made the machines much easier and cheaper to service. This meant customers would be more willing to "invest" what might be $1-2000 in today's money, since they could continue using the machine for a longer period.



                          As testament to this, having never touched an 8-bit machine before, I recently got my hands on two dead C64s and within twenty minutes I got one working by swapping chips.



                          Serviceability is certainly a consideration in other manufacturing sectors, so I find it likely that computer manufacturers considered it here as well.






                          share|improve this answer
























                              answered Sep 6 at 0:30









                              Artelius

                              1612
























                                  up vote
                                  1
                                  down vote













                                  One area that used to mandate the use of sockets (and still does, to my knowledge) is huge-footprint chips (specialized processors, CCD arrays above a certain size, array-processor logic for in-circuit AI boards, etc.), where the mechanical stresses of insertion and/or pick-and-place soldering across the face of the chip exceed a certain recommended threshold.



                                  In such cases, they're likely to use a ZIF (zero-insertion-force) socket, so these high-dollar, frequently upgraded components can be removed as needed, particularly in environments where ambient heat, vibration stress, and radiation may cause gate degradation. It still happens, and when your board costs $1.75 and your AI gate-array chip costs $20K, you'd prefer to protect your investment.



                                  The other variable not addressed earlier is that chip failures are not always entirely understood, and diagnostics on specialty high-density chips can differ between the actual (working) environment and the prefab test environment. Soldering in these huge beasts (even when possible) usually means the chip is destroyed during desoldering, so it is no longer possible to analyze the casualties for their failure modes after the fact.






                                  share|improve this answer
























                                      answered Sep 9 at 15:41









                                      David Lovering

                                      111
























                                          up vote
                                          -3
                                          down vote













                                          It's really quite simple: early computer motherboards were designed to accept a number of different processors. One could easily 'upgrade' the processor by simply swapping in a more powerful one.






                                          share|improve this answer
















                                          answered Sep 6 at 18:30









                                          rgoers

                                          1











                                          • 1




                                            Well... It often wasn't that easy. Plus, that doesn't explain the rest of the types of chips, such as signal processors (graphics / audio) etc.
                                            – wizzwizz4♦
                                            Sep 6 at 19:31






                                          • 2




                                            Drop-in replacement of processors was (and is) relatively limited. In some cases (generally more recently), there are multiple speed CPUs with the same bus speed. In many retro systems the bus speed & CPU speed were the same or one was a simple multiple of the other, so you couldn't easily put in a 50% faster CPU without changing plenty of other stuff. There were a few situations where it worked - e.g., the NEC V20 to replace an Intel 8088. But most software-compatible chips were not hardware (pin) compatible - e.g., 8080 vs. Z-80.
                                            – manassehkatz
                                            Sep 7 at 4:05






                                          • 1




                                            and let's not forget the expensive 8087 math co-processor, to be plugged into its socket only if the user had a need for it, e.g. for running AutoCAD (TM).
                                            – StessenJ
                                            Sep 7 at 18:19











