Why were chips socketed in early computers?
42 votes
In many early computers, many of the chips were in sockets rather than soldered directly to boards, e.g. this series of pictures of the Tandy CoCo 1 has a note to the effect that all the chips are socketed: http://tech.markoverholser.com/?q=node/50
I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.
But what was the purpose of doing it with all the others? I would expect that soldering all the other chips directly to the board would both save money and improve reliability by eliminating one thing that can go wrong.
hardware chip
edited Sep 5 at 14:53 by fadden
asked Sep 5 at 14:20 by rwallace
14
I'll let someone who has more extensive experience give an official answer. But I suspect that it has to do with complexity. Your most complex chips - and the CPU is generally at the top of the list - are most likely to (a) have problems requiring replacement (e.g., design bugs found after manufacture, which has happened a few times to Intel) and (b) are most important to keep protected from any damage during assembly - wave soldering being pretty safe but why risk the most important chip when you can put in a socket and insert the chip after the soldering is all done.
– manassehkatz
Sep 5 at 14:30
22
Customers of very early computers expected sockets for maintainability (at least in the "nerd market").
– tofro
Sep 5 at 15:50
7
This was the general practice with DIP (and other non-SMD) IC packages. It allowed for quick fixes of the boards without desoldering stations, etc. And the manufacturing techniques were different too.
– nbloqs
Sep 5 at 17:16
19
You know how people rage on internet forums when Apple or HP ships a product with soldered RAM? In the '70s soldered anything would have elicited the same reaction. Only they didn't have forums, so they would have picked up actual torches and pitchforks.
– Tobia Tesan
Sep 6 at 10:51
3
I wonder if in the future all car parts will be welded together, and complete car replacement when any piece breaks will be the norm, and then we'll wonder, why did we join the pieces with nuts and bolts before?
– JoL
Sep 6 at 15:20
8 Answers
64 votes (accepted)
Caveat: it might be useful to distinguish between high-volume, low-cost computers (like the mentioned CoCo) and low-volume, high-cost machines (like Intel boards, or workstations). I assume the question is mainly about the high-volume, low-cost machines.
Why were chips socketed in early computers?
There are several reasons:
Most important: chips were not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.
Second, sockets can endure more abuse (in this case heat) than chips, so having the chips inserted only after soldering increases the acceptable tolerances during manufacturing and leads to a lower rate of failure (*2).
Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready-made, and stocked until there are orders to fill.
As a result, investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).
Similarly, this helps keep board production running even when some chips are unavailable for a while (*4).
I would expect that soldering all the other chips directly to the board would both save money and improve reliability by eliminating one thing that can go wrong.
Yes, it does -- as soon as there are automated placement machines and, more importantly, an order of sufficient size to keep capital costs in check.
I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.
Not really, as that's something users may want, not necessarily the manufacturer.
As said before, this is focused on high-volume/low-cost machines. While all of these reasons are just as valid for high-performance/high-cost boards, labor and stock costs matter less there: labor is already rather intensive for small production runs, and stock costs aren't that big in the first place.
Further, the repair aspect wasn't much of a consideration for cheap, mass-produced machines. Commodore's share of a C64 was less than 150 USD during most of its life (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check -- even more so with the lower-priced CoCo. So serviceability wasn't a major concern.
*1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes, and have new ones fitted and soldered. Except that would probably have cost more than the wholesale price of that (cheap) computer.
*2 - Wave soldering is a rather safe process and has been around since at least 1947 (the oldest usage I know of). It does require the whole board, plus all of the placed components, to be preheated to 130-160 degrees Celsius (depending on the process) for several minutes. With (somewhat) early chips, this could cause failures within the chips, so using sockets instead during soldering was a great idea.
*3 - One reason why Commodore continued to use sockets for the more expensive chips even after moving to automated placement.
*4 - Commodore is well known to have run into such conditions several times - for example with some batches of the C16.
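To make the cost argument concrete, here is a rough back-of-envelope sketch in Python; every figure in it is purely hypothetical (not a historical price), and it only illustrates why sockets can win under hand assembly and lose once placement errors become rare:

```python
# Hypothetical back-of-envelope comparison of socketed vs. directly soldered chips.
# All numbers below are invented for illustration; none are historical figures.

def extra_cost_soldered(error_rate, rework_or_scrap_cost):
    # Chip soldered directly: a wrongly fitted chip means desoldering labor
    # and possibly a scrapped chip or board.
    return error_rate * rework_or_scrap_cost

def extra_cost_socketed(error_rate, socket_cost, swap_cost):
    # Chip in a socket: every board pays for the socket itself,
    # but a wrongly fitted chip is only a quick swap.
    return socket_cost + error_rate * swap_cost

# Hand assembly: assume 5% of boards get this chip fitted wrong.
print(extra_cost_soldered(0.05, rework_or_scrap_cost=20.00))        # 1.00 per board
print(extra_cost_socketed(0.05, socket_cost=0.30, swap_cost=0.25))  # ~0.31 per board

# Automated placement: assume errors drop to 0.5%, and direct soldering wins.
print(extra_cost_soldered(0.005, rework_or_scrap_cost=20.00))       # 0.10 per board
print(extra_cost_socketed(0.005, socket_cost=0.30, swap_cost=0.25)) # ~0.30 per board
```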
1
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
– traal
Sep 5 at 21:48
8
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (though that depends of course a lot on the production organization), a wrongly fitted socket would always mean a smaller one than intended, and that's much easier to detect than having the wrong chip of the right size inserted.
– Raffzahn
Sep 5 at 21:58
8
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
– fluffy
Sep 6 at 3:37
2
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it wouldn't matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
– Michael Kjörling
Sep 6 at 11:33
2
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
– pjc50
Sep 6 at 14:54
18 votes
One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.
At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.
When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.
10 votes
There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didn't run, e.g. didn't pass initial power-on testing or burn-in. Simply swapping out certain chips by trial and error was a very common debugging technique for finding the bad ones, and much harder to do if the chips are all soldered down.
Some personal computer models would not run if every part was at its worst-case timing spec. Swapping some parts was likely to mix in parts that weren't as slow and allow the system to meet the timing necessary to run.
Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
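Purely as an illustration of that swap-and-retest approach, here is a minimal sketch in Python. It assumes a single faulty chip, uses hypothetical chip names, and lets board_passes stand in for the real power-on test:

```python
# Illustrative sketch only: isolate a bad socketed chip by swapping known-good
# spares into one socket at a time and retesting. Assumes exactly one faulty chip;
# 'board_passes' stands in for the real power-on test. All names are hypothetical.

def find_bad_chip(sockets, spares, board_passes):
    for position, original in sockets.items():
        sockets[position] = spares[position]      # drop in a known-good spare
        if board_passes(sockets):
            return position                       # board works now: this chip was bad
        sockets[position] = original              # still dead: put the original back
    return None                                   # fault isn't a single socketed chip

# Example: a board whose PIA chip happens to be the faulty one.
sockets = {"CPU": "MC6809E#1", "PIA": "MC6821#7-bad", "VDG": "MC6847#3"}
spares  = {"CPU": "MC6809E#2", "PIA": "MC6821#9",     "VDG": "MC6847#4"}
board_passes = lambda s: all("bad" not in chip for chip in s.values())
print(find_bad_chip(sockets, spares, board_passes))   # -> "PIA"
```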
1
Socketing today's SMD chips is not easy and not cheap. There are some SMD chip types without a proper socket at all.
– Uwe
Sep 9 at 10:42
9 votes
I suspect this is down to quality engineering (QE) – in the early micro days many chips were susceptible to failures (during manufacturing) but wouldn't be as well-tested as they are now, so the chances of getting bad batches were higher. If you finish assembling a micro and discover that one of its chips is bad, it's easier to fix if it's socketed.
Another aspect is that it's easier to solder a socket than a component, or rather it's harder to damage a socket when soldering it than it is to damage a component. That might not be relevant when manufacturing at scale though (I'm not sure when wave soldering became common).
2
The oldest wave soldering setup I know of is from radio production in 1947 at Grundig's new plant in Nürnberg. It's safe to assume that similar technology was used in the US or Britain at the same time.
– Raffzahn
Sep 5 at 18:50
1
FYI, wave soldering has the problem of entire components becoming unseated during the process and requiring manual rework.
– traal
Sep 5 at 21:51
3
@traal That's why a tight fit between holes and pins is essential - and the reason why sockets often have flat, waved pins a tiny bit larger than the hole. Similarly, ICs by default have outward-bent pins, which need to be carefully pressed in by the placing machine/handler. When released, they straddle and keep the IC well in place during soldering.
– Raffzahn
Sep 5 at 22:02
1
Probably obvious once you spell it out, but what's QE?
– Michael Kjörling
Sep 6 at 11:35
1
@MichaelKjörling quality engineering, in this context.
– Stephen Kitt
Sep 6 at 11:58
8 votes
A long time ago you could solder an entire Acorn Atom yourself. A hobbyist who was unsure about his soldering skills would use sockets, thinking he was playing it safe. I helped diagnose a non-working board where the pin for the CPU clock made no contact with the socket. The signal on the bottom of the board was good; it took us hours to find this. The customer had used cheap sockets, with contacts only on the inside, on all ICs. Bending all the pins inward before pushing the ICs into the sockets fixed that. You could even buy special DIP IC inserters.
Tin-plated ICs and gold-plated sockets (Garry/Augat) don't go well together: the gold dissolves in the tin and ultimately all contact is lost.
The added height due to the sockets increases the inductance of the tracks a lot, leading to bad HF behavior and bad EMC. This was OK when computers were slower and there were no FCC regulations, but not now.
All reasons why the CE industry no longer uses IC sockets, except maybe for EPROMs, which must be removed for erasing and programming, and for CPUs where the customer makes the choice -- but then the socket is an expensive one.
6 votes
I am not sure to what extent this was a manufacturer consideration; however, it made the machines much easier and cheaper to service. This meant customers would be more willing to "invest" what might be $1,000-2,000 in today's money, since they could continue using the machine for a longer period.
As testament to this, having never touched an 8-bit machine before, I recently got my hands on two dead C64s and within twenty minutes I got one working by swapping chips.
Certainly in other manufacturing sectors, serviceability is a consideration so I find it likely the manufacturers did consider this.
1 vote
One area that used to mandate the use of sockets (and to my knowledge still does) is huge-footprint chips (specialized processors, CCD arrays above a certain size, array-processor logic for in-circuit AI boards, etc.), where the mechanical stresses of insertion and/or pick-and-place soldering across the face of the chip exceed a certain recommended threshold.
In such cases, they're likely to use a ZIF (zero-insertion-force) socket, so these high-dollar, frequently upgraded components can be removed as needed, particularly in environments where ambient heat, vibration stresses, and radiation may cause gate degradation. It still happens, and when your board costs $1.75 and your AI gate-array chip costs $20K, you'd prefer to protect your investment.
The other variable not addressed earlier is when chip failures are not entirely understood and diagnostics on specialty high-density chips differ between the actual (working) environment and the prefab test environment. Soldering in these huge beasts (even when possible) usually means the chip is destroyed during the desoldering process, and therefore it is no longer possible to analyze the casualties for failure modes after the fact.
-3 votes
It's really quite simple: early computer motherboards were designed to accept a number of different processors. One could easily 'upgrade' by simply swapping in a more powerful processor.
1
Well... It often wasn't that easy. Plus, that doesn't explain the rest of the types of chips, such as signal processors (graphics / audio) etc.
– wizzwizz4
Sep 6 at 19:31
2
Drop-in replacement of processors was (and is) relatively limited. In some cases (generally more recently), there are multiple-speed CPUs with the same bus speed. In many retro systems the bus speed and CPU speed were the same, or one was a simple multiple of the other, so you couldn't easily put in a 50% faster CPU without changing plenty of other stuff. There were a few situations where it worked - e.g., the NEC V20 to replace an Intel 8088. But most software-compatible chips were not hardware (pin) compatible - e.g., 8080 vs. Z-80.
– manassehkatz
Sep 7 at 4:05
1
and let's not forget the expensive 8087 math co-processor, to be plugged into its socket only if the user had a need for it, e.g. for running AutoCAD (TM).
– StessenJ
Sep 7 at 18:19
8 Answers
8
active
oldest
votes
8 Answers
8
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
64
down vote
accepted
Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.
Why were chips socketed in early computers?
There are several reasons:
Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.
Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)
Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.
As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).
Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).
I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.
Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.
I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.
Not really, as that's something users may want, not necessarily the manufacturer.
As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.
Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.
*1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.
*2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.
*3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.
*4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.
1
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
â traal
Sep 5 at 21:48
8
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
â Raffzahn
Sep 5 at 21:58
8
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
â fluffy
Sep 6 at 3:37
2
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
â Michael Kjörling
Sep 6 at 11:33
2
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
â pjc50
Sep 6 at 14:54
 |Â
show 2 more comments
up vote
64
down vote
accepted
Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.
Why were chips socketed in early computers?
There are several reasons:
Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.
Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)
Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.
As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).
Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).
I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.
Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.
I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.
Not really, as that's something users may want, not necessarily the manufacturer.
As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.
Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.
*1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.
*2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.
*3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.
*4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.
1
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
â traal
Sep 5 at 21:48
8
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
â Raffzahn
Sep 5 at 21:58
8
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
â fluffy
Sep 6 at 3:37
2
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
â Michael Kjörling
Sep 6 at 11:33
2
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
â pjc50
Sep 6 at 14:54
 |Â
show 2 more comments
up vote
64
down vote
accepted
up vote
64
down vote
accepted
Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.
Why were chips socketed in early computers?
There are several reasons:
Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.
Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)
Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.
As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).
Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).
I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.
Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.
I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.
Not really, as that's something users may want, not necessarily the manufacturer.
As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.
Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.
*1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.
*2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.
*3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.
*4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.
Caveat: It might be useful to distinguish between high volume low cost computers (like the mentioned CoCo) and low volume high cost machines (like Intel boards - or workstations). I assume the question to be rather about these high volume low cost machines.
Why were chips socketed in early computers?
There are several reasons:
Most important: Chips where not fitted by robots back then, but by hand. Thus any chip fitted wrong and soldered would make the whole board unusable (*1), while with sockets it's a simple swap to make it work.
Second, sockets can endure more abuse (in this case heat) than chips, thus having them inserted later increases acceptable tolerances during manufacturing and leads to a lower rate of failure (*2)
Next, fitting sockets instead of chips decouples production. Boards can be prepared, ready made and stocked until the time there are orders to fill.
As a result investment can be reduced by ordering the most expensive single part(s) -- the chips -- as late as possible (*3).
Similarly, this is helpful to keep board production running even when some chips are not available for some time (*4).
I would expect soldering all the other chips directly to the board, would both save money and improve reliability by eliminating one thing that can go wrong.
Yes. It does, as soon as there are automated placement machines and -- more importantly -- an order of sufficient size to keep capital cost in check.
I can understand doing this with RAM chips, because it's often feasible and desirable to upgrade these; I think the CoCo 1 did often receive RAM upgrades.
Not really, as that's something users may want, not necessarily the manufacturer.
As said before, this is focused on high volume/low cost machines. While all of the reasons are as well valid for the high performance/high cost boards, labor and stock cost are somewhat less important due to labor which is already rather intensive for small production runs, and stock cost aren't that big in the first place.
Further, the repair aspect wasn't any consideration for cheap, mass produced machines. Commodores share of a C64 was less than 150 USD during most of the time (in the late 1980s even less than 50 USD). There is not enough money to be made in repairing defective units beyond a bare function check. Even more so with the lower priced CoCo. So serviceability wasn't a major concern.
*1 - Yes, they could have had people assigned to desolder the chip(s), clean the through-holes and have new ones fitted and soldered. Except, that would have cost possibly more than the wholesale price of that (cheap) computer would have been.
*2 - Wave soldering is a rather secure process and around since at least 1947 (oldest usage I know). It does require to have the whole board plus all of the placed components preheated to 130-160 degree Celsius (depending on the process) for several minutes. In combination with (somewhat) early chips, this may cause failure within the chips. So using sockets instead during soldering was a great idea.
*3 - One reason why Commodore continued to use sockets for the more expensive chips even after using automated placement.
*4 - Commodore is well know to have run into these conditions several time - for example with some batches of C16.
edited Sep 10 at 13:33
Wilson
8,576437107
8,576437107
answered Sep 5 at 16:57
Raffzahn
35.5k478141
35.5k478141
1
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
â traal
Sep 5 at 21:48
8
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
â Raffzahn
Sep 5 at 21:58
8
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
â fluffy
Sep 6 at 3:37
2
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
â Michael Kjörling
Sep 6 at 11:33
2
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
â pjc50
Sep 6 at 14:54
 |Â
show 2 more comments
1
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
â traal
Sep 5 at 21:48
8
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
â Raffzahn
Sep 5 at 21:58
8
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
â fluffy
Sep 6 at 3:37
2
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
â Michael Kjörling
Sep 6 at 11:33
2
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
â pjc50
Sep 6 at 14:54
1
1
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
â traal
Sep 5 at 21:48
"any chip fitted wrong and soldered would make the whole board unusable" --while a socket fitted wrong and soldered would not technically make the whole board unusable, wouldn't it still make the board unsellable?
â traal
Sep 5 at 21:48
8
8
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
â Raffzahn
Sep 5 at 21:58
@traal Sure it would, but it would be a board without the chips, so way less damage done. Further (but that depends ofc much on the production organization), a wrong fited socket would always mean a smaler one than intended, and that's much easyer to detect than having the wrong chip with the right size inserted.
â Raffzahn
Sep 5 at 21:58
8
8
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
â fluffy
Sep 6 at 3:37
Other reasons: chips had a tendency to fail a lot back then (making replacement necessary - my C64 went through quite a few CIA chips!), and many of them were also very heat-sensitive which made it dangerous to solder them directly especially when much of the soldering was done by hand.
â fluffy
Sep 6 at 3:37
2
2
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
â Michael Kjörling
Sep 6 at 11:33
@traal Doesn't the typical socket just offer mechanical stability and electrical through-connectivity? Seems it would matter which way it is mounted on the circuit board, then, as long as the chip that goes into it is oriented correctly with respect to the board's (electrical) traces.
â Michael Kjörling
Sep 6 at 11:33
2
2
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
â pjc50
Sep 6 at 14:54
Yes, a DIP socket fitted the wrong way round just has the human-visible "top" notch the wrong way round, so if you put the chip in the correct way round it will work fine.
â pjc50
Sep 6 at 14:54
 |Â
show 2 more comments
up vote
18
down vote
One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.
At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.
When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.
add a comment |Â
up vote
18
down vote
One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.
At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.
When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.
add a comment |Â
up vote
18
down vote
up vote
18
down vote
One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.
At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.
When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.
One reason to use sockets at the beginning of a production cycle is to make it easier on the service technicians.
At one company making computer terminals, the techs would identify a bad chip and then give the board to the rework person to replace. This destroyed the presumed-bad chip and risked damage to the board. Before the techs were familiar with the board signals and failure modes, the false-bad rate would be pretty high. With experience, the false-bad rate got really low, and all boards were built without sockets on all parts except the ROMs, which could be upgraded in the field.
When times got busy and the bad boards piled up (the dog pile), some of us from engineering would volunteer to go to the factory and help fix boards. I was really good at debugging design issues, but the techs were much faster than I was at debugging production problems. I learned a lot from working with them.
answered Sep 5 at 19:51
cmm
33114
33114
add a comment |Â
add a comment |Â
up vote
10
down vote
There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didnâÂÂt run, e.g. didnâÂÂt pass initial power on testing or burn-in. Simply swapping out certain chips by trial-and-error was a very common debugging technique to find the bad ones, much harder to do if the chips are all soldered down.
Some personal computer models would not run with all worst case timing spec parts. Swapping some parts was likely to mix in some parts not as slow and allow the system to meet the timing necessary to run.
Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
1
Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket time.
â Uwe
Sep 9 at 10:42
add a comment |Â
up vote
10
down vote
There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didnâÂÂt run, e.g. didnâÂÂt pass initial power on testing or burn-in. Simply swapping out certain chips by trial-and-error was a very common debugging technique to find the bad ones, much harder to do if the chips are all soldered down.
Some personal computer models would not run with all worst case timing spec parts. Swapping some parts was likely to mix in some parts not as slow and allow the system to meet the timing necessary to run.
Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
1
Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket time.
â Uwe
Sep 9 at 10:42
add a comment |Â
up vote
10
down vote
up vote
10
down vote
There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didnâÂÂt run, e.g. didnâÂÂt pass initial power on testing or burn-in. Simply swapping out certain chips by trial-and-error was a very common debugging technique to find the bad ones, much harder to do if the chips are all soldered down.
Some personal computer models would not run with all worst case timing spec parts. Swapping some parts was likely to mix in some parts not as slow and allow the system to meet the timing necessary to run.
Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
There was a time when many personal computer manufacturers did almost no incoming QA on components. Sockets allowed swapping out chips when the assembled computer didnâÂÂt run, e.g. didnâÂÂt pass initial power on testing or burn-in. Simply swapping out certain chips by trial-and-error was a very common debugging technique to find the bad ones, much harder to do if the chips are all soldered down.
Some personal computer models would not run with all worst case timing spec parts. Swapping some parts was likely to mix in some parts not as slow and allow the system to meet the timing necessary to run.
Socketing is no longer done today, as chip yields are higher, more testing and incoming inspection is done, sophisticated CAE tools allow far more complete worst-case timing analysis before a product goes into production, and modern automatic test equipment allows automatic pinpointing of many common faults.
edited Sep 6 at 2:20
answered Sep 6 at 2:09
hotpaw2
2,666521
2,666521
1
Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket time.
â Uwe
Sep 9 at 10:42
add a comment |Â
1
Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket time.
â Uwe
Sep 9 at 10:42
1
1
Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket time.
â Uwe
Sep 9 at 10:42
Socketing today's SMD chips is not easy and not cheap. There are some SMD chips without a proper socket time.
â Uwe
Sep 9 at 10:42
add a comment |Â
up vote
9
down vote
I suspect this is down to quality engineering (QE) â in the early micro days many chips were susceptible to failures (during manufacturing) but wouldnâÂÂt be as well-tested as they are now, so the chances of getting bad batches were higher. If you finish assembling a micro and discover that one of its chips is bad, itâÂÂs easier to fix if itâÂÂs socketed.
Another aspect is that itâÂÂs easier to solder a socket than a component, or rather itâÂÂs harder to damage a socket when soldering it than it is to damage a component. That might not be relevant when manufacturing at scale though (IâÂÂm not sure when wave soldering became common).
2
The oldest wave soldering setup I know is from a radio production of 1947 at Grundigs new plant in Nürnberg. It's safe to assume that similar technology was used in the US or Britain as well at the same time.
â Raffzahn
Sep 5 at 18:50
1
FYI, wave soldering has the problem of entire components becoming unseated during the process and requiring manual rework.
â traal
Sep 5 at 21:51
3
@traal That's why tight fit for holes and pins is essential - and the reason why sockets have often flat waves pins a tiny bit larger than the hole. And similar why ICs have by default outward bend pins, which need to be 'carful' pressed by the placing machine /handler. When released, they straddle and keep the IC well in place during soldering.
â Raffzahn
Sep 5 at 22:02
1
Probably obvious once you spell it out, but what's QE?
â Michael Kjörling
Sep 6 at 11:35
1
@MichaelKjörling quality engineering, in this context.
â Stephen Kitt
Sep 6 at 11:58
edited Sep 6 at 13:26
Michael Kjörling
answered Sep 5 at 14:33
Stephen Kitt
up vote
8
down vote
A long time ago you could solder an entire Acorn Atom yourself. A hobbyist who was unsure of his soldering skills would use sockets, thinking he was playing it safe. I helped diagnose a non-working board where the pin for the CPU clock made no contact with its socket. The signal on the bottom of the board was good, and it took us hours to find the fault. The customer had used cheap sockets with contacts only on the inside, on all ICs; bending all the pins inward before pushing the ICs into the sockets fixed it. You could even buy special DIP IC inserters.
Tin-plated ICs and gold-plated sockets (Garry/Augat) don't go well together: the gold dissolves into the tin and ultimately all contact is lost.
The added height due to the sockets also increases the inductance of the tracks considerably, leading to poor HF behaviour and poor EMC. This was acceptable when computers were slower and there were no FCC regulations, but not today.
All of these are reasons why the consumer-electronics industry no longer uses IC sockets, except perhaps for EPROMs, which must be removed for erasing and programming, and for CPUs where the customer makes the choice - but then the socket is an expensive one.
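As a rough illustration of the inductance point above: a common rule of thumb puts lead inductance at roughly 1 nH per millimetre, so the extra few millimetres of path a socket adds mean a few extra nanohenries per pin. The sketch below uses that assumed rule of thumb and an assumed 4 mm of added path; it is a back-of-the-envelope estimate, not a proper signal-integrity model.

```python
import math

NH_PER_MM = 1e-9   # assumed rule of thumb: ~1 nH of series inductance per mm of lead

def extra_reactance_ohms(extra_mm: float, freq_hz: float) -> float:
    """Added inductive reactance X_L = 2*pi*f*L from the extra lead length."""
    inductance_h = extra_mm * NH_PER_MM
    return 2 * math.pi * freq_hz * inductance_h

# Assume the socket adds about 4 mm of path per pin.
for freq in (1e6, 8e6, 100e6):
    x_l = extra_reactance_ohms(4.0, freq)
    print(f"{freq / 1e6:>5.0f} MHz: extra reactance per pin ~ {x_l:.2f} ohms")
```

Under these assumptions the penalty is negligible at 1 MHz but reaches a couple of ohms per pin at 100 MHz, which, together with the extra loop area a socket adds, is part of why fast modern designs avoid sockets.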
answered Sep 5 at 18:34
StessenJ
up vote
6
down vote
I am not sure to what extent this was a manufacturer consideration, but it made the machines much easier and cheaper to service. This meant customers would be more willing to "invest" what might be $1-2000 in today's money, since they could keep using the machine for a longer period.
As testament to this, having never touched an 8-bit machine before, I recently got my hands on two dead C64s and within twenty minutes I had one of them working by swapping chips.
Certainly in other manufacturing sectors serviceability is a consideration, so I find it likely that the manufacturers did consider this.
answered Sep 6 at 0:30
Artelius
up vote
1
down vote
One area that used to mandate the use of sockets (and still does, to my knowledge) is that of huge-footprint chips (specialized processors, CCD arrays above a certain size, array-processor logic for in-circuit AI boards, etc.), where the mechanical stresses of insertion and/or pick-and-place soldering across the face of the chip exceed a recommended threshold.
In such cases a ZIF (Zero-Insertion-Force) socket is likely to be used, so that these high-dollar, frequently upgraded components can be removed as needed, particularly in environments where ambient heat, vibration stress and radiation may cause gate degradation. It still happens, and when your board costs $1.75 and your AI gate-array chip costs $20K, you'd prefer to protect your investment.
The other variable not addressed earlier is that chip failures are not always entirely understood, and diagnostics on specialty high-density chips can differ between the actual (working) environment and the prefab test environment. Soldering in these huge beasts (even when possible) usually means the chip is destroyed during desoldering, so it is no longer possible to analyse the casualties for failure modes after the fact.
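The investment-protection argument in the previous two paragraphs amounts to a simple expected-cost comparison. The sketch below is purely illustrative; the replacement probability, rework cost and desolder-destruction rate are assumptions invented for the example, with only the $1.75 board and $20K chip figures taken from the answer.

```python
# Illustrative expected-cost comparison for socketing a very expensive chip.
# All probabilities and labour costs below are assumptions for the example.

CHIP_COST = 20_000.0           # the "$20K" gate-array chip from the answer
ZIF_SOCKET_COST = 25.0         # assumed price of a high-quality ZIF socket
P_REMOVE = 0.10                # assumed chance the chip ever has to come off
P_KILLED_BY_DESOLDER = 0.80    # assumed chance desoldering destroys the chip
REWORK_LABOUR = 150.0          # assumed labour cost of a desoldering job

expected_soldered = P_REMOVE * (REWORK_LABOUR + P_KILLED_BY_DESOLDER * CHIP_COST)
expected_socketed = ZIF_SOCKET_COST   # pulling a chip from a ZIF socket is ~free

print(f"expected extra cost if soldered: ${expected_soldered:,.0f}")
print(f"expected extra cost if socketed: ${expected_socketed:,.0f}")
```

Under these made-up numbers the socket pays for itself roughly sixty times over, and it also preserves the pulled chip for the kind of failure analysis described above.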
answered Sep 9 at 15:41
David Lovering
up vote
-3
down vote
It's really quite simple: early computer motherboards were designed to accept a number of different processors. One could easily 'upgrade' the processor by simply swapping in a more powerful one.
1
Well... It often wasn't that easy. Plus, that doesn't explain the rest of the types of chips, such as signal processors (graphics / audio) etc.
– wizzwizz4♦
Sep 6 at 19:31
2
Drop-in replacement of processors was (and is) relatively limited. In some cases (generally more recently), there are multiple-speed CPUs with the same bus speed. In many retro systems the bus speed and CPU speed were the same, or one was a simple multiple of the other, so you couldn't easily put in a 50% faster CPU without changing plenty of other stuff. There were a few situations where it worked - e.g., the NEC V20 to replace an Intel 8088. But most software-compatible chips were not hardware (pin) compatible - e.g., 8080 vs. Z-80.
– manassehkatz
Sep 7 at 4:05
1
And let's not forget the expensive 8087 math co-processor, to be plugged into its socket only if the user had a need for it, e.g. for running AutoCAD (TM).
– StessenJ
Sep 7 at 18:19
answered Sep 6 at 18:30
rgoers