Why did the Cray-1 have 8 parity bits per word?

According to https://en.wikipedia.org/wiki/Cray-1




The Cray-1 was built as a 64-bit system, a departure from the 7600/6600, which were 60-bit machines (a change was also planned for the 8600). Addressing was 24-bit, with a maximum of 1,048,576 64-bit words (1 megaword) of main memory, where each word also had 8 parity bits for a total of 72 bits per word.[10] There were 64 data bits and 8 check bits.




It seems to me by the nature of parity, it should suffice to have one bit of overhead per word, rather than eight. I can understand on something like an 8088/87, you might be stuck with 1/8 because the memory system deals in eight bits at a time, but why is it that way on a 64-bit machine?










  • Every parity bit you add halves the error rate. Hence 8 bits divide it by 256. (Though as error correction was used as well, the improvement is not so good.)

    – Yves Daoust
    Mar 6 at 17:24






  • 8/64 = 1/8. Guess how many parity bits modern computers use for parity on bytes??

    – RonJohn
    Mar 6 at 20:29















hardware memory cray






asked Mar 6 at 10:49 by rwallace












3 Answers


















There were 64 data bits and 8 check bits.




It seems to me by the nature of parity, it should suffice to have one bit of overhead per word, rather than eight. [...]




What you refer to here is simple single-bit parity: basically counting the number of one bits (even parity) or zero bits (odd parity). Such a mechanism can only detect an odd number of bit flips (1 or 3 or 5 or ... flipping). An even number of flips goes undetected and results in silent computing errors.
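The limitation described above is easy to demonstrate. Here is a minimal Python sketch (illustrative only, nothing to do with actual Cray hardware) of a single parity bit over a word, showing that one flip is caught but two flips cancel out:

```python
def parity_bit(word: int) -> int:
    """Even parity: 1 if the word contains an odd number of 1 bits."""
    return bin(word).count("1") % 2

data = 0b1011_0010
p = parity_bit(data)

one_flip = data ^ 0b0000_0100        # flip one bit: parity changes, error detected
assert parity_bit(one_flip) != p

two_flips = data ^ 0b0001_0100       # flip two bits: parity unchanged, error missed
assert parity_bit(two_flips) == p
```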



What the Cray-1 uses is a check-bit system based on Hamming encoding. Encoding parity this way allows detection of multiple-bit errors within a word, and even correction of errors on the fly. The 8-bit code used was able to correct single-bit errors (SEC) and detect double-bit errors (DED).



So while a machine with single-bit parity can detect single bit flips, it will always miss double flips. Further, even when an error is detected, the only option is to halt the program. With SEC-DED, a detected single-bit error is corrected on the fly (at the cost of perhaps a few cycles), and a multi-bit error halts the machine.
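The recovery behaviour can be sketched in Python with a toy Hamming(7,4) code plus one overall parity bit - 4 data bits and 4 check bits instead of the Cray's 64 and 8, and not the Cray's actual check-bit layout, but the same SEC-DED principle:

```python
def secded_encode(nibble: int) -> list[int]:
    """Hamming(7,4) at positions 1..7 plus an overall parity bit at index 0."""
    d = [(nibble >> i) & 1 for i in range(4)]
    code = [0] * 8
    code[3], code[5], code[6], code[7] = d[0], d[1], d[2], d[3]
    code[1] = code[3] ^ code[5] ^ code[7]       # check bit covering positions 1,3,5,7
    code[2] = code[3] ^ code[6] ^ code[7]       # check bit covering positions 2,3,6,7
    code[4] = code[5] ^ code[6] ^ code[7]       # check bit covering positions 4,5,6,7
    code[0] = code[1] ^ code[2] ^ code[3] ^ code[4] ^ code[5] ^ code[6] ^ code[7]
    return code

def secded_decode(code: list[int]):
    """Return (data, status); status is 'ok', 'corrected', or 'double-error'."""
    syndrome = 0                                 # XOR of positions holding a 1 bit
    for pos in range(1, 8):
        if code[pos]:
            syndrome ^= pos
    overall = 0
    for bit in code:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                           # odd number of flips: assume one, fix it
        code = code.copy()
        if syndrome:
            code[syndrome] ^= 1                  # syndrome names the flipped position
        status = "corrected"
    else:                                        # even parity but nonzero syndrome: 2 flips
        return None, "double-error"
    data = code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)
    return data, status

word = secded_encode(0b1011)
flipped = word.copy()
flipped[6] ^= 1                                  # single bit flip: silently repaired
assert secded_decode(flipped) == (0b1011, "corrected")
flipped[2] ^= 1                                  # second flip in the same word: flagged
assert secded_decode(flipped) == (None, "double-error")
```

With a single parity bit, both flipped-word cases above would at best halt the job; here the first is repaired transparently and only the second stops the machine.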




I can understand on something like an 8088/87, you might be stuck with 1/8 because the memory system deals in eight bits at a time, but why is it that way on a 64-bit machine?




Because it's still just 1/8th, but now with improved flavour :))



Considering the quite important function of invisible error correction, the question is rather why only 8. Longer codes would allow detection of longer error bursts and multi-bit correction. With the 1 Ki x 1 RAMs used (Fairchild 10415FC), any word width could have been built. Then again, the Cray-1 architecture marks a switch to the 'new' standard of 8-bit units - so using 8 check bits comes naturally. Doesn't it?
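Why exactly 8 can also be pinned down arithmetically: for m data bits, single-error correction needs the smallest r with 2^r >= m + r + 1 (each syndrome value must name one of the m + r bit positions, or 'no error'), and double-error detection costs one extra overall bit. A short sketch of that calculation (my arithmetic, not taken from Cray documentation):

```python
def sec_check_bits(m: int) -> int:
    """Smallest r with 2**r >= m + r + 1 (Hamming bound for single-error correction)."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (8, 16, 32, 64):
    print(f"{m} data bits: SEC needs {sec_check_bits(m)}, SEC-DED needs {sec_check_bits(m) + 1}")

# 64 data bits: 7 check bits suffice for SEC, so SEC-DED needs
# exactly the 8 check bits the Cray-1 memory word carries.
assert sec_check_bits(64) + 1 == 8
```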




Remark#1



Eventually it's the same development the PC took, just instead of going from 9 bit memory (SIMM) over 36 bit (PS/2) to today's 72 Bit DIMM, the Cray-1 leapfrogged all of this and started with 72 Bit right away.




Remark#2



Seymour Cray is known to have said that 'Parity is for Farmers' when designing the 6600. While this quote became famous for inspiring the reply 'Farmers buy Computers' when parity was introduced with the 7600, not many know what he was referring to on an implied level: the Doctrine of Parity, a US policy to make farming profitable again during and after the Great Depression - a policy that to some degree still results in higher food prices in the US than in most other countries.




Remark#3



The Cray Y-MP of 1990 even went a step further and added parity to (most) registers. Also the code was changed to enable double-bit correction and multi-bit detection.






answered Mar 6 at 11:16 by Raffzahn, edited Mar 7 at 21:21




















  • Cray certainly resisted parity and error checking hardware in the Cray-1, because it was a performance hit. AFAIK one (the first production?) Cray-1 was built without parity and delivered to a US government agency (can't remember exactly where), and it did have better benchmarked performance than any of the later production machines.

    – alephzero
    Mar 6 at 12:16






  • @alephzero: Would parity have required a performance hit if its sole function was to sound an alarm in case of parity fault to notify the user that the output from the current job should not be trusted, as opposed to trying to prevent erroneous computations? Even if parity-validation logic wouldn't be able to indicate whether a fetch had received valid data until long after the data had already been used, it could still provide an extremely valuable pass-fail indication of whether the output from a job should be trusted.

    – supercat
    Mar 6 at 19:09











  • @supercat: Per my CAL (Cray Assembler Language) reference card next to me, memory cycle time for scalar access is 11 clock periods but 10 clock periods for Serial 1 (which had parity rather than SECDED protection). There was in fact a performance hit.

    – Edward Barnard
    Mar 27 at 0:17






  • @EdwardBarnard: You're saying the 10 cycle duration was for parity but not SECDED? If so, then unless there was some faster mode without any sort of parity protection, it sounds like you're saying there was only a performance hit if one needed to be able to recover from parity errors (as opposed to merely sounding an alarm).

    – supercat
    Mar 27 at 3:13











  • @supercat: Memory access was either "vector mode" or "scalar mode", with access time a bit faster for vector mode - but still 1 clock period faster for Serial 1. There's a third mode, instruction fetch, not relevant here. This was literally wired into the hardware; no option to turn it on or off. There WAS an option as to whether or not to generate a hardware interrupt to report single, double, or both, but the single-bit error correction happened regardless of interrupt settings. I never worked with Serial 1 personally but did work with other CRAY-1's inside the operating system.

    – Edward Barnard
    Mar 27 at 15:31


















After the first Cray-1 was built, a calculation determined that the time between failures would be greatly extended by adding single-error-correction, double-error-detection (SECDED) hardware, without much cost in speed. The point is that with a large memory, random single-bit errors occur every few hours; with SECDED, an uncorrectable error occurs only every few years or so.
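That order-of-magnitude jump can be sketched with a back-of-envelope calculation. The per-bit upset rate and rewrite interval below are purely illustrative assumptions, chosen only to land in the "hours vs. years" regime this answer describes:

```python
bits = 2**20 * 72                  # 1 Mword x 72 bits of Cray-1 main memory
per_bit_rate = 3e-12               # upsets per bit per second (assumed, not measured)

# Any single-bit upset anywhere in memory - detected AND corrected by SECDED.
single_rate = bits * per_bit_rate
hours_between_singles = 1 / single_rate / 3600

# An uncorrectable event needs a second upset in the same 72-bit word while
# the first is still sitting there; assume a word lingers ~a day between rewrites.
exposure = 24 * 3600
double_rate = single_rate * (71 * per_bit_rate * exposure)
years_between_doubles = 1 / double_rate / (3600 * 24 * 365)

print(f"single-bit errors: about one every {hours_between_singles:.1f} hours")
print(f"uncorrectable double errors: about one every {years_between_doubles:.0f} years")
```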






  • Yes. Mean time between failures was a significant consideration. Multi-day runs for a single program were not uncommon. SECDED, allowing the machine to ride through flipped memory bits, was one of the factors enabling such long runs without hardware failure.

    – Edward Barnard
    Mar 27 at 0:21


















The extra bits are used to allow for error detection and correction (EDAC).



This scheme is described in detail in the Cray-1 Hardware Reference Manual, page 5-5 (~168).



The use of EDAC in the Cray-1 is rather ironic given that Seymour Cray is (in)famous for once saying




Parity is for farmers.




I think this is a reference to farm subsidies in Europe.






  • "Farm income parity" was a policy in 20th-century US agriculture, probably topical in the 60s and 70s, so I suppose Cray was referring to that.

    – another-dave
    Mar 7 at 23:56












Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "648"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9318%2fwhy-did-the-cray-1-have-8-parity-bits-per-word%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









27
















There were 64 data bits and 8 check bits.




It seems to me by the nature of parity, it should suffice to have one bit of overhead per word, rather than eight. [...]




What you refer to here is a simple single bit parity. Basically counting the number of ones (even parity) or zeros (odd). Such a mechanism can only detect an odd number of bit flips (1 or 3 or 5 or ... flipping). Even numbers of flips can't be detected and will result in undetected computing errors.



What the Cray uses is a parity system based on Hamming encoding. Encoding parity this way allows detection of multiple bit errors within a word and even correction of these on the fly. The 8-bit code used was able to correct single bit errors (SEC) and detect double error (DED).



So while a machine with a single bit parity can detect single bit flips, it will always fail on double flips. Further, even if an error is detected, the only solution is to halt the program. With SEC-DED, a single error detected will be recovered (final) on the fly (at cost of maybe a few cycles) and a multi-bit error will halt the machine.




I can understand on something like an 8088/87, you might be stuck with 1/8 because the memory system deals in eight bits at a time, but why is it that way on a 64-bit machine?




Because it's still just 1/8th, but now with improved flavour :))



Considering the quite important function of invisible error correction, the question is rather why only 8. Longer codes would allow to detect even longer errors and multi-bit corrections. With the 1 Ki by 1 RAMs used (Fairchild 10415FC), any width could have been made. Then again, while the Cray 1 architecture shows a switch to the 'new' standard of 8 bit units - so using 8 parity bits comes naturally. Doesn't it?




Remark#1



Eventually it's the same development the PC took, just instead of going from 9 bit memory (SIMM) over 36 bit (PS/2) to today's 72 Bit DIMM, the Cray-1 leapfrogged all of this and started with 72 Bit right away.




Remark#2



Seymour Cray is known to have said that 'Parity is for Farmers' when designing the 6600. While this quote was famous in inspiring the reply 'Farmers buy Computers' when parity got introduced with the 7600, not may know what he was referring to on an implied level: The Doctrine of Parity, a US policy to make farming profitable again during and after the great depression - a policy that to some degree still results in higher food prices in the US than in most other countries.




Remark#3



The Cray Y-MP of 1990 even went a step further and added parity to (most) registers. Also the code was changed to enable double-bit correction and multi-bit detection.






share|improve this answer




















  • 4





    Cray certainly resisted parity and error checking hardware in the Cray-1, because it was a performance hit. AFAIK one (the first production?) Cray-1 was built without parity and delivered to a US government agency (can't remember exactly where), and it did have better benchmarked performance than any of the later production machines.

    – alephzero
    Mar 6 at 12:16






  • 2





    @alephzero: Would parity have required a performance hit if its sole function was to sound an alarm in case of parity fault to notify the user that the output from the current job should not be trusted, as opposed to trying to prevent erroneous computations? Even if parity-validation logic wouldn't be able to indicate whether a fetch had received valid data until long after the data had already been used, it could still provide an extremely valuable pass-fail indication of whether the output from a job should be trusted.

    – supercat
    Mar 6 at 19:09











  • @supercat: Per my CAL (Cray Assembler Language) reference card next to me, memory cycle time for scalar access is 11 clock periods but 10 clock periods for Serial 1 (which had parity rather than SECDED protection). There was in fact a performance hit.

    – Edward Barnard
    Mar 27 at 0:17






  • 1





    @EdwardBarnard: You're saying the 10 cycle duration was for parity but not SECDED? If so, then unless there was some faster mode without any sort of parity protection, it sounds like you're saying there was only a performance hit if one needed to be able to recover from parity errors (as opposed to merely sounding an alarm).

    – supercat
    Mar 27 at 3:13











  • @supercat: Memory access was either "vector mode" or "scalar mode", with access time a bit faster for vector mode - but still 1 clock period faster for Serial 1. There's a third mode, instruction fetch, not relevant here. This was literally wired into the hardware; no option to turn on or off. There WAS an option as to whether or not generate a hardware interrupt to report single, double, or both, but the single-bit-error-correction happened regardless of interrupt settings. I never worked with Serial 1 personally but did other CRAY-1's inside the operating system.

    – Edward Barnard
    Mar 27 at 15:31















27
















There were 64 data bits and 8 check bits.




It seems to me by the nature of parity, it should suffice to have one bit of overhead per word, rather than eight. [...]




What you refer to here is a simple single bit parity. Basically counting the number of ones (even parity) or zeros (odd). Such a mechanism can only detect an odd number of bit flips (1 or 3 or 5 or ... flipping). Even numbers of flips can't be detected and will result in undetected computing errors.



What the Cray uses is a parity system based on Hamming encoding. Encoding parity this way allows detection of multiple bit errors within a word and even correction of these on the fly. The 8-bit code used was able to correct single bit errors (SEC) and detect double error (DED).



So while a machine with a single bit parity can detect single bit flips, it will always fail on double flips. Further, even if an error is detected, the only solution is to halt the program. With SEC-DED, a single error detected will be recovered (final) on the fly (at cost of maybe a few cycles) and a multi-bit error will halt the machine.




I can understand on something like an 8088/87, you might be stuck with 1/8 because the memory system deals in eight bits at a time, but why is it that way on a 64-bit machine?




Because it's still just 1/8th, but now with improved flavour :))



Considering the quite important function of invisible error correction, the question is rather why only 8. Longer codes would allow to detect even longer errors and multi-bit corrections. With the 1 Ki by 1 RAMs used (Fairchild 10415FC), any width could have been made. Then again, while the Cray 1 architecture shows a switch to the 'new' standard of 8 bit units - so using 8 parity bits comes naturally. Doesn't it?




Remark#1



Eventually it's the same development the PC took, just instead of going from 9 bit memory (SIMM) over 36 bit (PS/2) to today's 72 Bit DIMM, the Cray-1 leapfrogged all of this and started with 72 Bit right away.




Remark#2



Seymour Cray is known to have said that 'Parity is for Farmers' when designing the 6600. While this quote was famous in inspiring the reply 'Farmers buy Computers' when parity got introduced with the 7600, not may know what he was referring to on an implied level: The Doctrine of Parity, a US policy to make farming profitable again during and after the great depression - a policy that to some degree still results in higher food prices in the US than in most other countries.




Remark#3



The Cray Y-MP of 1990 even went a step further and added parity to (most) registers. Also the code was changed to enable double-bit correction and multi-bit detection.






share|improve this answer




















  • 4





    Cray certainly resisted parity and error checking hardware in the Cray-1, because it was a performance hit. AFAIK one (the first production?) Cray-1 was built without parity and delivered to a US government agency (can't remember exactly where), and it did have better benchmarked performance than any of the later production machines.

    – alephzero
    Mar 6 at 12:16






  • 2





    @alephzero: Would parity have required a performance hit if its sole function was to sound an alarm in case of parity fault to notify the user that the output from the current job should not be trusted, as opposed to trying to prevent erroneous computations? Even if parity-validation logic wouldn't be able to indicate whether a fetch had received valid data until long after the data had already been used, it could still provide an extremely valuable pass-fail indication of whether the output from a job should be trusted.

    – supercat
    Mar 6 at 19:09











  • @supercat: Per my CAL (Cray Assembler Language) reference card next to me, memory cycle time for scalar access is 11 clock periods but 10 clock periods for Serial 1 (which had parity rather than SECDED protection). There was in fact a performance hit.

    – Edward Barnard
    Mar 27 at 0:17






  • 1





    @EdwardBarnard: You're saying the 10 cycle duration was for parity but not SECDED? If so, then unless there was some faster mode without any sort of parity protection, it sounds like you're saying there was only a performance hit if one needed to be able to recover from parity errors (as opposed to merely sounding an alarm).

    – supercat
    Mar 27 at 3:13











  • @supercat: Memory access was either "vector mode" or "scalar mode", with access time a bit faster for vector mode - but still 1 clock period faster for Serial 1. There's a third mode, instruction fetch, not relevant here. This was literally wired into the hardware; no option to turn on or off. There WAS an option as to whether or not generate a hardware interrupt to report single, double, or both, but the single-bit-error-correction happened regardless of interrupt settings. I never worked with Serial 1 personally but did other CRAY-1's inside the operating system.

    – Edward Barnard
    Mar 27 at 15:31













27












27








27









There were 64 data bits and 8 check bits.




It seems to me by the nature of parity, it should suffice to have one bit of overhead per word, rather than eight. [...]




What you refer to here is a simple single bit parity. Basically counting the number of ones (even parity) or zeros (odd). Such a mechanism can only detect an odd number of bit flips (1 or 3 or 5 or ... flipping). Even numbers of flips can't be detected and will result in undetected computing errors.



What the Cray uses is a parity system based on Hamming encoding. Encoding parity this way allows detection of multiple bit errors within a word and even correction of these on the fly. The 8-bit code used was able to correct single bit errors (SEC) and detect double error (DED).



So while a machine with a single bit parity can detect single bit flips, it will always fail on double flips. Further, even if an error is detected, the only solution is to halt the program. With SEC-DED, a single error detected will be recovered (final) on the fly (at cost of maybe a few cycles) and a multi-bit error will halt the machine.




I can understand on something like an 8088/87, you might be stuck with 1/8 because the memory system deals in eight bits at a time, but why is it that way on a 64-bit machine?




Because it's still just 1/8th, but now with improved flavour :))



Considering the quite important function of invisible error correction, the question is rather why only 8. Longer codes would allow to detect even longer errors and multi-bit corrections. With the 1 Ki by 1 RAMs used (Fairchild 10415FC), any width could have been made. Then again, while the Cray 1 architecture shows a switch to the 'new' standard of 8 bit units - so using 8 parity bits comes naturally. Doesn't it?




Remark#1



Eventually it's the same development the PC took, just instead of going from 9 bit memory (SIMM) over 36 bit (PS/2) to today's 72 Bit DIMM, the Cray-1 leapfrogged all of this and started with 72 Bit right away.




Remark#2



Seymour Cray is known to have said that 'Parity is for Farmers' when designing the 6600. While this quote was famous in inspiring the reply 'Farmers buy Computers' when parity got introduced with the 7600, not may know what he was referring to on an implied level: The Doctrine of Parity, a US policy to make farming profitable again during and after the great depression - a policy that to some degree still results in higher food prices in the US than in most other countries.




Remark#3



The Cray Y-MP of 1990 even went a step further and added parity to (most) registers. Also the code was changed to enable double-bit correction and multi-bit detection.






share|improve this answer

















There were 64 data bits and 8 check bits.




It seems to me by the nature of parity, it should suffice to have one bit of overhead per word, rather than eight. [...]




What you refer to here is a simple single bit parity. Basically counting the number of ones (even parity) or zeros (odd). Such a mechanism can only detect an odd number of bit flips (1 or 3 or 5 or ... flipping). Even numbers of flips can't be detected and will result in undetected computing errors.



What the Cray uses is a parity system based on Hamming encoding. Encoding parity this way allows detection of multiple bit errors within a word and even correction of these on the fly. The 8-bit code used was able to correct single bit errors (SEC) and detect double error (DED).



So while a machine with a single bit parity can detect single bit flips, it will always fail on double flips. Further, even if an error is detected, the only solution is to halt the program. With SEC-DED, a single error detected will be recovered (final) on the fly (at cost of maybe a few cycles) and a multi-bit error will halt the machine.




I can understand on something like an 8088/87, you might be stuck with 1/8 because the memory system deals in eight bits at a time, but why is it that way on a 64-bit machine?




Because it's still just 1/8th, but now with improved flavour :))



Considering the quite important function of invisible error correction, the question is rather why only 8. Longer codes would allow to detect even longer errors and multi-bit corrections. With the 1 Ki by 1 RAMs used (Fairchild 10415FC), any width could have been made. Then again, while the Cray 1 architecture shows a switch to the 'new' standard of 8 bit units - so using 8 parity bits comes naturally. Doesn't it?




Remark#1



Eventually it's the same development the PC took, just instead of going from 9 bit memory (SIMM) over 36 bit (PS/2) to today's 72 Bit DIMM, the Cray-1 leapfrogged all of this and started with 72 Bit right away.




Remark#2



Seymour Cray is known to have said that 'Parity is for Farmers' when designing the 6600. While this quote was famous in inspiring the reply 'Farmers buy Computers' when parity got introduced with the 7600, not may know what he was referring to on an implied level: The Doctrine of Parity, a US policy to make farming profitable again during and after the great depression - a policy that to some degree still results in higher food prices in the US than in most other countries.




Remark#3



The Cray Y-MP of 1990 even went a step further and added parity to (most) registers. Also the code was changed to enable double-bit correction and multi-bit detection.







share|improve this answer














share|improve this answer



share|improve this answer








edited Mar 7 at 21:21

























answered Mar 6 at 11:16









RaffzahnRaffzahn

55k6136223




55k6136223







  • 4





    Cray certainly resisted parity and error checking hardware in the Cray-1, because it was a performance hit. AFAIK one (the first production?) Cray-1 was built without parity and delivered to a US government agency (can't remember exactly where), and it did have better benchmarked performance than any of the later production machines.

    – alephzero
    Mar 6 at 12:16






  • 2





    @alephzero: Would parity have required a performance hit if its sole function was to sound an alarm in case of parity fault to notify the user that the output from the current job should not be trusted, as opposed to trying to prevent erroneous computations? Even if parity-validation logic wouldn't be able to indicate whether a fetch had received valid data until long after the data had already been used, it could still provide an extremely valuable pass-fail indication of whether the output from a job should be trusted.

    – supercat
    Mar 6 at 19:09











  • @supercat: Per my CAL (Cray Assembler Language) reference card next to me, memory cycle time for scalar access is 11 clock periods but 10 clock periods for Serial 1 (which had parity rather than SECDED protection). There was in fact a performance hit.

    – Edward Barnard
    Mar 27 at 0:17






  • 1





    @EdwardBarnard: You're saying the 10 cycle duration was for parity but not SECDED? If so, then unless there was some faster mode without any sort of parity protection, it sounds like you're saying there was only a performance hit if one needed to be able to recover from parity errors (as opposed to merely sounding an alarm).

    – supercat
    Mar 27 at 3:13











  • @supercat: Memory access was either "vector mode" or "scalar mode", with access time a bit faster for vector mode - but still 1 clock period faster for Serial 1. There's a third mode, instruction fetch, not relevant here. This was literally wired into the hardware; no option to turn on or off. There WAS an option as to whether or not generate a hardware interrupt to report single, double, or both, but the single-bit-error-correction happened regardless of interrupt settings. I never worked with Serial 1 personally but did other CRAY-1's inside the operating system.

    – Edward Barnard
    Mar 27 at 15:31












  • 4

    Cray certainly resisted parity and error checking hardware in the Cray-1, because it was a performance hit. AFAIK one (the first production?) Cray-1 was built without parity and delivered to a US government agency (can't remember exactly where), and it did have better benchmarked performance than any of the later production machines.

    – alephzero
    Mar 6 at 12:16

  • 2

    @alephzero: Would parity have required a performance hit if its sole function was to sound an alarm in case of parity fault to notify the user that the output from the current job should not be trusted, as opposed to trying to prevent erroneous computations? Even if parity-validation logic wouldn't be able to indicate whether a fetch had received valid data until long after the data had already been used, it could still provide an extremely valuable pass-fail indication of whether the output from a job should be trusted.

    – supercat
    Mar 6 at 19:09

  • @supercat: Per my CAL (Cray Assembler Language) reference card next to me, memory cycle time for scalar access is 11 clock periods but 10 clock periods for Serial 1 (which had parity rather than SECDED protection). There was in fact a performance hit.

    – Edward Barnard
    Mar 27 at 0:17

  • 1

    @EdwardBarnard: You're saying the 10 cycle duration was for parity but not SECDED? If so, then unless there was some faster mode without any sort of parity protection, it sounds like you're saying there was only a performance hit if one needed to be able to recover from parity errors (as opposed to merely sounding an alarm).

    – supercat
    Mar 27 at 3:13

  • @supercat: Memory access was either "vector mode" or "scalar mode", with access time a bit faster for vector mode - but still 1 clock period faster for Serial 1. There's a third mode, instruction fetch, not relevant here. This was literally wired into the hardware; no option to turn it on or off. There WAS an option as to whether or not to generate a hardware interrupt to report single, double, or both, but the single-bit-error correction happened regardless of interrupt settings. I never worked with Serial 1 personally but did work on other Cray-1's inside the operating system.

    – Edward Barnard
    Mar 27 at 15:31

9

After the first Cray-1 was built, some calculation determined that the time between failures would be greatly extended by having single-error-correction, double-error-detection (SECDED) without much cost in speed. The point is that with a large memory, random single-bit errors occur every few hours; with SECDED, it's every few years or so.

  • Yes. Mean time between failure was a significant consideration. Multi-day runs for a single program were not uncommon. SECDED, allowing the machine to ride through flipped memory bits, was one of the factors enabling the long runs without hardware failure.

    – Edward Barnard
    Mar 27 at 0:21
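The mechanism behind that calculation can be sketched in code. This is a toy model (my own construction, not the Cray-1's actual circuitry): 64 data bits, 7 Hamming check bits at the power-of-two positions, and an eighth overall-parity bit, giving single-error correction and double-error detection.

```python
POWERS = (1, 2, 4, 8, 16, 32, 64)    # positions of the 7 Hamming check bits

def encode(data):
    """Spread a 64-bit word over 72 bits: positions 1..71 hold data plus
    Hamming check bits; position 0 holds an overall parity bit."""
    bits = [0] * 72
    d = 0
    for pos in range(1, 72):
        if pos & (pos - 1):          # not a power of two: a data position
            bits[pos] = (data >> d) & 1
            d += 1
    for p in POWERS:                 # check bit p covers positions with bit p set
        parity = 0
        for pos in range(1, 72):
            if pos & p:
                parity ^= bits[pos]
        bits[p] = parity
    for pos in range(1, 72):         # overall parity enables double-error detection
        bits[0] ^= bits[pos]
    return bits

def decode(bits):
    """Return (data, status): status is 'ok', 'corrected', or 'double-error'."""
    bits = list(bits)
    syndrome = 0
    for p in POWERS:
        parity = 0
        for pos in range(1, 72):
            if pos & p:
                parity ^= bits[pos]
        if parity:
            syndrome |= p            # a single flip at position e yields syndrome e
    overall = 0
    for b in bits:
        overall ^= b                 # odd iff an odd number of bits flipped
    if syndrome and overall:
        bits[syndrome] ^= 1          # single error: flip it back
        status = 'corrected'
    elif syndrome:
        status = 'double-error'      # even flip count, bad syndrome: detect only
    elif overall:
        status = 'corrected'         # the parity bit itself flipped; data intact
    else:
        status = 'ok'
    data = d = 0
    for pos in range(1, 72):
        if pos & (pos - 1):
            data |= bits[pos] << d
            d += 1
    return data, status

word = 0x0123456789ABCDEF
enc = encode(word)
enc[37] ^= 1                         # one bit flips in memory
print(decode(enc))                   # word recovered, status 'corrected'
enc[38] ^= 1                         # a second bit flips in the same word
print(decode(enc)[1])                # 'double-error'
```

With this, a single flipped bit per word is invisible to the running job, and the machine only has to stop (or flag the job) on the much rarer case of two flips landing in the same word.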
answered Mar 6 at 19:09

ttw
1912
2

The extra bits are used to allow for error detection and correction (EDAC).

This scheme is described in detail in the Cray-1 Hardware Reference Manual at page 5-5 (~168).

The use of EDAC in the Cray-1 is rather ironic given that Seymour Cray is (in)famous for once saying

    Parity is for farmers.

Which I think is a reference to farm subsidies in Europe.

  • 3

    "Farm income parity" was a policy in 20th century US agriculture, probably topical in the 60s and 70s, so I suppose Cray was referring to that.

    – another-dave
    Mar 7 at 23:56
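The count of eight check bits in that scheme is not arbitrary. As an aside (my arithmetic, not the manual's): the Hamming bound says single-error correction of m data bits needs r check bits with 2^r ≥ m + r + 1, which gives r = 7 for m = 64; the eighth bit is the overall parity that upgrades SEC to SECDED.

```python
def sec_check_bits(m):
    """Minimum Hamming check bits r for single-error correction of m data bits.
    The r-bit syndrome must distinguish 'no error' from an error in any of the
    m + r code-word positions, so we need 2**r >= m + r + 1."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

r = sec_check_bits(64)
print(r)        # 7: enough to correct any single-bit error in a 71-bit code word
print(r + 1)    # 8: one more overall-parity bit adds double-error detection
```

So a 64-bit word needs exactly 7 + 1 = 8 check bits for SECDED, which matches the 72-bit words quoted in the question.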
answered Mar 7 at 18:40

Peter Camilleri
88439