Which was the first programming language that had data types?

Machine language (and Assembly language) don't have the concept of data types, so if you want to add an int and a float variable in Assembly, you have to use the appropriate Assembly instruction that adds an int and a float.



But if you are working with a high level language (for example, C), all you have to do is "mark" one variable with the int keyword and the other variable with the float keyword, then use the + operator to add the two variables together, and the compiler will generate the machine language instructions that add an int and a float.
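For example, a minimal C sketch of what I mean (the compiler inserts the int-to-float conversion before the addition):

#include <stdio.h>

int main(void)
{
    int   i = 2;     /* "marked" as an integer       */
    float f = 3.5f;  /* "marked" as a floating point */

    /* the compiler converts i to float before the addition */
    float sum = i + f;

    printf("%f\n", sum);  /* prints 5.500000 */
    return 0;
}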



But I am wondering, which was the first programming language that had data types?







Tags: history, programming, assembly






asked Feb 5 at 13:28 by user11869
















  • Not sure if it's the earliest, but the initial version of FORTRAN had at least two "types": integers ("fixed point") and floating point -- see also the Programmer's Reference Manual, The FORTRAN Automatic Coding System for the IBM 704 EDPM.
    – Felix Palmen, Feb 5 at 14:01

  • I can't really answer with "the first" to have data types, although I suspect it's Fortran. Note that you don't have to have manifest typing to have types. In C, you have to say int i; float f; to tag i as capable of holding an int and f as holding a float. In many other languages, such as Lisp, Perl, Python, many Basics, JavaScript, and so on, the variables can hold any type, but the operators know what to do with the values in them because the values know their types.
    – Dranon, Feb 5 at 14:42

  • Welcome to Retrocomputing! Since you use it as an example in your question -- not that it was the first -- the main difference between the C language and its own predecessor B was the introduction of data types.
    – Dr Sheldon, Feb 5 at 15:02

  • It's unclear to me what the actual question is here. Since even machine language has data types (bytes, words, doublewords, etc.), I think this question may actually be asking "Which was the first programming language that had implicit type coercion?"
    – Ken Gober, Feb 5 at 15:10

  • So why are you talking about conversion now? How is this asked in the question here? And how is this implemented in an assembler?
    – Felix Palmen, Feb 5 at 17:04












6 Answers






42














The premise:




Machine language (and Assembly language) don't have the concept of data types




is not quite correct, because a tagged architecture means exactly this: a machine language where the data is tagged with its "type" (even if not quite the types we know from higher-level languages).
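(To illustrate the idea: a toy model, in C, of what a tagged architecture does in hardware. The struct and names here are made up for the sketch; a real tagged machine attaches the tag to every memory word and dispatches in hardware:)

#include <stdio.h>

enum tag { TAG_INT, TAG_FLOAT };

/* every "word" carries a tag saying what its bits mean */
struct word {
    enum tag tag;
    union { int i; float f; } v;
};

/* the "machine" dispatches on the tags, not on the instruction */
static void add(struct word a, struct word b)
{
    if (a.tag == TAG_INT && b.tag == TAG_INT)
        printf("%d\n", a.v.i + b.v.i);
    else {
        float x = (a.tag == TAG_INT) ? (float)a.v.i : a.v.f;
        float y = (b.tag == TAG_INT) ? (float)b.v.i : b.v.f;
        printf("%f\n", x + y);
    }
}

int main(void)
{
    struct word a = { TAG_INT,   { .i = 2 } };
    struct word b = { TAG_FLOAT, { .f = 3.5f } };
    add(a, b);  /* prints 5.500000 */
    return 0;
}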



Probably the first widespread tagged-architecture computer was the Burroughs B5000 (or 5500?) from the 1960s. But FORTRAN predates this.






answered Feb 5 at 13:56 by Radovan Garabík (edited Feb 8 at 6:42)

  • Except that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77, way after ALGOL 60. (Intrinsics themselves were introduced in Fortran 66.)
    – Raffzahn, Feb 5 at 17:06

  • Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?
    – Felix Palmen, Feb 5 at 18:20

  • I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.
    – another-dave, Feb 6 at 0:15

  • @another-dave You're right. FORTRAN II did convert on expression assignments. I added that as a footnote.
    – Raffzahn, Feb 6 at 14:58

  • Thirteen comments of being mean to each other‽ Come on, you're all better than that.
    – wizzwizz4, Feb 8 at 6:47

33















Machine language (and Assembly language) don't have the concept of data types, so if you want to add an int and a float variable in Assembly, you have to use the appropriate Assembly instruction that adds an int and a float.




Erm... this sounds as if you're mixing up the idea of data types and operations on them. Data types are memory structures; operations are independent of them. And just because some languages provide operators that can be used with multiple data types doesn't mean they do so in general and always. For example, in C the sine function is defined as:





double sin(double x)


This means feeding it anything but a double, for example an integer, will screw it up, much like a floating point operation (like FSIN) on an x87 will choke if an integer is handed to it as a parameter.



Long story short, Assembler does have data types and does obey them (*1). For example on a /360 (1964) that would be:



Type              Example     Alignment
Character         C'1234'     Byte
Binary            B'0101'     Byte
Packed (BCD)      P'1234'     Byte
Decimal           Z'1234'     Byte
Char (hex)        X'1234'     Byte
Integer 16 Bit    H'1234'     Halfword
Integer 32 Bit    F'1234'     Word
Float (32 bit)    E'-12.34'   Word
Float (64 bit)    D'-12.34'   Doubleword
Float (128 bit)   L'-12.34'   Doubleword
Pointer (32 Bit)  A(1234)     Word
Pointer (16 Bit)  Y(1234)     Halfword


(There are also Q, S and V pointers, but that's extreme high level stuff :))



Using the wrong data type in an instruction will make the assembler throw a warning, exactly the same way as a C compiler does.




But if you are working with a high level language (for example: C), all you have to do is "mark" one variable with the int keyword and mark the other variable with the float keyword, and then use the '+' operator to add the two variables together, and the compiler will generate the machine language instruction that adds an int and float.




As said before, C does this only for a handful of predefined operators, for convenience, not in general and all over. C99 resolved this in part by selecting one of several possible functions to fit the operand type(s), and C++ uses overloading. Still, not by default and everywhere.
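(For illustration -- a minimal sketch of the C99 mechanism referred to above, the type-generic macros of <tgmath.h>, which pick the function variant from the operand type:)

#include <tgmath.h>   /* C99 type-generic math */
#include <stdio.h>

int main(void)
{
    double d = 0.5;
    float  f = 0.5f;

    /* the same spelling 'sin' dispatches to sin() for the double
       and to sinf() for the float, based on the argument's type */
    printf("%f\n", sin(d));
    printf("%f\n", sin(f));
    return 0;
}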




But I am wondering, which was the first programming language that had data types?




As shown, it's Assembly :))



Beside that, each and every programming language that was ever designed and implemented for a real machine does include data types. After all, without them it wouldn't operate, would it?



If the question is more about implied type conversion (and/or selection), then again Assembly will be a valid answer, as Assembly offers the same ways as C/C++ to write code that adapts to data types (*2). Now, if you insist on excluding Assembly for whatever ideological reason, then ALGOL 60 (*3) may be a good candidate. The often-cited FORTRAN introduced it quite late (*4), with FORTRAN 77 (in 1978) (*5), using intrinsics (introduced with FORTRAN 66).




*1 - Or better, can obey them, as many - let's say less proficient - programmers decide to ignore or even disable that feature.



*2 - As usual, the secret lies within meta-programming - aka macros - much like the way you do overloading in C++. Except that Assembler does not even force you to use existing operators.
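(A loose analogue of that macro point, in C rather than any particular assembler, since macro syntax differs per assembler; the expansion adapts to whatever operand types appear at the use site:)

#include <stdio.h>

/* expanded at each use site, so the '+' compiles to an integer add
   or a floating-point add depending on the operands -- loosely like
   an assembler macro emitting different instructions per type */
#define ADD(a, b) ((a) + (b))

int main(void)
{
    printf("%d\n", ADD(1, 2));      /* integer addition */
    printf("%f\n", ADD(1.5, 2.5));  /* double addition  */
    return 0;
}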



*3 - In fact, ALGOL is a very nice example of the issues of automatic conversion and how to handle them. Where ALGOL 60 added arbitrary type conversion, like its descendant C, ALGOL 68 later restricted automatic type conversion to work only upward, to avoid program/data errors due to precision loss. So an INT could be implicitly converted to a FLOAT, but a downward conversion had to be explicit.



*4 - Which kept people using explicit conversions way into the '80s, making those programs hard to update even today. A great example of the advantages of a clear, stringent and centralized definition: the ability to switch from single to double or long with just a few changes, instead of debugging huge piles of old code to find each and every explicit conversion.



*5 - As another-dave pointed out in a comment, IBM's Fortran II (of 1958) did automatic type conversion between float and int when assigning the result of an expression (see p. 22, 'Mode of an Arithmetic Statement', in the manual). The expression itself had to be, in all parts, either integer or float, thus it might not fit the case made by the OP.






  • I'm not the downvoter, but this shows a very inaccurate understanding of types. The C function prototype does not mean that "feeding anything but a double, for example an integer, will screw it up"; it means that anything but a double -- or something convertible to a double -- is forbidden by the type system and won't compile. Likewise, having annotations in a specific assembler that the coder is free to turn off, and which give warnings, not errors, if your code describes a nonsensical operation, is not a static type system in the style of C or ALGOL.
    – Mason Wheeler, Feb 5 at 17:23

  • @MasonWheeler So am I right to understand your point is about the way the warnings are presented and how that is handled afterwards? Correct me, but are these not features of the development environment rather than of the language? C is a really bad example, as it did allow type mismatches to produce bad code - not to mention that several compilers did generate an a.out despite warnings and errors given. So it's likewise up to the user (or his policies represented in IDE settings) how to handle it. Not really a difference, right? Not everything we assume to be part of a language actually is. :)
    – Raffzahn, Feb 5 at 17:42

  • This seems to be a snarky reply saying "assembly has types if you squint hard enough", which is more of a comment than an answer.
    – BlueRaja - Danny Pflughoeft, Feb 5 at 18:03

  • @BlueRaja-DannyPflughoeft No need to squint at all. It's all there. If people decide not to use it, it can't be made the fault of the language. Besides, the squint is maybe rather on the side of people looking at the puny capabilities of the GNU assembler and judging all the others with that knowledge - much like saying cars don't have air bags by looking at a Yugo.
    – Raffzahn, Feb 5 at 19:42

  • When you say that "Assembly [was the first language to have datatypes]", are you claiming that there is essentially one programming language called "Assembly", or are you claiming that IBM S/360 assembly is the first one?
    – Wilson, Feb 6 at 14:01


















17














Perhaps Plankalkül (1942-45).




Plankalkül [has ...] floating point arithmetic, arrays, hierarchical record structures [...] The Plankalkül provides a data structure called generalized graph (verallgemeinerter Graph), which can be used to represent geometrical structures. [...] Some features of the Plankalkül: [...] composite types are arrays and tuples [...] The only primitive data type in the Plankalkül is a single bit, denoted by S0. Further data types can be built up from these.




Addition: from Stanford CS-76-562, Early development of programming languages (Knuth & Pardo):




Thus the Plankalkül included the important concept of hierarchically structured data, going all the way down to the bit level. Such advanced data structures did not enter again into programming languages until the late 1950's, in IBM's Commercial Translator. The idea eventually appeared in many other languages, such as FACT, COBOL, PL/I, and extensions of ALGOL 60.




There are some details about numbers also:




Integer variables in the Plankalkül were represented by type A9. Another special type was used for floating-binary numbers, namely [...] The first three-bit component here was for signs and special markers -- indicating, for example, whether the number was real or imaginary or zero; the second was for a seven-bit exponent in two's complement notation; and the final 22 bits represented the 25-bit fraction part of a normalized number, with the redundant leading "11" bit suppressed.







  • @FelixPalmen: yes?
    – Tomas By, Feb 5 at 15:06

  • You can write programs, and you can perform all the operations theoretically on paper. Have you never taken a programming course at university, and sat an exam?
    – Tomas By, Feb 5 at 15:16

  • It says "arrays, records, graphs" - all those are data types. And it also says floating point math, so presumably there are both integers and floats in some form. "Primitive" is not the same as "built-in".
    – Tomas By, Feb 5 at 15:28

  • @FelixPalmen: writing programs is programming, correct.
    – Tomas By, Feb 5 at 15:29

  • Why not? 8008 assembler was designed, and coded in, before any processor capable of running it existed. That is how design works. Fortran, as a notably typed language, was designed and coded in before a Fortran compiler was ever written. This is a chicken/egg issue. The people designing this language designed it fully anticipating building a real computer later. This was part of their justification for the money. Duh.
    – Harper, Feb 5 at 18:14


















5














Let's get the answer to the question out of the way first. Limiting ourselves to high level languages designed for electronic digital computers, and that are not really obscure, the answer is between COBOL and Fortran, depending on which was invented first.




Machine language (and Assembly language) don't have the concept of data types




This is not true. Many assembler languages have multiple different-sized words they can operate on, and some have floating point types.




But if you are working with a high level language (for example: C), all you have to do is "mark" one variable with the int keyword and mark the other variable with the float keyword, and then use the + operator to add the two variables together, and the compiler will generate the machine language instruction that adds an int and float.




That's called implicit coercion, and while it's true that you need data types to do implicit coercion -- so that the compiler knows how to do the coercion -- coercion is not synonymous with data types, or even a necessary condition for them. Swift, for example, has no implicit coercion: you always have to convert both operands to the same type when doing arithmetic.



There are three related concepts here that are being confused (a minimal C sketch of all three follows the list):

  • types assign a meaning to certain bit patterns in memory. They tell you and the compiler what kind of object a thing in memory is and what you can do with it.

  • type checking is where the compiler or the language runtime checks that an operation is valid for a particular type.

  • implicit coercion is where the compiler or language runtime has a rule for automatically converting one type to another if needed.
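A minimal C sketch of all three (the variable names are made up for illustration, and the commented-out line is deliberately invalid):

#include <stdio.h>

int main(void)
{
    int    i = 3;      /* a type: 'int' says what these bits mean */
    double d = 0.25;
    int   *p = &i;

    /* type checking: the compiler rejects operations that make no
       sense for the type, e.g. the (commented-out) line below */
    /* double bad = p; */

    /* implicit coercion: i is converted to double before the add */
    double sum = i + d;

    printf("%f\n", sum);
    return 0;
}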

Almost all computer languages are typed to some degree. Some languages are called "typeless", e.g. the predecessor to C, which was called "B"; in reality these languages actually have exactly one type, often the machine word. What really distinguishes languages is not whether they are typed or not, but how much type checking is done, when the type checking is done, and what happens when mismatched types are found.



Let's look at some examples:



One of the biggest complaints about Javascript is that its type system is very weak. This is not really true: when a program is running, the interpreter always knows exactly what type every object is. The problems with Javascript occur because type checking is done at run time (this is called "dynamic typing"; compile time type checking is called "static typing"), and if you perform an operation on an object of an incompatible type, Javascript will try to coerce it into a compatible type, sometimes with surprising results.



C is relatively strongly typed, with static type checking. This was not always the case. In the pre-ANSI standard days, not all conversions between types were checked. Perhaps the most egregious issue was that the compiler didn't check assignments between pointers and ints. On many architectures you got away with it because int and someType * were the same size. However, if they were not the same size (as with my Atari ST Megamax C compiler), assigning a pointer to an int would lose bits, and then hilarity would ensue.
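(A sketch of that hazard; the explicit 16-bit truncation here stands in for the 16-bit-int compiler described above, so the bit loss is visible on a modern machine:)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int  x = 42;
    int *p = &x;

    /* pretend 'int' were 16 bits while pointers are 32: keeping only
       the low 16 bits of the address loses information */
    uint16_t as_int = (uint16_t)(uintptr_t)p;

    printf("full pointer : %p\n", (void *)p);
    printf("low 16 bits  : 0x%04x\n", (unsigned)as_int);
    /* a pointer rebuilt from as_int alone would point elsewhere */
    return 0;
}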



The trend today seems to be towards statically typed languages but with type inference. Type inference allows the programmer to omit the type specification if the compiler can infer the type of the object from the context. For example, in Swift:



func square(_ x: Int) -> Int {
    return x * x
}

let a = square(3)


defines a function square that takes an Int and returns an Int, and then applies it to 3, assigning the result to a. The compiler infers the type of a from the return type of the function, and the type of the literal 3 from the type of the function's parameter. In C I would have to declare the type of a, although C does have limited inference for literals.



Type inference seems to be a new trend although, as with all things in Computer Science, the concept probably dates back decades. Statically typed languages are as old as high level languages.






  • "Type inference seems to be a new trend although ... the concept probably dates back decades." Wikipedia claims the Hindley-Milner principal type algorithm goes back to 1969. Certainly Haskell had type inference from day 1, Haskell dates back to 1990, and Haskell is loosely based on Miranda, which dates to 1985... It seems real language implementations have been doing this for a while.

    – MathematicalOrchid
    Feb 7 at 14:11











  • @MathematicalOrchid I tend to think of everyone after 1980 as being new - that was when I first came in contact with real computers. I am aware that type inference has been around in functional programming languages for a long time but as a popular trend for mainstream languages, it is quite new.

    – JeremyP
    Feb 8 at 9:54











  • To be clear, I wasn't disputing that the trend is new, I was confirming that the original idea itself is quite old.

    – MathematicalOrchid
    Feb 8 at 13:22


















5














There's a lot of heated discussion about what the meaning of 'programming language that had data types' might be. In the absence of clarification from the OP, here's my opinion.



Definitions:



'Programming language' - any language in which programs are written. For the purposes of this answer, I'm restricting this to languages which were implemented 'soon' after design, for some vague value of 'soon' (I don't exclude languages which had programs written before the implementation). I wish to exclude Plankalkül from this answer, work of genius though it may be, simply because it did not become known to the world until after data-typing became commonplace. Until implementation, it's a theoretical idea, not a programming language - though perhaps as a workaday programmer I am prejudiced.



'Language with data types' - I think the bedrock requirements here are that the language defines more than one type, and there be some way to indicate what type a quantity has. I include explicit declaration, implicit typing (by denotation for literals, initial letters for variables, or weird sigil schemes), and runtime determination.



Lastly, if a language is said by its population of programmers to be 'untyped' then I think we should agree with those programmers. This means BLISS is typeless even though it has builtins that will treat a machine word as holding a floating-point value.



Having established my frame of reference, I say the answer is "early FORTRAN" (1954). FORTRAN is normally considered to have been born in 1957, but this survey of early programming languages by Knuth shows, on pages 62-63, an early implementation where the convention of I-N as integer, others as real numbers, was already in place.






  • Knuth & Pardo 1976, yes, if you check the list on p. 1, then Plankalkül is #1 and FORTRAN is #11.

    – Tomas By
    Feb 7 at 1:44











  • @TomasBy - you're referring to the order of language appearance, right? I'm inclined to not consider Plankalkül for the purpose of this question since it was not implemented. As far as I can tell, no implemented language until Fortran had the data typing we're looking for. (I was quite disappointed that Glennie's Autocode didn't)

    – another-dave
    Feb 7 at 2:08











  • Will do later. I admit it's a slippery slope since as observed, people program in a language before it's implemented (extreme case: compiler bootstrapping). However, feels like it was a paper exercise, even if not intended that way. I'll do a little more reading first. Thanks for the note.

    – another-dave
    Feb 7 at 13:01






  • 1





    @IMSoP - I laid my cards on the table!

    – another-dave
    Feb 9 at 1:15


















0














Not the first, but Algol 60 deserves honorable mention. It had strong typing, and type mismatches caused compiler errors instead of automatic conversions.



The typing system of Algol 60 was better than that of Fortran or COBOL.



Incidentally, there was a period of four years when the major languages were launched: 1957 Fortran, 1958 Lisp, 1959 COBOL, and 1960 Algol.






  • Agreed, but there was that weird thing of the specification part (if I recall the terminology correctly) being optional for formal procedure parameters. Many implementations required it. But if you don't know the parameter data types (without exhaustive analysis of every potential execution path!), how can you compile the code for the procedure, in the general case? Hmm, maybe this should be a new question!
    – another-dave, Feb 7 at 13:05

  • I have forgotten too much Algol to follow your comment. Suffice it to say that Algol was a step in that direction.
    – Walter Mitty, Feb 7 at 13:16

  • Oh, Algol was a long way ahead of its contemporaries. I'm just commenting on what seems like a weird oversight to me. Per the Revised Report, real procedure foo(a, b) begin foo := a + b end; is legal and complete. There is no type specification for a and b.
    – another-dave, Feb 7 at 23:38










Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "648"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9085%2fwhich-was-the-first-programming-language-that-had-data-types%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























6 Answers
6






active

oldest

votes








6 Answers
6






active

oldest

votes









active

oldest

votes






active

oldest

votes









42














The premise:




Machine language (and Assembly language) don't have the concept of data types




is not quite correct, because tagged architecture means exactly this, machine language where the data is tagged for its "type" (even though not quite what we know from higher level languages).



Probably the first widespread tagged architecture computer was the Burrough B5000 (or 5500?) from 1960s. But FORTRAN predates this.






share|improve this answer

























  • Except, that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77. Way after ALGOL 60. (Intrinsics itself where introduced in Fortran 66).

    – Raffzahn
    Feb 5 at 17:06







  • 8





    Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?

    – Felix Palmen
    Feb 5 at 18:20






  • 3





    I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.

    – another-dave
    Feb 6 at 0:15












  • @another-dave You're right. FORTRAN II did convert on expression assignemnts. I added that as a footnote.

    – Raffzahn
    Feb 6 at 14:58






  • 2





    Thirteen comments of being mean to each other‽ Come on, you're all better than that.

    – wizzwizz4
    Feb 8 at 6:47
















42














The premise:




Machine language (and Assembly language) don't have the concept of data types




is not quite correct, because tagged architecture means exactly this, machine language where the data is tagged for its "type" (even though not quite what we know from higher level languages).



Probably the first widespread tagged architecture computer was the Burrough B5000 (or 5500?) from 1960s. But FORTRAN predates this.






share|improve this answer

























  • Except, that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77. Way after ALGOL 60. (Intrinsics itself where introduced in Fortran 66).

    – Raffzahn
    Feb 5 at 17:06







  • 8





    Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?

    – Felix Palmen
    Feb 5 at 18:20






  • 3





    I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.

    – another-dave
    Feb 6 at 0:15












  • @another-dave You're right. FORTRAN II did convert on expression assignemnts. I added that as a footnote.

    – Raffzahn
    Feb 6 at 14:58






  • 2





    Thirteen comments of being mean to each other‽ Come on, you're all better than that.

    – wizzwizz4
    Feb 8 at 6:47














42












42








42







The premise:




Machine language (and Assembly language) don't have the concept of data types




is not quite correct, because tagged architecture means exactly this, machine language where the data is tagged for its "type" (even though not quite what we know from higher level languages).



Probably the first widespread tagged architecture computer was the Burrough B5000 (or 5500?) from 1960s. But FORTRAN predates this.






share|improve this answer















The premise:




Machine language (and Assembly language) don't have the concept of data types




is not quite correct, because tagged architecture means exactly this, machine language where the data is tagged for its "type" (even though not quite what we know from higher level languages).



Probably the first widespread tagged architecture computer was the Burrough B5000 (or 5500?) from 1960s. But FORTRAN predates this.







share|improve this answer














share|improve this answer



share|improve this answer








edited Feb 8 at 6:42

























answered Feb 5 at 13:56









Radovan GarabíkRadovan Garabík

1,566612




1,566612












  • Except, that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77. Way after ALGOL 60. (Intrinsics itself where introduced in Fortran 66).

    – Raffzahn
    Feb 5 at 17:06







  • 8





    Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?

    – Felix Palmen
    Feb 5 at 18:20






  • 3





    I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.

    – another-dave
    Feb 6 at 0:15












  • @another-dave You're right. FORTRAN II did convert on expression assignemnts. I added that as a footnote.

    – Raffzahn
    Feb 6 at 14:58






  • 2





    Thirteen comments of being mean to each other‽ Come on, you're all better than that.

    – wizzwizz4
    Feb 8 at 6:47


















  • Except, that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77. Way after ALGOL 60. (Intrinsics itself where introduced in Fortran 66).

    – Raffzahn
    Feb 5 at 17:06







  • 8





    Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?

    – Felix Palmen
    Feb 5 at 18:20






  • 3





    I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.

    – another-dave
    Feb 6 at 0:15












  • @another-dave You're right. FORTRAN II did convert on expression assignemnts. I added that as a footnote.

    – Raffzahn
    Feb 6 at 14:58






  • 2





    Thirteen comments of being mean to each other‽ Come on, you're all better than that.

    – wizzwizz4
    Feb 8 at 6:47

















Except, that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77. Way after ALGOL 60. (Intrinsics itself where introduced in Fortran 66).

– Raffzahn
Feb 5 at 17:06






Except, that Fortran did no implied type conversion - at least not prior to the introduction of intrinsics inserted into assignment evaluations in FORTRAN 77. Way after ALGOL 60. (Intrinsics itself where introduced in Fortran 66).

– Raffzahn
Feb 5 at 17:06





8




8





Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?

– Felix Palmen
Feb 5 at 18:20





Again here, how does "which was the first programming language that had data types?" ask anything about type conversions?

– Felix Palmen
Feb 5 at 18:20




3




3





I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.

– another-dave
Feb 6 at 0:15






I agree with @FelixPalmen in that the posed question is not about type conversions, though type conversions are mentioned in the middle paragraph. However, with respect to whether Fortran had implied type conversion: Fortran II certainly did, across an assignment operator: both float = int expression and int = float expression were allowed. Exponentiation of a float to an integer power was also allowed. Generalized mixed-mode expressions were not allowed, however.

– another-dave
Feb 6 at 0:15














@another-dave You're right. FORTRAN II did convert on expression assignemnts. I added that as a footnote.

– Raffzahn
Feb 6 at 14:58





@another-dave You're right. FORTRAN II did convert on expression assignemnts. I added that as a footnote.

– Raffzahn
Feb 6 at 14:58




2




2





Thirteen comments of being mean to each other‽ Come on, you're all better than that.

– wizzwizz4
Feb 8 at 6:47






Thirteen comments of being mean to each other‽ Come on, you're all better than that.

– wizzwizz4
Feb 8 at 6:47












33















Machine language (and Assembly language) don't have the concept of data types, so if you want to add an int and a float variable in Assembly, you have to use the appropriate Assembly instruction that adds an int and a float.




Erm... this sounds as if you're mixing up the idea of data types and operations on these. Data types are memory structures. Operations are an independent unit. And just because some languages do provide operators that can be used with multiple data types, doesn't mean they do in general and always. For example in C the sine function is defined as:





double sin(double x)


This means feeding anything but a double, for example an integer, will screw it up. Much like using a floating point operation (like FSIN) on an x87 will choke if an integer is handed as parameter.



Long story short, Assembler does have data types and does obey them (*1). For example on a /360 (1964) that would be:



Type Example Alignment
Character C'1234' Byte
Binary B'0101' Byte
Packed (BCD) P'1234' Byte
Decimal Z'1234' Byte
Char (hex) X'1234' Byte
Integer 16 Bit H'1234' Halfword
Integer 32 Bit F'1234' Word
Float (32 bit) E'-12.34' Word
Float (64 bit) D'-12.34' Doubleword
Float (128 bit) L'-12.34' Doubleword
Pointer (32 Bit) A(1234) Word
Pointer (16 Bit) Y(1234) Halfword


(There are also Q, S and V pointers, but that's extreme high level stuff :))



Using the wrong data type in an instruction will make the assembler throw a warning, exactly the same way as a C compiler does.




But if you are working with a high level language (for example: C), all you have to do is "mark" one variable with the int keyword and mark the other variable with the float keyword, and then use the '+' operator to add the two variables together, and the compiler will generate the machine language instruction that adds an int and float.




As said before, C does this only for a handful of predefined operators for convenience, not in general and all over. C99 resolved this in part by selecting one of several possible functions fitting the operand type(s), and C++ used overloading. Still, not by default and everywhere.




But I am wondering, which was the first programming language that had data types?




As shown, it's Assembly :))



Beside that, each and every programming language that was ever designed and implemented for a real machine does include data types. After all, without it won't operate, would it?



If the question is more about implied type conversion (and/or selection), then again Assembly will be a valid answer, as Assembly offers the same ways as C/C++ to write code that adapts to data types (*2). Now, if you insist to exclude Assembly for whatever ideological reason, then ALGOL 60 (*3) may be a good candidate. The sometimes cited FORTRAN introduced it quite late (*4) with FORTRAN 77 (in 1978) (*5) using intrinsics (introduced with FORTRAN 66).




*1 - Or better can, as many - let's say less proficient - programmers decide to ignore or even disable that feature.



*2 - As usual, the secret lies within meta programming - aka Macros - much you do overloading in C++. Except, Assembler does not even force you to use existing operators.



*3 - In fact, ALGOL is a very nice example for the issues of automatic conversion and how to handle it. Where ALGOL 60 added arbitrary type conversion, like its descendant C, ALGOL 68 restricted automatic type conversion later, to only work upward, to avoid program/data errors due to precision loss. So INT could be implied converted to FLOAT, but a downward conversion had to be explicit.



*4 - Which let people use explicit conversions way into the 80s, making it hard to update programs until today. A great example of the advantages of clear, stringent and centralized definition. The ability to switch from single to double or long with just a few changes, instead of debugging huge piles of old code to find each and every explicit conversion.



*5 - As another-dave pointed out in a comment IBM's Fortran II (of 1958) did automatic type conversion between float and int when assigning the result of an expression (See p.22 'Mode of an Arithmetic Statement' in the manual). The expression itself had to be, in all parts, either integer or float, thus it might not fit case made by the OP.






share|improve this answer




















  • 11





    I'm not the downvoter, but this shows a very inaccurate understanding of types. The C function prototype does not mean that "feeding anything but a double, for example an integer, will screw it up," it means that anything but a double--or something convertible to a double--is forbidden by the type system and won't compile. Likewise, having annotations in a specific assembler that the coder is free to turn off, and which give warnings, not errors, if your code describes a nonsensical operation, is not a static type system in the style of C or ALGOL.

    – Mason Wheeler
    Feb 5 at 17:23






  • 2





    @MasonWheeler So am I right to understand your point is about the way the warnings are presented and how that was handled afterwards? Correct me, but are these not features of a language, but rather of its development environment? C is a real bad example, as it did allow to produce type mismatch producing bad code - not to mention that several compilers did generate an a.out despite warnings and errors given. So it's as well up to the user (or his policies represented in IDE settings) how to handle it. Not really a difference, right? Not everything we assume being part of a language is )

    – Raffzahn
    Feb 5 at 17:42







  • 4





    This seems to be a snarky reply saying "assembly has types if you squint hard enough", which is more of a comment than an answer.

    – BlueRaja - Danny Pflughoeft
    Feb 5 at 18:03







  • 5





    @BlueRaja-DannyPflughoeft No need to squint at all. It's all there. If people decide not to do it, it can't be made the fault of the language. Beside, the squint is maybe rather on the side of people looking at the puny capabilities of GNU assembler and judging all the others with that knowledge - much like saying cars don't have air bags by looking at a Yugo.

    – Raffzahn
    Feb 5 at 19:42







  • 2





    When you say that "Assembly [was the first language to have datatypes]", are you claiming that there is essentially one programming language called "Assembly", or are you claiming that IBM S/360 assembly is the first one?

    – Wilson
    Feb 6 at 14:01















33















Machine language (and Assembly language) don't have the concept of data types, so if you want to add an int and a float variable in Assembly, you have to use the appropriate Assembly instruction that adds an int and a float.




Erm... this sounds as if you're mixing up the idea of data types and operations on these. Data types are memory structures. Operations are an independent unit. And just because some languages do provide operators that can be used with multiple data types, doesn't mean they do in general and always. For example in C the sine function is defined as:





double sin(double x)


This means feeding anything but a double, for example an integer, will screw it up. Much like using a floating point operation (like FSIN) on an x87 will choke if an integer is handed as parameter.



Long story short, Assembler does have data types and does obey them (*1). For example on a /360 (1964) that would be:



Type Example Alignment
Character C'1234' Byte
Binary B'0101' Byte
Packed (BCD) P'1234' Byte
Decimal Z'1234' Byte
Char (hex) X'1234' Byte
Integer 16 Bit H'1234' Halfword
Integer 32 Bit F'1234' Word
Float (32 bit) E'-12.34' Word
Float (64 bit) D'-12.34' Doubleword
Float (128 bit) L'-12.34' Doubleword
Pointer (32 Bit) A(1234) Word
Pointer (16 Bit) Y(1234) Halfword


(There are also Q, S and V pointers, but that's extreme high level stuff :))



Using the wrong data type in an instruction will make the assembler throw a warning, exactly the same way as a C compiler does.




But if you are working with a high level language (for example: C), all you have to do is "mark" one variable with the int keyword and mark the other variable with the float keyword, and then use the '+' operator to add the two variables together, and the compiler will generate the machine language instruction that adds an int and float.




As said before, C does this only for a handful of predefined operators for convenience, not in general and all over. C99 resolved this in part by selecting one of several possible functions fitting the operand type(s), and C++ used overloading. Still, not by default and everywhere.




33












Machine language (and Assembly language) don't have the concept of data types, so if you want to add an int and a float variable in Assembly, you have to use the appropriate Assembly instruction that adds an int and a float.




Erm... this sounds as if you're mixing up the idea of data types with the operations on them. Data types are memory structures; operations are an independent concept. And just because some languages provide operators that can be used with multiple data types doesn't mean they do so in general and always. For example, in C the sine function is defined as:





double sin(double x)


This means feeding it anything but a double, for example an integer, will screw it up. Much like a floating-point operation (like FSIN) on an x87 will choke if an integer bit pattern is handed as a parameter.



Long story short, Assembler does have data types and does obey them (*1). For example, on a /360 (1964) these would be:



Type              Example     Alignment
Character         C'1234'     Byte
Binary            B'0101'     Byte
Packed (BCD)      P'1234'     Byte
Decimal           Z'1234'     Byte
Char (hex)        X'1234'     Byte
Integer 16 Bit    H'1234'     Halfword
Integer 32 Bit    F'1234'     Word
Float (32 bit)    E'-12.34'   Word
Float (64 bit)    D'-12.34'   Doubleword
Float (128 bit)   L'-12.34'   Doubleword
Pointer (32 Bit)  A(1234)     Word
Pointer (16 Bit)  Y(1234)     Halfword


(There are also Q, S and V pointers, but that's extremely high-level stuff :))



Using the wrong data type in an instruction will make the assembler throw a warning, exactly the same way as a C compiler does.
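For comparison, here is a minimal C sketch of the same kind of diagnostic on the compiler side (an illustration added here, not part of the /360 material; the variable names are made up):

#include <stdio.h>

int main(void) {
    double d = 1.5;
    int *p;
    p = &d;               /* flagged: assigning 'double *' to 'int *'
                             is a type mismatch, just as the assembler
                             flags an instruction on wrong-typed data */
    printf("%p\n", (void *)p);
    return 0;
}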




But if you are working with a high level language (for example: C), all you have to do is "mark" one variable with the int keyword and mark the other variable with the float keyword, and then use the '+' operator to add the two variables together, and the compiler will generate the machine language instruction that adds an int and float.




As said before, C does this only for a handful of predefined operators, as a convenience, not in general and all over. C99 resolved this in part with type-generic math, selecting one of several possible functions to fit the operand type(s), and C++ used overloading. Still, not by default and not everywhere.
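A minimal sketch of that C99 mechanism, assuming a C99 compiler with the standard <tgmath.h> header (the values are made up; on many systems this needs linking with -lm):

#include <stdio.h>
#include <tgmath.h>   /* C99 type-generic math */

int main(void) {
    float  f = 0.5f;
    double d = 0.5;
    /* One macro name; the operand type selects the function:
       sinf() for the float, sin() for the double. */
    printf("%f\n", (double)sin(f));
    printf("%f\n", sin(d));
    return 0;
}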




But I am wondering, which was the first programming language that had data types?




As shown, it's Assembly :))



Besides that, each and every programming language that was ever designed and implemented for a real machine does include data types. After all, without them it wouldn't operate, would it?



If the question is more about implicit type conversion (and/or selection), then again Assembly is a valid answer, as Assembly offers the same means as C/C++ to write code that adapts to data types (*2). Now, if you insist on excluding Assembly for whatever ideological reason, then ALGOL 60 (*3) may be a good candidate. The often-cited FORTRAN introduced it quite late (*4), with FORTRAN 77 (in 1978) (*5), using intrinsics (introduced with FORTRAN 66).




*1 - Or rather can, as many - let's say less proficient - programmers decide to ignore or even disable that feature.



*2 - As usual, the secret lies in metaprogramming - a.k.a. macros - much like the way you do overloading in C++. Except that Assembler does not even force you to use existing operators.
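To give the flavour of type-driven selection in C terms (a hedged sketch: C11's _Generic stands in here for what an assembler macro would dispatch on, and ADD/add_int/add_double are invented names):

#include <stdio.h>

static int    add_int(int x, int y)          { return x + y; }
static double add_double(double x, double y) { return x + y; }

/* Dispatch on the type of the combined operands, much as an
   assembler macro could emit different instructions per type. */
#define ADD(a, b) _Generic((a) + (b), \
        int:    add_int,              \
        double: add_double)((a), (b))

int main(void) {
    printf("%d\n", ADD(1, 2));      /* selects add_int */
    printf("%f\n", ADD(1.0, 2.5));  /* selects add_double */
    return 0;
}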



*3 - In fact, ALGOL is a very nice example of the issues of automatic conversion and how to handle them. Where ALGOL 60 added arbitrary type conversion, like its descendant C, ALGOL 68 later restricted automatic type conversion to only work upward, to avoid program/data errors due to precision loss. So an INT could be implicitly converted to a FLOAT, but a downward conversion had to be explicit.
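For contrast, a small C sketch (invented values) of the downward conversion that C permits implicitly and ALGOL 68 deliberately ruled out:

#include <stdio.h>

int main(void) {
    int    i = 3;
    double d = i;      /* upward: allowed implicitly in both models */
    int    j = 1.9;    /* downward: C silently truncates to 1;
                          ALGOL 68 demanded an explicit ROUND here */
    printf("%f %d\n", d, j);
    return 0;
}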



*4 - Which let people use explicit conversions way into the 80s, making programs hard to update even today. A great example of the advantages of a clear, stringent and centralized definition: the ability to switch from single to double or long with just a few changes, instead of debugging huge piles of old code to find each and every explicit conversion.



*5 - As another-dave pointed out in a comment, IBM's FORTRAN II (of 1958) did automatic type conversion between float and int when assigning the result of an expression (see p. 22, 'Mode of an Arithmetic Statement', in the manual). The expression itself had to be, in all parts, either integer or float, so it might not fit the case made by the OP.






– Raffzahn, answered Feb 5 at 14:43, edited Feb 6 at 14:56
  • 11





    I'm not the downvoter, but this shows a very inaccurate understanding of types. The C function prototype does not mean that "feeding anything but a double, for example an integer, will screw it up," it means that anything but a double--or something convertible to a double--is forbidden by the type system and won't compile. Likewise, having annotations in a specific assembler that the coder is free to turn off, and which give warnings, not errors, if your code describes a nonsensical operation, is not a static type system in the style of C or ALGOL.

    – Mason Wheeler
    Feb 5 at 17:23






  • 2





@MasonWheeler So am I right to understand that your point is about the way the warnings are presented and how they are handled afterwards? Correct me, but are these not features of the development environment rather than of the language? C is a really bad example, as it did allow type mismatches that produced bad code - not to mention that several compilers did generate an a.out despite warnings and errors. So it's likewise up to the user (or his policies, represented in IDE settings) how to handle it. Not really a difference, right? Not everything we assume to be part of a language is :)

    – Raffzahn
    Feb 5 at 17:42







  • 4





    This seems to be a snarky reply saying "assembly has types if you squint hard enough", which is more of a comment than an answer.

    – BlueRaja - Danny Pflughoeft
    Feb 5 at 18:03







  • 5





@BlueRaja-DannyPflughoeft No need to squint at all. It's all there. If people decide not to use it, that can't be made the fault of the language. Besides, the squint is maybe rather on the side of people looking at the puny capabilities of the GNU assembler and judging all the others by that knowledge - much like saying cars don't have air bags after looking at a Yugo.

    – Raffzahn
    Feb 5 at 19:42







  • 2





    When you say that "Assembly [was the first language to have datatypes]", are you claiming that there is essentially one programming language called "Assembly", or are you claiming that IBM S/360 assembly is the first one?

    – Wilson
    Feb 6 at 14:01












17














Perhaps Plankalkül (1942-45).




Plankalkül [has ...] floating point arithmetic, arrays, hierarchical
record structures [...] The Plankalkül provides a data structure
called generalized graph (verallgemeinerter Graph), which can be used
to represent geometrical structures. [...] Some features of the
Plankalkül: [...]
composite types are arrays and tuples
[...] The only primitive data type in the Plankalkül is a single bit, denoted by S0. Further data types can be built up from these.




Addition: from Stanford CS-76-562 Early development of programming languages (Knuth & Pardo).




Thus the Plankalkül included the important concept of hierarchically
structured data, going all the way down to the bit level. Such
advanced data structures did not enter again into programming
languages until the late 1950's, in IBM's Commercial Translator. The
idea eventually appeared in many other languages, such as FACT, COBOL,
PL/I, and extensions of ALGOL 60




There are some details about numbers also:




Integer variables in the Plankalkül were represented by type A9.
Another special type was used for floating-binary numbers, namely
[...] The first three-bit component here was for signs and special
markers -- indicating, for example, whether the number was real or
imaginary or zero; the second was for a seven-bit exponent in two's
complement notation; and the final 22 bits represented the 25-bit
fraction part of a normalized number, with the redundant leading "11"
bit suppressed.







– Tomas By, answered Feb 5 at 14:44, edited Feb 5 at 18:53
  • 5





    @FelixPalmen: yes?

    – Tomas By
    Feb 5 at 15:06






  • 10





    You can write programs, and you can perform all the operations theoretically on paper. Have you never taken a programming course at university, and sat an exam?

    – Tomas By
    Feb 5 at 15:16






  • 5





    It says "arrays, records, graphs" - all those are data types. And it also says floating point math, so presumably there are both integers and floats in some form. "primitive" is not the same as "built-in".

    – Tomas By
    Feb 5 at 15:28






  • 9





@FelixPalmen: writing programs is programming, correct.

    – Tomas By
    Feb 5 at 15:29






  • 5





Why not? 8008 assembler was designed, and coded in, before any processor capable of running it existed. That is how design works. Fortran, as a notably typed language, was designed and coded in before a Fortran compiler was ever written. This is a chicken/egg issue. The people designing this language certainly anticipated building a real computer later. This was part of their justification for the money. Duh.

    – Harper
    Feb 5 at 18:14















5














Let's get the answer to the question out of the way first. Limiting ourselves to high-level languages designed for electronic digital computers that are not really obscure, the answer is either COBOL or FORTRAN, depending on which you count as invented first.




Machine language (and Assembly language) don't have the concept of data types




This is not true. Many assembler languages have multiple word sizes they can operate on, and some have floating-point types.




But if you are working with a high level language (for example: C), all you have to do is "mark" one variable with the int keyword and mark the other variable with the float keyword, and then use the + operator to add the two variables together, and the compiler will generate the machine language instruction that adds an int and float.




That's called implicit coercion, and while it's true that you need data types to do implicit coercion - so that the compiler knows how to do the conversion - coercion is not synonymous with data types, nor even a necessary consequence of having them. Swift, for example, has no implicit coercion: you always have to convert both operands to the same type when doing arithmetic.



There are three related concepts here that are being confused,



  • types assign a meaning to certain bit patterns in memory. They tell you and the compiler what kind of object a thing in memory is and what you can do with it.

  • type checking is where the compiler or the language runtime checks that an operation is valid for a particular type

  • implicit coercion is where the compiler or language runtime has a rule for automatically converting one type to another if needed (all three are illustrated in the sketch below).
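A minimal C sketch of the three concepts side by side (hypothetical values, added for illustration):

#include <stdio.h>

int main(void) {
    int    i = 2;      /* a type: a meaning assigned to a bit pattern */
    double d = 0.5;

    /* implicit coercion: the compiler converts i to double before
       the addition, following the rule for int + double */
    double sum = i + d;

    /* type checking: an operation invalid for a type is rejected
       at compile time, e.g.   i();   -- error: i is not a function */

    printf("%f\n", sum);   /* prints 2.500000 */
    return 0;
}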

Almost all computer languages are typed to some degree. Some languages are called "typeless", e.g. C's predecessor, which was called "B"; in reality these languages have exactly one type, often the machine word. What really distinguishes languages is not whether they are typed but how much type checking is done, when the type checking is done, and what happens when mismatched types are found.



Let's look at some examples:



One of the biggest complaints about Javascript is that its type system is very weak. This is not really true: when a program is running, the interpreter always knows exactly what type every object is. The problems with Javascript occur because type checking is done at run time (this is called "dynamic typing"; compile-time type checking is called "static typing"), and if you perform an operation on an object of an incompatible type, Javascript will try to coerce it into a compatible type, sometimes with surprising results.



C is relatively strongly typed with static type checking. This was not always the case. In the pre-ANSI standard days, not all conversions between types were checked. Perhaps the most egregious issue was that the compiler didn't check assignments between pointers and ints. On many architectures you got away with it because int and someType * were the same size. However, if they were not the same size (as with my Atari ST Megamax C compiler), assigning a pointer to an int would lose bits, and then hilarity would ensue.
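A small sketch of that failure mode (hypothetical code; modern compilers reject the old implicit assignment, so an explicit cast stands in for what pre-ANSI C accepted silently):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int x = 42;
    int *p = &x;

    /* Pre-ANSI C accepted   int i = p;   without complaint. Where
       pointers are wider than int, the high bits are simply lost: */
    int i = (int)(intptr_t)p;

    printf("pointer: %p  as int: %d\n", (void *)p, i);
    return 0;
}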



The trend today seems to be towards statically typed languages but with type inference. Type inference allows the programmer to omit the type specification if the compiler can infer the type of the object from the context. For example, in Swift:



func square(_ x: Int) -> Int {
    return x * x
}

let a = square(3)


defines a function square that takes an Int and returns an Int, then applies it to 3 and assigns the result to a. The compiler infers the type of a from the return type of the function, and the type of the literal 3 from the type of the function's parameter. In C I would have to declare the type of a, although C does have limited inference for literals.



Type inference seems to be a new trend although, as with all things in Computer Science, the concept probably dates back decades. Statically typed languages are as old as high level languages.






– JeremyP, answered Feb 6 at 10:40
  • "Type inference seems to be a new trend although ... the concept probably dates back decades." Wikipedia claims the Hindley-Milner principal type algorithm goes back to 1969. Certainly Haskell had type inference from day 1, Haskell dates back to 1990, and Haskell is loosely based on Miranda, which dates to 1985... It seems real language implementations have been doing this for a while.

    – MathematicalOrchid
    Feb 7 at 14:11











  • @MathematicalOrchid I tend to think of everyone after 1980 as being new - that was when I first came in contact with real computers. I am aware that type inference has been around in functional programming languages for a long time but as a popular trend for mainstream languages, it is quite new.

    – JeremyP
    Feb 8 at 9:54











  • To be clear, I wasn't disputing that the trend is new, I was confirming that the original idea itself is quite old.

    – MathematicalOrchid
    Feb 8 at 13:22















5














There's a lot of heated discussion about what the meaning of 'programming language that had data types' might be. In the absence of clarification from the OP, here's my opinion.



Definitions:



'Programming language' - any language in which programs are written. For the purposes of this answer, I'm restricting this to languages which are implemented 'soon' after design, for some vague value of 'soon' (I don't exclude languages which have programs written before the implementation). I wish to exclude Plankalkül for this answer, work of genius though it may be, simply because it did not become known to the world until after data-typing became commonplace. Until implementation, it's a theoretical idea, not a programming language - though perhaps as a workaday programmer I am prejudiced.



'Language with data types' - I think the bedrock requirements here are that the language defines more than one type, and there be some way to indicate what type a quantity has. I include explicit declaration, implicit typing (by denotation for literals, initial letters for variables, or weird sigil schemes), and runtime determination.



Lastly, if a language is said by its population of programmers to be 'untyped' then I think we should agree with those programmers. This means BLISS is typeless even though it has builtins that will treat a machine word as holding a floating-point value.



Having established my frame of reference, I say the answer is "early FORTRAN" (1954). FORTRAN is normally considered to have been born in 1957, but this survey of early programming languages by Knuth shows, on pages 62-63, an early implementation where the convention of I-N as integer, others as real, is already in place.






– another-dave
  • Knuth & Pardo 1976, yes, if you check the list on p. 1, then Plankalkül is #1 and FORTRAN is #11.

    – Tomas By
    Feb 7 at 1:44











  • @TomasBy - you're referring to the order of language appearance, right? I'm inclined to not consider Plankalkül for the purpose of this question since it was not implemented. As far as I can tell, no implemented language until Fortran had the data typing we're looking for. (I was quite disappointed that Glennie's Autocode didn't)

    – another-dave
    Feb 7 at 2:08











  • Will do later. I admit it's a slippery slope since as observed, people program in a language before it's implemented (extreme case: compiler bootstrapping). However, feels like it was a paper exercise, even if not intended that way. I'll do a little more reading first. Thanks for the note.

    – another-dave
    Feb 7 at 13:01






  • 1





    @IMSoP - I laid my cards on the table!

    – another-dave
    Feb 9 at 1:15
Not the first, but Algol 60 deserves an honorable mention. It had strong typing, and type mismatches were reported as compiler errors rather than handled by automatic conversions.



Algol 60's type system was better than that of Fortran or COBOL.



Incidentally, there was a period of four years in which the major early languages were launched: Fortran in 1957, Lisp in 1958, COBOL in 1959, and Algol in 1960.






answered Feb 6 at 21:20 by Walter Mitty (edited Feb 6 at 21:40 by manassehkatz)

  • Agreed, but there was that weird thing of the specification part (if I recall the terminology correctly) being optional for formal procedure parameters. Many implementations required it. But if you don't know the parameter data types (without exhaustive analysis of every potential execution path!) how can you compile the code for the procedure, in the general case? Hmm, maybe this should be a new question!

    – another-dave
    Feb 7 at 13:05












  • I have forgotten too much Algol to follow your comment. Suffice it to say that Algol was a step in that direction.

    – Walter Mitty
    Feb 7 at 13:16











  • Oh, Algol was a long way ahead of its contemporaries. I'm just commenting on what seems like a weird oversight to me. Per Revised Report, real procedure foo(a, b) begin foo := a + b end; is legal and complete. There is no type specification for a and b.

    – another-dave
    Feb 7 at 23:38














