### Gregory Weber

Shared publicly - While reading up on graphics processors, I was diverted (via graphics workstation -> workstation) to the IBM 1620, which was considered a "workstation" in its time. I cut my programmer's teeth on this machine around 1970. Oh, nostalgia!

It was a strange machine, by today's standards. "Everybody knows" that computers use binary numbers, but this one represented all its data and instructions as decimal digits!* It even did its arithmetic the way we learned in elementary school: one column at a time from right to left, using addition and multiplication tables stored in memory, and could do this with pretty big numbers (thousands of digits). Computing was a lot simpler in some ways then -- a human "operating system" to load programs, and no worries about proprietary graphics drivers, since the only I/O devices were a typewriter, paper tape, punch cards, and a few switches.
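The column-at-a-time, table-lookup arithmetic described above can be sketched in modern terms. This is an illustrative Python sketch, not 1620 machine code: the machine held its addition table in core memory and looked up each column's sum rather than "computing" it, and the table layout here is an assumption chosen for clarity.

```python
# 100-entry addition table held "in memory", as on the 1620:
# ADD_TABLE[a][b] gives the sum 0..18 of two decimal digits.
ADD_TABLE = [[a + b for b in range(10)] for a in range(10)]

def decimal_add(x: str, y: str) -> str:
    """Add two unsigned decimal strings one column at a time,
    right to left, using only table lookups plus a carry."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    carry = 0
    out = []
    for a, b in zip(reversed(x), reversed(y)):
        s = ADD_TABLE[int(a)][int(b)] + carry   # table lookup, not an ALU add
        carry, digit = divmod(s, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))
```

Because the operands are just strings of digits, nothing limits their length; numbers thousands of digits long work the same way, which is exactly what made the 1620's variable-length decimal fields possible.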

There is nothing intrinsically "binary" about computing.

Let's face it: my programming teeth were chipped badly enough from writing in a decimal machine language; if the machine language had been binary, my teeth could have been knocked right out of my jaw. I have to agree with the authors of a popular programming book of the era:

'For the following two reasons, the machine chosen for this total organization study is the IBM 1620 computer: First, the 1620 is a physically small machine of good computing power whose size makes its availability to a university, and subsequently to a student, more practical; second, the 1620 is a computer that uses the decimal system of arithmetic. It is our belief that a "first" machine should be decimal in its internal arithmetic. This is not to say that a decimal machine is superior to a machine using another arithmetic system, or that the converse is true. There is much to be said for each one. However, the problems of learning basic machine concepts are difficult enough without confusing the situation by adding the complexities of a new or less commonly used type of arithmetic. It is also our belief that adequate study of this single computer prepares one for further study on a whole host of other machines, including computers using binary arithmetic internally.'

-- From the Preface of "Basic Programming Concepts and the IBM 1620 Computer", by Daniel N. Leeson and Donald L. Dimitry, (C) 1962 Holt, Rinehart and Winston, Inc. This and other documentation available at http://bitsavers.trailing-edge.com/pdf/ibm/1620/

*It was all decimal down to a certain level. Below that level, each decimal digit was stored as binary-coded decimal: four bits for the 8-4-2-1 weights, plus two extra bits (a flag bit and a check bit). But the decimal level was the level at which one programmed.
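The footnote's digit encoding can be illustrated with a small sketch. This is an assumption-laden model, not the machine's actual wiring: it packs the four BCD bits, the flag bit, and a check bit chosen to make the total number of one-bits odd (the 1620 used odd parity) into one integer, with the bit positions picked here purely for illustration.

```python
def encode_digit(d: int, flag: bool = False) -> int:
    """Model one 1620 digit: 4 BCD bits (weights 8-4-2-1), a flag bit,
    and a check bit that makes the overall parity odd.
    Bit layout (check, flag, 8, 4, 2, 1) is illustrative only."""
    assert 0 <= d <= 9
    bits = d | (0b10000 if flag else 0)      # BCD value plus optional flag
    ones = bin(bits).count("1")
    check = 0 if ones % 2 == 1 else 1        # force odd parity overall
    return bits | (check << 5)
```

For example, the digit 7 (binary 0111, already three one-bits) needs no check bit, while the digit 0 (no one-bits) gets the check bit set so the hardware can detect a dropped bit.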

