Jay Forrester

Jay Wright Forrester (1918-2016)

After high school, Jay Forrester enrolled in 1936 in the Engineering College at the University of Nebraska to study electrical engineering, and in 1939 went on to graduate work at MIT on servomechanisms. He stayed at MIT until 1944, when he joined a Navy-sponsored program to design computers for testing new aircraft designs. (Most computer development during WWII and the postwar period was funded by the military, e.g. the ENIAC and Colossus projects.)

The program was initiated at the request of the US Navy, which asked the MIT Servomechanisms Laboratory to build an aerodynamic stability analyzer, essentially a primitive flight simulator. The pilot sat in a cockpit and moved the joystick, and the servos were supposed to respond to his actions in real time. The problem was to make it all work fast enough to give the pilot a realistic feel for the plane.

Initially, Forrester and his group started building a large analog computer for the task but soon found that it was slow, inaccurate, and inflexible. Solving these problems in a general way would require a much larger system, perhaps one so large as to be impossible to construct. While thinking about the problems of fast analog computation, Forrester heard about the digital computers being built by Mauchly, Eckert, and John von Neumann (ENIAC and EDVAC) from a member of the MIT team who had seen a demonstration of ENIAC and suggested that a digital computer was the solution. He talked to Eckert and von Neumann and became convinced that a fast digital computer, built from electronic valves, was what was needed. In early 1946 a digital computer project was started at the new laboratory set up by Forrester, with speed as the absolute goal for what became known as the Whirlwind Project.

At first, Forrester planned to build a bit-serial electronic computer like the EDVAC (EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division, and automatic checking, with an ultrasonic serial memory), but soon realized that this would be slow. A bit-serial computer calculates one bit at a time, which allows the same hardware to be reused during a computation and so simplifies the design. The alternative is a bit-parallel design, which uses multiple copies of the basic hardware to compute an N-bit result in one operation. Bit-parallel is faster, but it needed so much more hardware that valve failure became a serious problem.
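The serial-versus-parallel trade-off can be sketched in a few lines of Python (an illustration of the concept, not Whirlwind's actual circuitry): a bit-serial machine pushes one bit per clock tick through a single full-adder, while a bit-parallel machine computes the whole word at once using N copies of the adder hardware.

```python
def serial_add(a, b, width=16):
    """Add two integers one bit at a time, reusing a single full-adder."""
    carry, result = 0, 0
    for i in range(width):                  # one clock tick per bit
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry           # sum bit from the full-adder
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << i
    return result

def parallel_add(a, b, width=16):
    """Conceptually N adders work at once; the whole word is one operation."""
    return (a + b) & ((1 << width) - 1)

print(serial_add(12345, 6789))    # 19134, after 16 ticks
print(parallel_add(12345, 6789))  # 19134, in one step
```

The serial version needs 16 ticks per addition but only one adder's worth of valves; the parallel version finishes in one step at the cost of replicating the hardware, which is the reliability problem the text describes.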

By 1947, Forrester and his collaborator Robert Everett completed the design of a high-speed stored-program computer for this task—the Whirlwind. Construction of the machine started in 1948, an effort that employed 175 people, including 70 engineers and technicians. Whirlwind took three years to build and first went online on 20 April 1951. The project’s budget was $1 million a year, and after three years the Navy had lost interest. However, during this time the Air Force had become interested in using computers to help with the task of ground-controlled interception (the Cold War had just begun), and the Whirlwind was the only machine suitable for the task.

Forrester studied what made the valves fail after about 500 hours of service, a lifetime short enough that breakdowns were a regular occurrence, and discovered that the cause was the silicon added to make the refining of the nickel easier. Silicon-free nickel cathodes increased the life of the average valve from 500 to 500,000 hours. This discovery was a key factor in making the Whirlwind possible.

The Whirlwind computer

Another remarkable innovation increased the reliability of the machine still further. The Whirlwind could alter the voltage on the grid of each valve to test for imminent failures.

Forrester had solved most of the problems in the design of Whirlwind, but one remained—the memory. He realized that storage was the critical problem: the whole project depended on finding a more reliable and economical method of storage. At the time most memories were serial, as in EDVAC. The first fast large memories were based on mercury delay lines that kept a serial stream of bits circulating as sound pulses. The Williams tube, used in the SSEM computer, was a faster version of the same principle: a CRT display tube formed a pattern of bits as spots of light, which were recirculated using a photocell and feedback amplifier. It was faster, but the tubes burned out far too frequently. The speed of the original design of Whirlwind (20 KIPS) turned out to be too slow to be very useful, and most of the problem was attributed to the fairly slow Williams tubes used for the main memory of 256 words.

Initially Whirlwind used a modified form of the Williams tube: an additional flood gun maintained the pattern of dots while a writing gun altered the pattern. Thirty-two such tubes were needed to provide the 4 KBytes of storage that the Whirlwind needed. Given a tube life of one month and a cost of $1000 each, the running cost of the machine was very high, about $1 per bit per month.
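The arithmetic behind that figure is easy to check: thirty-two $1000 tubes replaced every month, spread over 4 KBytes (32,768 bits) of storage, comes to just under a dollar per bit per month.

```python
tubes, cost_per_tube = 32, 1000        # one full replacement set per month
bits = 4 * 1024 * 8                    # 4 KBytes of storage = 32768 bits
monthly_cost_per_bit = tubes * cost_per_tube / bits
print(monthly_cost_per_bit)            # 0.9765625, i.e. about $1 per bit per month
```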

Magnetic core memory of An Wang

In 1949 Forrester started to think about ways of making a 2D or 3D form of storage, rather than the one-dimensional recirculating storage represented by the delay line and Williams tube. After spending much time thinking about the problem, he encountered an article on the use of magnetic materials as amplifiers. He also knew about the pulse transfer controlling device of the Chinese-American engineer An Wang, which implemented write-after-read (an essential step toward magnetic core memory). Forrester ordered some of the material and built an array that passed current through rings of the material to magnetize them in one of two directions.

This worked, but it was too slow. Then the breakthrough happened! Forrester came up with a scheme that threaded rings of the magnetic material on an x-y grid of wires. Each ring, or core, was threaded onto a unique pair of x-y wires (see the nearby photo), and a third read/write wire was threaded through all of the cores. To read or write a bit, half of the current needed to change the magnetization of a core was placed on one of the x wires and on one of the y wires. Only the core at the intersection of the two wires received enough current to change its polarity. This enabled direct access to each bit in the array.
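Coincident-current selection can be sketched with a toy model (the array size, current values, and threshold here are illustrative, not Whirlwind's): only the core whose x wire and y wire both carry a half-select current sees enough total current to switch.

```python
HALF = 0.5        # half the current needed to switch a core
THRESHOLD = 1.0   # full switching current

class CorePlane:
    def __init__(self, size):
        self.size = size
        self.cores = [[0] * size for _ in range(size)]   # magnetization state: 0 or 1

    def write(self, x, y, bit):
        """Drive half-current on one x wire and one y wire."""
        for i in range(self.size):
            for j in range(self.size):
                current = (HALF if i == x else 0) + (HALF if j == y else 0)
                if current >= THRESHOLD:     # only the core at the intersection switches
                    self.cores[i][j] = bit

plane = CorePlane(4)
plane.write(1, 2, 1)
print(plane.cores[1][2])  # 1 -- the selected core flipped
print(plane.cores[1][3])  # 0 -- a half-selected core on the same x wire did not
```

The half-selected cores along the two driven wires see only half the switching current and keep their state, which is what makes every bit directly addressable.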

Initially, Forrester had doubts that it would work: perhaps the repeated exposure to half the switching current would eventually cause a slow degradation in the state of the core. It didn’t, and coincident-current core memory worked! A special test-bed computer was built just to verify the principle. Then in 1953 Whirlwind was equipped with a new core memory that doubled its speed (up to 40 KIPS), improved its reliability, and made it cheaper to keep running. Forrester applied for a patent in 1951, which was granted in 1956 (see the US patent Nr. 2736880). After its debut in Whirlwind, magnetic core memory was used in computers until the beginning of the 1970s.

Jay Forrester is working on Whirlwind. The woman is working on the 16-inch Whirlwind Display Console.

The Whirlwind was the fastest machine of its time. Its list of “firsts” is long and impressive, but what really matters is the simple fact that the Whirlwind was the first computer capable of real-time computation. It could add two 16-bit numbers in two microseconds and multiply them in twenty microseconds. Of course, the Whirlwind was huge, but it used only 4000 valves, less than a quarter of the number used in ENIAC—a much less powerful machine.

Instructions and data were entered into the memory by means of switches or a perforated tape. A magnetic drum (8 KB) and a magnetic tape device could be used as additional memory. Whirlwind was also the first computer to use a graphical display (with a resolution of 256×256 dots).

The light pen of Whirlwind

Whirlwind was the first computer to use a revolutionary new device—the light pen (see the nearby image)—which would flourish in its successor, SAGE, where it was used to identify aircraft of interest by selecting them on the CRT. The light pen was developed in the early 1950s by Robert R. Everett, an engineer at Lincoln Lab in Lexington, MA, and an assistant of Forrester who would also play a major role in the SAGE project. Everett was charged with developing a device to read the position of a dot on the screen of the Whirlwind computer for diagnostic purposes. The light pen sensed light on the CRT screen and caused a computer interrupt. This process took just a few microseconds, but that was enough time for the computer to identify the specific graphical item that had been pointed to.

Forrester left the project in 1956, when it was running smoothly through its final stages. His next brainchild, the monstrous SAGE computer system, began operation in the 1960s and was used as an air defense system until the 1980s.

Biography of Jay Forrester

Jay Wright Forrester (1918-2016)

Jay Wright Forrester was born on 14 July 1918 and grew up on a cattle ranch in Anselmo, Nebraska, USA. His parents were Ethel Pearl Wright Forrester (1886-1958) from Hastings, Nebraska, and Marmaduke (Duke) Montrose Forrester (1883-1975) from Emerson, Iowa; Jay had a sister, Barbara Francis (1921-2009). Both his parents attended Hastings College in Nebraska, and when they arrived in Anselmo around 1910, both worked as country schoolteachers. Duke was also a Nebraska state legislator for several terms.

Jay was taught at home by his mother for his first two years of schooling. After that, he rode his horse one and a half miles to a one-room schoolhouse. There, for the first two years, he was taught by his father.

Jay developed an early interest in electricity, tinkering with doorbells, batteries, and telegraphs. While in high school, he built a wind-driven, 12-volt electrical system from old car parts, which gave the ranch its first electric power. Jay was offered a scholarship to an agricultural college but decided that the bucolic life was not for him and instead enrolled in the University of Nebraska to study electrical engineering.

After earning a bachelor’s degree in electrical engineering in 1939, Jay moved to MIT. He worked as a research assistant with Gordon Brown, a pioneer in servomechanism theory and applications. During World War II, Jay worked on feedback control systems and servo-control systems for radar. For his master’s thesis, he designed and built a servo to stabilize radar antennae on naval ships. In 1943, the prototype was installed on the aircraft carrier Lexington and Jay subsequently traveled to Pearl Harbor to ensure its continued functioning. Though a civilian, he volunteered to stay on board when the fleet was ordered to sea to make sure the servo (and thus the ship’s radar) worked. During the mission, Lexington participated in the retaking of the Marshall Islands and survived a torpedo strike.

Jay Forrester holding a 64×64 core memory plane, 1954

Jay received an S.M. degree in electrical engineering from MIT in 1945. From 1956 he was a professor at the MIT Sloan School of Management, where he introduced the Forrester effect, describing fluctuations in supply chains, and is credited as the founder of system dynamics, which deals with the simulation of interactions between objects in dynamic systems and is most often applied to research and consulting in organizations and other social systems. In 1972, he received the IEEE Medal of Honor, the IEEE’s highest award. In 1982, Forrester received the IEEE Computer Pioneer Award. In 1995, he was made a Fellow of the Computer History Museum “for his perfecting of core memory technology into a practical computer memory device; for fundamental contributions to early computer systems design and development”. In 2006, Jay was inducted into the Operational Research Hall of Fame.

Besides the Whirlwind computer, Forrester’s most notable contributions include the following:
• The “multi-coordinate digital information storage device”, the precursor to Random Access Memory (RAM);
• What is believed to be the first animation in the history of computer graphics, a “jumping ball” on an oscilloscope;
• The reinterpretation of world dynamics through computer simulations of the Earth’s natural systems (e.g. natural resources, climate, etc.) in interaction with human-created systems (e.g. cities, nations, industries, etc.).

In 1989, Forrester received the National Medal of Technology, the nation’s highest award for technical achievement, from President George Bush

On 27 July 1946, Jay Forrester married Susan Swett (1917-2010) from Southern Pines, Moore County, North Carolina. They had a daughter, Judith, and two sons, Nathan Blair (born 1950) and Ned Cromwell. Both sons continued his MIT legacy: Ned studied electrical engineering, and Nathan focused on system dynamics.

Prof. Jay Wright Forrester, founder of the field of system dynamics and a pioneer of digital computing, considered by some to be one of the greatest minds of the last 100 years, died on 16 November 2016, aged 98, in Southern Pines, Moore County, North Carolina, USA.

Frederic Williams and Tom Kilburn

Coming together is a beginning; keeping together is progress; working together is success.
Henry Ford

Frederic Williams (left) and Tom Kilburn

The world’s first stored-program electronic digital computer, the English Small Scale Experimental Machine (SSEM, nicknamed the Baby) successfully executed its first program on 21 June 1948. That program was written by Tom Kilburn, who actually built the machine, designed by his mentor—Frederic (Freddie) Williams.

Frederic Calland Williams (1911-1977) gained BSc (1932) and MSc (1933) degrees in Engineering at the University of Manchester, and a doctorate at Oxford University. He then took up the post of Assistant Lecturer in his old department at Manchester. Over the next few years, Williams made many outstanding contributions to research in electronics, publishing some 20 papers. Two of these, with Prof. Blackett, were on an automatic curve follower for the Hartree Differential Analyser, a famous mechanical calculator constructed at the University of Manchester in the early thirties.

During WWII Williams made outstanding contributions to the electronics of radar and other military equipment, producing, amongst other things, the first operational amplifier. In June 1946 he started investigating the storage of both analog and digital information on a cathode ray tube. Storage of analog information could help solve the problem of static objects cluttering the dynamic picture on a radar screen. Storage of digital data could solve the problem holding up the development of computers worldwide: the lack of a storage mechanism that would work at electronic speeds. Williams demonstrated the successful operation of a single-bit memory using the anticipation pulse method in October and provisionally patented the system in December 1946.

The bit was stored in the form of a charge on the CRT screen’s phosphor, which could be controlled by the electron beam to write a 0 or a 1. Although the phosphor was an electrical insulator, the charge would leak away within about a second. Williams arranged to read the charge and then rewrite it continuously at electronic speeds so that information could be kept permanently; this process was called regeneration, and the principle is still used today to replenish the charge on modern integrated-circuit RAMs.
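The regeneration principle can be sketched as a simple simulation (the leak rate and read threshold below are assumed values for illustration, not measured Williams-tube figures): as long as each cell is read and rewritten before its charge decays below the read threshold, the stored bit survives indefinitely.

```python
LEAK_PER_MS = 0.002      # fraction of charge lost per millisecond (assumed)
READ_THRESHOLD = 0.5     # below this, the bit can no longer be read (assumed)

def simulate(refresh_interval_ms, total_ms=3000):
    """Return True if the bit survives the whole run, False if it decays away."""
    charge = 1.0
    for t in range(total_ms):
        charge -= LEAK_PER_MS
        if charge < READ_THRESHOLD:
            return False                  # bit lost before it was refreshed
        if t % refresh_interval_ms == 0:
            charge = 1.0                  # read and rewrite restores full charge
    return True

print(simulate(refresh_interval_ms=100))  # True  -- refreshed well before decay
print(simulate(refresh_interval_ms=400))  # False -- refreshed too late
```

The same race between leakage and periodic rewrite is what DRAM refresh controllers manage today.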

In December 1946 Freddie Williams moved to the University of Manchester to take up a chair in Electro-Technics. He continued to work on the system, and some members of his group followed him, among them Tom Kilburn (1921-2001), a mathematician who had worked in the group since 1942. By the end of 1947, 2048 bits were being stored on a standard single 6-inch diameter CRT, and Tom Kilburn had written an internal report introducing the “dot-dash” and “defocus-focus” methods of operating the CRT and the design of a hypothetical computer. The report aroused considerable interest and was widely circulated in the UK and USA. The CRT storage system became known as the Williams Tube.

Though the store could remember 2048 bits, an individual bit could only be reset by hand, and it was necessary to test its capability of setting and reading any required bit at electronic speeds and remembering its value indefinitely between settings. So the next step was to build a small computer around a CRT memory, to subject it to the “most effective and searching tests possible”. Williams and Kilburn knew all about electronics but little about computers, so they consulted their colleagues, the famous mathematicians Alan Turing and Max Newman, the creator of Colossus. Using their advice, Williams designed the computer, which was built mainly by Kilburn and his colleagues.

SSEM computer in 1948

This computer, the Small Scale Experimental Machine (SSEM), was built in half a year. It embodied the stored-program concept: the random-access memory was used not only to hold numbers involved in calculations but also to hold the program instructions. This meant that instructions could be read successively at electronic speed, and that running a different program only involved resetting part of the memory using a simple keyboard, rather than reconfiguring the electronic circuitry (which could take days on ENIAC).

Work on building the SSEM took place in the first half of 1948 and was mainly carried out by Tom Kilburn with the assistance of Geoff Tootill. The arithmetic unit was built from vacuum tubes, while the memory, registers, and display were based on Williams Tubes. Input was via a keyboard.

The Baby machine had the following properties:
• 32-bit word length
• Serial binary arithmetic using 2’s complement integer numbers
• A single address format order code
• A random access main store of 32 words initially, extendable up to 8192 words
• A computing speed of around 1.2 milliseconds per instruction

There were four Williams Tubes in the Baby. The main store was a 32×32-bit array on one tube. Two other tubes held special storage “registers”: one held the accumulator A, and the other held both the address of the current instruction CI (“Control Instruction”) and the instruction itself PI (“Present Instruction”). The fourth, the Display Tube, could be switched to display the current contents of any of the storage tubes.

The instruction format was:
3-bit function field (bits 13 to 15) + 13-bit store address (0 to 12) + 16 bits unused

There were just 7 instructions (with S representing the contents of the word with address S):
• A = – S
• A = A – S
• S = A
• If A < 0, CI = CI + 1 (i.e. if A negative, skip the next instruction)
• CI = S
• CI = CI + S
• Halt the program

Note the surprising use of the minus operator, and the sophistication of using the contents of a store location, rather than the address itself, as the operand in control jumps. CI had to be set to the address of the instruction before the one to be obeyed next. Note also that within two months the Baby had been enhanced to a 4-bit instruction code (not all combinations used), including A = S, A = A + S, and A = A & S.
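The seven instructions above can be captured in a short emulator sketch. The opcode numbering follows the usual SSEM convention but should be treated as illustrative, and for simplicity CI is incremented after each instruction, which reproduces the Baby's behavior that a jump to S resumes execution at S+1.

```python
def run(store, start=0, max_steps=100000):
    """Execute the seven-instruction Baby program in `store`; return A at halt."""
    A, CI = 0, start
    for _ in range(max_steps):
        word = store[CI]
        addr = word & 0x1FFF              # 13-bit store address (bits 0-12)
        fn = (word >> 13) & 0x7           # 3-bit function field (bits 13-15)
        if fn == 0:
            CI = store[addr]              # CI = S
        elif fn == 1:
            CI = CI + store[addr]         # CI = CI + S
        elif fn == 2:
            A = -store[addr]              # A = -S
        elif fn == 3:
            store[addr] = A               # S = A
        elif fn in (4, 5):
            A = A - store[addr]           # A = A - S
        elif fn == 6:
            if A < 0:                     # skip the next instruction if A negative
                CI += 1
        elif fn == 7:
            return A                      # halt
        CI += 1
    raise RuntimeError("program did not halt")

def ins(fn, addr=0):
    return (fn << 13) | addr

# Tiny demo program: compute -(7) - 5 and store the result.
store = [0] * 32
store[0] = ins(2, 10)    # A = -store[10]     -> A = -7
store[1] = ins(4, 11)    # A = A - store[11]  -> A = -12
store[2] = ins(3, 12)    # store[12] = A
store[3] = ins(7)        # halt
store[10], store[11] = 7, 5
print(run(store), store[12])   # -12 -12
```

Note how even this trivial program leans on the negate-and-subtract style the text describes: there is no load or add, so A = x is spelled A = -(-x).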

Input to the Baby was by setting sequences of bits at chosen addresses using a simple keyboard. The output was by reading the information on the Display Tube.

The first program for SSEM

Three demonstration programs were run on the prototype, including one written by Turing, which used a long division routine.

The first program to run successfully (see the nearby image) was executed on 21 June 1948 and was written by Tom Kilburn. It found the highest proper factor of a number a by trying every integer b from a-1 downward until one was found that divided exactly into a. The necessary divisions were done not by long division but by repeated subtraction of b, because the “Baby” only had a hardware subtractor.

The original number chosen was quite small, but within a few days the program was tried on 2¹⁸ (262,144); here around 130,000 numbers were tested, which took about 2.1 million instructions and involved three and a half million store accesses. The correct answer was obtained in a 52-minute run.
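Kilburn's method translates directly into a few lines of Python (a sketch of the algorithm, not the Baby's actual code): divisibility is tested purely by repeated subtraction, the only operation the Baby's hardware offered.

```python
def highest_proper_factor(a):
    """Try b = a-1 downward; test divisibility by repeated subtraction of b."""
    b = a - 1
    while b > 0:
        r = a
        while r > 0:          # "divide" by subtracting b until we pass zero
            r -= b
        if r == 0:            # landed exactly on zero: b divides a
            return b
        b -= 1

print(highest_proper_factor(2 ** 18))   # 131072, the answer of the famous run
```

For 2¹⁸ the program must reject every b from 262,143 down to 131,073 before finding 131,072, which is why the original run churned through millions of store accesses.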

Williams later said of the first successful run:
A program was laboriously inserted and the start switch pressed. Immediately the spots on the display tube entered a mad dance. In early trials it was a dance of death leading to no useful result, and what was even worse, without yielding any clue as to what was wrong. But one day it stopped, and there, shining brightly in the expected place, was the expected answer. It was a moment to remember. This was in June 1948, and nothing was ever the same again.

With the SSEM proving both the effectiveness of the Williams Tube and the basic stored-program concept, work started immediately, with increased manpower, on a more realistic and usable computer based on the Baby. This was achieved between late 1948 and late 1949 with the Manchester Mark 1, which was used for a variety of purposes in 1949 and 1950, including investigation of the Riemann hypothesis and calculations in optics. In addition, in October 1948 the British government asked Ferranti Ltd. to manufacture a commercial machine to Williams’ specifications. This became the world’s first general-purpose commercial computer, the Ferranti Mark 1.

Ferranti Mark 1

The world’s first commercially available general-purpose electronic computer was the English machine Ferranti Mark 1, launched in February 1951. It was based on the earlier Small Scale Experimental Machine and Manchester Mark 1 computers of Frederic Williams and Tom Kilburn.

Ferranti Mark 1 computer

Only two months after the successful test run of the Small Scale Experimental Machine (aka the Baby) in June 1948, a full-scale version was underway, and Ferranti was investigating commercial production, although progress to commercial production was far from certain because of the financial risks. The first machine was delivered to the University of Manchester on 12 February 1951; seven updated versions were later sold.

The memory of the Mark 1 was based on 20-bit words, each stored as a line of dots of electric charge on the surface of a Williams–Kilburn tube (a cathode ray tube used as a computer memory to electronically store binary data), with each tube storing 64 words (lines of dots). Instructions were stored in a single word, while numbers occupied two words (40 bits). The main (primary) memory consisted of eight tubes, each storing one such page of 64 words, for a total of 512 words.

Other Williams tubes stored the single main 80-bit accumulator (A), the 40-bit multiplicand/quotient register (MQ), and eight 20-bit B-lines, or index registers, used to modify instructions. The accumulator could also be addressed as two 40-bit words. An extra 20-bit word per tube stored an offset value into the secondary storage, which was provided in the form of a 512-page magnetic drum storing two pages per track, with a revolution time of about 30 milliseconds.

The instructions (about fifty in total) had an address part and an operator part, and used a single-address format in which operands were modified and left in the accumulator. Programs were written in a base-32 notation (one five-bit value per character). Integers were usually treated as 40-bit double words, with negative numbers represented in two’s complement.

The digit frequency was 100 kHz, giving a 10-microsecond digit-period. The drum was synchronized to the processor’s clock, allowing more than one drum to be added if required. Twenty-four digit-periods (240 microseconds) were known as a beat.

The basic cycle time (needed for arithmetical and logical instructions) was 1.2 milliseconds (5 × 240-microsecond beats); a multiplication completed in about 2.16 milliseconds (9 beats), while most other instructions took 4 beats (0.96 ms). The multiplier used almost a quarter of the machine’s 4050 vacuum tubes. Several instructions were included to copy a word of memory from one of the Williams tubes to a paper tape machine or read it back in.
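These timing figures are mutually consistent and can all be derived from the 100 kHz digit rate:

```python
digit_period_us = 1 / 100_000 * 1e6     # 100 kHz digit rate -> 10 µs digit-period
beat_us = 24 * digit_period_us          # 24 digit-periods -> 240 µs beat

print(5 * beat_us / 1000)   # 1.2  ms basic cycle (5 beats)
print(9 * beat_us / 1000)   # 2.16 ms multiplication (9 beats)
print(4 * beat_us / 1000)   # 0.96 ms for most other instructions (4 beats)
```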

The Ferranti Mark 1 was programmed by entering alphanumeric characters, each representing a base-32 (five-bit) value, on the paper tape input. The engineers chose the simplest mapping between the paper-tape holes and the binary digits they represented, but the mapping between the holes and the physical keyboard was never meant to be a binary mapping.
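The idea can be illustrated as follows (the alphabet below is a placeholder, not the Mark 1's actual teleprinter character code, and the real machine famously wrote its digits least-significant first): a 20-bit word splits into four 5-bit groups, each printed as one base-32 character.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUV"   # placeholder base-32 symbols

def word_to_base32(word):
    """Render a 20-bit word as four base-32 characters, most significant first."""
    groups = [(word >> shift) & 0b11111 for shift in (15, 10, 5, 0)]
    return "".join(ALPHABET[g] for g in groups)

print(word_to_base32(0b10110_00101_00000_00001))   # "M501"
```

Programmers of the real machine had to memorize the teleprinter mapping, since nothing about the character shapes hinted at the bit patterns underneath.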

The Ferranti Mark 1 contained approximately 1600 pentodes and 2000 thermionic diodes. The main CPU was contained in two bays, each 5 meters long by 2.7 meters high, and consumed 25 kW of power.

The Mark I’s console

The normal input/output equipment for the Ferranti Mark I consisted of paper tape readers operating at 200 characters/second, paper tape punches operating at 15 characters/second, and a teleprinter printing at six characters/second.

The Mark I’s console (see the nearby image) has two large and four small display tubes. The two large tubes at the bottom show two pages of the main memory.

The actual processing was done with four smaller tubes, labeled with the first four letters of the alphabet. A, the accumulator, contained the results of arithmetical and logical operations and also temporarily stored data being transferred from one line of a page to another. The C tube (C for control) held the current instruction and its address. The contents of the auxiliary store B could be added to the current instruction, modifying it before it was carried out. D contained the multiplier in the appropriate calculations.

Calvin Mooers

It has become appallingly obvious that our technology has exceeded our humanity.
Albert Einstein

Calvin Northrup Mooers (1919–1994)

Calvin Northrup Mooers (1919–1994) was an American computer scientist who coined the term “information retrieval” in March 1950 and went on from there to obtain several patents in information retrieval and signaling, design a text-handling language (TRAC), author more than 200 publications, and found one of the first companies whose only concern was information.

Mooers was a native of Minneapolis, Minnesota. He attended the University of Minnesota, worked at the Naval Ordnance Laboratory from 1941 to 1946, and then entered MIT (Massachusetts Institute of Technology), where he earned a master’s degree in mathematics and physics. At MIT, Mooers developed a mechanical information-retrieval system using superimposed codes of descriptors, called Zatocoding, and founded the Zator Company in 1947 to market his idea.

In 1951 Mooers issued a report entitled Making Information Retrieval Pay in Issue 55 of the Zator technical bulletin. In section II, INFORMATION RETRIEVAL vs. INFORMATION WAREHOUSING, of this report, Mooers stated:
Information retrieval must be distinguished from another operation performed on information. This is the ‘information warehousing’ operation, which is the orderly receipt, cataloguing and storage of information. Almost every library does a highly efficient and satisfactory job of information warehousing. This is fortunate, since successful operation of information retrieval—discovery and use of information–depends upon competent information warehousing. On the other hand, merely to warehouse a large collection of information does little to aid the User to discover the information he needs. Here we have a prevalent fallacy of the libraries.

In section IX, The DOKEN, we can read:
Can the world-wide torrent of scientific information—from an estimated 30,000 periodicals containing an estimated 1,000,000 papers per annum–be met by any conceivable retrieval machine? The answer is yes, and the back-log (estimated roughly at 100,000,000 pieces) can be handled too.
No existing machine is capable of doing a reasonable job of information retrieval on such a collection. The fastest electronic tabulating machinery would seem to require about 2,600 hours, or about 3 1/2 months, to scan a collection of 100 million pieces in answer to one request for information. The Microfilm Rapid Selector, according to published speeds, would take about 170 hours of steady running time or about a week to make the same search. Both these are too slow to meet a reasonable requirement that a central agency having such a machine should be able to make a number of searches each day, and to send out the bibliographies the same day the request was received.
A machine that can do this job is actually possible—and it can be constructed within the limitations of our present technology. I will describe some of the features of such a machine in order that you will know what such a machine will be like when it is built. On the other hand, I can’t tell you the date that this machine will actually be constructed because I cannot forecast when anyone will be able to afford it. The great expense is not in the machine. The machine will cost less than one of the enormous computing machines that we have been hearing so much about, and which some organizations seem to be able to afford. The real cost is handling and analyzing the magnitude of information in setting up the system. We should figure on a cost of at least $2 per item. Thus the annual cost of processing the world’s information—$2,000,000—would be several times the cost of the machine itself. But, to get back to the details of our hypothetical machine:
We will call the machine the D O K E N, which is short for “documentary engine”. The DOKEN is capable of making a complete multi-subject search of 100 million items in about 2 minutes, and having scanned the record, it reproduces or prints a bibliography of the selected abstracts at a rate of about 10 per minute by a dry printing process. Many searches are conducted each hour, steadily, throughout the day. After the first DOKEN is operating, film records for other DOKENS can be inexpensively copied at a fraction of the original cost. A DOKEN is a most appropriate instrument for national or regional research centers. It would be the information retrieval auxiliary instrument at a large library center for the local collection plus the entire world’s literature. For instance, it could scan the Library of Congress collection (10 million catalogued items) in 10 seconds.
The DOKEN can achieve the stated performance goal only by recourse to the most efficient techniques. That means that the job must be broken down into the different functional operations, and highly efficient specialized structures and methods are used to accomplish each. There are three separate functional organs that we must consider. They are: 1) the code storage and scanning engine, 2) the abstract record and reading engine, and 3) the abstract printing stations. These organs, unlike the corresponding elements in the Rapid Selector, are physically separate structures. We will consider them in turn.
The Code storage and scanning engine contains the coded subjects of 100 million documents. Therefore, at least from considerations of sheer bulk, the most efficient possible subject coding must be used. The choice here is Zatocoding—the method of superimposition of random codes in each subject field—since this method seems to be considerably more efficient than any other coding scheme now known. We let each document be described by as many as 25 different cross-referenced subjects. The coded record is micro-photographed on photographic film, and this film strip is helically wound on a metal drum 10 feet in diameter and 7 feet long. This drum is driven at about 300 rpm, and the scanning head, following the helically-wound film, passes from one end to the other in less than a minute. The codes for more than one million documents are scanned in each second. This is about 5,000 times Rapid Selector speed. The basic principles of such a scanning head, able to do this with standard equipment, have been worked out. Selections, when made, are temporarily recorded as document or abstract numbers in an electronic or magnetic memory. The selections are made according to any simple or complex configuration of subject ideas, which can be chosen arbitrarily to suit the needs of the request at hand.
The abstract storage and reading engine is the organ which stores micrographic copies of 200-word abstracts and the citations for the documents. A single, large, square, semi-transparent sheet carries from a quarter million to a million of such abstracts. These sheets are stored in a stack, and by a mechanism like that of an automatic jukebox record changer, the different sheets are pulled out of the stack to be read by an optical copying television head. This read head, using the two coordinate positions of the wanted abstract, finds the abstract, magnifies it, and electrically copies it into a wire circuit. Many such optical heads are working at the same time in the abstract storage engine. This abstract storage and reading engine fits nicely in an ordinary large-sized room, since the stack is only about 20 feet long.
The abstract printing stations are placed remote from the rest of the engine, at the request desk or in the mailing room for mail service. The process used is fast dry printing, employing either ultra-violet-sensitive diazo paper or an electro-sensitive facsimile paper. Neither silver photography nor xerography meets nearly as well the requirements for a fast, simple, and cheap single-copy process. Presently available equipment, about the size of a table radio and now on the market, can produce about ten 200-word abstracts per minute at each station. There are as many stations in the operation as there are reading heads in the reading engine. The abstracts produced are reasonably clear, full-sized, and readable without any optical aid.
Such is the DOKEN. It can be built if there is a need for it. Part of the world’s intellectual output is already being abstracted. With cooperation, and less than 10% additional effort, this same information could be put into a DOKEN system. Perhaps this cooperative endeavour will take the pattern so well worked out by Chemical Abstracts with its large corps of volunteer abstractors, and smaller staff of central editors. If so, the cost of the world-wide documentary project could be whittled down to manageable proportions. Support could be on a subscription basis. Bibliographic searches to any request would be finished by return airmail, giving an overnight service to information users.
Smaller versions of the same instrument have a possible use in other situations, such as the whole chemical literature, the U.S. Patent Office, or the files of insurance companies. In such smaller collections, a much more complete subject coding is possible and would certainly be desirable in the case of patents.
With regional DOKENs available, company collections of information on punched cards can be enriched by the inclusion of specially selected items from DOKEN bibliographies. But these bibliographies of abstracts would generally have to be pruned, recoded, and ‘slanted’ into the particular company’s technical viewpoint in order to raise their utility up to the company’s retrieval system threshold value.

In 1959 Mooers coined “Mooers’ law”: “An information retrieval system will tend not to be used whenever it is more painful and troublesome for a customer to have information than for him not to have it.” His corollary: where an information retrieval system tends not to be used, a more capable information retrieval system may tend to be used even less.

In 1961 Mooers founded the Rockford Research Institute, where he developed the TRAC (Text Reckoning And Compiling) programming language, and attempted to control its distribution and development using trademark law and a unique invocation of copyright. (At the time patent law would not allow him to control what he saw as his intellectual property and profit from it.) Mooers was awarded the American Society for Information Science’s Award of Merit in 1978.

Jacob Rabinow (hard disk)

It means inventing is a hell of a lot of fun if you don’t have to make a living at it. It means an inventor can get a job, solving day-to-day problems; the industry says, ‘Fix it but don’t change it,’ so they don’t have to re-tool. Do something new and people say ‘It’s different, but who needs it?’
Jacob Rabinow

Notched-Disk Magnetic Memory Device (c.1951) Courtesy of the National Institute of Standards and Technology

At the end of the 1940s, the American engineer and prolific inventor at the National Bureau of Standards (NBS), Jacob Rabinow (1910-1999), served as a consultant on computer development services for government agencies. Asked to design a machine to record on and read from sheets of magnetic material, he instead proposed adopting discs as used by Valdemar Poulsen in 1898 in his experiments with magnetic recorders. An inductive magnetic read/write head moved in the space between the disks that were mounted on a spindle.

In 1949 Rabinow built an experimental model of his disk-based storage unit, called a Notched-Disk Magnetic Memory Device. Each disk on the machine had a pie section, called a notch, removed. This allowed the read/write head to be moved from one disk to another. Approximately 18 inches in diameter, each disk held about 500000 bits of data.

In March 1951 Rabinow filed a patent application for a “Magnetic Memory Device,” which was granted in October 1954 (US patent Nr. 2690913), and in August 1952 he reported his experimental work on the notched-disk memory. NBS policy was that inventions made as part of an employee’s job belonged to the government. As foreign rights remained with the inventor, Rabinow received patents in several foreign countries and sold the non-U.S. rights to Remington-Rand for $15000, but the company never used the patent.

IBM 350 Disk Storage

Several years later, seeking a better method than punched cards, magnetic drums, or tape to store and access information, Reynold Johnson’s team at IBM in San Jose, CA included a description of Rabinow’s device in a 1953 report on “A Proposal for Rapid Random Access File.” They adopted the disk concept as the basis for the RAMAC project that yielded the first commercial hard disk drive in 1956.

In September 1956 IBM introduced a new computer model, the IBM 305 RAMAC (Random Access Method of Accounting and Control) system. The design itself was unremarkable, one of the last vacuum-tube systems IBM built, but it introduced disk storage technology to the world in the form of the IBM 350 Disk Storage (see the nearby photo). It became a market hit, and more than 1000 305s were built before production ended in 1961.

The 350 Disk Storage Unit consisted of the magnetic disk memory unit with its access mechanism, the electronic and pneumatic controls for the access mechanism, and a small air compressor. Assembled with covers, it was 150 cm long, 170 cm high and 72 cm deep. It was configured with 50 24-inch magnetic disks containing 50000 sectors, each of which held 100 alphanumeric characters, for a total capacity of 5 million characters.

Disks rotated at 1200 rpm, and tracks (20 to the inch) were recorded at up to 100 bits per inch. The execution of a “seek” instruction positioned a read-write head to the track that contained the desired sector and selected the sector for a later read or write operation. Seek time averaged about 600 milliseconds. The disk system cost some $10000.
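The 350’s published figures are easy to cross-check. A quick sketch, using only numbers from the text above (the observation about what dominates seek time is my gloss, not an IBM specification):

```python
# IBM 350 capacity: 50,000 sectors of 100 characters each (figures from the text).
sectors = 50000
chars_per_sector = 100
capacity = sectors * chars_per_sector   # 5,000,000 characters: the quoted "5 million"

# Rotational delay at 1200 rpm: one revolution takes 50 ms, so on average a
# sector arrives under the head after 25 ms. The quoted ~600 ms average seek
# is therefore dominated by mechanically moving the head between tracks,
# not by waiting for the disk to rotate.
rpm = 1200
ms_per_rev = 60 * 1000 / rpm              # 50.0 ms per revolution
avg_rotational_delay_ms = ms_per_rev / 2  # 25.0 ms

print(capacity, ms_per_rev, avg_rotational_delay_ms)  # 5000000 50.0 25.0
```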

Over the next several years, as storage memory continued to evolve, the HDD would emerge as the next new, more adaptable solution and would replace many of the earlier, groundbreaking storage technologies.

IBM 3340 Winchester Direct Access Storage Facility (© IBM)

During the first several decades of the development of HDDs, IBM was the main innovator. In 1961 it invented heads for disk drives that “fly” on a cushion of air, or “air bearings.” In 1963 it came up with the first removable hard drive, the IBM 1311, which had six 14-inch platters and held 2.6MB. In 1966 it introduced the first drive using a wound-coil ferrite recording head. In 1973, IBM introduced the 3340, or Winchester Direct Access Storage Facility (see the nearby photo). The smaller and lighter 3340 marked the next real evolutionary step in hard disk storage.

The 3340 featured a smaller, lighter read/write head that could ride closer to the disk surface—on an air film 18 millionths of an inch thick, and with a load of fewer than 20 grams. The Winchester disk file’s low-cost head slider structure made it feasible to use two heads per surface, cutting the stroke length in half. The disks, the disk spindle and bearings, the carriage, and the head-arm assemblies were incorporated into a removable, sealed cartridge. A track density of 300 tracks per inch and an access time of 25 milliseconds were achieved. It had three types of data modules: 35 megabytes, 70 megabytes, and 70 megabytes of which 0.5 megabytes were accessible with fixed heads. Two-to-four 3340 drives could be attached to the IBM System/370 Model 115 processor, which had been announced concurrently with the 3340, thus providing a storage capacity of up to 280 million bytes.

IBM also continued to shrink the drives and their platters. In 1979 the IBM 3370 used seven 14-inch platters to store 571MB and was the first drive to use thin-film heads. The same year IBM introduced its “Piccolo,” which used six 8-inch platters to store 64MB.

Seagate's ST506 was the first hard disk drive for personal computers

In 1980, a new and unknown company started a small revolution in hard disk drive (HDD) production. Until then only large, well-funded companies could afford a hard disk; now it came within reach of a broader public. Seagate Technology had been founded in 1979 (under the name “Shugart Technology”) by Alan Shugart and Finis Conner. Its first product, released in 1980, was the ST-506 (see the nearby photo), the first hard disk to fit the 5.25-inch form factor of the (by then famous) Shugart “mini-floppy” drive. The ST-506 held just 5MB of data and cost $1500. The drive was a hit and was later followed by a 10-megabyte version, the ST-412.

In 1983 the company Rodime released the world’s first 3.5-inch hard drive, again borrowing the form factor of the floppy disk drives. The RO352 included two platters and stored 10MB. In 1998 IBM announced its Microdrive, the smallest hard drive to date, designed to fit in a CompactFlash (CF) Type II slot. It was launched in two models, 170 MB and 340 MB.

Although other sectors of the tech industry have developed exponentially faster, magnetic hard disk drives (HDDs) are still the technology of choice for bulk computer and server storage. As the storage market changes, however, hard drives will very likely be displaced: solid-state drives (SSDs) are already replacing them as the primary storage in personal computers, and some research suggests the use of hard disks could come to an end soon.

Biography of Jacob Rabinow

Jacob Rabinow (1910-1999)

Jacob Rabinow was born Яков Рабинович (Yakov Rabinovich) into the Jewish family of Aaron and Helen Rabinovich (born 1882) in Kharkov, Russian Empire (now Kharkiv, Ukraine), on 8 January 1910. Арон Рабинович (Aaron Rabinovich) had married Елена Флайшер (Helen Fleisher) of Кременчук (Kremenchuk, a city in central Ukraine) in 1908, and they had two sons: David (born 1908) and Jacob. In 1914 the family moved to Kustanay, Siberia (now Kostanay, Kazakhstan), where Aaron Rabinovich established a small shoe factory. Jacob’s inventive talent was awakened by science fiction books, especially those of Jules Verne. He was also captivated by the machinery in the family factory and by his father’s attempts to automate some production operations.

After the Revolution struck in 1917, Aaron lost the factory, but the revolutionaries didn’t molest him: he was a Social Democrat and apparently neutral enough that neither the Communists nor the White Guards bothered him much, beyond confiscating the factory. In 1918, while in Kustanay, the 8-year-old Jacob made his first invention, a machine (“a couple of ropes with a stick in between”) for throwing rocks. The young boy was quite pleased with his brilliant idea until an adult informed him that his rock thrower was actually a Roman ballista, invented some 2000 years before 🙂

In 1919, with the Civil War between the Whites and the Reds still raging, the Rabinovich family took whatever they could carry and left for China. Unfortunately, Aaron caught typhus just as they arrived in Harbin and died about a week later. The family (Helen, Jacob, and his brother David, two years his elder) had some money and lived in China for two years until they finally got permission to travel to the United States, where they had relatives. In June 1921 they settled in New York, where Helen opened a corset shop in Brooklyn. Jacob was put directly into fourth grade although he didn’t understand a word of English; within a few months he had picked up the language and become a very good mathematics student. Around the same time he began to play with radio and built his first professional radio set, a one-tube receiver, for his teacher for $11. Jacob never stopped building radio equipment after that, and during the Great Depression he worked in a radio factory.

Jacob finished high school in 1927, worked full-time for half a year to earn a few bucks, and then entered New York City College. At that time he was naturalized and shortened his surname from Rabinovich to Rabinow. When the Depression wiped out his mother’s shop in Brooklyn, “Kuba,” as she called Jacob, had to borrow the $62 tuition for graduate engineering school at City College. Jacob graduated from City College with a Bachelor’s Degree in Engineering in 1933 and a Master’s Degree in Electrical Engineering in 1934. Initially he had taken a “straight” B.S. at CCNY, because he had been told repeatedly that there was no chance in America for a Jewish engineer, but when the corset shop failed, Kuba decided: “If I’ll starve, I’ll starve doing something I like.” He abandoned his compromise goal of becoming a teacher and switched back to engineering.

Jacob began his career as an inventor when he was hired as a mechanical engineer in 1938 by the National Bureau of Standards (now the National Institute of Standards and Technology, or NIST). He made many developments there, mainly in defense systems, and eventually became Chief of the Electro-Mechanical Ordnance Division at NBS before leaving in 1954 to form his own company, Rabinow Engineering. He also became interested in a microfilm reader envisioned by Vannevar Bush, called the Rapid Selector, which led to his more general interest in reading machines.

Jacob Rabinow (1910-1999)

Rabinow earned a total of 229 United States patents on a variety of mechanical, optical, and electrical devices. Among them are the first disc-shaped magnetic storage media for computers (1954), the magnetic particle clutch (1956), the first straight-line phonograph (1959), the first self-regulating clock (1960), and his famous “reading machine” (1960) which was the first to use the “best match” principle and was the basis for the reading, sorting and processing machines used today by banks and post offices. Rabinow had shelves of notebooks full of perhaps 2000 yet unpatented ideas.

In 1964, Rabinow’s company joined Control Data Corporation (CDC), and until 1972 he was a Vice President of CDC and head of the Rabinow Advanced Development Laboratory. In 1968 Rabinow formed the RABCO company to manufacture straight-line phonographs; the company was later bought out by Harman Kardon Corporation. In 1972 Rabinow returned to NBS, where he was Chief Research Engineer until his retirement in 1989.

Rabinow published his book, Inventing for Fun and Profit, in 1989. He also delivered many speeches and lectures on inventions and technology, as a guest at many educational institutions and on several television and radio shows. He served on the board of trustees of Science Service, now known as Society for Science & the Public, from 1971 to 1973, and has been listed as a noteworthy electrical engineer and consultant by Marquis Who’s Who.

On 26 September 1943, Jacob Rabinow married Gladys Lieder (1918-2018), a math teacher and statistician, and they had two daughters—Jean Ellen, and Clare Lynn.

The remarkable engineer and inventor Jacob Rabinow died on 11 September 1999 (aged 89) in Fairfax County, Virginia, USA.

Geoffrey Dummer (integrated circuit)

We are all faced with a series of great opportunities brilliantly disguised as impossible situations.
Charles R. Swindoll

Geoffrey William Arnold Dummer (1909–2002)

The first man who must be credited with conceiving the integrated circuit is the British engineer Geoffrey Dummer. Geoffrey William Arnold Dummer (1909–2002) was an electronics author and consultant who developed the first radar trainers and became a pioneer of reliability engineering at the Telecommunications Research Establishment (TRE) in Malvern in the 1940s. His work with colleagues at TRE led him to the belief that it would be possible to fabricate multiple circuit elements on and into a substance like silicon.

In 1952 Dummer presented at a conference in Washington, DC, a work, in which he states: “With the advent of the transistor and the work on semiconductors generally, it now seems possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electronic functions being connected directly by cutting out areas of the various layers”. This is now generally accepted as the first public description of an integrated circuit.

At a later date, Dummer said, “It seemed so logical to me; we had been working on smaller and smaller components, improving reliability as well as size reduction. I thought the only way we could ever attain our aim was in the form of a solid block. You then do away with all your contact problems, and you have a small circuit with high reliability. And that is why I went on with it. I shook the industry to the bone. I was trying to make them realize how important its invention would be for the future of microelectronics and the national economy”.
In September 1957, Dummer presented a model to illustrate the possibilities of solid-circuit techniques—a flip-flop in the form of a solid block of semiconductor material, suitably doped and shaped to form four transistors. Four resistors were represented by silicon bridges, and other resistors and capacitors were deposited in film form directly onto the silicon block with intervening insulating films.
Dummer’s ideas however remained unrealized and relatively unknown, because the UK military failed to perceive any operational requirements for ICs, and UK companies were unwilling to invest their own money. Dummer later said: “I have attributed it to war-weariness in one of my books, but that is perhaps an excuse. The plain fact is that nobody would take the risk. The Ministry wouldn’t place a contract because they hadn’t an application. The application people wouldn’t say they want it, because they had no experience with it. It was a chicken-and-egg situation. The Americans took financial gambles, whereas this was very slow in this country”.

Jack St. Clair Kilby around 1960 (Courtesy of Texas Instruments)

And the Americans were again faster and took financial gambles.
One day in late July of 1958, the engineer Jack Kilby (1923-2005) was sitting alone at a small but innovative company in Dallas, Texas: Texas Instruments. In 1954 the company had been involved with manufacturing the first transistor pocket radio, which was enormously successful. Executives at Texas Instruments believed that the possibilities of electronic circuits were nearly endless. In May of 1954 company engineers had perfected a process for making transistors out of silicon, an improvement that made them much less prone to fail when they got hot. In their research, they discovered that several electrical components could be built from silicon, although at the time they were only interested in transistors.

Kilby had been hired only a month earlier and so he wasn’t able to take vacation time when practically everyone else did. The halls were deserted, and he had lots of time to think. As he remembered later: “As a new employee, I had no vacation time coming and was left alone to ponder the results of an IF amplifier exercise. The cost analysis gave me my first insight into the cost structure of a semiconductor house.” It suddenly occurred to him that all parts of a circuit, not just the transistor, could be made out of silicon. At the time, nobody was making capacitors or resistors out of semiconductors. If it could be done then the entire circuit could be built out of a single crystal—making it smaller and much easier to produce. Kilby’s solution to this problem has come to be called the monolithic idea. He listed all the electrical components that could be built from silicon: transistors, diodes, resistors, and capacitors.

What was the reaction of his colleagues? Kilby recalled: “There were a number of objections. Most people thought that you would never be able to make them in quantity. At that time less than 10 percent of the transistors at the end of the line were likely to be good. The thought that you would put several on a chip seemed like madness.”

The original integrated circuit of Jack Kilby

Kilby then conceived the idea of constructing a single device, with all the needed parts made of silicon, that could be soldered to a circuit board. He understood that if he could eliminate the wires between the parts, he could squeeze more parts into a smaller space, thus removing the obstacle to manufacturing complex transistor circuits. When he presented this smashing idea to his boss, the boss liked it and told him to get to work. By 12 September, Kilby had built a working model (see the nearby photo), and on 6 February 1959 Texas Instruments filed a patent. Their first Solid Circuit, the size of a pencil point (11 by 1.5 millimetres), was shown off for the first time in March 1960.

But over in California, another man had similar ideas. In January of 1959, Robert Noyce was working at a small startup company, Fairchild Semiconductor, which he and seven of his colleagues had established in 1957 after leaving Shockley Semiconductor. He too realized that a whole circuit could be made on a single chip. While Kilby had hammered out the details of making individual components, Noyce thought of a much better way to connect the parts. That spring, Fairchild began a push to build what it called “unitary circuits,” and it also applied for a patent on the idea. Knowing that TI had already filed a patent on something similar, Fairchild wrote out a highly detailed application, hoping that it wouldn’t infringe on TI’s similar device.

Robert Noyce (1927-1990)

All that detail paid off. On 25 April 1961, the patent office awarded the first patent for an integrated circuit to Robert Noyce (see the U.S. patent Nr. 2981877 of Noyce), while Kilby’s application, filed five months earlier than Noyce’s, was still being analyzed; his patent was not granted until June 1964 (see the U.S. patent Nr. 3138743 of Kilby). Today both men are acknowledged as having independently conceived the idea, but the real acknowledgment came too late: in 2000 Kilby alone became a Nobel Prize laureate for the invention of the integrated circuit, while Noyce, who died in 1990, did not live to be honored with the prestigious award.

Fairchild and Texas Instruments fought a court battle that was not settled until 1966, by which time integrated circuit chips had become a multi-billion-dollar industry. In the summer of 1966 executives of the two companies made an agreement to share ownership by granting production licenses to each other; any other company that wanted to produce integrated circuits had to pay both Texas Instruments and Fairchild. As for Kilby, the scientific community informally agreed that both he and Noyce had invented the chip and that both deserved credit.

Kilby and Texas Instruments had made a big breakthrough. But while the U.S. Air Force showed some interest in TI’s integrated circuit, the industry reacted skeptically. Indeed the IC and its relative merits “provided much of the entertainment at major technical meetings over the next few years,” as Kilby wrote later.

Since TI and Fairchild were the co-inventors of the IC, one might expect them to release the first commercial devices, and in fact this was so. In March 1960, Texas Instruments announced the introduction of the earliest product line of integrated logic circuits, under its trade name Solid Circuits. This family, called the Series 51, utilized a modified DCTL circuit, and its SN510 and SN514 were the first integrated circuits to orbit the Earth, aboard the IMP satellite launched by the US on 27 November 1963. Fairchild’s prototype chips were announced in November 1960, and the company introduced its first commercial integrated circuit, a flip-flop (the basic storage element in computer logic), the same device Dummer had modeled a decade earlier, at an industry convention in New York in March 1961.

Soon other firms began to develop ICs, e.g. Motorola and Signetics, which announced their first chips in 1962.

The First Electronic Handheld Calculator, invented at Texas Instruments in 1967 by Jack Kilby, Jerry Merryman, and James Van Tassel (Courtesy of Texas Instruments)

The integrated circuit first won a place in the military market through programs such as the first computer using silicon chips, built for the Air Force in 1961, and the Minuteman missile in 1962. Recognizing the need for a “demonstration product” to speed widespread use of the IC, Patrick Haggerty, former TI chairman, challenged Kilby to design a calculator as powerful as the large electro-mechanical desktop models of the day, but small enough to fit in a coat pocket. In 1965, Kilby was put in charge of a team developing the world’s first pocket calculator, made feasible by the microchip. Within a year Kilby and his colleagues Merryman and Van Tassel had a working prototype, and a year later they filed for a patent. The resulting device, the world’s first electronic hand-held calculator (see the nearby photo), of which Kilby is a co-inventor, successfully commercialized the integrated circuit. The so-called Pocketronic was launched on 14 April 1971, weighed a little over 1 kg, cost $150, and could only perform the four basic arithmetical operations. Displaying the output remained a problem: light-emitting diode (LED) technology, which later became the standard for calculator displays, was not yet advanced enough to use. So Kilby invented a new thermal printer with a low-power printing head that pressed the paper readout against a heated digit.

Gary Starkweather (laser printer)

What you have to do is not just look at the marble. You have to see the angel in the marble.
Gary Starkweather

Chester Carlson (1906-1968)

In 1938 the American physicist and inventor Chester Carlson (1906-1968) (see the nearby image) invented a dry printing process, later called xerography (the word comes from the Greek for “dry writing”), the foundation technology for the copiers and laser printers to come. Carlson applied for a patent in 1939, and in 1942 the patent was granted (US patent Nr. 2297691). After several years of unsuccessful attempts to interest companies in his invention, in 1947 Carlson succeeded in licensing the commercial rights to the Haloid Company (later renamed Xerox). This was the deal of a lifetime, not only for Carlson but also for the then completely unknown Haloid, which owing to this invention would become one of the biggest companies in the world.

In 1967 Gary Keith Starkweather (9 Jan 1938–26 Dec 2019), a young researcher at Xerox’s Webster Research Center in Rochester, with a B.S. in Physics from Michigan State University (1960) and an M.S. in Optics from the University of Rochester (1966), was sitting in his lab looking at all of those big mainframes when he started thinking: what if, instead of copying someone else’s original, which is what a facsimile does, we used a computer to generate the original?

Gary was hired by Xerox as a junior engineer in 1964, several years after the company had introduced the photocopier to American offices, and he began working on a version that could transmit information between two distant copiers so that a person could scan a document in one place and send a copy to someone else in another. He decided that this could best be done with the precision of a laser, another recent invention, which can use amplified light to transfer images onto paper. But then he had a better idea: Rather than sending grainy images of paper documents from place to place, what if he used the precision of a laser to print more refined images straight from a computer? And so the idea of the laser printer was born.

At the time lasers were rather expensive devices, but, convinced that their cost would drop over time and that there was a market for laser printing technology, Starkweather stuck to his guns. His ideas, however, met major resistance from Xerox management.

Gary Starkweather in the early 1970s with a version of the laser printer

Starkweather was told by his bosses to stop working on the laser printer project. But he couldn’t. He had to go through with this idea. He ended up working on it covertly, convincing people to get different parts for him so he could build it. The prototype was ready in 1969, built by modifying an existing xerographic copier: Starkweather disabled the imaging system and created a spinning drum with 8 mirrored sides and a laser focused on it. Light from the laser would bounce off the spinning drum, sweeping across the page as it traveled through the copier. The hardware was completed in just a week or two, but the computer interface and software took almost 3 months to complete.
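The throughput of such a polygon scanner follows directly from its geometry: each mirrored facet sweeps one scan line across the page, so an 8-sided drum paints 8 lines per revolution. A rough sketch of the arithmetic (only the 8 facets come from the text; the rotation speed and print resolution are hypothetical round numbers for illustration):

```python
facets = 8            # mirrored sides on Starkweather's drum (from the text)
rpm = 10000           # hypothetical drum speed, for illustration only
lines_per_second = facets * rpm / 60.0   # ~1333 scan lines per second

dpi = 300             # hypothetical print resolution, for illustration only
page_height_in = 11
lines_per_page = dpi * page_height_in    # 3300 scan lines per US-letter page
seconds_per_page = lines_per_page / lines_per_second  # ~2.5 s per page
```

The same trade-off still governs laser printers today: spinning the polygon faster, or adding facets, raises the line rate and thus pages per minute at a given resolution.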

Time has shown that Xerox management’s skepticism was misplaced: printers are now a pillar of the company’s growth strategy. Indeed, Starkweather’s drive to create the laser printer eventually transformed a small copier company into one of the world’s imaging powerhouses and revolutionized the computer printing industry.

Salvation for Starkweather came in 1970, when Xerox built the Palo Alto Research Center (PARC) in California. Starkweather called PARC and was welcomed; his project was a natural fit for their long-range plans.

Out of hostile territory and finally given the freedom to conduct his research without fear of retribution, Starkweather went to work on building the laser printer. In 1971, just nine months after joining PARC, Starkweather completed the first working laser printer.

He named this machine SLOT, an acronym for Scanned Laser Output Terminal. The digital control system and character generator for the printer were developed by Butler Lampson and Ronald Rider in 1972. The combined efforts resulted in a printer named EARS (Ethernet, Alto, Research character generator, Scanned laser output terminal). The EARS printer was used with the Alto computer system network and subsequently became the Xerox 9700 laser printing system.

Gradually things took off, and by 1973 Starkweather’s group had working models at the facility. The final result, the Xerox 9700 (see the lower image), introduced in 1977, was one of the industry’s first commercial laser printers. It was a wild success, despite projections that few customers would produce the 200,000 to 300,000 prints per month needed for the unit to be profitable.

Xerox 9700 high-speed laser printer, sparking a revolution (operator not included 🙂

Fresh off the success of the 9700, Starkweather shifted his research to personal laser printers and again ran into opposition from Xerox, a company that favored large, fast laser printers and saw departmental units as the profit center for laser printer technology.

Xerox failed to connect the dots and realize that the profit wasn’t in the printer but in the toner and the paper. As a result, the company was beaten to market by Hewlett-Packard, which introduced the first personal laser printer, the LaserJet, in 1984.

A characteristic of Xerox was that it always encouraged new ideas but never really liked to pursue them for very long. PostScript, the laser printer, the personal computer, the bitmapped screen, the iconic interface, Ethernet, and packet switching all came out of PARC, and none of it ended up as a Xerox product.

Starkweather did see the writing on the wall at Xerox, however, and left the company in 1987 after 24 years of service. Following a 10-year stint at Apple Computer, Starkweather joined Microsoft Research in 1997. Later, his main area of research became display technology.

Biography of Gary Starkweather

Gary Starkweather (1938-2019)

Gary Keith Starkweather was born on 9 Jan. 1938, in Lansing, Michigan, the only child of Richard J. (1911-1965) and Crystal M. Starkweather (1912-1987). Richard owned a local dairy, and Crystal was a homemaker. Their home was near a junk shop, where Gary would bargain for old radios, washing machines, and car parts that he could tinker with in the basement, taking them apart and then putting them back together. “As long as I didn’t blow up the house, I was allowed to do whatever I wanted down there,” he said in a 2010 interview.

While studying physics at Michigan State University, Gary met Joyce Attard, a nursing student two years behind him. They married in 1961 and moved to Rochester so that he could join Bausch & Lomb, which at the time made lenses for eyeglasses, cameras, microscopes, and other equipment. Soon they had a daughter, Amy Beth, and a son, Keith David.

After several of his colleagues were laid off, they moved to Xerox, and in 1964 Gary followed them. His move to PARC came after he read about the lab in the company’s newsletter. After visiting PARC in 1970, he phoned his wife in cold Rochester and asked how she felt about moving to sunny Palo Alto. Her response, he recalled, was, “I’ll have the furniture in the street by the time you get home.”

As Gary developed the laser printer, his new colleagues built a personal computer that could drive it: the Alto, a machine that eventually gave rise to the Apple Macintosh and Microsoft Windows PCs. Gary made it possible to take the information on the screen and put it onto paper.

By the mid-1970s, Gary’s printer could plug into an entire network of Altos, printing documents from across the lab at a rate of a page a second. After the lab split into two buildings, he and a colleague built a system that could transmit print jobs across the street wirelessly. It was, in many ways, a blueprint for the office of today.

After leaving Xerox, Gary moved to the two biggest companies of the computer age: Apple (where he invented color management technology and led the development of ColorSync, a set of color management application programming interfaces) and then Microsoft, where he constructed a wall-sized multipanel display.

Gary Starkweather embraced and often cited Einstein’s observation that imagination is more important than knowledge, and his career and contributions are shining exhibits of the power of imagination and of the ingenuity to realize his visions. Of the numerous honors he received, he was most proud of his induction into the National Inventors Hall of Fame in 2012. He shared a Scientific and Engineering Award (an Oscar) from the Academy of Motion Picture Arts and Sciences in 1994 for work on film scanning for Lucasfilm’s Star Wars. He received major awards from the Optical Society of America and the Society for Information Display, which made him a fellow in 2003, and he was elected to the NAE in 2004.

A quietly religious man, Gary Starkweather died on 26 Dec. 2019 at a hospital in Orlando, Florida; the cause was leukemia.

James Russell (compact disk)

We live as we dream—alone…
Joseph Conrad

James T. Russell (born in 1931 in Bremerton, Washington)

The first workable digital compact disc device, the precursor to the now-ubiquitous CDs and DVDs, was invented in the late 1960s by the American physicist James Russell, who, frustrated with the wear and tear of his vinyl records and their poor sound quality, set out to improve the record player.

James T. Russell was born in Bremerton, Washington on 30 March 1931. Despite having been diagnosed with dyslexia, James was always a smart boy, and at the age of six he devised a remote-control battleship with a storage compartment for his lunch (obviously the young James enjoyed his food 🙂).

In 1953, Russell earned his bachelor’s degree in physics from Reed College in Portland. Afterward, he went to work as a physicist in General Electric’s nearby labs in Richland, where his wife Barbara worked as a chemist. At GE, working for the Hanford Nuclear Plant as a designated problem-solver for GE’s experimental unit, Russell initiated many experimental instrumentation projects. He was among the first to use a color TV screen and keyboard as the main interface between computer and operator, and he also designed and built the first electron beam welder.

In the 1950s and early 1960s, Russell, an avid listener of classical music (Beethoven, Chopin, Mussorgsky, Offenbach, etc.), grew quite frustrated with the wear and tear of his vinyl records and their poor sound quality and tried to improve the record player. Initially, he tried using a cactus needle instead of a steel one for a stylus, but with no success. “After each record, you had to resharpen the needle,” he recalled.

Alone at home on a Saturday afternoon, he suddenly realized that the wear and tear on the records, caused by the contact between the stylus and the record, could be avoided by using light to read the music without physically touching the disk. Moreover, as he was familiar with digital data (in punch card or magnetic tape form), he could accomplish this task digitally, more efficiently and effectively. He realized that if he could make the binary code compact enough, he could store not only symphonies but entire encyclopedias on a small piece of film. “I realized that if I wanted to store music, there weren’t enough bits on the conventional digital tape around at the time,” he said. “So I came up with the optical process.”

In 1965, the Ohio-based Battelle Memorial Institute opened its Pacific Northwest Laboratory in Richland, to take over management of Hanford’s lab, and James Russell joined the effort as Senior Scientist. Thus he gained an audience for his more far-fetched ideas and immediately began to pepper Battelle with proposals for new commercial concepts. The optical digital technology was initially met with skepticism, as it was not believed that one could digitize sound. “Here I was at Battelle, enmeshed in the scientific community, and one of the first things I had to demonstrate was that you could digitize music and reproduce it,” he said. “Music into numbers? Come on now, Russell.”

James Russell with a CD polycarbonate base

Battelle eventually let Russell pursue the project, and after years of work he succeeded in inventing the first digital-to-optical recording and playback system (Russell’s earliest patent, US3501586, was filed in 1966 and granted in 1970). Russell had found a way to record onto a photosensitive platter in tiny “bits” of light and dark, each one micron in diameter. A laser read the tiny pits (binary patterns), and a computer converted the data into an electrical signal, which was then comparatively simple to convert into an audible or visible transmission.

Through the 1970s, Russell continued to refine the technology, adapting it to any form of data. However, like many ideas far ahead of their time, it found few interested investors at first. In 1971, Eli Solomon Jacobs, a New York venture capitalist, pioneered commercialization by forming Digital Recording Corporation to further develop the product for the consumer video market, hiring Russell and a team of technicians to come up with a video disk. Their efforts led to a 20-minute video disc in 1973.

“The vision I had in mind was of television programs on little plastic records. The networks, instead of putting programs on television, would print records. And if you wanted to watch your favorite programs you’d get them in the mail and put in the disk whenever you want,” Russell said. “Jacobs thought, if we can do it, hey great, we’ve got the whole world by the tail. And if we can’t, well at least you know where you are.”

In 1974 Digital Recording Corporation announced an optical digital television recording and playback machine, the first device to digitize a color image, at a Chicago trade show. The response from large potential investors was rather cool.

SONY CDP-101 was the first commercialized CD player

Philips Electronics representatives visited Russell’s Battelle lab in the summer of 1975, and they discounted the entire premise of his work. “They said: It’s all very well for data storage, but you can’t do that for video or audio,” recalled Russell. Philips had just released its laser disc, an analog optical video player, and they were convinced that analog was the only way. “Philips put $60 million into the development of the laserdisc. We were advised that nobody would tell them they had made a mistake.”

Sony launched its CDP-101, the first commercialized CD player, in 1982. Sony and Philips paid royalties from CD player sales to Battelle and to the Optical Recording Corporation. Time Warner and other disc manufacturers settled with the Optical Recording Corporation in 1992, paying $30 million for patent infringement. The court determined that Optical Recording had the sole rights over the technology mentioned in the patents. But because the patents properly belonged to Russell’s employer, he never got a cent out of either deal.

By 1985, Russell had earned 26 patents for CD-ROM technology. He then founded his own consulting firm, where he continued to create and patent improvements in optical storage systems, along with bar code scanners, liquid crystal shutters, and other industrial optical instruments. His most revolutionary later invention is a high-speed optical data recorder and player with no moving parts; Russell earned another 11 patents for this “Optical Random Access Memory” device.

James Russell with the 2002 George Stibitz Award

James Russell has more good ideas before breakfast than most people have in a lifetime; he has originated more than 50 patents. He also dreams big: he would end suburban sprawl by building a hive-like linear city of lots stacked a half-mile into the sky, looking out onto roadless countryside. Trains running on superconducting rails would flit through tunnels under the city, and bikers, pedestrians, and drivers could hop on and off at each station. Nice, huh?

David Noble (floppy disk)

Why, sometimes I’ve believed as many as six impossible things before breakfast.
Alice in Wonderland, Lewis Carroll

David L. Noble (1918-2004)

In the 1960s, the engineers at IBM who had developed the RAMAC computer and the early disk drives understood the significance of small, removable disk memory. In 1967, IBM tasked its San Jose, California, storage development center with developing a simple and inexpensive system for loading microcode into System/370 mainframes in a process called Initial Control Program Load (ICPL). Usually this job would be done with the tape drives that almost all 370 systems included, but tapes were large, unwieldy, and slow. In fact, the floppy disk was initially designed for loading microcode into the controller for the “Merlin” (IBM 3330) disk pack file. IBM also wanted something faster and lighter that could be sent to customers with software updates for some $5.

IBM 23FD “Minnow” flexible disk drive (prototype ca. 1970)

At the end of 1967, IBM Direct Access Storage Product Manager Alan Shugart, an American engineer, entrepreneur, and business executive whose career defined the modern computer disk drive industry, assigned the job to a small team of engineers under the leadership of the experienced David L. Noble. During 1968, Noble’s team experimented with tape cartridges, RCA 45-rpm records, dictating belts, and a grooved magnetic disk developed by Telefunken, but finally created its own solution: the floppy disk (which the team actually called the “memory disk”). It was a plastic disk 8 inches in diameter and 1.5 mm thick, coated on one side with iron oxide, attached to a foam pad, and designed to rotate on a turntable driven by an idler wheel. The first floppies were bare, but they got dirty easily, so the team packaged them in slim but durable envelopes equipped with an innovative dust-wiping element, making them easy to handle and store. In March 1970, IBM applied for a patent, and U.S. Patent No. 3,678,481 was granted in 1973. IBM introduced the read-only, 80-kilobyte diskette commercially in 1971 under the name Minnow and shipped it as the 23FD, a standard part of System/370 processing units and other products.

A read-only magnetic head was moved over the disk by solenoids and read data from tracks prerecorded on the disk at a density of 1100 bits per inch. The disk was “hard-sectored,” with 8 holes around the center to mark the beginning of data sectors. At first its capacity was only 81.6 KB, but by February 1969 Noble had doubled the thickness of the plastic base to 3 mm and coated both sides to add more capacity. In June 1969 the Minnow was added to the IBM System/370, and it soon began to be used by other divisions of IBM. In 1970 the name was changed to Igar, and Noble had a staff of 25 engineers to help him make improvements. By 1971, Igar had become the 33FD disk drive, and the 8-inch floppy disk became the Type 1 diskette. The speed was 360 rpm, with a head access time of 50 milliseconds. The 8 hard-sector holes were replaced by a single index hole for “soft sectors” (“IBM sectoring”) across 77 tracks. In 1976 the 43FD disk drive was sold with dual heads to read and write both sides of the diskette, and a new model, the 53FD, used modified frequency modulation to record double-density on both sides, resulting in a capacity of 1200 KB.

SA-400 with 5-inch diskette

In 1969, Alan Shugart left IBM and moved to Memorex, where in 1972 his team shipped the Memorex 650, the first read-write floppy disk drive, with a data capacity of 175 KB. Shugart later established his own company, and in 1975 Shugart Associates produced an 800 KB 8-inch floppy drive that offered, for the first time, a low-cost drive for the emerging personal computer market. In 1976, Jim Adkisson and Don Massaro, Shugart engineers, sat down to lunch with a customer, An Wang of Wang Laboratories, who complained that the 8-inch drive was too big for the personal computer. When Adkisson asked what the size should be, Wang pointed to a napkin on the table and said, “About that size.” Adkisson returned to the Shugart lab with the napkin and designed the 5.25-inch floppy drive, introduced in December 1976 as the model SA-400 (see the nearby photo), with a capacity of 110 KB (single-sided, single-density) and a price of $390. The model became one of Shugart’s best sellers, with shipments rising to 4000 drives per day. The company turned to Matsushita in Japan to help make the drives, starting that company on its rise to become the largest floppy drive manufacturer in the world.

Sony developed a 3.5-inch floppy drive (see the nearby photo) by 1980 and began a two-year effort to make it the U.S. floppy disk standard. Its disks stored 438 KB of data, and later 720 KB (double-density) and 1.44 MB (high-density). Sony declared that its new drive was smaller, faster, and better protected (the disk had a hard-shell cartridge and an automatic shutter that closed over the recording surface when it was removed from the drive), and that it could fit in a shirt pocket. Initially there were several incompatible disk sizes (Hitachi’s 3-inch, Canon’s 3.8-inch, IBM’s 4-inch, etc.), but Sony’s soon became dominant. Sales of the 3.5-inch floppy surpassed the 5.25-inch version by 1989, and Japanese companies eventually drove most U.S. disk producers out of the market. For many years 3.5-inch floppy disks were as plentiful as plastic bottles and were the most common PC storage medium.

8-inch drive, 5¼ inch drive, 3½ inch drive (photo by Michael Holley, source: wikipedia.org)

By the early 1990s, increasing software size meant that large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Then the distribution of larger packages gradually shifted to CD-ROMs, DVDs, and online delivery. As important as they were, by the late nineties floppy disks were on their way out: rewritable CDs had been introduced that matched the floppy disk’s capabilities but were more reliable. It is generally held that 2011 is the year the floppy disk finally died, and 40 years is a very long lifespan for a technical gadget.

Interestingly, a Japanese inventor, Yoshiro Nakamatsu, claims to have invented core floppy disk technology in 1952 and to have later licensed 16 patents to IBM for the creation of the floppy disk; however, there is no reliable source to support these assertions.

Biography of David Noble

David L. Noble (1918-2004)

David L. Noble was born in Naugatuck, CT, on 16 July 1918. In 1940 he graduated with a B.E.E. in electrical engineering from Rensselaer Polytechnic Institute in Troy, NY, where he later taught. During WWII, Noble served as an officer in the U.S. Naval Reserve, working in cryptology, and was recognized for invaluable service to Navy code-breaking activities (like other computer pioneers, such as Alan Turing).

Following the war, Noble was an original staff member of Engineering Research Associates in St. Paul, MN, where he worked on special-purpose computers. In 1949 he was employed by the Remington Rand Laboratory for Advanced Research in Norwalk, CT, where he worked on automatic business machines. In 1956, Noble joined IBM Corporation in Poughkeepsie, NY, where he worked on magnetic ink character sensing equipment for banks and performed technical assignments related to high-speed data processors. From 1960 until his retirement in 1978, Noble worked at IBM’s Development Laboratory in San Jose, CA, where he was engaged in the development of magnetic recording, direct access disk files, and data distribution devices. In 1974, Noble received an IBM Outstanding Contribution Award for the development of the flexible floppy disk and its associated disk drive mechanism. In his retirement years, his attention turned to the study of artificial intelligence and the design of neural networks. A licensed amateur radio operator, Noble volunteered his time with the California Div. of Forestry VIP program and was a former member of Kiwanis.

In 1944 David Noble married Dorothy Beavers Noble (1920-2008), a teacher, born in St. Louis, MO. They had two daughters—Ann Noble Fielder, and Lynn J Noble. David Noble died on 25 April 2004 in his home in Monte Sereno, California.

Claude Shannon

I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
Claude Shannon

Claude Elwood Shannon (1916–2001)

Claude Elwood Shannon (1916–2001) was a famous American mathematician, electronic engineer, and geneticist, often called the father of information theory. An outstanding student, after receiving two bachelor’s degrees in 1936 (one in electrical engineering and one in mathematics) from the University of Michigan, he began graduate study at the Massachusetts Institute of Technology (MIT), where he obtained a master’s degree in electrical engineering and his Ph.D. in mathematics in 1940. While at MIT, he worked on Vannevar Bush‘s differential analyzer (a mechanical analog computer designed to solve differential equations by integration).

While studying the complicated circuits of the differential analyzer, Shannon saw that Boole’s concepts could be put to great use there. In a 1938 issue of the Transactions of the American Institute of Electrical Engineers, he published a paper drawn from his 1937 master’s thesis, A Symbolic Analysis of Relay and Switching Circuits. This paper earned Shannon the Alfred Noble Prize of the combined engineering societies of America in 1940. Some have called Shannon’s thesis possibly the most important, and also the most famous, master’s thesis of the century.

In his paper, Shannon proved that Boolean algebra and binary arithmetic could be used to simplify the arrangement of the electromechanical relays then used in telephone routing switches, then turned the concept upside down and proved that arrangements of relays could also be used to solve Boolean algebra problems. Exploiting this property of electrical switches to do logic is the basic concept that underlies all electronic digital computers. Shannon’s work became the foundation of practical digital circuit design when it became widely known among the electrical engineering community during and after WWII, and its theoretical rigor completely replaced the ad hoc methods that had previously prevailed.
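Shannon’s correspondence between switching networks and Boolean algebra can be sketched in a few lines (a toy illustration in modern code, not Shannon’s own notation): two switches in series conduct only if both are closed, which behaves like AND, while two switches in parallel conduct if either is closed, which behaves like OR.

```python
def series(a, b):
    """Current flows through two switches in series only if both are closed (AND)."""
    return a and b

def parallel(a, b):
    """Current flows through two switches in parallel if either is closed (OR)."""
    return a or b

# Any Boolean expression maps to a series/parallel switching network and
# vice versa. For example, XOR (the sum bit of a binary adder) can be built
# as (a AND NOT b) OR (NOT a AND b):
def xor(a, b):
    return parallel(series(a, not b), series(not a, b))

for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```

The `series`/`parallel` names are illustrative; the point is that wiring topology alone realizes the logic, which is exactly the equivalence Shannon’s thesis established.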

In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. At Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann. Shannon worked freely across disciplines and began to shape the ideas that would become information theory.

During WWII, Shannon worked on fire-control systems and cryptography at Bell Labs. In 1943, he came into contact with the famous British mathematician and cryptanalyst Alan Turing, who was then in Washington to share with the US Navy’s cryptanalytic service the methods used by the British Government Code and Cypher School to break German ciphers. Turing showed Shannon his seminal 1936 paper On Computable Numbers, with an Application to the Entscheidungsproblem, which defined what is now known as the universal Turing machine. The paper impressed Shannon, as many of its ideas were complementary to his own.

In 1948 Shannon published another seminal paper, A Mathematical Theory of Communication. In it, he defined the subject of information theory and proposed a linear schematic model of a communications system, which was a new idea: communication was then thought of as requiring electromagnetic waves to be sent down a wire, whereas Shannon showed that one could transmit pictures, words, sounds, etc. by sending a stream of 1s and 0s down a wire. Introducing the word bit in print for the first time, Shannon showed that adding extra bits to a signal allows transmission errors to be corrected. He was the one who saw that the binary digit was the fundamental element of all communication. That was really his discovery, and from it the whole communications revolution sprang.
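The idea that extra bits can correct transmission errors is easy to demonstrate with a toy code (a simple repetition code for illustration, far cruder than the efficient codes Shannon’s theory anticipates): send each bit three times and decode by majority vote, so any single flipped bit within a triple is corrected.

```python
def encode(bits):
    """Repeat each bit three times (a rate-1/3 repetition code)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each triple corrects any single flipped bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)

# Simulate channel noise: flip one bit in two different triples.
corrupted = sent[:]
corrupted[1] ^= 1   # error inside the first triple
corrupted[9] ^= 1   # error inside the last triple

assert decode(corrupted) == message   # both errors are corrected
```

The price of this reliability is redundancy (three channel bits per message bit); Shannon’s theorem shows far better trade-offs are possible, up to the channel capacity.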

The ideas in Shannon’s paper were soon picked up by communication engineers and mathematicians around the world. They were elaborated upon, extended, and complemented with new related ideas. The subject thrived and grew to become a well-rounded and exciting chapter in the annals of science.

Shannon with the electronic mouse Theseus

Shannon’s later work looked at ideas in artificial intelligence. In 1950 he published a groundbreaking paper on computer chess, entitled Programming a Computer for Playing Chess, which led to the first full game played by the Los Alamos MANIAC computer in 1956. In the same year, 1950, Shannon created the electronic mouse Theseus (see the nearby photo), which could solve maze problems. It was a magnetic mouse controlled by a relay circuit that enabled it to move around a maze of 25 squares; the maze configuration was flexible and could be modified at will. The mouse was designed to search through the corridors until it found the target. Having traveled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it would go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then proceed to the target, adding the new knowledge to its memory, thus learning. Shannon’s mouse appears to have been the first learning device of its kind.

Shannon also applied his inventive genius to other areas, e.g. building a two-seater version of his beloved unicycle, which probably no one was anxious to share with him. A later invention, a unicycle with an off-center hub, would bring people out into the corridors to watch him ride it, bobbing up and down like a duck.

John von Neumann

Coincidence is God’s way of remaining anonymous.
Albert Einstein

John Louis von Neumann (1903-1957)

The famous mathematician John Louis von Neumann (1903-1957) was born into a prosperous Jewish family in Budapest, in the Austro-Hungarian Empire, as János Lajos Neumann. A child prodigy, János received his Ph.D. in mathematics from Pázmány Péter University in Budapest at the age of 22, simultaneously earning a diploma in chemical engineering from ETH Zurich in Switzerland. Between 1926 and 1930, he taught as a Privatdozent at the University of Berlin, the youngest in its history. By age 25, he had already published a dozen major papers.

John von Neumann emigrated to the United States in 1930, just in time to escape the Nazis. He was invited to Princeton University and subsequently became one of the first four people selected for the faculty of the Institute for Advanced Study (two of the others being Albert Einstein and Kurt Gödel!), where he remained a mathematics professor from its formation in 1933 until his death.

Von Neumann was an important figure in computer science. The use of memory in digital computers to store both sequences of instructions and data was a breakthrough to which von Neumann made major contributions.

In June 1945, while consulting for the Moore School of Electrical Engineering on the EDVAC project, von Neumann wrote an incomplete set of notes titled First Draft of a Report on the EDVAC. This widely distributed paper laid the foundations of a computer architecture in which the data and the program are both stored in the computer’s memory in the same address space, later known as the von Neumann architecture (see the lower drawing). This architecture became the de facto standard for a long time and is still in use today.

A drawing of von Neumann Architecture
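The essence of the stored-program idea can be sketched as a toy machine (a hypothetical three-field instruction set, not the EDVAC’s): program and data occupy the same memory array, and the machine simply fetches whatever the program counter points at.

```python
# A toy von Neumann machine: one memory array holds both the program
# and the data it operates on. (Hypothetical instruction format.)
def run(memory):
    pc = 0  # program counter: address of the next instruction
    while True:
        op, a, b = memory[pc]            # fetch and decode
        pc += 1
        if op == "LOADI":                # memory[a] = immediate value b
            memory[a] = b
        elif op == "ADD":                # memory[a] = memory[a] + memory[b]
            memory[a] = memory[a] + memory[b]
        elif op == "HALT":
            return memory

# The program occupies cells 0-3; cells 4-5 serve as data.
memory = [
    ("LOADI", 4, 2),    # mem[4] = 2
    ("LOADI", 5, 40),   # mem[5] = 40
    ("ADD",   4, 5),    # mem[4] = mem[4] + mem[5]
    ("HALT",  0, 0),
    0, 0,               # data cells
]
print(run(memory)[4])   # → 42
```

Because instructions are just memory contents, a program could in principle read or even rewrite its own code, which is what distinguishes this design from earlier machines with fixed or separately stored programs.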

John von Neumann also created the field of cellular automata without the aid of computers, constructing the first self-replicating automata with pencil and graph paper. The concept of a universal constructor was fleshed out in his posthumous work Theory of Self-Reproducing Automata. Von Neumann argued that the most effective way of performing truly large-scale operations, such as mining an entire planet or asteroid belt, would be to use self-replicating machines, taking advantage of their exponential growth.

John von Neumann is credited with at least one contribution to the study of algorithms: the renowned computer scientist Donald Knuth cites von Neumann as the inventor, in 1945, of the merge sort algorithm, in which the first and second halves of an array are each sorted recursively and then merged together. Von Neumann’s algorithm for simulating a fair coin with a biased coin is also used in the software whitening stage of some hardware random number generators.
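Both ideas are compact enough to sketch (in modern Python, not von Neumann’s original formulations): merge sort recursively sorts each half and merges the results, and the fair-coin trick flips the biased coin twice, keeping the first result only when the two flips differ, since the outcomes heads-tails and tails-heads are equally likely for any fixed bias.

```python
import random

def merge_sort(a):
    """Von Neumann's 1945 scheme: sort each half recursively, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail

def fair_flip(biased_flip):
    """Von Neumann's extractor: flip twice; keep the first flip if the two
    differ, otherwise discard the pair and retry. The output is unbiased."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a

assert merge_sort([5, 1, 4, 2, 3]) == [1, 2, 3, 4, 5]
biased = lambda: random.random() < 0.9   # a coin that lands True 90% of the time
print(fair_flip(biased))                  # True or False, each with probability 1/2
```

The extractor discards pairs of equal flips, so it trades throughput for unbiasedness, which is precisely why it appears in the whitening stage of hardware random number generators.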

In 1956 von Neumann wrote the (posthumously published) book The Computer and the Brain, in which he discusses how the brain can be viewed as a computing machine. The book is speculative in nature, but it discusses several important differences between brains and the computers of his day (such as processing speed and parallelism) and suggests directions for future research. Memory is one of the central themes of the book.