EDSAC Programming

Posted 08 Oct 2023

Programming a dinosaur: doing real work in only 512 words of memory.


EDSAC is often considered the first practical stored-program electronic computer. Earlier machines like the Harvard Mark I were electro-mechanical, using relays and other mechanical devices to do their work, and even the all-electronic ENIAC had to be rewired for each new problem. With EDSAC (Electronic Delay Storage Automatic Calculator), the combination of vacuum tubes for logic and mercury delay lines for storage made it much faster than any relay machine could be.

EDSAC was designed along the lines John von Neumann had proposed in his paper (which I mentioned in the “Origins” post). So it was a “stored program” computer, meaning that both data and instructions were stored in the same place, using the same devices: in this case, mercury delay-line memories (I’ll do another blog post on those, and how we got from there to spinning disks). Due to some issues with the machine’s construction, that memory was limited to 512 “words”.

In this first era of computing, the term “word” did not mean a fixed 16, 32, or 64 bits as it does now. Rather, a word was simply the size of a single memory location. In EDSAC, that word was 17 bits long. The highest bit was the sign bit, and you could combine two locations to get a 35-bit value (a “long word”). Note the optimization: 17 + 17 = 34, not 35. This was a trick of how the bits were stored in the delay line. There would normally be a one-bit “gap” between words, to make things more reliable. For a long word, that gap bit was turned into another data bit. It’s referred to as a “sandwich bit” in the EDSAC programming manual [1].
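To make the arithmetic concrete, here is a minimal Python sketch of the packing. It’s my own illustration, not how the hardware worked, and it assumes the first location holds the more significant half:

```python
# Illustrative sketch: two 17-bit words plus the one-bit "gap" between them
# (the "sandwich bit") carry a 35-bit long word. The layout here is an
# assumption for illustration, not taken from the EDSAC manual.
WORD_BITS = 17

def long_word(high: int, sandwich_bit: int, low: int) -> int:
    assert 0 <= high < 2**WORD_BITS and 0 <= low < 2**WORD_BITS
    assert sandwich_bit in (0, 1)
    return (high << (WORD_BITS + 1)) | (sandwich_bit << WORD_BITS) | low

value = long_word(0b1_0000_0000_0000_0000, 1, 0b0_0000_0000_0000_0001)
print(f"{value:035b}")   # 35 bits total: 17 + 1 + 17
```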

This manual, “The Preparation of Programs for an Electronic Digital Computer”, answered the then-new question of how an instruction like “add two numbers” could be represented in a memory location. Previous computers had hardwired logic for that function, and the operator would configure the machine so that data would flow into and out of an array of such hardwired components to produce the desired calculation. So the idea of a program, stored in memory, was new. The book’s principal author (Maurice Wilkes, who led the design of the machine) explained that each function was given a numerical code, and this code could be stored in memory.

The book gives the list of instructions (“orders”) that the machine could obey. There were 18 of these instructions, each with a unique numerical encoding (“order code”). Compare that to the x86 base set of 81 instructions, plus roughly 100 extensions, each of which adds still more instructions. Two parts of the EDSAC instruction set are worth looking at: conditional jumps and self-modifying code.

Conditional Jumps

Computers in this era were not much more than automated adding machines. These adding machines had grown in complexity and capability to the point where they became a computer, capable of much more than just arithmetic. Still, the earlier machines were designed to perform a single calculation (however intricate), like generating trajectory tables for the US Army, or predicting orbits for astronomers (e.g., IBM’s Astronomical Computing Bureau). With sufficient cleverness, these machines could re-use a set of instructions without having to repeat them in the program. One clever method was to use punched paper tape, with the ends joined together to form a loop. This was placed on a specific tape reader, and whenever those instructions were to be performed, the main program simply changed where it read from. The looped tape, one assumes, ended with the instructions to switch back to the first tape. A subroutine of sorts. Exactly the kind of prehistoric system that leads us, eventually, to the “stack” in modern computers.

EDSAC didn’t need this trick, because it had two instructions that could change which memory location it would read the next instruction from. These were, to modern eyes, pretty limited. One of them tested whether the word in the Accumulator was negative. If so, the machine would read its next instruction from the location named in the instruction itself. In EDSAC, this is written as:

G n

Where ‘n’ is the memory location to read from. The 512 words of storage were numbered from 0 to 511, so ‘n’ would be a number in that range.

To use this for executing a repeated chunk of code, the programmer first stored those instructions in a set of contiguous locations, perhaps starting at location 100. To implement what we would now call a ‘for’ loop, you’d put the number of repetitions in a memory location, then use the G instruction to branch to location 100. (This means, strangely, that the repetition count was a negative number, and would be incremented each time by the repeated code.) Once that snippet at location 100 had executed enough times, the counter reached zero, the G test would fail, and the control module of the computer would simply read from the next location after that instruction.
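Here is a minimal Python sketch of that counting pattern. The names are my own, standing in for the orders, and the `while` test plays the role of the G order:

```python
# Sketch of the EDSAC-style loop: keep the repetition count as a negative
# number, increment it each pass, and branch back while it is still negative
# (the test the G order performed in hardware).
def do_one_pass() -> None:
    pass   # stands in for the repeated snippet stored at location 100

def run_loop(repetitions: int) -> None:
    counter = -repetitions      # e.g. -50 to repeat 50 times
    while counter < 0:          # the "G n" test: branch back while negative
        do_one_pass()           # execute the repeated code
        counter += 1            # the repeated code steps the counter toward zero
    # counter reached zero: the test fails and control falls through

run_loop(50)
```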

This could be used in arbitrarily complex ways, although the small size of storage put a physical limit on how complex it could be. This limitation is why von Neumann specifically included the Input and Output modules in his model. Since instructions and data shared the small memory devices, designers and programmers (often the same people back then) had an incentive to keep the data to be operated on outside the memory (perhaps on paper tape or punch cards), and to move completed results out of memory onto an external device (perhaps punched onto a card, or printed onto paper). That way, the memory held only what the current program actually needed.

This sort of balancing of system devices (what we’d call “resources”) was initially a tedious manual process, but it sparked the desire to automate parts of the job, eventually resulting in the “operating system”. I’ll have another blog post (soon, I hope) on where that started, and how it developed into something recognizable in our modern computer systems.

Self-Modifying Code

Anyone who has taken a programming course has been told that self-modifying code is a Bad Thing (TM). Modern systems (mostly) don’t allow it, so it’s possible the reader has never come across it.

Since the EDSAC “orders” were encoded as numbers and stored in memory, it follows that any instruction which changes the value in a memory location is also capable of changing one of the instruction codes into a different instruction. As Wilkes said in his book:

“This facility of being able to modify the orders in the program by performing arithmetical operations on the numbers representing them is of great importance, and it is perhaps the feature most characteristic of program design for machines like the EDSAC. The operations required for this purpose are performed in the arithmetical unit in the same way as other arithmetical operations.” [2]

When Wilkes designed the encoding for each instruction, he did it in such a way that simple arithmetical operations would modify these instructions in a useful way. One example he gives: changing the instruction “A 50” (which adds the number in location 50 to the value in the Accumulator) into “A 51” can be done simply by adding 1 to the memory location where that instruction is stored. (The modified instruction then adds the value in location 51 to the current value of the Accumulator.)

This also brings up some more details on how instructions are formed. Modern computers store a single instruction in one or more bytes, depending on the architecture and on how complex the instruction is. In this first era of computing, the 17-bit word had to hold both the instruction code and the address of the data that instruction operates on. In later computers, like the DEC PDP-8, the programming manual would have a diagram of which bits in the word encoded the instruction, any flags for it, and the arguments or operands it acts on. In EDSAC, we see the first steps down this path.
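EDSAC’s order word is usually described as a 5-bit function code, an unused bit, a 10-bit address, and a final bit marking a short or long operand. Assuming that layout (and a made-up numeric code for the “A” order), here is a Python sketch of how packing the fields makes the “add 1 to the instruction” trick work:

```python
# Sketch with my own helper names; the field layout is the commonly described
# one (5-bit function code, 1 unused bit, 10-bit address, 1-bit short/long
# flag), and the numeric code for "A" below is a placeholder.
FUNC_SHIFT, ADDR_SHIFT = 12, 1

def encode(function: int, address: int, long_flag: int = 0) -> int:
    return (function << FUNC_SHIFT) | (address << ADDR_SHIFT) | long_flag

def decode(word: int) -> tuple[int, int, int]:
    return (word >> FUNC_SHIFT) & 0b11111, (word >> ADDR_SHIFT) & 0b11_1111_1111, word & 1

A = 28                              # placeholder numeric code for the "A" (add) order
a_50 = encode(A, 50)                # the stored form of "A 50"
a_51 = a_50 + (1 << ADDR_SHIFT)     # add "1" in the address field: now it is "A 51"
print(decode(a_51))                 # -> (28, 51, 0)
```

(Note that “adding 1” really means adding a constant whose 1 sits in the address field of the word.)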

Back to self-modifying code. Suppose you had to sum up a set of values, let’s say the ones in locations 100 through 149. With only the 18 primitive instructions available, a program written to read each of those locations and add it to the running sum would be fairly long. You’d have to repeat the same 3 or 4 instructions 50 times, which would use a lot of storage space. However, if you write a small loop (as we saw with the conditionals), and simply modify the “A n” instruction so that ‘n’ starts at 100 and ends at 149 by incrementing the “A” instruction on each pass, you can do that summation in less than 10 words. So instead of using something like 150 memory locations out of the 512 available, you can drastically shrink the space needed.
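To see the whole trick in one place, here is a toy simulation in Python. The machine and its opcodes are made up (they are not real EDSAC orders), and the jump tests the counter word directly rather than the Accumulator, but the program really does sum locations 100 through 149 by adding 1 to its own ADD instruction each time around:

```python
# A toy word-addressed machine: the program at locations 0..4 sums the data in
# locations 100..149 by incrementing the address field of its own ADD order.
ADDR_BITS = 10

def op(code: str, address: int = 0) -> int:
    """Encode a made-up order as opcode bits above a 10-bit address field."""
    return ({"ADD": 1, "INCR": 2, "JNEG": 3, "HALT": 4}[code] << ADDR_BITS) | address

memory = [0] * 512
memory[100:150] = range(1, 51)     # the data to sum: 1, 2, ..., 50
memory[99] = -50                   # the negative repetition counter
memory[0] = op("ADD", 100)         # accumulator += memory[100]  (this word gets modified!)
memory[1] = op("INCR", 0)          # memory[0] += 1: "ADD 100" becomes "ADD 101", and so on
memory[2] = op("INCR", 99)         # step the counter toward zero
memory[3] = op("JNEG", 0)          # if the counter is still negative, jump back to location 0
memory[4] = op("HALT")

accumulator, pc = 0, 0
while True:
    word = memory[pc]
    opcode, address = word >> ADDR_BITS, word & ((1 << ADDR_BITS) - 1)
    pc += 1
    if opcode == 1:                # ADD
        accumulator += memory[address]
    elif opcode == 2:              # INCR (self-modifying when it targets location 0)
        memory[address] += 1
    elif opcode == 3:              # JNEG: branch back while the counter is negative
        if memory[99] < 0:
            pc = address
    elif opcode == 4:              # HALT
        break

print(accumulator)                 # 1275 = 1 + 2 + ... + 50
```

Five words of program (plus a counter word) instead of 150 or so, which is exactly the saving described above.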

This is an example of how the limitations of the technology (the small storage devices) spurred the designers to invent new ways of calculating. And, in the process, start us down the path to the way modern computers are organized.

I also find it interesting that many of these design decisions were made for a very modern reason: they lowered costs. Wilkes talks about when to just write instructions out serially in memory, and when to use self-modifying code. There’s a trade-off, because the instructions used to modify an order take extra time. So you could save space, but that increased the time for a calculation to complete. This book (from 1951) may be the first time someone talked about the “time/space” trade-off that is the stock-in-trade of modern programmers. And it makes sense that this would be the first place it could be talked about, because no one previously had their programs stored in memory.

As I mentioned in my “Origins” post, I believe the first machine to have all the features we expect in the modern standard model of computers was the Burroughs B5000. That machine broke with tradition by using “high-level languages” to decide how the hardware should be designed. The B5000 had no assembly language. Its instruction set was the set of primitives needed to run an Algol or Fortran program most efficiently. This parallels what Wilkes did here: he was aiming to support scientific calculations, and created a small set of primitives that would most efficiently support those programs.

Thanks for reading!

The.Weaver (at) WovenMemories.net.

  1. Full text at Internet Archive 

  2. Wilkes, 1951, p. 9