It’s easy to take for granted the incredible density and speed of data storage we have today. A tiny USB stick can hold terabytes of information, and cloud services offer seemingly infinite storage at our fingertips. But cast your mind back to 1966, and the world of data was a vastly different, and much more ponderous, place.
Recently, a statement circulated suggesting that “in 1966, storing 5 megabytes of data required 62,500 punched cards and took four days to process.” While the idea captures a sense of the challenges of early computing, a closer look reveals that the reality was even more… card-intensive and time-consuming than that!
Let’s break down why this statement, while directionally correct in highlighting the limitations, doesn’t quite add up.
The Mighty (and Minimal) Punched Card
Imagine a time when data wasn’t stored in silicon chips, but in precisely punched holes on sturdy paper cards. This was the domain of the punched card. A standard IBM 80-column card could, in theory, store 80 characters. Assuming each character represented one byte of data, that’s 80 bytes per card.
So, if we wanted to store a whopping 5 megabytes (which is 5,000,000 bytes), the calculation for the number of cards would indeed be: 5,000,000 bytes ÷ 80 bytes per card = 62,500 cards.
So far, so good for the first part of the statement! This stack of 62,500 cards would have been quite a sight. To give you an idea of scale, if each card was about 0.007 inches thick, that stack would have been over 36 feet tall! Imagine the physical space needed just for storage.
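If you'd like to check the arithmetic yourself, here's a quick back-of-the-envelope sketch in Python, using only the figures above (80 bytes per card, 0.007 inches of thickness per card):

```python
# Back-of-the-envelope math for storing 5 MB on 80-column punched cards.
DATA_BYTES = 5_000_000       # 5 megabytes (decimal definition)
BYTES_PER_CARD = 80          # one character per column, 80 columns per card
CARD_THICKNESS_IN = 0.007    # typical card thickness, in inches

cards_needed = DATA_BYTES // BYTES_PER_CARD
stack_height_ft = cards_needed * CARD_THICKNESS_IN / 12

print(f"Cards needed: {cards_needed:,}")            # 62,500
print(f"Stack height: {stack_height_ft:.1f} feet")  # ~36.5
```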
The Real Bottleneck: Processing Power
Now, let’s turn to the claim of “four days to process.” This is where the numbers diverge significantly from the statement, highlighting just how different computing was.
In 1966, even the most advanced computers, like the formidable IBM System/360 series, ran at speeds that look glacial next to a modern smartphone. The physical act of reading those 62,500 punched cards was itself a bottleneck: a very fast card reader of the era might handle around 1,000 cards per minute.
So, just to read the data, you’re looking at: 62,500 cards ÷ 1,000 cards per minute = 62.5 minutes.
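The same quick-check approach works here, taking the 1,000-cards-per-minute reader speed as given from above:

```python
# Time just to read a 62,500-card deck on a fast 1966-era card reader.
CARDS = 62_500
READER_CARDS_PER_MIN = 1_000   # a very fast reader for the era

read_minutes = CARDS / READER_CARDS_PER_MIN
print(f"Read time: {read_minutes:.1f} minutes "
      f"(~{read_minutes / 60:.1f} hours)")  # 62.5 minutes, ~1.0 hours
```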
That’s just over an hour of continuous card reading! But this is purely I/O (input/output). It doesn’t account for the actual processing of the data. Five megabytes, by 1966 standards, was an enormous dataset. Think of a computer with a tiny fraction of the RAM we have today, running programs written in assembly language or Fortran, and performing complex calculations.
Processing such a massive amount of data, even with relatively simple operations across all 5 MB, could easily have taken weeks, if not months, of dedicated machine time. Factors like program complexity, available memory, and the type of computations would all have stretched that “four days” into a much longer ordeal, and computer time was incredibly expensive and precious. To get a feel for how quickly the hours pile up, consider the sketch below.
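Take sorting as one concrete illustration. A mechanical card sorter performed a radix sort: one pass through the entire deck per key column. The figures below are illustrative assumptions on my part (a 10-column key, a sorter running at roughly reader speed, a few minutes of manual handling per pass), not measurements of any particular 1966 installation:

```python
# Estimated machine time to sort 62,500 cards on a mechanical card sorter.
# A card sorter radix-sorts: one full pass through the deck per key column.
CARDS = 62_500
KEY_COLUMNS = 10              # assume a 10-column (10-digit) sort key
SORTER_CARDS_PER_MIN = 1_000  # assume roughly reader speed
HANDLING_MIN_PER_PASS = 5     # assume time to gather and restack the pockets

minutes_per_pass = CARDS / SORTER_CARDS_PER_MIN + HANDLING_MIN_PER_PASS
total_hours = KEY_COLUMNS * minutes_per_pass / 60
print(f"Sort time: about {total_hours:.1f} hours")  # ~11.3 hours
```

And that’s a single sort of a single deck. Real jobs chained many such steps together, queued behind everyone else’s work on a machine billed by the hour.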
The Dawn of More Efficient Storage
While punched cards were ubiquitous for data input and smaller datasets, larger data storage in 1966 often relied on magnetic tape drives.
A single reel of magnetic tape could hold tens of megabytes of data, far surpassing the capacity of punched cards and offering much faster read/write speeds. These massive tape drives, spinning their reels in climate-controlled computer rooms, were the workhorses for serious data storage and processing.
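As a rough sanity check on that capacity claim, assume a 2,400-foot reel recorded at 800 bytes per inch (a common density for 9-track tape of the period), and ignore the inter-record gaps that reduced real-world capacity:

```python
# Rough theoretical capacity of a 1960s magnetic tape reel.
REEL_FEET = 2_400       # a standard full-length reel
BYTES_PER_INCH = 800    # common 9-track recording density of the era

capacity_bytes = REEL_FEET * 12 * BYTES_PER_INCH
print(f"Capacity: ~{capacity_bytes / 1_000_000:.0f} MB")   # ~23 MB
print(f"Card equivalent: {capacity_bytes // 80:,} cards")  # 288,000
```

Even under these generous assumptions, a single reel held the equivalent of roughly 288,000 punched cards.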
Acknowledging Progress
The example, though slightly off in its specifics, serves as a powerful reminder of how far computing has come. The sheer physical footprint, the glacial speeds, and the meticulous manual handling of data are almost unfathomable in our age of instant access and miniature devices.
From stacks of cards taller than a house to pocket-sized drives, the journey of data storage is a testament to human ingenuity. It makes you wonder: what will data storage look like in another 50 years?