There has been a good deal written in the tech press about the fact that we have passed the 50th anniversary of “Moore’s Law” (one of the better articles is by Thomas Friedman, “Moore’s Law Turns 50” in the New York Times of May 19th). Most of the articles correctly indicate that the so-called Moore’s Law describes the exponential growth of computer power since Gordon Moore, one of the three founders of Intel, made the observation/prediction that came to bear his name and be promulgated as a “law.”
The Basics of Moore's Law
To start with some background — Moore’s Law is not a law as gravity is (irrefutable) or a traffic law (a suggestion enforceable by court action — fines, jail time, license suspension and/or probation). It is, rather, as stated above, a combination of an observation and a prediction. In Friedman's words, in April 1965, Gordon Moore,
“[T]hen the head of research for Fairchild Semiconductor and later one of the co-founders of Intel, was asked by Electronics Magazine to submit an article predicting what was going to happen to integrated circuits, the heart of computing, in the next 10 years. Studying the trend he’d seen in the previous few years, Moore predicted that every year we’d double the number of transistors that could fit on a single chip of silicon so you’d get twice as much computing power for only slightly more money. When that came true, in 1975, he modified his prediction to a doubling roughly every two years. "Moore’s Law" has essentially held up ever since — and, despite the skeptics, keeps chugging along, making it probably the most remarkable example ever of sustained exponential growth of a technology.”
Many writers skip over the heart of Moore’s Law — the constant miniaturization of electronic components that first allowed a single transistor on a chip, then multiple transistors, then tens, hundreds, thousands, tens of thousands, and so on — and simply write “double the speed of computers every two years” (now often stated as every 18 months). While doubling the number of transistors on a chip does roughly double its speed, it is helpful to understand the “why” of the result, because although processor speed is the underlying component of the increase in computer power over the 50 years, it is not the sole component.
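To get a feel for what “doubling every two years” compounds to, here is a quick arithmetic sketch. The 50-year span and the two-year doubling period come from the article; the rest is simple exponentiation.

```python
# How many doublings fit in 50 years at Moore's revised rate of one
# doubling every two years, and what total factor that implies.
years = 50
years_per_doubling = 2            # Moore's 1975 revision
doublings = years // years_per_doubling
factor = 2 ** doublings

print(doublings)                  # 25 doublings
print(f"{factor:,}")              # 33,554,432 — a ~33-million-fold increase
```

Twenty-five doublings already yields a factor of over 33 million from transistor counts alone, before storage, bandwidth, or paradigm shifts enter the picture.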
An Observation of My Own
“johnmac’s law” (not a real law, another observation) is:
Increase in computing power = f((increase in processor speed + improvements in storage + increase in telecommunications bandwidth) * power of paradigm shifts)

Or, more compactly:

CP = f((p + s + t) * PS)

f = function
* = multiplication
CP = computing power
p = processor speed
s = storage
t = telecommunications bandwidth
PS = paradigm shifts
(Note: The above is not meant to be a mathematical formula but rather function as a display tool.)
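Since the expression is meant as a display tool rather than real math, a toy rendering in code can still show the shape of the claim: hardware gains add, while a paradigm shift multiplies the whole sum. The numbers below are invented purely for illustration.

```python
# Toy rendering of "johnmac's law": (p + s + t) * PS.
# All input values here are made up to show the multiplicative effect
# of a paradigm shift; they are not measurements of anything.

def computing_power(p, s, t, ps):
    """Relative computing power: summed hardware gains scaled by paradigm shifts."""
    return (p + s + t) * ps

# Identical hardware gains, with and without a paradigm shift.
without_shift = computing_power(p=2.0, s=2.0, t=2.0, ps=1.0)
with_shift = computing_power(p=2.0, s=2.0, t=2.0, ps=10.0)
print(without_shift, with_shift)   # 6.0 60.0
```

The point of the multiplication is that a paradigm shift amplifies every hardware gain at once, rather than adding one more increment.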
Storage — Another benefit of miniaturization is that storage has become much, much smaller in size; much, much greater in capacity; much faster; and much, much cheaper. About 35 years ago, I bought a 10-million-byte Corvus hard drive (very large in capacity for its time). The drive was bigger than a desktop personal computer and cost me $5,500.00. Today, I wear a 128-billion-byte drive on a chain around my neck that cost me under $100.00. If prices had stayed the same, 1 billion bytes would cost $550,000.00 and the 128 GB drive around my neck would come in at over seventy million dollars ($70,400,000).
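The storage figures above can be checked with a few lines of arithmetic. The Corvus price and capacities are the article’s numbers; everything else follows from the per-byte price.

```python
# Checking the storage-price arithmetic from the Corvus example.
corvus_price = 5_500.00            # dollars, ~1980 Corvus drive
corvus_bytes = 10_000_000          # 10 million bytes

price_per_byte = corvus_price / corvus_bytes           # $0.00055 per byte
per_gb_1980 = price_per_byte * 1_000_000_000           # price of 1 GB at 1980 rates
necklace_128gb = price_per_byte * 128_000_000_000      # price of 128 GB at 1980 rates

print(f"${per_gb_1980:,.0f} per GB")        # $550,000 per GB
print(f"${necklace_128gb:,.0f} for 128 GB") # $70,400,000 for 128 GB
```

At under $100 for 128 GB today, that works out to a price drop of roughly 700,000-fold.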
Telecommunications Bandwidth — We have seen computer modems that started at 110 baud (about 10 characters per second) replaced by 300 baud and then 1200, 2400, 9600, 28800, and 56000 bits per second — each of which could handle only one user at a time — and finally replaced by fiber-optic and cable broadband access. Yet we are still behind much of the developed world in telecommunications speed, so we have a way to go.
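The scale of the modem-to-broadband jump is easy to compute. The modem rates are the article’s; the 100 Mbit/s broadband figure is an assumed typical fiber/cable rate, not taken from the article.

```python
# Rough scale of the modem-to-broadband jump.
modem_bps = [110, 300, 1200, 2400, 9600, 28800, 56000]
broadband_bps = 100_000_000        # assumed 100 Mbit/s connection

# At 110 baud a character took 11 bits on the wire
# (1 start bit + 8 data bits + 2 stop bits), so about 10 characters/second.
chars_per_sec_at_110 = 110 / 11
print(chars_per_sec_at_110)                  # 10.0

print(broadband_bps // modem_bps[-1])        # ~1,785x faster than a 56k modem
print(broadband_bps // modem_bps[0])         # ~909,090x faster than 110 baud
```

Even against the fastest dial-up modem, ordinary broadband is a gain of three orders of magnitude; against the earliest modems it is nearly six.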
Paradigm Shifts — This is the hardest of the components to quantify — it is not measurable in bytes, baud, or MIPS — yet it is one of the most important elements in the equation and is driven constantly by human creativity. This is not to say that the engineers developing smaller and faster chips aren’t creative, but they are working along a fairly straight line, while paradigm shifts are often caused by people thinking of new things to do with the smaller and faster computers, then writing programs to do those new things, which in turn cause people to buy computers almost solely for the new things.
Paradigm Shifts in Computing History
Some important events that have changed the course of technology:
- 1946: ENIAC Announced — The first working electronic computer was announced to the world. Designed initially to compute gunnery trajectories for World War II, it set the standard for later big-system implementations: it came in well over budget and quite late (after the war was over). Yet it was the first, and from there computers were refined, made somewhat smaller and somewhat less expensive, and spread throughout the government and business world. (When I began programming in 1962, only the government and very large businesses had computers; when smaller computers — “minicomputers” — came along, the use of computers spread, but generally only through the business world.)
- April 1973: The First Portable Phone Introduced — By Motorola’s Martin Cooper (as in the Mazda TV ad).
- 1974: The Altair Introduced — The first consumer microcomputer allowed hobbyists and later small businesses to enter the computing world. Its announcement also led to the founding of Micro-Soft (later Microsoft).
- 1977: A Banner Year — The Apple II, the first microcomputer with both a case and color graphics, and the Hayes Micromodem, which allowed microcomputers to communicate over telephone lines, brought many more computer users “into the game.”
- 1979: “VisiCalc” Introduced — The first electronic spreadsheet of any kind, initially available only for the Apple II, it spread computer use to the desktops of businesses throughout the US.
- 1981: IBM PC Introduced — “Big Blue’s” introduction of a microcomputer “legitimized” microcomputers for many holdout corporations.
- 1984: Macintosh Introduced — The ease and commonality of use across applications brought many design and publishing companies into the world of computing.
- 1993: Mosaic Introduced — The first graphical (and free) World Wide Web browser brought computers into the home and changed the industry by mandating that systems be based on a graphical user interface (a la Macintosh and Windows).
- 1993: Apple Newton Introduced — A handheld organizer with computer power, handwriting recognition and built-in utilities. Although it had limited distribution, it “started them thinking.”
- 1994: Amazon Founded — First announced as an online bookstore, Amazon became a mass online retailer, eliminating not only bookstores but also small retailers throughout the country.
- 1996: Palm Pilot Introduced — A low-cost handheld organizer had wide distribution.
- 1998: Google Formed — First formed as a search engine firm, the company soon moved in many directions, including mapping, operating systems, web browsers, cloud computing, hardware specifications, broadband service, “app development,” and driverless cars.
- 2001: iTunes and iPod Introduced — An online seller of music and an easy-to-use player device changed the whole music industry by reducing piracy and eliminating the “middleman” (the retail music stores) from the buying process.
- 2004: Facebook Launched — Facebook and Twitter (launched in 2006) brought us into the vibrant world of “social media.” In early 2015, Facebook had 1.44 billion active monthly users and Twitter had 236 million.
- 2007: iPhone Introduced — The prototype for all “smartphones” (including those based on the Android operating system). The iPhone also launched the worldwide “app” development industry.
- 2007: Kindle Introduced — Amazon’s e-book service changed the publishing industry in much the way that the iPod had changed the music industry.
So here we are — and I didn’t even get into “cloud computing,” which extends both our reach and our storage capability, allowing us to access our data from wherever we are on whatever device we are currently using. A very good graphic presentation of the difference the last 50 years have made, “Processing Power Compared,” states in its introduction, “We compared the processing power for various computers and devices from 1956 to 2015 to visualize the 1 trillion-fold increase in performance over those six decades.” That’s right, one trillion!
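A trillion-fold gain lines up neatly with the doubling story. Since 10^12 is about 2^40, roughly forty doublings over the 1956–2015 span implies one doubling about every 18 months — the popular restatement of Moore’s Law. The sketch below checks that arithmetic.

```python
# Sanity-checking "1 trillion-fold over six decades" against a doubling model.
import math

factor = 1_000_000_000_000           # 10**12, "one trillion"
doublings = math.log2(factor)        # how many doublings produce that factor
years = 2015 - 1956                  # the comparison's 59-year span
months_per_doubling = years * 12 / doublings

print(round(doublings, 2))           # 39.86 doublings
print(round(months_per_doubling, 1)) # 17.8 months per doubling
```

In other words, the measured trillion-fold gain is almost exactly what a doubling every year and a half would predict.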
What Does the Future Hold?
So, fifty years after Moore’s Law, where do we go from here? Author, researcher, and artificial intelligence guru Ray Kurzweil has been thinking about this for years and, after explaining the exponential growth of technology in a 2007 video, “The accelerating power of technology,” and then in a 2009 video, “The Coming Singularity,” he tells us where he thinks we are going — and he has a pretty good track record to date (Kurzweil’s website, “Kurzweil Accelerating Intelligence,” is a great resource for new developments in technology). Further, his 1999 book “The Age of Spiritual Machines” is worth the price just for the timeline of scientific development at the rear of the book, where he takes us from the “Big Bang” through the book’s present and on through his predictions to 2300. His massive 2005 tome, “The Singularity Is Near: When Humans Transcend Biology,” is available both in paperback and on Kindle — read it and find out what the technological singularity is and what it means for us!
The development of computing power, as one can see from the list above, has as much to do with thoughtful people finding new uses (spreadsheets, the World Wide Web, search engines, social media, etc.) for the hardware as it does with the brilliant engineers who developed that hardware.
As a final note — while I urge you to read Kurzweil and consider his view of the future, we should also remember the advice of the equally great Alan Kay, “The best way to predict the future is to invent it,” and try to make the future one that we would like to have.