It's tempting to think of today's progress in artificial intelligence as mainly a matter of solving logical and data-oriented problems. But for companies trying to innovate and keep moving forward, it can be helpful to step back and consider how ever more powerful hardware has also contributed to today's machine learning and artificial intelligence capabilities.
Some of the more obvious ways that Moore's law has benefited artificial intelligence are evident to anyone who has followed IT for the past 30 years. The first is that the centralized workstations and data centers that process artificial intelligence data sets are far smaller than they would have been in the earlier days of computing, and that makes a difference. If mainframes still took up the space of a washer/dryer set, that alone would have a damping effect on the agile development of all sorts of new technologies.
Much more importantly, the efficiency gains described by Moore's law have allowed for the prevalence of extremely small mobile data-collecting devices. Smartphones are the best example, but Moore's law also gave us digital cameras, MP3 players and many other small pieces of hardware, all collecting their own data at an astounding pace. Now the internet of things is supercharging that process with smart kitchen appliances and all sorts of other modern hardware that trades on the idea that chip-bearing devices are small enough to be placed in almost anything.
However, these are not the only ways that Moore's law has benefited the development of machine learning and artificial intelligence. In the MIT Technology Review, writer Tom Simonite asserts that Moore's law has also been useful as a kind of "coordination device," projecting what will come onto the market in future years and giving developers and others some semblance of a road map, with pointers toward future innovation.
Another interesting perspective comes from Niel Viljoen, who argues that Moore's law may still be critical to new cloud-based systems and to the emergence of brand-new artificial intelligence technology.
Viljoen's argument seems to be that adding general-purpose cores to scaling systems isn't enough to connect the hardware to a network in a comprehensive way, which leads to bottlenecks. A corresponding idea is that convergence models will speed up all sorts of functions in data-intensive systems. In other words, because computing systems kept scaling their data use according to what they could fit onto a piece of hardware, builders never got around to building in corollary functions such as image processing, encryption and video rendering.
As a result, modern data centers became very powerful but remained dependent on outside elements to do the required processing. Viljoen posits the future emergence of "systems on a chip," in which hyperconverged hardware has everything it needs to handle all of the networking functionality, streamlining data flows and making systems agile as well as data-powerful.
In general, Moore's law has aided IT advancement in fundamental ways, and it continues to do so. It's part of the "science fiction is the present" model that shows how far humanity has come in building data systems over the course of one or two centuries.