In the 1986 film, "Star Trek IV: The Voyage Home," the Star Trek crew travels back in time to the 20th century to prevent humpback whales from becoming extinct (as part of a story line that's too involved to be explained here). In the course of their visit, Enterprise engineer Montgomery Scott ("Scotty") needs to use a computer to check something and, when told by a technician that he can use his machine, a Macintosh, Scotty brings the mouse up to his mouth and says "Computer." The 20th century characters look at him incredulously, which generally produces a laugh in the theater audience.
Now I walk along using an app on my iPhone that allows me to talk into the phone and email my message to whomever I wish. Moreover, my iPhone is very much like the "communicator" Kirk used to talk to the Enterprise while down on a hostile planet.
Kirk's use of a handheld wireless device to speak to someone thousands of miles away was science fiction just 26 years ago. Now it's commonplace. Science fiction often becomes reality. And usually, by the time it does, we are prepared to accept it. Here we'll take a look at some of the technologies that made the leap from fantasy to reality.
Suspension of Disbelief or Science Reality?
When I was very young, my first confrontation with science fiction was "Buck Rogers in the 25th Century," a weekly color comic strip in the Saturday edition of the now defunct New York Journal American. Buck, a 20th century pilot overcome by mine gases in his time, was miraculously preserved in suspended animation until he was awakened five centuries later and rapidly became the leader of the "good guys." In the series, Buck flew around in spaceships and went to distant planets. These were activities that I had never imagined!
In reading science fiction, I quickly learned to employ "suspension of disbelief," a term coined by the poet Samuel Taylor Coleridge in 1817. In order to be able to be immersed in the Buck Rogers story line – to enjoy the story – I had to accept that things that I knew were impossible, such as going to the moon and other planets, were reasonable. Of course, 20 years later, we were heading to the moon. And, the more I read, the more I realized that science fiction was, in many cases, a precursor to science reality.
It was much later that I found that the character Buck Rogers had actually come into being in 1928 – only 40 years before Neil Armstrong’s famous step onto the moon. He was created by Philip Francis Nowlan in a short story, "Armageddon 2419 A.D." in a magazine called Amazing Stories. A comic strip followed less than a year later and was so successful that a competitive spaceship adventure series, "Flash Gordon," appeared five years after that.
In fact, science fiction has been around since the mid-1800s, originating with French writer Jules Verne. (Some consider Verne to be the "father of science fiction," while others award that title to English author H.G. Wells or American magazine publisher Hugo Gernsback.) Verne wrote about air, space and underwater travel before airplanes, spaceships or submarines had been invented.
The Speed of Change
In those early days of science fiction, the lead time between fiction and fact was between 50 and 100 years. Now it is much shorter. In addition, the fiction and fact often intertwine in different ways than years ago. The early writers were speculating on what might happen. In some ways, some of the more recent writers are actually part of the development process.
Case in point: The late Arthur C. Clarke, who is most famous for "2001: A Space Odyssey," came up with the idea of using geostationary satellites as telecommunications relays; his 1945 paper proposing the concept set the stage for the worldwide system in place today.
Another writer whose ideas crossed into real-world engineering was Isaac Asimov, whose 1942 short story "Runaround" introduced the famous Three Laws of Robotics:

- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
Asimov later added another law, the "zeroth law," to precede the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
What is particularly interesting about Asimov’s laws is that they have not only been accepted throughout the science fiction world as the guiding rules for robotics, but they have also been referred to in the literature of major robotic development projects and referenced by major robotic theorists such as Carnegie Mellon’s Hans Moravec.
Literature Comes to Life
The lead time between fiction and fact has continued to shrink dramatically, consistent with the growth of the internet, which greatly accelerated scientific progress across the board. Author William Gibson was one of the first to see the potential of the internet. In his 1982 short story, "Burning Chrome," he coined the term "cyberspace," which has become the preferred description of the place out there where all internet content exists. Gibson, whose 1984 novel, "Neuromancer," provides a dark picture of a virtual world to come, is also considered the initiator of the cyberpunk literary genre.
Bruce Sterling’s 1988 novel "Islands in the Net" was also prophetic. It was written before the widespread use of the internet, the development of the world wide web, and the use of mobile communications, and yet the story was built around total wireless communication, "big data" stores, data piracy centered in Grenada and terrorist attacks. Talk about foresight! (Learn about how the world wide web came to be in The Pioneers of the World Wide Web.)
As the use of the internet was just about to pick up steam with the advent of the world wide web, Neal Stephenson’s 1992 novel, "Snow Crash," gave us an indication of what might come next – worlds of three-dimensional virtual reality. In 2003, Linden Lab brought Stephenson’s vision to reality with the introduction of "Second Life," a 3-D virtual platform intended for business, educational and recreational uses. In 2011, it had more than one million active users. Impressive, but still dwarfed by the 10.2 million people who played "World of Warcraft," a 3-D massively multiplayer online role-playing game. (Read more about video games in From Friendly to Fragging: A Beginner's Guide to Video Game Genres.)
We have come a long way since Jules Verne. Now, science fiction writers with Ph.D.s in hard science are not only writing novels, but also occasionally stepping beyond fiction to lead us through nonfiction that addresses where technology may be taking us. David Brin’s "The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?," for example, was written in 1998, before 9/11, the Patriot Act and expanded surveillance became a growing cause for concern in the U.S.
Vernor Vinge, a retired professor of mathematics and Hugo Award-winning author of novels and novellas, opened up a whole new area of speculation with his 1993 essay "The Coming Technological Singularity," in which he states that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." The discussion and speculation in this area have since been led by computer scientist and futurist Ray Kurzweil, whose momentous 2005 tome on the subject, "The Singularity Is Near," is a must-read.
So What's Next?
For slightly less than 150 years, science fiction writers have been attempting to show us where we may be headed. They have often gotten it wrong – we still don’t have Buck Rogers’ flying belts – but they have been right enough times to warrant our attention. Along the way, their prophetic writing has captured the imagination of scientists, engineers and thinkers whose skills have made the prophecies come true at an increasingly faster rate. We can only guess what the future of technology holds, but history suggests that if you're looking for future facts, you might just find them in science fiction.