Creative Disruption: The Changing Landscape of Technology

Technology and Its Problems

Throughout this series, we've covered the technological breakthroughs that have impacted our way of life, often under-the-radar changes that have brought efficiency and benefits to consumers, along with collateral damage to workers and industries.

There is, however, more to be concerned about as we increasingly turn the management of our businesses and personal lives over to software-driven systems. Why? Because systems are designed by human beings, as is the software used to make these systems work. Human beings make mistakes. The problem is that as systems become more complex, these mistakes become harder to find.

Computer Problems are People Problems

A few cases in point:

  • The nuclear facility at Three Mile Island supposedly contained a safety mechanism that would take corrective action and notify management in the event of system error. A system error occurred in 1979, but the safety mechanism failed, leading to the release of radioactive material into the environment.
  • A satellite launched in 1998 to orbit Mars was lost in 1999 when a programming error - a mismatch between English and metric units in the navigation software - sent it too deep into the Martian atmosphere, where it burned up, costing U.S. taxpayers millions of dollars.
  • Microsoft had to recall and replace Word 3.0 for the Macintosh in 1987 when it was found that a program bug destroyed users’ data.
And the beat goes on - a plane flying into a mountain because of a faulty auto-navigation system, software shipped with a virus on the program disk, and so on.

What is distressing about this type of problem is that we have no assurance that it won't happen again. As the power of systems increases dramatically, the amount of data that may be collected, stored and analyzed increases geometrically. Also, as high-speed telecommunications bind us more and more tightly together, we become more dependent on computer systems, putting us at even greater risk.

Better Technology, Bigger Problems?

We are told that, with the newest programming tools, greater quality control and a more astute user base, the chances for major errors are reduced.

The problem is, the possibilities for error are endless. On February 28, 2012, the New York Times reported how, in November 2010, Nicaragua "trespassed" into Costa Rica as part of a dredging operation. When Costa Rica complained, the Nicaraguan official in charge claimed that he wasn't trespassing at all, saying, "See Google's satellite photo, and there you see the border." Google Maps had placed the border south of the generally accepted line, giving Nicaragua a few more miles of territory. After Costa Rica complained to both Google and Nicaragua, the border was reset to the generally accepted one (over Nicaragua's objections), and people around the world began calling the incident "The Google Maps War."

The incident sounds trivial, but what if the border had been between two long-warring states? Then we could really have had a Google Maps War!

Technology and Control

Consider "self-replicating" nanotechnology machines - tiny nano-computers built to enter the human bloodstream to find and kill cancerous cells, to swarm an enemy position or to wipe out insect infestations. K. Eric Drexler, a pioneer in the field, was concerned that the self-replication could get out of hand, turning the earth into a mass of "gray goo."

Drexler stated in his 1986 book, "Engines of Creation," "we cannot afford certain types of accidents," and went on to lay out a possible scenario: "Imagine such a replicator floating in a bottle of chemicals, making copies of itself … the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of 10 hours, there are not 36 new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined - if the bottle of chemicals hadn't run dry long before."
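Drexler's numbers are straight exponential doubling: 10 hours at one doubling per 1,000 seconds is 36 doublings, and 2^36 is just over 68 billion. A minimal sketch of the arithmetic (variable names are mine):

```python
# One replicator doubles every 1,000 seconds (Drexler's scenario).
doubling_period_s = 1_000
seconds = 10 * 3600                       # 10 hours
doublings = seconds // doubling_period_s  # 36 doublings in 10 hours
replicators = 2 ** doublings

print(doublings)    # 36
print(replicators)  # 68719476736 -- "over 68 billion," not 36
```

The intuition-defying part is that linear thinking suggests 36 new replicators, while doubling yields 2 to the 36th power.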

Does this sound far-fetched? Consider that on November 2, 1988, the Morris Worm (released by Robert Morris, Jr.) shut down a large portion of the Internet, crashing up to 6,000 major UNIX servers. Morris, according to his lawyer and all who knew him, was not being malicious; he was either trying to expose a security problem on the fledgling Internet or to count the number of machines actually connected and operational. He expected to "shut down" very few machines. Unfortunately, his very powerful program contained faulty logic: the worm traveled through the Internet as intended, but the mechanism meant to limit how many machines it infected did not work. Now fast-forward to a program that controls how many nano machines can be replicated - can you imagine the risk that a programming error could pose there? (To learn more about worms, read Malicious Software: Trojans, Bots and Worms, Oh My!)

At least with the Internet Worm or the Microsoft Word bug, the program code was written by humans who could then go back, examine the program and find the error, albeit after the damage was done. In the world conceived by computer scientist James Martin, who was nominated for a Pulitzer Prize for his 1977 book "The Wired Society: A Challenge for Tomorrow," this will not always be the case. In his 2000 book, "After The Internet: Alien Intelligence," Martin describes computer programs driven by self-modifying code or algorithms. In this scenario, humans will write the original programs, or at least define the task to be accomplished, and the program will then continually modify itself to find the most efficient way to achieve the desired result. The optimized code, in Martin's view, may well be unintelligible to humans - a product of "alien intelligence." We will only be able to analyze the results to ensure that the program is working properly. Martin sees the world to come as so scientifically complex and interconnected that the use of such systems will be mandatory.
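The flavor of Martin's idea can be loosely illustrated with a toy evolutionary search - my own hypothetical sketch, not anything from Martin's book. A human specifies only the goal; the program repeatedly rewrites its candidate solution and keeps whatever scores at least as well. We can verify the final result, but no human designed the sequence of modifications that produced it.

```python
import random

# Toy illustration (hypothetical): humans define only the task -- hit the
# TARGET value -- and the program keeps rewriting its own candidate solution,
# accepting any rewrite that scores at least as well as the current one.
TARGET = 42

def mutate(genome):
    """Randomly nudge one element of the candidate solution."""
    g = list(genome)
    g[random.randrange(len(g))] += random.choice([-1, 1])
    return g

def score(genome):
    """Closeness to the human-specified goal (higher is better)."""
    return -abs(sum(genome) - TARGET)

random.seed(0)
genome = [0, 0, 0]  # the ever-rewritten "program"
for _ in range(1000):
    candidate = mutate(genome)
    if score(candidate) >= score(genome):
        genome = candidate  # keep the better self-modification

print(sum(genome))  # the goal is met; the path there was never designed
```

The point of the sketch: we can check that the output satisfies the task, but the particular genome the search settled on is an accident of thousands of random rewrites - a faint, miniature version of code that humans can validate only by its results.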

Martin’s theories and concerns are expanded on in his more recent work, "The Meaning of the 21st Century," which is also the subject of a documentary narrated by Michael Douglas. I recommend that everyone watch the documentary; it is an eye-opener to the challenges that we face and the way that we can meet them.

With Power Comes Great Responsibility

So we are faced with a future with tremendous computer power, massive storage, high-speed communications, self-replicating nanotechnology and self-modifying alien intelligence code - not to mention a whole lot of other challenges. It is bewildering! That's why the non-technical citizenry must educate itself to at least understand the risks and dangers, and demand the transparency that will help guide us through the 21st century.

Written by John F. McMullen
John F. McMullen lives with his wife, Barbara, in Jefferson Valley, New York, in a converted barn full of pets (dog, cats, and turtles) and books. He has been involved in technology for more than 40 years and has written more than 1,500 articles, columns and reviews about it for major publications. He is a professor at Purchase College and has previously taught at Monroe College, Marist College and the New School for Social Research. He is also a member of the American Academy of Poets, the American Civil Liberties Union, the Freelancer's Union, the Association for Computing Machinery, the American Association for the Advancement of Science and the World Futurist Society.

His current non-technical writing includes a novel, "The Inwood Book" and "New & Collected Poems by johnmac the bard." Both are available on Amazon.com.