
Don’t Look Back, Here They Come! The Advance of Artificial Intelligence

KEY TAKEAWAYS

Artificial intelligence is rapidly gaining more momentum... and responsibility.

Until recently, the directors of corporations might bring laptops or tablets into board meetings (or, at larger firms, have assistants with those devices seated behind them) to use as research tools if the need arose. The key word here is “tools”: the devices gathered information so that a director could speak or vote intelligently on a particular topic. The system might even recommend a course of action, but the technology was always subservient to the director, who was free to ignore the data gathered or the recommendation of the so-called “artificial intelligence.”

AI as the Decision Maker

Well, the game has just changed! As Rob Wile reported in Business Insider in 2014, in a piece titled “A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors — Here’s What It Actually Does,” a computer analysis system has been named to a board of directors as an equal, not a tool. Wile writes, “Deep Knowledge Ventures, a firm that focuses on age-related disease drugs and regenerative medicine projects, says the program, called VITAL, can make investment recommendations about life sciences firms by poring over large amounts of data. … How does the algorithm work? VITAL makes its decisions by scanning prospective companies’ financing, clinical trials, intellectual property and previous funding rounds.” The real kicker: VITAL is a voting member of the board, with the same voting weight as any other member.
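Deep Knowledge Ventures has not published VITAL’s internals, but a system like the one Wile describes can be pictured as a weighted scoring model over due-diligence signals. The following is a minimal sketch along those lines; the field names, weights and threshold are my own assumptions for illustration, not anything disclosed about VITAL.

```python
from dataclasses import dataclass

# Hypothetical due-diligence signals, each normalized to a 0-1 scale.
# The categories mirror those Wile lists (financing, clinical trials,
# intellectual property, previous funding rounds); the weights and the
# threshold are invented for illustration.
@dataclass
class Prospect:
    name: str
    financing_health: float      # balance-sheet strength
    trial_progress: float        # clinical-trial stage and outcomes
    ip_strength: float           # patent portfolio quality
    funding_track_record: float  # quality of earlier rounds

WEIGHTS = {
    "financing_health": 0.30,
    "trial_progress": 0.30,
    "ip_strength": 0.20,
    "funding_track_record": 0.20,
}
INVEST_THRESHOLD = 0.65  # arbitrary cutoff for a "yes" vote

def vote(p: Prospect) -> bool:
    """Vote yes when the weighted score clears the threshold."""
    score = sum(getattr(p, field) * w for field, w in WEIGHTS.items())
    return score >= INVEST_THRESHOLD

candidate = Prospect("RegenBio", 0.8, 0.7, 0.6, 0.5)
print(vote(candidate))  # True: 0.24 + 0.21 + 0.12 + 0.10 = 0.67 >= 0.65
```

The numbers are beside the point; what matters is the governance shift. Once a score like this one carries an actual board vote, the model has moved from tool to decision maker.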

Suffice it to say that this is only the first of many such stories to come down the pike.

AI Outsmarting Humans?

Artificial intelligence has been scoring all kinds of wins. A self-taught computer made major news in January when it devised the “ultimate” strategy for winning at poker after playing 2 trillion simulated hands. The story might not grab the attention of many readers because computers have already won at chess (beating a grandmaster) and checkers (not to mention “Jeopardy”). This, however, is different. In those cases, the computer knew everything about the problem at hand and could scan millions of facts, moves and strategies on the spot to compete with an opponent. Here, the AI does not know what cards the opponent holds “in the hole” and is therefore dealing with incomplete knowledge. Nor does it have a profile of its opponent to know when and how often he or she bluffs, or whether the opponent has any tics or expressions that give away a bluff (although it may learn them as the session goes on).

Michael Bowling, who led the project for the University of Alberta in Edmonton, Canada, explained the process to the Associated Press: the program considered 24 trillion simulated poker hands per second for two months, probably playing more poker than all of humanity has ever experienced. The resulting strategy still won’t win every game because of bad luck in the cards, but over the long run, across thousands of games, it won’t lose money. As he put it, “We can go against the best (players) in the world and the humans are going to be the ones that lose money.”

The AP article gave further background on the project:


“The strategy applies specifically to a game called heads-up limit Texas Hold ’em. In the two-player game, each contestant creates a poker hand from two cards he is dealt face-down plus five other cards placed on the table face-up.

“Players place bets before the face-up cards are laid out, and then again as each card is revealed. The size of the wagers is fixed. While scientists have created poker-playing programs for years, Bowling’s result stands out because it comes so close to ‘solving’ its version of the game, which essentially means creating the optimal strategy. Poker is hard to solve because it involves imperfect information, where a player doesn’t know everything that has happened in the game he is playing — specifically, what cards the opponent has been dealt. Many real-world challenges like negotiations and auctions also include imperfect information, which is one reason why poker has long been a proving ground for the mathematical approach to decision-making called game theory.”

The system, described in the journal Science, drew praise from other artificial intelligence researchers. Tuomas Sandholm of Carnegie Mellon University in Pittsburgh, who did not participate in the work, called Bowling’s results “a landmark,” adding, “it’s the first time that an imperfect-information game that is competitively played by people has been essentially solved.”
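The Science paper describes a variant of an algorithm family called counterfactual regret minimization: the program repeatedly plays against itself, measures how much it regrets not having taken each alternative action, and shifts probability toward the actions it regrets missing. The sketch below shows that core loop, regret matching, on rock-paper-scissors rather than poker; it is a toy illustration of the self-play idea, not the Alberta group’s code.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a: str, b: str) -> int:
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    if a == b:
        return 0
    return 1 if beats[a] == b else -1

def regret_matching(regrets):
    """Mix actions in proportion to positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1 / len(ACTIONS)] * len(ACTIONS)  # uniform fallback
    return [p / total for p in positive]

def solve(iterations=100_000):
    regrets = [[0.0] * 3, [0.0] * 3]       # per-player regret tables
    strategy_sum = [[0.0] * 3, [0.0] * 3]  # accumulates the average strategy
    for _ in range(iterations):
        strats = [regret_matching(r) for r in regrets]
        moves = [random.choices(range(3), s)[0] for s in strats]
        for p in range(2):
            opp = moves[1 - p]
            got = payoff(ACTIONS[moves[p]], ACTIONS[opp])
            for a in range(3):
                # Regret = what action a would have earned minus what I earned.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[opp]) - got
                strategy_sum[p][a] += strats[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]

print(solve())  # converges toward the equilibrium mix [1/3, 1/3, 1/3]
```

Full-scale poker adds hidden cards and sequential betting, so a real solver keeps a regret table for every distinct information state rather than one table per player, but the feedback loop is the same: play, measure regret, adjust.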

AI Becoming More Intelligent

If this isn’t enough to boggle your mind, consider that a robot somewhere is sitting in front of a computer or TV screen, learning how to do things by watching: “Robot learns to use tools by ‘watching’ YouTube videos.” The story, found on Kurzweil AI, the best place I know of to keep up with new developments in AI technology, details how the system, developed by researchers at the University of Maryland and NICTA in Australia, is able to recognize shapes and learn methods of manipulating them.

Robot Morality

There’s a lot to think about when it comes to working with robots. In a January New York Times piece titled “Death by Robot,” writer Robin Marantz Henig relates a problem posed by Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University:

“Imagine it’s a Sunday in the not-too-distant future. An elderly woman named Sylvia is confined to bed and in pain after breaking two ribs in a fall. She is being tended by a helper robot; let’s call it Fabulon. Sylvia calls out to Fabulon asking for a dose of painkiller. What should Fabulon do? The coders who built Fabulon have programmed it with a set of instructions: The robot must not hurt its human. The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission. On most days, these rules work fine. On this Sunday, though, Fabulon cannot reach the supervisor because the wireless connection in Sylvia’s house is down. Sylvia’s voice is getting louder, and her requests for pain meds become more insistent.”

Scheutz explains, “You have a conflict here. On the one hand, the robot is obliged to make the person pain-free; on the other hand, it can’t make a move without the supervisor, who can’t be reached.” He points out that human caregivers would have a choice, and would be able to justify their actions to a supervisor after the fact.
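Seen as code, Scheutz’s dilemma is a priority conflict among hard rules. The fragment below is a hypothetical rendering of Fabulon’s three instructions; the names and structure are mine, and the point is only that rules which are each sensible in isolation can leave a program with no permissible action.

```python
def supervisor_reachable() -> bool:
    return False  # the wireless connection in Sylvia's house is down

def fabulon_decides(patient_in_pain: bool, patient_requests_meds: bool) -> str:
    # Rule 1: the robot must not hurt its human.
    #         (Withholding relief arguably violates this when she is in pain.)
    # Rule 2: the robot must do what its human asks it to do.
    # Rule 3: the robot must not administer medication without permission.
    if patient_requests_meds:
        if supervisor_reachable():
            return "request permission, then administer"
        # Rules 1 and 2 say act; Rule 3 forbids acting.
        # A fixed rule set has no answer here: whichever default the
        # coders pick, the program breaks one of its own rules.
        return "deadlock: no permissible action"
    return "monitor patient"

print(fabulon_decides(patient_in_pain=True, patient_requests_meds=True))
```

A human caregiver resolves the standoff with judgment and justifies the choice afterward; giving machines that capability is exactly what the researchers Henig describes are working toward.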

Henig writes,

“[T]hese are not decisions, or explanations, that robots can make — at least not yet. A handful of experts in the emerging field of robot morality are trying to change that. Computer scientists are teaming up with philosophers, psychologists, linguists, lawyers, theologians and human rights experts to identify the set of decision points that robots would need to work through in order to emulate our own thinking about right and wrong. Scheutz defines ‘morality’ broadly, as a factor that can come into play when choosing between contradictory paths.”

So far, robots are joining boards of directors, winning at poker and learning skills by watching screens, while teams of experts from widely varied disciplines band together to try to develop morality guidelines for robots (the Henig article, too long to do justice to here, is particularly engrossing and challenging, and I recommend it to all). Wow, gone are the days of R2-D2 from “Star Wars” and simple adherence to Isaac Asimov’s famed “Laws of Robotics” (from “I, Robot,” 1950):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws have guided both science fiction writers and robotics developers since Asimov wrote them. Now, as robot development accelerates at an exponential pace and moves into the realm of complexity, it seems they are not enough. Henig ends her column with a cautionary note:

“There’s something peculiarly comforting in the idea that ethics can be calculated by an algorithm: It’s easier than the panicked, imperfect bargains humans sometimes have to make. But maybe we should be worried about outsourcing morality to robots as easily as we’ve outsourced so many other forms of human labor. Making hard questions easy should give us pause.”

She is, of course, correct — but we, the public, must become the “informed public” so that decisions that will affect our employment, education, healthcare — just about our whole lives — will not be made only by an “intellectual elite.” For us to become this “informed public,” however, will take work — work that must be done if we are to control our fate.

John F. McMullen
Editor

John F. McMullen lives with his wife, Barbara, in Jefferson Valley, New York, in a converted barn full of pets (dog, cats, and turtles) and books. He has been involved in technology for more than 40 years and has written more than 1,500 articles, columns and reviews about it for major publications. He is a professor at Purchase College and has previously taught at Monroe College, Marist College and the New School for Social Research. McMullen has a wealth of experience in both technology and in writing for publication. He has worked as a programmer, analyst, manager and director of…