5 Defining Qualities of Robots
As robotic technology evolves and expands, the word "robot" remains somewhat loosely defined, in spite of its growing relevance. The following model outlines how robots have been historically defined and how various technologies within these parameters are improving robotics.
The term “robot” is not easily defined, but its etymology is reasonably simple to trace. It is not a very old word, having entered the English language fairly recently. It dates back to the early twentieth century, when Czech playwright Karel Capek presented a unique and somewhat prophetic glimpse into the future with his groundbreaking play, “Rossum's Universal Robots.” Capek – who credited his brother Josef with suggesting the word – drew on the Czech “robota,” which translates roughly to “forced labor” or “drudgery.”
Before becoming an established fiction writer, Karel Capek worked as a journalist. And although “Rossum's Universal Robots” was a work of speculative fiction, it serves as an apt prelude to the reality of our increasingly automated tech culture. Like the more recent series of “Terminator” films, Capek's play depicts robots as future overlords who go to war with human beings.
The play emphasizes how the robots were created to serve people, but gradually adopt many of their characteristics and eventually attempt to overthrow them. In its depiction of machines imitating human likeness and capability (the concern of biorobotics, a field in which life is imitated through technology), the story largely foreshadows how robots would develop over the next century.
Over the course of the industrial revolution, technology developed a rather uneasy relationship with labor. The term “Luddite” is often used to refer to somebody who mistrusts or opposes technology. The original Luddites were members of an English textile workers' movement who, in the early nineteenth century, revolted against industrial innovation that threatened to make them obsolete. This was an early recognition of technology's potential to disrupt and perhaps ultimately upend the human workforce. (Read also: Will Robots Take Our Jobs? That Depends)
But human society thrives on efficiency, and automation is implemented where human labor becomes too costly or inefficient to justify. Technology has been a noble servant of people in many respects over the years. And although it is inspired by nature, it ultimately seeks to improve on it. Thus, the robots that we've designed in our likeness will surpass many of our own human limitations (as many already do). As this evolution unfolds, the idea of the robot will likely become quite abstract, which raises the question of what currently defines robots as physical beings.
The following five essential qualities characterize robots as we have come to know them today.
1. Intelligence
Human intelligence is derived from the elaborate and interconnected network of neurons within the human brain. These neurons form electrical connections with one another, but it remains unclear exactly how they collectively give rise to brain activity such as thought and reasoning. Nevertheless, innovations in computation and data mining enable the development of artificially intelligent systems that reflect human intellectual capability.
A robot known as Kismet (developed at the Massachusetts Institute of Technology) decentralizes its computing by separating it into different processing tiers. Higher levels of computing deal with complicated and technically advanced processes, while the lower resources are allocated to the tedious and repetitive activity. Kismet works very similarly to the human nervous system, which consists of both voluntary and involuntary functionality.
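The tiered idea can be sketched in a few lines of code. This is a minimal illustration of the general pattern, not Kismet's actual architecture: a cheap, "involuntary" reflex tier runs on every cycle, while a costlier, "voluntary" deliberative tier is consulted only when the reflexes stay quiet. The function names, sensor values, and thresholds are all illustrative assumptions.

```python
# Illustrative sketch of tiered control (NOT Kismet's real implementation):
# a low-level reflex tier handles repetitive safety checks every tick,
# while a high-level deliberative tier runs slower reasoning on demand.

def reflex_tier(sensor_reading):
    """Cheap, involuntary check run on every cycle (like a reflex)."""
    if sensor_reading < 0.2:           # dangerously close to an obstacle
        return "stop"                  # immediate reaction, no deliberation
    return None                        # nothing urgent; defer upward

def deliberative_tier(goal, sensor_reading):
    """Slower, 'voluntary' reasoning invoked only when reflexes are quiet."""
    return "advance" if sensor_reading > 0.5 else "turn"

def control_cycle(goal, sensor_reading):
    action = reflex_tier(sensor_reading)
    if action is None:
        action = deliberative_tier(goal, sensor_reading)
    return action

print(control_cycle("reach_target", 0.1))  # reflex fires: stop
print(control_cycle("reach_target", 0.9))  # deliberation: advance
```

The split mirrors the voluntary/involuntary division described above: the reflex tier never waits on the expensive reasoning, just as a hand pulls away from heat before the conscious mind catches up.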
Artificial intelligence remains controversial – from how its terminology is applied to the subjective question of whether it could ever constitute a form of consciousness. Today, much of the debate on human-like AI revolves around its lack of true emotions or personality. Empathy – a powerful driver of many of our decisions and actions – is possibly one of the traits that most distinguishes humans from other animals.
Machines still lack true “emotional intelligence,” and it's probably better if they never have emotions of their own – unless we want to see our Alexa refusing to work because she's angry or sad. However, the ability of modern AI to recognize human emotion may be beneficial. Even now, AI shows the first signs of an early empathy – in the form of an enhanced ability to recognize human facial expressions, vocal intonation, and body language, and to tune its reactions accordingly.
A glimmer of very rudimentary empathy was identified in a recent experiment led by engineers at Columbia Engineering's Creative Machines Lab. Although it is a stretch to call this primitive ability to visually predict another robot's behavior true "empathy," it is a first step in that direction. In a nutshell, one robot chose its path depending on whether or not it could see a certain green box through its camera. A second, "empathic" robot could not see the box, yet after two hours of observation it was able to predict its partner's chosen path 98% of the time, without possessing any knowledge about the green box.
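The underlying trick – learning to predict an agent's behavior without access to the hidden state driving it – can be illustrated with a toy simulation. This is not the Columbia team's actual method; the positions, choices, and hidden box layout below are invented for illustration. An "observer" simply tallies which path an "actor" picks from each starting position, then predicts the most frequent past choice.

```python
# Toy sketch of behavior prediction by observation (illustrative only,
# not the actual Columbia Engineering experiment): the observer never
# sees the hidden box, but learns the actor's habits from repeated trials.
import random
from collections import Counter, defaultdict

random.seed(0)

box_visible = {0: True, 1: False, 2: True}   # hidden state: unseen by observer

def actor_choice(start, visibility):
    """The actor's true rule depends on state the observer cannot access."""
    return "left" if visibility[start] else "right"

# Observation phase: record (start position -> chosen path) statistics.
history = defaultdict(Counter)
for _ in range(200):
    start = random.choice([0, 1, 2])
    history[start][actor_choice(start, box_visible)] += 1

def predict(start):
    """Predict the actor's most frequent past choice from this start."""
    return history[start].most_common(1)[0][0]

correct = sum(predict(s) == actor_choice(s, box_visible)
              for s in [0, 1, 2, 0, 1, 2])
print(f"{correct}/6 predictions correct")
```

Because the actor's rule is deterministic here, pure frequency counting suffices; the real experiment had to predict from raw camera images, which is what made it notable.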
2. Sense Perception
The technology that empowers robot senses has fostered our ability to communicate electronically for many years. Electronic communication mechanisms, such as microphones and cameras, help transmit sensory data to computers within simulated nervous systems. Sense perception is useful, if not fundamental, to robots' interaction with live, natural environments.
The human sensory system is broken down into vision, hearing, touch, smell and taste – all of which have been or are being implemented into robotic technology in some form. Vision and hearing are simulated by transmitting media to databases that compare the incoming information to existing definitions and specifications. When a robot hears a sound, for example, that sound is transmitted to a database (or “lexicon”) where it is compared against similar sound waves.
Self-driving vehicles are a great example of how robotic senses work. The car is packed with sensors such as LIDAR, RADAR, video cameras, GPS, and wheel encoders that allow it to collect data from its surroundings in real time. Advanced perception algorithms then process this raw data, allowing the AI to compare it against a set of pre-defined items. This way the vehicle is able to identify and, thus, “sense” other cars, road signs, highways, pedestrians, etc. (Read also: Are These Autonomous Vehicles Ready for Our World?)
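One reason a vehicle carries so many sensors is that their evidence can be combined before anything is treated as "sensed." The sketch below is a deliberately simplified stand-in for real perception algorithms: each sensor proposes labeled detections with a confidence, and a label is accepted only if its average confidence clears a threshold. The fusion rule and all numbers are illustrative assumptions.

```python
# Illustrative sensor-fusion sketch (not a real perception stack):
# average per-label confidence across sensors, then threshold.

def fuse_detections(detections, threshold=0.6):
    """detections: list of (sensor, label, confidence) tuples.
    Returns {label: avg_confidence} for labels passing the threshold."""
    scores = {}
    for _sensor, label, conf in detections:
        scores.setdefault(label, []).append(conf)
    return {label: sum(c) / len(c)
            for label, c in scores.items()
            if sum(c) / len(c) >= threshold}

frame = [
    ("lidar",  "pedestrian", 0.9),
    ("camera", "pedestrian", 0.8),
    ("camera", "road_sign",  0.4),   # weak, single-sensor evidence
]
print(fuse_detections(frame))  # pedestrian accepted, road_sign rejected
```

Corroboration across independent sensors is what lets the vehicle act confidently on a detection that any single sensor would flag only weakly.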
Much still needs to be done before engineers can make human-robot interactions truly genuine. A particularly coveted frontier of machine perception, on which modern robotics is focusing much of its effort, is the ability to recognize human emotions from facial expressions. Although not yet fully deployed in robotics, early emotion recognition systems are currently being tested by several tech companies, including Google, Amazon and Microsoft.
These not-particularly-intelligent AI-powered systems are being used for a variety of purposes, such as empowering surveillance cameras to identify suspicious people or gauging how customers respond to advertisements. Whether these technologies will be used to teach machines to better understand humans, or simply to erode our right to privacy even further, only time will tell.
3. Dexterity
Dexterity refers to the functionality of limbs, appendages and extremities, as well as the general range of motor skill and physical capability of a body. In robotics, dexterity is maximized where there is a balance between sophisticated hardware and high-level programming that incorporates environmental sensing capability. Many different organizations are achieving significant milestones in robotic dexterity and physical interactivity.
The United States Department of Defense is host to the Defense Advanced Research Projects Agency (DARPA), which sponsors a great deal of innovation in the development of prosthetic limbs. This technology lends a great deal of insight into the future of robot dexterity, but not all robots imitate the human physical form (those that do are often referred to as “androids,” whose Greek etymological origin basically translates as “likeness to man”).
Organizations like Boston Dynamics explore a variety of both bipedal and quadrupedal configurations (with its famous BigDog robot falling in the latter category) while expanding on the idea of extrinsic dexterity in grasping mechanisms.
Anthropomorphic robotic hands that can perform delicate tasks such as opening jars or writing can be used in many circumstances where it is too dangerous for a human to use their own limbs, such as in extreme environments or when handling harmful substances and materials. Reinforcement learning – a branch of machine learning – has driven robot dexterity forward. Its algorithms help the machine discover which techniques are most effective for manipulating a certain object or achieving a specific task, similarly to what happens with muscle memory in animals. The results are outstandingly dexterous robots that are nearly able to emulate the precision of true human hands.
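The trial-reward-update loop at the heart of reinforcement learning can be shown with a toy grasp task. This is a generic sketch of the technique, not any specific lab's method: the robot tries discrete grip-force levels, earns a reward only when the grasp succeeds, and its value table gradually converges on the effective force. The force levels, the hidden "right" answer, and the learning parameters are all illustrative.

```python
# Toy reinforcement learning sketch for a grasp task (illustrative only):
# epsilon-greedy action selection with a one-step value update.
import random

random.seed(1)
FORCES = [1, 2, 3, 4, 5]     # candidate grip-force levels
BEST = 3                     # hidden "right" force for this object

def reward(force):
    """+1 if the grasp succeeds: too weak drops it, too strong crushes it."""
    return 1.0 if force == BEST else 0.0

q = {f: 0.0 for f in FORCES}       # estimated value of each force
alpha, epsilon = 0.1, 0.2          # learning rate, exploration rate

for _ in range(500):
    # Mostly exploit the best-known force, sometimes explore a random one.
    if random.random() < epsilon:
        f = random.choice(FORCES)
    else:
        f = max(q, key=q.get)
    q[f] += alpha * (reward(f) - q[f])   # nudge estimate toward outcome

print(max(q, key=q.get))  # the force the robot has learned to prefer
```

Like muscle memory, nothing here "understands" grasping; repeated reward simply carves out the effective technique.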
4. Power
Robots require an energy source, and there are many factors that go into deciding which form of power provides the most freedom and capability for a robotic body. There are many different ways to generate, transmit and store power. Generators, batteries and fuel cells provide power that is locally stored but finite, while tethering to a power source naturally limits the device's freedom and range of functions.
One very notable exception would be the simple machine-based bipedal walking system that relies only on gravity to propel its walk cycle (developed at Japan's Nagoya Institute of Technology). While this may not qualify as a stand-alone (no pun intended) robot, it could lead to innovations in how robot power could be optimized, or possibly even generated.
A fantastically ingenious example of how power can be arranged for soft, flexible intelligent robots is the use of smart materials such as dielectric elastomers, which can serve as transducers in the design of intelligent wearable robotics.
A wearable actuator-generator such as robotic clothing could, for example, accumulate energy from body movements as the wearer walks down a flight of stairs, then return that stored energy to provide added power when they climb back up those same stairs. The strain-responsive properties of these soft materials are used to create advanced assistive robots that are nearly self-sufficient in terms of power consumption.
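A back-of-the-envelope calculation shows why stair descent is an attractive harvesting opportunity. The masses, heights, and the 30% conversion efficiency below are illustrative assumptions, not measured figures for any real device.

```python
# Rough energy budget for the harvest-and-return idea (illustrative
# numbers only): descending stores a fraction of gravitational potential
# energy, which can then assist the climb back up.

def potential_energy(mass_kg, height_m, g=9.81):
    """Gravitational potential energy in joules."""
    return mass_kg * height_m * g

def recoverable_energy(mass_kg, descent_m, efficiency=0.3):
    """Energy a wearable generator might bank while descending,
    assuming a (hypothetical) 30% conversion efficiency."""
    return potential_energy(mass_kg, descent_m) * efficiency

# An 80 kg wearer descending a 3 m flight of stairs:
banked = recoverable_energy(80, 3)
print(f"{banked:.0f} J banked for the climb back up")
```

Even at modest efficiency, hundreds of joules per flight accumulate quickly over a day of wear, which is what makes these nearly power-self-sufficient assistive designs plausible.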
5. Independence
Intelligence, sense perception, dexterity and power all converge to enable independence, which in turn could theoretically lead to a nearly personified individualization of robotic bodies. From its origin within a work of speculative fiction, the word “robot” has almost universally referred to artificially intelligent machinery with a certain degree of humanity to its design and concept (however distant).
This automatically imbues robots with a sense of personhood. It also raises many potential questions as to whether or not a machine can ever really “awaken” and become conscious (sentient) – and, by extension, be treated as an individual subject, or "person." (Read also: How Does AI Interact With Robotics?)
Modern robots have already overcome many of the hardest challenges they faced until just a few years ago. The robot race is running at an amazingly fast pace, and we can only wonder what machines will achieve in the years to come.