But what does cognitive mean, and how will we know when we’ve achieved it? And most importantly, in what ways will it improve enterprise processes and capabilities beyond the already significant advancements of AI and ML?
The most well-known cognitive platform these days is IBM Watson. Not only is it a “Jeopardy!” champion, it is playing an increasingly crucial role in numerous data-intensive industries, such as health care, finance and high-tech manufacturing. But Watson is not the only cognitive solution for the enterprise. Companies like Enterra Solutions, Attivio and Diwo are putting cognitive to work on tasks like app development, search and even security. (For more on AI's potential, see Can Creativity Be Implemented in AI?)
So far, however, initial results have been mixed. Even Watson sometimes has trouble distilling the truth from large, often conflicting, data sets. But like all intelligent systems, cognitive platforms have the ability to learn from and adapt to changing environments, allowing them to steadily improve their own performance without manual recoding. And this is leading to the very real possibility that before long, knowledge work of all types will be largely managed by autonomous, self-learning platforms.
But if this capability is common to all intelligent systems, what differentiates cognitive solutions from run-of-the-mill AI? According to RTInsights' Joel Hans, the key difference lies in the way cognitive processes information. Standard intelligence is very effective at determining which action among a set of predefined options is the most appropriate in a given situation. So an intelligent assistant, for example, can parse the wording of a certain request and select a response from an existing menu. A cognitive solution, however, attempts to emulate human thought to engage in contextually aware problem-solving. This puts cognitive more on the level of an assistant that can give advice and ascertain the nuances of a problem rather than a simple program that can automatically perform a function.
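The distinction can be illustrated with a minimal, hypothetical sketch. The first function behaves like a standard intelligent assistant, matching a request against a fixed menu of responses; the second weighs contextual signals before answering. All function names, triggers and context fields here are invented for illustration, not taken from any actual product.

```python
# Hypothetical sketch: rule-based selection vs. context-aware response.

# A rule-based assistant maps a request to one of a fixed set of replies.
def rule_based_reply(request: str) -> str:
    menu = {
        "reset password": "A reset link has been sent to your email.",
        "order status": "Your order is in transit.",
    }
    for trigger, response in menu.items():
        if trigger in request.lower():
            return response
    return "Sorry, I don't understand."

# A cognitive-style assistant also weighs context (here, the user's
# order history) before choosing how to respond -- a crude stand-in
# for contextually aware problem-solving.
def cognitive_style_reply(request: str, context: dict) -> str:
    if "order status" in request.lower():
        if context.get("delayed_orders", 0) > 0:
            return ("Your order is delayed; based on your history, "
                    "you may want to switch to express shipping.")
        return "Your order is in transit."
    return "Let me gather more information before advising."

print(rule_based_reply("What's my order status?"))
print(cognitive_style_reply("What's my order status?", {"delayed_orders": 1}))
```

The rule-based version can only pick from its menu; the contextual version can qualify its answer and offer advice, which is the gap the article is describing.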
A key example highlighting the difference between intelligent and cognitive systems can be found in the operating room. An intelligent system would be able to monitor heart rate, breathing and other factors to regulate the level of anesthesia, or even guide a remote scalpel to the precise location. A cognitive assistant, on the other hand, would give advice on procedures and courses of treatment, pulling data from numerous sources that a doctor might not have ready access to.
How can this be applied to business? Frederic Laluyaux, president and CEO of Aera Technology, argues that the key application is digitizing the executive function by training machines to evaluate conflicting goals and data to then choose from a variety of options based on logic, rationality, causal analysis and experience. Leading neuroscientists are already mapping how this takes place in the human brain, so the next step is to apply the same learning process to AI.
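Evaluating conflicting goals to choose among options can be sketched, very roughly, as a weighted multi-criteria decision. The goals, weights and scores below are invented for illustration; a real executive-support system would derive them from causal analysis and accumulated experience rather than hard-coded numbers.

```python
# Hypothetical sketch of "digitizing the executive function": score
# competing options against weighted, possibly conflicting goals and
# pick the best overall trade-off.
def choose_option(options: dict, weights: dict) -> str:
    """Return the option whose weighted goal scores sum highest."""
    def total(scores: dict) -> float:
        return sum(weights[goal] * score for goal, score in scores.items())
    return max(options, key=lambda name: total(options[name]))

# Invented example: two supply-chain moves scored against three goals.
options = {
    "expand_inventory": {"revenue": 0.8, "cost": 0.3, "risk": 0.4},
    "cut_lead_times":   {"revenue": 0.6, "cost": 0.7, "risk": 0.8},
}
weights = {"revenue": 0.5, "cost": 0.3, "risk": 0.2}

print(choose_option(options, weights))
```

Even this toy version shows the shape of the problem: no option wins on every goal, so the system must trade them off explicitly, which is exactly where logic and causal reasoning come in.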
For instance, an immature human brain may require some time to master a complex task, such as tying a shoe, but once the skill is learned it becomes automatic. As situations become more complex, the brain must rely on a larger data store, much of it external, in order to arrive at a conclusion. From that point, it must then devise a sort of threat meter to weigh the seriousness of a given situation and the amount of attention it deserves. Just as a human brain must undergo these developmental stages in order to achieve executive-level decision-making capability, so too must an intelligent platform. This is why most experts are not overly concerned when platforms like Watson cannot perform flawlessly right out of the box – they have to learn what needs to be done. (We need computers, but do they need us? Check out Another Look at Man-Computer Symbiosis.)
But this development is a two-way street. As cognitive evolves and changes, so too will the enterprise. Mobile Business Insight’s Rose de Fremery notes that a cognitive enterprise will have the capacity for exponential learning and continuous, self-directed optimization, which can be used to gain a competitive advantage by leveraging complex technologies like blockchain, the IoT and advanced 3D printing.
To successfully navigate this transition, however, the enterprise needs to adopt cognitive technologies with a clear plan in mind. While many organizations will undoubtedly use them to shore up positions in established markets, others are looking to pioneer new processes and business models to either remake existing industries along more digital, service-driven lines or create entirely new ones for an increasingly connected world.
At some point, however, cognitive technologies must generate practical applications that serve to improve or expand the way knowledge work is performed today. To Phanikishore Burre, vice president of cloud, infrastructure and security services at CSS Corp., the most pressing use cases for cognitive are:
- Predictive Maintenance – in which huge data sets can be leveraged to anticipate failures in both digital and mechanical systems;
- Interdependency Analytics – mapping the relationships between systems and events to ascertain existing and potential trouble spots and dynamically strive for optimal performance;
- Self-Healing/Autonomous Remediation – automatic restoration of critical infrastructure, applications and software using a combination of automated instrumentation, machine learning analytics and integrated remediation;
- Self-Learning Systems Management – ensuring that intelligence is always accessible in the context of a given task, making it easier to access relevant information, tools, templates and other resources; and
- Smart Agents – intelligent, connected virtual assets that can detect and respond to internal and external environments to transition the enterprise from real-time control to predictive, autonomous control.
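To make the first of these use cases concrete, predictive maintenance at its simplest boils down to flagging equipment whose sensor readings drift away from a baseline learned during normal operation. The sketch below is a deliberately minimal, hypothetical illustration; the readings and threshold are invented, and a production system would use far richer models over much larger data sets.

```python
# Hypothetical predictive-maintenance sketch: flag a machine when a
# sensor reading deviates more than 3 standard deviations from a
# baseline learned from normal-operation data.
from statistics import mean, stdev

def learn_baseline(normal_readings):
    """Summarize normal behavior as (mean, standard deviation)."""
    return mean(normal_readings), stdev(normal_readings)

def needs_maintenance(reading, baseline, threshold=3.0):
    """Return True if the reading deviates beyond the threshold."""
    mu, sigma = baseline
    return abs(reading - mu) > threshold * sigma

# Invented vibration readings from a machine running normally.
baseline = learn_baseline([0.9, 1.0, 1.1, 1.0, 0.95, 1.05])

print(needs_maintenance(1.02, baseline))  # a typical reading
print(needs_maintenance(2.5, baseline))   # an anomalous spike
```

The same anticipate-before-failure pattern underlies the interdependency-analytics and self-healing cases as well; what changes is the sophistication of the model and the scope of the data it draws on.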
Cognitive technology is sometimes referred to as a “thinking” computer, but this is not entirely correct. The basic underpinnings of human thought and consciousness are still a mystery, while cognitive systems attempt to mimic the results of human intellect through highly advanced algorithmic processes. This means they can be broken down, analyzed and restructured on a finite level, giving humans ultimate control over how they behave.
In many ways, cognitive solutions can outperform the human brain, particularly when it comes to processing large, complex data sets. But ultimately, the brain wins out because it thinks for itself, not for someone else.