Nearly everyone has interacted with a chatbot, either through personal assistants like Apple’s Siri or through customer service departments, but how do they seem so smart? There are several ways AI developers can train these bots to give realistic responses.
The simplest way to design a bot is to have it reply with responses drawn from a preprogrammed set. This was the approach of ELIZA, a program developed in the 1960s by Joseph Weizenbaum (1923-2008).
ELIZA was intended to simulate a Rogerian psychotherapist. The program could only respond according to preprogrammed “scripts,” but many users found the effect so realistic that they insisted that ELIZA really was intelligent.
This has been dubbed the “ELIZA Effect.”
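To make the scripted approach concrete, here is a minimal sketch of ELIZA-style pattern matching. The patterns and responses below are illustrative examples, not Weizenbaum's actual scripts: each rule pairs a regular expression with a canned reply that echoes the user's own words back.

```python
import re

# Toy Rogerian-style "scripts": each regex maps to a canned response,
# with \1 substituting part of the user's input back into the reply.
# These rules are invented for illustration, not ELIZA's originals.
RULES = [
    (r"i need (.*)", "Why do you need \\1?"),
    (r"i am (.*)", "How long have you been \\1?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(user_input):
    """Return the first matching scripted response, or a default prompt."""
    for pattern, reply in RULES:
        match = re.match(pattern, user_input.lower().strip())
        if match:
            return match.expand(reply)
    return DEFAULT

print(respond("I need a vacation"))  # -> Why do you need a vacation?
print(respond("Hello there"))        # -> Please go on.
```

However elaborate the rule list grows, the program never learns anything new: everything it can say is already written down.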
Research in AI has allowed for far more sophisticated approaches to developing chatbots, which allow them to “learn” from both training data provided by developers and from user input.
Let’s take the example of a chatbot used by the customer service department of a software company. To start, the bot will be fed information from the company’s own resources: documentation, FAQs, emails, and chat transcripts.
The bot won’t just be limited to whatever developers give it, the way ELIZA was. It will be able to learn from real interactions with customers using natural language processing (NLP).
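One simple way a bot can draw on such training data is to match an incoming question against the questions it has already seen and return the closest answer. The sketch below uses a bag-of-words representation and cosine similarity, with a small hypothetical FAQ standing in for the company's real documentation; production systems use far richer NLP models, but the idea is the same.

```python
import math
from collections import Counter

# Hypothetical FAQ entries; in a real deployment these would be
# extracted from the company's documentation and chat transcripts.
FAQ = {
    "How do I reset my password?": "Click 'Forgot password' on the login page.",
    "How do I cancel my subscription?": "Go to Billing and choose Cancel plan.",
    "Where can I download invoices?": "Invoices are under Billing > History.",
}

def bag_of_words(text):
    """Lowercase the text, strip question marks, and count word frequencies."""
    return Counter(text.lower().replace("?", "").split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question):
    """Return the answer whose stored question best matches the input."""
    query = bag_of_words(question)
    best = max(FAQ, key=lambda q: cosine(query, bag_of_words(q)))
    return FAQ[best]

print(answer("how can I reset a password"))
# -> Click 'Forgot password' on the login page.
```

Because the matching is statistical rather than scripted, the bot can handle phrasings its developers never wrote down, and new question-answer pairs harvested from real conversations can simply be added to the FAQ dictionary.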
Even with automated learning, there will still be areas where bots run into trouble. Humans will have to train the bot occasionally using supervised learning. Given the ambiguity of human language, it will be hard to build a chatbot that can run completely unsupervised.
A human user will also likely have to check a chatbot’s results for accuracy, especially in a business context. Still, these chatbots will be far more flexible than a purely rules-based program like ELIZA.
Advances in machine learning and natural language processing could make these chatbots appear even more intelligent in the future.