Companies might shut down artificial intelligence chatbot projects for many reasons, but the most likely ones fall into two broad categories. First, a company may shut down a chatbot if it behaves in ways that reflect poorly on the company, such as promoting objectionable speech. Alternately, a company might shut down a chatbot project if the system starts to demonstrate autonomy or capability that either can't be easily controlled or could eventually pose a threat to public health and safety.
Both of these scenarios are documented in recent history. Microsoft's 2016 experiment with the Tay chatbot ended when Tay began to take on some of the worst characteristics of its human counterparts: racist and aggressive comments and other objectionable behavior, much of which was clearly learned from, or parroted directly back from, users.
In a very different case study, Facebook researchers shut down an experimental chatbot setup when they observed two chatbots communicating with each other in a way that was largely opaque to human observers. The contention was that the chatbots had started to "talk in a kind of code" that was more convenient for them and less transparent to their human handlers.

This illustrates a very real concern about artificial intelligence in general: as progress toward strong AI accelerates, humans must be able to contain and control AI implementations to make sure they don't cross given boundaries. Strong artificial intelligence raises a wide range of ethical and safety issues, and this is one of the primary reasons that some chatbots or other AI projects may be shut down in the future by the very stakeholders who built and supported them.