Why are some companies contemplating adding 'human feedback controls' to modern AI systems?
Presented by: AltaML
Some companies at the cutting edge of AI are instituting human controls for these systems, giving machine learning and deep learning tools direct human oversight. These aren't small players, either: Google's DeepMind and Elon Musk's OpenAI are two major organizations taking a hands-on approach to advances in artificial intelligence. Their results differ, though. DeepMind has drawn controversy for a perceived unwillingness to share key data with the public, while OpenAI has been far more, well, open about its work on controlling artificial intelligence.
Even such notables as Bill Gates have weighed in on the issue. Gates has said he is one of many who are concerned about the emergence of an artificial superintelligence that may in some ways move beyond human control. Musk, for his part, has also used alarming language about the possibility of "rogue AI."
That is probably the most urgent reason companies are working to apply human controls to AI: the fear that some technological singularity will produce a super-powerful, sentient technology that humans simply can't control anymore. Since the dawn of human ambition, we have built tools to keep the powers we wield in check, whether reins and harnesses for horses or insulation around electrical wires. Control is an innately human impulse, so it makes all the sense in the world that as artificial intelligence comes closer to real functionality, humans apply their own direct controls to keep that power in check.
However, fear of super-intelligent robots isn't the only reason companies apply human controls to machine learning and AI projects. Another major reason is machine bias: artificial intelligence systems are often limited in how they evaluate the data in question, so they amplify any bias inherent in that data. Most professionals dealing with machine learning can tell horror stories about IT systems that failed to treat human user groups alike, whether through gender or ethnic disparity, or some other failure of the system to understand the nuances of our human societies and how we interact with people.
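Bias amplification can be illustrated with a deliberately simple sketch (the hiring data and group labels below are hypothetical, invented purely for illustration): a naive model that just learns the majority pattern in skewed training data turns an 80/20 skew into a 100% preference.

```python
from collections import Counter

# Hypothetical, imbalanced historical hiring data: 80% of the "hired"
# records belong to group A, 20% to group B.
training_data = [("A", "hired")] * 80 + [("B", "hired")] * 20

# A naive "model" that simply predicts whichever group was hired
# most often in the training data.
counts = Counter(group for group, label in training_data if label == "hired")
predicted_group = counts.most_common(1)[0][0]

# The 80/20 skew in the data becomes a 100% skew in the predictions:
# the model now favors group A every single time.
print(predicted_group)  # → A
```

The point of the sketch is that the model did nothing malicious; it faithfully learned, and then amplified, a disparity already present in its inputs, which is exactly the failure mode human reviewers are positioned to catch.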
In a sense, we might put human controls on systems because we're afraid they might be too powerful, or, alternately, because we're afraid they might not be powerful enough. Human controls help target machine learning data sets to provide more precision. They reinforce ideas that the computer simply can't learn on its own, whether because the model isn't sophisticated enough, because AI hasn't advanced far enough, or because some things just lie in the province of human cognition. Artificial intelligence is great for some things: a reward-and-score-based system allowed an artificial intelligence to beat a human player at the immensely complex board game Go. For other things, that incentive-based approach is wholly inadequate.
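The "reward-and-score" idea mentioned above can be sketched in a few lines. This is not how a Go-playing system actually works; it is a minimal, assumed two-action example of the same basic incentive loop, where an agent tries actions, observes rewards, and gradually settles on whatever scores best.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical hidden payout probabilities for two actions.
true_rewards = {"a": 0.2, "b": 0.8}
estimates = {"a": 0.0, "b": 0.0}   # the agent's learned value estimates
counts = {"a": 0, "b": 0}
epsilon = 0.1                       # how often the agent explores

for _ in range(2000):
    if random.random() < epsilon:
        action = random.choice(["a", "b"])          # explore at random
    else:
        action = max(estimates, key=estimates.get)  # exploit best guess
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    # Incrementally update the running average reward for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(estimates, key=estimates.get)
print(best)  # the agent converges on the higher-reward action, "b"
```

The loop works only because the environment hands back a clear numeric score. For problems where "success" involves emotions, ethics, or social context, no such score exists, which is why the article calls this approach inadequate for those tasks.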
In a nutshell, there are numerous compelling reasons to keep human users directly involved in how artificial intelligence projects work. Even the best artificial intelligence technologies can do a lot of thinking on their own, but without an actual biological human brain that can process things like emotions and social mores, they simply can't see the big picture in a human way.
A skilled machine learning company can help strike this balance by pairing business and subject-matter experts with machine learning developers who have the skills to solve big business problems.