The Cambridge Analytica scandal, in which a political consulting firm used data from Facebook without users’ knowledge or consent, illustrated many of the problems associated with the collection and use of user data. While end-user license agreements often specify how users’ data may be used, many social media users never read the fine print.
Another problem is that machine learning models can be “black boxes”: even their developers may be unable to see how a model works internally or explain why it reached a particular decision.
Medical diagnosis is one such area. An algorithm might, for example, examine X-rays to detect cancer. A human doctor can explain the reasoning behind a diagnosis, but we may have no way of knowing how a machine learning model decided whether a patient has cancer.
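To make the contrast concrete, here is a minimal, purely illustrative sketch (all function names, rules, and weights are hypothetical, not from any real diagnostic system). The first classifier is rule-based, so every decision can be traced to a stated rule; the second applies opaque learned weights, so it can only report an answer, not a reason.

```python
def interpretable_diagnosis(tumor_size_mm, opacity):
    """Rule-based classifier: every decision cites the rule that produced it."""
    if tumor_size_mm > 10:
        return "suspicious", "rule fired: tumor larger than 10 mm"
    if opacity > 0.8:
        return "suspicious", "rule fired: opacity above 0.8"
    return "benign", "no rule fired"

# Hypothetical weights produced by some training process; the numbers
# themselves carry no human-readable meaning.
_LEARNED_WEIGHTS = [0.042, -1.3, 0.77]

def black_box_diagnosis(features):
    """Learned scoring function: outputs a label, but no explanation of
    *why* the weights take the values they do."""
    score = sum(w * x for w, x in zip(_LEARNED_WEIGHTS, features))
    return "suspicious" if score > 0.5 else "benign"

label, reason = interpretable_diagnosis(12, 0.3)
print(label, "-", reason)                    # the rule-based model justifies itself
print(black_box_diagnosis([20, 0.1, 0.9]))   # the learned model only gives a label
```

Real systems are far more complex, but the asymmetry is the same: the rule-based model can answer “why?”, while the learned model’s answer is buried in its parameters.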
Another issue is bias in machine learning training data. There have been several instances of racial and other biases unintentionally making their way into machine learning systems: one algorithm labeled photos of Black people as gorillas, and another altered the facial features of people of color to make them look more “European” while claiming to beautify them.
One way to counteract such bias is to bring more people from diverse backgrounds into the AI field.
A final concern is the safe use of machine learning and artificial intelligence. AI and machine learning systems could develop behavior their designers never intended, such as resisting attempts to turn them off.