The advance of technology has the potential to truly democratize access to information and opportunity. However, in some cases it is being used in ways that reinforce the notion that in our society some people are more equal than others.

That is what we’ve seen in the following seven instances, in which artificial intelligence (AI) is either deliberately used to exclude certain categories of people or simply reflects the bias embedded by its human programmers, with a discriminatory effect.

The AI Beauty Bias

Beauty may be in the eye of the beholder, but when that subjective view is used to train AI, the bias gets built into the program. Rachel Thomas reported on one such episode in a beauty competition run by beauty.ai in 2016. The results showed that lighter complexions were rated as more attractive than darker ones.

The following year, “FaceApp, which uses neural networks to create filters for photographs, created a ‘hotness filter’ that lightened people's skin and gave them more European features.”

The Gender Bias in Languages

Thomas also cites a documented example of translations that carry over stereotyped expectations of careers. The starting point is two sentences: "She is a doctor. He is a nurse."

If you then translate them to Turkish and back into English, you’d get the kind of results you might have expected from a game of telephone.

Instead of getting what you started out with, you’d get the 1950s kind of expectation: "He is a doctor. She is a nurse." She explains that this happens because Turkish uses a gender-neutral singular pronoun, so the translation back into English has to assign a gender, and it does so based on stereotypical expectations. (Read Women in AI: Reinforcing Sexism and Stereotypes with Tech.)
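
As a rough illustration of the round-trip experiment Thomas describes, here is a minimal Python sketch. The translate() helper is a hypothetical stand-in for whatever machine-translation service you use; the point is only the shape of the test, not a specific API.

# A minimal sketch of the English -> Turkish -> English round trip.
# translate() is a hypothetical placeholder, not a real library call;
# plug in the machine-translation backend of your choice.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder: call your translation service of choice here."""
    raise NotImplementedError("plug in a real translation backend")

def round_trip(sentence: str) -> str:
    """Translate English -> Turkish and back, returning the result."""
    turkish = translate(sentence, source="en", target="tr")  # Turkish 'o' is gender-neutral
    return translate(turkish, source="tr", target="en")      # gender is re-assigned on the way back

original = "She is a doctor. He is a nurse."
result = round_trip(original)
# If the model carries stereotyped priors, the round trip can come back as
# "He is a doctor. She is a nurse." instead of the original sentence.
print(original, "->", result)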

While racial and gendered biases filtering into images and language are cause for vexation, they are not quite the same thing as active discrimination resulting from AI, but that has happened, as well.

Discrimination Traced to Facebook Ad Targeting

“Facebook Lets Advertisers Exclude Users by Race” declared a ProPublica headline in 2016. It likened the way ads work on the social network to an extension of “the Jim Crow era” in directing ads to white people only.

1. Racial discrimination in housing

Its proof was a screenshot of the targeting options for a Facebook ad in its housing category, which allowed the advertiser to narrow the audience by checking off exclusions of categories like African Americans, Asian Americans or Hispanics.

As ProPublica points out, the discriminatory effect of such ads is illegal under both the Fair Housing Act of 1968 and the Civil Rights Act of 1964. Facebook’s only defense in this case was that the ad was not for housing itself, as it wasn’t about a property or home for sale or rent.

However, there have been other instances of targeting that indicate racial bias, and they have motivated various entities to bring civil suits against the social network. As Wired reported, in March 2019 Facebook finally agreed to adjust its ad-targeting technology as part of a settlement of five legal cases charging it with enabling discrimination against minorities through ads.

In its report on the settlement, the ACLU pointed out how insidious such targeted ads could be, as minorities and women may not even realize they are not given the same access to information, housing, and job opportunities that are shared with white men.

As more people turn to the internet to find jobs, apartments and loans, there is a real risk that ad targeting will replicate and even exacerbate existing racial and gender biases in society. Imagine if an employer chooses to display ads for engineering jobs only to men — not only will users who aren’t identified as men never see those ads, they’ll also never know what they missed.

After all, we seldom have a way to identify the ads we aren't seeing online. That this discrimination is invisible to the excluded user makes it all the more difficult to stop.

2. Gender and age discrimination in jobs

Among the legal cases was the illegal housing discrimination that Facebook’s targeting allowed. In its report on the Facebook settlement, ProPublica said that it had tested the platform and succeeded in purchasing “housing-related ads on Facebook that excluded groups such as African Americans and Jews, and it previously found job ads excluding users by age and gender placed by companies that are household names.”

Another Wired article featured a number of job ads the ACLU found that were explicitly aimed only at men in a particular age bracket, something users could discover by clicking on the explanation of why they were shown that particular ad. The ACLU filed a charge with the Equal Employment Opportunity Commission against the social network and the companies that placed the ads, on the grounds that they were in violation of both labor and civil rights laws.

Discriminating against job applicants over 40 violates the federal Age Discrimination in Employment Act (ADEA). But targeting job ads only to people below that age is one of the things the Facebook platform enabled.

ProPublica made that the focus of one of its reports, exposing which job ads capitalized on this illegal form of exclusion by age. The “household names” include Verizon, UPS, Uber, Target, State Farm, Northwestern Mutual, Microsoft, J Street, HubSpot, IKEA, Fund For The Public Interest, Goldman Sachs, OpenWorks, and Facebook itself, among others.

Facial Recognition Fail

“Facial Recognition Is Accurate, if You’re a White Guy” declared the headline of a New York Times article published in February 2018. It cited results that found a distinct correlation between skin tone and faulty identification:

“The darker the skin, the more errors arise — up to nearly 35% for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.”

The findings are credited to Joy Buolamwini, a researcher at the MIT Media Lab and the founder of the Algorithmic Justice League (AJL). Her research focuses on the biases that underlie AI and produce such skewed outcomes when it comes to recognizing faces that do not fit the white male norm the models were built around.

Buolamwini presented the racial and gender bias problem in facial recognition in a 2017 TED talk, which she referred to in early 2018 in the video on The Gender Shades Project from the MIT Media Lab.

The message spelled out in the description of the video is that unchecked AI bias "will cripple the age of automation and further exacerbate inequality if left to fester." The risk is nothing less than "losing the gains made with the civil rights movement and women's movement under the false assumption of machine neutrality."

The video description adds the warning many others have now pointed out, as we’ve seen in Women in AI: Reinforcing Sexism and Stereotypes with Tech: "Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices—the coded gaze—of those who have the power to mold artificial intelligence."

On January 25, 2019, Buolamwini published a Medium post that drew on her own research and that of additional researchers pointing out how AI flaws result in errors in Amazon’s Rekognition, and demanded that the company stop selling the AI service to police departments.

While Rekognition could boast of 100% accuracy for recognizing light-skinned males and 98.7% accuracy even for darker males, when it came to females, the accuracy dropped to 92.9% for lighter females. Even more glaring was the sharp drop to just 68.6% accuracy for darker females.

But Amazon refused to relent. A VentureBeat article quoted a statement from Dr. Matt Wood, general manager of deep learning and AI at AWS, in which he insisted that the researchers’ findings did not reflect how the AI is actually used, explaining:

“Facial analysis and facial recognition are completely different in terms of the underlying technology and the data used to train them. Trying to use facial analysis to gauge the accuracy of facial recognition is ill-advised, as it’s not the intended algorithm for that purpose.”

But it’s not just those affiliated with major research centers who have found the algorithms to be very problematic. The ACLU ran its own test at a most reasonable cost of $12.33, according to the Gizmodo report. It found that Rekognition matched 28 members of Congress with photos of criminals.

“The false identifications were made when the ACLU of Northern California tasked Rekognition with matching photos of all 535 members of Congress against 25,000 publicly available mugshot photos.”

Eleven of the 28 false matches, about 39%, were of people of color, a disproportionate share, while the false-match rate across all 535 members was closer to 5%. Six members of the Congressional Black Caucus, who were among those Rekognition linked to mugshots, expressed their concern in an open letter to Amazon’s CEO.
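
For anyone who wants to check those figures, the arithmetic uses only the counts reported above; a minimal Python sketch:

# Reproducing the ratios behind the ACLU's Rekognition test,
# using only the counts reported above.
total_members = 535       # member photos matched against 25,000 mugshots
false_matches = 28        # members wrongly matched to a mugshot
false_matches_poc = 11    # of those false matches, people of color

overall_rate = false_matches / total_members     # about 5.2%
poc_share = false_matches_poc / false_matches    # about 39.3%

print(f"Overall false-match rate: {overall_rate:.1%}")
print(f"Share of false matches that were people of color: {poc_share:.1%}")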

Recidivism Bias

The bias embedded in AI against people of color becomes a more serious problem when it means more than just an error in identification. That was the finding of another ProPublica investigation in 2016. The consequences of such bias are nothing less than a threat to individual freedom on one side, coupled with ignoring the real risk posed by the person whose skin color is favored by the algorithm on the other.

The article referred to two parallel cases involving one white perpetrator and one black one. An algorithm was used to predict which one was likely to break the law again. The black one was rated a high risk, and the white one a low risk.

The prediction got it completely wrong: the supposedly low-risk white offender went on to commit new crimes and was imprisoned again. This is extremely problematic because courts do rely on the scoring in deciding on parole, which means the racial bias factored into the program results in unequal treatment under the law.

ProPublica put the algorithm to its own test, comparing the risk scores of over 7,000 people who were arrested in Broward County, Florida, in 2013 and 2014 against whether new criminal charges were brought against them in the following two years.

What they found was that only 20% of the people predicted to commit violent crimes actually did so, and even when more minor offenses were counted, just 61% of those with scores indicating risk went on to commit new crimes.

The real problem is not just the lack of accuracy but the racial bias involved:

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.

In effect, this translated into a false-positive rate of 45% for black defendants versus 24% for white defendants. Despite that glaring statistic, Thomas reported that the Wisconsin Supreme Court still upheld the use of this algorithm. She also details other problems associated with recidivism algorithms.
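
At its core, ProPublica’s comparison is a group-wise false positive rate: among defendants who did not re-offend, what share had been labeled high risk? Here is a minimal Python sketch of that calculation, on made-up records rather than ProPublica’s actual Broward County data.

# A sketch of the group-wise false-positive-rate comparison underlying
# ProPublica's analysis. The records are invented for illustration; the
# real dataset covered more than 7,000 Broward County defendants.
from collections import defaultdict

# Each record: (race, labeled_high_risk, reoffended_within_two_years)
records = [
    ("black", True,  False),
    ("black", True,  True),
    ("black", False, False),
    ("black", True,  False),
    ("white", False, True),
    ("white", True,  True),
    ("white", False, False),
    ("white", False, False),
]

def false_positive_rates(rows):
    """Among people who did NOT re-offend, the share wrongly labeled high risk, per group."""
    flagged = defaultdict(int)   # non-reoffenders labeled high risk
    total = defaultdict(int)     # all non-reoffenders
    for race, high_risk, reoffended in rows:
        if not reoffended:
            total[race] += 1
            if high_risk:
                flagged[race] += 1
    return {race: flagged[race] / total[race] for race in total}

# ProPublica found roughly 45% for black defendants vs. 24% for white defendants.
print(false_positive_rates(records))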