In short: Yes, they can, and they're even better at it than humans. According to a highly disconcerting study from Stanford University, pictures of human faces contain a wealth of information that the human brain simply cannot process, but artificial intelligence can. In the researchers' experiments, a deep neural network was able to distinguish between gay and heterosexual men in 81 percent of cases, and between gay and heterosexual women in 74 percent of cases, compared with just 61 percent and 54 percent, respectively, for human judges. When the algorithm had at least five images of a person to scan, its success rate rose to 91 percent for men and 83 percent for women.
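To make the mechanics less abstract, here is a minimal sketch of how a face-based classifier of this kind is commonly assembled: a pretrained network turns each photo into a numeric embedding, a simple linear model is trained on those embeddings, and per-image predictions are averaged across a person's photos (which is why having five images improved the reported accuracy). The function names and random data below are illustrative assumptions, not the study's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical pipeline: each face image has already been converted into a
# fixed-length embedding by a pretrained face-recognition network; any
# embedding model would do for the purposes of this sketch.

def train_classifier(embeddings: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """Fit a simple linear classifier on face embeddings (one row per image)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings, labels)
    return clf

def predict_person(clf: LogisticRegression, person_embeddings: np.ndarray) -> float:
    """Average per-image probabilities across all images of one person.

    Pooling several photos is what pushed the reported accuracy from
    81 percent (single image) toward 91 percent (five or more images) for men.
    """
    probs = clf.predict_proba(person_embeddings)[:, 1]
    return float(probs.mean())

# Toy usage with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 128))   # 200 face embeddings, 128-dim each
y_train = rng.integers(0, 2, size=200)  # binary labels
clf = train_classifier(X_train, y_train)
print(predict_person(clf, rng.normal(size=(5, 128))))  # 5 photos of one person
```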
Not only did the AI pick up on "gender-atypical" cues such as grooming style and fashion choices; it also identified specific phenotypic traits in facial features, such as gay men tending to have narrower jaws, larger foreheads and longer noses. Setting aside the potential implications of this finding (such as the idea that sexual orientation might be linked to certain genetic characteristics), it is quite frightening how harmful the applications of this technology could be. For example, it could be used by governments that prosecute LGBT people to "screen" them, or simply to violate the privacy of countless users for all kinds of malicious purposes, targeted marketing being the least evil of them all.
What makes it even more disconcerting, however, is that similar technologies already exist and are already being used. AI can detect much more than just sexuality by looking at a picture of a human face: it can infer emotions, IQ and even political preferences. Similar AI-powered psychometric profiling technologies have been used to mine data from Facebook profiles and draw conclusions about personal preferences and lifestyle choices. In this way, voters can be shown only a narrow, targeted subset of political ads designed to subtly steer their political choices.
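A comparable profiling pipeline over social-media data can be sketched in a few lines: a sparse user-by-like matrix is compressed with a matrix-factorization step and fed to a linear model that predicts some personal trait. The data, dimensions and trait below are invented purely for illustration and are not drawn from any real profiles.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sketch: rows are users, columns are pages they "liked"
# (1 = liked). The like matrix is compressed with SVD and the components
# are fed to a simple linear model; all values here are synthetic.
rng = np.random.default_rng(42)
likes = sparse_random(1000, 5000, density=0.01, random_state=42, format="csr")
likes.data[:] = 1.0                       # binary like matrix
trait = rng.integers(0, 2, size=1000)     # e.g. a self-reported binary trait

model = make_pipeline(
    TruncatedSVD(n_components=100, random_state=42),  # compress the likes
    LogisticRegression(max_iter=1000),                # predict the trait
)
model.fit(likes, trait)
print(model.predict_proba(likes[:3])[:, 1])  # predicted probabilities for 3 users
```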
The creator of this experiment, the psychologist Michal Kosinski, was so concerned about the possible risks of this technology being used for malicious purposes that he and his team spent a long time debating whether the results should be made public at all. The political consultancy Cambridge Analytica, in fact, allegedly used information gathered from social networks to run ads that influenced the 2016 U.S. presidential election, and possibly the British Brexit campaign as well. According to the ongoing investigation, a vast army of bots spread precisely targeted fake news stories about Hillary Clinton to steer her potential voters toward Donald Trump. Many of these ads were generated on the fly by the algorithms and shown to people during major events or electoral debates. The AI also measured people's reactions, refining its effectiveness at convincing Clinton's voters that she was an evil and deranged person.
After the scandal broke, the agency was shut down, but similar technologies still exist and can still be used for malicious purposes. Kosinski gave many talks warning about this risk long before the scandal, but, sadly, human nature cannot be changed. In an unpublished experiment, he claimed his AI was able to distinguish between the faces of Republicans and Democrats, although he admitted that beards could throw it off. So for all the conspiracy theorists (and privacy-savvy people) out there, here's a big hint: if you want to keep the government from poking into your private life, just grow a beard. A huge one, if possible.