We count on machine learning systems for everything from creating playlists to driving cars, but like any tool, they can be bent toward dangerous and unethical purposes, as well. Today’s illustration of this fact is a new paper from Stanford researchers, who have created a machine learning system that they claim can tell from a few pictures whether a person is gay or straight.

The research is as surprising as it is disconcerting. In addition to exposing an already vulnerable population to a new form of systematized abuse, it strikes directly at the egalitarian notion that we can’t (and shouldn’t) judge a person by their appearance, nor guess at something as private as sexual orientation from something as simple as a snapshot or two. But the accuracy the paper reports seems to leave little room for doubt: this is not only possible, it has been achieved.

It relies on cues apparently more subtle than most can perceive — cues many would suggest do not exist. And it demonstrates, as it is intended to, a class of threat to privacy unique to the imminent era of ubiquitous computer vision.

Before discussing the system itself, it should be made clear that this research was by all indications done with good intentions. In an extensive set of authors’ notes that anyone commenting on the topic ought to read, Michal Kosinski and Yilun Wang address a variety of objections and questions. Most relevant are perhaps their remarks as to why the paper was released at all:

We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.

We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.

Certainly this is only one of many systematized attempts to derive secret information such as sexuality, emotional state or medical conditions. But it is a particularly concerning one, for several reasons.

Seeing what we can’t (or won’t)

The paper, due to be published in the Journal of Personality and Social Psychology, details a rather ordinary supervised-learning approach to the question of whether people can be identified as gay or straight from their faces alone. (Note: the paper is still in draft form.)

Using a database of facial imagery (from a dating site that makes its data public), the researchers collected 35,326 images of 14,776 people, with (self-identified) gay and straight men and women all equally represented. Their facial features were extracted and quantified: everything from nose and eyebrow shape to facial hair and expression.

A deep learning network crunched through all these features, finding which tended to be associated with individuals of a given sexual orientation. The researchers didn’t “seed” this with any preconceived notions of how gay or straight people look; the system merely correlated certain features with sexuality and identified patterns.
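The process described above — quantified facial features plus self-reported labels fed to a classifier that learns whatever correlations exist, with no preconceptions built in — can be sketched generically. To be clear, this is not the researchers’ code: the data here is random stand-in data, and the classifier is a deliberately simple nearest-centroid rule for illustration.

```python
# Generic sketch of a supervised approach of this kind: feature vectors
# (standing in for quantified facial features) plus binary labels go into
# a classifier, which learns the correlation on its own. All values are
# random stand-ins, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_features = 2000, 50                  # stand-in sizes
X = rng.normal(size=(n_people, n_features))      # "quantified facial features"
w = rng.normal(size=n_features)                  # hidden correlation to be discovered
y = (X @ w + rng.normal(scale=4.0, size=n_people)) > 0   # noisy binary label

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

# "Training": the mean feature vector of each labeled class.
centroid_pos = X_train[y_train].mean(axis=0)
centroid_neg = X_train[~y_train].mean(axis=0)

# Prediction: whichever centroid a held-out face is closer to.
d_pos = np.linalg.norm(X_test - centroid_pos, axis=1)
d_neg = np.linalg.norm(X_test - centroid_neg, axis=1)
accuracy = np.mean((d_pos < d_neg) == y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is that nothing about the label is hand-coded: the classifier recovers the hidden correlation purely from examples, which is all the paper’s system does, at much larger scale and with a deep network.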

Composite faces created from thousands of profile pictures, illustrating the slight differences observed by the system.

These patterns can be searched for in facial imagery to let the computer guess at the subject’s sexuality — and it turns out that the AI system is significantly better than humans at this task.

When presented with multiple pictures of a pair of faces, one gay and one straight, the algorithm could determine which was which 91 percent of the time with men and 83 percent of the time with women. Humans shown the same images were correct 61 and 54 percent of the time, respectively — not much better than flipping a coin.
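That pairwise, forced-choice accuracy has a simple definition: out of all possible gay/straight pairs, the fraction in which the model scores the gay face higher. The tiny worked example below uses invented scores, not anything from the paper.

```python
# Pairwise (forced-choice) accuracy: the fraction of gay/straight pairs
# in which the model scores the gay face higher. The scores below are
# invented purely for illustration.
from itertools import product

gay_scores = [0.9, 0.7, 0.6, 0.4]       # made-up model scores, gay faces
straight_scores = [0.8, 0.5, 0.3, 0.2]  # made-up model scores, straight faces

wins = sum(g > s for g, s in product(gay_scores, straight_scores))
ties = sum(g == s for g, s in product(gay_scores, straight_scores))
pairwise_acc = (wins + 0.5 * ties) / (len(gay_scores) * len(straight_scores))
print(f"pairwise accuracy: {pairwise_acc:.2f}")  # → pairwise accuracy: 0.75
```

This quantity is equivalent to the AUC of the model’s scores, which is why a high pairwise figure does not by itself say how the system performs on a realistic population.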

The variation between the four groups is described in a second paper; apart from obvious behavioral differences like one group grooming or doing make-up one way, the general trend was toward “feminine” features in gay men and “masculine” features in lesbians.

This diagram shows where features were found that were predictive of sexual orientation.

This accuracy, it must be noted, applies only in the system’s ideal situation of choosing between two people, one of whom is known to be gay. When the system evaluated a group of 1,000 faces, only 7 percent of which belonged to gay people (a proportion closer to that of the actual population), it did relatively poorly: only the 10 faces it ranked as most likely to be gay showed a 90 percent hit rate.
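The drop-off is a textbook base-rate effect: when only 7 percent of a pool is gay, even a fairly sensitive and specific classifier flags far more straight people than gay people. The sensitivity and specificity below are illustrative assumptions, not numbers from the paper.

```python
# Why strong pairwise accuracy still fails at realistic base rates.
# Sensitivity/specificity here are assumed values for illustration,
# not figures from the study.
prevalence = 0.07      # 70 gay people per 1,000, as in the paper's test pool
sensitivity = 0.85     # assumed true-positive rate
specificity = 0.85     # assumed true-negative rate

pool = 1000
true_pos = prevalence * pool * sensitivity                # 59.5 correctly flagged
false_pos = (1 - prevalence) * pool * (1 - specificity)   # 139.5 wrongly flagged
precision = true_pos / (true_pos + false_pos)
print(f"flagged: {true_pos + false_pos:.0f}, precision: {precision:.2f}")
# → flagged: 199, precision: 0.30
```

Under these assumed rates, roughly 70 percent of the people the system flags would be straight — which is exactly why the two-face benchmark overstates real-world power.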

There is also the possibility of bias in the data: The dating site could have been dominated by people of a certain race, and contain photos that were more indicative of sexuality than what the person might put on Facebook or how they might appear in snapshots. This could affect the system’s real-world effectiveness (if it were to be deployed, which it isn’t). I’ve asked the researchers for more details on this topic.

Facing the truth

The allegation that this technology is no better than “Phrenology 2.0,” a fraudulent linkage of unrelated physical features to complex internal processes, is understandable but not really true.

“Physiognomy is now universally, and rightly, rejected as a mix of superstition and racism disguised as science,” the researchers write in the paper’s introduction (I understand them to refer to phrenology, not physiognomy). But the pseudoscience was a failure because its practitioners worked backwards. They decided who evinced “criminality” or “sexual deviance” and then decided which features those people had in common.

It turns out people are just plain bad at this kind of task — we see too much of ourselves in people, and we act and perceive instinctively. But it doesn’t mean that visible features and physiology or psychology are unrelated. No one doubts that machines can pick up on things humans don’t have the capacity to perceive, from minute chemical traces of bombs or drugs to subtle patterns in MRI scans that indicate cancer precursors.

The paper suggests that the computer vision system is picking up on equally subtle patterns, both artificial and natural, but primarily the latter; the researchers were careful to focus on features that can’t be altered easily. It’s supportive of the hypothesis that sexuality is biologically determined, possibly by different fetal hormone exposure, which could then cause slight physiological differences in addition to affecting whom one is attracted to. The patterns observed by the system are similar to those predicted by the prenatal hormone theory.

This begins to verge on a larger social and philosophical debate regarding love, sexuality, determinism and the reduction of emotions and personalities to mere biological traits. But this isn’t really the time or place and honestly, I’m not really the guy. So I’m going to steer clear of that and veer toward the other major issue, which is that of privacy.

Dangers and limitations

The implications of automatically determining a person’s sexuality from a handful of photos are enormous, first and most importantly because it puts LGBTQ people at risk worldwide in places where they remain an oppressed minority. The potential for abuse of a system like this is huge, and some may rightly disagree with the researchers’ decision to create and document it.

But it’s also arguable that it’s better to have the possibility out in the open in order to either implement countermeasures or otherwise allow people to prepare themselves for it. After all, it’s also possible to determine sexuality and many other private traits by analyzing Facebook or Twitter feeds.

The difference there is that you can obfuscate or control the information you put online; you can’t control who sees your face. It’s a surveilled society we live in, and increasingly a surveillant one, as well.

“Even the best laws and technologies aimed at protecting privacy are, in our view, doomed to fail,” write the researchers. “The digital environment is very difficult to police; data can be easily moved across borders, stolen, or recorded without users’ consent.”

The cat is out of the bag, in other words. And it’s probably better that it’s being shown by researchers sympathetic to the people most affected than by the FBI or some other party doing it for their own agenda.

This is an enormously complex issue in countless ways, but it’s also a fascinating demonstration of the power of artificial intelligence to do something relatively easily that we’re unable to do ourselves. The implications of the technology are, as always, mixed, but it’s interesting how this kind of research helps define and guide humanity.

The researchers’ assessment of how this will play out isn’t optimistic per se, but it is refreshingly realistic in an age when tech is expected to play the savior but is just as likely to be the villain.

“We believe that further erosion of privacy is inevitable, and the safety of gay and other minorities hinges not on the right to privacy but on the enforcement of human rights, and tolerance of societies and governments,” they write. “In order for the post-privacy world to be safer and hospitable, it must be inhabited by well-educated people who are radically intolerant of intolerance.”

Tech won’t save us from ourselves — we might have to save ourselves from tech.

Featured Image: Steffi Loos/Getty Images
