Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not. Basically, facial appearance is used to predict personality traits, types, or behaviors. The company claims it has already sold technology to a homeland security agency to help identify terrorists. This does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.

Face interpretation

The most obvious ethical problem is that faces stay the same whether or not you have committed a crime. Presumably actually becoming a terrorist does not suddenly warp the face: the detector ought to have gone off as soon as the person reached adulthood. A face also does not tell what side somebody is on: people who would be terrorists in one country might be law enforcement in another. I suspect that what Faception actually does is look at facial expression, building on the post-9/11 attempts at detecting “malintent” – a generally underwhelming domain that seems to boil down to looking for nervous, stressed people.

That leads to a much more serious problem: false positives. Even a detector that catches real terrorists 100% of the time and triggers on innocent people only 1% of the time will flag an enormous number of innocents. Terrorists make up a minuscule fraction of the population (even if we assume 100 individual terrorists behind each of the 13,463 attacks in 2014, they are still just 2 in 10,000), so for every terrorist caught, about 50 innocent people would be flagged. And this assumes an amazingly accurate detector: in reality it is not only going to miss some terrorists, it is likely to have a far higher false positive rate. Even if the consequence of a false positive is merely supposed to be an extra thorough check at the airport, we have good reason to think that confirmation bias can lead to far worse outcomes.
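
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of the base rate problem (the numbers are the illustrative ones above, not real detector statistics):

```python
# Base rate problem: even a very good detector mostly flags innocent people
# when the thing it screens for is rare.

base_rate = 2 / 10_000      # assumed fraction of terrorists in the screened population
sensitivity = 1.0           # generous assumption: every real terrorist is caught
false_positive_rate = 0.01  # 1% of innocent people are wrongly flagged

population = 1_000_000
terrorists = population * base_rate                # 200
innocents = population - terrorists                # 999,800

true_positives = terrorists * sensitivity          # 200
false_positives = innocents * false_positive_rate  # ~10,000

print(f"Real terrorists flagged: {true_positives:,.0f}")
print(f"Innocent people flagged: {false_positives:,.0f}")
print(f"Innocents per terrorist: {false_positives / true_positives:.0f}")  # ~50
```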

There is also a problem of biases in the training data. The Washington Post article mentions a case where a machine learning system for distinguishing wolves and dogs worked fine on training pictures but failed in reality: it was actually looking for snow, since the wolf pictures were all taken in winter. I have heard similar stories about detecting enemy and friendly airplanes, ruined by differences in weather (the friendly plane pictures were from air shows and military demonstrations, while the photos of enemies were taken in the field). This is a well-known problem in machine learning, and I hope the company tries to compensate for it. But I am pessimistic: how do you get a good database of terrorist faces and a matching set of non-terrorists? Especially if your terrorist sample is ethnically skewed, you may end up with an ethnicity detector sold as a terrorist detector. Would the pedophile face database perhaps consist of pictures taken in police custody?
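
A toy demonstration of this failure mode, using made-up synthetic data as a stand-in for the snowy wolf photos: if the label happens to correlate with an irrelevant background feature during training, the classifier will cheerfully learn that feature and then fall apart once the correlation disappears.

```python
# Spurious-correlation sketch: the only usable signal in the training data
# is a confound ("snow in the background"), so that is what gets learned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, confounded):
    labels = rng.integers(0, 2, n)             # 1 = "wolf", 0 = "dog"
    noise = rng.normal(size=(n, 5))            # stand-in for uninformative image features
    if confounded:
        snow = labels + rng.normal(0, 0.1, n)  # snow almost only appears with wolves
    else:
        snow = rng.normal(size=n)              # snow unrelated to the label
    return np.column_stack([noise, snow]), labels

X_train, y_train = make_data(2000, confounded=True)
X_test, y_test = make_data(2000, confounded=False)

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on confounded training data:", model.score(X_train, y_train))  # close to 1.0
print("Accuracy in the field:               ", model.score(X_test, y_test))    # around 0.5
```

The worry about an ethnically skewed terrorist database is exactly this pattern: the ethnicity becomes the snow.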

There is also good reason to think that homeland defense machine learning validation is often not up to spec. NSA’s SKYNET program seems to have used training methods that produce extremely biased results (an expert I discussed it with pointed out several other flaws), likely producing not just false positives but a strong tendency to label certain well-connected, frequently travelling people, such as journalists, as terrorists. Here scarcity of good training data likely combined with the protection from criticism that secret, closed projects enjoy.
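
To see why scarce positive examples make validation treacherous (a toy sketch with random data, not a reconstruction of what SKYNET actually did), consider a flexible model fitted to a large dataset containing only a handful of known positives: it can look nearly perfect on the data it was fitted to while being no better than chance on new cases.

```python
# With very few positive examples, in-sample performance says almost nothing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

n, n_features, n_positives = 2000, 50, 7
X = rng.normal(size=(n, n_features))      # pure noise standing in for metadata features
y = np.zeros(n, dtype=int)
y[:n_positives] = 1                       # only a handful of known positives

model = RandomForestClassifier(random_state=0).fit(X, y)
print("In-sample AUC:    ", roc_auc_score(y, model.predict_proba(X)[:, 1]))  # close to 1.0

X_new = rng.normal(size=(n, n_features))  # fresh data with the same (non-existent) signal
y_new = np.zeros(n, dtype=int)
y_new[:n_positives] = 1
print("Out-of-sample AUC:", roc_auc_score(y_new, model.predict_proba(X_new)[:, 1]))  # around chance
```

With only seven positives even the out-of-sample number jumps around between runs, which is part of the problem: there is simply not enough data to tell a working detector from a useless one.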

Faception’s chief executive/ethics officer proudly states that the company will never make classifiers for predicting negative traits available to the general public. But the real ethics problem is that, judging by our experience with similar technology, the classifiers will most likely never be made available for independent third-party testing either. The deception detection industry is filled with bold claims aimed at corporate, law enforcement and military customers, but independent testing and peer-reviewed publication are very rare. Since the customers come regardless of proper science, there is no point in doing it.

There are certainly warning signs on the company web page, which conflates the fact that genes influence behavior and the fact that genes influence facial appearance into the conclusion that the face indicates personality and behavior. This is obvious nonsense (beyond the trivial effects of habitual expressions leaving wrinkles), and it is “supported” by a reference to ancient Chinese practice! But even if the motivation or the stated theory is wrong, a machine learning system can still pick up patterns: the problem is whether it is the claimed pattern, and whether users realize how to interpret the signal correctly.

Face recognition

While face interpretation is likely just rehashed physiognomy, face recognition has improved dramatically over the past few years. The improvement comes not just from better software but also from the availability of massive photo databases conveniently tagged by identity: social networks.

The Russian company Findface can identify people from crowd photos with 70% accuracy using the images in the social network Vkontakte. This has proven very popular. The law enforcement and commercial applications are obvious. But the system has also led to online vigilantes finding the social media profiles of female porn actors. It is not hard to see how such a system could be used to harass people or implode the necessary separation of some social personas. Or just bring creepy dating to new levels:

Kabakov says the app could revolutionise dating: “If you see someone you like, you can photograph them, find their identity, and then send them a friend request.” The interaction doesn’t always have to involve the rather creepy opening gambit of clandestine street photography, he added: “It also looks for similar people. So you could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages.”

There is also the issue of database power. For such a system to work it needs a database of tagged photos. These databases are currently owned by a few social network actors, who will no doubt cheerfully exploit their commercial potential (in principle one could gather a name-image database by randomly scraping the internet, but this would be extremely slow and cumbersome: the power, and hence the responsibility, is likely to be very asymmetric).

Facial recognition has some of the same problems as face interpretation: there is going to be a certain false identification rate, and some faces are going to be easier or harder to find (quite likely influenced by race). But most importantly, it is the interpretation and use of the information by a human (or machine) actor that really carries moral weight. A system that automatically detects convicted criminals entering a store and alerts the guards seems feasible. But would it be able to distinguish the repeat offender from the person trying to live a straight life?
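
The scale of the search also matters. A rough sketch of the arithmetic (with a made-up per-comparison error rate, since real face recognition error rates vary widely with image quality and demographics): when a probe face is compared against a large database, even a tiny false match rate per comparison makes spurious hits almost certain.

```python
# One-to-many identification: the chance of at least one false match grows
# quickly with the size of the gallery being searched.
false_match_rate = 1e-5  # assumed probability of wrongly matching any one stranger

for gallery_size in (1_000, 100_000, 10_000_000):
    p_any_false_match = 1 - (1 - false_match_rate) ** gallery_size
    print(f"{gallery_size:>10,} faces in the database -> "
          f"P(at least one false match) = {p_any_false_match:.2f}")
```

A match from a nationwide gallery is therefore far weaker evidence than a match against a short watch list, even with identical software.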

When Faception’s Shai Gilboa states that negative classifiers will not be made available to the public he misses the real problem: even positive classifiers can be problematic. A “gaydar app” would be amusing in the UK, but hardly in Russia.

We are likely approaching a world where everybody can be identified automatically at a distance. Unless people using social media stop tagging pictures of each other, camera sensors somehow fail to become cheaper and smaller, or we all decide to wear masks, we will be identifiable – and there are many interests willing to push in that direction. But identifiability is not the big ethical challenge, just as classifying people is not necessarily problematic. The issue is how people are allowed to act on this information, how hidden its usage can be, and how accountable we can hold the involved actors for both what they do and the errors they will commit.
