Polygraphs: placebo or trial by ordeal?
Chad Dixon, an Indiana man, was recently sentenced to eight months in jail for teaching people how to beat polygraph tests. The sticking point seems to be that US federal authorities use polygraphs for screening applicants and detecting crimes, so if people could get past them they could do all sorts of nefarious things. But the reliability of polygraph tests is highly dubious, and false positives may have stalled many careers. So of course the UK is considering making polygraph testing compulsory for sex offenders, something the blogger Neurobonkers described as a return to trial by ordeal. Is it unethical to teach people to circumvent these tests?
The reliability and validity of polygraph tests are problematic: at the very least, they have not been convincingly demonstrated. Indeed, the field does not seem to progress much: it does not accumulate knowledge, it develops in relative isolation from related fields, and its customers and regulators do not demand better work – even when faults lead to miscarriages of justice.
One simple explanation for this is that polygraphs could just as well be impressive empty boxes and still do their job: their real role is to create a social situation where people talk. As the Washington Post reports:
The president of the American Polygraph Association, T.V. O’Malley, … also acknowledged that some of the polygraph’s value is simply in prompting people to tell the truth.
“It’s kind of like confessing . . . to a priest: You feel a little better by getting rid of your baggage,” O’Malley said. “The same thing often happens with a polygraph examination.”
The test turns answering questions into something different (an inquisition, critics say) and adds a pressure to tell the whole truth because of the implied powers of the machine. This “pipeline to truth” effect makes people less likely to lie – if they believe the polygraph works.
That is of course the real threat of Dixon’s business idea: it undermines faith in the detectors. In a sense polygraphs are placebos that lose their effect if you doubt them. In fact, it is irrelevant whether his method actually works, as long as people think it works and hence that the detectors can be fooled. Reports from the National Academies might not be as persuasive as someone making money from beating the test.
Is it ethical to undermine polygraphs? After all, lying is typically morally bad, and many uses of polygraphs are situations where truthful answers do matter. A criminal planning new crimes might not want to admit it, but it is a good thing – even for him – if he can be prevented from committing them. A person in a position of trust should not lie to get that trust. Polygraphs make people lie a bit less.
Unfortunately, they make people lie less by means of deception: in order to function, polygraphs must be surrounded by the appearance of great accuracy and valid use. The institutions using them must convince not just the interviewed people but their own staff that they are valid, since it is hard to maintain solemnity if one truly doubts a practice. Fortunately, as discussed in the report mentioned above, institutional ignorance and apathy work wonders here, substituting for deliberate deception à la Frankfurt’s “On Bullshit”. However, any external or internal questions about validity are dangerous and must be quickly squashed – too much vested interest is at stake to allow scrutiny of the practice or free inquiry.
(There is also a very real risk of false confessions, where the pipeline to truth effect combined with outside pressure makes vulnerable people lie in an approved way.)
The reason trial by ordeal was unfair is that the accused was subjected to a random test: their guilt had no bearing on whether they passed or not. It is not just to test people this way, even if they are guilty. Similarly, polygraph tests, because of their unreliability, are not fair tests. In fact, they are worse than random in that various extraneous personal factors seem to creep in (including race; the Washington Post article notes that senior management always seem to pass tests that normally block 30–40% of applicants). So helping people to evade them actually improves justice: it gives more weight to other, better ways of finding the truth.
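The base-rate problem behind this unreliability is easy to make concrete. The sketch below uses entirely hypothetical numbers (the accuracy figures are my assumptions, not from the article); only the 30–40% rejection rate echoes the Washington Post piece. Even a test that sounds fairly accurate will flag far more honest applicants than liars when most applicants are honest.

```python
def screening_outcomes(n_applicants, liar_rate, sensitivity, specificity):
    """Return (liars_flagged, honest_flagged) for a screening test.

    sensitivity: fraction of liars the test correctly flags
    specificity: fraction of honest people the test correctly clears
    """
    liars = n_applicants * liar_rate
    honest = n_applicants - liars
    liars_flagged = liars * sensitivity            # true positives
    honest_flagged = honest * (1 - specificity)    # false positives
    return liars_flagged, honest_flagged

# Hypothetical: 1,000 applicants, 5% deceptive, 80% sensitivity, and a
# specificity of 65% (i.e. a test that wrongly flags ~35% of honest
# people, in line with the 30-40% rejection rates mentioned above).
tp, fp = screening_outcomes(1000, 0.05, 0.80, 0.65)
print(f"liars flagged: {tp:.0f}, honest applicants flagged: {fp:.0f}")
```

Under these assumed numbers roughly 40 liars are caught while over 300 honest applicants are wrongly flagged, so a failed test is far more likely to mark an innocent person than a deceptive one.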
The only unfairness is twofold: some people might trust the test to work on them while believing they could defeat it if they took the course, yet be unable to take the course; and institutions may remain under the false assumption that the (now potentially deceptive) results really are good information. In both cases the public announcement of the training (and even better, the public demonstration that it works) serves to remove the unfairness. It would be bad to sell the training secretly, but Dixon did it overtly.
While I have no opinion about the legalities of the Dixon case, or the ethics of his business, it seems to me that undermining current polygraph practices is a good thing. They seem to lack validity and reliability, and they both depend on and reinforce systematic deception. The public has an interest in the state using technology fit for the purpose. If there existed a good test for deception the situation might be different: the moral and practical value of discerning deliberate deceit is high, especially if it is done in a responsible manner. But the way to get to such tests involves forcing institutions to care about tests actually working, and not just looking like they work.
Earlier posts on this blog about deception detection:
- Lie detectors and epistemic duty
- I suggest it was Professor Plum, in the library, with the arsenic: the unreliability of brain experience detection
- A Pipeline to Truth? Fighting Absenteeism with Voice Analysis
- Protecting our borders with snake oil