
The Independent Safeguarding Authority

Anyone who wishes to work with children, even a parent helping out at the school their child attends, is required to undergo vetting by the Independent Safeguarding Authority. The politicians responsible say that this will protect children from paedophiles.

 

Philip Pullman (the children’s author) has refused to be vetted because “It is insulting and I think unnecessary, and I refuse to be complicit in any scheme that assumes my guilt.” (here) As a result he will be banned from reading his books to children in schools. The Children’s Laureate thinks that ‘the scheme [is] “governmental idiocy” which [will] drive a wedge between children and adults’. Arguably, then, believing it right to vet so enormous a number of people (11.3 million by November 2010) corrupts relations between adults and children, and is in part a manifestation of something poisonous in our attitude to adults. So there is a broad question over whether the ISA, simply through existing, has bad consequences and is unjustly disrespectful. That is not what I want to discuss. I want to consider only the issue of the epistemic duty of the politicians who have created it, and of the authority itself.

 

We certainly have an interest in knowing who poses a danger to children (or to anyone else, for that matter). Clearly, if we are going to seek and disseminate this knowledge we have an epistemic duty to get it right, that is to say, to identify all and only those who pose a danger. This duty is stringent because we are actively seeking and disseminating the information, because we have a duty to regard people in a true light, and because errors in either direction may result in harm.

 

The government has decided to pursue the goal of knowing who poses a danger by vetting millions of people and recording information about, and assessments of, each person. The question arises of whether this is a responsible way of fulfilling the epistemic duties consequent on that pursuit. We need to consider the consequences of pursuing it this way, and also to contrast it with other methods, such as focusing on the small number of people who are a proven danger.

Significant errors

First of all, if the ISA were to successfully identify all and only those who pose a danger, then it would at least be epistemically responsible in that respect. Unfortunately, for a number of reasons, we now know that significant errors will occur.

· There will be cases of mistaken identity.

· For millions of the people we are attempting to assess there will be almost no information to assess.

· We will scratch around for any scraps of information; that is to say, we will attend to the poorest-quality data.

· We have already seen that the Enhanced Criminal Record check can be unreliable and unfair because of its use of soft data: you can be marked down on the basis of rumour, and even of events that happen where you live despite your having nothing to do with them. (here)

· When we are faced with scant and poor-quality information we are not good at proportioning our belief to the evidence. Indeed, it can take years of training to make good use of such information, and it is doubtful that those running the register have such training. Rather than our admitting that, if we know little or nothing of someone, we have no reason to think them dangerous, the thinnest information is likely to lead to exaggerated conclusions.

· The overwhelming majority who are not in any way dangerous are not able to prove that they are safe. Some people who are dangerous have so far given no traceable indication that they are dangerous. So a small number of dangerous people are indiscriminable from a very large number of perfectly safe people. We are unwilling to tolerate this kind of uncertainty.

· The incentives faced by the bureaucrats are epistemically skewed. No one loses their job for smearing innocent adults, but if a single one of the millions of people not marked down ever seriously harms a child, the ISA will be blamed. So, despite the fact that it is impossible for the ISA to weed out every dangerous adult, the incentive is to presume guilt.

· There may be no recourse when a safe adult is marked down. Even if there is, the sheer numbers involved will make it almost impossible to get wrongful assessments reconsidered in time to prevent the harm they do.

The general problem

So the ISA will not successfully identify all and only those who pose a danger. Instead it will make some kind of trade-off between identifying all the dangerous and identifying only the dangerous. The general problem is that increasing the certainty of identifying all who pose a danger increases the number of innocent adults we smear by falsely identifying them as dangerous, and, similarly, increasing the certainty of identifying only those who pose a danger increases the number of dangerous adults we fail to identify. Let’s try to get a broad handle on what that trade-off will be.

 

For ease of figuring, let’s suppose we are vetting 10 million people. For simplicity we are going to treat the entire assessment process as a single test which issues one of two verdicts: positive or negative for danger. If we were going to go into depth we would need to consider two different probabilities and the relation between them: the sensitivity of the test, which is the probability of a true positive (that is to say, of a positive verdict given that the adult is dangerous), and the specificity, which is the probability of a true negative (a negative verdict given that the adult is safe). (It is the relation between these two, and any threshold we set for the verdict, that makes reductions in false negatives increase false positives, and vice versa.) For brevity I’m going to set aside those complications and start with an optimistic assumption that both the sensitivity and the specificity are 99%. I don’t know what the base rate of dangerous adults is, but let’s start with a pessimistic assumption that it is 1 in 1000.
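To make the figures below easy to check, here is a minimal sketch in Python (the function name and structure are my own, purely for illustration) that treats the vetting process as such a test and computes the expected counts, together with the probability that a positive verdict is correct:

```python
def vetting_outcomes(population, base_rate, sensitivity, specificity):
    """Expected counts for a two-verdict screening test (illustrative sketch)."""
    dangerous = population * base_rate
    safe = population - dangerous

    tp = sensitivity * dangerous       # dangerous adults correctly flagged
    fn = dangerous - tp                # dangerous adults missed
    fp = (1 - specificity) * safe      # safe adults wrongly flagged ("smeared")
    tn = safe - fp                     # safe adults correctly cleared

    # Bayes: P(dangerous | flagged) = sens*base / (sens*base + (1-spec)*(1-base))
    ppv = tp / (tp + fp)

    return {"TP": tp, "FN": fn, "FP": fp, "TN": tn, "PPV": ppv}
```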

 

These figures would result in the ISA giving 9,900 true positives and 99,900 false positives. That’s pretty shocking: we would harm nearly one hundred thousand people and still fail to identify one hundred dangerous adults. The probability of someone labelled dangerous actually being dangerous is only 9%. If we get more optimistic (99.9% and 1 in 10,000) it looks better: only one false negative, though still ten thousand false positives (round numbers from here on), and the probability of someone labelled dangerous being dangerous is still only about 9%. If we get less optimistic (95% and 1 in 100) it looks very bad: 5,000 false negatives and half a million false positives.
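For what it’s worth, running the sketch above on the three scenarios reproduces these figures (expected values, rounded):

```python
for label, accuracy, base_rate in [
    ("optimistic", 0.999, 1 / 10_000),
    ("middling", 0.99, 1 / 1_000),
    ("pessimistic", 0.95, 1 / 100),
]:
    r = vetting_outcomes(10_000_000, base_rate, accuracy, accuracy)
    print(f"{label}: TP={r['TP']:,.0f} FN={r['FN']:,.0f} "
          f"FP={r['FP']:,.0f} PPV={r['PPV']:.0%}")

# optimistic: TP=999 FN=1 FP=9,999 PPV=9%
# middling: TP=9,900 FN=100 FP=99,900 PPV=9%
# pessimistic: TP=95,000 FN=5,000 FP=495,000 PPV=16%
```

Notice how little the positive predictive value moves even as accuracy improves tenfold: with a low base rate, the false positives from the huge pool of safe adults swamp the true positives.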

Discussion

It might be thought that I am being pessimistic about the accuracy of the system. Given the problems I mentioned, I maintain that 99% is in fact optimistic. But the point of exploring these numbers is to get some idea, for the sake of discussion, of the potential trade-offs involved.

 

I think it is clear that the trade-offs are pretty bad. Even on the most optimistic picture we miss only one dangerous adult, but we still harm 10,000 innocent adults. On the middling picture, the reality of any assessment system throwing up one hundred thousand false positives is that the whole thing will become arbitrary. At worst, they will all stand. At best, a second pass will be made over the positives. But apart from the true positives, most of which will be positives because of specific relevant convictions, there is little reason to think a second pass over thin and poor-quality data will be much better than the first. With the demand for a judgement, judgements will be made, but many will be nothing more than expressions of ignorant subjective opinion, prejudice and guesswork. We know how badly this kind of thing goes because we know how badly social services does it: rumour and innuendo get amplified, with professionals refusing to back down when they are wrong.

 

OK, the trade-off is pretty bad, but perhaps this is a case in which we must make a further trade-off between the epistemic duty and the practical duty. After all, the originating purpose of this register is to prevent the murder of children. If that is the ground on which the failure of epistemic duty is to be justified, we need to assess the trade-off on the basis of the right base rate. Again, I don’t know what it is, but now I think we are in the region of one in a million. On that base rate we detect all 10 murderous adults whilst continuing to harm 100,000 adults. Suppose that on this basis we save one child’s life per year. It might be thought that this trade-off, grim as it is for the 100,000 adults, is worth it. But is it? Furthermore, we now have to consider that this justification will tend to plant in the public mind the idea that those who are labelled dangerous are all murderously dangerous paedophiles. Consequently, if the information gets out (and the government has proven that it is incapable of keeping information confidential), those labelled dangerous will face violence, even extreme violence, because they are so labelled.
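On the same sketch, and keeping my illustrative assumption of 99% sensitivity and specificity, the one-in-a-million base rate comes out like this:

```python
r = vetting_outcomes(10_000_000, 1 / 1_000_000, 0.99, 0.99)
print(f"TP={r['TP']:.1f} FN={r['FN']:.1f} "
      f"FP={r['FP']:,.0f} PPV={r['PPV']:.4%}")
# TP=9.9 FN=0.1 FP=100,000 PPV=0.0099%
```

On those assumptions only about one in ten thousand of the people flagged would actually be murderously dangerous, which sharpens the worry about how the public will read the label.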

Conclusion

In general, we have no good evidence that a society can entirely prevent extraordinarily rare harms. Nevertheless, the ISA was set up because some murderous paedophiles murder children after they have already given us evidence that they are a danger, and clearly we want to find a way to use that evidence to protect children. It is likely that the ISA will provide some protection to children insofar as it correctly identifies dangerous adults. However, this mass-screening method will not be perfect, and it may do significant and long-lasting harm to large numbers of innocent adults. There is nothing surprising in this result: any mass screening will throw up large numbers of false positives unless the base rate is also high. Indeed, perhaps the assumption that this process is appropriate is a manifestation of that poisonous attitude I mentioned earlier. Furthermore, the high number of false positives will introduce an enormous amount of arbitrariness into the process. Since the ISA will fail badly in these ways, we must wonder whether a profiling system confined to hard information would not do a better job.

 

Contrary to the way politicians speak, there is no perfect solution here. What faces us is a trade-off between identifying all, and identifying only, the dangerous adults. We are going to significantly harm some number of innocent adults to protect some number of innocent children. It is wrong to harm innocents, but I think we all accept that, if the number of innocent adults harmed and the extent of the harm are small enough, then protecting the concomitant number of innocent children is the lesser of two evils.

 

Included in the harm here is the epistemic wrong of labelling an innocent adult dangerous. This harm may not weigh heavily, but it has, I think, been neglected. As we have seen, it is likely that the ISA will extensively fail in its epistemic duty to get the assessment correct within reasonable bounds. As a consequence, in addition to the practical injustice, it will epistemically wrong a large, even a very large, number of innocent adults.

 

Finally, consider the epistemic duty of the politicians responsible for the ISA. Politicians have a tendency to ignore costs that they do not intend and that are widely dispersed. In this case, that means they have a tendency to unjustly neglect the people who will pay those costs: the innocent adults falsely labelled dangerous. It seems to me quite wrong to have implemented a system without any well-grounded estimate of the trade-off that system will impose, and without any discussion of what would be a morally tolerable trade-off. In setting up the ISA the politicians have failed in their epistemic duty.


1 Comment on this post

1. I see this as a problem of (1) record keeping and (2) the difference between qualification for employment and qualification for criminal treatment. As to (2), the damage done by excluding too many people from employment because of an over-inclusive test (ignore that it is also in some cases under-inclusive) is minimal to the employer and serious only to the applicant who needs the job for some reason. The employer can’t worry about the wrongfully excluded person, given the serious danger of hiring a clearly unsuitable applicant. As to (1), the problem of leakage could be resolved by not keeping any records: simply stamping an application “not accepted” and including a long list of possible reasons. Revealing the reason for rejection of the application should be a felony. Of course, the decision not to hire should be shielded from review by courts except on a small number of grounds, on which the plaintiff (the person not hired) should bear the burden of proof.
