Matthew Harwood has an interesting essay about how an FBI investigation suffering from confirmation bias relentlessly pursued an innocent person based on an accidental partial fingerprint match from the Madrid bombings, leading to him being detained for two weeks – despite plenty of strong evidence against the suspicion. In fact, the disconfirming evidence was in several cases read as confirming (No passport? Aha, he must have been travelling abroad secretly!).
Confirmation bias is something modern police are taught to guard against, but that is of course not enough: knowing about a bias does not make it go away. Actually fixing the problem requires institutional structures that balance the human tendency towards bias, and maintaining those structures requires proper buy-in from management and correction when they fail.
But there might also be a deeper institutional ethics problem going on here. A judge recently ruled the TSA no-fly procedures unconstitutional. As the Kafkaesque shenanigans of the case show, the government spent significant effort, money and political capital obstructing a case in which it actually admitted that the plaintiff on the no-fly list did not pose any threat. Indeed, it was a clerical error that put her on the list. While one can argue that the real issue was defending an important administrative tool rather than defending the erroneous decision, it still seems likely that a significant motivator was simply preventing embarrassment.
That people go to great lengths to hide information in order to avoid embarrassment is not unusual. Erving Goffman described embarrassment as occurring when someone is felt to have projected incompatible definitions of themselves to the people present. It especially occurs when someone fails to live up to expected moral conduct in interactions with the surrounding establishment. Organisations may be even more sensitive: they only exist because they are defined to exist by all participants, and were they to lose their identity their existence – and hence the roles of their members – would be seriously threatened.
Jack Katz argued that in formal organisations internal authority is built by shielding members from external oversight: by defining an inside that runs by its own rules, it becomes easier for external authorities to treat the organisation as a morally autonomous system. But this can allow illegitimate behaviour, and the internal mechanisms then tend towards weak enforcement and cover-ups.
If you are part of an organisation, your individual error of judgement becomes an error of the entire organisation if it does not detect and correct it before it becomes external (as in the DHS case). If the error was due to failings of an internal group process (as in the FBI case) it is even worse: now it is clearly the collective that is at fault, with disclosure threatening everyone in the organisation. Hence cover-ups ensue, in order to maintain a projected identity. The problem is that a cover-up of errors is itself a group error, so acknowledging it when challenged is even worse – it too needs a cover-up. And so on.
Whistleblowers disrupt this protective process, which is why they are so hated by anybody invested in the affected organisation (not just those on the inside: having publicly placed your trust in what is now revealed to be an untrustworthy organisation is also embarrassing). External oversight threatens the shielding, undermining internal authority.
What ethical implications does this view have for members of large organisations? It is clear that with great power comes great responsibility for the entire organisation – if the NSA wants to watch the entire world, it had better be very careful with how it does it and what its information is used for, and accept a corresponding level of oversight.
But an individual inside the organisation will not normally have that kind of power and responsibility. Their moral duties seem to hinge on the key choice of whether to join the organisation in the first place (joining an immoral organisation knowingly and voluntarily is bad), and on doing their job well given this choice. If they accept the ends and means of the organisation as moral, they should hence strive to act in such a way that the organisation actually remains moral.
This seems to imply that FBI agents actually have an individual duty to try to counteract confirmation bias: if they do not, the result is organisational action counter to the stated ends they accepted by joining. Since the organisation can make mistakes with great power, this is a serious duty. The fact that it is a large organisation does not reduce the importance of the duty; it may spread the responsibility among more people, but that just means each of them now has a serious duty rather than a smaller shared one – and that they need to act against the natural tendency towards diffusion of responsibility. The internal mechanisms maintaining proper function – self-monitoring, routines to de-bias decisions, internal accountability, and learning from mistakes – also carry significant moral importance, since they affect the overall likelihood of a failure of ends and means. The fact that there may be a sociological bias for this internal machinery to drift towards weak enforcement and cover-up means it is a real test of the moral character of the participants. If there were an easy remedy, it would be less of a moral test (though of course a good thing to implement – indeed, moral members should push for it, since it will produce a better outcome).
Similarly, the DHS agents who participated in covering up the embarrassing mistake helped make the organisation diverge from what it was intended to be. So unless they held the moral belief that it should be unaccountable, they acted against their own morals. One can of course imagine somebody thinking that being above accountability would have sufficiently good effects to outweigh the bad, for example by effectively stopping terrorism that could not be stopped by organisations encumbered by regulations (the Dirty Harry approach). But while this might work for big cases, it is hard to justify risking the reputation of the organisation and the legality of no-fly lists over a clerical error: if unaccountability really were seen as a good thing, it had better be protected by allowing small concessions rather than risking the whole thing on a minor matter.
We should expect that organisations that can easily cover up their mistakes will do so more often: yet another reason to be suspicious of non-transparent organisations that can hide behind secrecy. This means that, from a moral standpoint, joining such an organisation must be regarded as a far weightier choice than joining an open one: it means both that there is a stronger risk of being tempted into work whose effects run counter to one's morals, and that the stated ends and means of the organisation may not correspond to its actual ends and means.
This research, combined with all the related research on culture – organisational and national – suggests that in decision-making processes people solve problems within a cognitive matrix that combines these biases with peer pressure and their own individual personality traits. The more our societies and organisations seek to remove difference, diversity and diligence from within, the more we will see repeated gross failures of collective decision making. The more perceptive and thinking members of society will continue to look on such poor collective outcomes with horror as time progresses, but will remain powerless to make a positive difference from above. The educated may not feel able to challenge, but they observe.