To most people interested in surveillance, the latest revelations that the US government has been conducting widespread monitoring of its citizens (and the rest of the world), possibly through back-doors into major company services, are merely a chance to smugly say “I told you so”. The technology and legal trends have been clear for a long time. That intelligence agencies share information (allowing them to get around pesky limits on looking at their own citizens) is another yawn.
That does not mean the revelations are unimportant: we are at an important choice-point in regard to how to handle mass surveillance. But the battle is not security versus freedom; it is secrecy versus openness.
It is not obvious whether we should wish for better surveillance or less: it is not clear whether surveillance is, in general, a force for good or evil. Surveillance can prevent or solve crimes, alert society to dangers, provide information for decision-making, and so on. It can also distort our private lives, help crime, enable powerful authoritarian and totalitarian forces, and violate human rights. But there does not seem to be any knock-down argument that settles the balance: in many cases messy empirical issues and the current societal context likely determine what we should wish for, if we knew the full picture.
But what is obvious is that unaccountable surveillance is much more easily turned into a tool for evil than accountable surveillance: the key question is not who got what information about whom, or even security versus freedom, but whether there are appropriate oversight and safeguards for civil liberties. And the only answer that comes out of the Obama administration (or any other government) is “of course!”. Unfortunately this is not convincing, since it is what an untrustworthy or incompetent government would say too.
An NSA spokesperson told The Guardian:
The continued publication of these allegations about highly classified issues, and other information taken out of context, makes it impossible to conduct a reasonable discussion on the merits of these programs.
This is entirely true. Unfortunately, a reasonable discussion of the merits of the programs requires the discussants to have the relevant information, and the secrecy surrounding the programs makes a public discussion impossible. Either the discussion is held privately among fully informed people, or it is held publicly and the secrecy is broken. The private discussion does not inspire trust, since there are valid concerns about regulatory capture. The public discussion will be incomplete and bound to be biased unless most information is revealed, negating the secrecy. In either case it will never be clear that all relevant information has been revealed unless it is possible to go into the secret domain and at least do spot checks.
In order to actually be convincing, the oversight needs to demonstrate that it (1) does not suffer from regulatory capture, (2) can access the information it needs, and (3) actually uses it to make relevant decisions. The general perception is that many elected leaders knew about and seem to have accepted widespread surveillance, yet – given the current denials and confusion – either do not know the full picture or are captured in various ways. Even if this picture is untrue, it undermines trust in the legitimacy of the oversight system: who watches those watchmen? In the past we tended to trust authorities by virtue of them being authorities or having excellent character, but this default trust has been eroded. We recognize how power corrupts and how incompetence is everywhere, and few government officials today have reputations for incorruptibility – especially when dealing with very powerful vested interests that, we can at least suspect, could apply significant pressure against perceived opponents.
The way out is to add transparency and verifiability. James Clapper, the Director of National Intelligence, actually gestured along these lines by declassifying some documents to argue that the program operates “within the constraints of law” and “appropriately protect[s] privacy and civil liberties”. Whether a selective declassification is enough to convince is another matter.
The problem with the view that national security overrides all other considerations is that it makes itself impossible to criticise: the evidence and procedures must be secret, why they must be secret is secret, and so on. We cannot know whether the tradeoff is right, because we are not allowed to see evidence of its effectiveness.
Even in a perfect world this would block the openness of society: open societies work because citizens can criticise any part of the system, demanding accountability, and the system itself can be changed to accommodate this if there is enough support for it. This is how mistakes and corruption get exposed and corrected; this is how the society is reshaped to fit the citizens rather than to fit some minority plan. It might not be quick, neat or easy, but it is a self-repairing and self-modifying system. But if there are aspects of the society that cannot be criticised or changed, then those are excluded from these mechanisms. Since mistakes happen even when people are dedicated and competent, even in the ideal world the closed parts of society run the risk of becoming faulty. Add the realistic components – people covering up embarrassment, the possibility of corruption, regulatory capture, and the existence of individuals with problematic agendas – and closed parts of societies become much more problematic. If they are also strongly empowered – legally and technologically – they become potentially very dangerous, no matter how noble the initial intentions were.
It is not hard to imagine, for example, how a well-meaning institution might fall into the trap Janet Radcliffe-Richards described in her Uehiro lectures as characterising politically correct thinking: given that you aim at a morally good thing, you become averse to accepting empirical findings or arguments that disconfirm it, or even discourage attempts to investigate such things. This is an is/ought mistake, but it makes the institution regard attempts to investigate or curtail its powers as attacks on its well-meaning intentions. Hence, in order to safeguard those intentions, the attempts must be thwarted and the groups attempting them also become suspect: what could they have against these good intentions? With powerful surveillance it will not be hard to find evidence that supports such suspicions…
Intelligence analysts will no doubt dislike this caricature: those I have met are remarkably devoted to reducing bias and actually finding true facts to guide sane decisions. But my experience with the information ecology of large institutions has also shown that cognitive biases can thrive even where the individual members are trying to avoid them – especially if management structures are de facto rewarded not for truthfulness and neutrality but for bureaucratic survival skills. It is easy to be overconfident in the niceness of one’s own organisation. This is yet another reason oversight and transparency are needed.
Closedness sometimes bites itself. The Kafkaesque FISA requests to companies for surveillance data make them unable to discuss the requests. One side effect is that now nobody will believe the denials from tech companies: if it is illegal to disclose surveillance, then no amount of denials, no matter how plausible, will ever assuage our concerns. The irony is that real regulations actually make conspiracy-theory logic look reasonable. Conspiracy theorists think the absence of evidence is evidence of a cover-up; here the do-not-discuss orders make any denial pointless (not quite the same thing, but still a strong reducer of trust).
Even from a security standpoint these technologies are double-edged: while they allow massive amounts of information to be amassed invisibly, that does not mean they will only deliver it to the intended recipients. The Petraeus scandal demonstrated that government officials are also ensnared in the net, even when they try to avoid it. Foreign governments have no doubt exploited mandatory eavesdropping functionality in telecom systems against not just their own citizens but also the interests of the governments that mandated the systems in the first place. And it is not implausible that they provide tempting targets for many non-governmental groups, who can untraceably gather information for their own agendas. Since their nature and use cannot be freely discussed and analysed, the problems they cause cannot easily be corrected.
When the government says law-abiding citizens have nothing to fear from government surveillance, the rejoinder is of course that law-abiding governments have nothing to fear from transparency. An expansion of surveillance power must be balanced by an equal or larger expansion of transparency and accountability. An expansion of secrecy must be balanced by even more accountability – no matter how useful secrecy can be, it is in many ways a more dangerous tool than surveillance.
This struck me as a very balanced article, and it is a relief to read something on this issue which does not resort to a hectoring tone, but opens up the issue for people like me who are uncertain and curious to explore the issue abstractly, including our own fears, without having our fear buttons pressed at the same time!
It is interesting, in the discussions I have been exposed to, how radically each side sees the others’ fears as misplaced, and it seemed unrealistic to me to try to change others’ fear-based perspectives by arguing from principles. But this article is very different and did widen my understanding by arguing with sympathetic imagination, which I maybe only found credible because I have cognitive biases towards seeing people as essentially trying their best to be good.
Anyhow, the dilemma of who watches the watchers – and of how to craft your secrecy so it does not bite its own tail by creating a catch-22 where it is illegal to discuss what is being done clearly enough to give meaningful consent – does not strike me as the core issue, even though it is important and interesting. I am not sure what the core issue is for me, but I’ll try to get towards it to see if my perspective resonates at all with others.
I am not sure how much government secrecy in itself leads to suspicion of governments. It may do at the edges, but I think regardless of how secretive or open governments are about what they do, it is the wider discourse which legitimises or delegitimises it. Here Edward Snowden’s revelations seem to heighten an existing cleavage as those who fear the US government’s (in their eyes) lawlessness feel vindicated, and those who trust the government or approve of its bending rules to catch (people they consider) bad guys think it is a storm in a teacup.
The more important dilemma for me is the issue of power, not process. It seems a symmetrical argument that if law-abiding citizens have nothing to fear from government surveillance, then the government has nothing to fear from citizen surveillance: on the one hand the law-abiding state has to fear surveillance by non-law-abiding citizens who use knowledge of government operations and their limits to evade capture and undermine the state. On the other hand law-abiding citizens have to fear surveillance by a non-law-abiding state that uses its capacity for surveillance to indulge in dirty tricks, persecute, hide its own wrongdoings, and generally undermine the rule of law.
But it is not symmetrical, because the law itself is also something which the state determines, executes and judges the execution of, even if through separated powers in the US. I think we need as much imagination to understand people’s fears of too much or too little surveillance as of too much or too little accountability.
Firstly, even with rights to privacy and information symmetrical to the state’s, citizens would still have far less capacity to monitor the state, punish its infractions effectively, and make new policies and regulations in response to behaviour they don’t like. That is how things are designed to be, as the state is partly defined by its role as the enactor of laws.
Secondly, and more importantly, part of the role of citizens is to demand laws that meet contemporary concepts of justice. By participating in regular elections to determine new legislative programmes, they will want to make previously legal things illegal, or previously illegal things legal. In practice, historically, laws that came to be seen as unjust (such as those against aiding runaway slaves, alcohol prohibition, bans on homosexual relationships) were illegally resisted before they were overturned, and the acts of resistance seem to me to have been a major element in developing people’s consciousness of injustice and the imagination and will to remove those laws. Under such surveillance, those acts might not have been possible to organise covertly and create a subculture around, and I wonder what the consequences would be for the role of citizens – not just in elections but generally, psychologically, and as social and moral beings.
In addition, leaders of reform movements were targeted by the state for opprobrium, and I would expect data collected on such people to be used to great effect, even if only to smear reputations or to find infractions of minor rules that take them out of the picture. So, going back to the first point: even if we all had the same ability to collect data on the state as it has on us, citizens could not effectively use it to render the state politically impotent in the way the state could tie up individuals in court cases to clear their names. Even taking a very sympathetic view of most government officials, we cannot deny that sometimes a J. Edgar Hoover or a Senator Joe McCarthy slips through the net, to great effect.
The key issue for me may be that law and conceptions of morality do not always coincide, and part of the democratic ideal may be to find mechanisms that enable them to develop in line with each other through debate and experimentation. Surveillance which is too complete or too intimidating, even if it is not abused, could ossify both laws and moral development, or reduce our interest in them as a collective activity. Would making complete surveillance accountable in some way do anything to stop this? What kind of accountability would that be?
The article and comment appear well argued but seem largely to skip over the problems that secret surveillance itself can cause. An example may help:
During the UK national miners’ strike, a working miner’s home was in the process of being bugged by members of the police technical support unit. The colliery shift unexpectedly finished early, and the miner went straight to his car and set off home along with many of the other miners from that shift. The part of the surveillance team at Gedling colliery was unable to stop or slow his progress home. At that moment a police traffic patrol car passed the colliery entrance, where all the vehicles were queued waiting to exit onto the main road. Over the VHF police radio the officers at the colliery spoke directly with the traffic car, instructing them to ram the working miner’s vehicle and so delay him. Following discussions involving the technical support unit installing the bugs (who were unable to complete or remove the ongoing work in time) and others, the traffic patrol was instructed to cause the accident and subsequently did so; the resulting collision was then investigated by a police traffic supervisor, during which time the technical support unit were able to complete their task.
This incident clearly illustrates the difficulties of secret surveillance, while at the same time raising ethical questions within the political sphere regarding the surveillance of friends, trust and confidence.
Any secret surveillance, no matter how robustly policed, seems bound at times to raise issues of a similar nature which, because they will by their nature require continuing secrecy, are perceived as reasonably low-risk by those involved.
Due to an ongoing house move my contributions are likely to continue to be sporadic over the next few weeks/months.