
Panopticon Problems: Purists rather than Privacy?

Would a transparent society where we could all watch each other be a better society? A recent paper has made me wonder whether the big problem might not be privacy, but purists.

The standard argument is that transparency (in its different forms) makes people behave well because they can be held accountable: either in the moment, when they are doing something bad, or retrospectively, when others look back and find out what they did. So there is an incentive to do things right.

People try to safeguard their reputations, and often behave more generously and cooperatively when observed by others. People also behave themselves when feeling watched, presumably because the internal machinery for reputation management has a long evolutionary background – we benefited from it long before we could think about social game theory, so it likely runs on prerational hardware somewhere in our brains. This is why merely showing images of eyes is (sometimes) enough to affect behaviour. We rationally know they are not actually seeing us, but they affect the old systems anyway.

This might be an argument for having plenty of visible CCTV cameras – they induce pro-social behaviour both by deterring criminals (increasing the chance they will be held accountable) and by making law-abiding citizens behave slightly better!

We are likely well on the way towards a transparent society where everything and everybody are going to be potentially observable. A whole bunch of identity technologies (RFID, object recognition, person recognition, biometrics, …) are maturing. Sensors are becoming cheaper, smaller and embedded in ubiquitous devices like cellphones and computers. Storing and networking the information is becoming cheap and often the default condition. Technologies for mining such information are advancing rapidly.  As David Brin pointed out, a key issue in a transparent society is whether it is two-way transparency or one-way transparency; however, as Wikileaks demonstrated, it might be hard to effectively maintain secrecy for any group.

The real problem with a transparent society might not be privacy but tolerance. Privacy norms appear to be fairly flexible social constructions, and could presumably adapt to a world where anybody can in principle see anybody else and what they did. We desire privacy and give each other privacy as part of our social context.

However, a new paper suggests that feeling watched also influences our moral judgements: people condemn bad behaviour more strongly when cued that they are being watched (original paper: Pierrick Bourrat, Nicolas Baumard and Ryan McKay, Surveillance cues enhance moral condemnation, Evolutionary Psychology 9(2): 193–199, 2011). The authors demonstrated that surveillance cues made students regard bad acts as less morally acceptable.

This is unsurprising, but it bodes badly for the tolerance of a transparent society. People would not only be able to observe more transgressions against their norms, but would publicly view them as worse. They might of course exaggerate their denunciations – talk is cheap, after all – and the condemnation might not necessarily carry over into action or policy. But as the authors note:

Indeed, failure to express our support for prevailing moral norms may arouse suspicion in our conspecifics. Talk, however, is cheap, so we are wise to take such declarations with the proverbial grain of salt, discounting them as appropriate. In consequence, individuals may attempt to compensate for this discounting by ramping up their rhetoric. To the extent that these compensatory efforts are discounted in turn, they may be ultimately futile; what sustains them is the fact that failure to send the inflated signal immediately brands the deviant as morally suspect.

If this holds (and it seems likely), then the introduction of vastly greater surveillance powers would produce a lot more moral condemnation, and more competition in condemning. This is of course strengthened by the increased transparency: condemnation becomes more visible. There are also heightened penalties for hypocrisy: now you can even check whether your neighbour actually does as he preaches. And he will of course check you.

In many cases this might be entirely desirable. It might help us behave better. But it is unlikely to make us saints: there are going to be enough transgressions for anybody to condemn us. Maybe this leads to a social equilibrium where people abstain from excessive moral rock-throwing because they know they may become targets themselves. But not all people have the same social capital or support: socially marginal people like minorities or stigmatized groups would be far more vulnerable as targets – and far more likely to become them. What counts as a transgression also shifts: being gay or of the ‘wrong’ religion is viewed by many as a moral matter. Purists well anchored inside big communities would be able to gain status and support by directing opprobrium against outsiders, getting many people to join in because doing so both shows their affiliation and reduces the risk of being targeted. A transparent society could be a fairly stable intolerant society.

There is a risk that the transition from our current societies, humanely opaque and local, into very transparent and global ones might occur far faster than most people expect. We might not want to give loud busybodies too much say over our lives (especially since they tend to have the most simplistic moral views), but avoiding that might be a tough moral, social and psychological problem.

Figuring out how to live in a more transparent and more globalized world (where numerous communities with wildly different mores are transparent to each other) is one of the key challenges right now, in the early days of the information revolution.


13 Comments on this post

  1. Anthony Drinkwater

    Thank you, Anders, for this well-argued counterpoint to the current tendency to see video-surveillance and its cousins as a solution to all our woes.
    I'm sure that you're absolutely right in your scepticism, and you hardly mention the huge "Big Brother" dangers, compounded by the corollary of imposed conformity, which would transform us from ever-so-fallible moral agents, with all our flaws, into utilitarian robots for the "common good" (to be defined, of course, by the privileged and powerful).

  2. Big Brother is the most obvious argument against panopticons, but I think Brin is right in principle that two-way transparent societies can keep power differences down and keep the powerful accountable. Two-way transparency does not happen by itself, but it is not too different from other enlightenment projects like democracy, rule of law and accountability. Unfortunately, most people worried about surveillance from the powerful seem to think the best strategy is to push for legislation blinding them; I think this is both less effective than pushing for accountability and less likely to succeed (for political, technical and practical reasons). And as I argued here, two-way transparency may not be a utopia at all.

    If there were a choice between a one-way transparent, diverse society much like our current one but with very powerful authorities, and a two-way transparent society where everybody can watch everybody but with strong grassroots enforcement of conformity, which one would you choose? I suspect the first would be more vulnerable to corruption into something truly bad, but the second is by no means nice either. And maintaining a low-surveillance, high-privacy alternative to these might not be stable, for technical reasons.

  3. Thanks, Anders – really interesting post. But I can't help thinking that there's a very strong but implicitly consequentialist premise here: that the rightness or wrongness of the panopticon society is to be evaluated purely in terms of which of two effects dominates in the aggregate: (1) the reduction of bad behaviour and the promotion of good behaviour, and (2) the likely increase in censorious moralising. Do you have any time for rights to privacy?** Should there be some sort of threshold level of expected net benefits before nosey neighbours are allowed to sift through our rubbish/travel details/phone records/web cache/etc, or should the panopticon society be instituted as long as (1) outweighs (2)? (i.e. as long as the net benefits of better behaviour outweigh the likely decreases in tolerance.)

    On your reply to Anthony, I'm very suspicious of arguments that "grassroots enforcement" is a good thing. Vigilante gangs, local posses and other "grassroots" enforcers of the local social order have a pretty chequered history… also, your last sentence in your reply is very tantalising… what are these technical reasons?

    **Where these rights are not derived from consequentialist calculation, but pre-exist as a sort of prior, i.e. the state needs to justify its incursions into privacy, rather than privacy needing to be justified by appeal to the greater good.

    1. While I do tend towards a consequentialist view, this post was mainly aimed at the moral psychology of surveillance. There might be nonconsequentialist aspects of transparent societies that are weightier, but that was outside the scope.

      Yes, grass-roots movements are prime candidates for being influenced or taken over by moral purists who gain by enforcing social norms rather than doing sensible good. Vigilantes are rarely middle-of-the-road people, and busybodies seem to like finding flaws rather than helping correct them. This is the fundamental worry of my post.

      The reason a low-surveillance, high-privacy society might not be stable is that surveillance technology is becoming extremely cheap, ubiquitous and powerful, and has dual uses. If it were just for surveillance it would only be adopted if people wanted surveillance, but the reason we get cameras, location tracking, RFID tags and face recognition (to name just a few) is that they are so darn useful or cool. My web browser, ISP, and several online firms track my online activity as a side effect of other functions, a side effect that enables some very profitable businesses like Google. Actually implementing good privacy rules takes effort and is costly (both directly and in missed opportunities – how many open source projects could comply with an EU privacy directive?). This is why the transparent society is the default future if there is no concerted effort to avoid it, and even then the economic and practical forces driving us towards it may prove stronger than those efforts.

  4. Dmitri Pisartchik

    Thank you for an interesting post, Andres.

    Transparency is a fascinating and deeply disturbing topic that I am also involved with. One thing that has not been picked up here is the (very real, to my mind) threat that rampant transparency poses to trust.

    It is often claimed that greater transparency is needed for establishing or maintaining trust (usually of the public kind), but when you look at the philosophy of trust this is complete rubbish. To trust someone is precisely not to look behind the curtain – to leave open the possibility of being wrong and of being vulnerable. I find it both interesting and disheartening that we look to surveillance to replace mutual trust as the social glue that keeps our societies going. Indeed, I fear that with increased transparency the very value of trust will be marginalized.

    To add a point, the power angle is never to be underestimated in this kind of discussion. The appeal to two-way transparency as a stable and egalitarian alternative is, I think, mistaken. The government looking into my life is fundamentally more powerful than me looking into the life of the government. Even if we assume full transparency on both sides, concentrated focal points of political power will always disproportionately benefit from an "open society".

      1. Anders, Andres, Andre, Andrew… same name, really. 🙂

        Very interesting point about trust. I have also been concerned by the way CCTV signals a lack of trust from authorities and store owners. The presumption of potential guilt sends a chilling signal in many social interactions. However, this has more to do with the intention behind the cameras than with the surveillance itself. If every object were by default recording everything, and potentially able to disclose the information if anybody cared to request it, I think we would still be able to have a trusting society. It is just that much of today's surveillance springs from underlying paranoia, and feeds it by being an intentional act of observation.

        1. Anthony Drinkwater

          Anders,
          Dmitri will of course speak for himself, but I think you miss his point, which is not that surveillance "signals" a lack of trust (which I agree is important) but that you can't "trust" someone if all their actions are in the public domain. This is a matter of logic – the word ceases to have real meaning except as irony. Imagine that I say to you, "I trust you, but I will constantly check to see if you are doing what you say you will do" – it means nothing other than "Despite what I say, I don't have any confidence in you". In this situation trust is no longer possible.
          In this sense, increasing surveillance drastically changes our social relations. I find Dmitri's expression about replacing the "social glue" very appropriate.

          1. Consider a situation where I cannot avoid seeing everything you do – we are close together in the same room, no chance to hide any action. I remain vulnerable to bad actions like you hitting me – the observability doesn't prevent them or even give me any reassurance. Yet I can still trust you to act appropriately because I think you are a trustworthy person. Or I might find myself uncomfortable sharing the room with an untrustworthy person.

            I have a hard time seeing why situations of mutual observability would preclude trust. Situations where everybody is acting to enforce observation and recording in order to catch or discourage any bad act, yes; but if observability is just a background fact it doesn't have much effect on trust. Trust might of course be tested more strongly in opaque situations where we cannot know what others are up to. Do you think this kind of testing – having tempting opportunities for breaking trust around – is important for forming true trust?

          2. Anthony Drinkwater

            Anders, I'm replying to myself as the "reply" button doesn't show on your comment.
            I have just thought through your situation, and my conclusion is that we should be more careful in distinguishing trust from confidence or comfort (I was also guilty of this in my reply). Whilst in everyday language one could say that you trust me not to hit you in the surveilled room, I think that you are really only talking about confidence – ultimately, in your own judgement of what kind of person I am.
            To me the core notion of trust goes much further – I, as a moral agent, actively give you some form of power over me without being able to assure myself (other than through this form of moral contract) that you will not abuse it. This could consist of revealing a secret, delegating to you something for which I am ultimately responsible, entrusting you with a mission…: essentially I take a risk by placing myself in your hands. We do it all the time, and as we cannot do everything we need for ourselves all the time, life would be difficult without it. Consider in addition the following point: someone who trusts nobody is in danger of being considered at the least eccentric, if not mentally disturbed.
            Sometimes we are wrong, so we occasionally check to assure ourselves that the other is in reality trustworthy (the term is revealing – being trusted is a form of privileged position that we are worthy of, or not). The same notion, I guess, applies in financial and legal uses of the term.
            But this is very far from what is imaginable through constant surveillance "as a background fact". If at all times we can check up, and the person "trusted" knows this, we have in my view abandoned the notion: nothing is given by the truster, nothing is earnt by the trusted. I believe that it could not fail to change human relations.
            The reasoning behind this is undoubtedly lengthy to justify and depends perhaps on a conception of personhood. But my intuition is that the surveillance tendency risks abandoning the notion and, importantly, the practice of trust, and that we will be the poorer for losing it.

  5. Anthony Drinkwater

    A very interesting and important point on trust, Dmitri. Thank you.
    I would only add that it isn't necessarily governments and the world of politics that hold disproportional power. To cite one example: how much power does the Greek government have today?

    1. Dmitri Pisartchik

      I concur; governments are the obvious example, primarily because they also hold the monopoly on the use of force, making the power imbalance much more worrying. But there was a story I read today on the subway that gives an example of why a completely transparent society may not be such a good thing. I can't seem to find the story online, but it was about "shame sites" popping up following the Vancouver Stanley Cup riots, where pictures of perpetrators were posted and they were identified, which resulted in people getting fired from their jobs and even receiving death threats.

      1. The problem with the shame sites seems to be that they have disproportionate and inconsistent effects. Some wrongdoers are put into the focus of thousands of stares, and hence suffer much more retribution than other wrongdoers who are not put into the crossfire. If the shaming were appropriate (whatever that means for rioting) and consistent (all rioters get their share of blame) then there would not be much of a problem, quite the opposite. But due to the spotty nature of current surveillance, and even more due to the long-tail property of internet attention, consistency is unlikely to emerge from straightforward surveillance. And maintaining appropriate levels of response might also be hard, for the social reasons I mention in my post.
