Let’s suppose, entirely hypothetically and for the sake of argument, that Brexit is a disaster for the UK. Let’s suppose that sterling crashes; that foreign travel is punishingly expensive and that, if you can afford to go abroad, you’re a laughing stock. Let’s suppose that the Treasury’s estimates of billions of pounds of losses each year are reasonably accurate; that unemployment rises; that credit ratings plummet. Let’s suppose Brexit creates a corrosive tide of racism; that things that should never be said, and can never be unsaid, are shouted at high volume. Let’s suppose that there’s a torrential brain drain; that UK universities fall down the international league tables; that the innovative treatments prescribed (to private patients only, unfortunately – no money left for the NHS) by the UK’s (predominantly white) doctors are all devised in New York, Paris and Rome rather than London and Leeds. Let’s suppose that the environment, unprotected by EU legislation, is trashed, and that Scotland leaves the UK. Let’s suppose, too, that nervousness about all this creates an increasingly authoritarian style of government.
If all that happens, it’ll be great. At least if you’re a consistent utilitarian. The horror of the UK’s experience will strengthen the EU and prevent other countries from thinking that they should leave the Union – which would have similarly disastrous results for them and, if the EU itself dissolves, tectonic consequences for the stability of the world.
While ‘interrobang’ sounds like a technique Donald Trump might add to the Guantanamo Bay playbook, it in fact refers to a punctuation mark: a disused mashup of interrogation and exclamation that indicates shock, surprise, excitement, or disbelief. It looks like this: ‽ (a rectangle means your font doesn’t support the symbol). In view of how challenging it seems for anyone to articulate the fundamental weirdness of Trump’s proximity to the office of President of the United States, I propose that we resuscitate the interrobang, because our normal orthographic tools clearly are not up to the task.
Yet even more interrobang-able than the prospect of a Trump presidency is the fact that those opposing his candidacy seem to have almost no understanding of the media dynamics that have enabled it to rise and thrive. Trump is perhaps the most straightforward embodiment of the dynamics of the so-called ‘attention economy’—the pervasive, all-out war over our attention in which all of our media have now been conscripted—that the world has yet seen. He is one of the geniuses of our time in the art of attentional manipulation.
If we ever hope to have a societal conversation about the design ethics of the attention economy—especially the ways in which it incentivizes technology design to push certain buttons in our brains that are incompatible with the assumptions of democracy—now would be the time.
I’m not a Pokémaster; I haven’t ‘caught them all.’ If you were to hold a gun to my head and force me to answer Poké-trivia (as one does), my strategy would probably consist of murmuring ‘Pikachu?’ in varied intonations of anger and desperation.
Yet as someone who cares about the ethics of persuasion and technology, I’ve found the Poké-mania of the past couple of weeks really something to behold. In a matter of days after the so-called ‘augmented-reality’ smartphone game Pokémon GO launched, it rampaged up the app charts and quickly amassed more daily active users in the US than Twitter.
The slogan of the Pokémon franchise is ‘Gotta catch ’em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets. (Emails: Gotta read ’em all! Posts: Gotta like ’em all!)
What’s noteworthy about the launch of Pokémon GO isn’t that its players are suddenly finding dead bodies in creeks, inadvertently flash-mobbing Central Park, falling prey to Poké-scams, or doing anything else that publishers can cite to catch all the clicks they can. Rather, it’s that Pokémon GO signals the first mainstream adoption of a type of game I’ve come to call ‘BYOB’—that is, games that require you to ‘Bring Your Own Boundaries.’
As such, this Poké-moment (sorry) presents us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.
One way of looking at games is as sets of constraints. When I play a game, I’m turning my experience over to some particular configuration of constraints designed by someone whom I (hopefully) trust with my attention, and which, if successful, will enable me to symbolically grapple with psychologically resonant aspects of my individual and/or social world. When games do this well, they perform an essential service for society.
Yet there’s a certain fundamental type of constraint that’s been present in almost all games throughout history: deep constraints of space and/or time—the game’s ultimate ‘boundaries’—that confine the game to some fenced-off region of human life. (e.g.: ‘Friday, 7:00 pm, Port Meadow. Be there.’) Fencing off our games from the rest of life means they can represent our psychological world without actually becoming it. In this way, these fundamental ‘boundaries’ function as extensions of our self-regulation embedded in the environment itself.
However, when these boundaries of time and space disappear—when the game is always on and always with you, a parallel rather than a punctuated experience—the regulatory responsibilities they bore are transferred off of the environment and onto you. You must now actively define and continually enforce (if you can) precisely where and when the game shall be afoot. There’s no support structure to lean on anymore; you have to bring your own boundaries.
‘Bringing your own boundaries’ means expending more of your scarce cognitive resources to achieve the same level of self-regulation you were able to achieve previously. In a given day, we all have a finite amount of cognitive effort we can expend—a finite number of decisions we can make, a finite amount of willpower we can exercise—before we become depleted, weak of will (or ‘akratic’), and more vulnerable to persuasive influences in our environment. In this way, the removal of a constraint itself becomes a constraint.
To be sure, many BYOB technologies already exist and thrive in our information environment. Ubiquitous computing, especially in collision with the so-called ‘attention economy,’ has collapsed spatio-temporal boundaries in many areas of our lives, resulting in the imposition of extensive cognitive and self-regulatory costs that we’re still just beginning to understand. All this makes the mainstream adoption of BYOB gaming more, not less, significant.
However, BYOB games deserve special ethical attention for two reasons. For one, games typically have no pretense of instrumentality. Games are designed to be immensely fun—maybe even the most fun things in life—yet the rest of life is so very not designed that way. Games rarely have to justify their existence any further than this. As a result, it’s easier for us to be less explicit about the net value we expect games to bring to our lives as a whole.
The other reason is that digital games today can be designed to exploit our psychological vulnerabilities far more effectively than in the past. Pokémon GO, for example, makes extensive use of a technique known as random reward scheduling, which involves randomizing the rewards you give a user for taking some particular action (e.g. spinning the circles at PokeStops to get loot) in order to induce them to take that action even more. This is the same psychological mechanism at work in the design of slot machines, and a major factor in their addictive character.
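The mechanism is simple to state formally: each action is rewarded independently with some fixed probability, so the player can never predict which particular spin will pay off. A minimal sketch in Python (the function name and the 30% reward rate are illustrative assumptions, not Pokémon GO’s actual values):

```python
import random

def simulate_session(n_spins, p_reward=0.3, seed=None):
    """Simulate a variable-ratio ('random') reward schedule: every
    spin is rewarded independently with probability p_reward, so the
    gap between rewards is unpredictable -- the same intermittent
    reinforcement pattern used by slot machines."""
    rng = random.Random(seed)
    return sum(rng.random() < p_reward for _ in range(n_spins))

# Over many spins the player averages p_reward rewards per action,
# but never knows whether the *next* spin will pay out -- and it is
# that uncertainty, not the average payout, that drives repetition.
```

A fixed schedule with the same average payout (say, a guaranteed reward every third spin) is behaviorally far less compulsive; the unpredictability is the design choice doing the work.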
There are countless other brain-hacks at work in Pokémon GO that appear to capitalize on cognitive quirks such as the endowment effect (you value a Pokémon more when you think you ‘own’ it), the nostalgia effect (thinking about the past makes you more willing to pay money—so if you played Pokémon growing up, watch yourself when buying PokéCoins!), territoriality, social reinforcement, the fear of missing out, and many more. My point here is not that these biases and mechanisms are in themselves bad—in fact, they’re often what make games fun—rather, it’s that games can target them to shape our behavior more effectively than ever.
Ultimately, it’s the combination of these two reasons—games’ persuasive power, and our relative lack of criticality in submitting to them—that makes it especially prudent to invest attention in ethical questions at the emergence of the first widely used BYOB game. Imagine what the headlines would be if it weren’t an app but a chemical substance producing this behavior. (‘Vaporeon—Not Even Once.’)
As a lifelong gamer, I’m constantly frustrated by the lazy moralizing and lack of imagination in much of the so-called ‘ethical’ criticism of games. So much of it stems from the misunderstanding, if not the fear, of games as a medium.
At the same time, I’ve noticed a tendency among many gamers (though not all) to avoid entertaining any possibility that games can have negative effects (despite the fact, remember, that every technology or medium has some negative effects). I suspect this tendency stems from the outdated feeling that gaming’s value still needs to be justified or defended from assailants, as well as from the in-group signaling value that such defenses and justifications can have within communities of gamers. In any case, while noble in intent, this resistance to criticism in fact holds gaming back from realizing its potential as an art form: taking a medium seriously means asking the hard, transformative questions of it—not to tear it down, but to build it up.
In the case of Pokémon GO, what we have is a situation in which the most popular smartphone app is one that exploits its users’ psychological biases to induce them to physically go to particular places in their environments to perform actions on their phones whose value is at best unclear, and at worst a distraction from their other life goals, presumably all with a view to maximizing their further attentional (and monetary) expenditures. Furthermore, these influences are operative on users at all times and in all places. If alien anthropologists were looking down on this situation, wouldn’t they be quite justified in viewing such a game as one of our most promising control mechanisms?
Yet in response to this situation, the immediate concerns that have dominated the ethical discussion have centered on whether some company might be able to access some of the data on users’ devices. This is insane. It reflects how utterly the overinflated issue of ‘privacy’ has dominated the conceptual space in technology ethics as a whole, as well as how dangerously underprepared we are as a society to have the urgent and important discussions about how to preserve users’ self-determination in environments of high technological persuasion.
A few years ago I got really into Ingress, a location-based smartphone game that’s similar to Pokémon GO (and was created by Niantic, the same company). In Ingress, you fight for one of two sides in a perpetual, worldwide war. Your object is to capture virtual ‘portals’ that you can link to…actually, you know what—the details don’t really matter. The point is that soon I was always playing Ingress, wherever I was, and it was really, really fun.
Ingress gave me, consistently and with dopaminergic potency, what my day-to-day life couldn’t: precise goals, meaningful actions, immediate rewards, a clear enemy, social solidarity, and a feeling of advancement. I also found myself walking outside a lot more. As a result, the game quickly became a parallel process of task and goal pursuit running alongside that of my work and research. I felt like a secret agent: in one life, I was reading, writing, and discussing philosophy; in the other, I was blasting, capturing, and linking portals for the Resistance. I had always been at war with the Enlightenment.
But it wasn’t long before I found myself spending time in unusual ways. Like standing for thirty minutes between floors in the stairwell of the world-famous Ashmolean Museum, battling an opponent for a strategically valuable portal. Or at the train station, suspiciously eyeing fellow passengers who were staring at their phones—were they my enemies? Or, when visiting Rome, loitering awkwardly outside the American Embassy portal and drawing the attention of men in suits who were talking into their wrists.
Soon I realized that Ingress wasn’t just enabling me to have fun in new ways—it was also imposing new costs on my life. On one level were the self-regulatory costs: Ingress had become a second to-do list for my life, dipping into my pool of finite cognitive resources. On a deeper level, though, were the opportunity costs I realized I’d been paying. If you think about what you really ‘pay’ when you ‘pay attention,’ you pay with all the things you could have attended to, but didn’t—you pay with all the goals you didn’t pursue, all the actions you didn’t take, and all the possible yous you could have been, had you attended to those other things. Attention is paid in possible futures foregone.
A few weeks later, I got a new phone. When I was re-downloading my apps, I tried to remember why I had started playing Ingress in the first place. What had I wanted it to do for me? To help me have fun, I guess. Now, more aware of the costs, I asked myself that question again. What do I want this app to do for me? To help me have fun, I guess. After much consideration, I quietly declined to reinstall Ingress. If a game is going to make me bring my own boundaries, I’m going to hold it to a higher standard. Fun is not enough.
It’s apparently a universal law that any article on the topic of self-regulation in the face of bewildering technological change must end with some capitulatory sentence that expresses ¯\_(ツ)_/¯ in verbal form. Like: ‘Welp, guess we just gotta find it within ourselves to adapt to this zany new world!’
We must reject this impulse. We must reject the lazy notion that, sorry, it’s just up to users now to bring their own boundaries—to incur significant new self-regulatory costs—if they want to benefit from the digital technologies transforming our world. Similarly, we must reject the conjoined notion that if someone doesn’t like the choices on technology’s menu, their only option is to ‘unplug’ or ‘detox.’ This depressingly common all-or-nothing spirit is not only unsustainable in the digital age—it also requires that we assent to a corrupt and pessimistic vision of technology that sits at odds with its very purpose.
What’s the alternative? We have to engage the design. It’s curious how easy it is to forget that technologies are designed by real people, with real reasons—and that both those people and their reasons can be petitioned by users. Having worked at Google for ten years, I know that most designers genuinely want to make products that will win users’ love and transform their lives. However, I also know that even the most noble values (especially the most noble values) are hard to operationalize, and that designers need our help to understand how to do so.
In response to a BYOB game like Pokémon GO, what should we ask of designers? If the game is to remain BYOB in character, then at minimum we have to ask for increased transparency of goals. We should expect to have answers to questions like: What are the game’s goals for me? How do I know this for sure? Do those goals align with my own? For instance: let’s say Pokémon GO helps you take more steps each day, and that’s why you play it. Great—but is that what the game’s actually designed to maximize? If not, then how do we take that from being a design effect to being a design reason?
The other option is to ask that the game provide new boundaries of space and/or time to compensate for the ones it took away, so that it’s no longer BYOB at all. For example, the design could incorporate mechanisms that let you specify where, when, and how you want to play the game. Helping you ‘fence off’ the game into a subset of life again would minimize the new self-regulatory responsibilities it asks you to take on, enabling you to fit the game into your life in the way you want. To be sure, engaging with design in this way isn’t easy, and there are many headwinds against doing it well. It may be a long time before we achieve the sort of feedback loops with designers we ultimately need (if in fact we ever do).
Until then, by all means, give Pokémon GO a whirl. But do so knowing that you’ll have to bring your own boundaries to it—and that in the end, you may not be able to. If you can’t, it’s not your fault—because why should we expect the unoptimized game of life to be able to compete with a game of pure, engineered fun?
And yet, in the end, the games we choose do matter: because when we reach the end of that game—the Big Game—and we think back on all the side quests and microgames we played along the way, how many of them, even if really fun, will we consider to have been time well spent? You and I will no doubt answer that question in different ways, and by the light of different reasons. Yet for both of us, the answer will depend on whether, when a wild game first appeared, we asked of it the really important questions—whether we asked what we wanted it to do for us. In this Poké-moment, spectacle and novelty can easily obscure the fact that there are many, many such questions to ask. But we gotta ask ’em all.
I am a bitter opponent of private education. All my political hackles rise whenever the subject is mentioned.
Yet of my four currently school-aged children, one (‘A’) is educated privately (at a specialist choir school), and another (‘B’, who is dyslexic) will shortly be in private education (at a hip, Indian-cotton swathed, high-fibre, bongo-drumming, holistic school). The two others (‘C’ and ‘D’) are currently in state primary schools. There are two older children too (‘E’ and ‘F’). They were both educated privately, at a fairly traditional school.
How can I live with myself?
One way would be to avert my eyes from the apparently plain discrepancy between my actions and my political convictions. That’s often been my strategy. But I want to attempt some kind of defence – at least in relation to A and B – and lay the ground for a potential defence in relation to C and D, should we choose to educate them privately.
Everyone I know thinks it’s obscene, and that the suffering of the dogs cannot possibly be outweighed by the sensual satisfaction of the diners, the desirability of not interfering, colonially, with practices acceptable in another culture, or by any other consideration. It’s just wrong.
‘It’s just wrong’ is the observation that moral philosophers exist to denounce. They draw their salaries for interrogating this observation, exploding its naivety, and showing that the unexamined observation is the observation not worth making.
But what can the moral philosophers bring to the discussion about the Chinese dogs? Alone, and unaided by science, not much. The philosophy turns out to be either (a) reheated science or (b) a description of our intuitions, together with more or less bare assertions that those intuitions are either good or bad.
Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length—I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.
Every day, for about thirty-five minutes, I sit cross-legged on a cushion with my eyes shut. I regulate my breath, titrating its speed against numbers in my head; I watch my breath surging and trickling in and out of my chest; I feel the air at the point of entry and exit; I export my mind to a point just beyond my nose and pour the breath into that point. When my mind wanders off, I tug it back.
The practice is systematic and arduous. In some ways it is complex: it involves 16 distinct stages. When I am tired, and the errant mind won’t come quietly back on track, I find it helpful to summarise the injunctions to myself as:
- I am here
- This is it
I alternate the emphases: ‘*I* am here’; ‘I *am* here’; ‘I am *here*’; ‘*This* is it’; ‘This *is* it’; ‘This is *it*.’
I note (although not usually, and not ideally, when I’m in the middle of the practice) that each of these propositions presumes something about the existence of an ‘I’. This is less obvious with the second proposition, but clearly there: ‘This’ is something that requires a subject.
There is a long overdue crisis of confidence in the biological and medical sciences. It would be nice – though perhaps rather ambitious – to think that it could transmute into a culture of humility.
A recent comment in Nature observes that: ‘An unpublished 2015 survey by the American Society for Cell Biology found that more than two-thirds of respondents had on at least one occasion been unable to reproduce published results. Biomedical researchers from drug companies have reported that one-quarter or fewer of high-profile papers are reproducible.’
Reproducibility of results is one of the girders underpinning conventional science. The Nature article acknowledges this: it is accompanied by a cartoon showing the crumbling edifice of ‘Robust Science.’
As the unwarranted confidence of scientists teeters and falls, what will – and what should – happen to bioethics?
Selfie-sticks are notoriously ubiquitous in modern society, and the art of ‘selfie-taking’ may well be something that future analysts identify as being one of the defining sociological trends of this period of history. In this post, I will discuss some passages from Sartre that help to explain my feeling of unease at this rampant ‘selfie-ism’.