Let’s suppose, entirely hypothetically and for the sake of argument, that Brexit is a disaster for the UK. Let’s suppose that sterling crashes; that foreign travel is punishingly expensive and that, if you can afford to go abroad, you’re a laughing stock. Let’s suppose that the Treasury’s estimates of billions of pounds of losses each year are reasonably accurate; that unemployment rises; that credit ratings plummet. Let’s suppose Brexit creates a corrosive tide of racism; that things that should never be said, and can never be unsaid, are shouted at high volume. Let’s suppose that there’s a torrential brain drain; that UK universities fall down the international league tables; that the innovative treatments prescribed (to private patients only, unfortunately – no money left for the NHS) by the UK’s (predominantly white) doctors are all devised in New York, Paris and Rome rather than London and Leeds. Let’s suppose that the environment, unprotected by EU legislation, is trashed, and that Scotland leaves the UK. Let’s suppose, too, that nervousness about all this creates an increasingly authoritarian style of government.
If all that happens, it’ll be great. At least if you’re a consistent utilitarian. The horror of the UK’s experience will strengthen the EU and prevent other countries from thinking that they should leave the Union – which would have similarly disastrous results for them and, if the EU itself dissolves, tectonic consequences for the stability of the world.
The Economist has a leader, “For life, not for an afterlife”, in which it argues that Elon Musk’s stated motivation to settle Mars – making humanity a multi-planetary species less likely to go extinct – is misguided: “Seeking to make Earth expendable is not a good reason to settle other planets”. Is it misguided, or is the Economist‘s reasoning misguided?
Kuwait is planning to build a complete DNA database of not just citizens but all other residents and temporary visitors. The stated motivation is antiterrorism (the universal motivation!) and fighting crime. Many are outraged, from local lawyers to a UN human rights committee and the European Society of Human Genetics, and think that it will not be very helpful against terrorism (how does having the DNA of a suicide bomber help after the fact?). Rather, there are reasons to worry about misuse in paternity testing (Kuwait has strict adultery laws) and in the politics of citizenship (which provides many benefits): citizenship is strictly restricted to paternal descendants of the original Kuwaiti settlers, and there is significant discrimination against people with no recognized paternity, such as the Bidun minority. Plus – and this might be another strong motivation for many of the scientists protesting against the law – it might dampen public willingness to donate genomes to research databases, where they actually do some good. It might also deter visitors: would, for example, foreign heads of state accept leaving their genome in the hands of another state? Not to mention the discovery of adultery in ruling families – there is a certain gamble in doing this.
Overall, it seems few outside the Kuwaiti government are cheering for the law. When I recently participated in a panel discussion about genetic privacy, organised by the BSA at the Wellcome Collection, the question “Would anybody here accept mandatory genetic collection?” raised only one or two hands in the large audience. When would it make sense to make the collection of genetic information mandatory?
By Charles Foster
English law has traditionally, for most purposes, regarded animals as mere chattels. There is now animal welfare legislation which seeks to prevent or limit animal suffering, but provided that legislation is complied with, and that no other relevant laws (eg those related to public health) are broken, you are free to do what you want with your animal.
Veterinary surgeons are in an interesting position. The UK regulatory body for veterinarians, the Royal College of Veterinary Surgeons (‘RCVS’) publishes a Code of Professional Conduct. This provides, inter alia:
‘1.1 Veterinary surgeons must make animal health and welfare their first consideration when attending to animals.’
‘2.2 Veterinary surgeons must provide independent and impartial advice and inform a client of any conflict of interest.’
‘First consideration’ in 1.1 is a rather weaselly formulation. Does it mean that it is the overriding consideration, trumping all others, however weighty those others might be? Or the one that veterinarians ought to consider first, before moving on to other criteria which might well prevail?
In a new post, published by Aeon, I argue that, even if there are moral reasons for and against intentionally delaying parenthood (including, amongst other things, the reduced opportunity for grandparental relationships as a reason against), older parents should not feel guilty if their late parenthood means that their child does not get to know his or her grandparents. Whilst the situation itself might be regrettable (i.e. there might be an understandable wish that things were different), the parent has not deprived their particular child in any way. Correspondingly, the child has no legitimate complaint (on these grounds) against his or her parent. If the parent had been successful in conceiving earlier, that particular child would not have existed.
Republished in full below:
We used to have to take time off from work – or at least leave work early – to watch the Olympics on TV. Now we can thank the engineering marvels of DVR and web replay for protecting our love affair with the Games from our evil work schedules. We are, rightly, mesmerized by the combination of talent, discipline, skill, and genetics embodied by the world’s greatest athletes. While admittedly luck plays a role, these elite athletes use strategies tuned over decades to prove who is the best on the world’s biggest sports stage. What is not to like? This year’s games promise to be epic, with greats like Bolt and Phelps closing out their legacies, unstoppable rookies like Simone Biles planning to make their mark, and new sports like rugby and golf looking to reach new international audiences. Ready or not, here comes Rio 2016!
In the job market, being attractive is advantageous. According to economist Daniel Hamermesh, an attractive man can earn, over a lifetime, $230,000 more than an unattractive one. Attractive solicitors raise more money for charities. Very attractive individuals are less likely to engage in criminal activities, whereas unattractive ones have a higher propensity for crime. Attractive criminals are punished less severely than unattractive ones.
Both children and adults judge attractive people to be more helpful, more intelligent, and more friendly than their unattractive counterparts.
Adults have higher expectations of attractive kids compared to unattractive ones, and mothers of attractive infants tend to be more affectionate, playful, and attentive when interacting with their children than mothers of less attractive infants. Teachers expect better performance from attractive students. Transgressions of unattractive children are judged more negatively than transgressions of attractive ones.
One response to unfairness is to get people to stop discriminating unfairly. This might work for some domains, such as employment where interviews could be conducted blind. But it won’t be possible to counteract all the potential downsides.
We can’t require people to like or fall in love with people they find unattractive. There are at least two possible responses:
- Assist people to find attractive what they currently find unattractive
- Assist people to be more attractive to those who currently find them unattractive
Both of these are reasonable solutions. The second is cosmetic enhancement.
I’m not a Pokémaster; I haven’t ‘caught them all.’ If you were to hold a gun to my head and force me to answer Poké-trivia (as one does), my strategy would probably consist of murmuring ‘Pikachu?’ in varied intonations of anger and desperation.
Yet as someone who cares about the ethics of persuasion and technology, I’ve found the Poké-mania of the past couple of weeks really something to behold. In a matter of days after the so-called ‘augmented-reality’ smartphone game Pokémon GO launched, it rampaged up the app charts and quickly amassed more daily active users in the US than Twitter.
The slogan of the Pokémon franchise is ‘Gotta catch ’em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets. (Emails: Gotta read ’em all! Posts: Gotta like ’em all!)
What’s noteworthy about the launch of Pokémon GO isn’t that its players are suddenly finding dead bodies in creeks, inadvertently flash-mobbing Central Park, falling prey to Poké-scams, or doing anything else that publishers can cite to catch all the clicks they can. Rather, it’s that Pokémon GO signals the first mainstream adoption of a type of game I’ve come to call ‘BYOB’—that is, games that require you to ‘Bring Your Own Boundaries.’
As such, this Poké-moment (sorry) presents us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.
One way of looking at games is as sets of constraints. When I play a game, I’m turning my experience over to some particular configuration of constraints designed by someone whom I (hopefully) trust with my attention, and which, if successful, will enable me to symbolically grapple with psychologically resonant aspects of my individual and/or social world. When games do this well, they perform an essential service for society.
Yet there’s a certain fundamental type of constraint that’s been present in almost all games throughout history: deep constraints of space and/or time—the game’s ultimate ‘boundaries’—that confine the game to some fenced-off region of human life. (e.g.: ‘Friday, 7:00 pm, Port Meadow. Be there.’) Fencing off our games from the rest of life means they can represent our psychological world without actually becoming it. In this way, these fundamental ‘boundaries’ function as extensions of our self-regulation embedded in the environment itself.
However, when these boundaries of time and space disappear—when the game is always on and always with you, a parallel rather than a punctuated experience—the regulatory responsibilities they bore are transferred off of the environment and onto you. You must now actively define and continually enforce (if you can) precisely where and when the game shall be afoot. There’s no support structure to lean on anymore; you have to bring your own boundaries.
‘Bringing your own boundaries’ means expending more of your scarce cognitive resources to achieve the same level of self-regulation you were able to achieve previously. In a given day, we all have a finite amount of cognitive effort we can expend—a finite number of decisions we can make, a finite amount of willpower we can exercise—before we become depleted, weak of will (or ‘akratic’), and more vulnerable to persuasive influences in our environment. In this way, the removal of a constraint itself becomes a constraint.
To be sure, many BYOB technologies already exist and thrive in our information environment. Ubiquitous computing, especially in collision with the so-called ‘attention economy,’ has collapsed spatio-temporal boundaries in many areas of our lives, resulting in the imposition of extensive cognitive and self-regulatory costs that we’re still just beginning to understand. All this makes the mainstream adoption of BYOB gaming more, not less, significant.
However, BYOB games deserve special ethical attention for two reasons. For one, games typically have no pretense of instrumentality. Games are designed to be immensely fun—maybe even the most fun things in life—yet the rest of life is so very not designed that way. Games rarely have to justify their existence any further than this. As a result, it’s easier for us to be less explicit about the net value we expect games to bring to our lives as a whole.
The other reason is that digital games today can be designed to exploit our psychological vulnerabilities far more effectively than in the past. Pokémon GO, for example, makes extensive use of a technique known as random reward scheduling, which involves randomizing the rewards you give a user for taking some particular action (e.g. spinning the circles at PokeStops to get loot) in order to induce them to take that action even more. This is the same psychological mechanism at work in the design of slot machines, and a major factor in their addictive character.
There are countless other brain-hacks at work in Pokémon GO that appear to capitalize on cognitive quirks such as the endowment effect (you value a Pokémon more when you think you ‘own’ it), the nostalgia effect (thinking about the past makes you more willing to pay money—so if you played Pokémon growing up, watch yourself when buying PokéCoins!), territoriality, social reinforcement, the fear of missing out, and many more. My point here is not that these biases and mechanisms are in themselves bad—in fact, they’re often what make games fun—rather, it’s that games can target them to shape our behavior more effectively than ever.
Ultimately, it’s the combination of these two reasons—games’ persuasive power, and our relative lack of criticality in submitting to them—that makes it especially prudent to invest attention in ethical questions at the emergence of the first widely used BYOB game. Just imagine what the headlines would be if it were a chemical substance, rather than an app, producing this behavior. (‘Vaporeon—Not Even Once.’)
As a lifelong gamer, I’m constantly frustrated by the lazy moralizing and lack of imagination in much of the so-called ‘ethical’ criticism of games. So much of it stems from the misunderstanding, if not the fear, of games as a medium.
At the same time, I’ve noticed a tendency among many gamers (though not all) to avoid entertaining any possibility that games can have negative effects (despite the fact, remember, that every technology or medium has some negative effects). I suspect this tendency stems from the outdated feeling that gaming’s value still needs to be justified or defended from assailants, as well as from the in-group signaling value that such defenses and justifications can have within communities of gamers. In any case, while noble in intent, this resistance to criticism in fact holds gaming back from realizing its potential as an art form: taking a medium seriously means asking the hard, transformative questions of it—not to tear it down, but to build it up.
In the case of Pokémon GO, what we have is a situation in which the most popular smartphone app is one that exploits its users’ psychological biases to induce them to physically go to particular places in their environments to perform actions on their phones whose value is at best unclear, and at worst a distraction from their other life goals, presumably all with a view to maximizing their further attentional (and monetary) expenditures. Furthermore, these influences are operative on users at all times and in all places. If alien anthropologists were looking down on this situation, wouldn’t they be quite justified in viewing such a game as one of our most promising control mechanisms?
Yet in response to this situation, the immediate concerns that have dominated the ethical discussion have centered on whether some company might be able to access some of the data on users’ devices. This is insane. It reflects how utterly the overinflated issue of ‘privacy’ has dominated the conceptual space in technology ethics as a whole, as well as how dangerously underprepared we are as a society to have the urgent and important discussions about how to preserve users’ self-determination in environments of high technological persuasion.
A few years ago I got really into Ingress, a location-based smartphone game that’s similar to Pokémon GO (and was created by Niantic, the same company). In Ingress, you fight for one of two sides in a perpetual, worldwide war. Your object is to capture virtual ‘portals’ that you can link to…actually, you know what—the details don’t really matter. The point is that soon I was always playing Ingress, wherever I was, and it was really, really fun.
Ingress gave me, consistently and with dopaminergic potency, what my day-to-day life couldn’t: precise goals, meaningful actions, immediate rewards, a clear enemy, social solidarity, and a feeling of advancement. I also found myself walking outside a lot more. As a result, the game quickly became a parallel process of task and goal pursuit running alongside that of my work and research. I felt like a secret agent: in one life, I was reading, writing, and discussing philosophy; in the other, I was blasting, capturing, and linking portals for the Resistance. I had always been at war with the Enlightenment.
But it wasn’t long before I found myself spending time in unusual ways. Like standing for thirty minutes between floors in the stairwell of the world-famous Ashmolean Library, battling an opponent for a strategically valuable portal. Or at the train station, suspiciously eyeing fellow passengers who were staring at their phones—were they my enemies? Or, when visiting Rome, loitering awkwardly outside the American Embassy portal and drawing the attention of men in suits who were talking into their wrists.
Soon I realized that Ingress wasn’t just enabling me to have fun in new ways—it was also imposing new costs on my life. On one level were the self-regulatory costs: Ingress had become a second to-do list for my life, dipping into my pool of finite cognitive resources. On a deeper level, though, were the opportunity costs I realized I’d been paying. If you think about what you really ‘pay’ when you ‘pay attention,’ you pay with all the things you could have attended to, but didn’t—you pay with all the goals you didn’t pursue, all the actions you didn’t take, and all the possible yous you could have been, had you attended to those other things. Attention is paid in possible futures foregone.
A few weeks later, I got a new phone. When I was re-downloading my apps, I tried to remember why I had started playing Ingress in the first place. What had I wanted it to do for me? To help me have fun, I guess. Now, more aware of the costs, I asked myself that question again. What do I want this app to do for me? To help me have fun, I guess. After much consideration, I quietly declined to reinstall Ingress. If a game is going to make me bring my own boundaries, I’m going to hold it to a higher standard. Fun is not enough.
It’s apparently a universal law that any article on the topic of self-regulation in the face of bewildering technological change must end with some capitulatory sentence that expresses ¯\_(ツ)_/¯ in verbal form. Like: ‘Welp, guess we just gotta find it within ourselves to adapt to this zany new world!’
We must reject this impulse. We must reject the lazy notion that, sorry, it’s just up to users now to bring their own boundaries—to incur significant new self-regulatory costs—if they want to benefit from the digital technologies transforming our world. Similarly, we must reject the conjoined notion that if someone doesn’t like the choices on technology’s menu, their only option is to ‘unplug’ or ‘detox.’ This depressingly common all-or-nothing spirit is not only unsustainable in the digital age—it also requires that we assent to a corrupt and pessimistic vision of technology that sits at odds with its very purpose.
What’s the alternative? We have to engage the design. It’s curious how easy it is to forget that technologies are designed by real people, with real reasons—and that both those people and their reasons can be petitioned by users. Having worked at Google for ten years, I know that most designers genuinely want to make products that will win users’ love and transform their lives. However, I also know that even the most noble values (especially the most noble values) are hard to operationalize, and that designers need our help to understand how to do so.
In response to a BYOB game like Pokémon GO, what should we ask of designers? If the game is to remain BYOB in character, then at minimum we have to ask for increased transparency of goals. We should expect to have answers to questions like: What are the game’s goals for me? How do I know this for sure? Do those goals align with my own? For instance: let’s say Pokémon GO helps you take more steps each day, and that’s why you play it. Great—but is that what the game’s actually designed to maximize? If not, then how do we take that from being a design effect to being a design reason?
The other option is to ask that the game provide new boundaries of space and/or time to compensate for the ones it took away, so that it’s no longer BYOB at all. For example, the design could incorporate mechanisms that let you specify where, when, and how you want to play the game. Helping you ‘fence off’ the game into a subset of life again would minimize the new self-regulatory responsibilities it asks you to take on, enabling you to fit the game into your life in the way you want. To be sure, engaging with design in this way isn’t easy, and there are many headwinds against doing it well. It may be a long time before we achieve the sort of feedback loops with designers we ultimately need (if in fact we ever do).
Until then, by all means, give Pokémon GO a whirl. But do so knowing that you’ll have to bring your own boundaries to it—and that in the end, you may not be able to. If you can’t, it’s not your fault—because why should we expect the unoptimized game of life to be able to compete with a game of pure, engineered fun?
And yet, in the end, the games we choose do matter: because when we reach the end of that game—the Big Game—and we think back on all the side quests and microgames we played along the way, how many of them, even if really fun, will we consider to have been time well spent? You and I will no doubt answer that question in different ways, and by the light of different reasons. Yet for both of us, the answer will depend on whether, when a wild game first appeared, we asked of it the really important questions—whether we asked what we wanted it to do for us. In this Poké-moment, spectacle and novelty can easily obscure the fact that there are many, many such questions to ask. But we gotta ask ’em all.
* Note: this article was first published online at Quillette magazine.
Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book of last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”
I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.
It’s time to bring ambivalence back.