Bring Your Own Boundaries: Pokémon GO and the Challenge of Ambient Fun

By James Williams (@WilliamsJames_)
(Words: 2500 | Reading time: 12 minutes | Gross misuses of the ‘Poké-‘ prefix: 6)

1.

I’m not a Pokémaster; I haven’t ‘caught them all.’ If you were to hold a gun to my head and force me to answer Poké-trivia (as one does), my strategy would probably consist of murmuring ‘Pikachu?’ in varied intonations of anger and desperation.

Yet as someone who cares about the ethics of persuasion and technology, I’ve found the Poké-mania of the past couple of weeks really something to behold. In a matter of days after the so-called ‘augmented-reality’ smartphone game Pokémon GO launched, it rampaged up the app charts and quickly amassed more daily active users in the US than Twitter.

The slogan of the Pokémon franchise is ‘Gotta catch ‘em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets. (Emails: Gotta read ‘em all! Posts: Gotta like ‘em all!)

What’s noteworthy about the launch of Pokémon GO isn’t that its players are suddenly finding dead bodies in creeks, inadvertently flash-mobbing Central Park, falling prey to Poké-scams, or doing anything else that publishers can cite to catch all the clicks they can. Rather, it’s that Pokémon GO signals the first mainstream adoption of a type of game I’ve come to call ‘BYOB’—that is, games that require you to ‘Bring Your Own Boundaries.’

As such, this Poké-moment (sorry) presents us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.

 

2.

One way of looking at games is as sets of constraints. When I play a game, I’m turning my experience over to some particular configuration of constraints designed by someone whom I (hopefully) trust with my attention, and which, if successful, will enable me to symbolically grapple with psychologically resonant aspects of my individual and/or social world. When games do this well, they perform an essential service for society.

Yet there’s a certain fundamental type of constraint that’s been present in almost all games throughout history: deep constraints of space and/or time—the game’s ultimate ‘boundaries’—that confine the game to some fenced-off region of human life. (e.g.: ‘Friday, 7:00 pm, Port Meadow. Be there.’) Fencing off our games from the rest of life means they can represent our psychological world without actually becoming it. In this way, these fundamental ‘boundaries’ function as extensions of our self-regulation embedded in the environment itself.

However, when these boundaries of time and space disappear—when the game is always on and always with you, a parallel rather than a punctuated experience—the regulatory responsibilities they bore are transferred from the environment onto you. You must now actively define and continually enforce (if you can) precisely where and when the game shall be afoot. There’s no support structure to lean on anymore; you have to bring your own boundaries.

‘Bringing your own boundaries’ means expending more of your scarce cognitive resources to achieve the same level of self-regulation you were able to achieve previously. In a given day, we all have a finite amount of cognitive effort we can expend—a finite number of decisions we can make, a finite amount of willpower we can exercise—before we become depleted, weak of will (or ‘akratic’), and more vulnerable to persuasive influences in our environment. In this way, the removal of a constraint itself becomes a constraint.

To be sure, many BYOB technologies already exist and thrive in our information environment. Ubiquitous computing, especially in collision with the so-called ‘attention economy,’ has collapsed spatio-temporal boundaries in many areas of our lives, resulting in the imposition of extensive cognitive and self-regulatory costs that we’re still just beginning to understand. All this makes the mainstream adoption of BYOB gaming more, not less, significant.

However, BYOB games deserve special ethical attention for two reasons. For one, games typically have no pretense of instrumentality. Games are designed to be immensely fun—maybe even the most fun things in life—yet the rest of life is so very not designed that way. Games rarely have to justify their existence any further than this. As a result, it’s easier for us to be less explicit about the net value we expect games to bring to our lives as a whole.

The other reason is that digital games today can be designed to exploit our psychological vulnerabilities far more effectively than in the past. Pokémon GO, for example, makes extensive use of a technique known as random reward scheduling, which involves randomizing the rewards you give a user for taking some particular action (e.g. spinning the circles at PokéStops to get loot) in order to induce them to take that action even more. This is the same psychological mechanism at work in the design of slot machines, and a major factor in their addictive character.
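To make the mechanism concrete, here is a minimal sketch of a variable-ratio reward schedule, written in Python purely for illustration. The item names and drop probabilities are invented; Niantic doesn’t publish its actual loot tables.

```python
import random

# Hypothetical loot table for illustration only; the real drop rates aren't public.
LOOT_TABLE = [
    ("Poke Ball", 0.60),
    ("Potion",    0.20),
    ("Revive",    0.10),
    ("Egg",       0.05),
    ("Nothing",   0.05),
]

def spin_stop():
    """Return one reward, chosen at random according to the weights above."""
    roll = random.random()
    cumulative = 0.0
    for item, probability in LOOT_TABLE:
        cumulative += probability
        if roll < cumulative:
            return item
    return LOOT_TABLE[-1][0]  # guard against floating-point rounding

# The payout varies unpredictably from spin to spin, so every spin carries the
# pull of a possible jackpot: the same variable-ratio schedule that makes slot
# machines so hard to walk away from.
for _ in range(5):
    print(spin_stop())
```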

There are countless other brain-hacks at work in Pokémon GO that appear to capitalize on cognitive quirks such as the endowment effect (you value a Pokémon more when you think you ‘own’ it), the nostalgia effect (thinking about the past makes you more willing to pay money—so if you played Pokémon growing up, watch yourself when buying PokéCoins!), territoriality, social reinforcement, the fear of missing out, and many more. My point here is not that these biases and mechanisms are in themselves bad—in fact, they’re often what make games fun—rather, it’s that games can target them to shape our behavior more effectively than ever.

Ultimately, it’s the combination of these two reasons—games’ persuasive power, and our relative lack of criticality in submitting to them—that makes it especially prudent to invest attention in ethical questions at the emergence of the first widely used BYOB game. Just imagine what the headlines would be if it were a chemical substance, rather than an app, producing this behavior. (‘Vaporeon—Not Even Once.’)

 

3.

As a lifelong gamer, I’m constantly frustrated by the lazy moralizing and lack of imagination in much of the so-called ‘ethical’ criticism of games. So much of it stems from the misunderstanding, if not the fear, of games as a medium.

At the same time, I’ve noticed a tendency among many gamers (though not all) to avoid entertaining any possibility that games can have negative effects (despite the fact, remember, that every technology or medium has some negative effects). I suspect this tendency stems from the outdated feeling that gaming’s value still needs to be justified or defended from assailants, as well as from the in-group signaling value that such defenses and justifications can have within communities of gamers. In any case, while noble in intent, this resistance to criticism in fact holds gaming back from realizing its potential as an art form: taking a medium seriously means asking the hard, transformative questions of it—not to tear it down, but to build it up.

In the case of Pokémon GO, what we have is a situation in which the most popular smartphone app is one that exploits its users’ psychological biases to induce them to physically go to particular places in their environments to perform actions on their phones whose value is at best unclear, and at worst a distraction from their other life goals, presumably all with a view to maximizing their further attentional (and monetary) expenditures. Furthermore, these influences are operative on users at all times and in all places. If alien anthropologists were looking down on this situation, wouldn’t they be quite justified in viewing such a game as one of our most promising control mechanisms?

Yet in response to this situation, the immediate concerns that have dominated the ethical discussion have centered on whether some company might be able to access some of the data on users’ devices. This is insane. It reflects how utterly the overinflated issue of ‘privacy’ has dominated the conceptual space in technology ethics as a whole, as well as how dangerously underprepared we are as a society to have the urgent and important discussions about how to preserve users’ self-determination in environments of high technological persuasion.

 

4.

A few years ago I got really into Ingress, a location-based smartphone game that’s similar to Pokémon GO (and was created by Niantic, the same company). In Ingress, you fight for one of two sides in a perpetual, worldwide war. Your object is to capture virtual ‘portals’ that you can link to…actually, you know what—the details don’t really matter. The point is that soon I was always playing Ingress, wherever I was, and it was really, really fun.

Ingress gave me, consistently and with dopaminergic potency, what my day-to-day life couldn’t: precise goals, meaningful actions, immediate rewards, a clear enemy, social solidarity, and a feeling of advancement. I also found myself walking outside a lot more. As a result, the game quickly became a parallel process of task and goal pursuit running alongside that of my work and research. I felt like a secret agent: in one life, I was reading, writing, and discussing philosophy; in the other, I was blasting, capturing, and linking portals for the Resistance. I had always been at war with the Enlightenment.

But it wasn’t long before I found myself spending time in unusual ways. Like standing for thirty minutes between floors in the stairwell of the world-famous Ashmolean Library, battling an opponent for a strategically valuable portal. Or at the train station, suspiciously eyeing fellow passengers who were staring at their phones—were they my enemies? Or, when visiting Rome, loitering awkwardly outside the American Embassy portal and drawing the attention of men in suits who were talking into their wrists.

Soon I realized that Ingress wasn’t just enabling me to have fun in new ways—it was also imposing new costs on my life. On one level were the self-regulatory costs: Ingress had become a second to-do list for my life, dipping into my pool of finite cognitive resources. On a deeper level, though, were the opportunity costs I realized I’d been paying. If you think about what you really ‘pay’ when you ‘pay attention,’ you pay with all the things you could have attended to, but didn’t—you pay with all the goals you didn’t pursue, all the actions you didn’t take, and all the possible yous you could have been, had you attended to those other things. Attention is paid in possible futures foregone.

A few weeks later, I got a new phone. When I was re-downloading my apps, I tried to remember why I had started playing Ingress in the first place. What had I wanted it to do for me? To help me have fun, I guess. Now, more aware of the costs, I asked myself that question again. What do I want this app to do for me? To help me have fun, I guess. After much consideration, I quietly declined to reinstall Ingress. If a game is going to make me bring my own boundaries, I’m going to hold it to a higher standard. Fun is not enough.

 

5.

It’s apparently a universal law that any article on the topic of self-regulation in the face of bewildering technological change must end with some capitulatory sentence that expresses ¯\_(ツ)_/¯ in verbal form. Like: ‘Welp, guess we just gotta find it within ourselves to adapt to this zany new world!’

We must reject this impulse. We must reject the lazy notion that, sorry, it’s just up to users now to bring their own boundaries—to incur significant new self-regulatory costs—if they want to benefit from the digital technologies transforming our world. Similarly, we must reject the conjoined notion that if someone doesn’t like the choices on technology’s menu, their only option is to ‘unplug’ or ‘detox.’ This depressingly common all-or-nothing spirit is not only unsustainable in the digital age—it also requires that we assent to a corrupt and pessimistic vision of technology that sits at odds with its very purpose.

What’s the alternative? We have to engage the design. It’s curious how easy it is to forget that technologies are designed by real people, with real reasons—and that both those people and their reasons can be petitioned by users. Having worked at Google for ten years, I know that most designers genuinely want to make products that will win users’ love and transform their lives. However, I also know that even the most noble values (especially the most noble values) are hard to operationalize, and that designers need our help to understand how to do so.

In response to a BYOB game like Pokémon GO, what should we ask of designers? If the game is to remain BYOB in character, then at minimum we have to ask for increased transparency of goals. We should expect to have answers to questions like: What are the game’s goals for me? How do I know this for sure? Do those goals align with my own? For instance: let’s say Pokémon GO helps you take more steps each day, and that’s why you play it. Great—but is that what the game’s actually designed to maximize? If not, then how do we take that from being a design effect to being a design reason?

The other option is to ask that the game provide new boundaries of space and/or time to compensate for the ones it took away, so that it’s no longer BYOB at all. For example, the design could incorporate mechanisms that let you specify where, when, and how you want to play the game. Helping you ‘fence off’ the game into a subset of life again would minimize the new self-regulatory responsibilities it asks you to take on, enabling you to fit the game into your life in the way you want. To be sure, engaging with design in this way isn’t easy, and there are many headwinds against doing it well. It may be a long time before we achieve the sort of feedback loops with designers we ultimately need (if in fact we ever do).
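As a purely hypothetical illustration (no such feature exists in the actual game, and every name and time below is invented), such a boundary mechanism could be as simple as a player-defined schedule that the game consults before surfacing anything at all:

```python
from datetime import datetime, time

# Player-defined play windows; hypothetical, not a real Pokemon GO feature.
PLAY_WINDOWS = [
    (time(12, 0), time(13, 0)),   # lunchtime walk
    (time(18, 30), time(20, 0)),  # evening stroll
]

def game_may_engage(now=None):
    """True only inside the windows the player chose ahead of time."""
    current = (now or datetime.now()).time()
    return any(start <= current <= end for start, end in PLAY_WINDOWS)

# Outside these windows the game would stay silent: no notifications, no
# nearby-creature nudges. The fence lives in the software again, rather than
# in the player's willpower.
if not game_may_engage():
    print("Outside your play window. See you at 18:30.")
```

The same idea extends naturally to space (player-chosen geofences) as well as time.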

Until then, by all means, give Pokémon GO a whirl. But do so knowing that you’ll have to bring your own boundaries to it—and that in the end, you may not be able to. If you can’t, it’s not your fault—because why should we expect the unoptimized game of life to be able to compete with a game of pure, engineered fun?

And yet, in the end, the games we choose do matter: because when we reach the end of that game—the Big Game—and we think back on all the side quests and microgames we played along the way, how many of them, even if really fun, will we consider to have been time well spent? You and I will no doubt answer that question in different ways, and by the light of different reasons. Yet for both of us, the answer will depend on whether, when a wild game first appeared, we asked of it the really important questions—whether we asked what we wanted it to do for us. In this Poké-moment, spectacle and novelty can easily obscure the fact that there are many, many such questions to ask. But we gotta ask ’em all.


6 Comments on this post

  1. Hello,
    Thank you for this text. I enjoyed it, even though I think your argument could have been phrased just as clearly with half the number of words. Also, I couldn’t tell exactly what your recommendation was when you write that we should ‘engage the designers’. Are you recommending an extension of the current laws on the obligation for publishers to warn users about the inherent dangers of video games, perhaps by introducing a new category of dangers relevant to BYOB games? Or are you just inviting developers to have more concern for their audience when designing BYOB games? Also, assuming that warning the audience proves to be an inefficient way of preventing BYOB-specific dangers, can you think of any alternative?

    1. Hi Andrews, you’re very welcome — thanks for taking the time to read & comment. My aim with this exploratory piece was to provide a compass, rather than a map, for this territory. Broadly, I do see promising leverage points for ‘engaging the design’ at all levels of the problem — users, designers, companies, policymakers — and in fact I think most of the principles/proposals of the ‘Time Well Spent’ campaign I work on (which I linked to in the final paragraph) can be pretty directly applied in the context of ‘BYOB’ games. However, as you rightly suggest, any interventions would need to go well beyond merely giving people information about the risks of particular design patterns — that is one approach we already know will not do the trick.

  2. Hello James,
    I agree that warning users does not go far enough. Actually, given how valuable our attention is to private companies operating in a free-market, capitalist environment, I think that anything short of a legal obligation will fail. The reason for this is, roughly, that there is plenty of evidence of blatant failures of the free market to ‘self-regulate’ in almost every sector of the economy, from energy to banking, from the food industry to pharmaceuticals. I see no reason why self-regulation failures would not carry over to, say, the communication and entertainment industry, if this is the industry where ‘attention hijacks’, to coin a phrase sympathetic to the manifesto you linked above, are the most problematic. So if self-regulation is the normative framework closest to positive law, and yet fails, it seems reasonable to think that anything weaker than self-regulation will fail.
    For this reason, I think that anyone sympathetic to your ‘Time Well Spent’ initiative would want to jump straight to the genuine problem to be addressed, namely: how should one define and formally express a legal framework able to deal efficiently with attention hijacks?

  3. I thought this piece was well articulated and thoughtful. I would not cut a word. I like it so much, I’ve read it twice in less than a day.

Comments are closed.