Friend AI: Personal Enhancement or Uninvited Company?

written by Christopher Register

For 99 USD, you can now pre-order a friend—or rather a Friend, a device designed to be an AI friend. The small, round, scallop-sized device contains AI-powered software and a microphone, and it is meant to be worn on a lanyard around the neck at virtually all times. The austere product website says of Friend:

“When connected via bluetooth, your friend is always listening and forming their own internal thoughts. We have given your friend free will for when they decide to reach out to you.”

Whether spontaneously or when prompted, the device communicates with its users via text. While the website claims that recorded information is encrypted, it’s not clear how long the data may be stored or how that data may be processed by the device. (The company has not yet responded to a request for answers to these questions.)

Already, humans are interacting with, and even forming quasi-relationships with, AI programs and devices–whether romantic, friendly, or otherwise. We should expect the number and variety of such products to increase over the next decade. As our world becomes ever more saturated with AI systems that are listening, watching, and ‘thinking’ about us, how will our privacy change?

There are two ways of thinking about Friend that can serve as models for assessing its potential privacy impact. On the first, Friend is perhaps best thought of as a kind of technological enhancement of the user. On the second, Friend is perhaps best thought of as a new individual that will be party to our actions and interactions.

On the first model, we think of Friend as a kind of perceptual and cognitive enhancement of the user. The device listens to the world around the user and processes what it hears. The user may then be able to reference what the Friend has heard and inferred, thereby increasing the ability of the user to know and remember their auditory environment and whatever else can be learned from auditory information.

Already, this model highlights potential privacy impacts: if the Friend device is better able to detect, process, and store information than the human user, then by virtue of possessing a Friend, the user will be able to know more about what goes on around them. Often, we say things out of (human) earshot with the intention of not being heard, and the limits of human attention mean that we can often say things within earshot with the reasonable expectation that we nevertheless will not be overheard or remembered (such as when diners at a nearby table are absorbed in their own conversation). As our environment becomes populated with AI Friends, these expectations–and our conventional boundaries of privacy–erode.

On the second model, where we think of the Friend as a distinct observing party, the privacy impact may be greater. If you confess a secret to your friend who is wearing a Friend, then you may be subjecting yourself to whatever ‘thoughts’ and ‘judgments’ the Friend is capable of. While it may be a stretch to use such mentalistic terms to describe the computational machinations of current AI, it’s not clear how much that matters. If you think the AI may be judging you, will you hesitate to share? Or, if an AI can mimic having judgments in a way that is convincing to the user, it may have the same effect as if it were genuinely passing judgment.

There’s a social trope about losing a friend or family member to whispers in their ear, such as when a loved one is swept up in a less-than-copacetic romance. If your long-time friend listens to whispers about you from their new Friend, how could that impact your friendship? There is a real worry that, even if current AI are not actually thinking or judging, they may nevertheless affect our social landscape as if they were. Could an increasing prevalence of AI Friends spoil interactions between human friends? Could Friendships erode friendships?

More distantly, it’s possible that future iterations of AI will indeed be capable of genuine thought and judgment. In this more radical situation, it may be morally obligatory to seek and acquire consent before bringing Friends into someone else’s home, or to announce when Friends are present in the workplace. It’s not easy to predict the ways that Friends may change our social and ethical landscape.

One suggestion is that people should treat their Friends as though they were genuine people, such as by not bringing them along to a social gathering uninvited. That rule of thumb might be worth implementing now, since even mindless Friends are not inert. It’s true that following the rule may be clunky or awkward, especially in the near future. Even so, it’s not reasonable to assume we can integrate these devices into our lives without friction. We owe something to the friends we already have.


For an in-depth exploration of the privacy impact of human-AI relationships, see our new preprint here.
