written by Christopher Register
You can now pre-order a friend, or rather a Friend: a device designed to be an AI friend. The small, round device contains AI-powered software and a microphone, and it is meant to be worn on a lanyard around the neck at virtually all times. The austere product website says of Friend:
“When connected via bluetooth, your friend is always listening and forming their own internal thoughts. We have given your friend free will for when they decide to reach out to you.”
Whether spontaneously or when prompted, the device communicates with its users via text. While the website claims that recorded information is encrypted, it’s not clear how long that data may be stored or how it may be processed. (The company has not yet responded to a request for answers to these questions.)
Already, humans are interacting with, and even forming quasi-relationships with, AI programs and devices, whether romantic, friendly, or otherwise. We should expect the number and variety of such products to increase over the next decade. As our world becomes ever more saturated with AI systems that are listening, watching, and ‘thinking’ about us, how will our privacy change?
There are two ways of thinking about Friend that can serve as models for assessing its potential privacy impact. On the first, Friend is best thought of as a kind of technological enhancement of the user. On the second, Friend is best thought of as a new individual that will be party to our actions and interactions.
On the first model, we think of Friend as a kind of perceptual and cognitive enhancement of the user. The device listens to the world around the user and processes what it hears. The user may then be able to reference what the Friend has heard and inferred, increasing their ability to know and remember their auditory environment and whatever else can be learned from auditory information.
Already, this model highlights potential privacy impacts: if the Friend device is better able to detect, process, and store information than the human user, then by virtue of possessing a Friend, the user will be able to know more about what goes on around them. Often, we say things out of (human) earshot with the intention of not being heard, and the limits of human attention mean that we can often say things within earshot with the reasonable expectation that we nevertheless will not be overheard or remembered (such as when diners at a nearby table are absorbed in their own conversation). As our environment becomes populated with AI Friends, these expectations, and with them our conventional boundaries of privacy, erode.
On the second model, where we think of the Friend as a distinct observing party, the privacy impact may be greater. If you confess a secret to your friend who is wearing a Friend, then you may be subjecting yourself to whatever ‘thoughts’ and ‘judgments’ the Friend is capable of. It may be a stretch to use such mentalistic terms for the computational machinations of current AI, but it’s not clear how much that matters. If you think the AI may be judging you, will you hesitate to share? And an AI that merely mimics judgment convincingly may have the same effect on the user as one that genuinely passes judgment.
There’s a social trope about losing a friend or family member to whispers in their ear, such as when a loved one is swept up in a less-than-copacetic romance. If your long-time friend listens to whispers about you from their new Friend, how could that impact your friendship? There is a real worry that, even if current AI are not actually thinking or judging, they may nevertheless affect our social landscape as if they were. Could an increasing prevalence of AI Friends spoil interactions between human friends? Could Friendships erode friendships?
More distantly, it’s possible that future iterations of AI will indeed be capable of genuine thought and judgment. In this more radical situation, it may be morally obligatory to seek and acquire consent before bringing Friends into someone else’s home, or to announce when Friends are present in the workplace. It’s not easy to predict the ways that Friends may change our social and ethical landscape.
One suggestion is that people should treat their Friends as though they were genuine people, such as by not bringing them along to a social gathering uninvited. That rule of thumb might be worth implementing now, since even mindless Friends are not inert. It’s true that following the rule may be clunky or awkward, especially in the near future. Even so, it’s not reasonable to assume we can integrate these devices into our lives without friction. We owe something to the friends we already have.
For an in-depth exploration of the privacy impact of human-AI relationships, see our new preprint here.
“More distantly, it’s possible that future iterations of AI will indeed be capable of genuine thought and judgment.” Sorry, what evidence do you have for this? You are suggesting that computers will be able to think and have “genuine thought”. What do you mean?
That’s correct. See these for recent assessments of consciousness in current generations of AI, which would likely be closely connected to ‘thought’ or ‘judgment’:
https://philpapers.org/archive/CHACAL-3.pdf
https://arxiv.org/pdf/2308.08708
My claim was only a possibility claim about future generations of AI, so it’s rather modest. I suspect LLMs are not conscious and cannot think, though I don’t think this is obvious.
I know Chalmers’s work, but I’m not convinced by his argument in the paper you cite. I have not detected any signs of consciousness or ‘thought’ in any of the LLMs I have tested, nor indeed do I expect to see any given the technology’s specifications. I’m still with Turing: the question of whether machines can think is itself too meaningless to deserve discussion.
My concern is that the prediction that machines will think and be conscious sometime in the future has been made, and is still being made, far too often. It has acquired the status of a truism in too many minds, which can have the effect of making discussions about machine intelligence meaningless. Even if we assume that machines will become conscious, which I agree can’t be excluded, their consciousness is not going to be anything like human consciousness. Surely philosophers need to address this issue (among others)? What is more probable is that ubiquitous insentient machine intelligence will increasingly become the epitome of intelligence. Those working to develop AGI are engaged in this project and will probably, as Turing predicted, be able to claim that “machines can think” without fear of contradiction. Turing thought this would happen by about the end of the 20th century. All in all, not a bad guess.
This piece raises some important, timely questions on how AI might influence our social and ethical landscapes, especially with devices like Friend.
While it may be a stretch to expect current AI to achieve genuine thought or judgment, the potential for these devices to impact relationships is very real. Even as they mimic human interactions, they could alter our dynamics in subtle ways—shaping what we feel comfortable sharing, or even shifting our trust within friendships.
As a professional in Distance Learning, I see parallels here with how technology transforms boundaries, requiring us to stay mindful of both the possibilities and limitations of AI. It’s a thought-provoking reminder of how essential it is to thoughtfully integrate AI into our lives, respecting both the tool’s potential and our human connections.
The Friend product appears to highlight the ongoing learning curve for AI, and the linked ‘Privacy and Awareness in Human-AI Relationships’ preprint highlights the generic difficulties currently faced. Friend achieves more than hearing aids for the hard of hearing, which amplify to the point that all the surrounding sound confusingly becomes one, so that differentiation is seen as a necessary skill to learn.
Considered in that way, the inclusion of transmission principles within the privacy paradigm is a consequence of all social communication (e.g. language, communication at a distance, use of an intermediary). That inclusion is brought about by a purer focus on the social aspects of privacy. While such considerations include the individual, what I consider the most important psychological aspect of individual privacy, namely focus (allowing for freedom of thought and all the other aspects of development, progress, and self), becomes diluted or ignored when those elements are considered solely in the context of social issues. Among social beings like humans, the social focus allows a discourse in which advantages and disadvantages are displayed to every interested party. But that most basic element, allowing an individual to freely focus, is generally denigrated or denied, if it is considered at all (probably because of concerns about anarchy, which push debates toward various forms of social structure or control, and which feelings of needing to belong then most often cause to prevail).
The whole area of private communication becomes paradoxical when the worldviews of likely recipients are unknown and the mostly unacknowledged privacy distance principle is applied, as is that constant, trust. Trust and truth have been philosophically analysed in the past, but trust is mostly reduced to a superficial concept when deployed in the immediate sense of privacy. Consequently, any privacy deceit brings a finer focus on trust across the wider aspects of whatever contextual settings are involved. While various transgressions of trust may be used as tools by power players with particular agendas, they can cause irreparable (shattering) levels of damage, especially among those most fatigued, where ongoing resentment may build up. Such situations often drive toward the anonymous use of what is termed personal data for disparate and unconnected purposes, deploying the socially oriented part of privacy well within the area of secrecy, so that trust in the collection mechanism may ostensibly be maintained.
To explain that laminated (shattered) view: I personally view contextuality like laminated glass, where each transparent contextual glass layer is given strength by the transparent adhesive layers and the other glass layers. From many worldviews, however, contextuality looks more like a shattered car windscreen, where the contexts are contained within each fragment, held together weakly by the adjoining polyurethane layers, each shattered fragment masking the whole and its potential, and constantly teetering on the edge of falling to pieces. A clear-eyed comprehension of the multiplicity of contextual layers, with their privacy facets and relationships and the justifications for them, allows an appreciation of the complete structure, each layer with its own strengths and weaknesses, formative of the whole. Full comprehension and appreciation replaces the need for mere contextual respect and can allow privacy to function in its more natural state without compromising individual or social freedoms.
Clearly defined ethical and constrained moral frameworks no doubt support or lead toward regulatory regimes and create particular worldviews supportive of defined social structures, but do they form, promote, or progress outlooks that could encompass all? Nissenbaum’s early theories may reflect and be germinal within the legal and regulatory arenas, which can facilitate programming-type actions, and they do lead along a particular path; but beyond that, as a more generic morality prevails, things do not stack up the same way. It is that generic moral arena (the completely undamaged laminated glass, as opposed to the more logical, contextually focused ethical aspects of each fragment) which appears to present AI with its publicly stated difficulties, created by initially, and necessarily, focusing within the popularly viewed segments of contextually shattered glass. With time perceived as an irrelevance from the human perspective, Friend contains a potential which AI would find most useful in an attempt to progress toward that more comprehensive view.