Ambient Intelligence

Written by Stephen Rainey

An excitingly futuristic world of seamless interaction with computers! A cybernetic environment that delivers what I want, when I want it! Or: a world built on vampiric databases, fed on myopic accounts of movements and preferences, loosely related to persons. Each is a possibility given ubiquitous ambient intelligence.

Smart technologies and the Internet of Things (IoT) are contemporary instances of ambient intelligence. In the late 1990s, Philips and Palo Alto Ventures came up with this idea to characterise a future in which technologies invisibly interact with and adapt to human needs. This version of techno-utopianism included technologies that anticipate what we might want and act autonomously to curate the environment around us. Ambient intelligence would be a way for human environments to be optimised for what we want. But such technologies also make otherwise predictable elements of outward reality unpredictable.

An unfortunate socio-political preoccupation with security, and an unguided technological turn toward powerful handheld devices, have resulted in ambient intelligent sensor networks logging a great deal of what we do. Our phones collect and share all manner of data with anonymous databases all over the place. Smart objects, a class of technology-enabled devices that include sensors and artificial intelligence, use data to react to or predict user preferences. There are also applications in marketing, such as smart billboards, which aim to read the physical characteristics of passers-by in order to tailor advertising to them.

In all cases, sensors are deployed to record the environment, process the data derived from the recording, and select features relevant for some application. As these sensor networks include our phones, the public at large is both being monitored and providing the sensors. As data on our movements and activities is amassed, it becomes increasingly valuable to those with the capacity to crunch the numbers.
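To make that record-process-select pattern concrete, here is a toy sketch in Python. Everything in it is a hypothetical stand-in: the sensor reading, the processing step, and the downstream applications are illustrations, not any real deployment.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Reading:
    """One raw sample from an ambient sensor (all fields illustrative)."""
    timestamp: datetime
    lat: float
    lon: float
    noise_db: float


def record() -> list[Reading]:
    # Recording: in a real deployment this would poll cameras, microphones,
    # or the location sensors in passers-by's phones.
    return [Reading(datetime.now(timezone.utc), 51.5080, -0.1281, 62.0)]


def process(readings: list[Reading]) -> list[dict]:
    # Processing: turn raw samples into structured observations,
    # here by bucketing coordinates into coarse grid cells.
    return [
        {"cell": (round(r.lat, 3), round(r.lon, 3)), "noise_db": r.noise_db}
        for r in readings
    ]


def select_features(observations: list[dict], application: str) -> list[dict]:
    # Feature selection: keep only the fields the application cares about.
    wanted = {"advertising": ("cell",), "security": ("cell", "noise_db")}[application]
    return [{key: obs[key] for key in wanted} for obs in observations]


if __name__ == "__main__":
    print(select_features(process(record()), application="advertising"))
```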

The varieties of sensor data and processing applications are not yet wholly integrated with one another. But this snapshot suggests the way in which one form of ubiquitous ambient intelligence is emerging. This sort of data recording and synthesis allows unprecedented quantification of social environments and the people in them. But the techno-utopian aim of optimising reality for human desires is missing. The data collection is happening, but no rationale has emerged.

The most likely ways in which ambient intelligence will capture public space are via security and economic surveillance. Police forces will seek as much data as they can on those inhabiting public spaces, in the name of predicting and preventing crime or disorder. The Chinese social credit system is one possible dystopian endpoint. Companies will want similarly voluminous data from which to build consumer profiles and generate means through which to nudge us toward their doors.

As long as someone is snapping a selfie and putting it on the web, the scene they capture is a technological interface. It’s a source of data for open-ended processing. Clearview AI scraped internet picture archives and allowed any customer to search them using facial recognition technology. This could in principle supply a visual record of anyone’s presence in any location, at any time, as long as they were present in a photo. The advantages of such latent records of activity stay with unaccountable groups, such as security forces or those leading tech companies. Only the risks remain public.

We may find ways to cope with environments that actively scan us, or that inform on our movements and habits to anonymous databases. But these intrusions on our freedom are nonetheless difficult to justify. For one thing, in the case of data monetised for corporate gain, our very being around is objectified, commodified, and enriching for a few. It’s strangely grotesque. And it can’t be resisted in the same way facial recognition might be, through wearing masks to baffle cameras. Such rent-seeking is based on our mere presence. To resist is to accept alienation from the space as the price.

The smooth functioning of security and economic systems ought to be predicated on how they interact with and account for the wider social world they serve. Ambient intelligence gets things backward.

Two initial steps in resisting this would be (a) to point ambient intelligent data processing toward user devices, away from cloud databases, and (b) to geo-fence public spaces from social media imaging. With these data removed from wider databases and corporate interests, the security and market value of their collection would be diminished.

In terms of personal data, device-based processing would (literally) put control back in users’ hands. In terms of social imaging, geo-fencing would be a good way to update outdated photography laws. Current UK legislation widely permits photography in public spaces. But that permissive approach was drafted for times of physical photographic media: film, chemical processing, and private picture ownership. Now, take a walk past the Bodleian Library or through Trafalgar Square and you’re likely to appear in hundreds of pictures, and a fair few live video streams. A far cry from holiday snaps of old, likely to go largely unseen in albums.
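To make the geo-fencing half of the proposal concrete, here is a minimal sketch, assuming photos carry GPS metadata and that protected public spaces are published as centre points with radii. The zone list, the coordinates, and the may_upload function are all hypothetical illustrations, not an existing API; the point is only that the check can run on the device, before anything reaches a cloud database, which is also the point of part (a).

```python
# A minimal sketch of a device-side geo-fence check (all zones hypothetical).
import math

# Hypothetical registry of geo-fenced public spaces:
# name -> (latitude, longitude, radius in metres).
PROTECTED_ZONES = {
    "Trafalgar Square": (51.5080, -0.1281, 150),
    "Bodleian Library": (51.7540, -1.2540, 100),
}

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def may_upload(lat: float, lon: float) -> bool:
    """Return False if a photo's GPS tag falls inside any geo-fenced zone.

    The check runs on the device itself, so the location never has to
    leave the user's hands for the decision to be made.
    """
    return not any(
        haversine_m(lat, lon, zone_lat, zone_lon) <= radius
        for zone_lat, zone_lon, radius in PROTECTED_ZONES.values()
    )


if __name__ == "__main__":
    print(may_upload(51.5080, -0.1281))  # False: centre of Trafalgar Square
    print(may_upload(51.5170, -0.1281))  # True: roughly a kilometre north
```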

We ought to protect the shared interests we have in public spaces. Systems hoping to profit from our general living should not get much priority. If we cut out the data streams that would furnish these systems, we could cut out the incentive to deploy them at all.


2 Comments on this post

  1. “With these data removed from wider databases and corporate interests, the security and market value of their collection would be diminished.”

    Such statements do not ring entirely true. If a given set of data is more difficult to collect or process, the price of that data increases. Yes, the collection, subsequent collation, and then use of people’s data is frequently a serious problem, as many of the ways in which that data is consumed become detrimental to the individual and the social groups within which they live. Blocking one part of a chain of events may stop, delay, or mask the events; but it only treats a symptom.

    In a similar way to Peter Singer’s shallow pool example in the Vaccine Passports post, these things only work if everybody consistently applies the same worldview to every situation. It is sadly easy to imagine that many people, seeing the shallow muddy pool with a child drowning in it, would merely stand to one side shouting for help rather than directly assisting; others would pass by, rationalising to themselves that they were going for help; and yet others could completely misunderstand what they were seeing and hearing. Just look to past news articles reporting police officers looking on as a person drowned. And those physically visible situations are clearly simpler and more easily understood than data collection and use across a diverse society, where a great many serious compromises happen to both individuals and social groups. A narrow view focusing on a single aspect may help certain individuals or social groups but does not help society as a whole.

    Looking at those regulated, publicly visible establishments whose photography bans have not worked in the past, I do not think a regulated photography ban for public spaces would work in the expected way.

    1. Thanks for this. The issue with data is the unpredictable nature of its potential uses, and the ways in which these can suddenly morph at scale. ‘Ordinary’ activities like taking photos, when considered from a data perspective, can be seen as opening oneself and others to these potentials, hence to data risks. The ethical aspects of this include that one can profit from exposing others to these risks. Throttling data collection or production through means like geo-fencing physical areas from detailed data transmission forecloses on these elements of risk. And it makes the profiteering (the rent-seeking) trickier and more costly. I’d think that would help collectively, as no individual need know if, or how, or when data is potentially compromising their liberty, or creepily lining another’s pockets. It would simply cost more for the data-collector to get the material they want. In general there would be a diminished risk from data. I am not sure about the shallow pool analogy. I would think this would be more along the lines of opting in or out of organ donation: right now there’s no opt-in or opt-out question regarding data and public spaces. I’d be in favour of a requirement to opt in, which, barring some massive re-think of the data economy and an awareness-raising exercise, would effectively be impossible, and scupper the whole thing.
