Artificial Intelligence

Hedonism, the Experience Machine, and Virtual Reality

By Roger Crisp

I take hedonism about well-being or welfare to be the view that the only thing that is good for any being is pleasure, and that what makes pleasure good is nothing other than its being pleasant. The standard objections to hedonism of this kind have mostly been of the same form: there are things other than pleasure that are good, and pleasantness isn’t the only property that makes things good.

Continue reading

Judgebot.exe Has Encountered a Problem and Can No Longer Serve

Written by Stephen Rainey

Artificial intelligence (AI) is anticipated by many as having the potential to revolutionise traditional fields of knowledge and expertise. In some quarters, this has led to fears about the future of work, with machines muscling in on otherwise human work. Elon Musk is rattling cages again in this context with his imaginary ‘Teslabot’. Reports on the future of work have included these replacement fears for administrative jobs, service and care roles, manufacturing, medical imaging, and the law.

In the context of legal decision-making, a job well done includes reference to prior cases as well as statute. This is, in part, to ensure continuity and consistency in legal decision-making. The more that relevant cases can be drawn upon in any instance of legal decision-making, the better the possibility of good decision-making. But given the volume of legal documentation and the passage of time, there may be too much for legal practitioners to fully comprehend.

Continue reading

Ambient Intelligence

Written by Stephen Rainey

An excitingly futuristic world of seamless interaction with computers! A cybernetic environment that delivers what I want, when I want it! Or: a world built on vampiric databases, fed on myopic accounts of movements and preferences, loosely related to persons. Each is a possibility given ubiquitous ambient intelligence.

Continue reading

Guest Post: Pandemic Ethics. Social Justice Demands Mass Surveillance: Social Distancing, Contact Tracing and COVID-19

Written by: Bryce Goodman

The spread of COVID-19 presents a number of ethical dilemmas. Should ventilators only be used to treat those who are most likely to recover from infection? How should violators of quarantine be punished? What is the right balance between protecting individual privacy and reducing the virus’ spread?

Most of the mitigation strategies pursued today (including in the US and UK) rely primarily on lock-downs or “social distancing” and not enough on contact tracing — the use of location data to identify who an infected individual may have come into contact with and infected. This balance prioritizes individual privacy above public health. But contact tracing will not only protect our overall welfare. It can also help address the disproportionately negative impact social distancing is having on our least well off.
Contact tracing “can achieve epidemic control if used by enough people,” says a recent paper published in Science. “By targeting recommendations to only those at risk, epidemics could be contained without need for mass quarantines (‘lock-downs’) that are harmful to society.” Once someone has tested positive for a virus, we can use that person’s location history to deduce whom they may have “contacted” and infected. For example, we might find that 20 people were in close proximity and 15 have now tested positive for the virus. Contact tracing would allow us to identify and test the other 5 before they spread the virus further.
The success of contact tracing will largely depend on the accuracy and ubiquity of a widespread testing program. Evidence thus far suggests that countries with extensive testing and contact tracing are able to avoid or relax social distancing restrictions in favor of more targeted quarantines.
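The co-location deduction described above can be sketched in a few lines of Python. This is a minimal illustration only, not the method of any real tracing system: the visit log, the names, and the one-hour contact window are all invented for the example.

```python
from collections import namedtuple

# A visit record: who was where, and when (time in hours).
Visit = namedtuple("Visit", ["person", "place", "time"])

def trace_contacts(visits, infected, window=1.0):
    """Return people who shared a place with an infected person
    within `window` hours of that person's visit."""
    infected_visits = [v for v in visits if v.person in infected]
    contacts = set()
    for v in visits:
        if v.person in infected:
            continue  # already known positive, not a new contact
        for iv in infected_visits:
            if v.place == iv.place and abs(v.time - iv.time) <= window:
                contacts.add(v.person)
    return contacts

visits = [
    Visit("alice", "cafe", 9.0),   # alice later tests positive
    Visit("bob", "cafe", 9.5),     # overlaps with alice at the cafe
    Visit("carol", "gym", 14.0),   # no overlap with anyone infected
]
print(trace_contacts(visits, {"alice"}))  # → {'bob'}
```

Bob, but not Carol, would then be flagged for targeted testing and quarantine, which is the sense in which tracing substitutes for a blanket lock-down.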

Continue reading

A Sad Victory

I recently watched the documentary AlphaGo, directed by Greg Kohs. The film tells the story of the refinement of AlphaGo—a computer Go program built by DeepMind—and tracks its match against Lee Sedol, an 18-time world Go champion.

Go is an ancient Chinese board game. It was considered one of the four essential arts of aristocratic Chinese scholars. The goal is to end the game having captured more territory than your opponent. What makes Go a particularly interesting game for AI to master is, first, its complexity. Compared to chess, Go has a larger board and many more alternatives to consider per move. The number of possible moves in a given position is about 20 in chess; in Go, it’s about 200. The number of possible configurations of the board is more than the number of atoms in the universe. Second, Go is a game in which intuition is believed to play a big role. When professionals are asked why they played a particular move, they will often respond with something to the effect that ‘it felt right’. It is this intuitive quality that explains why Go is sometimes considered an art, and Go players artists. For a computer program to beat human Go players, then, it would have to mimic human intuition (or, more precisely, mimic the results of human intuition).
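The scale of those numbers is easy to check with a little arithmetic. A rough back-of-the-envelope sketch in Python, using the approximate branching factors quoted above, illustrative game lengths, and the conventional ~10^80 estimate for atoms in the observable universe:

```python
import math

# Back-of-the-envelope game-tree sizes: branching factor ** typical game depth.
chess_positions = 20 ** 40     # ~20 legal moves per position, ~40 moves deep
go_positions = 200 ** 150      # ~200 legal moves per position, ~150 moves deep
atoms_exponent = 80            # conventional ~10^80 atoms in the universe

print(f"chess tree ≈ 10^{math.log10(chess_positions):.0f}")              # → chess tree ≈ 10^52
print(f"go tree    ≈ 10^{math.log10(go_positions):.0f}")                 # → go tree ≈ 10^345
print(f"go / atoms ≈ 10^{math.log10(go_positions) - atoms_exponent:.0f}")  # → go / atoms ≈ 10^265
```

Even on these crude figures, exhaustive search is hopeless for Go, which is why AlphaGo had to approximate the results of intuition rather than brute-force the tree.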

Continue reading

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. Brain-Computer Interfaces that afford communication or control of neural or motor prostheses. But there is also a range of non-medical devices in development, for applications ranging from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entry into the field, and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they are joined by Facebook, who want to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it’s likely to be based on recording and stimulating the brain.

Continue reading

Should PREDICTED Smokers Get Transplants?

By Tom Douglas

Jack has smoked a packet a day since he was 22. Now, at 52, he needs a heart and lung transplant.

Should he be refused a transplant to allow a non-smoker with a similar medical need to receive one? More generally: does his history of smoking reduce his claim to scarce medical resources?

If it does, then what should we say about Jill, who has never touched a cigarette, but is predicted to become a smoker in the future? Perhaps Jill is 20 years old and from an ethnic group with very high rates of smoking uptake in their 20s. Or perhaps a machine-learning tool has analysed her past Facebook posts and Google searches and identified her as ‘high risk’ for taking up smoking—she has an appetite for risk, an unusual susceptibility to peer pressure, and a large number of smokers among her friends. Should Jill’s predicted smoking count against her, were she to need a transplant? Intuitively, it shouldn’t. But why not?

Continue reading

Scrabbling for Augmentation

By Stephen Rainey

Around a decade ago, Facebook users were widely playing a game called ‘Scrabulous’ with one another. It was, in effect, a Scrabble clone, which led to a few legal issues.

Alongside Scrabulous, the popularity of Scrabble-assistance websites grew. Looking over the shoulders of work colleagues, you could often spy a Scrabulous window, as well as one for scrabblesolver.co.uk. The strange phenomenon of easy, online Scrabulous cheating seemed pervasive for a time.

The strangeness of this can hardly be overstated. Friends would routinely pretend to one another that they were superior wordsmiths, each deploying algorithmic anagram solvers. The ‘players’ themselves would do nothing but input data to the automatic solvers. As Charlie Brooker reported back in 2007,

“We’d rendered ourselves obsolete. It was 100% uncensored computer-on-computer action, with two meat puppets pulling the levers, fooling no one but themselves.”
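The kind of solver those sites ran is, at its core, just a multiset comparison: a word is playable if every letter it needs is available on the rack. A minimal sketch in Python, where the tiny hard-coded word list stands in for a real dictionary file:

```python
from collections import Counter

# Stand-in for a real Scrabble dictionary.
WORDS = ["rate", "tare", "tear", "ear", "rat", "art", "ate"]

def playable(rack, words=WORDS):
    """Return the words that can be formed from the letters in `rack`."""
    rack_count = Counter(rack)
    # Counter(w) - rack_count is empty exactly when the rack covers the word.
    return [w for w in words if not (Counter(w) - rack_count)]

print(playable("tarex"))  # → ['rate', 'tare', 'tear', 'ear', 'rat', 'art', 'ate']
print(playable("xyz"))    # → []
```

A real solver adds board constraints and letter scores, but the levers the ‘meat puppets’ were pulling amounted to little more than this.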

Back to the present, and online Scrabble appears to have lost its sheen (or lustre, patina, or polish). But in a possible near future, I wonder if some similar issues could arise.

Continue reading

The ‘Killer Robots’ Are Us

Written by Dr Michael Robillard

In a recent New York Times article Dr Michael Robillard writes: “At a meeting of the United Nations Convention on Conventional Weapons in Geneva in November, a group of experts gathered to discuss the military, legal and ethical dimensions of emerging weapons technologies. Among the views voiced at the convention was a call for a ban on what are now being called “lethal autonomous weapons systems.”

A 2012 Department of Defense directive defines an autonomous weapon system as one that, ‘once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.’”

Follow this link to read the article in full.

Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? Many interesting questions answered by Prof. Colin Gavaghan in this video interview with Dr Katrien Devolder.
