
Judgebot.exe Has Encountered a Problem and Can No Longer Serve

Written by Stephen Rainey

Artificial intelligence (AI) is widely anticipated to have the potential to revolutionise traditional fields of knowledge and expertise. In some quarters, this has led to fears about the future of work, with machines muscling in on otherwise human work. Elon Musk is rattling cages again in this context with his imagined ‘Teslabot’. Reports on the future of work have extended these replacement fears to administrative jobs, service and care roles, manufacturing, medical imaging, and the law.

In the context of legal decision-making, a job well done includes reference to prior cases as well as statute. This is, in part, to ensure continuity and consistency in legal decision-making. The more that relevant cases can be drawn upon in any instance of legal decision-making, the better the prospects for good decision-making. But given the volume of legal documentation and the passage of time, there may be too much material for legal practitioners to comprehend fully.


Ambient Intelligence

Written by Stephen Rainey

An excitingly futuristic world of seamless interaction with computers! A cybernetic environment that delivers what I want, when I want it! Or: a world built on vampiric databases, fed on myopic accounts of movements and preferences, loosely related to persons. Each is a possibility given ubiquitous ambient intelligence.

Oxford Uehiro Prize in Practical Ethics: What, if Anything, is Wrong About Algorithmic Administration?

This essay received an honourable mention in the undergraduate category.

Written by University of Oxford student, Angelo Ryu.


Introduction

The scope of modern administration is vast. We expect the state to perform an ever-increasing number of tasks, including the provision of services and the regulation of economic activity. This requires the state to make a large number of decisions in a wide array of areas. Inevitably, the scale and complexity of such decisions stretch the capacity of good governance.

In response, policymakers have begun to implement systems capable of automated decision making. For example, certain jurisdictions within the United States use an automated system to advise on criminal sentences. Australia uses an automated system for parts of its welfare program.

Such systems, it is said, will help address the costs of modern administration. It is plausibly argued that automation will lead to quicker, more efficient, and more consistent decisions – that it will ward off a return to the days of Dickens’ Bleak House.

Oxford Uehiro Prize in Practical Ethics: Why Is Virtual Wrongdoing Morally Disquieting, Insofar As It Is?

This essay was the winning entry in the undergraduate category of the 6th Annual Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Eric Sheng.

In the computer game Red Dead Redemption 2 (henceforward, RDR2), players control a character in a virtual world. Among the characters represented by computer graphics but not controlled by a real-world player are suffragettes. Controversy arose when it became known that some players used their characters to torture or kill suffragettes. (One player’s character, for example, feeds a suffragette to an alligator.) In this essay, I seek to explain the moral disquiet – the intuition that things are awry from the moral perspective – that the players’ actions (call them, for short, ‘assaulting suffragettes’) provoke. The explanation will be an exercise in ‘moral psychology, philosophical not psychological’:[1] I seek not to causally explain our disquiet through the science of human nature, but to explain why things are indeed awry, and thus justify our disquiet.

My intention in posing the question in this way is to leave open the possibilities that our disquiet is justified although the players’ actions are not wrong, or that it’s justified but not principally by the wrongness of the players’ actions. These possibilities are neglected by previous discussions of virtual wrongdoing that ask: is this or that kind of virtual wrongdoing wrong? Indeed, I argue that some common arguments for the wrongness of virtual wrongdoing do not succeed in explaining our disquiet, and sketch a more plausible account of why virtual wrongdoing is morally disquieting insofar as it is, which invokes not the wrongness of the players’ actions but what these actions reveal about the players. By ‘virtual wrongdoing’ I mean an action by a player in the real world that intentionally brings about an action φV by a character in a virtual world V such that φV is wrong-in-V; and the criteria for evaluating an action’s wrongness-in-V are the same as those for evaluating an action’s wrongness in the real world.[2]

Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.

By Carissa Véliz

This article was originally published in New Statesman America

Making Ourselves Better

Written by Stephen Rainey

Human beings are sometimes seen as uniquely capable of enacting life plans and controlling our environment. Take technology, for instance; with it we make the world around us yield to our desires in various ways. Communication technologies and global transport, for example, have the effect of practically shrinking a vast world, making hitherto impossible coordination possible among a global population. This contributes to a view of human-as-maker, or ‘homo faber’. But taking such a view can risk minimising human interests that ought not to be ignored.

Homo faber is a future-oriented, adaptable, rational animal, whose efforts are aligned with her interests when she creates technology that enables a stable counteraction of natural circumstance. Whereas animals are typically seen to have well-adapted responses to their environment, honed over generations, human beings appear to have instead a general and adaptable skill that can emancipate them from material, external circumstances. We are bad at running away from danger, for instance, but good at building barriers to obviate the need to run. The protections this general, adaptable skill offers are inherently future-facing: humans seem to seek not to react to, but to control, the environment.


Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data – whether from research contexts, medical device use, or the growing consumer brain-tech sector – raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding that would provide better protection for such data.

In medicine, the use of brain-reading devices is increasing, e.g. brain-computer interfaces that afford communication or control of neural and motor prostheses. But there is also a range of non-medical devices in development, for applications from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance into the field and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, it has been joined by Facebook, which wants to develop a means of controlling devices directly with data derived from the brain (to be developed by its not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it’s likely to be based on recording and stimulating the brain.


Better Living Through Neurotechnology

Written by Stephen Rainey

If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made for such neurotechnologies is that some can provide special access to normally hidden representations of consciousness. By recording, processing, and making operational brain signals, we are promised greater understanding of our own brain processes. And since every conscious process is thought to be enacted, subserved, or realised by a neural process, we thereby gain greater understanding of our consciousness.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.

This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties?

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Véliz

Crossposted from Slate.

At the moment, artificial intelligence may have a perfect memory and be better at arithmetic than we are, but it is clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. assistants’ failure to understand what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: they lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”


The ‘Killer Robots’ Are Us

Written by Dr Michael Robillard

In a recent New York Times article Dr Michael Robillard writes: “At a meeting of the United Nations Convention on Conventional Weapons in Geneva in November, a group of experts gathered to discuss the military, legal and ethical dimensions of emerging weapons technologies. Among the views voiced at the convention was a call for a ban on what are now being called “lethal autonomous weapons systems.”

A 2012 Department of Defense directive defines an autonomous weapon system as one that, “once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.””

