Technology

Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? written by Mahmoud Ghanem

This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Mahmoud Ghanem

The Case For Computer Assisted Ethics

In the interest of rigour, I will avoid use of the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.

Whether we believe that the goal of moral actions ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program in the process of making our moral decisions. After all, carrying out any of the above successfully requires at least:

(1) Access to relevant and accurate data, and

(2) The ability to draw accurate conclusions by analysing such data.

Both of these are things that computers are very good at.
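As a purely illustrative sketch (not anything the essay itself proposes), a few lines of code can show how (1) and (2) might combine: given data about each option's possible outcomes, a simple inference step ranks the options by expected value. Every name and number below is hypothetical.

```python
# Minimal sketch of "computer-assisted ethics" in the narrow sense above:
# step (1) is the data about each option's possible outcomes; step (2) is
# the inference that ranks options by expected value. All figures invented.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one option."""
    return sum(p * v for p, v in outcomes)

def rank_options(options):
    """options: dict mapping option name -> list of (probability, value) pairs."""
    return sorted(options, key=lambda name: expected_value(options[name]), reverse=True)

if __name__ == "__main__":
    # Hypothetical data for a resource-allocation decision.
    options = {
        "fund_vaccination": [(0.9, 100), (0.1, 20)],
        "fund_research":    [(0.3, 400), (0.7, 0)],
    }
    for name in rank_options(options):
        print(name, expected_value(options[name]))
```

Whether expected value is the right quantity to maximise is itself a moral question; the sketch only illustrates the two capacities listed above.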

Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence

Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…

The unbearable asymmetry of bullshit

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine. The official version is forthcoming in the HealthWatch Newsletter; see http://www.healthwatch-uk.org/.

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings — concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.


Guest Post: KILLER ROBOTS AND THE ETHICS OF WAR IN THE 21st CENTURY

Written by Darlei Dall’Agnol[1]


I recently attended the course Drones, Robots and the Ethics of Armed Conflict in the 21st Century at the Department for Continuing Education, Oxford University, which is, by the way, offering a wide range of interesting courses for 2015-6 (https://www.conted.ox.ac.uk/). Philosopher Alexander Leveringhaus, a Research Fellow at the Oxford Institute for Ethics, Law and Armed Conflict, spoke on “What, if anything, is wrong with Killer Robots?” Ex-military Wil Wilson, a former RAF Regiment Officer now working as a consultant in Defence and Intelligence, had been announced to talk on “Why should autonomous military machines act ethically?” but changed his title, on which I will comment below. The atmosphere of the course was very friendly and the discussions illuminating. In this post, I will simply reconstruct the main ideas presented by the main speakers and give my impressions of this important issue at the end.

Blessed are the wastrels, for their surplus could save the Earth

Reposted from an article in The Conversation.

In a world where too many go to bed hungry, it comes as a shock to realise that more than half the world’s food production is left to rot, lost in transit, thrown out, or otherwise wasted. This loss is a humanitarian disaster. It’s a moral tragedy. It’s a blight on the conscience of the world.

It might ultimately be the salvation of the human species.

To understand why, consider that we live in a system that rewards efficiency. Just-in-time production, reduced inventories, providing the required service at just the right time with minimised wasted effort: those are the routes to profit (and hence survival) for today’s corporations. This type of lean manufacturing aims to squeeze costs as much as possible, pruning anything extraneous from the process. That’s the ideal, anyway; and many companies are furiously chasing after this ideal.

Beyond 23andMe’s Shutdown: The Role of the FDA in the Future of Direct-to-Consumer Genetic Testing

Kyle Edwards, Uehiro Centre for Practical Ethics and The Ethox Centre, University of Oxford

Caroline Huang, The Ethox Centre, University of Oxford

An article based on this blog post has now been published in the May – June 2014 Hastings Center Report: http://onlinelibrary.wiley.com/doi/10.1002/hast.310/full. Please check out our more developed thoughts on this topic there!

Twitter, paywalls, and access to scholarship — are license agreements too restrictive?

By Brian D. Earp



I think I may have done something unethical today. But I’m not quite sure, dear reader, so I’m enlisting your energy to help me think things through. Here’s the short story:

Someone posted a link to an interesting-looking article by Caroline Williams at New Scientist — on the “myth” that we should live and eat like cavemen in order to match our lifestyle to that of our evolutionary ancestors, and thereby maximize health. Now, I assume that when you click on the link I just gave you (unless you’re a New Scientist subscriber), you get a short little blurb from the beginning of the article and then–of course–it dissolves into an ellipsis as soon as things start to get interesting:

Our bodies didn’t evolve for lying on a sofa watching TV and eating chips and ice cream. They evolved for running around hunting game and gathering fruit and vegetables. So, the myth goes, we’d all be a lot healthier if we lived and ate more like our ancestors. This “evolutionary discordance hypothesis” was first put forward in 1985 by medic S. Boyd Eaton and anthropologist Melvin Konner …

Holy crap! The “evolutionary discordance hypothesis” is a myth? I hope not, because I’ve been using some similar ideas in a lot of my arguments about neuroenhancement recently. So I thought I should really plunge forward and read the rest of the article. Unfortunately, I don’t have a subscription to New Scientist, and when I logged into my Oxford VPN-thingy, I discovered that Oxford doesn’t have access either. Weird. What was I to do?

Since I typically have at least one eye glued to my Twitter account, it occurred to me that I could send a quick tweet around to check if anyone had the PDF and would be willing to send it to me in an email. The majority of my “followers” are fellow academics, and I’ve seen this strategy play out before — usually when someone’s institutional log-in isn’t working, or when a key article is behind a pay-wall at one of those big “bundling” publishers that everyone seems to hold in such low regard. Another tack would be to dash off an email to a couple of colleagues of mine, and I could “CC” the five or six others who seem likeliest to be New Scientist subscribers. In any case, I went for the tweet.

Sure enough, an hour or so later, a chemist friend of mine sent me a message to “check my email” and there was the PDF of the “caveman” article, just waiting to be devoured. I read it. It turns out that the “evolutionary discordance hypothesis” is basically safe and sound, although it may need some tweaking and updates. Phew. On to other things.

But then something interesting happened! Whoever it is that manages the New Scientist Twitter account suddenly shows up in my Twitter feed with a couple of carefully-worded replies to my earlier PDF-seeking hail-mary:


Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.

Artificial Intelligence and Human Decision-making.

Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computerised is increasing, and as many operational decisions have moral consequences, they can be considered to have a moral component.

One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.

While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)

For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).

Soon we will have systems that will enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own – but are proven to be more accurate on average. To return to combat, we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently with less loss of life. Human judgement added to the loop may prove to be an impairment.
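As a rough sketch of what “proven to be more accurate on average” could mean in practice, one might score the doctor’s and the system’s past recommendations against known outcomes. The cases and labels below are invented placeholders, not real clinical data.

```python
# Minimal sketch: compare the doctor's and the system's recommendations
# against the known outcomes of past cases. All case data are invented.

def accuracy(recommendations, outcomes):
    correct = sum(1 for r, o in zip(recommendations, outcomes) if r == o)
    return correct / len(outcomes)

past_outcomes  = ["A", "B", "A", "A", "C", "B"]   # ground truth (hypothetical)
doctor_choices = ["A", "B", "C", "A", "C", "A"]
system_choices = ["A", "B", "A", "A", "B", "B"]

print("doctor:", accuracy(doctor_choices, past_outcomes))   # 4/6 ≈ 0.67
print("system:", accuracy(system_choices, past_outcomes))   # 5/6 ≈ 0.83
```

Being better on such an average, of course, says nothing yet about how the system fails on the cases it gets wrong – which is where the rest of this post is headed.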

Moral Outsourcing

At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.

Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people whose world view is roughly similar to my own, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.

We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.

So Why Shouldn’t I Trust The Machine?

So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”

It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: make a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human’s 99.99% of the time – unclouded by bias, with vastly more information available to it.

So why not trust the machine?

Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by something as minor as fatigue or when we last ate. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account, and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me one time out of ten, there is a limit to how bad their wrong decisions will be.

This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. Things get even harder to keep track of when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a greedy algorithm playing Super Mario that figures out how to exploit several hitherto-unknown game glitches in order to win (see 10:47).
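Public accounts of the fly-book incident describe two repricing rules reacting to each other: one seller priced slightly below its competitor, the other priced well above the first, and because the product of the two multipliers exceeded 1 the prices spiralled upward. Here is a minimal simulation of that feedback loop; the multipliers are approximate figures from those accounts, and the starting prices are invented.

```python
# Two naive repricing rules reacting to each other. The combined multiplier
# per round is roughly 0.9983 * 1.2706 ≈ 1.27 > 1, so both prices grow
# exponentially. Multipliers are approximate; starting prices are made up.

price_a, price_b = 20.0, 25.0
rounds = 0
while price_b < 23_000_000:
    price_a = 0.9983 * price_b      # seller A undercuts seller B slightly
    price_b = 1.2706 * price_a      # seller B prices well above seller A
    rounds += 1
print(f"price B passed $23 million after {rounds} repricing rounds (${price_b:,.0f})")
```

Neither rule is obviously crazy in isolation; the bizarre outcome comes from the interaction, which is exactly why such failures are hard to anticipate.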

Why should this concern us? As the decision-making processes become more complicated and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – provided the results turn out well the vast majority of the time. The upshot is that we have to just “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.

Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
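To make that trade-off concrete in expected-loss terms (all numbers invented for illustration): a human who errs often but boundedly can still compare favourably with a machine that errs very rarely but unpredictably, depending entirely on how bad the rare failures turn out to be.

```python
# Toy expected-loss comparison. The human errs 10% of the time at a bounded
# cost; the machine errs 0.01% of the time, but the cost of its rare
# failures is hard to bound in advance. All numbers are invented.

human_error_rate, human_error_cost = 0.10, 10
machine_error_rate = 0.0001

human_expected_loss = human_error_rate * human_error_cost   # = 1.0

for machine_error_cost in (10, 1_000, 100_000, 10_000_000):
    machine_expected_loss = machine_error_rate * machine_error_cost
    better = "machine" if machine_expected_loss < human_expected_loss else "human"
    print(f"failure cost {machine_error_cost:>10,}: "
          f"machine expected loss = {machine_expected_loss:>8,.2f} -> prefer {better}")
```

The point is not the particular numbers, but that the comparison hinges on the tail: with unbounded failure modes, a 99.99% success rate does not settle the question on its own.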

1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.

We may need to end all war. Quickly.

Public opinion and governments wrestle with a difficult problem: whether or not to intervene in Syria. The standard arguments are well known – just war theory, humanitarian protection of civilian populations, the Westphalian right of states to non-intervention, the risk of quagmires, deterrence against chemical weapons use… But the news that an American group has successfully 3D printed a working handgun may put a new perspective on things.

Why? It’s not as if there’s a lack of guns in the world – either in the US or in Syria – so a barely working weapon, built from still-uncommon technology, is hardly going to upset any balance of power. But that may just be the beginning. As 3D printing technology gets better, as private micro-manufacturing improves (possibly all the way to Drexlerian nanotechnology), the range of weapons that can be privately produced increases. This type of manufacturing could be small-scale, using little but raw material, and very fast-paced. We may reach a situation where any medium-sized organisation (a small country, a corporation, a town) could build an entire weapons arsenal in the blink of an eye: 20,000 combat drones, say, and 10,000 cruise missiles, all within a single day. All that you’d need are the plans, cheap raw materials, and a small factory floor.

How to deal with double-edged technology

By Brian D. Earp

World’s smallest drone? Or how to deal with double-edged technology

BBC News reports that Harvard scientists have developed the world’s smallest flying robot. It’s about the size of a penny, and it moves faster than a human hand can swat. Of course, the inventors of this “diminutive flying vehicle” immediately lauded its potential for bringing good to the world:

1. “We could envision these robots being used for search-and-rescue operations to search for human survivors under collapsed buildings or [in] other hazardous environments.”

2. “They [could] be used for environmental monitoring, to be dispersed into a habitat to sense trace chemicals or other factors.”

3. They might even behave like many real insects and assist with the pollination of crops, “to function as the now-struggling honeybee populations do in supporting agriculture around the world.”

These all seem like pretty commendable uses of a new technology. Yet one can think of some “bad” uses too. The “search and rescue” version of this robot (for example) would presumably be fitted with a camera; and the prospect of a swarm of tiny, remote-controlled flying video recorders raises some obvious questions about spying and privacy. It also prompts one to wonder who will have access to these spy bugs (the U.S. Air Force has long been interested in building miniature espionage drones), and whether there will be effective regulatory strategies capable of tilting future usage more toward the search-and-rescue side of things, and away from the peep-and-record side.

