Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery

Guest Post by Nathan Hodson 

* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog 

Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.

This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.

Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”

Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?

Continue reading

Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options. In one setting, you get no feedback from the device on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some decision-making autonomy to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behaviour accordingly, for example to climb down from a ladder or stop riding a bike when you are “in the red.”
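To make the difference between the two settings concrete, here is a minimal sketch of the decision logic they imply. The device in this thought experiment is hypothetical, and the thresholds, function names, and risk values below are invented purely for illustration.

```python
# Illustrative sketch only: the trial device described above is a thought
# experiment, and the thresholds, names, and values here are invented.

def classify_risk(seizure_probability: float) -> str:
    """Map an estimated seizure probability to the traffic-light levels."""
    if seizure_probability < 0.3:       # assumed cut-off for "low"
        return "green"
    if seizure_probability < 0.7:       # assumed cut-off for "medium"
        return "yellow"
    return "red"

def closed_loop_setting(seizure_probability: float) -> bool:
    """Setting 1: the device alone decides whether to stimulate.
    The patient receives no feedback and stays out of the loop."""
    return seizure_probability >= 0.7   # stimulate when risk is high

def advisory_setting(seizure_probability: float) -> str:
    """Setting 2: the device reports a risk level (plus an alarm when a
    seizure is evolving), keeping the patient in the loop."""
    level = classify_risk(seizure_probability)
    if level == "red":
        print("ALARM: seizure may be evolving")
    return level

print(closed_loop_setting(0.8))   # True  -> device stimulates on its own
print(advisory_setting(0.8))      # "red" -> patient decides what to do
```

In the first setting the machine acts alone; in the second it only informs, leaving the decision (and a share of the responsibility) with the patient, which is where the accountability gap of the title opens up.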

Continue reading

Carissa Véliz on how our privacy is threatened when we use smartphones, computers, and the internet.

Smartphones are like spies in our pocket; we should cover the camera and microphone of our laptops; it is difficult to opt out of services like Facebook that track us on the internet; IMSI-catchers can ‘vacuum’ data from our smartphones; data brokers may sell our internet profile to criminals and/or future employers; and yes, we should protect people’s privacy even if they don’t care about it. Carissa Véliz (University of Oxford) warns us: we should act now before it is too late. Privacy damages accumulate, and, in many cases, are irreversible. We urgently need more regulations to protect our privacy.

Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? written by Mahmoud Ghanem

This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student Mahmoud Ghanem

The Case For Computer Assisted Ethics

In the interest of rigour, I will avoid use of the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.

Whether we believe that the goal of moral actions ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program in the process of making our moral decisions. After all, carrying out any of the above successfully requires at least:

(1) Access to relevant and accurate data, and

(2) The ability to draw accurate conclusions by analysing such data.

Both of which are things that computers are very good at.
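As a toy illustration of (1) and (2), and emphatically not a proposal for how such a program should actually work, the sketch below pairs a small fact base (requirement 1) with a simple rule-based inference step (requirement 2), a bare-bones cousin of the statistical inference and automated theorem proving mentioned above. Every fact, rule, and name in it is invented.

```python
# Toy illustration only: the facts, rules, and names are invented to show the
# two requirements (access to data, drawing conclusions), not a real adviser.

FACTS = {
    "charity_overhead": 0.05,     # requirement (1): relevant, accurate data
    "charity_effective": True,    # requirement (1)
}

# Requirement (2): simple forward-chaining rules mapping data to conclusions.
RULES = [
    (lambda f: f["charity_effective"] and f["charity_overhead"] < 0.10,
     "recommend donating"),
    (lambda f: not f["charity_effective"],
     "advise against donating"),
]

def advise(facts: dict) -> str:
    """Draw a conclusion by checking each rule against the available data."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "insufficient data"

print(advise(FACTS))  # -> "recommend donating"
```

Continue reading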

Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence

Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…

The unbearable asymmetry of bullshit

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine. The official version is forthcoming in the HealthWatch Newsletter; see http://www.healthwatch-uk.org/.

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings — concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or, to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not, therefore, immune to human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.

Continue reading

Guest Post: KILLER ROBOTS AND THE ETHICS OF WAR IN THE 21ST CENTURY

Written by Darlei Dall’Agnol[1]

I recently attended the course Drones, Robots and the Ethics of Armed Conflict in the 21st Century at the Department for Continuing Education, Oxford University, which is, by the way, offering a wide range of interesting courses for 2015-16 (https://www.conted.ox.ac.uk/). Philosopher Alexander Leveringhaus, a Research Fellow at the Oxford Institute for Ethics, Law and Armed Conflict, spoke on “What, if anything, is wrong with Killer Robots?” Ex-military Wil Wilson, a former RAF Regiment Officer now working as a consultant in Defence and Intelligence, was announced to talk on “Why should autonomous military machines act ethically?” but changed his title, which I will comment on shortly. The atmosphere of the course was very friendly and the discussions illuminating. In this post, I will simply reconstruct the main ideas presented by the main speakers and give my impressions on this important issue at the end.  Continue reading

Blessed are the wastrels, for their surplus could save the Earth

Reposted from an article in The Conversation.

In a world where too many go to bed hungry, it comes as a shock to realise that more than half the world’s food production is left to rot, lost in transit, thrown out, or otherwise wasted. This loss is a humanitarian disaster. It’s a moral tragedy. It’s a blight on the conscience of the world.

It might ultimately be the salvation of the human species.

To understand why, consider that we live in a system that rewards efficiency. Just-in-time production, reduced inventories, providing the required service at just the right time with minimised wasted effort: those are the routes to profit (and hence survival) for today’s corporations. This type of lean manufacturing aims to squeeze costs as much as possible, pruning anything extraneous from the process. That’s the ideal, anyway, and many companies are furiously chasing after it. Continue reading

Beyond 23andMe’s Shutdown: The Role of the FDA in the Future of Direct-to-Consumer Genetic Testing

Kyle Edwards, Uehiro Centre for Practical Ethics and The Ethox Centre, University of Oxford

Caroline Huang, The Ethox Centre, University of Oxford

An article based on this blog post has now been published in the May–June 2014 Hastings Center Report: http://onlinelibrary.wiley.com/doi/10.1002/hast.310/full. Please check out our more developed thoughts on this topic there!

Twitter, paywalls, and access to scholarship — are license agreements too restrictive?

By Brian D. Earp

Follow Brian on Twitter by clicking here.

I think I may have done something unethical today. But I’m not quite sure, dear reader, so I’m enlisting your energy to help me think things through. Here’s the short story:

Someone posted a link to an interesting-looking article by Caroline Williams at New Scientist — on the “myth” that we should live and eat like cavemen in order to match our lifestyle to that of our evolutionary ancestors, and thereby maximize health. Now, I assume that when you click on the link I just gave you (unless you’re a New Scientist subscriber), you get a short little blurb from the beginning of the article and then, of course, it dissolves into an ellipsis as soon as things start to get interesting:

Our bodies didn’t evolve for lying on a sofa watching TV and eating chips and ice cream. They evolved for running around hunting game and gathering fruit and vegetables. So, the myth goes, we’d all be a lot healthier if we lived and ate more like our ancestors. This “evolutionary discordance hypothesis” was first put forward in 1985 by medic S. Boyd Eaton and anthropologist Melvin Konner …

Holy crap! The “evolutionary discordance hypothesis” is a myth? I hope not, because I’ve been using some similar ideas in a lot of my arguments about neuroenhancement recently. So I thought I should really plunge forward and read the rest of the article. Unfortunately, I don’t have a subscription to New Scientist, and when I logged into my Oxford VPN-thingy, I discovered that Oxford doesn’t have access either. Weird. What was I to do?

Since I typically have at least one eye glued to my Twitter account, it occurred to me that I could send a quick tweet around to check if anyone had the PDF and would be willing to send it to me in an email. The majority of my “followers” are fellow academics, and I’ve seen this strategy play out before — usually when someone’s institutional log-in isn’t working, or when a key article is behind a pay-wall at one of those big “bundling” publishers that everyone seems to hold in such low regard. Another tack would be to dash off an email to a couple of colleagues of mine, and I could “CC” the five or six others who seem likeliest to be New Scientist subscribers. In any case, I went for the tweet.

Sure enough, an hour or so later, a chemist friend of mine sent me a message to “check my email” and there was the PDF of the “caveman” article, just waiting to be devoured. I read it. It turns out that the “evolutionary discordance hypothesis” is basically safe and sound, although it may need some tweaking and updates. Phew. On to other things.

But then something interesting happened! Whoever it is that manages the New Scientist Twitter account suddenly showed up in my Twitter feed with a couple of carefully-worded replies to my earlier PDF-seeking hail-mary:

Continue reading
