Technology

Beyond 23andMe’s Shutdown: The Role of the FDA in the Future of Direct-to-Consumer Genetic Testing

Kyle Edwards, Uehiro Centre for Practical Ethics and The Ethox Centre, University of Oxford

Caroline Huang, The Ethox Centre, University of Oxford

An article based on this blog post has now been published in the May – June 2014 Hastings Center Report: http://onlinelibrary.wiley.com/doi/10.1002/hast.310/full. Please check out our more developed thoughts on this topic there!

Twitter, paywalls, and access to scholarship — are license agreements too restrictive?

By Brian D. Earp

Follow Brian on Twitter by clicking here.

Twitter, paywalls, and access to scholarship — are license agreements too restrictive? 

I think I may have done something unethical today. But I’m not quite sure, dear reader, so I’m enlisting your energy to help me think things through. Here’s the short story:

Someone posted a link to an interesting-looking article by Caroline Williams at New Scientist – on the “myth” that we should live and eat like cavemen in order to match our lifestyle to that of our evolutionary ancestors, and thereby maximize health. Now, I assume that when you click on the link I just gave you (unless you’re a New Scientist subscriber), you get a short little blurb from the beginning of the article and then–of course–it dissolves into an ellipsis as soon as things start to get interesting:

Our bodies didn’t evolve for lying on a sofa watching TV and eating chips and ice cream. They evolved for running around hunting game and gathering fruit and vegetables. So, the myth goes, we’d all be a lot healthier if we lived and ate more like our ancestors. This “evolutionary discordance hypothesis” was first put forward in 1985 by medic S. Boyd Eaton and anthropologist Melvin Konner …

Holy crap! The “evolutionary discordance hypothesis” is a myth? I hope not, because I’ve been using some similar ideas in a lot of my arguments about neuroenhancement recently. So I thought I should really plunge forward and read the rest of the article. Unfortunately, I don’t have a subscription to New Scientist, and when I logged into my Oxford VPN-thingy, I discovered that Oxford doesn’t have access either. Weird. What was I to do?

Since I typically have at least one eye glued to my Twitter account, it occurred to me that I could send a quick tweet around to check if anyone had the PDF and would be willing to send it to me in an email. The majority of my “followers” are fellow academics, and I’ve seen this strategy play out before — usually when someone’s institutional log-in isn’t working, or when a key article is behind a pay-wall at one of those big “bundling” publishers that everyone seems to hold in such low regard. Another tack would be to dash off an email to a couple of colleagues of mine, and I could “CC” the five or six others who seem likeliest to be New Scientist subscribers. In any case, I went for the tweet.

Sure enough, an hour or so later, a chemist friend of mine sent me a message to “check my email” and there was the PDF of the “caveman” article, just waiting to be devoured. I read it. It turns out that the “evolutionary discordance hypothesis” is basically safe and sound, although it may need some tweaking and updates. Phew. On to other things.

But then something interesting happened! Whoever it is that manages the New Scientist Twitter account suddenly shows up in my Twitter feed with a couple of carefully-worded replies to my earlier PDF-seeking hail-mary:


Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.

Artificial Intelligence and Human Decision-making.

Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of such decisions that can be computerised is increasing, and as many operational decisions have moral consequences, they could be considered to have a moral component.

One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.

While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)

For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).

Soon we will have systems that will enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own – but are proven to be more accurate on average. To return to combat, we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently with less loss of life. Human judgement added to the loop may prove to be an impairment.

Moral Outsourcing

At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.

Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people whose world view is roughly similar to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.

We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.

So Why Shouldn’t I Trust The Machine?

So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”

It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: make a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.

So why not trust the machine?

Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by something as minor as fatigue or how recently we last ate. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account, and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me 1 time out of 10, there is a limit to how bad their wrong decision will be.

This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. It gets even harder to keep track when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a Super Mario-playing greedy algorithm figuring out how to make use of several hitherto-unknown game glitches to win (see 10:47).
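The fly-book incident is worth spelling out, because the mechanism is so simple. Here is a minimal sketch of that kind of feedback loop; the multipliers and starting price are illustrative figures loosely based on published accounts of the incident, not the sellers’ actual code:

```python
# Illustrative sketch (not the sellers' actual code): two naive repricing
# bots, each setting its price as a fixed multiple of the other's.
def simulate_pricing_war(start_price=35.0, days=60):
    price_a = price_b = start_price
    for day in range(1, days + 1):
        price_a = 0.9983 * price_b      # bot A: undercut the competitor slightly
        price_b = 1.270589 * price_a    # bot B: stay comfortably above the competitor
        print(f"day {day:2d}: A = ${price_a:,.2f}   B = ${price_b:,.2f}")

if __name__ == "__main__":
    simulate_pricing_war()
```

With those made-up numbers the combined factor per cycle is about 1.27, so the price grows exponentially and passes $23 million within a couple of months of compounding, each bot behaving exactly as designed.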

Why should this concern us? As the decision-making processes become more complicated, and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – provided the results turn out well the vast majority of the time. The upshot is that we have to just “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.

Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
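To make that trade-off concrete, here is a toy expected-harm calculation (every number is invented purely for illustration, not drawn from any real system): a decision-maker that errs rarely but catastrophically can still be worse in expectation than one that errs often but within bounds.

```python
# Toy comparison of expected harm per decision; all figures are invented
# for illustration. The "human" errs often but within bounds; the "machine"
# almost never errs, but its failures can be far larger.
def expected_harm(error_rate, harm_per_error):
    return error_rate * harm_per_error

human = expected_harm(error_rate=0.10, harm_per_error=10)          # bounded mistakes
machine = expected_harm(error_rate=0.0001, harm_per_error=50_000)  # rare, dramatic failures

print(f"human:   expected harm per decision = {human:.2f}")    # 1.00
print(f"machine: expected harm per decision = {machine:.2f}")  # 5.00
```

Whether the machine’s rare failures really are that large is exactly what we usually cannot know in advance.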

1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.

We may need to end all war. Quickly.

Public opinion and governments wrestle with a difficult problem: whether or not to intervene in Syria. The standard arguments are well known – just war theory, humanitarian protection of civilian populations, the Westphalian right of states to non-intervention, the risk of quagmires, deterrence against chemical weapons use… But the news that an American group has successfully 3D-printed a working handgun may put a new perspective on things.

Why? It’s not as if there’s a lack of guns in the world – either in the US or in Syria – so a barely working weapon, built from still-uncommon technology, is hardly going to upset any balance of power. But that may just be the beginning. As 3D printing technology gets better, as private micro-manufacturing improves (possibly all the way to Drexlerian nanotechnology), the range of weapons that can be privately produced increases. This type of manufacturing could be small scale, using little but raw material, and be very fast paced. We may reach a situation where any medium-sized organisation (a small country, a corporation, a town) could build an entire weapons arsenal in the blink of an eye: 20,000 combat drones, say, and 10,000 cruise missiles, all within a single day. All that you’d need are the plans, cheap raw materials, and a small factory floor.

How to deal with double-edged technology

By Brian D. Earp

 World’s smallest drone? Or how to deal with double-edged technology 

BBC News reports that Harvard scientists have developed the world’s smallest flying robot. It’s about the size of a penny, and it moves faster than a human hand can swat. Of course, the inventors of this “diminutive flying vehicle” immediately lauded its potential for bringing good to the world:

1. “We could envision these robots being used for search-and-rescue operations to search for human survivors under collapsed buildings or [in] other hazardous environments.”

2. “They [could] be used for environmental monitoring, to be dispersed into a habitat to sense trace chemicals or other factors.”

3. They might even behave like many real insects and assist with the pollination of crops, “to function as the now-struggling honeybee populations do in supporting agriculture around the world.”

These all seem like pretty commendable uses of a new technology. Yet one can think of some “bad” uses too. The “search and rescue” version of this robot (for example) would presumably be fitted with a camera; and the prospect of a swarm of tiny, remote-controlled flying video recorders raises some obvious questions about spying and privacy. It also prompts one to wonder who will have access to these spy bugs (the U.S. Air Force has long been interested in building miniature espionage drones), and whether there will be effective regulatory strategies capable of tilting future usage more toward the search-and-rescue side of things, and away from the peep-and-record side.


Your password will probably be hacked soon, and how to (actually) solve the problem

By Brian D. Earp

See Brian’s most recent previous post by clicking here.

See all of Brian’s previous posts by clicking here.

Follow Brian on Twitter by clicking here.

 

Your password will probably be hacked soon, and how to (actually) solve the problem

Smithsonian Magazine recently reported: “Your Password Will Probably Be Hacked Soon” and delivered a troubling quote from Ars Technica:

The ancient art of password cracking has advanced further in the past five years than it did in the previous several decades combined. At the same time, the dangerous practice of password reuse has surged. The result: security provided by the average password in 2012 has never been weaker.

After the Twitter accounts for Burger King as well as Chrysler’s Jeep were recently broken into, Twitter apparently issued some advice to the effect that people should be smarter about their password security practices. So: use lots of letters and numbers, make passwords 10 characters or longer, use a different password for every one of your online accounts, and so on.
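For what it’s worth, the arithmetic behind that advice is straightforward. A back-of-envelope sketch (the attacker’s guess rate below is an assumption for illustration, not a measured figure) shows why length and a larger character set matter in principle, before we get to whether anyone can actually follow the advice:

```python
# Back-of-envelope password strength estimate. The attacker's guess rate is
# an assumed figure for illustration (an offline attack on weakly hashed
# passwords), not a measurement of any real system.
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a password chosen uniformly at random."""
    return length * math.log2(alphabet_size)

def years_to_exhaust(bits, guesses_per_second=1e10):
    """Time to try every possible password at the assumed guess rate."""
    return 2 ** bits / guesses_per_second / (3600 * 24 * 365)

for alphabet, size in [("digits only", 10),
                       ("lowercase letters", 26),
                       ("letters, digits and symbols", 94)]:
    for length in (8, 10, 14):
        bits = entropy_bits(size, length)
        print(f"{alphabet:28s} length {length:2d}: {bits:5.1f} bits, "
              f"~{years_to_exhaust(bits):.2g} years to exhaust")
```

In principle, then, long, random, unique passwords are very hard to brute-force. Whether real people can be expected to use them is another matter.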

But this is nuts. Does Twitter know anything about how human beings actually work? Why do you think people reuse their passwords for multiple sites? Why do you think people select easy-to-remember (and easy-to-discover) factoids from their childhoods as answers to security questions?


Personalised weapons of mass destruction: governments and strategic emerging technologies

Andrew Hessel, Marc Goodman and Steven Kotler sketch, in an article in The Atlantic, a not-too-far future when the combination of cheap bioengineering, synthetic biology and crowdsourcing of problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as there existed someone who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to sweep away traces of the President, but it is hard to imagine this being perfect, working for old DNA left behind years ago, or being applied by all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data, but DNA from foreign potentates. They might be friends right now, but who knows in ten years…


Technology is outrunning science

It’s a common trope that our technology is outrunning our wisdom: we have great technological power, so the argument goes, but not the wisdom to use it.

Forget wisdom: technology is outrunning science! We have great technological power, but not the science to know what it does. In a recent bizarre trial in Italy, scientists were found guilty of manslaughter for failing to predict an earthquake in L’Aquila – prompting seismologists all over the world to sign an open letter stating, basically, that science can’t predict earthquakes.

But though we can’t predict earthquakes, we can certainly cause them. Pumping water out of aquifers, oil and gas wells, rock quarries, even dams, have all been shown to cause earthquakes – though their magnitude and their timing remain unpredictable.

Geoengineering is another example of the phenomenon: we have the technological know-how to radically change the planet’s climate at relatively low cost – but lack the science to predict the extent and true impact of this radical change. Soon we may be able to build artificial minds, through whole-brain emulation or other methods, but we can’t predict when this might happen or even the likely consequences of such a dramatically transformative technology.

The path from pure science to grubby technological implementation is traditionally seen as running in one clear direction: pure science develops ground-breaking ivory-tower ideas, which eventually get taken up and transformed into useful technology, years down the line. To do this, science has to stay continually ahead of technology: we have to know more than we do. But now it’s pure science and research that have to play catch-up: we have to find a way to know what we’re doing.

Artificial organs: “good guys” finish last to technology

It is hardly a keen insight to note that there are a lot of problems in the world today, and that there are also lots of suggested solutions. Often these can be classified under three different labels:

  • “Good guy” solutions which rely on changing individual people’s attitudes and behaviours.
  • Institutional solutions which rely on designing good institutions to address the problem.
  • Technological solutions which count on technology to resolve the problem.

In this view, it is tremendously good news that scientists are getting closer to producing artificial organs. If this goal is achieved, it will be a technological solution to the problem of transplant organ shortages – and technological solutions tend to be better than institutional solutions, which are generally much better than “good guy” solutions. The “good guy” solution to organ donation was to count on people to volunteer to donate when they died. Better institutions (such as an opt-out system where you have to make a special effort not to be a donor, rather than a special effort to be a donor) have resulted in much improved donation rates. But cheap artificial organs would really be the ultimate solution.

Of course I don’t denigrate the value of getting people on your side, nor the motivations of those who sincerely want to change things. But changes to people’s attitudes only tend to stick around as long-term solutions if they are translated into actual institutional or technological changes.

Take slavery, for instance.

Water, food or energy: we won’t lack them

The world is full of problems. Pollution is a problem. The destruction of the coral reefs, the eradication of the rain forests, the mass extinction of animal species are problems, and tragedies. Loss of biodiversity is a problem. Global warming is a problem. Poverty and the unequal distribution of resources are major problems.

But lack of basic resources isn’t a problem. We’ll have enough food, water and energy for the whole human race for the foreseeable future, at reasonable costs. Take a worst-case scenario for all three areas, and let’s look at the figures.

