Regulating The Untapped Trove Of Brain Data
Written by Stephen Rainey and Christoph Bublitz
Increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.
In medicine, the use of brain-reading devices is increasing, e.g. Brain-Computer Interfaces that afford communication and control of neural or motor prostheses. But there is also a range of non-medical devices in development, for applications from gaming to the workplace.
Currently marketed devices, e.g. from Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field and have invested significant sums. Kernel, a three-year-old, multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they have been joined by Facebook, who want to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it is likely to be based on recording and stimulating the brain.
Better Living Through Neurotechnology
Written by Stephen Rainey
If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made of such neurotechnologies are that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and operationalising brain signals, we are promised greater understanding of our own brain processes. And since every conscious process is thought to be enacted, subserved, or realised by a neural process, we thereby gain greater understanding of our consciousness.
Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we gain the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.
This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties? Continue reading
Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.
Written by Carissa Veliz
Crosspost from Slate. Click here to read the full article
At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.
The mere exasperation of talking to a digital assistant can be enough to miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I.s not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: they lack common sense.
The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced that it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is getting A.I. to “understand what’s harmful to people.”
This article was originally published on Slate. To read the full article and to join in the conversation please follow this link.
The ‘Killer Robots’ Are Us
Written by Dr Michael Robillard
In a recent New York Times article, Dr Michael Robillard writes: “At a meeting of the United Nations Convention on Conventional Weapons in Geneva in November, a group of experts gathered to discuss the military, legal and ethical dimensions of emerging weapons technologies. Among the views voiced at the convention was a call for a ban on what are now being called “lethal autonomous weapons systems.”
A 2012 Department of Defense directive defines an autonomous weapon system as one that, “once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.””
Follow this link to read the article in full.
Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery
Guest Post by Nathan Hodson
* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog
Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.
This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.
Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”
Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?
Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices
Guest Post by Philipp Kellmeyer
Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.
There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.
The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there would be a good chance that you would benefit from taking part in the trial.
Now, imagine you decided to participate in the trial and it turns out that the device comes with two options. In one setting, you get no feedback from the device on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.
This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some autonomy of decision-making to the intelligent medical device inside your head.
In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behaviour accordingly, for example to step off a ladder or stop riding a bike when you are “in the red.”
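To make the difference between the two settings concrete, here is a minimal sketch, in Python, of how one decision cycle of such a closed-loop device might be organised. The thresholds, names, and decision rule are illustrative assumptions, not details of any actual trial device.

```python
# Illustrative sketch only: the thresholds, names, and decision rule are
# assumptions for exposition, not details of any real implanted device.

LOW_RISK = 0.2   # hypothetical probability cut-offs
HIGH_RISK = 0.6

def traffic_light(seizure_probability: float) -> str:
    """Map an estimated seizure probability to a feedback colour."""
    if seizure_probability < LOW_RISK:
        return "green"
    if seizure_probability < HIGH_RISK:
        return "yellow"
    return "red"

def closed_loop_step(seizure_probability: float, feedback_enabled: bool) -> dict:
    """One decision cycle: decide on stimulation and, optionally, inform the patient."""
    return {
        "stimulate": seizure_probability >= HIGH_RISK,
        "feedback": traffic_light(seizure_probability) if feedback_enabled else None,
        "alarm": feedback_enabled and seizure_probability >= HIGH_RISK,
    }

# A risk estimate of 0.7 triggers stimulation in both settings, but only the
# second setting also warns the patient ("red" plus an alarm tone).
print(closed_loop_step(0.7, feedback_enabled=False))
print(closed_loop_step(0.7, feedback_enabled=True))
```

In the first setting the patient sees none of this; in the second, the colour and the alarm give them a chance to act before the device does.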
Carissa Véliz on how our privacy is threatened when we use smartphones, computers, and the internet.
Smartphones are like spies in our pockets; we should cover the cameras and microphones of our laptops; it is difficult to opt out of services like Facebook that track us on the internet; IMSI-catchers can ‘vacuum’ data from our smartphones; data brokers may sell our internet profiles to criminals and/or future employers; and yes, we should protect people’s privacy even if they don’t care about it. Carissa Véliz (University of Oxford) warns us: we should act now, before it is too late. Privacy damages accumulate and, in many cases, are irreversible. We urgently need more regulation to protect our privacy.
Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? written by Mahmoud Ghanem
This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.
Written by University of Oxford student, Mahmoud Ghanem
The Case For Computer Assisted Ethics
In the interest of rigour, I will avoid use of the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.
Whether we believe that the goal of moral actions ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program in the process of making our moral decisions. After all, carrying out each of the above successfully requires at least:
(1) Access to relevant and accurate data, and
(2) The ability to draw accurate conclusions by analysing such data.
Both of which are things that computers are very good at. Continue reading
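As a toy illustration of those two requirements, the sketch below reads a small set of past decisions and observed outcomes (requirement 1) and draws a simple statistical conclusion from them (requirement 2). The options, data, and summary rule are invented for the example and stand in for whatever evidence a real system would draw on.

```python
# Toy illustration only: the options, data, and summary rule are invented.
from collections import defaultdict

# (1) Access to relevant and accurate data: past choices and whether the
#     outcome was later judged to be good.
past_decisions = [
    ("donate_locally", True), ("donate_locally", False), ("donate_locally", False),
    ("donate_effectively", True), ("donate_effectively", True), ("donate_effectively", False),
]

# (2) Drawing conclusions from that data: estimate, per option, how often it
#     led to an outcome judged good.
counts = defaultdict(lambda: [0, 0])  # option -> [good outcomes, total]
for option, judged_good in past_decisions:
    counts[option][1] += 1
    if judged_good:
        counts[option][0] += 1

for option, (good, total) in counts.items():
    print(f"{option}: {good}/{total} judged good ({good / total:.0%})")
```

Even this crude frequency count makes the division of labour clear: the program handles the bookkeeping and the inference, while what counts as a ‘good’ outcome is still supplied from outside it.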
Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence
Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…
The unbearable asymmetry of bullshit
By Brian D. Earp (@briandavidearp)
Introduction
Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length—I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.