

Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.

BY CARISSA VÉLIZ

This article was originally published in New Statesman America

Making Ourselves Better

Written by Stephen Rainey

Human beings are sometimes seen as uniquely capable of enacting life plans and controlling our environment. Take technology, for instance: with it we make the world around us yield to our desires in various ways. Communication technologies and global transport, for example, practically shrink a vast world, making coordination possible among a global population where it was hitherto impossible. This contributes to a view of human-as-maker, or ‘homo faber’. But taking such a view can risk minimising human interests that ought not to be ignored.

Homo faber is a future-oriented, adaptable, rational animal, whose efforts are aligned with her interests when she creates technology that enables a stable counteraction of natural circumstance. Whereas animals are typically seen to have well-adapted responses to their environment, honed through generations of adaptation, human beings appear to have instead a general, adaptable skill that can emancipate them from material, external circumstances. We are bad at running away from danger, for instance, but good at building barriers that obviate the need to run. The protections this general, adaptable skill offers are inherently future-facing: humans seem to seek not to react to the environment but to control it.

Read More »Making Ourselves Better

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, either from research contexts, medical device use, or in the growing consumer brain-tech sector raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. Brain-Computer Interfaces that afford communication or control of neural and motor prostheses. But a range of non-medical devices is also in development, for applications from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they have been joined by Facebook, who want to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it’s likely to be based on recording and stimulating the brain.

Read More »Regulating The Untapped Trove Of Brain Data

Better Living Through Neurotechnology

Written by Stephen Rainey

If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made of such neurotechnologies are that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and making operational brain signals we are promised greater understanding of our own brain processes. Since every conscious process is thought to be enacted, or subserved, or realised by a neural process, we get greater understanding of our consciousness.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.
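The feedback loop just described can be sketched in a few lines. This is a minimal illustration, not any real device's API: the EEG source, the alpha-band measure, and the target threshold are all assumptions invented for the example.

```python
import random  # stands in for a real EEG acquisition library

def read_alpha_power():
    """Hypothetical placeholder: alpha-band (8-12 Hz) power from a headset, in [0, 1]."""
    return random.uniform(0.0, 1.0)

def neurofeedback_step(target=0.6):
    """One cycle of the loop: measure a brain signal, then show it back to the user."""
    power = read_alpha_power()
    bar = "#" * int(power * 20)  # crude on-screen feedback
    status = "on target" if power >= target else "keep relaxing"
    return f"[{bar:<20}] {status}"

# A session repeats this cycle; the user's attempts to move the bar are
# the literal 'changing our minds' the post describes.
for _ in range(3):
    print(neurofeedback_step())
```

The point of the sketch is the loop itself: the device only displays a signal, and any "optimisation" happens because the user reacts to what they see.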

This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties?

Read More »Better Living Through Neurotechnology

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Véliz

Crosspost from Slate. Click here to read the full article.

At the moment, artificial intelligence may have a perfect memory and be better at arithmetic than us, but it is clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a… Read More »Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

The ‘Killer Robots’ Are Us

Written by Dr Michael Robillard

In a recent New York Times article, Dr Michael Robillard writes: “At a meeting of the United Nations Convention on Conventional Weapons in Geneva in November, a group of experts gathered to discuss the military, legal and ethical dimensions of emerging weapons technologies. Among the views voiced at the convention… Read More »The ‘Killer Robots’ Are Us

Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery

Guest Post by Nathan Hodson 

* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog 

Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.

This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.

Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”

Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?

Read More »Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery

Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there would be a good chance that you would benefit from taking part in the trial.
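As a rough sketch of what such a closed-loop device computes, consider the toy detector below. The risk measure, the scaling, and the threshold are illustrative assumptions only; real systems use patient-specific classifiers trained on iEEG recordings.

```python
def seizure_risk(window):
    """Toy risk score: mean absolute amplitude of a window of iEEG samples,
    scaled into [0, 1]. Purely illustrative of 'assessing risk at any moment'."""
    energy = sum(abs(x) for x in window) / len(window)
    return min(energy / 100.0, 1.0)

def closed_loop_step(window, threshold=0.8):
    """Decide whether to deliver a stimulation pulse, with no human in the loop."""
    risk = seizure_risk(window)
    action = "stimulate" if risk >= threshold else "monitor"
    return action, risk

print(closed_loop_step([90, 85, 95, 100]))  # → ('stimulate', 0.925)
```

Everything ethically interesting in the scenario hangs on that one `if`: the decision to stimulate is taken by the device, not by the patient.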

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback on your current seizure risk by the device and the decision when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – relegates some autonomy of decision-making to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and you retain your capacity to modify your behavior accordingly, for example to step from a ladder or stop riding a bike when you are “in the red.”
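The “traffic light” setting amounts to a simple mapping from the device's risk estimate to a display, which can be sketched as follows. The cut-off values and alarm threshold here are invented for illustration; a real device would calibrate them per patient.

```python
def traffic_light(risk):
    """Map a seizure probability in [0, 1] to the display described in the post."""
    if risk < 0.3:
        return "green"   # low probability of a seizure
    if risk < 0.7:
        return "yellow"  # medium probability
    return "red"         # high probability

def feedback(risk, alarm_threshold=0.9):
    """Return (display colour, whether to sound the alarm tone)."""
    return traffic_light(risk), risk >= alarm_threshold

print(feedback(0.95))  # → ('red', True): time to step off the ladder
```

Unlike the fully autonomous setting, this version only informs; acting on the warning remains the patient's decision.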

Read More »Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Carissa Véliz on how our privacy is threatened when we use smartphones, computers, and the internet.

Smartphones are like spies in our pockets; we should cover the camera and microphone of our laptops; it is difficult to opt out of services like Facebook that track us on the internet; IMSI-catchers can ‘vacuum’ data from our smartphones; data brokers may sell our internet profiles to criminals and/or future employers; and yes, we… Read More »Carissa Véliz on how our privacy is threatened when we use smartphones, computers, and the internet.

Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? written by Mahmoud Ghanem


This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Mahmoud Ghanem

The Case For Computer Assisted Ethics

In the interest of rigour, I will avoid use of the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.

Whether we believe that the goal of moral action ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program in the process of making our moral decisions. After all, carrying out any of the above successfully requires at least:

(1) Access to relevant and accurate data, and

(2) The ability to draw accurate conclusions by analysing such data.

Both of which are things that computers are very good at.

Read More »Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? written by Mahmoud Ghanem