
Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery

Guest Post by Nathan Hodson 

* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog 

Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.

This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.

Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”

Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?

The technology

This is not just sci-fi. Given the direction robotic surgery is heading, it is increasingly likely to become reality.

The Da Vinci system has become a regular feature in the operating theater, optimizing many laparoscopic procedures in gynecology, urology, and general surgery. Although it has the potential for remote control and automation, its present clinical use is limited to operation by a human surgeon in the same room. Increasingly, other robots are improving performance and outcomes by contributing to planning and decision-making as well as technical skills. The Soft Tissue Autonomous Robot (STAR) has shown that automated robots can form more reliable connections between sections of bowel than human surgeons. We must anticipate a watershed moment when robots are able to plan and perform entire operations without the input of human surgeons.

Machine learning is one precondition for such robot-led operations. Machine learning is the process whereby computers optimize their algorithms through feedback, allowing machines to perform tasks for which they were never explicitly programmed. The underlying concepts behind deep-learning neural networks have been around for many years, but have only recently come to the fore due to increasing computational power. Recent applications to healthcare have included the diagnosis of melanoma and the DREAM system for diagnosing diabetic retinopathy.
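Since “optimize their algorithms through feedback” is doing a lot of work here, a minimal sketch may help. The toy classifier below (plain NumPy; the data and model are purely illustrative and have nothing to do with the surgical systems discussed) learns a decision rule entirely from the error signal on labelled examples, with no rule hand-coded:

```python
import numpy as np

# Toy "learning from feedback" loop: a logistic-regression classifier
# trained by gradient descent. No decision rule is hand-coded; the
# weights are adjusted purely from the error on labelled examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # 200 examples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # hidden rule to be learned

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current predictions
    w -= lr * X.T @ (p - y) / len(y)        # feedback: prediction error
    b -= lr * np.mean(p - y)

print("accuracy:", np.mean((p > 0.5) == y))
```

The same principle, scaled up enormously, underlies the diagnostic systems mentioned above: performance improves with each round of feedback rather than with each line of hand-written code.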

The UC Berkeley Center for Automation and Learning for Medical Robotics (CAL-MR) is now integrating machine learning and the Da Vinci system. Given the complexity and delicacy of human soft tissue, these researchers believe that explicitly programming a robot to operate on internal organs, the model used by STAR, can be improved upon by allowing robots to learn for themselves.

Preliminary work uses Learning By Observation, meaning that the robot “learns” without being explicitly programmed. Robots can identify different sensor conditions and represent them in terms of certain parameters. Some of the necessary motions within an operation, or “surgemes,” that have so far been replicated include penetration, grasping, retraction, and cutting. Surgemes can be combined into a “finite state machine,” a simple computational model, to execute each subtask within the operation. The concept of reinforcement learning implies that the robot could refine and update the finite state machine based on feedback (for much more detail, see this CAL-MR paper on learning by observation).
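As a rough illustration of the idea, here is a hypothetical Python sketch of a surgeme-level finite state machine refined by feedback. The surgeme names follow the paragraph above, but the transition table, reward signal, and update rule are invented for illustration; CAL-MR’s actual methods are described in the linked paper:

```python
import random

# Hypothetical surgeme-level finite state machine: each state is a
# surgeme, successors are chosen by preference score, and a
# reinforcement-style update nudges scores using outcome feedback.
SURGEMES = ["penetrate", "grasp", "retract", "cut", "done"]
prefs = {(s, t): 0.0 for s in SURGEMES for t in SURGEMES if s != t}

# Demonstrated ordering of subtasks; transitions matching it are
# rewarded, all others are penalised.
TARGET = ["penetrate", "grasp", "retract", "cut", "done"]
GOOD = set(zip(TARGET, TARGET[1:]))

def next_surgeme(state: str) -> str:
    # Choose the highest-preference successor, breaking ties randomly.
    options = [t for t in SURGEMES if t != state]
    best = max(prefs[(state, t)] for t in options)
    return random.choice([t for t in options if prefs[(state, t)] == best])

def run_episode(max_steps: int = 10) -> list[str]:
    state, trace = "penetrate", ["penetrate"]
    while state != "done" and len(trace) < max_steps:
        state = next_surgeme(state)
        trace.append(state)
    return trace

def reinforce(trace: list[str], lr: float = 0.5) -> None:
    # Per-transition feedback: each executed transition is credited
    # or penalised, gradually refining the state machine.
    for pair in zip(trace, trace[1:]):
        prefs[pair] += lr * (1.0 if pair in GOOD else -0.1)

for _ in range(200):
    reinforce(run_episode())

print(run_episode())  # converges to the rewarded surgeme sequence
```

The design choice worth noticing is that the sequencing of subtasks is learned from feedback rather than fixed in advance, which is precisely what allows such a system to drift away from how any human surgeon would have programmed it.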

The education of human surgeons involves the development of muscle memory and heuristics (as well as concrete knowledge) through high-intensity exposure and experience in theater. Combining programmed anatomical knowledge with machine-learned experience reflects this pattern. Both robots and human surgeons will rely upon practical exposure in order to develop surgical expertise, raising the prospect of conflict as they vie for theater time.

What happens if a surgeon disagrees with an autonomous robot? 

In the immediate future, trained surgeons will oversee even autonomous robots, but what happens when the robot takes an approach with which the surgeon disagrees? Until now it seemed likely that robots would be programmed by human surgeons, affirming the supremacy of human surgical knowledge over robot skill. The idea behind machine learning is that a robot could potentially surpass human understanding of surgery, likely by sharing data between different robots faster than humans can share skills informally or through journals.

At this point it becomes difficult for the human surgeon to justify overriding the machine. A difference of opinion would pit human intuition and knowledge against the data-driven approach of a machine, rendering a human surgeon’s intervention difficult to defend. This scenario has the potential for positive health outcomes, but it risks taking robotic surgery into an unforeseen domain where no human surgeon is in overall control, disrupting the somewhat blasé picture in much health robotics research.

Would patients consent to machine learning over human education?

The training of human surgeons, contingent upon access to human bodies, may be undermined by the presence of machine learning. By opting for robotic surgeons, patients would know that their operation was helping to improve the care of future patients, while being spared the potential indignity of showing their internal organs to another person.

In human medical education there is an unspoken exchange: (a) the patient grants the trainee surgeon access to their body and (b) the surgical team performs the operation. As an unavoidable part of the process, (a) tends not to be mentioned, even though hands-on experience is an essential and scarce resource for trainee surgeons. With machine learning in the picture, the information flow from the patient’s body to the surgeon comes into focus: would patients prefer to use their bodies to educate robots or humans? What if patients opt out altogether?

Within reason, most patients in teaching hospitals are happy to help trainees and students. Surveys of patient attitudes have revealed an awareness that such participation benefits future patients (see, for example, Haffling and Hakansson [2008] or Sayed-Hassan, Bashour, and Koudsi [2012]). This altruistic motive would hold for machine-learning robots, as increasing data would allow for increasing iterations and improvements to the finite state machine.

Patient surveys also show that engagement with students and trainees is valued for the human contact it offers. This sentiment may carry over to the much-poeticized physical intimacy between the surgeon and the body. It is conceivable that patients would rather be treated by a human in order to fully experience this knowledge exchange, but this may only apply to certain cases. On balance, an anaesthetized patient is unlikely to feel any loss of connection.

Conversely, the loss of this physical intimacy would actually be preferable for many patients. The same surveys confirm that patients perceive medical training as an invasion of their privacy. Data about the toughness of a person’s prostate or the resistance offered by a sigmoid colon is sensitive. Given the choice, many patients would feel less self-conscious about sharing such “carnal knowledge” with a robot than a trainee surgeon.

A final possibility is that some patients may value their privacy over the altruistic motive suggested above. When surgeons are human, the flow of anatomical knowledge to the surgeon is unavoidable. But with a robotic surgeon, the patient could choose (or pay) to delete any data obtained during the operation.

Machine learning in robotic surgeons would offer increased privacy and allow patients to know they are benefiting those who will subsequently undergo the operation. Additionally, it would facilitate entirely non-educational operations where the patient preferred. Both of these features would reduce the educational opportunities available to human surgeons.

Can human surgeons retain authority in the operating theater?

Most authors have presumed that human surgeons would function as a safety mechanism on robotic surgeons, remaining on hand to manage malfunctions or emergencies. Undoubtedly this is the immediate future of robotic surgery, but it is unlikely to be sustainable.

When robots operate, they will integrate new information from the patient, and this data can be shared with other robots. The purpose is to produce robots whose results are better than those of human surgeons. With time, it is likely that they will take on the majority of operations. Human surgeons could be squeezed out of theater and trainees prevented from getting the necessary experience. These inadequately trained humans would be systematically deskilled through an absence of educational opportunities, leaving them unable to resolve an emergency and ill-equipped to disagree with the robot’s intended plan of treatment.

In this event, the best chance for human hands may come from high quality virtual reality surgical training, preserving for as long as possible the necessary skills to resolve surgical emergencies and the necessary knowledge to challenge robot-led treatment plans. While robots glean data from the anatomy of human patients, the remaining human surgeons would train on computer-generated simulations.

Perhaps there is no need for human involvement. An argument that we are safer without human decision-makers in ultimate control could be incorporated into defenses of autonomous robotics. Until then, effective means of training humans outside of the theater, such as virtual reality, are a priority if the pursuit of autonomous robotic surgeons is to retain its human safety catch.

Further Reading:
Learning by Observation for Surgical Subtasks: Multilateral Cutting of 3D Viscoelastic and 2D Orthotropic Tissue Phantoms http://cal-mr.berkeley.edu/papers/davinci-icra-2015-v26.pdf
Robot Autonomy for Surgery https://arxiv.org/pdf/1707.03080.pdf


2 Comments on this post

  1. Trinity College Cambridge academic

    Many of your claims demonstrate the shallowness of your understanding of machine learning. In short, your write-up has too much verbiage and too little substance. Take this sentence for example:

    “A difference of opinion would pit human intuition and knowledge against the data-driven approach of a machine, rendering a human surgeon’s intervention difficult to defend.”

No, it does not. A human’s understanding of a situation makes use of far more semantic layers than anything we have even the vaguest idea of actually implementing at present. For me, “tree” invokes visual, auditory, tactile, olfactory, historical, microbiological, macrobiological, and a whole range of other representations and concepts. We are nowhere near able to understand how a knowledge system like that might be replicated. On the other hand, if you are talking about such a far future, then sure, the human and the machine are just two decision-making agents. By then (if it comes) heaven only knows how our relationship with machines will have changed.

2. I am in agreement with much of your post but would like very briefly to explore some points where I disagree. I believe you are overestimating the capabilities and effectiveness of machine learning, which does, of course, exacerbate the problem of assessing and controlling the performance of machines.

The first issue that concerns me, which might seem nit-picking, is one of language. Turing predicted in his paper “Computing Machinery and Intelligence”, ‘…that by the end of the century the use of words and general educated opinion will have altered so much that we will be able to speak of machines thinking without expecting to be contradicted.’ (I apologise for using this quote again in a post, but it is particularly germane here.) Turing’s paper is very confused and contradictory; indeed, as we know from his colleague Robin Gandy, Turing seemed to think that it was all a bit of a joke.
    However, at one point he is quite clear that ‘“Can machines think?” should be replaced by “Are there imaginable digital computers which would do well in the imitation game [Turing test]?”…“Can machines think?” I believe to be too meaningless to deserve discussion.’

You say, ‘a robot could potentially surpass human understanding of surgery’. I must contradict: the robots you are referring to are machine learning systems that can never ‘understand’ surgery. Of course, as you touch upon, machines might well outperform human surgeons in well-defined and closed domains, but that does not mean they have an understanding of surgery, any more than chess and Go computers that outperform humans can be said to understand those games. They are still, so to speak, dumb machines and cannot be described as surpassing human understanding.

You also say, ‘…the robot “learns” without being programmed.’ When describing a machine’s performance, it has for some time been quite common to describe it in terms of human perceptual and cognitive abilities like seeing, hearing, thinking, believing, learning, etc. (they are often placed within double inverted commas). Anthropomorphism can be quite harmless, but when rigorously assessing machines we need to be very careful with our language. Even more confusingly, when describing human perceptual and cognitive abilities, it is quite usual to find them described in computational terms such as process, compute, and, of course, programme (used for ‘learn’). So we should perhaps take Turing’s prediction as more of a warning of how our thinking can become confused, leading us to situations where we speak of machines thinking, understanding, learning, seeing, hearing, etc., and of humans as processing, computing, programmed, etc., without expecting to be contradicted.

We should also remember that their great strength, their connectivity and ability to share information among themselves to improve performance, is also a weakness, because one or more of them may share incorrect information (e.g., simple hardware/software errors, information about particular situations that does not scale, etc.), and they will, of course, be vulnerable to hacking, so they will be hacked.

Machine learning is basically inductive probability, which is by no means epistemologically unproblematic. When you say, ‘[a] difference of opinion would pit human intuition and knowledge against the data-driven approach of a machine, rendering a human surgeon’s intervention difficult to defend’, you seem to be suggesting that we should accept the machine’s actions because they are ‘data-driven’ (presumably meaning the ‘data’ has been ‘driven’ through inductive probability). This could be reckless, given that machine surgeons will be incapable of giving reasons for their actions, whereas human surgeons should be capable of giving theirs. Being told to step away by a machine surgeon because “[b]ased on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best” is not a reason. Indeed, there are countless clinical instruments that have been in use for over a century that could claim very high reliability. The effectiveness and dependability of these instruments has led to accidents when medical staff have trusted them even when they were faulty. We can expect a machine surgeon to come equipped with the simple software that enables it to “speak” and use the first person pronoun, but that again is all part of the problem of assessing computers. As Joseph Weizenbaum demonstrated over fifty years ago, it is extremely easy to make very simple machines ‘appear’ to be very intelligent.

    (I have also posted this on the JME site.)
