The Tunnel Problem

Driverless or autonomous cars will almost certainly be commonplace quite soon. Imagine you are sitting in such a car, approaching a tunnel on a single-lane mountain road. A child wanders into the middle of the road, blocking the entrance to the tunnel. How should such cars be programmed to react? Two options are: to keep going and kill the child; or to swerve aside into the tunnel wall and kill the driver.

The tunnel problem was invented by the philosopher Jason Millar. The question, of course, is not what the ‘user’ of the car should do. Nor is it any good suggesting an override function: there may be cases where there isn’t time to react. Millar’s own suggestion is based on an analogy with medical ethics. Those who purchase driverless cars should be permitted to choose their own ‘ethics package’. That suggestion itself rests on his view that there is no ‘right answer’ about what to do in the tunnel case, and that programming a particular programme into the car would ‘alienate’ users from their moral convictions.

Now Millar is quite clear that he doesn’t mean that anything goes here: he says it would be absurd to allow someone to use a program that swerves only to avoid males. But this raises a question for him: why are people’s own moral commitments relevant only within a certain range? A more parsimonious and elegant, and I suspect popular, view is that there is a right answer in the tunnel case, but we don’t know what it is.

On that view, then, perhaps we can agree with Millar’s solution, but for a different reason. There is a right answer, and we know it lies within some range (so, as he says, sexist positions are out). But as we don’t know which view within that range is correct, we should leave it up to individuals to choose for themselves.

Here an argument influenced by work by Will MacAskill might be helpful (I’m not claiming MacAskill himself would accept it). Consider a tunnel case involving a car driven in the ordinary way. People will disagree about what the driver should do, and about what she is permitted to do. Some will say that driving into the tunnel is unjust killing; others will allow the driver to do so, on the ground that she isn’t responsible for the child’s presence on the road and that she has a right to ‘defend herself’, or rather not sacrifice herself. Few people, however, will claim that the driver’s turning the wheel and saving the life of the child at the cost of her own is wrong. So one reasonable view here is that a driver should sacrifice herself to minimize the chance of her doing wrong.

This argument of course carries over to the users and designers of driverless cars. Other things equal, the cars should be programmed to kill users rather than innocent threats. But what about a non-innocent threat, such as someone who deliberately steps in front of a user’s car in an attempt to make it swerve? If a car could be programmed to distinguish innocent from non-innocent threats, then here the case for a program that selects driving into the tunnel when the threat is non-innocent is significantly stronger. But if there were no such program available, then perhaps all cars should be programmed to continue regardless, to prevent non-innocent threatening, and all carers should be told even more firmly than they are now not to allow their children to play near roads!
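The conditional policy just described (sacrifice the user when the threat is innocent, continue when it is not) is, at bottom, a very simple piece of program logic. Here is a purely illustrative sketch in Python; the function name is invented, and the assumption of a reliable innocence classifier is exactly that, an assumption, not anything a real vehicle currently offers:

```python
# Illustrative sketch only. It assumes some upstream system can label a
# threat as innocent or non-innocent, which no real car can currently do.

def tunnel_decision(threat_is_innocent: bool) -> str:
    """Return the car's action in the tunnel scenario."""
    if threat_is_innocent:
        # Other things equal, kill the user rather than an innocent threat.
        return "swerve into wall"
    # A non-innocent threat (e.g. a deliberate stepping-out) does not
    # oblige the user's sacrifice; continuing may also deter the tactic.
    return "continue"
```

The hard work, of course, lies not in the conditional itself but in whether such a classification is feasible and who gets to set the rule.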

36 Responses to The Tunnel Problem

  • Raymond says:

    “Few people, however, will claim that the driver’s turning the wheel and saving the life of the child at the cost of her own is wrong. So one reasonable view here is that a driver should sacrifice herself to minimize the chance of her doing wrong.”

    Yet that intuition no doubt turns on the supererogatory agency implicit in swerving: few would similarly normatively condone forced sacrifice. In any case, on what basis is one locus of decision-making – the individual, nation, regional bloc, world – to be privileged over others in identifying the relevant distribution of normative beliefs? I don’t see any reason why beliefs would be spatio-temporally consistent.

  • Bill says:

    It’s not really self-sacrifice if the car chooses to kill you – so I’m not sure the analogy holds. Both driver and child are innocent in the case, so why privilege one over the other? To make this point clearer, imagine it’s not an adult in the car, but another child (at least conceivable in a world of self-driving cars). It doesn’t seem fair that the child in the car should die, purely because they happen to be in the car rather than on the road.

  • Sarah says:

    I suppose one option would be that it should drive so slowly that the situation would never arise in which it has to choose between swerving and hitting: safely stopping or avoiding the child will be possible for much longer, and right at the end, there will be no option to avoid the child by swerving, whether you want to or not.
    One reason we don’t have those kinds of speed limits now is enforceability, but that would be overcome by driverless cars.
    The other reason is that in fact we value efficient transport over (at least a certain number of) human lives.
    Once we have accepted that, then I think the car passenger should take the risk: they have made a conscious decision to prioritise their transport over (risk to) human life on this occasion, so they should bear the costs if the risk eventuates.

  • Paul Treanor says:

    Typical of Wired to think this is a ‘new’ ethical issue, when it is comparable to older ethics problems involving harm to third parties. Surprising, however, that Wired missed relevant technological developments. The trend is that not simply one driverless car, but other cars and other road users, would communicate with each other and react in combination. For instance, anti-collision transponders for cyclists are already in development (which has predictably led to suggestions to make them compulsory). Collision transponders are already in use in aircraft, although they don’t make ‘ethical’ decisions – they allocate each aircraft an avoidance manoeuvre (one left, one right, and/or one up, one down).

    In the case of the ‘child on the road’ problem, the child could have a protective transponder which would ‘negotiate’ with the driverless car’s computer to determine an action. In reality there is no negotiation: it is all pre-programmed according to the inputs. We don’t need to go into the detail; the relevant point is that there are then two ‘users’ in the ‘child on the road’ problem. One is the person who takes the decision about that driverless car (which might be a taxi company and not the occupants). The other is the child, and if it cannot legally make decisions itself, then its parents or guardians. They may have very different views. In fact we can be fairly certain that parents would put the life of their child first. In the real world, we can expect that some form of child-avoidance radar will be compulsory for driverless cars.

    Now I suggest that the introduction of a transponder does not alter the ethics of the case. There were always two persons to consider, and it was never morally permissible for the owner of the car to make the decision about its collision-avoidance strategy. Transponder or no transponder, the alternative outcome in which the child is killed can only be adequately assessed if the interest of the child is taken into account.

    Millar’s analogy with informed consent in medicine therefore fails. A more correct analogy would be: two patients are waiting for a life-saving organ transplant, and the doctors let one patient decide who gets the organ. Most will want it for themselves, selfish perhaps, but also rational and predictable. Now if we let the owner of a driverless car decide which collision strategies it follows, then most will prioritise themselves, and the occupants of their own car, over other road users. Surprise, surprise.

    So I don’t think there is any real ‘ethical dilemma’ here. Since by definition more than one party is involved in a collision where two lives are at stake, to allow only one party to decide the outcome would unacceptably privilege them. There must be some form of collective or external decision, and that implies government regulation.

    However… As I said, there are parallels with other, older problem cases concerning harm to third parties. We cannot simply follow mathematical models such as least-harm or maximum-benefit. That’s why I also think a prohibition of ‘sexist’ decisions, which other comments take as unproblematic, is not self-evident. There are feminists who would say that the life of a woman must never be sacrificed to benefit a man. If in the original ‘Tunnel Problem’ the occupant was a woman, and the child male, they would have the driverless car run over the child. We can’t simply dismiss that preference, because feminism presents comprehensive arguments for ethically favouring women. There can be other factors which might influence that type of life-or-death decision, some well known from medical ethics. What if one party is a terminal cancer patient with a few days to live, and the other is a young, healthy, talented person with a bright future ahead of them? Who gets a donor organ, and who gets run over by the driverless car? Clearly related, if not identical, issues.

    So we don’t know, as Roger Crisp claims, that there is a permissible range of decisions in this type of case. There are decisions which many will find distasteful, or politically incorrect, or abhorrent, but there is no consensus.

    That is also relevant when considering the real-world ethical issues about cars. Tens of millions of people have been killed by motor vehicles since their introduction, and many more injured. Although that is widely recognised as a serious problem, the reality is that most states and societies, and many governments with very diverse ideologies, consider that an acceptable price to pay for the economic advantages of motor vehicle transport, and the personal convenience (for private car users above all).

    That is a classic example of a collective decision which imposes extreme disbenefit on individuals without offering any procedure by which they can escape the harm, or effectively minimise it. I think that deserves more attention than hypothetical issues about driverless cars.

  • Nicholas. R says:

    In the tunnel problem, would not a “smart car” be programmed to sacrifice the individual, be it child or driver, who had the higher statistical probability of living through a crash/hit?

  • Paul Treanor says:

    Here is a more realistic version of an ‘ethical dilemma’ involving driverless cars. Suppose that driverless cars are common, that they have obstacle detection and collision avoidance systems, and that they obey speed limits and traffic laws. Nevertheless, they still sometimes collide with pedestrians and cyclists, killing and injuring them. The manufacturers do research, and find that driving at 80% of the speed limit will eliminate almost all these accidents. They therefore reprogram the vehicles for that speed, but the consequence is that journeys take 25% longer.
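    The 25% figure follows from the fact that, over a fixed distance, travel time scales inversely with speed. A quick check (illustrative Python, using only the hypothetical figures above):

```python
# Over a fixed distance, time = distance / speed, so journey time
# scales as the inverse of the speed fraction.
speed_fraction = 0.8               # cars run at 80% of the speed limit
time_fraction = 1 / speed_fraction # 1.25x the original journey time
extra_time = time_fraction - 1     # fractional increase in journey time

print(f"Journeys take {extra_time:.0%} longer")
```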

    Now should the owner-users be allowed to override the new speed restriction? Remember that the driverless cars would still stay within the legal speed limit, so the override would not be illegal on those grounds. If manufacturers allowed an override, additional legislation would be required to prohibit it.

    In practice most owner-users would choose to override, to get to their destination quicker. How do I know that? Because that’s exactly what real drivers do. They drive too fast, and take too many risks, simply to save time – and in the process they kill and injure people, disproportionately from vulnerable groups of road users.

    Obviously, therefore, pedestrians and cyclists have an interest in whether such an override should be legal. The override would allow owner-users of driverless cars to sacrifice the life and health of others, to benefit themselves. If there is to be a law, then the legislature should take account of the interest of pedestrians and cyclists. And again, we know there is a real-world conflict of interest between cyclist and pedestrian groups, who seek tougher road safety laws, and the motorist’s lobby, which usually opposes further restrictions on drivers.

    If you read between the lines, Jason Millar is simply advocating that owners of driverless cars should be allowed to override the manufacturer’s safety settings, for their own benefit. Technically that is indeed an ethical issue, and he has framed it to resemble a classic ‘ethical dilemma’ such as the Trolley Problem, presenting it as an issue of freedom of choice with respect to moral intuitions. But in the real world it is simply a political demand to privilege the owners of driverless cars, at the expense of other road users.

  • Keith Tayler says:

    This type of ethical problem is all part of the AI hype. As Paul has outlined, autonomous car technology would at best require everyone to carry transponders, be a dangerous obstruction to human drivers, and would require us to redesign roads and our urban centres at colossal cost. Assuming researchers could develop a half-reliable system, it is difficult to see how this technology could be kept secure and maintained. In short, it is a very bad solution looking for a problem.

    As for the so-called tunnel problem, Sarah is obviously correct. An autonomous car should always be programmed to swerve and crash itself rather than hit a pedestrian (human drivers usually follow this rule). Of course this will mean, unlike a car driven by a human, it might take the same avoidance manoeuvre if a sheet of paper blows across the road or a mischievous person throws a large box in its path (it could become quite a sport for some people).

    I agree with Paul, instead of bothering about hypothetical problems about fictional driverless cars, we should concern ourselves with why companies, governments, academics and media are so obsessed with this silly sci-fi technology.

    • Anthony Drinkwater says:

      You may be right, Keith, but I’m not as sure as you are that driverless cars in some form or other will always be as fictional as you believe. Nor that, if they turn out to become a reality, the costs will be higher than the huge current costs of private motoring. (Death, injury, pollution, expensively-produced assets doing nothing most of the time….)
      I seem to remember that a few days ago you wrote : “we should be very reluctant to halt debate because our interlocutor is not following what we believe to be the logic of discourse. We can learn from some very strange “logics”.”

      • Keith Tayler says:

        Don’t get me wrong, autonomous vehicle technology might be viable and have uses in closed domains in some urban settings, motorway journeys and convoying lorries. Some or all of these might be achievable within a few decades if there was enough money thrown at them. I am nonetheless still of a mind that the huge costs of private motoring you list could be more effectively reduced by other means. Being an advocate of reliable, secure machine intelligence that protects the liberties and privacy of individuals, I would like to see it playing its part to reduce costs and increase safety across all transport systems.

        Living in the wilds of Devon as I do, I am pretty certain a policy maintaining roads before they become rough tracks and keeping some semblance of public transport would be a simpler transport solution. Much of the world’s population live with poorly maintained roads and declining public transport. Do we really need driverless cars before we have roads they can use?

    • Matt Sharp says:

      Driverless cars aren’t fictional. They’re already on the roads. The question is whether they will be produced at commercial scale.

      Given how many deaths (and other problems) currently occur due to human error, I think it’s almost inevitable that they will eventually replace human-driven cars over the next 4 or 5 decades. There will also be demand from people who are too ill or old to drive existing cars, but for whom a self-driving car would massively increase access and opportunities in life.

    • James says:

      You obviously have no idea what you’re talking about. Functional driverless cars are already here, and will be widespread within a generation.

  • Paul Treanor says:

    It is worth summarising the primary initial ethical issues with driverless cars. The first question is whether they should be on the public highway at all, or in any public space. They differ substantially from first-generation robots, which are generally static and confined to factories, where access is restricted. Driverless cars are autonomous, mobile, and intrinsically dangerous even if they don’t malfunction, and they are out on the street.

    Cars cannot be designed to be safe, and that must be taken into account. Cars are dangerous because of their momentum, and the energy released on impact. It is collision which accounts for most deaths and injuries – to the occupants of the car, the occupants of other cars, and other road users: cyclists, motorcyclists and pedestrians. There is no way that a driverless car can be designed to be collision-free, since that risk is inherent to its mobility.

    Now of course existing cars are dangerous too, but that is not an excuse for ignoring the risks. Why should we repeat the historical mistake made with the introduction of the automobile? By the time its death toll was apparent, it was already too well established as a transport technology, to simply ban it. We don’t have to accept driverless cars, just because companies want to sell them.

    An obvious initial form of regulation is to keep them off existing roads. Separate roads are possible in new developments, and in, for instance, airports and theme parks. If the manufacturers wanted to sell more driverless cars, they would have to invest in retro-fitted segregated roads for them. That might work in low-density American suburbs, but not even Apple and Google can pay for a complete parallel highway system, so this is a severe restriction.

    If driverless cars are allowed on the public highway, then other road users must have some form of veto over their use. Remember: this is not a beneficial technology. The driverless cars are dangerous, and they will inevitably kill people. Physically vulnerable road users – cyclists and pedestrians – are most at risk. Again there is no way to design around this. The risk is primarily a function of the vehicle’s mass and speed, and inherent in any motor vehicle.

    In practice such a veto would take the form of legislation, which restricts the movement of driverless cars, to protect those that the car might harm. That is not the way things are going at present. Developers such as Google are lobbying governments to simply let their driverless cars on the road, because they have collision avoidance systems. That is not a sufficient real-world protection against being hit by a driverless car, and by simply ignoring the legitimate interest of cyclists and pedestrians, it reduces them to second-class status.

    An example of the protection that could be afforded is that driverless cars could be programmed not to overtake a cyclist. This is known to be a dangerous manoeuvre, and cyclists often complain about the frightening experience of being overtaken. Obviously an overtaking ban would slow down driverless cars in urban areas, matching their speed to that of cyclists (about 15 to 20 km/h). The manufacturers won’t like that, because they are trying to sell a car, and consumers expect a car to go faster than a cyclist.

    That’s only one example. Almost all restrictions on driverless cars, intended to protect other road users from them, will negatively impact on the speed, convenience, and comfort which they offer to the buyer. In turn that will depress sales, hitting the manufacturers who invested in the technology. We can’t avoid this type of conflict of interest. It is not a justification, however, for sweeping the issues under the carpet, and pretending that everyone will be better off with driverless cars.

    • Matt Sharp says:

      “Almost all restrictions on driverless cars, intended to protect other road users from them, will negatively impact on the speed, convenience, and comfort which they offer to the buyer.”

      The restrictions shouldn’t be so great that uptake of driverless cars is so severely restricted that greater harm is inflicted on other road users by normal cars. For example, a restriction might be imposed that will reduce deaths caused by driverless cars by 50 per year in any given country. However, if this restriction reduces the use of driverless cars such that 100 deaths occur by normal cars (that wouldn’t have been on the road without the restriction), then the restriction has in effect *caused* 50 extra deaths.
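      The bookkeeping in that example, made explicit (the figures are the hypothetical ones above, not real data):

```python
# Hypothetical figures from the example above, not real statistics.
# A restriction prevents 50 driverless-car deaths per year, but it
# displaces enough trips onto ordinary human-driven cars to cause
# 100 deaths that would not otherwise have occurred.
deaths_prevented = 50
deaths_displaced = 100

net_extra_deaths = deaths_displaced - deaths_prevented
print(f"Net effect of the restriction: {net_extra_deaths} extra deaths")
```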

      So sure, there should be some sensible restrictions, but there’s no need to let the perfect (i.e. no deaths at all from cars) be the enemy of the good (i.e. significantly fewer deaths and accidents with driverless cars than normal cars).

      • Paul Treanor says:

        There is no evidence that driverless cars will ‘reduce road deaths’. At present that is no more than an advertising slogan, and no serious research has been done. There is good reason to doubt claims that driverless motor vehicles will be safer than those with a driver, since the risk of death and injury from collision is inherent to motor vehicles travelling at any significant speed. We also know that even careful drivers kill and injure pedestrians, so driving strategy cannot eliminate harm to others.

        Even if true that a journey by driverless car presents less risk to non-occupants, than the same journey in a human-driven car, road deaths could still increase significantly due to different routing, different trip patterns, and above all the predicted increase in car use. The promoters of driverless cars have made no effort to assess that type of risk, and have no interest in doing so.

        And then, even if it is true that replacement of all existing cars by driverless cars leads to a reduction in total deaths and injuries, that still would not justify the like-for-like replacement. Matt Sharp is wrong to present this as a perfectionism issue, because there is no either-or decision involved. The decision does not have to be made at the level of a single ‘society’, and individuals are not without options. The individual pedestrian or cyclist does not have to simply accept the risk of being killed or injured by a driverless car. What is needed in the real world is a strategy by which threatened individuals can avoid the threat posed to them. I indicated some of the options, and there are many more.

        There are obvious parallels here with other political and ethical issues. Some gun control advocates in the United States propose that military weapons should be banned. That would indeed hinder spree shootings with automatic weapons, but most shooting victims in the US are killed by handguns anyway. It’s a case of substituting a slightly less dangerous device for a very dangerous device, but it is not a reason to allow people to be killed. Another parallel is with vaccination: to be effective in eliminating a disease, vaccination rates must be close to 100%. A minority who fear vaccination can obstruct that public-health strategy, but would that justify forced vaccination? Should we force individuals to accept a significant risk that they will be killed or injured by a driverless car, for the sake of a reduction in total deaths and injuries? I don’t see any justification for that strategy, which is what Matt Sharp appears to be suggesting.

        Note that Jason Millar (the ‘Tunnel Problem’ author) has not even begun to address these broader issues. I mailed him about this post and my comments, and perhaps he can say something about the issues.

        • Matt Sharp says:

          “There is no evidence that driverless cars will ‘reduce road deaths’. At present that is no more than an advertising slogan, and no serious research has been done. There is good reason to doubt claims that driverless motor vehicles will be safer than those with a driver, since the risk of death and injury from collision is inherent to motor vehicles travelling at any significant speed.”

          Of course it’s impossible to be certain that road deaths will be reduced by driverless cars *before* they are widely used. However, most accidents are caused by human error.

          Driverless cars could malfunction. But they’re not likely to be drunk, tired, distracted, fail to look properly, exceed the speed limit, be blinded by the sun etc.

          “Even if true that a journey by driverless car presents less risk to non-occupants, than the same journey in a human-driven car, road deaths could still increase significantly due to different routing, different trip patterns, and above all the predicted increase in car use.”

          This is a possibility. But if road deaths increased so much because of increased use, it would still be likely that road deaths per mile or kilometre travelled would be reduced. And we’d be getting a lot more benefit from such additional use.

  • Keith Tayler says:

    According to James you do not know what you are talking about. I think you do. The only way driverless cars could be commonplace on some roads within a generation would be if pedestrians and other road users were basically forced to get out of their way and we gave up trying to contain surveillance technology.

    I remember in the early 60s there were some computer types that were predicting that ‘robot cars’ would be commonplace by the 80s. For sure the navigation, recognition and hardware systems are far more advanced today (still very limited), but, as you outline, the software and whole technology/people interface problems are much the same.

  • Paul Treanor says:

    The issue raised by Matt Sharp can be reformulated as a consumer choice between three types of car. Suppose that manufacturers offer, at the same price:

    – existing cars, which are dangerous to pedestrians and cyclists;

    – a new type of car which is slightly less dangerous, but much more convenient in use;

    – another new type of car which is much less dangerous, but also much less convenient than existing cars.

    Left to themselves, consumers will switch to the slightly less dangerous cars. Other things being equal, that will result in some reduction in deaths and injuries. Matt Sharp sees this strategy as preferable, because otherwise consumers will stick with the existing dangerous cars, and road deaths will not fall.

    However, that model does not reflect the options available in the real world. The government can simply oblige manufacturers to build the safest type of car, and offer it for sale. Since older cars must be replaced at some time, deaths and injuries will then gradually fall, even if consumers dislike the new cars. The government can further oblige and/or encourage motorists to switch from existing cars to the safer cars, to accelerate that process.

    The flaw in Matt Sharp’s reasoning is that he takes consumer demand to be not only an autonomous factor, but uncontrollable. His suggested trade-off also ignores the legitimate interests of pedestrians and cyclists, and the government’s legitimate responsibility to protect them.

    This way of looking at the issues is not limited to driverless cars. It can apply to any innovation which increases road safety while imposing some cost or disbenefit on motorists. It reflects the historical reality that automobile manufacturers did not want to improve safety because of the cost, and were forced to do so by governments. It did indeed cost money, and the extra cost was indeed passed on to the consumer. The consumer (car buyer) did not suffer too much, however, because it was offset by cost reductions from improved design and manufacturing.

    Now with driverless cars, the ‘cost’ to the motorist of increased safety will probably take the form of limits on the car’s use and performance, rather than increased spending. If the government seriously wants to protect pedestrians and cyclists, then it will limit their speed, compel cautious driving strategies, and keep them out of city centres. Motorists certainly won’t like that, because autonomy and convenience were fundamental reasons to choose the car in the first place.

    It is predictable that motorists’ lobbies will then argue that restrictions should be eased, because the driverless cars are still safer than their human-driven predecessors. That is however a different ethical issue: a simple choice between lifting restrictions which inconvenience many people in their daily lives, at the price of death and injury to others. In general the inconvenience would never be so great as to justify death and injury, especially since driverless cars will probably not cut accident rates as much as their supporters claim.

    • Matt Sharp says:

      This is brilliant strawmanning you’ve done here.

      Congratulations on defeating an argument you’ve set up for yourself to defeat.

      I’m perfectly happy for the government to intervene if that is what is required to achieve the best overall outcome. I’m not against legislation. I am concerned about the interests of pedestrians and cyclists (but not just pedestrians and cyclists).

      My position is simply that driverless cars have the potential to be much safer than human-driven cars, and that legislation/government intervention has the potential to either make things better or worse.

      If government insisted that driverless cars could only be sold if they were one million times safer than human drivers, then that seems to be an excessively strict regulation, and such a car would probably be impossible to build whilst also maintaining its usefulness/functionality. On the other hand, if cars were allowed to be sold that were worse than human drivers, that would clearly be excessively weak legislation.

      In between there will be cars that are technically buildable, but so expensive that no-one would willingly buy them, and cars that are a bit better than existing human drivers, but not significantly so.

      There will be an optimal point which combines practicality, affordability and safety.

      • Paul Treanor says:

        At issue is not the usefulness or functionality of the driverless car itself. Nor does restriction on their use always require additional technology. It is essentially the manufacturer’s software which would be regulated. At its simplest that could mean low speed limits, at no additional cost to the manufacturers or buyers. However, low speeds are not what motorists want.

        The choice is generally between convenience for users of driverless cars, and death and injury to others. The social choice which Matt Sharp presents would only be an accurate model if driverless cars had only two possible configurations, the ‘somewhat safer’ model and the ‘very safe but expensive’ model. That is not the case in reality. It is also not true that driverless cars are a precondition for reducing road deaths and injuries. Even if they did reduce deaths and injuries when substituted one-for-one for existing cars, there are alternative policies which would have an equivalent effect. The issue is too complex for a single optimum solution for society as a whole.

        It is worth repeating here that driverless cars are not a safe technology, they are an extremely dangerous technology. Motor vehicles are the most lethal non-military technology ever invented, and most motor vehicles are passenger automobiles. Almost all deaths and injuries caused by motor vehicles are due to some form of collision, where mass and speed of the vehicles determine the energy available to harm occupants, and other road users. Driverless cars cannot avoid this inherent risk, and there is no reason to view them as a ‘safe alternative’ for existing automobiles. All claims that they will be safer are wholly unproven, and no serious research has been done on the effects of their introduction.

        • Matt Sharp says:

          “The social choice which Matt Sharp presents would only be an accurate model, if driverless cars had only two possible configurations, the ‘somewhat safer’ model and the ‘very safe but expensive’ model.”

          I beg you to stop with the strawman arguments. I didn’t present such a model.

          “Almost all deaths and injuries caused by motor vehicles are due to some form of collision, where mass and speed of the vehicles determine the energy available to harm occupants, and other road users”

          And I think it’s worth repeating here that most accidents are caused by human error. Given this, it is entirely reasonable to expect driverless cars to be a relatively ‘safe*r* alternative’; though of course they can’t be guaranteed to be perfectly safe.

          • Keith Tayler says:

            I think this safety issue is being overemphasised. In the UK there are about 1,700 road deaths per annum. Obviously it would be a lot easier and far less expensive to improve roads, vehicle safety technology and driver performance than it would be to research, develop and manufacture driverless cars. It would be reasonable to assume that the former could reduce road deaths to well below 1,000 while keeping vehicles moving at an acceptable speed. It is doubtful that autonomous cars could do much to improve on this, as most of the remaining deaths will involve pedestrians, animals, cycles and motorcycles, or be caused by dangerous human drivers. Autonomous cars would also be a very effective form of surveillance technology that many of us would strenuously oppose.

            • Jay Walker says:

              About 1 in 3 accidents involve drunk drivers. These would be virtually eliminated if everyone had access to autonomous cars.

              • Keith Tayler says:

                There are much easier ways of stopping drunk drivers. Given we are talking about a few hundred people, I am certain autonomous cars would cause just as many deaths. So I am still not convinced that there would be a net reduction in deaths.

                • Jay Walker says:

                  “There are much easier ways of stopping drunk drivers”

                  Do you have evidence for this empirical claim?

                  Rich companies like Google are betting these technologies will be safe enough to become widely popular. If they are right it will require little public investment. Seems easy to me.

                  • Keith Tayler says:

                    The empirical evidence is that deaths caused by drunk drivers have been falling for the last 50 years because we have been taking action to control them. There are plenty of measures that could reduce them further, which might involve technologies that prevent drunk drivers from using their cars. Changes in legislation work wonders, e.g. much tougher penalties for drivers and publicans. Given that quite a large number of drunk drivers are getting a bit long in the tooth, fatalities will continue to decline as their numbers decline.

                    Google are not going to redesign and construct roads and junctions. Even if they and other companies did, do we really want Google et al. effectively owning and controlling our roads?

                    • Jay Walker says:

                      Let’s focus on the next 50 years rather than the previous 50. Given the rate at which this technology is advancing, autonomous cars that work with current infrastructure are plausible.

  • Jason Millar says:

    I’ve had a great time reading the article and ensuing comments! Thank you for all of it.

    I originally conceived of the Tunnel Problem as a way of pointing out some limits to engineers’ moral authority when designing and automating certain decision algorithms like that featured in the Tunnel Problem. If you’re interested I’d recommend the paper that contains the much longer argument. It can be found here:

    Writing articles like the one Roger Crisp references in WIRED often requires sacrifices to context and detail owing to the word limits editors must impose. I hope that a fuller treatment of the Tunnel Problem will help to clarify some of the issues that have thoughtfully been raised by Matt, Paul, and other commenters! I hope it will also save me from the charge that I am “simply advocating that owners of driverless cars should be allowed to override the manufacturers safety settings, for their own benefit”. That is most definitely not my intent. The ethics of driverless cars certainly demands more of us than that.

    I look forward to further comment!

  • Keith Tayler says:

    You asked me for empirical evidence so I had to give you past evidence.

    The technology cannot work safely on existing infrastructure at an acceptable speed, for the reasons mentioned above, inter alia. There have been no great advances in the technology for quite a few decades, and even the advances there have been are brittle and unsafe in conditions actual cars would encounter. Even if we assume that a system of this complexity works under ideal conditions, keeping vehicles working and safe will be very difficult, if not impossible, under prolonged normal and extreme operational conditions. Autonomous cars would be difficult for pedestrians and human drivers to interact with, because there can be no eye contact. Again, driverless cars are unacceptable to many people because they would be a powerful surveillance technology.

    • Jay Walker says:

      You said there are better ways of reducing accidents from drink driving. The fact that other things (e.g. banning drink driving) have reduced these deaths in the past is not evidence that there are better ways of reducing them in the future. All of your arguments rest on empirical claims for which you provide no evidence.

      • Keith Tayler says:

        Obviously the continued application of a policy may not work indefinitely. However, in the case of drunk drivers there is strong evidence that greater detection and conviction of drivers reduces accidents and deters drivers from drinking. We also know that by making it more difficult for drivers to consume alcohol we can reduce drunk driving. You appear to be taking the irrational position of favouring a non-existent technology, which from past experience of such technologies we know is highly unlikely to be reliable and safe, over tried and tested measures. (Microsoft launched Windows 10 this week in an attempt to make Windows more reliable and safe after 30 years of trying. Unfortunately we already know they have failed.) Surely it is not necessary to impose upon us such a hugely expensive, intolerably intrusive and unreliable technology in the hope of reducing a few hundred road deaths. (There are some other benefits, but again the costs are too great.) As I have said, we can greatly reduce road deaths and injuries by other means. If you want evidence, Google ‘road safety history’ and you will see the extraordinary success we have had in preventing accidents, and how relatively easily we could further reduce them throughout the world, thereby saving considerably more lives than the introduction of autonomous cars in advanced capitalist nations.

        • Jay Walker says:

          I favour whatever technology works best in reducing deaths. I don’t dismiss non-existent technologies if I think their existence is sufficiently probable. Your case rests on the claim that improving roads will be so much better at reducing deaths than developing autonomous cars that the latter doesn’t even need to be considered. This is an empirical claim. You need evidence to support it, rather than just assumptions you think are clear from googling. Some studies predict that autonomous cars will save lives and money. You need studies saying the opposite.

          • Keith Tayler says:

            No: you can look at existing policies and technologies and weigh them against new policies and technologies. You cannot expect to find empirical evidence for all future events. I have been researching and making predictions about AI for over 40 years and so far I have been pretty accurate. AI researchers, on the other hand, have been very far off the mark and are still making completely unrealistic claims and predictions.

            I am certainly not saying autonomous car technology need not be considered. It does, as I said above, have its place and it might be possible to slowly develop it from these closed domains to more open domains. Vehicle safety technology has, I believe, a promising future. However, my understanding of the “problem” and knowledge of vehicle hardware and software (I have personally experienced an accident caused by brake software failure) cautions me to suggest that robust reliability should take precedence over rapid massive technological change.

            • Jay Walker says:

              So, given that you now concede that autonomous cars will be part of improving vehicle safety, surely considering their ethical dimensions is not “all part of the AI hype”, as you originally asserted.

              • Keith Tayler says:

                I have always, as I said in my earlier posts, accepted that the technology could be, and indeed is being, used in certain environments. In those environments there are few if any ethical issues; it is only when we get to the open road and hapless pedestrians that we start to get speculation, which does tend to play into the hands of the AI lobby. They have been doing it for decades and it is all part of the hype. Please remember, I am an advocate of machine intelligence, not AI.

    • Keith Tayler says:

      I do not need studies to say that improving roads and vehicle technology will save more lives than an unproven, non-existent technology. I know what measures work and how much they have reduced deaths, because there are plenty of studies on road safety publicly available on the net. We could reduce road deaths in the UK to a few hundred people, but to reduce them further would be almost impossible regardless of the technology, because these deaths would involve motorcyclists, dangerous drivers, unlucky pedestrians, etc. Unless you are proposing to outlaw human drivers and make us all passengers of autonomous cars, it is impossible to prevent all road accidents. Even if you are proposing such a totalitarian measure, you must accept that even if driverless car technology performed to the highest safety-critical standards, it could not prevent all accidents, and its own failures could cause accidents. So again we are left with a few hundred deaths it is impossible to prevent.

      Taking a global view, road accidents could be dramatically reduced in countries where they are still high by providing the funds and implementing the policies and technology that have worked so effectively in countries like the UK where road deaths are already extremely low. Expensive autonomous cars are of little use to people in developing countries where road deaths run into the hundreds of thousands.

      Finally, we must not forget that motor accident victims are a valuable source of human organs: so a few hundred road deaths can save the lives of thousands of transplant patients. It is a grim statistic but nonetheless true, and there are quite a few bioethicists who would argue for the continuation of road fatalities on the grounds there is a net saving of life (I am not one).

      Dare I say it, but I think we are going to have to agree to disagree.
