
National Oxford Uehiro Prize in Practical Ethics: Why the Responsibility Gap is Not a Compelling Objection to Lethal Autonomous Weapons


This article received an honourable mention in the undergraduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by Tanae Rao, University of Oxford student

There are some crimes, such as killing non-combatants and mutilating corpses, so vile that they are clearly impermissible even in the brutal chaos of war. Upholding human dignity, or whatever is left of it, in these situations may require us to hold someone morally responsible for violation of the rules of combat. Common sense morality dictates that we owe it to those unlawfully killed or injured to punish the people who carried out the atrocity. But what if the perpetrators weren’t people at all? Robert Sparrow argues that, when lethal autonomous weapons cause war crimes, it is often impossible to identify someone–man or machine–who can appropriately be held morally responsible (Sparrow 2007; Sparrow 2016). This might explain some of our ambivalence about the deployment of autonomous weapons, even if their use would replace human combatants who commit war crimes more frequently than their robotic counterparts.

This essay rejects Sparrow’s argument, at least as it applies to a wide class of lethal autonomous weapons I call ‘LAW-1’. When LAW-1s cause war crimes, then at least one human being can usually be held morally responsible. I acknowledge that there is a subset of accidents for which attributing moral responsibility is murkier, but they do not give us reason to refrain from using LAW-1s as compared with less sophisticated weapons like guns and missiles.

LAW-1s are the weapons systems that most people envision when imagining a lethal autonomous weapon. I predict that most systems developed in the next decade will be LAW-1s, although some may blur the boundary between LAW-1s and the next generation of lethal autonomous weapons. The defining characteristics of an LAW-1 are:

1. Moderate task specificity: An LAW-1 is a model trained to fulfil a relatively specific task, such as ‘fly around this area and kill any enemy combatants identified if and only if this is allowed under international law’. An example of a task too specific for an LAW-1 is ‘fly to these specific coordinates, then explode’ (this would be more akin to an unsophisticated missile, land mine, etc.). An example of a task too general is ‘perform tasks that will help our state win the war’.

2. No human intervention needed: An LAW-1 is capable of identifying targets and using lethal force without human intervention. For example, an unmanned aerial vehicle (UAV) that uses computer vision techniques to discern active combatants from non-combatants, then shoots the combatants with an attached gun without waiting for human approval, would qualify as an LAW-1. An aerial vehicle that requires a remote pilot to operate it is not an LAW-1.

3. No mental states: An LAW-1 does not have mental states, such as pain or regret, and does not have subjective experiences. It is reasonable to believe that all weapons systems currently in operation fulfil this criterion.

I will now outline Sparrow’s argument that lethal autonomous weapons introduce a responsibility gap.

(1) There is a responsibility gap for some war crimes caused by lethal autonomous weapons, meaning that no one can be held morally responsible for the war crime.

(2) Out of basic respect for enemy combatants and non-combatants alike, the legitimate use of any weapon requires that someone can be held responsible if wrongful harm arises as a result of its use.

(C) Therefore, we should not use lethal autonomous weapons during wartime.

I deny the existence of a responsibility gap for an LAW-1. Therefore, the focus of this essay is on the first premise of Sparrow’s argument. There are two reasons why an LAW-1 might commit a war crime. First, this might be intentionally programmed, in which case at least one human being is morally responsible. Second, if the war crime was not a result of human intention, human beings can often be held responsible for gross negligence. I concede that there will be a small number of freak accidents involving the use of LAW-1s for which no human can be held responsible but argue that these cases give us no special reason to reject LAW-1s as compared with less sophisticated weapons.

i. Humans develop and deploy an LAW-1 despite knowing that it will likely commit a war crime.

It should be uncontroversial that humans using an LAW-1 with the knowledge that it will likely commit war crimes are morally responsible for those crimes. For example, a human could knowingly train an LAW-1 with a reward function that incentivises killing non-combatants, even if killing non-combatants is not its explicit goal (e.g., the machine is trained to kill non-combatants that get in its way). The programmers of such a horrible weapon are morally responsible for the war crimes committed. If the military officials knew about its criminal programming, then they too would be morally responsible for the war crimes committed. Therefore, if humans knowingly deploy an LAW-1 that will commit war crimes, there is no responsibility gap.

ii. Humans deploy an LAW-1 without knowing that it could commit a war crime.

Here is where the existence of a responsibility gap is most plausible. Sparrow argues that “the more the system is autonomous then the more it has the capacity to make choices other than those predicted or encouraged by its programmers. At some point then, it will no longer be possible to hold the programmers/designers responsible for outcomes that they could neither control nor predict” (Sparrow 2007, 70).

I make two contentions about accidental war crimes caused by LAW-1s. Firstly, many of these automation failures are a result of gross negligence and should have been foreseen by human programmers. As in other cases of negligence, it is appropriate to hold some human beings morally responsible for the results. For example, weapons company executives and/or military leadership could justifiably be imprisoned for some accidents. Secondly, the accidents which could not have been foreseen or prevented through sensible design practice do not give us special reason to dismiss LAW-1s. These accidents are not dissimilar from the misfiring of a gun, or human mistargeting of an unsophisticated missile.

When considering my arguments, it is prudent to think of why such accidents happen. Not all LAW-1s use machine learning (ML) techniques, but ML is widespread enough in tasks important for LAW-1s, such as computer vision, that it is worth exploring in some detail. In general, a machine learning-powered LAW-1 might fail because a) it is (accidentally) given a goal compatible with war crimes without robust constraints, and/or b) it fails at achieving its goal or staying within its constraints (e.g., misidentifying non-combatants as enemy combatants about to shoot friendly combatants).[1]
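The essay itself contains no code, but the two failure modes just described can be made concrete with a purely illustrative sketch. Everything below is hypothetical (the `Detection` class, the `may_engage` gate, the labels, and the 0.99 threshold are all my own invention, not anything from a real weapons system): a learned classifier proposes a label, and a hard-coded constraint decides whether lethal force is permitted.

```python
# Illustrative sketch only -- all names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # the perception model's guess, e.g. "combatant"
    confidence: float # the model's confidence in that guess, 0.0-1.0

def may_engage(detection: Detection, threshold: float = 0.99) -> bool:
    """Hard constraint layered on top of the learned classifier: engage only
    if the target is labelled a combatant AND confidence clears a high bar.
    Failure mode (a) would be a missing or lax constraint here; failure
    mode (b) is the classifier producing a confidently wrong label."""
    return detection.label == "combatant" and detection.confidence >= threshold

# A correct but low-confidence detection is (rightly) blocked:
print(may_engage(Detection("combatant", 0.60)))       # False
print(may_engage(Detection("non_combatant", 0.999)))  # False
# But a confidently *wrong* "combatant" label would slip past the gate --
# which is exactly failure mode (b).
```

The point of the sketch is that the constraint code is fully auditable, while the classifier feeding it is not; the responsibility question raised below turns largely on how thoroughly that second component was tested.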

A body of machine learning research has identified, forewarned, and discussed these potential failure modes in detail.[2] I think it is reasonable to expect LAW-1 programmers to rigorously test their systems to ensure that the frequency of war crimes committed is exceedingly low. Sensible development of LAW-1s might involve intensive testing on representative datasets, early-stage deployments in real combat zones without weaponry to check if non-combatants can be consistently identified, etc. Techniques to solve the problem of misspecified goals (in this case, goals compatible with war crimes) continue to be developed (Ouyang et al. 2022). The comparatively specific objectives given to LAW-1s make overcoming these technical challenges easier than for ML models given very general objectives. And, in the worst-case scenario, LAW-1s committing war crimes can be quickly recalled, and either decommissioned or improved to avoid recurrences.
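The kind of pre-deployment audit gestured at above can be sketched in a few lines. This is a toy model under stated assumptions, not a real auditing protocol: the `audit` function, the 0.1% error ceiling, and the labels are all hypothetical, chosen only to show what "rigorously testing on a representative dataset" might mean operationally.

```python
# Toy sketch of a pre-deployment audit -- names and thresholds hypothetical.

def audit(predictions: list[str], ground_truth: list[str],
          max_error_rate: float = 0.001) -> bool:
    """Estimate the classifier's error rate on a held-out, representative
    audit set, and approve deployment only if it is at or below a ceiling."""
    errors = sum(p != t for p, t in zip(predictions, ground_truth))
    return errors / len(ground_truth) <= max_error_rate

# 1 misclassification in 1,000 audit samples sits exactly at the ceiling:
truth = ["non_combatant"] * 1000
preds = ["combatant"] + ["non_combatant"] * 999
print(audit(preds, truth))      # True

# 5 misclassifications in 1,000 exceeds it, so deployment is refused:
preds_bad = ["combatant"] * 5 + ["non_combatant"] * 995
print(audit(preds_bad, truth))  # False
```

On this picture, skipping such an audit (or setting the ceiling carelessly) is precisely the sort of negligence for which the essay argues humans can be held responsible.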

Crucially, developers of LAW-1s need not be able to predict exactly how or why their machines will fail in order to be held morally responsible for that failure. As long as the LAW-1 committed a war crime as a result of a known failure mode (e.g., glitches in computer vision misclassifying non-combatants) that was not ruled out with a sufficient degree of confidence, developers (among others) can be held morally responsible. This is analogous to an unsophisticated missile whose faulty targeting system causes target coordinates to be miscommunicated, resulting in the accidental bombing of a hospital. The weapons manufacturer can plausibly be held morally responsible for not rigorously testing its product before selling it to the military.

Therefore, it is likely that, in many though not all circumstances, humans can be held morally responsible for war crimes caused by LAW-1s, even if no human explicitly intended for a war crime to be committed. In particular, programmers can be held responsible for not carefully checking for common failure modes, military officials can be held responsible for not sufficiently auditing the weapons they choose to deploy, and states can be held responsible for failing to regulate the development of faulty LAW-1s. I acknowledge that careful, rigorous checks might not currently be possible for LAW-1s, let alone more sophisticated lethal autonomous weapons. But ensuring a very low failure rate in such systems is a technical problem to be solved, rather than some sort of mathematical impossibility. Perhaps the deployment of LAW-1s ought to be delayed until further progress on these technical problems is made, but this does not justify a complete ban.

To be clear, LAW-1s still identify and kill people without human intervention. There will likely always be a small risk of accidentally violating international law when using an LAW-1 even if no negligence is involved. But there is no morally relevant difference between this and a human keying in the wrong target for a missile accidentally, or even a gun misfiring and hurting a surrendered enemy combatant. If LAW-1s have a very high rate of accidental killings, then they should not be used, for the same reason that a very inaccurate missile should not be used. The degree of autonomy exhibited by a weapons system is only relevant insofar as it is correlated with the frequency of accidents; the responsibility gap is not a reason to discount the deployment of LAW-1s with low accident rates.

Sparrow’s response to the charge that non-autonomous weapon-related unjust killings sometimes also have responsibility gaps is that “if the nature of a weapon, or other means of war fighting, is such that it is typically impossible to identify or hold individuals responsible for the casualties that it causes then it is contrary to [the] important requirement of jus in bello” (Sparrow 2007, 67). But I have argued that, at least for the LAW-1s currently being deployed and developed by the world’s militaries, the responsibility gap is far from typical. By this, I mean that the overall number of LAW-1-caused war crimes for which no one can be held morally responsible is plausibly smaller than Sparrow needs for his quoted response to be compelling.

Despite being able to use lethal force without human intervention, LAW-1s are not so different from a gun with regard to the attribution of moral responsibility. Just as a gun might misfire, or a human being may accidentally (and understandably) misaim, LAW-1s might not fulfil the task intended by the humans developing and deploying them. If these accidents are just as infrequent as accidents caused by human combatants, then the existence of a responsibility gap does not give us compelling reason to abandon LAW-1s. As technology develops, it seems likely that accident rates will decrease to the point that LAW-1s are superior to human combatants. Clever programming can allow LAW-1s to escape the violence-inducing cognitive biases shown to be present in human militaries, take in and provide relevant information faster than humans, and ultimately render law-abiding decisions in chaotic situations (Arkin 2010).

Therefore, the responsibility gap is not a compelling reason to refrain from developing and deploying certain kinds of lethal autonomous weapons. In fact, the need to minimise accidents may justify more expenditure on developing LAW-1s to be as safe as is feasible. Additionally, further research should establish a clearer classification of the degree of autonomy displayed by different weapons systems, as is relevant to moral responsibility. Not all lethal autonomous weapons have the same ethical implications, and it is dangerous to be overly general in our conclusions about such a consequential subject.



Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete problems in AI safety.” arXiv preprint arXiv:1606.06565 (2016).

Arkin, Ronald C. “The case for ethical autonomy in unmanned systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.

Di Langosco, Lauro Langosco, Jack Koch, Lee D. Sharkey, Jacob Pfau, and David Krueger. “Goal misgeneralization in deep reinforcement learning.” In International Conference on Machine Learning, pp. 12004-12019. PMLR, 2022.

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).

Sparrow, Robert. “Killer robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62-77.

Sparrow, Robert. “Robots and respect: Assessing the case against autonomous weapon systems.” Ethics & International Affairs 30, no. 1 (2016): 93-116.

[1] The former category is not limited to models for which goals are misspecified; I intend for ‘inner alignment’ failures, also known as goal misgeneralisation, to be included as well (see Langosco et al. 2022).

[2] See Amodei et al. 2016 for an overview of these research problems.


16 Comments on this post

  1. In the introduction an invalid assumption is made:

    “An aerial vehicle that requires a remote pilot to operate it is not an LAW-1.”

    A pilot is not of necessity the weapons officer.

    ML is starting to be used in semi-autonomous targeting systems to increase aerial vehicle survivability in hostile zones, to minimise inbound threats such as SAMs and even ground-based guns. This is especially true of remotely piloted vehicles, where the response loop time is too long.

    That is, due to the speed of light and relay delays, a remote pilot does not even operate the aerial vehicle, as the response loop time is too slow. In effect the pilot adjusts the course in an onboard autopilot navigation system that would also have flown the aerial vehicle unattended to the hostile zone, thus maximizing the remote pilot’s time on target.

    But there is a second very dangerous assumption,

    “Firstly, many of these automation failures are a result of gross negligence and should have been foreseen by human programmers.”

    Back in the early 1930s there were two important things established about computers and the logic they function by, before an electrical or electronic computer was even built.

    These were the answer to the “Halting Problem” carried out independently by Alan Turing and Alonzo Church.

    Also, and more fundamental, was the issue that Kurt Gödel dealt with in his two papers on undecidability.

    Basically it was shown that the designer and implementor of a program running on a general-purpose Turing engine (which all modern computers are) could not, outside of very trivial examples, know what the full functionality of the program would be.

    Yes, I know this is a subject more pertinent to mathematics, but all AI and thus ML systems run on Turing machines, so anyone discussing the morals or ethics relating to ML and AI needs to be fully aware of the Church-Turing and Gödel works.

    Otherwise they make mistakes such as,

    “But ensuring a very low failure rate in such systems is a technical problem to be solved, rather than some sort of mathematical impossibility.”

  2. The “responsibility gap”, whatever it is, seems like a calculated cop out. The world is presently controlled by the big kids in the metaphorical sand box.
    No one has leverage to hold anyone else responsible for anything. Item: I find it ironic that recent news stories have said there is mounting evidence that COVID was released from a Chinese laboratory. Let us suppose this is true and not misinformation coming from conservative politics, intended to make current leadership look bad. A former president famously said: it came from China. He was alternately praised and denounced for that. Let us say, just for grins, he was right, out of the gate. What were his sources? Prima facie, this makes a mockery of any alleged responsibility gap. People do not give a flip about responsibility; refuse to accept it, at ANY level. Why would the major power brokers, those big kids, be held or let themselves be held to a higher standard?

  3. Paul,

    The “responsibility gap” is also called “arm’s-length management” in some places.

    The new kid on the block for this nonsense is “Machine Learning” (ML), which although lumped under “Artificial Intelligence” (AI) is nothing of the sort. ML’s actual roots are in “Expert Systems” (ES) from the 1980s, which were basically “Decision Tree” (DT) based, formed by a domain expert.

    It did not take long for people to realise there was something lacking in such systems, which was they were fixed and could not usefully improve.

    Thus other areas came along, one of which was “rule testers”: you gave them data and suggested a rule, and you got a confidence rating back. A few times around the loop usually told you whether your rule idea had merit or not.

    I was working for a smallish company in Islington, North London, back then and was involved in a product called HULK (for Helps Uncover Latent Knowledge). It ran on a BBC Model B home computer or its Office version from Torch Computers. As far as I’m aware it was a first of its type and, like all ideas that are really new, it was not the success it could have been.

    Time moved on, and about a decade later people started taking the ideas of “matched filters” that could self-adjust and putting them in a feedback loop. These became the start of what some call “Inference Engines” (IE), which moved ES systems into the start of what we now call ML systems. But they are in no way “intelligent” or even “dumb”; they simply pick signals out of random-looking but actually non-stochastic data.

    The problem, as those at the Cambridge Computer Labs and more recently up in Edinburgh know, is that the entire ML process from beginning to end can be easily poisoned. That is, you can hide bias in it at every stage, because if you think about it the rule-finding mechanism acts like a leaky integrator. As the data passes through it, and as the integrator regression is not linear in nature, the order the data is fed in changes the outcome…

    You might remember Microsoft’s first toe in the ML water “Tay” and how she was abused…

    Well, round two with ChatGPT has a dirty little secret… It has “knomes”, or knowledge measurers, being paid a couple of USD to sift through the data looking for “sin” etc. to remove… (which is actually immoral and unethical, as it causes clear mental harm in normal people).

    The knomes too affect the rule-finding process to try and stop undesired bias building up.

    The thing is when you follow the logic through, they are actually introducing bias as a counter to unwanted bias.

    Now flip that over: let’s say you are a government official with an agenda (RoboDebt in Australia will show you several of those). You use computers to build in your agenda, and thus provide the “computer says” excuse when it acts against people you dislike for whatever reason.

    With ML this becomes even easier, if you select apparently neutral training data but order it correctly the ML becomes biased and looks at all future data through that bias, so in effect “cherry picks” for the bias and reinforces it.

    If challenged, you eventually hand over the training data in a different order and a copy of the ML system. It comes out differently… So auto-magically it’s not your fault and you can deny all responsibility, especially as it will have been done through an external company to give that “arm’s-length” distancing.

    That is what our future holds with ML unless we take steps to prevent it.

    A UK Law Lord, when faced with British Gas blatantly harassing a woman and trying to hide behind the “computer says” excuse, made an observation: that the computer hardware and software were the creations of man under a directing mind, and that the directing mind could not evade responsibility…

    I suspect the subsequent move to ML systems is a less than subtle attempt to get around the consequences of that observation…

    I could be wrong, but who would take a bet on it even if the stakes were only the price of a cup of tea?

  4. Bias and manipulation. This reminds me of the lyrics of the Stones’ “You can’t always get what you want…but sometimes you can get what you need.”

    I share your skepticism of AI/ML. As a programmer, I once had pseudo-code instructions from a “directing mind” that would not work. I tried an inversion of the coding to get the desired result. The “mind” got what was wanted.

  5. Thanks to all who have contributed here. I needed to think more about all of this. After re-reading the paper offered by the prize winner, I understood my own confusion and at least part of what I had either a. Missed, or, b. Dismissed. Warfare is becoming distasteful among people who find waste at least as abhorrent. This smacked me in the face, like a hammer. Now, and only now, I think I understand why planners and implementers of war are so enamoured with artificial intelligence: It could fully impersonalize chaos. Thanks, Clive, for the arms-length analogy. My reptilian brain was interfering with any better angels of nature that may remain. So, briefly, civilized war mongrels—be that an obvious contradiction—need a cop out. Those who rely on power and authoritarianism need not, nor care not, apply. The notion of something like computer warfare is no longer a notion. It is happening every day. People who are victims of bank fraud, via identity theft, get it. So, where does this go? Machine war, I think. No blood, as Cosby once intoned, just put (it) somewhere. I mean, think about it. Sooner or later, if prior to 2525, power mongers and authoritarians may evaporate. Machine war might generate less revenue. But it would be so, uh, civilized…

    Friends: these are only new world musings on what we have seen coming. Some folks I know are not as optimistic. I get that.

  6. Further: today, the foolishness of an American broadcast personality has hammered itself home. He was a fool, in my opinion, the first time I ever heard his broadcasts. Now, and moreover, this moron solidifies my initial trepidations about him. He is a fool. Sin duda, como no?

    1. What has that to do with anything so far discussed here? The conversation and comments have been about specificity, not generality. Or are you just poking fun at philosophy?……was that better than ‘Huh?’ ¿.

  7. I am still parsing ‘responsibility gap’. The month of March features ethics awareness; women’s history and several more commemorations that escape me right now. After a reading on pets, on another blog, I find most of the complaints pretty inane. The vegan/omnivorian argument is, purely, ludicrous, near as I can tell. After a couple thousand years of history and eradication of trichinosis—which is not recognized as a word by this tablet—animal flesh remains a source of protein and sustenance for both humans and other carnivores. Morality and ethics appear to be targets for revisionism.
    Ethics awareness seems equally pointless, insofar as ethics and morality are just not that important. A writer here, philosopher of morality, said so.
    By the way, parse this if you will: ‘ serious complications from… (misuse) of this product may result, and could be severe’. What, in your estimation, is the difference between serious and severe? I see little to none. But, context is tricky, right?

    1. We live by solutions that we create for troublesome situations. A problem arises. We think about it. We try something that works for a while. Maybe a few refinements make our solution last. We have success that may last for decades, centuries, or millennia. Then we impact the situation with something else we do. By now many of us have come to rely on the old solution as a model or ideal way to do things. But our other impacts have made the old solution useless or detrimental. Suppose that the old solution was to consume animal protein at a time when meat was plentiful, and we were few. However, we have become a swarm and the meat solution has morphed into a massive and destructive industry that threatens to destroy our world and cause our extinction.

      Some of us will be open to trying another way. A substantial number want to keep things the way they are. That is laziness. Once atop the food chain we mostly prefer to stay there. We should not have become a swarm. But we did. We have learned other ways to supply our protein now. In the near term, we can get protein another way.

      This will turn out to be a minor problem. We can get over our reliance on meat. We seem unable to solve the much larger problems. As Pogo said we have seen the enemy as he is us. The mass that we have become creates a multitude of other harder problems. We resist even thinking about them. There is the laziness problem. There are conflicts between groups that culminate in wars. Some of us are still primitive bullies. We fight over our favorite models such as democracy versus authoritarianism. We fight over old grudges that are centuries out of date. We are just too lazy to evolve. We are going extinct.

  8. I attempt to embellish or expand what my dear brother said about the world’$ greatest rock and roll band: you can’t always get what you want. You don’t always want what you get. Some have found fault with that. I don’t much care. Don’t imagine they do either.

  9. Larry Van Pelt,

    “We are just too lazy to evolve. We are going extinct.”

    Yes, not just the birth rate, but in some places (the US) the life expectancy.

    The simple fact is children are of less use and exponentially more expensive. And contrary to what many think, medicine is not making childbirth safer for women or the children.

    If correlation is causation, then two things appear,

    1, Industrialization
    2, Wealth divide.

    The latter is caused by wage stagnation over nearly the past two decades. In fact, in real terms, the price of putting food on the table has become an increasingly large part of a family’s expenditure.

    Whilst governments try to hide it, in real terms 8/10ths of the working populace are becoming poorer compared to hours of labour.

    So women have to work now, and the better-off women are leaving childbirth until ten to fifteen or more years later. Thus the generations have moved from 15-25 years apart to 30-50 years apart, with just one or two children, which is insufficient to maintain the population. But the women on average do get to live that much longer than women in agrarian-based societies, and at a generally better standard of living.

    As I said, if correlation is causation, then we can blame the business people who want cheap labour to earn them profit.

    Oh… If we look not just at the wage gap but at other signs of wealth being moved into the top few percent of the population, is that confirmation of the previous two points as causation, or just another correlation?

    A society should be based on the notion of “a rising tide lifts all boats” not drowns those at the lower points on the harbour wall due to the weight forced onto them.

  10. Thinking further on this entire issue. And, the framing of responsibility gap. This blog, and its title, appear to be about matters of an ethical nature or impetus. Fair enough. Crisp’s recent post on morality came racing back. He admitted his assessment was inconsistent with his background as moral philosopher. It strongly appears to me the responsibility gap notion has some foundation in a brief(?) concept known as situational ethics: a squishy idea that gained traction, and, like authoritarian populism, ran off the road. Or, did it? I think not. The premise of situational ethics has ever been a principle of warfare: do unto thine enemy, before he can do unto you. Clearly, then, it never goes away. It was embarrassing when some folks parsed it and found the premise, squishy. People entwined in economics and international affairs find it a useful, if risky, distraction. Maybe, I should have written, tricky.
    Political interests want to have it both ways—or maybe three. There are always three ways to see anything. Possibly more.

  11. Since everyone has their own take on things—their private value systems and personal ethics—I’d say the matters discussed in this blog are all squishy.
    Of course, about half of the planet is (in reality) still authoritarian and/or tribal.

  12. Today, I heard: 1. A governing(?) body at The Hague issued an arrest warrant for Putin. I think it is called the ICC. 2. Further reports that China will provide assistance to Russia, in the form of lethal aid, in Putin’s quest towards genocide in Ukraine. I noted on another blog that symbolism and ersatz censorship are useless against anyone. Especially, totalitarian regimes. Such paltry sabre-rattling carries no influence. Has no teeth. No dog, no hunt; no horse, no race. There is no reasoning with madmen who believe they hold all the cards. The ICC may as well issue a blanket warrant for all the bad actors in the world. That would require yet another clandestine infrastructure. To track their every move, defeat their security details and take them into custody for further disposition. Can’t do it. Wouldn’t be prudent. Or, economically or logistically feasible. People in Russian and Chinese leaderships know this. Their sense of invincibility is, as a practical matter, unshakeable. Bluff is for five-card poker. Modern contingency is, by far, too complex. The bad guys are laughing. Spend some useful think-tank time and logistical sweat—or shut up. Don’t just make us laugh. Finally, don’t pay career bureaucrats for useless edicts and poker games.

  13. U.S. policy, had there been one on this issue, changed within three hours. Amazing how autonomous matters can morph so quickly, while objects of debate continue to fester. The Hague thing appears to have been effectively erased. Clearly, prudent, under circumstances. I thought as much, three hours ago. As to the policy matter, that is ephemeral, much like that ICC edict that also had only symbolism and quasi-censorship I mentioned before. Power, on any level, is inexorable, insidious and inescapable. If this is not challenged with like determination, the resulting assessment from adversaries is weakness. There must have been some mention of this in The Art of War, or, Mein Kampf. Certainly, leaders of free world governments have read these edicts. I know of some. Yeah.

    So, where does it go from here…. rhetorical statement—the question is unanswerable, now, if ever. My dear brother believes there is no way beyond extinction. We are the enemy. Yes. I agree. He has children and a grandchild. Well.
