
Three arguments against turning the Large Hadron Collider on

In response to Anders Sandberg’s post on the Large Hadron Collider.

The physicists’ responses to worries about the risks posed by the LHC make it unclear whether they understand the moral issue. They may have the power, but they do not have the liberty to hazard the destruction of all present and future goodness. Nobody does.

Professor Frank Close of the University of Oxford has been quoted as saying that "The idea that it could cause the end of the world is ridiculous." Is it ridiculous because it is impossible, or because it is very unlikely? I don’t think he knows it is impossible, and being very unlikely is not sufficient to dismiss the risk. Yes, it’s very unlikely, but, as I think these three arguments demonstrate, very unlikely is not remotely unlikely enough, and may even be beside the point.

1st Argument

1.        A necessary condition on doing anything which might destroy all present and future goodness is that the expected value of doing it is positive.

2.        Set g to be the total goodness (all present and future goodness) in the absence of running the LHC, x the factor by which running the LHC for a week increases goodness if it doesn’t bring total destruction, and p the chance of total destruction per week of running. Then (xg-g) is the benefit that might be gained from a week’s running, and the expected value is (1-p)(xg-g)-pg.

3.        For the expected value of one week’s running of the LHC to be positive we require (1-p)(xg-g)-pg > 0, i.e. x > 1/(1-p).

4.        Suppose p is one billionth, then x > 1.000000001….

5.        So one week’s running of the LHC must increase total goodness by more than one billionth for the expected value to be positive.

6.        But one week’s running of the LHC won’t increase total goodness by anything like one billionth.

7.        Therefore the LHC should not be turned on.
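
The arithmetic of steps 2–5 can be checked directly. A minimal sketch (the names g, x and p follow the argument; the numbers are the post’s illustrative ones, not physical estimates):

```python
def expected_value(g, x, p):
    """Expected change in total goodness from one week's running:
    (1 - p) * (x*g - g) - p*g, per the argument's model."""
    return (1 - p) * (x * g - g) - p * g

def break_even_factor(p):
    """Smallest growth factor x at which the expected value turns
    positive: x > 1 / (1 - p)."""
    return 1 / (1 - p)

p = 1e-9                     # one-in-a-billion weekly chance of destruction
x_min = break_even_factor(p)
print(x_min)                 # just over 1: goodness must grow by ~ a billionth
```

So with p at one billionth, any x even slightly above 1.000000001 makes the expectation positive, which is why premiss 6 carries the whole weight of the argument.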

2nd Argument

8.        Suppose that a sufficient condition for it to be permissible to do something which might bring on the destruction of all present and future goodness is that the expected value of doing it is positive.

9.        Let g be the total goodness without doing that thing, x the factor by which doing it increases goodness if it doesn’t bring total destruction, and p the chance of total destruction. Then, as before, the expected value is positive only if x > 1/(1-p).

10.    In that case it would be permissible to do something that risked total destruction with a chance of 50%, provided it offered to increase total goodness by a factor of more than two.

11.    But not even doubling goodness justifies the risk of destroying all goodness.

12.    Therefore positive expected value is not sufficient to risk the destruction of all present and future goodness.
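
Step 10’s figure follows from the same break-even formula; a one-line check (the 50% chance and the factor of two are the argument’s own numbers):

```python
def break_even_factor(p):
    # The expected value (1 - p)*(x*g - g) - p*g is positive
    # exactly when x > 1 / (1 - p).
    return 1 / (1 - p)

# A 50% chance of total destruction demands more than a doubling of goodness:
print(break_even_factor(0.5))  # 2.0
```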

3rd Argument

13.    Avoidable risks of destruction of all present and future goodness should not be taken.

14.    Turning on LHC is an avoidable risk of destruction of all present and future goodness.

15.    Therefore it should not be turned on.


Comments

  1. Point 11 in the second argument seems to contradict the initial point 8. It almost sounds like an axiom, which is the role it has as point 13.

    But any action has a low background probability of causing the end of all goodness (e.g. by triggering quantum tunnelling that drops the Earth into the sun, a thermodynamic miracle freezing the atmosphere, the appearance of a deadly virus, or something equally unlikely but physically possible). Even subtracting out the background probability that is not due to our actions per se, it is clear that some residual dependent on our actions would remain. Certain actions would make disaster microscopically more likely than others. We can even predict to some extent which actions are more likely to be dangerous, such as actions involving large effects across the world and actions that have never before been tried. Hence accepting the line of reasoning in the second and third arguments means that we must not undertake any large actions (like trying to cure a disease) or do anything new, no matter what the benefits are, since they are deliberate, avoidable actions that could bring about the end of all good.

    I think the first argument is the most viable, but would dispute point 6 (as I argued in my post, CERN may actually have done that much good). There is an interesting can of worms there if the goodness is not evenly distributed or has a subjective component, though.

  2. Yes, the second argument could be much compressed: the point is to lay out how something that might be initially appealing, the supposition of 8, implies something that is contradicted by 11, which premiss I think most people would find intuitively compelling.

    The important points in the major premiss of the third argument are that it refers to avoidable risks and to the destruction of all present and future goodness. Several of the examples you raise are not avoidable; indeed, they are not dependent on our actions in any way. Other examples do not hazard the destruction of all present and future goodness (viruses might kill off humans but not all life, so rational agents could evolve again). I think you’re right, however, that it is stronger than it needs to be, or than is true. I don’t want to rule out all experimentation. I think perhaps it should be:

    13. A known, avoidable hazard of the destruction of all present and future goodness, of which we are uncertain (we do not know the probabilities at all), should not be taken.

    We know the LHC poses such a risk, since we know we don’t know the true physical theory (as Toby pointed out in his post ‘These are not the probabilities you are looking for’) and on some contending theories there is an objective chance of total destruction. So it is a known hazard of which we are uncertain.

    Can you really believe that the LHC will increase all present and future goodness by more than one billionth per week of its running? It seems incredible to me. You can’t credit it with the increases in goodness to which it contributes a part, but only its actual marginal increase per week. Furthermore, if the probabilities Toby mentions were right then we would be requiring an increase in total goodness of 2 millionths (per week? month?).

  3. I think the updated point 13 works much better, but the concept of “known hazard” becomes very tricky under these circumstances. Clearly that involves some kind of probability measure over alternative theories, but how do we evaluate claims about these probabilities?

    If I warn that (say) running the ITER fusion experiment risks world-ending disasters due to unknown physics, should that be taken seriously? Most likely not, since I have not provided any information. Should I be taken seriously if I provide an example of an alternative physics (normally indistinguishable from ordinary physics) where disasters could happen? Perhaps, but clearly the form of this theory matters a lot: something that looks like a proper physical theory would likely cause more concern than quantum phlogiston theory or divine wrath – yet if these are all (so far) empirically indistinguishable and equally complex there seems to be little but prejudice to choose some as more worthy of concern than others.

    I blogged a bit about this at my own blog,
    http://www.aleph.se/andart/archives/2008/04/more_lhc_philosophy_this_time_with_demons.html
    where I came to the conclusion that we might have a cognitive bias forcing us to assign probability to any remotely plausible new outcome as it is mentioned in situations of radical uncertainty. This means that if I spend an evening brainstorming possible disasters in possible theories I could increase the amount of “known hazard” – without even knowing anything!

    It seems that we need to ground the “known hazard” more. In the case of hazards occurring in theories that differ from our expectations only in the values of some (uncertain) parameters it seems rational to call it a known hazard. But how to put a measure of credence on the full space of possible theories?

  4. At what point does magnifying total goodness outweigh the cost of potential destruction of all goodness? Point 11 says doubling goodness isn’t worth it, which is a subjective threshold. Would tripling, quadrupling, etc. be worth it? Or is there no factor that would outweigh this potential cost? Businesses are known for making decisions that have the potential of increasing their market share or revenue but may put them in bankruptcy if the scheme fails; should they not undertake projects that have the potential to destroy the company (or should I avoid comparing economics to humanity?).

  5. The prevailing view is that the risks are small, perhaps 1 in 50,000,000 as was estimated for the RHIC collider, but this is very strongly disputed as unsupportable with respect to the LHC.

    The lawsuit before US Federal Court estimates the risk at closer to 50%, with a very high degree of uncertainty. (Credible nuclear physicists assert that those risk probability calculations are as accurate as can currently be supported. Actually the risk may be closer to 0% or closer to 100%, depending on which assumptions prove correct.)

    I am a former US Army Officer, and I am willing to accept a fairly high level of risk; I would not oppose 1 in 50 million odds. But I also have a non-trivial physics background, and I have done extensive research related to LHC safety risks, including creating the web site LHCFacts.org. And I am very concerned, because I conclude that the risks may be exceedingly high, potentially probable, and I conclude that CERN has NOT been open and honest about the risks involved. And I believe we can prove this in court.

    In short, these are the conditions on which safety, or its absence, turns:

    If the following reasonable and plausible assumptions prove to be correct, then the uncomfortable truth may be that the probability of destruction of Earth could be closer to 100%, far from the often quoted 1 in 50 million, though only mother nature currently knows for certain due to our limited understanding of the physics involved.

    A. The LHC creates black holes, as CERN predicted (1 per second).
    B. Micro black holes do not evaporate, as the LSAG accepts is plausible.
    C. One or more micro black holes are captured by Earth’s gravity, as the LSAG accepts is plausible.
    D. Micro black holes grow exponentially, as Dr. Otto E. Rossler’s paper predicts and calculates.

    Supporting references to PhDs and professors of math, physics and other theoretical sciences who very strongly support these arguments in favor of possible extreme risk are available at http://www.lhcfacts.org/?cat=53

  6. Hello, my initial comment is this.
    It puzzles me that interested observers readily accept the physicists’ view of black holes, and accept the possibility (however small) that miniature black holes could one day be produced in high-energy experiments – yet at the same time, the same observers refuse flatly to accept calculations from the same scientists showing that any black holes produced could only be of a certain type, namely evaporating black holes, i.e. a type of BH that could never grow to become a danger.

    I’m no BH expert, but I’m perfectly happy to take the calculations of the experts on faith… I think it’s important not to accept some scientific calculations, and reject others, on a whim… Cormac

  7. Anders: I don’t think there is a special difficulty about what it is for something to be a known hazard, by which I mean, I don’t think there is a special difficulty in knowing of a hazard as opposed to knowing anything else. I reject your attempt to characterise it in terms of credences over all possibilities. A known hazard is a hazard which cannot be ruled out. Two things are clear from the disagreements among relevantly informed physicists:

    1. For some bizarre reason they seem to think that they have the right to decide whether to switch the LHC on. But if the LHC might destroy all present and future goodness they do not have that right and they ought to know better than to think they do.

    2. That they disagree as they do means the possibility that the LHC might destroy all present and future goodness is not ruled out and is therefore a known hazard. The nature of the disagreements also mean that we are uncertain of this hazard. Consequently my updated premiss 13 is true and so the third argument is sound.

    Matthew: Whether there is any factor by which increasing total goodness justifies risking its total destruction is a question for a different occasion. All that matters for the second argument is that not even doubling total goodness would justify the risk and it is true that turning on the LHC won’t double total goodness.

    James and Professor R: I am deliberately not taking any position about the physics, nor do my arguments depend on picking and choosing. They are based on stepping back from the physicists’ disagreement about facts and theories and pointing out that the reason that the LHC should not be turned on does not depend on taking sides in their disagreement.

    The upshot of my three arguments is that:

    (a) even if the chance of the destruction of total goodness is one billionth, turning on the LHC does not satisfy a necessary condition on taking the risk. Some physicists say the chance of destruction is 1 in 50 million; James says he’s not opposed to taking that chance. My first argument makes it clear that he should be.

    (b) that despite knowledge being very valuable, even if the probabilities are such that the expectation of turning on the LHC is positive, that is insufficient to justify turning it on.

    (c) it shouldn’t be turned on if it is a known hazard of which we are uncertain. I see that on James’ website he has quotations from respectable physicists who do not think the destruction of total goodness is ruled out. There are others. I’ve already explained why I think the nature of the disagreement means we are uncertain. So it shouldn’t be turned on.

  8. When I step out the door, there’s a very small chance that the displacement of air particles may, some day down the line, via the butterfly effect, in some inexplicable and giant chain of causality, one day cause a mad dictator to initiate global nuclear war.

    Walking out the door satisfies the conditions of premise 14 – i.e., it is an avoidable action. Assuming destruction of the human race is roughly equivalent to “destruction of all present and future goodness” as evaluated under a human utility function, should I never leave my house?

  9. I’ve seen several similar arguments already, and there are several posts above that give the usual rebuttals, but I have yet to see a rebuttal to those rebuttals, so I’m tempted to insist:

    A) The generic argument seems to be:
    i. The future goodness is very large (some seem to imply infinite, though the argument in this post doesn’t);
    ii. LHC has a very small chance of eliminating all that future goodness;
    iii. If not (ii), LHC will improve future goodness by a small amount;
    iv. The unknown quantities in (ii) and (iii) can’t give an expected future goodness larger than not running the LHC, so:
    v. We should not run the LHC.

    B) There are two related rebuttals:
    i. The same argument above can be applied to every action, which means we shouldn’t do pretty much anything.
    ii. It is not reasonable to use the key assumption (A.iv): the relative magnitudes of the two factors are very hard to estimate, and different reasonable assumptions can give estimates that vary by tens of orders of magnitude.

    In fact, it’s B.ii that causes B.i. Example:
    – One can reasonably argue that humanity might last forever if the LHC isn’t run. This implies infinite future goodness in that case. (Note that we use similar theories to estimate the future lifetime of the universe and the eschatological potential of the LHC, so if we’re not sure about the latter’s safety we’re not sure of the former’s finiteness, either.) If “infinite” is objectionable, replace with “unbounded”, since a reasonable argument can be given for any lower bound on future goodness.
    – One can just as reasonably argue that even if the LHC isn’t run, humanity will disappear in a few centuries or even decades. In that case, expected future goodness is relatively small.

    The ratio between the two extreme reasonable arguments above is infinite (or at least unbounded), and we have no real information to attribute relative probability to them, nor to the myriad intermediate cases.

    On the other side, one can reasonably argue that the improvement of future goodness if LHC is run can be unbounded. The post hides this by arguing in terms of expected benefit per week, which seems infinitesimal. But it is reasonable to imagine that any new technology (e.g., the LHC) can teach us to avoid an (unknown, unanticipated) future eschatological event. Given that the eschatological event can reasonably be assumed to be very close and the benefit if it’s avoided very large (the exact same arguments as above), then the possible future benefit of the LHC can be unbounded.

    My opinion is that any generic argument that doesn’t really justify its likeliness assumptions is invalid for these reasons. Of course, once one tries to justify the assumptions one needs a physical theory. As soon as someone else has a different theory, the temptation arises to return to the generic argument above, trying to deal with the unknown probability of either theory being correct. But this fails.

    A reply above shows that this analysis has the same effects even for trivial actions like opening the door. If that seems exaggerated, try to apply the same argument to some past innovations. For instance, some preliminary calculations suggested an atomic bomb might set the atmosphere on fire, thus destroying humankind. The calculations were proven wrong, but nobody could be absolutely sure of that until actually trying. (And in fact some explosions were larger than expected; by less than an order of magnitude, but still unanticipated.)

    However, one future eschatological event is an asteroid hitting the Earth. There are plausible such events that can’t be deflected except using technology acquired as a result of nuclear developments. This wasn’t anticipated at that point. Note that the LHC could also lead to such unbounded-benefit technologies.

    Similar arguments could also have been brought about innovations as widespread as electricity (global thunderstorms), cities and nations (global wars), coal-based steam engines (global warming) or computers (machines taking over). Note that all events in parentheses were reasonable low-probability outcomes either given knowledge at that point, or due to factors impossible to anticipate at the time (electrodynamics, WMDs, feedback loops, still unknown).

    I have yet to see a good argument why those developments shouldn’t have been attempted, or why this one is different. (Or, for that matter, why this argument isn’t applied to past particle accelerators; our knowledge of black holes and strange matter was much smaller in the past. Should we have made _no_ collision experiments with subatomic particles?)

    Note that I’m not arguing for or against running the LHC. This is just a critique of the above line of reasoning.

  10. None of these arguments hold, as they all disregard the possibility that NOT turning on the LHC is more likely to lead to the destruction of all future goodness than turning it on.

    By my estimate, the extreme low bound of the likelihood of knowledge from the LHC preventing the annihilation of the human race is at least 100 billion times larger than the extreme high bound of the likelihood of the LHC causing the annihilation of the human race.
