
Extinction Risks and Particle Physics: When Are They Worth It?

The Large Hadron Collider (LHC) is the world's biggest particle accelerator, due to start investigating the structure of matter later this year. Now a lawsuit has been filed in the US calling on the U.S. Department of Energy, Fermilab, the National Science Foundation and CERN to halt preparations for starting the LHC until the safety of the collider has been reassessed. The reason is fear that the high-energy collisions could cause some devastating effect threatening the Earth: the formation of miniature black holes, strangelets that absorb matter to make more strangelets, or even a decay of the vacuum state of the universe. Needless to say, physicists are very certain there are no risks. But how certain should we be about safety when there could be a risk to the survival of the human species?

The main reason physicists are not worried is that all of the disaster scenarios involve very speculative physics. Current theories do not seem to predict any danger, and some disaster cases would require particles that have never been observed despite extensive searches. But this requires our understanding to be accurate, something the experiment itself is about to test. Perhaps the most convincing argument that we are safe is this: if particle collisions could collapse planets, why is the moon (or any other heavenly body) still around after billions of years of cosmic-ray bombardment that often involves energies far larger than anything the LHC could ever produce? The solar system ought to be littered with strange matter and black holes if a measly 14 TeV could cause danger.

However, there is a problem with the moon argument. If the moon or any other body in the solar system imploded, it would likely release enough hard radiation to wipe out life on Earth. Our own existence therefore depends on no disaster having occurred nearby for the past four billion years. In a universe where disasters happen fairly frequently, a few worlds may be extremely lucky and produce observers who erroneously conclude that the world is safe – we could be one of them.

Fortunately, Nick Bostrom and Max Tegmark found a way of estimating the risk that does not suffer from this kind of bias. The trick is to consider when the lucky observers appear in the history of the universe. In a risky universe it is very unlikely for an observer to emerge late rather than early, since their planet would likely have been devastated by then. In a safe universe there would be no such bias. Plugging in estimates of when a typical planet forms and how old the Earth is, they could show that the rate of sterilization of planets, for any reason, is less than one in a billion per year. Good news, and it doesn’t even require assuming a particular kind of disaster.
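To see why naive "we're still here" reasoning is not enough, here is a small illustrative sketch (a toy model of my own, not Tegmark and Bostrom's actual calculation), assuming sterilization events strike planets as a Poisson process with rate r per year:

```python
import math

# Toy model: planet sterilizations as a Poisson process with rate r per year.
# The probability that a given planet survives t years unscathed is exp(-r*t).

t_survived = 4.5e9  # years the Earth (and Moon) have survived so far

for r in (1e-9, 1e-10, 1e-11):
    p_survive = math.exp(-r * t_survived)
    print(f"rate {r:.0e}/yr -> chance of surviving {t_survived:.1e} yr: {p_survive:.2g}")

# Even at r = 1e-9 per year the survival chance is about exp(-4.5), roughly 1%,
# so a handful of lucky worlds with observers is entirely possible: this is the
# selection effect described above. Tegmark and Bostrom instead use the timing
# of our emergence (late versus early observers) to bound r without that bias.
```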

However, this might not be enough to calm some people. A small but finite chance of global destruction seems to be a risk we should not take if we can easily avoid it. Perhaps surprisingly, this may not be true if there are benefits linked to taking the risk. Imagine that there is a one in a billion chance that turning on the LHC destroys mankind, and otherwise makes every human life on average X times better. When is the expected benefit of taking the risk larger than the value of not taking the risk? It turns out X only needs to be 1.00000000100000008 – a minuscule amount better.
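The arithmetic behind that figure is a simple expected-value comparison; here is a minimal sketch (the odd trailing digits above are just the value 1/(1-p) seen through double-precision floating point):

```python
# Threshold improvement factor X for accepting an extinction risk p.
# Running the experiment: with probability p everything is lost (value 0),
# otherwise all N lives become X times better (total value N*X).
# Not running: value N. Running is worthwhile when (1-p)*N*X > N,
# i.e. when X > 1/(1-p); note that N cancels out.

p = 1e-9                    # the assumed one-in-a-billion chance of disaster
threshold = 1.0 / (1.0 - p)

print(f"{threshold:.17f}")  # about 1.000000001; at full double precision this
                            # prints the 1.00000000100000008 quoted above
```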

Does particle physics produce improvements in human well-being, beyond the delight of physicists? There have been many spin-offs in nuclear physics, electronics, materials science and medical imaging. Perhaps the biggest was the World Wide Web itself, originally developed at CERN for information management. Together these have probably, directly and indirectly, contributed more than one part in a billion to the enjoyment of life for a large part of humanity over the past century.

The calculation is not changed if we count future human lives rather than just present ones. The only way to escape it is to argue that human survival has a special value that is either very large compared to the total value of all humans, or of a different kind that cannot even be compared to it. But if human survival has that kind of great value, it would tend to overshadow any concern for individual humans.

Further Reading

Nick Bostrom, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, Journal of Evolution and Technology, Vol. 9, March 2002

W. Busza, R. Jaffe, J. Sandweiss & F. Wilczek, Review of speculative ‘disaster scenarios’ at RHIC, Rev. Mod. Phys. 72:1125–1140 (2000)

CERN, Study of Potentially Dangerous Events During Heavy-Ion Collisions at the LHC: Report of the LHC Safety Study Group, 2003

M. Tegmark & N. Bostrom, How Unlikely Is a Doomsday Catastrophe?, Nature 438, 754 (2005), astro-ph/0512204

Large Hadron Collider legal defense fund

Thanks to Toby Ord for the original version of the risk/benefit argument.


Comments

  1. The “one in a billion” calculation assumes black holes/strangelets created by cosmic rays would be as dangerous as black holes/strangelets created in colliders; some people dispute this (see http://www.risk-evaluation-forum.org/prob.htm, point 3).

    Also:
    “Imagine that there is a one in a billion chance that turning on the LHC destroys mankind, and otherwise makes every human life on average X times better. When is the expected benefit of taking the risk larger than the value of not taking the risk? It turns out X only needs to be 1.00000000100000008 – a minuscule amount better.”
    The vast majority of people probably live in the far future, making this argument somewhat weaker than it looks.

  2. Even when a fast particle hits a particle at rest it can produce a slow decay product (especially when the product is heavy); it is just much less likely. The large flux of cosmic rays over a long time ought to have produced enough such cases to show up. Also, scattered fast-moving dangerous particles would seem to be a very noticeable phenomenon. But the key concern remains: could there be some property of colliders that is not found in nature?

    Even if we take into account the value of future people, the argument is the same. In the case of just the present N people, the expected value of doing the experiment is P*0 + (1-P)*N*X (where P is the chance of disaster). When this expression is larger than N (the value if we do not do anything), the experiment is worthwhile. This happens when X > 1/(1-P). But notice that N factors out: if we instead use all people there will ever be for N, the result is the same. If the benefit X only accrues to the present generation, then concern for future generations may swamp the case for taking the risk. But it seems unlikely that the benefits are only in the present; in fact, it seems likely that the biggest benefits will come further in the future rather than today.

  3. “Even when a fast particle hits a particle at rest it can produce a slow decay product (especially when the product is heavy); it is just much less likely. The large flux of cosmic rays over long time ought to have produced enough such cases to show up.”

    Has anyone done the math on this, as far as you know?

    “But it seems unlikely that the benefits are only in the present; in fact, it seems likely that the biggest benefits will be further in the future rather than today.”

    If the dilemma is run LHC versus not run LHC, I agree the argument is at least as strong. But if the dilemma is run LHC now versus run LHC after thinking more about the risks, I have a much harder time seeing how this could affect far-future people.

  4. I think the math has been done, although I’m not terribly up to date on the literature. A quick back-of-the-envelope estimation:

    Calculating the risk from beam collisions is reasonably easy. Let’s call the mass of a dangerous remnant M. In order to be dangerous it has to move slower than Earth’s escape velocity v = 11.2 km/s, so its energy has to be between Emin = Mc^2 and Emax = sqrt(M^2 c^4 + M^2 v^2 c^2/(1+v^2/c^2)) = Mc sqrt(c^2 + v^2/(1+v^2/c^2)), giving Emax - Emin = 0.2092*Mc^2.

    When two particles collide with total energy E and produce N offspring particles, all the different partitions of the energy between the N particles are going to be equally probable (from the equipartition principle in statistical mechanics). The sum of their energies has to be E, so one can view the individual energies as coordinate axes in an N-dimensional coordinate system, with the state located on the (N-1)-dimensional standard simplex sum_i E_i = E. The dangerous configurations are those where one of the coordinates is between Emin/E and Emax/E. The fraction of states where this occurs can be estimated as (Emax/E)^(N-1) – (Emin/E)^(N-1). (This leaves out momentum conservation, which becomes an issue when Emax/E starts to get close to 1.)

    E = 14 TeV (2.2428*10^-9 joule) for the LHC. For strangelets, M is assumed to be on the order of a medium atomic mass. Setting M to 1e-26 kg (like a lithium atom) makes Emax about 40% of E (it cannot be much higher, or there will not be enough energy). Putting in these numbers for the case N=2 gives a probability P=6.3e-19. For N=3, P=1.1e-27, and so on. More particles means a few more ways a particle can get into the dangerous range, but the total space of mostly safe states grows much faster. The LHC will have about 8e7 collisions per second, so we should expect a “dangerous” interaction for N=2 every thousand years of experiment or so (and this assumes every collision produces such particles!).

    Now, in the stationary target case things get a bit messier. Assuming that the two colliding particles have equal mass and one moves ~c, we can work in a centre of mass coordinate system that moves at speed c/2. The total momentum is zero, and we are interested in how likely it is that reaction produces a particle of mass M moving at speed c/2 plus or minus 11.2 km/s in this coordinate system. Emax = sqrt(M^2 c^4 + M^2 (c/2+v)^2 c^2/(1+(c/2+v)^2/c^2)), Emin = sqrt(M^2 c^4 + M^2 (c/2-v)^2 c^2/(1+(c/2-v)^2/c^2)). The difference is just 0.0005 Mc^2, but more importantly both Emax and Emin are now (for the strangelet mass above) bigger than E – it would require a lot of energy to both create the particle and give it the fast velocity (in the center-of-mass frame) that corresponds to being at rest in the lab.

    At least I think so, but the probability that I messed up a calculation is relatively high.

    These calculations do seem to reinforce the estimate that it is pretty unlikely that particles get the relatively precise velocity needed to remain on Earth even in beam collisions (good for our safety *), and that fixed targets are very inefficient at producing high-energy reactions (bad for the moon argument). Maybe we ought to look for small black holes and strangelets whizzing around at high speed?

    * This assumes that the cross section between the particle and the planet is not so large that it can gobble up enough mass while passing by to slow down and get trapped. A particle travelling at c/2 needs to absorb around 13,383 times its own mass before it gets below escape velocity (a rough momentum estimate is sketched below).
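    A minimal sketch of that last figure, assuming simple momentum conservation and ignoring the relativistic gamma factor at c/2:

    ```python
    # A remnant moving at c/2 sweeps up matter at rest. Non-relativistic
    # momentum conservation, M*(c/2) = (M + m)*v_final, implies it must grow
    # by roughly (c/2)/v_esc times its original mass before it moves slower
    # than Earth's escape velocity.

    c = 2.9979e8      # speed of light, m/s
    v_esc = 11.2e3    # Earth's escape velocity, m/s

    mass_ratio = (c / 2) / v_esc
    print(f"must absorb roughly {mass_ratio:,.0f} times its own mass")  # ~13,383
    ```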

  5. It is, I think, a bit misleading to present the estimate in terms of benefits to individual people, since our assessment of how likely it is that the experiment will have an average benefit of X to each person will vary dramatically depending on the number of human beings that we believe will ever exist. And because we tend to underestimate how many such people would live if humanity did not become extinct prematurely, presenting the estimate in this manner will tend to downplay the expected costs of running the experiment. When readers of this blog consider how plausible it is that starting the LHC will be expected to increase the welfare of people by that tiny amount, they are likely to think about the people that currently exist, and perhaps a few extra generations into the future. If, however, we assume that the human species will expand in space and time constrained only by the availability of energy resources, then the number of future individuals becomes astronomically large – think about how many individuals will exist if, not implausibly, we assume that a trillion lives will be lived at any given time during the next trillion years. It is not at all clear that running the experiment can be expected to benefit all these future people on average, even by a tiny amount. Such benefits may well be like ripples in a pond, diminishing in proportion to the distance from the origin. If the pond is large enough, it might be prudent not to throw the rock into it.

  6. Anders, I’m unhappy with how the expected value has been calculated. The value of the bad outcome is massively negative (the value of everything good that would have happened till the end of time), not 0. The value of the good outcome is not the entire goodness but only the marginal increase in goodness due to the experiment.

    Setting g to be the total goodness without the experiment, x the factor by which the experiment improves goodness, and p the chance of disaster, and assuming we take maximising expected value as determinative of what to do (which, of course, I don’t), we are looking for the condition on x for which

    (1-p)(gx-g)-pg>g

    which resolves to x> (2+2p)/(1-p)

    which for p = one billionth gives

    x = 2.000000004.

    i.e. the LHC would have to more than double the goodness in the world to be worth the risk (and furthermore, x tends to 2 as p tends to 0, so increasing safety doesn’t change the essence of this).

    Now I have done this pretty quickly, so perhaps I’ve slipped up, but if I haven’t, then, since the LHC won’t get remotely near doubling the goodness in the world, it seems clear that the LHC should not be switched on.

  7. I think Pablo’s argument is very sensitive to how lasting the good effects of the experiment are. If they decline over time (because they are local to our way of life, or because they are spin-offs that could be achieved otherwise), then the case against doing the experiment would seem to improve a lot. But if the gains are due to their being fundamental physics information, then they might not decline – Pythagoras’ theorem or the Michelson-Morley experiment do not seem to have declining effects. Maybe physics and other science experiments are extra privileged this way.

    Nick’s argument seems to have a problem: it requires any action, even an action with p=0 (i.e. one that does not threaten mankind), to double goodness to be worth the risk. That seems a pretty tall order to me.

  8. CERN’s web site states that we have not been destroyed by the effects of cosmic rays and that micro black holes will evaporate.

    However, cosmic rays travel too fast to be captured by Earth’s gravity, and Hawking radiation is disputed and contradicts Einstein’s highly successful relativity theory. Collider particles smash head on like a car collision and can be captured by Earth’s gravity, and relativity predicts micro black holes will not decay (Hawking called Einstein doubly wrong, yet it is Einstein who is repeatedly found to have been correct in his theories). There is currently no reasonable proof of LHC safety; LSAG (the LHC Safety Assessment Group) has been trying for months to prove safety without success. I hold the minority opinion that this may not be possible, because it may in fact not be safe.

    On cosmic rays, from the legal complaint:

    any such novel particle created in nature by cosmic ray impacts would be left with a velocity at nearly the speed of light, relative to earth. At such speeds, . . . , is believed by most theorists to simply pass harmlessly through our planet with nary an impact, safely exiting on the other side. . . . Conversely, any such novel particle that might be created at the LHC would be at slow speed relative to earth, a goodly percentage would then be captured by earths gravity, and could possibly grow larger [accrete matter] with disastrous consequences of the earth turning into a large black hole.

    Professor Dr. Otto E. Roessler estimates an Earth accretion time of 50 months from a single micro black hole captured by Earth’s gravity (www.golem.de/0802/57477-4.html, translation at http://www.lhcconcerns.com/LHCConcerns/Forums/phpBB3/viewtopic.php?f=10&t=52).

    If this thing is so safe, why aren’t CERN scientists allowed to express any personal fears they might have about this collider?

    Alleged in the legal action: Chief Scientific Officer Mr. Engelen passed an internal memorandum to workers at CERN, asking them, regardless of personal opinion, to affirm in all interviews that there were no risks involved in the experiments, changing the previous assertion of minimal risk.

    (Statisticians generally consider minimal risk as 1-10%).

    JTankers
    LHCConcerns.com

  9. Anders: yes, it’s not right. I’ve double counted the total goodness by, in effect, insisting that the marginal benefit of the experiment outweigh the downside by the total goodness, which is a mistake. Now I get x > 1/(1-p) (an alternate way of setting it up gives x > (1+p)/(1-p)), which gives a figure of the kind Toby came up with: x > 1.000000001 (recurring) for p of one in a billion.

    OK, three arguments: 1. The LHC will not increase total goodness by one billionth, therefore it should not be turned on. 2. Under goodness maximisation it would be permissible to do something that risked destroying all goodness with a 50% chance, provided it offered to more than double total goodness. But doubling goodness does not justify the chance of destroying all goodness. Therefore maximising goodness is not the right criterion for deciding whether taking such a risk is right. 3. Avoidable risks of destroying all goodness should not be taken. Turning on the LHC is an avoidable risk of destroying all goodness. Therefore it should not be turned on.

  10. JTankers: I am curious about what your standards for reasonable proof of LHC safety would be. It is, after all, proving a negative, something no amount of LSAG work will ever achieve. What kind of physics or reasoning could it ever be based on?

    The burden of proof in risk estimation is always a fraught issue. Normally we do not require people doing apparently safe things to show that they are safe, while for apparently unsafe things they do have to. Maybe existential risks are sufficient to force a burden of proof onto the former too, but then we need a reason in the first place to think there is an existential risk. Since safety can never be proven, it would seem that the sceptic instead has to demonstrate a sufficiently serious risk. But in the case of physics disasters the real worry is always going to be about the least understood or unknown physics, not the known physics that could be used to derive solid risk estimates. Given the human tendency to regard uncertainty as threatening, this will also cause a bias to regard uncertain risks as more serious than certain risks.

    I’m also curious about Dr. Roessler’s calculation, since I have been doing accretion time calculations myself. To my knowledge it has not been published in any scientific paper.

    Nick: I think argument 1 is probably false, but there are fascinating issues here about the unequal distribution of the benefits in time and space, as well as subjective benefits. I think 2 is a pretty good argument, and both 2 and 3 would be in line with Nick Bostrom’s “maxipok” principle. The problem with never doing anything that has a chance of destroying all goodness is that there seems to be such a background chance in any action and non-action (our actions plus quantum tunnelling can always produce planet-eating dragons). Saying that it is wrong to take actions that have a higher-than-default chance of ending all goodness would, as far as I can see, stop us from doing anything new or large-scale.

    The peculiar thing about physics disasters is that the actions necessary to learn what the risks are themselves contribute to the risk. Since the uncertainty is model uncertainty rather than parameter uncertainty, we cannot rely on deduction to reduce it. Is the only way forward sideways anthropic arguments like Nick’s and Max’s to bound the risks?

  11. Three arguments against turning the Large Hadron Collider on

    The physicists’ responses to worries about the risks posed by the LHC make it unclear whether they understand the moral issue. They may have the power, but they do not have the liberty to hazard the destruction of all present …

  12. I would like to echo Steven’s comments about delaying the experiments. It is clear that the CERN team either haven’t addressed many of the important issues, or haven’t been transparent about them. Either way, the experiments should not go ahead at present. We can gain almost all the same benefits at reduced risk if we take a decade or five to seriously think about the issues, collect more data on the intersection of relativity and quantum mechanics, and put together a safety report worthy of the stakes.

    A serious reconsideration of the safety of the LHC, followed by the appropriate action, is better than either a blanket ban or proceeding with our current level of ignorance.

  13. Gosh, it’s just a big atom smasher, and the chance of something dangerous happening is far too small for it to actually happen 😉 If there were a real risk, scientists would inform us, or take measures against it, or never have thought of turning this idea into reality at all. So stop worrying, listen to common sense and do not let this rumor, spread by fools, take over your mind.
    http://www.votetheday.com/polls/worlds-largest-particle-accelerator-experiment-214/
