
What is an acceptable risk to destroy the earth ?

  1. Sep 5, 2008 #1

    vanesch (Staff Emeritus, Science Advisor, Gold Member)

    This is a spin-off thread from the thread "A Black Hole in the LHC":
    https://www.physicsforums.com/showthread.php?p=1860755#post1860755

    The scientific debate (in as much as it is scientific :uhh: ) takes place in that thread: the remote possibility that micro black holes even exist, that they could be produced at the relatively low energies of the LHC (only 7 times higher than existing accelerators), that they would not evaporate quickly, as should be the case with Hawking radiation, that they would be in such kinematic conditions that they'd be captured in the Earth's gravitational field, and that they would nevertheless interact strongly enough with matter to slowly eat up the Earth from within, on a time scale shorter than the remaining lifetime of the Earth in the solar system, etc. However, in that thread, the issue was raised:

    "What's an Acceptable Risk for Destroying the Earth?"

    My provocative answer was the following:
    This is of course more an ethical and philosophical debate than a scientific one. So here we go :smile:
     
    Last edited: Sep 5, 2008
  3. Sep 5, 2008 #2
    I assume, for this discussion to be relevant, that the probability that a human being is willing to destroy the Earth, and has the means to do so, is less than 1 in 6 billion.
     
  4. Sep 5, 2008 #3
    We are clearly already destroying the Earth. How many more decades until all wildlife and natural vegetation is destroyed? And then some even dare to call that progress.
     
  5. Sep 5, 2008 #4
    You are free to buy your island and live in a cave there.

    A somewhat related question is: when will we establish laws concerning planetary engineering? Until then, we will always have the possibility of waking up to discover that somebody somewhere has decided to change the composition of the upper atmosphere to reduce global warming.
     
  6. Sep 5, 2008 #5

    Ivan Seeking (Staff Emeritus, Science Advisor, Gold Member)

    um, er, 42?
     
  7. Sep 5, 2008 #6

    Ivan Seeking (Staff Emeritus, Science Advisor, Gold Member)

    It would appear that we get about 9 billion years out of the planet, so 1/9 billion years sounds reasonable.
     
  8. Sep 5, 2008 #7
    But given that virtually every single human on Earth will die within 100 years regardless, drivers or not, you could use 1/100 as an upper limit instead. I'm saying that there is no justification for using some arbitrary annual mortality figure. The scenarios are entirely different: steady culling vs. annihilation.
     
  9. Sep 5, 2008 #8

    lisab (Staff Emeritus, Science Advisor, Gold Member)

    Hmmm...interesting idea. Never thought I would post in philosophy...!

    If a person finds the risk of being killed in a car accident too high, he can always opt out of driving, drastically reducing his risk. Sure, it's inconvenient, but the risk management is in the hands of the individual.

    Not so with the "doomsday" scenario. The average citizen of Earth has no power over this decision. Perhaps this (and perhaps a general distrust of science) is the heart of the public's concern.
     
  10. Sep 5, 2008 #9

    Q_Goest (Science Advisor, Homework Helper, Gold Member)

    Clappin'. :)
     
  11. Sep 5, 2008 #10

    Q_Goest (Science Advisor, Homework Helper, Gold Member)

    Vanesch said:
    In opposition, out of whack said:
    I agree with whack. Why only consider human life living today? Why not add to that the loss of animals? Of vegetation? Add to that the total loss of all living things. Why only today's human inhabitants?

    That said, I suspect the calculated chance of producing the doomsday black hole is likely non-existent. Still, the question has application to other human actions, as MeJennifer points out. We live at a time during which the rate of extinction is higher than at any time over the entire history of the Earth.
     
  12. Sep 5, 2008 #11

    Evo (Staff: Mentor)

    There have been events where 95% of the Earth's species were wiped out (the Permian extinction).

    New species are being found every day.

    http://archive.wri.org/item_detail.cfm?id=535&section=pubs&page=pubs_content_text&z=?

    Take a gander at all of the different articles discussing new species found.

    http://www.google.com/search?hl=en&rlz=1T4HPIA_en___US243&q=new+species+found+how+many?

    Previous major extinctions on earth

    http://www.space.com/scienceastronomy/planetearth/extinction_sidebar_000907.html

    The earth has shown an amazing ability to recover. Humans wouldn't be here now if some of those horrendous extinctions hadn't happened, and life on earth will most likely flourish after humans disappear.
     
    Last edited by a moderator: May 3, 2017
  13. Sep 5, 2008 #12

    Ivan Seeking (Staff Emeritus, Science Advisor, Gold Member)

    Who cares what happens to earth if humans all die, and why?

    So, who doesn't care if their kids or grandkids are deprived of life? And who is going to take my head out of the freezer and provide a body for me, and revive me, if everyone dies?

    Seriously though, we are ignoring the promise of life extension of up to 400 years by some claims. I don't think I'll see it, but I think there are people alive today who might see another hundred years or two added to the potential human lifespan.
     
  14. Sep 5, 2008 #13

    vanesch (Staff Emeritus, Science Advisor, Gold Member)

    Well, this is why I brought this question here. Is there any intrinsic value to "life on earth" without humans - given that it is humans who are "judging the value"? It's not such an innocent question as you might think. Imagine that you give a certain value to what I call "Gaia" (the biosphere of earth). That means that at a certain point, if humanity threatens Gaia too much, you should make a choice, and eventually choose to annihilate humanity for the sake of Gaia's survival. It is not unthinkable that humans develop enough technology to be able to live on a totally lifeless planet, for instance (except for human-controlled biological processes which produce food and so on). If this threatens to happen, and if you give an intrinsic value to Gaia, then you might have to decide to kill off humanity for the sake of Gaia. That's what it means to give an intrinsic value to "life on earth". It's the kind of value that leads to "I'd rather burn my country than rule over a land of heathens!"

    The other question: why only consider human life today? Well, clearly, we no longer have to consider past life, do we? And we can ask whether it makes any sense to talk about future life that never will be. Did I kill the daughter I never had? Am I a murderer of unborn, unconceived children? Worse, am I a murderer of the great-grandchildren my non-existent daughter never had?

    So if humanity is killed *today*, future generations will never have existed, so we didn't destroy them, did we?

    Does the "perpetuation of humanity" have any value - apart from being a form of motivation for some to spend their lives doing "important things"?

    Sure, the LHC-will-make-a-black-hole stuff is kind of ridiculous, but it brought up the question.

    The question is of course not "should a random fool be allowed to blow up the earth just for the fun of it", but rather: can we do a risk-benefit analysis for the Earth as a whole in the same way as we do for ourselves, every day, when we take small risks (which could kill us, but with low probability) in order to increase the joy of our lives - like participating in traffic, just because our lives are much better when we can travel (to the grocery shop, to our workplace, to home, ...)? Clearly my personal answer is yes, but I thought it might make for an interesting discussion.
     
    Last edited: Sep 5, 2008
  15. Sep 5, 2008 #14

    vanesch (Staff Emeritus, Science Advisor, Gold Member)

    :rofl:

    This is funny: Ivan Seeking found a very similar number:
    (I thought we were done here in 5 billion years, when the sun becomes a red giant, but OK... same ballpark).
     
  16. Sep 5, 2008 #15

    vanesch (Staff Emeritus, Science Advisor, Gold Member)

    The point is that 1.2 million dead per year from driving is an *acceptable* risk, given that we accept it. If we didn't, we'd stop driving worldwide. It is a *choice*. We know (we in the sense of the large majority of people on this planet) that if we allow people to drive cars, trucks, etc., we will kill 1.2 million people this year, and next year again, and the year after that. It is a very stable number. Nevertheless, there are very few places on earth where people have decided collectively to ban driving altogether. Not because they are idiots, but simply because people did a cost-benefit analysis and came to the conclusion that for the benefit it brings us, the risk is acceptable. We could, collectively and socially, eliminate that risk almost immediately by prohibiting car driving worldwide. We don't. By far we don't. So we accept it.
    Moreover, it is a very good measure, because we do so collectively, each of us. It is not a few powerful lunatics in their armchairs who decided to take risks with other people's lives. We do it all ourselves. So it is a very good measure of what we collectively accept as a risk.

    Not so with dying at 100 years. We don't CHOOSE to die. It is not an "accepted" risk. We simply undergo it.
     
    Last edited: Sep 5, 2008
  17. Sep 5, 2008 #16

    Ivan Seeking (Staff Emeritus, Science Advisor, Gold Member)

    If we are going to consider value, then we have to ask: Value to whom? Since the answer is clearly "us", as we can't speak for other species, or for Gaia, the question itself suggests that our perception of value is all that is required in order for something to have value. Therefore, the "value" of anything depends entirely on the preservation of humans. If we live, anything has value that we say has value. If we all die, nothing has value.

    If you mean implicit value, then you will have to ask God. :biggrin:

    We know only the risk and not the benefit. If we are going to gamble with humanity, it seems reasonable to have a specific example. If we are talking about the LHC, it is a bit difficult to justify risking the planet were the risk any more than vanishingly small, like 1 in 9 billion years.
     
    Last edited: Sep 5, 2008
  18. Sep 5, 2008 #17

    Ivan Seeking (Staff Emeritus, Science Advisor, Gold Member)

    The earth has already been around for 4 billion years.
     
  19. Sep 6, 2008 #18
    If humans die out, then isn't it conceivable that other beings might evolve to a state where they can have values?

    How can you talk about "values to us"? Surely you can only talk about your own values? Would you count Hitler as "one of us"?

    If we all die doesn't a rational existence like ET or Deep Thought have value? (Assuming they actually exist, or can exist!) What about Dolphins? Don't they have value? I kinda like the idea of them still existing if humans top themselves.

    On the LHC, I've seen some figures which suggest the risk is less than 1:9 billion years. Of course these can only be *theoretical*. Martin Rees:

    "It is not inconceivable that physics could be dangerous too. Some experiments are designed to generate conditions more extreme than ever occur naturally. Nobody then knows exactly what will happen. Indeed, there would be no point in doing any experiments if their outcomes could be fully predicted in advance. Some theorists have conjectured that certain types of experiment could conceivably unleash a runaway process that destroyed not just us but Earth itself." -- From "Our final hour".

    But Rees was on Radio 4 news this morning supporting the LHC. His desire to find out what dark matter is seems to have outweighed the risk (for him). Also, according to Dawkins, Rees is a churchgoer, so he's got somewhere else to go...

    By the way, listen out for Radio 4's "Big Bang" day on 10 Sep:

    http://www.bbc.co.uk/radio4/bigbang/
     
  20. Sep 6, 2008 #19

    Q_Goest (Science Advisor, Homework Helper, Gold Member)

    Hi vanesch,
    It's a valid question. Companies like mine do risk analyses every day to minimize how many deaths any given engineered system might incur. Implicit in that analysis is that the system being designed and created provides a benefit to people. There's a point I once heard made about a wonderful new energy source: if it is used to provide energy for families in the US, there will be some large number (thousands or more) of deaths and many tens of millions to thousands of millions of dollars in property damage every year - so should we use this new energy source? It turns out the energy source is natural gas, and we've already accepted the risk. The point here is that we can put a monetary value on life and property and compare that to the monetary value created in this capitalistic society, and if the monetary value created is >> the value put on life and property, there is justification for creating this value and accepting the risk to human life and property.

    However, that's not really applicable when we talk about wiping out species of plants and animals, including humans. We can't put (or it is extremely difficult to put) a monetary value on the Brazilian rain forest, or the breeding grounds of salmon, or the whole of the Earth. For that, we have to go beyond the logical application of monetary value (i.e., the logical comparison of monetary benefit). To put a value on such things, we have to resort to emotional arguments.

    There’ve been a few interesting posts about the nihilist view. Moridin posted an interesting article here.
    The paper referenced by Moridin attempts to “define a methodology for validating moral theories that is objective, consistent, clear, rational, empirical – and true.” In so doing, it provides axioms on page 38, including “Morality is a valid concept” and that moral rules must be consistent for all mankind. In creating these axioms, the author has based his argument not on logic, but on an emotional predisposition. Why should we logically assume morality is a valid concept? Why ‘mankind’? A much more advanced civilization than ours might see the human race as nothing more than ants in the sugar bowl needing extermination. Morality – not killing everyone on earth nor wiping out every living thing – is based on an emotional axiom, not a logical one. So saying there is X value in a rain forest or in all life on Earth is an emotional claim, not a logical one. You can’t boil down such arguments to anything logical; they always boil down to emotional axioms.

    What’s interesting is that we as humans all seem to share similar moral axioms, and in general, people seem to agree that there is value in the future. Wiping out rain forests and future generations is a bad thing and we should have value placed on the future ‘health’ of the planet. Again, there’s nothing strictly logical in this belief, but there seems to be a consistent agreement among humans that the future is worth more than x number of lives.
     
  21. Sep 6, 2008 #20
    I understand this, but I don't know why you picked an annual figure. Why not 137 per hour, or 0.04 per second instead? How long will the critical collision take in the accelerator - a millionth of a second? I have no idea. But that would give an acceptable risk of 4 x 10^-8 for the duration of that critical event, so that's what you could be using in your risk estimate, instead of some time figure that isn't really pertinent to the event in question - not 1.2 million per year. But this is only one way in which your approach to risk calculation is inadequate, in my view.
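For what it's worth, the rate conversions in this post can be checked directly. A quick sketch (the 1.2 million deaths per year figure comes from the thread; the rest is arithmetic):

```python
deaths_per_year = 1.2e6                    # worldwide road deaths, per the thread
hours_per_year = 365.25 * 24               # ~8766
seconds_per_year = hours_per_year * 3600   # ~3.156e7

per_hour = deaths_per_year / hours_per_year        # ~137 deaths per hour
per_second = deaths_per_year / seconds_per_year    # ~0.038 deaths per second
per_microsecond = per_second * 1e-6                # ~3.8e-8 per microsecond
```

This reproduces the poster's 137 per hour, roughly 0.04 per second, and roughly 4 x 10^-8 for a microsecond-long event.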

    A more important issue is the cost/benefit assessment. You can decide to place value strictly on the lives of currently-living Homo sapiens for the sake of discussion: 6 billion people with an average lifespan of, say, for argument's sake, 70 years. On average, we each have another 35 years to live, so our gamble involves 2.1e11 years of human life. Well, if the gamble is good, it will favor an expected increase in this number. If it is bad, it involves an expected decrease instead. So I ask, what is the expected increase from the accelerator? Well, it's not a medical device. Granted, the knowledge obtained could still serve nuclear medicine, for example, so let's say we can cure some cancers as a result for a million people currently alive, adding 35 million years to our pot. That's an increase of 2e-4 (as a ratio), not much on the benefit side. On the risk side, we could go from 2.1e11 years of human life to... what was that again? Oh yeah, zero. Hmm. My quick calculation tells me that given that pesky zero, the only acceptable degree of risk would have to be zero as well. So from a strict, cold calculation based on human life alone, the risk of annihilation has to be zero before trying this out.
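The person-year accounting in this post can be made explicit. A minimal sketch (all numbers are taken from the post; the breakeven formula is the one thing added here, and it assumes plain absolute person-years rather than the poster's infinite weighting of annihilation):

```python
population = 6e9
remaining_years = 35                    # half of an assumed 70-year lifespan
pot = population * remaining_years      # 2.1e11 person-years at stake

cured = 1e6                             # hypothetical cancer cures from spin-offs
benefit = cured * remaining_years       # 3.5e7 person-years gained

gain_ratio = benefit / pot              # ~1.7e-4, the post's "2e-4"

def expected_pot(p_doom):
    """Expected person-years if annihilation has probability p_doom
    (everything is lost on failure, the full benefit is gained on success)."""
    return (1 - p_doom) * (pot + benefit)

# Breakeven: expected_pot(p) == pot  =>  p = benefit / (pot + benefit)
p_breakeven = benefit / (pot + benefit)   # ~1.7e-4
```

Even on this plain absolute scale, any annihilation probability above about 1.7e-4 makes the gamble net-negative; the post's stronger conclusion (only zero risk is acceptable) follows once annihilation is treated as infinitely bad rather than as merely losing 2.1e11 person-years.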

    Which brings this discussion to more human levels. What non-numeric criteria are actually relevant here? Curiosity is a big one, progress, consideration for other things that exist, stuff like that. But not just cold accounting on human lives.
     
  22. Sep 6, 2008 #21
    I've just read the OP a second time and I see that you did state 1/5000 per year so the first paragraph of my previous post is moot. I also see that you had pondered the risk/benefit consideration as well so I should read more carefully what I reply to.

    The argument remains that if the risk is to lose the entire pot under consideration then no amount of benefit can cancel out a zero in the risk/benefit ratio.
     
  23. Sep 6, 2008 #22

    vanesch (Staff Emeritus, Science Advisor, Gold Member)

    ? I don't follow that. As Rees said somewhere, there is always a risk in doing extreme physics experiments, simply because we will be doing something new, and in doing something new, it is impossible to know perfectly in advance what will happen with 100% certainty - so it is always conceivable that some unforeseen catastrophe will occur, by the sheer novelty of the experiment.
    What we can do in such circumstances is try to get an upper limit on the probability that such an unexpected catastrophe can happen, by trying to find out whether nature hasn't in fact done more extreme "experiments" already, and deriving a statistical limit on doomsday scenarios. After all, nature is usually much more extreme than what we can do in the lab. However, these estimates of guaranteed upper bounds on the probability of doomsday will usually result in a finite upper bound. That is, they will not tell us that ANY new experiment is PERFECTLY safe.

    So if we are to make scientific progress, then we have to accept a finite upper bound on the probability of doing the inconceivable, like blowing up the earth, the solar system, or the entire universe. This is an *upper limit*, but it is the only hard guarantee that we have. This implies that scientific progress requires us to accept a finite upper bound on the risk of "destroying the earth/solar system/universe". It is the Faustian bet, so to say.

    So we have to weight the benefit of scientific progress against this risk. If we don't accept the risk, the price to pay is that we won't have scientific progress.

    Now, my *personal* opinion is that a society that has decided to give up on scientific inquiry has in fact lost the only valuable thing that a society has, and as such, represents almost zero value, but again, that's only *my* opinion :redface: .
     
  24. Sep 7, 2008 #23
    I claim that annihilation is an infinitely bad outcome and that no probability of failure above zero can balance this out from a strict accounting/statistical point of view. Let me elaborate on this intuitive shortcut.

    The approach you took in the OP was to only consider people currently alive as of value when determining an acceptable risk. You suggest a cost/benefit calculation of our gamble in human lives for comparison against the driving gamble. So we restrict ourselves to the accounting based on our pot: 2.1e+11 years of human life (this person-year scale was mine). What we need now is a fair way to quantify the desirability/undesirability of gains or losses for this pot.

    I don't consider absolute numbers appropriate for this: they give a skewed scale without an upper bound but with a lower bound (0), since we cannot use negative numbers in our scenario. To get a scale that is balanced on both sides, a factor of growth/shrinkage is fair, e.g. doubling our pot gives a positive factor of 2, halving it yields -2. This is balanced, and it matches my claim that annihilation is infinitely undesirable: the cost value goes to negative infinity as years of human life go to zero. Using this scale, no probability of total failure (annihilation) other than zero can give a positive outcome when we calculate the statistically expected result of the experiment.
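The balanced factor scale described in this post can be written down to see why any nonzero annihilation probability drags the expectation to minus infinity. A sketch of the scale as described (the function names are mine, not the poster's):

```python
def factor_score(final, initial):
    """Balanced growth/shrinkage scale from the post:
    doubling -> +2, halving -> -2, annihilation (final == 0) -> -infinity."""
    if final == 0:
        return float("-inf")
    r = final / initial
    return r if r >= 1 else -1.0 / r

def expected_score(p_doom, pot, benefit):
    """Expected score of the gamble: success grows the pot by `benefit`,
    failure (probability p_doom) zeroes it out."""
    success = factor_score(pot + benefit, pot)
    return (1 - p_doom) * success + p_doom * factor_score(0, pot)

# Even an absurdly small doom probability gives -infinity:
print(expected_score(1e-30, 2.1e11, 3.5e7))   # prints -inf
```

The finite success term is swamped by the infinite penalty for any p_doom greater than zero, which is exactly the post's claim.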

    What remains to debate is whether that scale is fair, or whether an absolute scale is more correct. I call my scale fair because I see a huge qualitative difference between some human lives surviving the experiment and no human life remaining after it. It's my human judgment that if the only thing of value is human life, then it needs to be preserved at all costs, and zero is plainly unacceptable.

    Of course, once this accounting is all done, we can start considering all other factors like progress and advancement of the species, factors that were ignored in the above calculations. When we consider these, the risk of zero actually becomes acceptable since risk is part of what defines our species. We agree on this point. This mitigates quantifiable risk values.
     
  25. Sep 7, 2008 #24
    Re: Black hole in LHC?

    5000 to 1 are acceptable odds? You must be joking. Say one of every 5000 people on the planet is killed by automobiles every year - in 5000 years you will still have as many or more people than you started with, due to population replacement, so human life goes on. In the case of a planet-destroying accident happening with 5000-to-1 odds each year, there is a better than even chance of destroying all life within 3500 years or so. So in the one case you have zero chance of your scenario destroying all life, and in the other case a very significant chance as time passes. Hard to equate the two, and I find your reasoning faulty. Therefore I will assume you were joking.
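The "3500 years or so" figure in this post checks out numerically. A quick sketch, assuming an independent 1-in-5000 chance of destruction each year:

```python
annual_risk = 1 / 5000
p_survive = 1.0
years = 0
while p_survive > 0.5:            # run until destruction is more likely than not
    p_survive *= (1 - annual_risk)
    years += 1
print(years)                      # ~3466, i.e. "3500 years or so"
```

This is just the closed form ln(0.5) / ln(1 - 1/5000) computed by iteration: cumulative survival drops below one half after roughly 3466 years.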
     
  26. Sep 8, 2008 #25
    Re: Black hole in LHC?

    Asking whether Vanesch was joking completely misses his point. The answer is as logical as can be, given the specific (but general, not LHC-related) question "What's an Acceptable Risk for Destroying the Earth?" itself. The actual upper bound on LHC-doomsday scenario probabilities as evaluated by physicists is much, much, much lower. If there were an actual risk, of course it would not be taken, no matter how small. When you reach such absurdly small probabilities, there is no point in debating whether those things can happen or not. Remember, a rabbit could vaporize the atmosphere by passing gas. People still call them "bunnies".

    As a side note, I also think you might not have put much thought into the probability of destroying Earth being more than 1/2 within 3500 years. Most people who do serious research on that come up with much higher numbers, which means much sooner, and that has nothing to do with the LHC. That is geopolitical speculation, though, and probably should not be discussed here.
     
    Last edited: Sep 8, 2008