
What is an acceptable risk to destroy the Earth?

  1. Sep 5, 2008 #1

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    This is a spin-off thread from the thread "A Black Hole in the LHC":
    https://www.physicsforums.com/showthread.php?p=1860755#post1860755

    The scientific debate (in as much as it is scientific :uhh: ) about the remote possibility that micro black holes even exist, that they could be produced at the relatively low energies of the LHC (only 7 times higher than existing accelerators), that they would not evaporate quickly as they should through Hawking radiation, that they would be in such kinematic conditions that they'd be captured in the Earth's gravitational field, and that they would nevertheless interact strongly enough with matter to slowly eat up the Earth from within on a time scale shorter than the remaining lifetime of the Earth in the solar system, etc., takes place in that thread. However, in that thread, the issue was raised:

    "What's an Acceptable Risk for Destroying the Earth?"

    My provocative answer was the following:
    This is of course more an ethical and philosophical debate than a scientific one. So here we go :smile:
     
    Last edited: Sep 5, 2008
  3. Sep 5, 2008 #2
    For this discussion to be relevant, I assume that the probability of a human being both willing to destroy the Earth and having the means to do so is less than 1 in 6 billion.
     
  4. Sep 5, 2008 #3
    We are clearly already destroying the Earth. How many more decades until all wildlife and natural vegetation is destroyed? And then some even dare to call that progress.
     
  5. Sep 5, 2008 #4
    You are free to buy your island and live in a cave there.

    A somewhat related question is: when will we establish laws concerning planetary engineering? Until then, we can always wake up one day and discover that somebody somewhere has decided to change the composition of the upper atmosphere to reduce global warming.
     
  6. Sep 5, 2008 #5

    Ivan Seeking

    Staff Emeritus
    Science Advisor
    Gold Member

    um, er, 42?
     
  7. Sep 5, 2008 #6

    Ivan Seeking

    Staff Emeritus
    Science Advisor
    Gold Member

    It would appear that we get about 9 billion years out of the planet, so 1/9 billion years sounds reasonable.
     
  8. Sep 5, 2008 #7
    But given that virtually every single human on Earth will die within 100 years regardless, drivers or not, you could use 1/100 as an upper limit instead. I'm saying that there is no justification for using some arbitrary annual mortality figure. The scenarios are entirely different: steady culling vs. annihilation.
     
  9. Sep 5, 2008 #8

    lisab

    Staff Emeritus
    Science Advisor
    Gold Member

    Hmmm...interesting idea. Never thought I would post in philosophy...!

    If a person finds the risk of being killed in a car accident too high, he can always opt out of driving, drastically reducing his risk. Sure, it's inconvenient, but the risk management is in the hands of the individual.

    Not so with the "doomsday" scenario. The average citizen of Earth has no power over this decision. Perhaps this (and perhaps a general distrust of science) is the heart of the public's concern.
     
  10. Sep 5, 2008 #9

    Q_Goest

    Science Advisor
    Homework Helper
    Gold Member

    Clappin'. :)
     
  11. Sep 5, 2008 #10

    Q_Goest

    Science Advisor
    Homework Helper
    Gold Member

    Vanesch said:
    In opposition, out of whack said:
    I agree with whack. Why only consider the humans living today? Why not add to that the loss of animals? Vegetation? Add to that the total loss of all living things. Why only today's human inhabitants?

    That said, I suspect the calculated chance of producing the doomsday black hole is essentially non-existent. Still, the question has application to other human actions, as MeJennifer points out. We live at a time during which the rate of extinction is higher than at any other time in the entire history of the Earth.
     
  12. Sep 5, 2008 #11

    Evo


    Staff: Mentor

    There have been events where 95% of the Earth's species were wiped out (the Permian extinction).

    New species are being found every day.

    http://archive.wri.org/item_detail.cfm?id=535&section=pubs&page=pubs_content_text&z=?

    Take a gander at all of the different articles discussing new species found.

    http://www.google.com/search?hl=en&rlz=1T4HPIA_en___US243&q=new+species+found+how+many?

    Previous major extinctions on earth

    http://www.space.com/scienceastronomy/planetearth/extinction_sidebar_000907.html

    The Earth has shown an amazing ability to recover. Humans wouldn't be here now if some of those horrendous extinctions hadn't happened, and life on Earth will most likely flourish after humans disappear.
     
  13. Sep 5, 2008 #12

    Ivan Seeking

    Staff Emeritus
    Science Advisor
    Gold Member

    Who cares what happens to earth if humans all die, and why?

    So, who doesn't care if their kids or grandkids are deprived of life? And who is going to take my head out of the freezer and provide a body for me, and revive me, if everyone dies?

    Seriously though, we are ignoring the promise of life extension, by some claims up to 400 years. I don't think I'll see it, but I think there are people alive today who might see another hundred years or two added to the potential human lifespan.
     
  14. Sep 5, 2008 #13

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    Well, this is why I brought this question here. Is there any intrinsic value to "life on earth" without humans, given that it is humans who are "judging the value"? It's not as innocent a question as you might think. Imagine that you give a certain value to what I call "Gaia" (the biosphere of the Earth). That means that at a certain point, if humanity threatens Gaia too much, you have to make a choice, and eventually choose to annihilate humanity for the sake of Gaia's survival. It is not unthinkable that humans develop enough technology to be able to live on a totally lifeless planet, for instance (except for human-controlled biological processes which produce food and so on). If this threatens to happen, and if you give an intrinsic value to Gaia, then you might have to decide to kill off humanity for the sake of Gaia. That's what it means to give an intrinsic value to "life on earth". It's the kind of value that leads to "I'd rather burn my country than rule over a land of heathens!"

    The other question: why only consider human life today? Well, clearly, we don't have to consider past life anymore, do we? And we can ask whether it makes any sense to talk about future life that never will be. Did I kill the daughter I never had? Am I a murderer of unborn, unconceived children? Worse, am I a murderer of the great-grandchildren my non-existent daughter never had?

    So if humanity is killed *today*, future generations will never have existed, so we didn't destroy them, did we?

    Does the "perpetuation of humanity" have any value, apart from being a form of motivation for some to spend their life doing "important things"?

    Sure, the LHC-will-make-a-black-hole stuff is kind of ridiculous, but it brought up the question.

    The question is of course not "should a random fool be allowed to blow up the earth just for the fun of it", but rather: can we do a risk-benefit analysis with the Earth as a whole in the same way as we do it for ourselves, every day, when we take small risks (which could kill us, but with low probability) in order to increase the joy of our lives, like participating in traffic, just because it makes our lives much better when we can travel (to the grocery shop, to our workplace, to home, ...)? Clearly my personal answer is yes, but I thought it might make for an interesting discussion.
     
    Last edited: Sep 5, 2008
  15. Sep 5, 2008 #14

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    :rofl:

    This is funny: Ivan Seeking found a very similar number:
    (I thought we were done here in 5 billion years, when the Sun becomes a red giant, but OK... same ballpark).
     
  16. Sep 5, 2008 #15

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    The point is that 1.2 million dead from driving is an *acceptable* risk, given that we accept it. If we didn't, we'd stop driving worldwide. It is a *choice*. We know (we in the sense of the large majority of people on this planet) that if we allow people to drive cars, trucks, etc., we will kill 1.2 million people this year, and next year again, and the year after again. It is a very stable number. Nevertheless, there are very few places on earth where people have decided collectively to ban driving altogether. Not because they are idiots, but simply because people did a cost-benefit analysis and came to the conclusion that, for the benefit it brings us, the risk is acceptable. We could, collectively and socially, eliminate that risk almost immediately by prohibiting car driving worldwide. We don't. By far we don't. So we accept it.
    Moreover, it is a very good measure precisely because we all do it collectively, each of us. It is not a few powerful lunatics in their armchairs who decided to take risks with other people's lives. We do it all ourselves. So it is a very good measure of what we collectively accept as a risk.

    Not so with dying at 100 years. We don't CHOOSE to die. It is not an "accepted" risk. We simply undergo it.
     
    Last edited: Sep 5, 2008
  17. Sep 5, 2008 #16

    Ivan Seeking

    Staff Emeritus
    Science Advisor
    Gold Member

    If we are going to consider value, then we have to ask: Value to whom? Since the answer is clearly "us", as we can't speak for other species, or for Gaia, the question itself suggests that our perception of value is all that is required in order for something to have value. Therefore, the "value" of anything depends entirely on the preservation of humans. If we live, anything has value that we say has value. If we all die, nothing has value.

    If you mean implicit value, then you will have to ask God. :biggrin:

    We know only the risk and not the benefit. If we are going to gamble with humanity, it seems reasonable to have a specific example. If we are talking about the LHC, it is a bit difficult to justify risking the planet were the risk any more than infinitely small, like 1:9 billion years.
     
    Last edited: Sep 5, 2008
  18. Sep 5, 2008 #17

    Ivan Seeking

    Staff Emeritus
    Science Advisor
    Gold Member

    The Earth has already been around for 4 billion years.
     
  19. Sep 6, 2008 #18
    If humans die out, then isn't it conceivable that other beings might evolve to a state where they can have values?

    How can you talk about "values to us"? Surely you can only talk about your own values? Would you count Hitler as "one of us"?

    If we all die, doesn't a rational existence like ET or Deep Thought have value? (Assuming they actually exist, or can exist!) What about dolphins? Don't they have value? I kinda like the idea of them still existing if humans top themselves.

    On the LHC, I've seen some figures which suggest the risk is less than 1:9 billion years. Of course these can only be *theoretical*. Martin Rees:

    "It is not inconceivable that physics could be dangerous too. Some experiments are designed to generate conditions more extreme than ever occur naturally. Nobody then knows exactly what will happen. Indeed, there would be no point in doing any experiments if their outcomes could be fully predicted in advance. Some theorists have conjectured that certain types of experiment could conceivably unleash a runaway process that destroyed not just us but Earth itself." -- From "Our final hour".

    But Rees was on Radio 4 news this morning supporting the LHC. His desire to find out what dark matter is seems to have outweighed the risk (for him). Also, according to Dawkins, Rees is a churchgoer, so he's got somewhere else to go...

    By the way, listen out for Radio 4's "Big Bang" day on 10 Sep:

    http://www.bbc.co.uk/radio4/bigbang/
     
  20. Sep 6, 2008 #19

    Q_Goest

    Science Advisor
    Homework Helper
    Gold Member

    Hi vanesch,
    It's a valid question. Companies like mine do risk analysis every day to minimize how many deaths any given engineered system might cause. Implicit in that analysis is that the system being designed and created provides a benefit to people. There's a point I once heard made about a wonderful new energy source: if it is used to provide energy for families in the US, there will be some large number (thousands or more) of deaths and tens of millions to billions of dollars in property damage every year. So should we use this new energy source? It turns out the energy source is natural gas, and we've already accepted the risk. The point here is that we can put a monetary value on life and property and compare it to the monetary value created in this capitalistic society; if the monetary value created is much greater than the value put on life and property, there is justification for creating this value and accepting the risk to human life and property.
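
    For what it's worth, that comparison can be written out as a trivial expected-cost check. This is only a toy sketch; every figure in it is a placeholder, not an actual number for natural gas.

        # Toy sketch of the monetary risk-benefit comparison described above.
        # All figures are placeholders, not real actuarial data.
        value_per_life = 7e6               # assumed monetary value placed on one life ($)
        deaths_per_year = 2e3              # assumed deaths attributable to the energy source
        property_damage_per_year = 1e9     # assumed property damage ($/year)
        value_created_per_year = 1e11      # assumed economic value created ($/year)

        expected_cost = deaths_per_year * value_per_life + property_damage_per_year
        ratio = value_created_per_year / expected_cost

        # The criterion above: accept the risk only if the value created is
        # much greater than the expected yearly cost in life and property.
        print(f"value created is about {ratio:.0f}x the expected yearly cost")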

    However, that's not really applicable when we talk about wiping out species of plants and animals, including humans. We can't put (or it is extremely difficult to put) a monetary value on the Brazilian rain forest, or the breeding grounds of salmon, or the whole of the Earth. For that, we have to go beyond the logical application of monetary value (i.e., the logical comparison of monetary benefit). To put a value on such things, we have to resort to emotional arguments.

    There’ve been a few interesting posts about the nihilist view. Moridin posted an interesting article here.
    The paper referenced by Moridin attempts to "define a methodology for validating moral theories that is objective, consistent, clear, rational, empirical – and true." In so doing, it provides axioms on page 38, including "Morality is a valid concept" and that moral rules must be consistent for all mankind. In creating these axioms, the author has based his argument not on logic but on an emotional predisposition. Why should we logically assume morality is a valid concept? Why "mankind"? A much more advanced civilization than ours might see the human race as nothing more than ants in the sugar bowl needing extermination. Morality (not killing everyone on earth, nor wiping out every living thing) is based on an emotional axiom, not a logical one. So saying there is X value in a rain forest, or in all life on Earth, is an emotional claim, not a logical one. You can't boil down such arguments to anything logical; they always boil down to emotional axioms.

    What's interesting is that we as humans all seem to share similar moral axioms, and in general people seem to agree that there is value in the future. Wiping out rain forests and future generations is a bad thing, and we should place value on the future 'health' of the planet. Again, there's nothing strictly logical in this belief, but there seems to be a consistent agreement among humans that the future is worth more than X number of lives.
     
  21. Sep 6, 2008 #20
    I understand this, but I don't know why you picked an annual figure. Why not pick 137 per hour or 0.04 per second instead? How long will the critical collision take at the end of the accelerator, a millionth of a second? I have no idea. But that would give an acceptable risk of 4 x 10^-8 for the duration of that critical event, so that's what you could use in your risk estimate, rather than an annual figure of 1.2 million tied to a time scale that isn't really pertinent to the event in question. But this is only one way in which your approach to risk calculation is inadequate, in my view.
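
    In case the arithmetic helps, here is a minimal sketch of that conversion; the one-microsecond event duration is just my guess above, not an LHC figure.

        # Scale the accepted annual road-death figure down to the assumed
        # duration of a single collision event.
        deaths_per_year = 1.2e6                     # accepted annual road deaths (from the thread)
        hours_per_year = 365.25 * 24
        seconds_per_year = hours_per_year * 3600    # ~3.16e7 s

        per_hour = deaths_per_year / hours_per_year          # ~137 per hour
        per_second = deaths_per_year / seconds_per_year      # ~0.04 per second

        event_duration_s = 1e-6                     # guessed duration of the "critical" collision
        risk_budget = per_second * event_duration_s           # ~4e-8 for that event

        print(f"{per_hour:.0f}/hour, {per_second:.3f}/second, {risk_budget:.0e} per event")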

    A more important way is the cost/benefit assessment. You can decide to place value strictly on the lives of currently living Homo sapiens for the sake of discussion: 6 billion people with an average lifespan of, say, for argument's sake, 70 years. On average we all have another 35 years to live, so our gamble involves 2.1e11 years of human life. Well, if the gamble is good, it will favor an expected increase in this number. If the gamble is bad, it involves an expected decrease instead. So I ask, what is the expected increase from the accelerator? Well, it's not a medical device. Granted, the knowledge obtained can still serve nuclear medicine, for example, so let's say we can cure some cancers as a result for a million people currently alive, adding 35 million years to our pot. That's an increase of 2e-4 (as a ratio), not much on the benefit side. On the risk side, we could go from 2.1e11 years of human life to... what was that again? Oh yeah, zero. Hmm. My quick calculation tells me that given that pesky zero, the only acceptable degree of risk would have to be zero as well. So from a strict, cold calculation based on human life alone, the risk of annihilation has to be zero before trying this out.
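
    Laid out as plain bookkeeping (the million people cured and the 35-year averages are just my illustrative guesses), the numbers above look like this:

        # Bookkeeping for the cost/benefit sketch above; all inputs are
        # illustrative guesses, not actual estimates.
        population = 6e9
        years_left_each = 35.0                           # half of an assumed 70-year lifespan
        years_at_stake = population * years_left_each    # 2.1e11 life-years on the table

        cured_people = 1e6                               # guessed beneficiaries of spin-off medicine
        years_gained = cured_people * 35.0               # 3.5e7 life-years added to the pot
        relative_benefit = years_gained / years_at_stake # ~1.7e-4

        print(f"at stake: {years_at_stake:.1e} life-years")
        print(f"upside:   {years_gained:.1e} life-years (ratio {relative_benefit:.1e})")
        print("downside: all of it")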

    Which brings this discussion to more human levels. What non-numeric criteria are actually relevant here? Curiosity is a big one; progress, consideration for other things that exist, stuff like that. But not just cold accounting of human lives.
     