What is an acceptable risk to destroy the Earth?

  • Thread starter: vanesch
  • Tags: Earth
In summary, the risk of destroying the Earth is small, and arguably acceptable. It is up to the individual to decide whether he wants to risk his life by driving.
  • #1
vanesch
Staff Emeritus
Science Advisor
Gold Member
This is a spin-off thread from the thread "A Black Hole in the LHC":
https://www.physicsforums.com/showthread.php?p=1860755#post1860755

The scientific debate (in as much as it is scientific :uhh: ) about the remote possibility that micro black holes even exist, that they could even be produced at the relatively low energies of the LHC (only 7 times higher than existing accelerators), that they would not evaporate quickly, as should be the case with Hawking radiation, that they would be in such kinematic conditions that they'd be captured in the Earth's gravitational field, and that they would nevertheless interact strongly enough with matter to slowly eat up the Earth from within on a time scale shorter than the remaining lifetime of Earth in the solar system, etc., takes place in that thread. However, in that thread, the issue was raised:

"What's an Acceptable Risk for Destroying the Earth?"

My provocative answer was the following:
It's funny, but there's an entirely adequate answer to that, as far as we consider that the only valuable things on Earth are the human lives that are right now living on it, and that we don't delve into ethical and philosophical debates about "Gaia" or "future generations" and so on. After all, if Earth is destroyed, those future generations will never exist, and hence will never have to be considered.

So we have to find out what is the acceptable risk of killing 6 billion people. Given that car accidents alone already kill 1.2 million people per year, and that this is considered an acceptable risk, then evidently, if the risk of killing 6 billion people is acceptable on the same level, it should be of the same magnitude, which means that the probability of it occurring should be about 5000 times smaller (because 5000 times more lives) than the probability of killing 1.2 million people, which happens once every year. So the acceptable risk of destroying Earth must be about 1/5000 per year.

Of course, something is not right in this reasoning, and that is that the acceptability of a certain risk is a function of the advantage we get from taking that risk. We accept 1.2 million dead per year because it allows us to travel around. It is not clear that the LHC gives us the same kind of global benefit. But I guess a 1/5000 per year probability of destroying the Earth is in the acceptable ballpark, give or take a few orders of magnitude.

This is of course more an ethical and philosophical debate than a scientific one. So here we go :smile:
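A compact restatement of the arithmetic above (a sketch; it simply equates expected fatalities per year between the two risks):

$$p \times 6\times10^{9}\ \text{lives} \;=\; 1.2\times10^{6}\ \text{lives/year} \quad\Longrightarrow\quad p \;=\; \frac{1.2\times10^{6}}{6\times10^{9}} \;=\; \frac{1}{5000}\ \text{per year}$$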
 
  • #2
For this discussion to be relevant, I assume that the probability for a human being to be willing to destroy the Earth, and to have the means to do so, is less than 1 in 6 billion.
 
  • #3
We are clearly already destroying the Earth. How many more decades until all wildlife and natural vegetation is destroyed? And then some even dare to call that progress.
 
  • #4
MeJennifer said:
We are clearly already destroying the Earth. How many more decades until all wildlife and natural vegetation is destroyed? And then some even dare to call that progress.
You are free to buy your island and live in a cave there.

A somewhat related question is: when will we establish laws concerning planetary engineering? Until then, we will always face the possibility of waking up and discovering that somebody somewhere has decided to change the composition of the upper atmosphere to reduce global warming.
 
  • #5
vanesch said:
"What's an Acceptable Risk for Destroying the Earth?"

um, er, 42?
 
  • #6
It would appear that we get about 9 billion years out of the planet, so 1/9 billion years sounds reasonable.
 
  • #7
vanesch said:
the probability of it occurring should be about 5000 times smaller (because 5000 times more lives) than the probability of killing 1.2 million people, which happens once every year. So the acceptable risk of destroying Earth must be about 1/5000 per year.
But given that virtually every single human on Earth will die regardless within 100 years, drivers or not, you could use 1/100 as an upper limit instead. I'm saying that there is no justification for using some arbitrary annual mortality figure. The scenarios are entirely different: steady culling vs annihilation.
 
  • #8
Hmmm...interesting idea. Never thought I would post in philosophy...!

If a person finds the risk of being killed in a car accident too high, he can always opt out of driving, drastically reducing his risk. Sure, it's inconvenient, but the risk management is in the hands of the individual.

Not so with the "doomsday" scenario. The average citizen of Earth has no power over this decision. Perhaps this (and perhaps a general distrust of science) is the heart of the public's concern.
 
  • #9
MeJennifer said:
We are clearly already destroying the Earth. How many more decades until all wildlife and natural vegetation is destroyed? And then some even dare to call that progress.
Clappin'. :)
 
  • #10
Vanesch said:
It's funny, but there's an entirely adequate answer to that, as far as we consider that the only valuable things on Earth are the human lives that are right now living on it, and that we don't delve into ethical and philosophical debates about "Gaia" or "future generations" and so on. After all, if Earth is destroyed, those future generations will never exist, and hence will never have to be considered.

In opposition, out of whack said:
But given that virtually every single human on Earth will die regardless within 100 years, drivers or not, you could use 1/100 as an upper limit instead. I'm saying that there is no justification for using some arbitrary annual mortality figure. The scenarios are entirely different: steady culling vs annihilation.
I agree with whack. Why only consider human life living today? Why not add to that the loss of animals? Vegetation? Add to that the total loss of all living things. Why only today's human inhabitants?

That said, I suspect the calculated chance of producing the doomsday black hole is likely non-existent. Still, the question has application to other human actions, as MeJennifer points out. We live at a time during which the rate of extinction is higher than at any time over the entire history of the Earth.
 
  • #11
Q_Goest said:
We live at a time during which the rate of extinction is higher than at any time over the entire history of the Earth.
There have been events where 95% of the Earth's species were wiped out (the Permian event).

New species are being found every day.

http://archive.wri.org/item_detail.cfm?id=535&section=pubs&page=pubs_content_text&z=?

Take a gander at all of the different articles discussing new species found.

http://www.google.com/search?hl=en&rlz=1T4HPIA_en___US243&q=new+species+found+how+many?

Previous major extinctions on earth

http://www.space.com/scienceastronomy/planetearth/extinction_sidebar_000907.html

The Earth has shown an amazing ability to recover. Humans wouldn't be here now if some of the horrendous extinctions hadn't happened and life on Earth will most likely flourish after humans disappear.
 
  • #12
Who cares what happens to Earth if humans all die, and why?

So, who doesn't care if their kids or grandkids are deprived of life? And who is going to take my head out of the freezer and provide a body for me, and revive me, if everyone dies?

Seriously though, we are ignoring the promise of life extension of up to 400 years by some claims. I don't think I'll see it, but I think there are people alive today who might see another hundred years or two added to the potential human lifespan.
 
  • #13
Q_Goest said:
I agree with whack. Why only consider human life living today? Why not add to that the loss of animals? Vegetation? Add to that the total loss of all living things. Why only today's human inhabitants?

Well, this is why I brought this question here. Is there any intrinsic value to "life on earth" without humans - given that it is humans who are "judging the value" ? It's not such an innocent question as you might think. Imagine that you give a certain value to what I call "Gaia" (the biosphere of Earth). That means that at a certain point, if humanity threatens Gaia too much, you should make a choice, and possibly choose to annihilate humanity for the sake of Gaia's survival. It is not unthinkable that humans develop enough technology to be able to live on a totally lifeless planet, for instance (except for human-controlled biological processes which produce food and so on). If this threatens to happen, and if you give an intrinsic value to Gaia, then you would maybe have to decide to kill off humanity for the sake of Gaia. That's what it means to give an intrinsic value to "life on earth". It's the kind of value that leads to "I'd rather burn my country than rule over a land of heathens!"

The other question: why only consider human life today ? Well, clearly, we don't have to consider past life anymore, do we ? And we can ask the question whether it makes any sense to talk about future life that never will be. Did I kill the daughter I never had ? Am I a murderer of unborn, unconceived children ? Worse, am I a murderer of the great-grandchildren my non-existent daughter never had ?

So if humanity is killed *today*, future generations will never have existed, so we didn't destroy them, did we ?

Does the "perpetuation of humanity" have any value - apart from a form of motivation for some to spend their life doing "important things" ?

Q_Goest said:
That said, I suspect the calculated chance of producing the doomsday black hole is likely non-existent. Still, the question has application to other human actions, as MeJennifer points out. We live at a time during which the rate of extinction is higher than at any time over the entire history of the Earth.

Sure, the LHC-will-make-a-black-hole stuff is kind of ridiculous, but it brought up the question.

The question is of course not "should a random fool be allowed to blow up the Earth just for the fun of it", but rather, can we do a risk-benefit analysis with Earth as a whole in the same way as we do it for ourselves, every day, when we take small risks (which could kill us, but low probability) in order to increase the joy of our lives, like participating in traffic, just because it makes our life much better when we can travel (to the grocery shop, to our work place, to home,...) ? Clearly my personal answer is yes, but I thought it might make for an interesting discussion.
 
  • #14
humanino said:
For this discussion to be relevant, I assume that the probability for a human being to be willing to destroy the Earth, and to have the means to do so, is less than 1 in 6 billion.

:rofl:

This is funny: Ivan Seeking found a very similar number:
It would appear that we get about 9 billion years out of the planet, so 1/9 billion years sounds reasonable.

(I thought it was done here in 5 billion years, when the sun becomes a red giant, but ok... same ballpark).
 
  • #15
out of whack said:
But given that virtually every single human on Earth will die regardless within 100 years, drivers or not, you could use 1/100 as an upper limit instead. I'm saying that there is no justification for using some arbitrary annual mortality figure. The scenarios are entirely different: steady culling vs annihilation.

The point is that 1.2 million dead from driving is an *acceptable* risk, given that we accept it. If we didn't, we'd stop driving worldwide. It is a *choice*. We know (we in the sense of the large majority of people on this planet) that if we allow people to drive cars/trucks,... we will kill 1.2 million people this year, and next year again, and the year after again. It is a very stable number. Nevertheless, there are very few places on Earth where people decided collectively to ban driving altogether. Not because they are idiots, but simply because people did a cost-benefit analysis and came to the conclusion that, for the benefit it brings us, the risk is acceptable. We could, collectively and socially, eliminate that risk almost immediately by prohibiting car driving worldwide. We don't. By far we don't. So we accept it.
Moreover, it is a very good measure, because we collectively do so, each of us. It is not a few powerful lunatics in their armchairs who decided to take risks with other people's lives. We do it all ourselves. So it is a very good measure of what we collectively accept as a risk.

Not so with dying at 100 years. We don't CHOOSE to die. It is not an "accepted" risk. We simply undergo it.
 
  • #16
vanesch said:
So if humanity is killed *today*, future generations will never have existed, so we didn't destroy them, did we ?

Does the "perpetuation of humanity" have any value - apart from a form of motivation for some to spend their life doing "important things" ?

If we are going to consider value, then we have to ask: Value to whom? Since the answer is clearly "us", as we can't speak for other species, or for Gaia, the question itself suggests that our perception of value is all that is required in order for something to have value. Therefore, the "value" of anything depends entirely on the preservation of humans. If we live, anything has value that we say has value. If we all die, nothing has value.

If you mean implicit value, then you will have to ask God. :biggrin:

vanesch said:
The question is of course not "should a random fool be allowed to blow up the Earth just for the fun of it", but rather, can we do a risk-benefit analysis with Earth as a whole in the same way as we do it for ourselves, every day, when we take small risks (which could kill us, but low probability) in order to increase the joy of our lives, like participating in traffic, just because it makes our life much better when we can travel (to the grocery shop, to our work place, to home,...) ? Clearly my personal answer is yes, but I thought it might make for an interesting discussion.

We know only the risk and not the benefit. If we are going to gamble with humanity, it seems reasonable to have a specific example. If we are talking about the LHC, it is a bit difficult to justify risking the planet were the risk any more than infinitely small, like 1:9 billion years.
 
  • #17
vanesch said:
(I thought it was done here in 5 billion years, when the sun becomes a red giant, but ok... same ballpark).

The Earth has already been around for 4 billion years.
 
  • #18
If humans die out, then isn't it conceivable that other beings might evolve to a state where they can have values?

How can you talk about "values to us"? Surely you can only talk about your own values? Would you count Hitler as "one of us"?

If we all die doesn't a rational existence like ET or Deep Thought have value? (Assuming they actually exist, or can exist!) What about Dolphins? Don't they have value? I kinda like the idea of them still existing if humans top themselves.

On the LHC, I've seen some figures which suggest the risk is less than 1:9 billion years. Of course these can only be *theoretical*. Martin Rees:

"It is not inconceivable that physics could be dangerous too. Some experiments are designed to generate conditions more extreme than ever occur naturally. Nobody then knows exactly what will happen. Indeed, there would be no point in doing any experiments if their outcomes could be fully predicted in advance. Some theorists have conjectured that certain types of experiment could conceivably unleash a runaway process that destroyed not just us but Earth itself." -- From "Our final hour".

But Rees was on Radio 4 news this morning supporting the LHC. His desire to find out what dark matter is seems to have outweighed the risk (for him). Also, according to Dawkins, Rees is a churchgoer, so he's got somewhere else to go...

By the way, listen out for Radio 4's "Big Bang" day on 10 Sep:

http://www.bbc.co.uk/radio4/bigbang/
 
  • #19
Hi vanesch,
vanesch said:
Well, this is why I brought this question here. Is there any intrinsic value to "life on earth" without humans - given that it is humans who are "judging the value" ?

The question is of course not "should a random fool be allowed to blow up the Earth just for the fun of it", but rather, can we do a risk-benefit analysis with Earth as a whole in the same way as we do it for ourselves, every day, when we take small risks (which could kill us, but low probability) in order to increase the joy of our lives, like participating in traffic, just because it makes our life much better when we can travel (to the grocery shop, to our work place, to home,...) ? Clearly my personal answer is yes, but I thought it might make for an interesting discussion.
It’s a valid question. Companies like mine do risk analysis every day to minimize how many deaths any given engineered system might incur. Implicit in that analysis is that the system being designed and created provides a benefit to people. There’s a point I once heard made about a wonderful new energy source: if it is to be used to provide energy for families in the US, there will be some large number (thousands +) of deaths and many tens of millions or thousands of millions in property damage every year – so should we use this new energy source? Turns out the energy source is natural gas, and we’ve already accepted the risk. The point here is that we can put a monetary value on life and property and compare that to the monetary value created in this capitalistic society, and if the monetary value created is much greater than the value put on life and property, there is justification in creating this value and accepting the risk to human life and property.

However, that’s not really applicable when we talk about wiping out species of plants and animals, including humans. We can’t put (or it is extremely difficult to put) a monetary value on the Brazilian rain forest or the breeding grounds of salmon, or the whole of the Earth. For that, we have to go beyond the logical application of monetary value (ie: the logical comparison of monetary benefit). To put a value on such things, we have to resort to emotional arguments.

There’ve been a few interesting posts about the nihilist view. Moridin posted an interesting article here.
Moridin said:
As for ethics, feel free to read http://www.box.net/shared/static/mb4n75g0s8.pdf or Carrier (2005).

The paper referenced by Moridin attempts to “define a methodology for validating moral theories that is objective, consistent, clear, rational, empirical – and true.” In so doing, it provides axioms on page 38, including “Morality is a valid concept” and that moral rules must be consistent for all mankind. In creating these axioms, the author has based his argument not on logic, but on an emotional predisposition. Why should we logically assume morality is a valid concept? Why ‘mankind’? A much more advanced civilization than ours might see the human race as nothing more than ants in the sugar bowl needing extermination. Morality – not killing everyone on Earth nor wiping out every living thing – is based on an emotional axiom, not a logical one. So saying there is X value in a rain forest or in all life on Earth is an emotional claim, not a logical one. You can’t boil down such arguments to anything purely logical; they always boil down to emotional axioms.

What’s interesting is that we as humans all seem to share similar moral axioms, and in general, people seem to agree that there is value in the future. Wiping out rain forests and future generations is a bad thing, and we should place value on the future ‘health’ of the planet. Again, there’s nothing strictly logical in this belief, but there seems to be a consistent agreement among humans that the future is worth more than x number of lives.
 
  • #20
vanesch said:
The point is that 1.2 million dead from driving is an *acceptable* risk, given that we accept it.
I understand this, but I don't know why you picked an annual figure. Why not pick 137 per hour or 0.04 per second instead? How long will the critical collision take at the end of the accelerator, a millionth of a second? I have no idea. But that would give an acceptable risk of 4 x 10^-8 for the duration of that critical event, so that's what you could be using in your risk estimate instead of 1.2 million per year, a time base that isn't really pertinent to the source of your discussion. But this is only one way your approach to risk calculation is inadequate in my view.
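For concreteness, a minimal sketch of the unit conversions above (the one-microsecond event duration is the poster's own hypothetical figure, not an LHC number):

```python
# Convert 1.2 million road deaths per year into the other time bases quoted.
deaths_per_year = 1.2e6
hours_per_year = 365 * 24                  # 8760
seconds_per_year = hours_per_year * 3600   # ~3.15e7

print(deaths_per_year / hours_per_year)           # ~137 deaths per hour
print(deaths_per_year / seconds_per_year)         # ~0.04 deaths per second
print(deaths_per_year / seconds_per_year * 1e-6)  # ~4e-8 for a 1-microsecond event
```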

A more important way is the cost/benefit assessment. You can decide to place value strictly on the life of currently-living Homo sapiens for the sake of discussion, so 6 billion people with an average lifespan of, say, for argument's sake, 70 years. On average, we all have another 35 years to live, so our gamble involves 2.1e11 years of human life. Well, if the risk is good, it will favor an expected increase in this number. If the risk is bad, it involves an expected decrease instead. So I ask, what is the expected increase from the accelerator? Well, it's not a medical device. Granted, the knowledge obtained can still serve nuclear medicine, for example, so let's say we can cure some cancers as a result for a million people currently alive, adding 35 million years to our pot. That's an increase of 2e-4 (ratio), not much on the benefit side. On the risk side, we could go from 2.1e11 years of human life to... what was that again? Oh yeah, zero. Hmm. My quick calculation tells me that, given that pesky zero, the only acceptable degree of risk would have to be zero as well. So from a strict, cold calculation based on human life alone, the risk of annihilation has to be zero before trying this out.
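A minimal sketch of that accounting (the cancer-cure figures are the poster's own hypothetical assumptions, not real estimates):

```python
# Pot of human life currently at stake, in person-years.
population = 6e9
avg_years_left = 35                  # half of an assumed 70-year lifespan
pot = population * avg_years_left    # 2.1e11 person-years

# Hypothetical benefit: cures for one million people currently alive.
beneficiaries = 1e6
benefit = beneficiaries * avg_years_left   # 3.5e7 person-years

print(f"pot: {pot:.2e} person-years")          # 2.10e+11
print(f"benefit ratio: {benefit / pot:.1e}")   # ~1.7e-4, i.e. about 2e-4
```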

Which brings this discussion to more human levels. What non-numeric criteria are actually relevant here? Curiosity is a big one, progress, consideration for other things that exist, stuff like that. But not just cold accounting on human lives.
 
  • #21
I've just read the OP a second time and I see that you did state 1/5000 per year so the first paragraph of my previous post is moot. I also see that you had pondered the risk/benefit consideration as well so I should read more carefully what I reply to.

The argument remains that if the risk is to lose the entire pot under consideration then no amount of benefit can cancel out a zero in the risk/benefit ratio.
 
  • #22
out of whack said:
The argument remains that if the risk is to lose the entire pot under consideration then no amount of benefit can cancel out a zero in the risk/benefit ratio.

? I don't follow that. As Rees said somewhere, there is always a risk in doing extreme physics experiments, simply because we will do something that is new, and in doing something that is new, it is impossible to know perfectly in advance what will happen with 100% certainty, so it is always conceivable that some unforeseen catastrophe will occur - by the sheer novelty of the experiment.
What we can do in such circumstances is to try to get an upper limit on the probability that such an unexpected catastrophe can happen, by trying to find out whether nature hasn't in fact already done more extreme "experiments", and deriving a statistical limit on doomsday scenarios. After all, nature is usually much more extreme than what we can do in the lab. However, these estimates of guaranteed upper bounds on the probability of doomsday will usually result in a finite upper bound. That is, they will not tell us that ANY new experiment is PERFECTLY safe.
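One standard way to turn "nature has already run this experiment N times without disaster" into a number is the statistical rule of three: with zero events observed in N independent trials, the 95% confidence upper bound on the per-trial probability is roughly 3/N. A minimal sketch, using a made-up trial count rather than any real cosmic-ray estimate:

```python
import math

def upper_bound_p(n_trials, confidence=0.95):
    """Upper bound on the per-trial event probability given zero events in
    n_trials independent trials: solves (1 - p)**n_trials = 1 - confidence.
    expm1/log keep the result accurate when n_trials is huge."""
    return -math.expm1(math.log(1.0 - confidence) / n_trials)

n = 1e22  # hypothetical number of comparable natural "experiments" (placeholder)
print(upper_bound_p(n))  # ~3.0e-22
print(3.0 / n)           # the rule-of-three approximation, 3e-22
```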

So if we are to make scientific progress, then we have to accept a finite upper bound on the probability of doing the inconceivable, like blowing up the Earth, the solar system, or the entire universe. This is an *upper limit*, but it is the only hard guarantee that we have. This implies that scientific progress requires us to accept a finite upper bound on the risk of "destroying the earth/solar system/universe". It is the Faustian bet, so to say.

So we have to weight the benefit of scientific progress against this risk. If we don't accept the risk, the price to pay is that we won't have scientific progress.

Now, my *personal* opinion is that a society that has decided to give up on scientific inquiry has in fact lost the only valuable thing that a society has, and as such, represents almost zero value, but again, that's only *my* opinion :redface: .
 
  • #23
vanesch said:
? I don't follow that.
I claim that annihilation is an infinitely bad outcome and that no probability of failure above zero can balance this out from a strict accounting/statistical point of view. Let me elaborate on this intuitive shortcut.

The approach you took in the OP was to consider only people currently alive as having value when determining an acceptable risk. You suggest a cost/benefit calculation of our gamble in human lives for comparison against the driving gamble. So we restrict ourselves to the accounting based on our pot: 2.1e+11 years of human life (this person-year scale was mine). What we need now is a fair way to quantify the desirability/undesirability of gains or losses for this pot.

I don't consider absolute numbers appropriate for this: they give a skewed scale without upper bound but with a lower bound (0), since we cannot use negative numbers in our scenario. To get a scale that is balanced on both sides, a factor of growth/shrinkage is fair, eg: doubling our pot gives a positive factor of 2, halving it yields -2. This is balanced, and it matches my claim that annihilation is infinitely undesirable: the cost value goes to negative infinity as years of human life go to zero. Using this scale, no probability of total failure (annihilation) other than zero can give a positive outcome when we calculate the statistical expected result of the experiment.
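One way to make that precise (a sketch; the logarithmic utility here is my reading of the poster's growth/shrinkage scale, not his exact formulation): score an outcome by the logarithm of the pot's growth factor. Then for any finite gain g > 0 and any annihilation probability p > 0,

$$E[U] \;=\; p\,\ln(0^{+}) \;+\; (1-p)\,\ln(1+g) \;=\; -\infty,$$

so no finite benefit can drag the expected score above negative infinity while p > 0.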

What remains to the debate is whether that scale is fair, or whether an absolute scale is more correct. I call my scale fair because I see a huge qualitative difference between some human lives surviving the experiment and no human life remaining after it. It's my human judgment that if the only thing of value is human life then it needs to be preserved at all cost and zero is plainly unacceptable.

Of course, once this accounting is all done, we can start considering all other factors like progress and advancement of the species, factors that were ignored in the above calculations. When we consider these, the risk of going to zero actually becomes acceptable, since risk-taking is part of what defines our species. We agree on this point. This mitigates the quantifiable risk values.
 
  • #24
vanesch said:
So we have to find out what is the acceptable risk of killing 6 billion people. Given that car accidents alone already kill 1.2 million people per year, and that this is considered an acceptable risk, then evidently, if the risk of killing 6 billion people is acceptable on the same level, it should be of the same magnitude, which means that the probability of it occurring should be about 5000 times smaller (because 5000 times more lives) than the probability of killing 1.2 million people, which happens once every year. So the acceptable risk of destroying Earth must be about 1/5000 per year.

Of course, something is not right in this reasoning, and that is that the acceptability of a certain risk is a function of the advantage we get from taking that risk. We accept 1.2 million dead per year because it allows us to travel around. It is not clear that the LHC gives us the same kind of global benefit. But I guess a 1/5000 per year probability of destroying the Earth is in the acceptable ballpark, give or take a few orders of magnitude.

5000 to 1 are acceptable odds? You must be joking. Let's say one of every 5000 people on the planet is killed by automobiles every year -- in 5000 years you will still have as many or more people than you started with, due to population replacement, so human life goes on. In the case of a planet-destroying accident happening with 5000-1 odds each year, there is a better than even chance of destroying all life within 3500 years or so. So in the one case you have zero chance of your scenario destroying all life, and in the other case a very significant chance as time passes. Hard to equate the two, and I find your reasoning faulty. Therefore I will assume you were joking.
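For the record, the 3500-year figure follows from compounding the annual odds (a quick check of the arithmetic):

$$\left(1 - \tfrac{1}{5000}\right)^{n} \;=\; \tfrac{1}{2} \quad\Longrightarrow\quad n \;=\; \frac{\ln 2}{-\ln\!\left(1 - \tfrac{1}{5000}\right)} \;\approx\; 3466\ \text{years}.$$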
 
  • #25
dilletante said:
5000 to 1 are acceptable odds? You must be joking. Let's say one of every 5000 people on the planet is killed by automobiles every year -- in 5000 years you will still have as many or more people than you started with, due to population replacement, so human life goes on. In the case of a planet-destroying accident happening with 5000-1 odds each year, there is a better than even chance of destroying all life within 3500 years or so. So in the one case you have zero chance of your scenario destroying all life, and in the other case a very significant chance as time passes. Hard to equate the two, and I find your reasoning faulty. Therefore I will assume you were joking.
Asking whether Vanesch was joking completely misses his point. The answer is as logical as can be, given the general (not LHC-related) question "What's an Acceptable Risk for Destroying the Earth?" itself. The actual upper bound on LHC-doomsday scenario probabilities as evaluated by physicists is much, much, much lower. If there were an actual risk, of course it would not be taken, no matter how small. When you reach such absurdly small probabilities, there is no point in discussing whether those things can happen or not. Remember, a rabbit can vaporize the atmosphere by passing gas. People still call them "bunny".

As a side note, I also think you might not have put so much thought into the probability of destroying Earth being more than 1/2 in 3500 years. Most people who do serious research on that come up with much higher numbers, which means much sooner, and that has nothing to do with the LHC. This is geopolitical speculation, and probably should not be discussed here.
 
  • #26
humanino said:
As a side note, I also think you might not have put so much thought into the probability of destroying Earth being more than 1/2 in 3500 years. Most people who do serious research on that come up with much higher numbers, which means much sooner, and that has nothing to do with the LHC. This is geopolitical speculation, and probably should not be discussed here.

Hi Humanino,

With all due respect, we will have to agree to disagree if you think 5000-1 is a logical answer to the question "What's an acceptable risk for destroying the earth".

If you read my post you would see that I based my 50% probability in 3500 years on the stated odds of one chance in 5000 every year. Do the math, it is a simple probability calculation.
 
  • #27
dilletante said:
5000 to 1 are acceptable odds? You must be joking. Let's say one of every 5000 people on the planet is killed by automobiles every year -- in 5000 years you will still have as many or more people than you started with, due to population replacement, so human life goes on.

Ok, but that is because you put a very high value on an abstract notion such as "human life goes on", meaning: "humanity will exist forever" or something of the kind. I classify this in the same ballpark as "Celtic culture will still exist", or "the Egyptian pyramids will still exist", or "the giant panda will still be around", or "capitalism will still exist", or something of the kind. "Humanity" is an abstract notion, which is in itself not automatically very valuable. To me, the value of "humanity" is the sum of the values of its components, namely human beings. Now, what is the value of human beings who will never exist ? What is the value of the descendants of non-existent people ?

So do we care about the continued existence of humanity in itself ? I have difficulties with that idea. It sounds almost religious to me. Do we care about the suffering and death of existent people ? Yes, of course. Do we care about the suffering and death of future generations ? Yes. But do we care about the, well the what, of non-existent people ?

Imagine people decided not to have kids anymore. Of course, this will not happen, for different reasons, because there are strong biological drives to have kids, but take it just as a thought experiment. Would this be in any way a disaster ? I don't think so. Of course, for the last few, it will be a bit harsh, because they might feel lonely. In the meantime, life would get better and better for most of us, as more and more resources would be freed up for fewer and fewer people. I wouldn't consider this a disaster at all. It would of course mean "the end of humanity", but I'd consider this a blessing rather than a disaster.
dilletante said:
In the case of a planet-destroying accident happening with 5000-1 odds each year, there is a better than even chance of destroying all life within 3500 years or so. So in the one case you have zero chance of your scenario destroying all life, and in the other case a very significant chance as time passes. Hard to equate the two, and I find your reasoning faulty. Therefore I will assume you were joking.

As I tried to outline, I don't give much value to abstract concepts such as "the perpetuation of humanity". I couldn't care less, in fact. However, I do care about the suffering and death of individual human beings. I would even go further: if there were a way to avoid most of the suffering and death of people today by doing something that will for sure destroy the Earth in 3500 years, then I'd go all for it.

And even then, if Earth is destroyed 3500 years from now, the probability is STILL not 0 for "humanity to persist": we might have colonized other planets by then. Or we might have blown ourselves apart much earlier (see humanino's post), or have caused other disasters that have put an end to our existence, in which case the point of destroying the Earth 3500 years from now is moot.

BTW, if you want to nitpick: indeed, the probability of destroying the earth, given an exponential distribution with tau = 5000 years, is 50% in 3500 years, but the average time in between such events is still 5000 years.
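Spelled out, that nitpick is just the exponential waiting-time distribution (a one-line check):

$$P(T \le 3500) \;=\; 1 - e^{-3500/5000} \;=\; 1 - e^{-0.7} \;\approx\; 0.50, \qquad E[T] \;=\; \tau \;=\; 5000\ \text{years}.$$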

We can see this differently. Imagine that you are alone on earth. What is now an "acceptable risk to destroy the earth" ? Isn't it equal to, well, what you take as an acceptable risk to get killed yourself ?

Now, imagine that there are 3 people on earth, each with about the same acceptable risks themselves to get killed. What's now an acceptable risk ?

Imagine that there are 10 people on earth. What now ? 1000 people ? 1 000 000 people ? 6 000 000 000 people ?
 
  • #28
dilletante said:
If you read my post you would see that I based my 50% probability in 3500 years on the stated odds of one chance in 5000 every year. Do the math, it is a simple probability calculation.
When I said what I said, I had already learned logarithms, years ago; thanks for the tip. What I am telling you is that a 50% probability in 3500 years (what you are panicked by) is very LOW by serious standards. Truth is, if we make it 3500 more years at this pace, that would really be miraculous.
 
  • #29
vanesch said:
"What's an Acceptable Risk for Destroying the Earth?"

We live with the same acceptable risk every day. Our astronomers miss the "ghoul" type asteroids every day. These are asteroids with 4% reflectivity passing near Earth. One hit by a 100k-sized one and we are kaput. I guess rats and cockroaches might make it through the ensuing fire storm, but we wouldn't. Destroying the Earth?... that would take a 1000k asteroid or bigger. Less likely, but not unheard of.

But, philosophically speaking, just one death destroys the Earth for the person who dies. As soon as they check out, the Earth, for them, is destroyed and gone. This is not a risk, it's a given.

Should the risk of Earth's demise be held by one group of individuals smashing quarks together? Sure, why not? If one inanimate asteroid carries that risk, why not a few humans?
 
  • #30
The distinction was drawn earlier in the thread between an acceptable risk - one that we voluntarily assume, like getting in a car - and those circumstances over which we have no control.
 
  • #31
muppet said:
The distinction was drawn earlier in the thread between an acceptable risk - one that we voluntarily assume, like getting in a car - and those circumstances over which we have no control.

This is true, but baywax also has a point: relatively high risk levels associated with chosen actions are usually weighed in a comparison between risk and benefit, but if the involved risk is much lower than "natural and unavoidable" risks, then it would be a bit strange to prohibit the action, as its risk would in any case just be "noise". If we can obtain the slightest benefit from an action which poses a risk orders of magnitude smaller than the natural risk (such as an asteroid impact) of finishing us all off, then I think the discussion is moot.
 

1. What factors are considered when determining an acceptable risk to destroy the earth?

The factors that are typically considered when determining an acceptable risk to destroy the earth include the potential consequences of the risk, the likelihood of those consequences occurring, the benefits of taking the risk, and the alternatives to taking the risk.

2. Is there a specific threshold or level of risk that is considered acceptable?

There is no universally agreed upon threshold or level of risk that is considered acceptable to destroy the earth. It ultimately depends on the specific situation and the values and priorities of the individuals making the decision.

3. How do scientists determine the potential consequences of a risk to the earth?

Scientists use various methods to assess the potential consequences of a risk to the earth, such as computer modeling, risk assessment frameworks, and expert opinions. They also consider historical data and scientific evidence to inform their evaluations.

4. Are there any risks that are universally considered unacceptable to destroy the earth?

While there may not be a consensus on what level of risk is acceptable, there are certain risks that are generally considered unacceptable to destroy the earth, such as nuclear war, catastrophic climate change, and large-scale environmental disasters.

5. How can we balance the need for progress and development with the potential risks to the earth?

Balancing the need for progress and development with the potential risks to the earth is a complex and ongoing challenge. It requires careful consideration of the potential consequences and trade-offs, as well as a commitment to sustainable and responsible practices that minimize harm to the earth.
