Destined to build a super AI that will destroy us?

  • Thread starter: Greg Bernhardt
  • Tags: Ai, Build
AI Thread Summary
The discussion centers around the potential dangers of superintelligent AI, referencing Sam Harris's TED Talk. Participants express concerns that society may not be taking these risks seriously enough, paralleling issues like global warming. There is debate over whether AI can develop its own goals or remain strictly under human control, with some arguing that autonomous systems could evolve unpredictably. While some view the advancement of AI as a natural evolution, others warn of the potential for catastrophic outcomes if safeguards are not implemented. The conversation highlights a tension between optimism for AI's benefits and fear of its possible threats to humanity.
  • #101
Boing3000 said:
Drones are as fragile and innocuous as flies, albeit with a total inability to draw energy from their environment.
Industrial robots don't move.
Self-driving cars, even like this one, are harmless (to humankind). This is neither science nor fiction; this is fantasy/romance.

"We" may become totally dependent on machines. A case can be made that it is already so. Anybody can blow the power grid, shut down the internet, and what not, and provoke mayhem (but not death). There is no need for AI to do that; quite the opposite: an AI would have a survival incentive to keep those alive and healthy.

Indeed. As are killer viruses, killer guns, killer wars, killer fossil fuels, killer sugar, fat and cigarettes. Millions of deaths per year ... still no AI in sight. I bet some forms of deep learning are already doing that to prevent some "catastrophic" events, in the vaults of intelligence agencies.

The thing is, outsmarting humans is not "a threat". But that is the core of the problem. Homo sapiens sapiens has never been a threat to other species. It lived in a healthy equilibrium made of fight AND flight with its environment. Only a very recent and deep-seated stupid meme (growth and progress) is threatening the ecosystem (of which humankind is entirely a part).
Things will sort themselves out as usual. Maybe some sort of ant will also mutate and start devouring the planet. There is no intelligence nor design in evolution, just random events sorted by other happenstance / laws of nature.

In this context, intelligence, even a mild one, will realize that and have a deep respect for the actual equilibrium in place in the environment.
Again, it is stupidity that is threatening (by definition). So maybe an A.S. (artificial stupidity) would be threatening to humans, who seem hell-bent on claiming the Golden Throne in the stupidity contest (led by wannabe scientists like Elon Musk...)

Granted,

I would say many things to Elon Musk, stupid isn't one of them...

Growth and progress isn't a very recent development; it is as old as humanity. It wasn't invented in the last century to chop down forests and drive some species to extinction. Even if "growth and progress" were that recent, why couldn't an AI developed by some company inherit it, and become that "Artificial Stupidity" you talk about? By the way, recent AIs are kind of stupid because they only see a single goal. Why is that different from our stupidity, when we only see a goal of big growth and don't care about the environment? (So we become very efficient in that process, and animals can't do anything against us.)

Your lines imply that an intelligent AI would actually have to protect us from our stupidity.
Great, use that mentality in AI development, and we get something that wants to cage us for our own good... Thanks, I don't want that.

Yes, there are a number of things that can threaten all humanity.
A cosmic event: we can't prevent that, but it looks like we have plenty of time to prepare.
A killer virus: yes, but it is very unlikely that it would kill all humans; however, an AI could develop millions of variants.
A nuclear war at the time of the Cuban crisis is the only near analogy. Is it stupid to say that in such a case even a small error could endanger all humanity?
 
  • Likes: Averagesupernova
  • #102
Grands said:
What about Hiroshima and Nagasaki ?
WWII killed at least 50 million people directly; some studies go as far as 80 million considering indirect casualties (source). From the same source, for Japan alone, 3 million died in that war, and about 210,000 of those deaths are from the Nagasaki and Hiroshima bombings. As one can see, these bombs did not play a major role in human extinction, and that is what I meant by "no serious consequences".
GTOM said:
Theoretically, a super AI can outsmart even a million well-organised hackers.
At this point, we are not talking theory, but fantasy. It is fantasy just like, in theory, we could create a Jurassic Park with live dinosaurs.
GTOM said:
So, many animal species aren't threatened by superior human intelligence?
To my knowledge, most animal species are not threatened by humans, i.e. they don't spend their days worrying about humans. I would even push the idea as far as saying that many don't even realize there are humans living among them.

The only animals that consciously worry about a species extinction are ... humans! And the reason why they do is because they are smart enough to understand that diversity plays a major role in their own survival. Based on that, I don't understand how one can assume that an even more intelligent form of life (or machine) would suddenly think diversity is bad and only one form of life (or machine) should remain.
 
  • Likes: Boing3000
  • #103
GTOM said:
I would say many things to Elon Musk, stupid isn't one of them...
You would be quite wrong. That doesn't mean he is not a very gifted lobbyist and manager (if you ignore some lawsuits).

GTOM said:
Growth and progress isn't a very recent development, it is as old as humanity.
Nope. For example, "humanity" tamed fire aeons ago, and kept it in perfect equilibrium until very recently (the first settlements, only some millennia ago).

GTOM said:
It isn't the invention of last century to chop down forests, and drive some species to extinction.
You are quite wrong. Only in the last century (or two) did we replace 95% of the wildlife mass per unit of surface with various grazing animals, or chop down trees at scale (using the RECENT cheap oil energy). Doing it by hand is just physically impossible, and unsustainable.
There is a reason why the wild west was called that.

GTOM said:
Even if "Growth and progress" were that recent, why couldn't an AI developed by some company inherit that? And become that "Artificial Stupidity" you talk about?
Actually it could, but in terms of damage, only its ability to engage highly energetic processes counts (like bombs).
Even playing devil's advocate: it could be hostile and design a small but deadly virus (with small robots in a small lab). So what? Isn't that a good solution to diminish the impact of the current extinction?

GTOM said:
By the way, recent AIs are kinda stupid because they only see a single goal. Why it is different from our stupidity when we only see a goal of big grow and don't care about environment? (So we become very efficient in that process, and animals can't do anything us)
It isn't, so I agree with you. But we are not talking about "super" AI, which is not even a valid concept to begin with, as explained in the wonderful link in post #91.

GTOM said:
Your lines imply as if that intelligent AI would actually have to protect us from our stupidity.
It will or it won't. I have no idea how a stupid entity like me could predict a "super" behavior, or why I should (could, really) worry about that. That IS my point.

GTOM said:
Great, use that mentality in AI development, and we have something, that want to cage us for our own good... Thanks i don't want that.
We are already caged in so many ways. Free will is quite relative. For example: let's stop global warming...

The main fact remains that intelligence is not a threat; there is no correlation. It is not good fiction, it is good fantasy.
I find it to be a curious diversion (and quite handy for some) from the many actual threats that do exist, and that we should discuss (like the electric car).
 
  • #104
Temporarily locked for moderation.
 
  • #105
103 posts are enough on this topic. The thread will remain closed.
 
  • Likes: bhobba, Boing3000 and Averagesupernova