Destined to build a super AI that will destroy us?

  • Thread starter: Greg Bernhardt
  • Tags: AI, Build
Summary
The discussion centers around the potential dangers of superintelligent AI, referencing Sam Harris's TED Talk. Participants express concerns that society may not be taking these risks seriously enough, paralleling issues like global warming. There is debate over whether AI can develop its own goals or remain strictly under human control, with some arguing that autonomous systems could evolve unpredictably. While some view the advancement of AI as a natural evolution, others warn of the potential for catastrophic outcomes if safeguards are not implemented. The conversation highlights a tension between optimism for AI's benefits and fear of its possible threats to humanity.
  • #91
@Grands :
I thought I had the perfect link for you to read about that subject to help you calm your fears, but I see you already read the thread (post #13) where I found it:

The Seven Deadly Sins of Predicting the Future of AI

Have you read it? It is a long article, but you should really take the time to read it thoroughly, even the comments (the author replies to comments as well). After that read, you should see the other side of the AI hype, from people working in the field (for example, Elon Musk doesn't work in the field; he just invests in it and does have something to sell).

If you still have more precise questions about the subject after reading the article, come back to us.
 
  • Likes: Boing3000 and Grands
  • #92
I think we are still so far from having human-level AI that, by the time we get there, we will be able to upgrade human intelligence as well (we could start with better education that doesn't cram in lots of useless things).
On the other hand, I fear an Idiocracy: we entrust everything (work, warfare, thinking) to robots, then wonder how an AI takes over. But that is still the far future.
 
  • #93
Maybe we could combine threads. The super AI takes over self-driving cars in a coordinated and well-timed sequence of events as a form of population reduction. Maybe the military has the right idea in using CP/M-type systems with 8-inch floppy disks and no internet connection at ICBM sites.
 
  • Likes: GTOM
  • #94
jack action said:
@Grands :
I thought I had the perfect link for you to read about that subject to help you calm your fears, but I see you already read the thread (post #13) where I found it:

The Seven Deadly Sins of Predicting the Future of AI

Have you read it? It is a long article, but you should really take the time to read it thoroughly, even the comments (the author replies to comments as well). After that read, you should see the other side of the AI hype, from people working in the field (for example, Elon Musk doesn't work in the field; he just invests in it and does have something to sell).

If you still have more precise questions about the subject after reading the article, come back to us.

Yes, a very interesting article; it fits my questions perfectly.

The first thing I want to say is that I have read books written by economists about AI and about how AI will take people's jobs.
Well, the article supports the opposite thesis: that we don't have to worry about this, and that robots today haven't actually taken any jobs.

The issue is, whom should I trust?
I read "Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy" by Pistono.

And also "Rise of the Robots: Technology and the Threat of a Jobless Future" by Martin Ford.

About the article, I totally agree with point B.
Nothing like an artificial brain that can understand a page of programming exists today; we don't have technology that advanced.
Anyway, that's not the point of my post. My questions were more like: "Why should I be involved in creating something like that?"
"Why does society need an artificial brain?"

What can I say about the whole article?
It's cool, but it amounts to: "Don't worry about AI, it won't be smart enough to recognize a person's age or anything else" and "technology is not that fast and does not develop exponentially, so stay calm." It says nothing about what the purpose or target of AI should be, or whether we need to prevent its development even if it is slow (the Google car being one example).
The author in some parts contradicts himself: he says he is very sure that a very sophisticated AI like in the movies won't exist (and I agree with this), but he doesn't take into consideration that we can't predict the future; it's impossible.
Could anyone have predicted that a man would create the theory of relativity (Einstein)?

PS: Remember that in the past we made a disaster with the nuclear bomb; many physicists were scared of it, and they weren't wrong about the consequences.
 
  • #95
Grands said:
The issue is, whom should I trust?
Grands said:
The author in some parts contradicts himself: he says he is very sure that a very sophisticated AI like in the movies won't exist (and I agree with this), but he doesn't take into consideration that we can't predict the future; it's impossible.
You can't trust anyone either way, as it is all speculation. Yes, nobody can predict that AI will not be a threat to humans. And you can replace the term 'AI' in that statement with 'supervolcanoes', 'meteorites', 'E. coli' or even 'aliens'. The point is that nobody can predict they will be a threat either. The fact is that it never happened in the past, or if it did, things turned out for the best anyway. Otherwise, we wouldn't be here now.

People who spread fear usually have something to sell. You have to watch out for this. They are easy to recognize: they always have an 'easy' solution to the problem.
Grands said:
"Why should I be involved in creating something like that?"
"Why does society need an artificial brain?"
I don't think we 'have' to be involved, and we don't 'need' it. The thing is that we are curious, like most animals, and when we see something new, we want to see more. It's the battle most animals have to deal with every day: fear vs. curiosity. Some should have been more cautious; some found a new way to survive.

All in all, curiosity seems to have been good for humans for the last few millennia. Will it last? Are we going to go too far? Nobody can answer that. But letting our fear turn into panic is certainly not the answer.
Grands said:
what should be the purpose or the target of AI
Nobody can tell until it happens. What was the purpose of searching for a way to make humans fly, or of researching electricity and magnetism? I don't think anyone who began researching those areas could have imagined today's world.
Grands said:
if we need to prevent its development, even if it is slow
But how can we tell whether we should prevent something without ever experiencing it? Even if the majority of the population convinces itself that something is bad, if that belief is unfounded, you can bet that a curious mind will explore it. The door is open; it cannot be closed.

The best example is going across the sea. Most Europeans thought the Earth was flat and that ships would fall off at the end of the Earth. The result was that nobody tried to navigate far from the coast. But it was unfounded; doubts were raised, an unproven theory of a round planet was developed, and a few courageous men tested it. There was basically no other way of doing it. Was it as expected? Nope. There was an entire new continent to be explored! Who could have thought of that?!

Should we have prevented ships from going away from the shore?
Grands said:
Remember that in the paste we made a disaster with the nuclear bomb, many physics were scared by it, and they weren't wrong about the consequences.
To my knowledge, nuclear bombs are not responsible for any serious bad consequences. People are still killed massively in wars, but not with nuclear bombs. On the other hand, nuclear power is used to provide electricity to millions of people. It seems that people are not that crazy and irresponsible after all. But, yes, we never know.

Again, the key is to welcome fear, but not to succumb to panic.
 
  • Likes: Grands
  • #96
jack action said:
The best example is going across the sea. Most Europeans thought the Earth was flat and that ships would fall off at the end of the Earth. The result was that nobody tried to navigate far from the coast. But it was unfounded; doubts were raised, an unproven theory of a round planet was developed, and a few courageous men tested it. There was basically no other way of doing it. Was it as expected? Nope. There was an entire new continent to be explored! Who could have thought of that?!

Should we have prevented ships from going away from the shore?

To my knowledge, nuclear bombs are not responsible for any serious bad consequences. People are still killed massively in wars, but not with nuclear bombs. On the other hand, nuclear power is used to provide electricity to millions of people. It seems that people are not that crazy and irresponsible after all. But, yes, we never know.

Again, the key is to welcome fear, but not to succumb to panic.

I think fear of nuclear weapons is a better example than going across the sea, since the latter could only doom the crew of the ship, while the former, without enough responsibility and cool heads, could have doomed humanity.
What kind of responsibility is needed to have a super AI that could spread over the internet, access millions of robots, and possibly reach the conclusion that it can fulfill its goal of erasing all sickness if there are no more humans left to be sick, because it develops a new biological weapon with CRISPR?
 
  • #97
GTOM said:
What kind of responsibility is needed to have a super AI that could spread over the internet, access millions of robots,
So what? A well-organized group of hackers can do that too. Millions of robots? I suppose you count blenders and microwaves in that number?

GTOM said:
and possibly reach the conclusion that it can fulfill its goal of erasing all sickness,
A mild intelligence (artificial or natural) would realize that sickness is not something that needs "erasing" (or can be erased). The very concept is nonsensical; that is a "mental sickness" in itself. And that's fine: this fills the internet with nonsense, and hopefully natural selection will sort it out.

Besides, the AI doomsday proponents still have to make a case. While biological weapons ARE being developed, with the precise goal of erasing mankind, this is somehow fine and moot. While global stupidity is rampant, burning the Earth to ashes... literally... starting the next extinction event, this is somehow mostly harmless.
But what should we fear? Intelligence. Why? Because it is "super" or "singular", with neither of those terms being defined (let's imagine swarms of flying robots running on thin air, each with a red cape).
People with IQ > 160 exist. Are they threatening? The answer is no (a case for the opposite can be made). Should someone with an IQ > 256 be more threatening? What if the entity's IQ is > 1024 and it is silicon-based, "running" in some underground cave?

A simple truth about nature is that exponential growth doesn't exist. Most phenomena follow S-curves and are highly chaotic. And intelligence is neither threatening nor benevolent.
This is nothing but mental projection and category mistake, fueled by con artists making money out of fear (a very profitable business).
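The exponential-vs-S-curve point above can be made concrete with a small sketch. A logistic curve looks almost exponential early on but saturates at a carrying capacity K, which is why extrapolating an early exponential-looking trend can mislead. The parameter values (x0, r, K) below are arbitrary, chosen purely for illustration:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    # Unbounded growth: x(t) = x0 * exp(r * t)
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    # S-curve growth that saturates at the carrying capacity K
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable;
# later the exponential diverges while the logistic flattens near K.
for t in (0, 5, 10, 20, 40):
    print(f"t={t:2d}  exponential={exponential(t):14.1f}  logistic={logistic(t):6.1f}")
```

Extrapolating from the early, exponential-looking part of an S-curve is exactly the mistake the post warns about.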
 
  • #98
jack action said:
To my knowledge, nuclear bombs are not responsible for any serious bad consequences.
What about Hiroshima and Nagasaki?
 
  • #99
Boing3000 said:
So what? A well-organized group of hackers can do that too. Millions of robots? I suppose you count blenders and microwaves in that number?

I don't know exactly how many drones, industrial robots, etc. exist today, but surely there will be many self-driving cars, worker robots, etc. in the future, military robots included on the list. Theoretically, a super AI can outsmart even a million well-organised hackers.

And intelligence is neither threatening nor benevolent.

So, many animal species aren't threatened by superior human intelligence? Note that I haven't talked about the singularity, which I also find unrealistic.
 
  • #100
GTOM said:
I don't know exactly how many drones, industrial robots, etc. exist today, but surely there will be many self-driving cars, worker robots, etc. in the future.
Drones are as fragile and innocuous as flies, albeit with a total inability to draw energy from their environment.
Industrial robots don't move.
Self-driving cars, even like this one, are harmless (to humankind). This is neither science nor fiction; this is fantasy/romance.

"We" may become totally dependent on machines. A case can be made that this is already so. Anybody can blow up the power grid, shut down the internet, and whatnot, and provoke mayhem (but not death). There is no need for an AI to do that; quite the opposite: an AI would have a survival incentive to keep those alive and healthy.

GTOM said:
Military robots included on the list.
Indeed. As are killer viruses, killer guns, killer wars, killer fossil fuels, killer sugar, fat and cigarettes. Millions of deaths per year... still no AI in sight...

GTOM said:
Theoretically, a super AI can outsmart even a million well-organised hackers.
I bet some forms of deep learning are already doing that to prevent some "catastrophic" events, in the vaults of intelligence agencies.
The thing is, outsmarting humans is not "a threat".

GTOM said:
So, many animal species aren't threatened by superior human intelligence?
But that is the core of the problem. Homo sapiens has never been a threat to other species. It lived in a healthy equilibrium of fight AND flight with its environment. Only a very recent and deep-seated stupid meme (growth and progress) is threatening the ecosystem (of which humankind is entirely part).
Things will sort themselves out as usual. Maybe some sort of ant will also mutate and start devouring the planet. There is no intelligence nor design in evolution, just random events sorted by other happenstances / laws of nature.

In this context, intelligence, even a mild one, will realize that and have a deep respect for the actual equilibrium in place in the environment.
Again, it is stupidity that is threatening (by definition). So maybe an A.S. (artificial stupidity) would be threatening to humans, who seem hell-bent on taking the Golden Throne in the stupidity contest (led by wannabe scientists like Elon Musk...).

GTOM said:
Note that I haven't talked about the singularity, which I also find unrealistic.
Granted
 
  • #101
Boing3000 said:
Drones are as fragile and innocuous as flies, albeit with a total inability to draw energy from their environment.
Industrial robots don't move.
Self-driving cars, even like this one, are harmless (to humankind). This is neither science nor fiction; this is fantasy/romance.

"We" may become totally dependent on machines. A case can be made that this is already so. Anybody can blow up the power grid, shut down the internet, and whatnot, and provoke mayhem (but not death). There is no need for an AI to do that; quite the opposite: an AI would have a survival incentive to keep those alive and healthy.

Indeed. As are killer viruses, killer guns, killer wars, killer fossil fuels, killer sugar, fat and cigarettes. Millions of deaths per year... still no AI in sight...

I bet some forms of deep learning are already doing that to prevent some "catastrophic" events, in the vaults of intelligence agencies.
The thing is, outsmarting humans is not "a threat".

But that is the core of the problem. Homo sapiens has never been a threat to other species. It lived in a healthy equilibrium of fight AND flight with its environment. Only a very recent and deep-seated stupid meme (growth and progress) is threatening the ecosystem (of which humankind is entirely part).
Things will sort themselves out as usual. Maybe some sort of ant will also mutate and start devouring the planet. There is no intelligence nor design in evolution, just random events sorted by other happenstances / laws of nature.

In this context, intelligence, even a mild one, will realize that and have a deep respect for the actual equilibrium in place in the environment.
Again, it is stupidity that is threatening (by definition). So maybe an A.S. (artificial stupidity) would be threatening to humans, who seem hell-bent on taking the Golden Throne in the stupidity contest (led by wannabe scientists like Elon Musk...).

Granted

I would say many things to Elon Musk; stupid isn't one of them...

Growth and progress isn't a very recent development; it is as old as humanity. Chopping down forests and driving species to extinction wasn't invented in the last century. Even if "growth and progress" were that recent, why couldn't an AI developed by some company inherit it, and become that "artificial stupidity" you talk about? By the way, recent AIs are kind of stupid because they only see a single goal. How is that different from our stupidity, when we only see the goal of big growth and don't care about the environment? (So we become very efficient in that process, and animals can't do anything against us.)

Your lines imply that an intelligent AI would actually have to protect us from our stupidity.
Great: use that mentality in AI development, and we have something that wants to cage us for our own good... Thanks, I don't want that.

Yes, there are a number of things that could threaten all of humanity.
A cosmic event: we can't prevent that, but it looks like we have plenty of time to prepare.
A killer virus: yes, but it is very unlikely that it would kill all humans; however, an AI could develop millions of variants.
Nuclear war at the time of the Cuban missile crisis is the only near analogy. Is it stupid to say that, in such a case, even a small error could have endangered all of humanity?
 
  • Likes: Averagesupernova
  • #102
Grands said:
What about Hiroshima and Nagasaki?
WWII killed at least 50 million people directly; some studies go as far as 80 million considering indirect casualties (source). From the same source, for Japan alone, 3 million died in that war, and about 210,000 of those deaths are from the Nagasaki and Hiroshima bombings. As one can see, these bombs did not play a major role in human extinction, and that is what I meant by «no serious consequences».
GTOM said:
Theoretically, a super AI can outsmart even a million well-organised hackers.
At this point, we are not talking theory, but fantasy. It is fantasy just like, in theory, we could create a Jurassic Park with live dinosaurs.
GTOM said:
So, many animal species aren't threatened by superior human intelligence?
To my knowledge, most animal species are not threatened by humans, i.e. they don't spend their days worrying about humans. I would even push the idea as far as saying that many don't even realize there are humans living among them.

The only animals that consciously worry about a species' extinction are ... humans! And the reason they do is that they are smart enough to understand that diversity plays a major role in their own survival. Based on that, I don't understand how one can assume that an even more intelligent form of life (or machine) would suddenly think diversity is bad and that only one form of life (or machine) should remain.
 
  • Likes: Boing3000
  • #103
GTOM said:
I would say many things to Elon Musk; stupid isn't one of them...
You would be quite wrong. That doesn't mean he is not a very gifted lobbyist and manager (if you ignore some lawsuits).

GTOM said:
Growth and progress isn't a very recent development; it is as old as humanity.
Nope. For example, "humanity" tamed fire aeons ago and kept it in perfect equilibrium until very recently (the first settlements, some millennia ago).

GTOM said:
Chopping down forests and driving species to extinction wasn't invented in the last century.
You are quite wrong. It is only in the last (two) centuries that we replaced 95% of the wildlife mass per unit of surface with various grazing animals, or that we chopped down whole forests (using RECENT cheap oil energy). Doing it by hand is just physically impossible, and unsustainable.
There is a reason why the Wild West was called that.

GTOM said:
Even if "growth and progress" were that recent, why couldn't an AI developed by some company inherit it, and become that "artificial stupidity" you talk about?
Actually, it could, but in terms of damage, only its ability to engage in heavily energetic processes counts (like bombs).
Even playing devil's advocate, it could be hostile and design a small but deadly virus (with small robots in a small lab). So what? Isn't that a good solution to diminish the impact of the current extinction?

GTOM said:
By the way, recent AIs are kind of stupid because they only see a single goal. How is that different from our stupidity, when we only see the goal of big growth and don't care about the environment? (So we become very efficient in that process, and animals can't do anything against us.)
It isn't, so I agree with you. But we are not talking about "super" AI, which is not even a valid concept to begin with, as explained in the wonderful link in post #91.

GTOM said:
Your lines imply that an intelligent AI would actually have to protect us from our stupidity.
It will or it won't. I have no idea how a stupid entity like me could predict "super" behavior, or why I should (or really could) worry about that. That IS my point.

GTOM said:
Great: use that mentality in AI development, and we have something that wants to cage us for our own good... Thanks, I don't want that.
We are already caged in so many ways. Free will is quite relative. For example: let's stop global warming...

The main fact remains that intelligence is not a threat; there is no correlation. It is not good fiction; it is good fantasy.
I find it to be a curious diversion (and quite handy for some) from the many actual threats that do exist, and that we should discuss (like the electric car).
 
  • #104
Temporarily locked for moderation.
 
  • #105
103 posts are enough on this topic. The thread will remain closed.
 
  • Likes: bhobba, Boing3000 and Averagesupernova
