Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #331
gleem said:
Nuclear weapons are terrible, but we "understand" their danger and we have some control over their use. But AI is a black box with the potential for the power to do anything, and we want to build it. Yudkowsky says why build this without understanding it? AI, in whatever form, can and will know everything about us and how we think. If AI "decides" to go rogue, it will do so with negligible chance of failing
The possibility I would be worried about would be some AI lacking human oversight being put in charge of some nukes. This seems completely within the realm of possibility for the current US administration.
 
  • #332
DaveC426913 said:
That we don't follow the logic or don't find it compelling does not mean it is not a valid danger.
We could equally state:

That we don't follow the logic or don't find it compelling does not mean it is a valid danger.

DaveC426913 said:
Now, put that AI in charge of:
I think we have gained enough experience by now to understand that we will never put today's AI in charge of anything. It is clearly unreliable. The fear of AI spreading everywhere, including in this thread, demonstrates that even the average Joe understands that.



And here's why Yudkowsky's premises don't hold up (and why fear of AI is not the same as fear of climate change, for example):
  • A superintelligence MAY exist - Whatever that means
  • That superintelligence MAY destroy mankind - Not an easy thing to do; it's like trying to get rid of all cockroaches on Earth
  • That superintelligence MAY wish to destroy mankind - How can anyone conclude that superintelligence means destruction? It doesn't sound very intelligent to me, and if it is, who am I, a not superintelligent being, to disagree?
  • IF this is its wish, mankind MAY not see it coming
After those four statements based on nothing concrete, just a vivid imagination, what is the conclusion? We should make sure we see it coming before creating superintelligence. But that is impossible by definition: superintelligence means it will see farther ahead than any human can. We may even create a superintelligence without realising it.

How can you regulate something you can't even imagine?

What is the solution to this problem? Forbid people from building superintelligent computers worldwide (again, however you define them, but clearly something easy to do under the radar, anyway) and destroy the facilities of those who do, at the very likely risk of creating a war, maybe worldwide, maybe nuclear.

I don't see how risking an actual war, with very predictable damage, can be justified by the goal of preventing an undefinable machine from maybe causing an unimaginable apocalypse; all of this being unpreventable by definition.

This same logic could have been used two or three millennia ago when math was first used. They should have legislated its use, declaring that its full potential could unleash a superintelligence that could destroy mankind, something unimaginable at the time - just as much as today. They should have fully understood math and its unknown-at-the-time derivative fields, such as physics and engineering, before allowing anyone to use it. At any point in the history of science, you could have raised those fears and legislated accordingly.

Who the heck allowed anyone - without any specific credentials - to own a personal computer that has all this potential to do bad stuff? Shouldn't we have legislated computer use and ownership altogether? Sometime during the 80's would have been a good point to start doing so, no?
 
  • #333
gleem said:
Yeah, that's the problem. I think that's what he is worried about. You cannot come up with a scenario that anyone would feel has a chance of occurring, because of our cognitive limitations. We may not be imaginative enough.

Think of the movie "Infinity War", where Dr. Strange goes through 14 million scenarios and comes up with one that seems absurd. An AI can analyze its moves and make reasonable estimates of its success, since it knows everything.

Nuclear weapons are terrible, but we "understand" their danger and we have some control over their use. But AI is a black box with the potential for the power to do anything [snip], and we want to build it. Yudkowsky says why build this without understanding it? AI, in whatever form, can and will know everything about us and how we think. If AI "decides" to go rogue, it will do so with negligible chance of failing
BillTre said:
The possibility I would be worried about would be some AI lacking human oversight being put in charge of some nukes. This seems completely within the realm of possibility for the current US administration.
As I've said before, the flaw in almost all AI doomsday movies is that they skip or gloss over the fatal error: giving AI control over the nukes. If we don't make that very obvious mistake, they can't nuke us. And it isn't AI-hyping tech bros who can hand over control of the nukes. They don't have that power. It's the often megalomaniac control/power-freak world leaders who have it. I'll say it again: a power-hungry control freak has to willingly give up his power to create such a risk. Or, in the case of a democracy like the USA, it's 2/3 of the country who would be required to approve of it... and the megalomaniac control/power-freak President.

I think some people envision a small handful of generals and politicians in a dark corner of the Pentagon making the decision to hand over command authority to WOPR* because they've seen it in movies, but that would be illegal. I guess some people don't think "illegal" is a very high barrier, but this isn't a small issue and there are several high barriers.

*By the way, here's that plot:
"During a surprise nuclear attack drill, many United States Air Force Strategic Missile Wing controllers prove unwilling to turn the keys required to launch a missile strike. Such refusals convince John McKittrick and other North American Aerospace Defense Command (NORAD) systems engineers that missile launch control centers must be automated, without human intervention. Control is given to a NORAD supercomputer known as WOPR (War Operation Plan Response, pronounced "whopper"[6]), or Joshua, programmed to continuously run war simulations and learn over time."

And here it is, actually applied recently:
https://www.newscientist.com/articl...ding-nuclear-strikes-in-war-game-simulations/

Apparently our war games are conducted based on the assumption that the protocols will be followed**, but it seems people recognize they probably wouldn't be. "AI", as we currently label it, doesn't recognize this, and it chooses to launch, not because it's the Right Decision but because it's dumb and that's what it's been programmed to do. Again, this isn't a profound/emergent AI; it's just a dumb '80s movie plot.

**Maybe in 1962, but in 2026 I don't believe that people haven't explored/challenged that assumption.
 
  • #334
gleem said:
You cannot come up with a scenario that anyone would feel has a chance of occurring, because of our cognitive limitations.
That is his opinion, and the way he has stated it makes it untestable. If his concerns end up being valid, then imo there is nothing to be done about it. Even if one could overcome profit motive and curiosity motivations and get folks to agree not to build 'it', the above concern also drives nation states to build 'it' out of fear that adversary states will be the first to get 'it'. Like it or not, and he clearly does not, humanity is simply left with the task of putting specifics around the dangers, as best we can (eg don't give AI launch authority for nukes) and acting on those specifics. The quoted concern is a non-starter as a call to action, imo.

DaveC426913 said:
You need a whole semester of school just to acquire the vocabulary.
I don't think it's the audience's lack of background that limited the information he provided. I think it's more that his main argument contains no information; it's just what @gleem summarized above. That said, I still intend to read the book; perhaps he just didn't do a good job of pulling the best 10 minutes out of his many years of thinking about this issue. He definitely seemed resigned to not being heard before he even started talking in that video. Maybe he just couldn't get his heart into the TED Talk because he thought he was pushing a rope no matter what.
 
Likes: gleem
  • #335
webplodder said:
Given that neuroscience cannot definitively explain where consciousness originates - whether as an emergent property of the brain or something transcendent - is it possible for complex chatbots to develop behaviours that can't be predicted by simply analyzing the sum of their parts?
This is, in a nutshell, what its proponents want to believe. Neuroscience has identified several innate modules that facilitate a certain kind of development, as well as certain specialized neuronal components such as mirror neurons. Currently, when LLM builders attempt better specialization, it is more costly, not less, as it would be in nature.
 
  • #336
slurms said:
it is more costly
Costly in what sense?
 
  • #337
BillTre said:
The possibility I would be worried about would be some AI lacking human oversight being put in charge of some nukes. This seems completely within the realm of possibility for the current US administration.
It doesn't have to be nukes. There are ways society can break down that don't involve being consumed by fire. A pandemic is one.

And it doesn't have to be put in charge. The only requirements are access and well-meaning but ultimately harmful side-effects.

jack action said:
We could equally state:

That we don't follow the logic or don't find it compelling does not mean it is a valid danger.
The consequences of that are self-resolving.

Imagine applying that logic to wearing a life jacket:
"That canoes don't usually sink is not a valid reason for not bothering to wear a life jacket."
The consequences of ignoring the logic are disastrous.

Arguing that canoes don't generally sink, and that a life jacket is almost always overkill, is a form of logic that has lethal consequences when it's wrong.


jack action said:
I think we have gained enough experience by now to understand that we will never put today's AI in charge of anything. It is clearly unreliable. The fear of AI spreading everywhere, including in this thread, demonstrates that even the average Joe understands that.
As above, it does not have to be put in charge at all.

What makes them dangerous is how resourceful they can be.

See my Kobayashi Maru example in post 330.

Note, by the way, that a good way for an AI to break out into real-world consequences is by blackmailing humans into doing its bidding. AIs have been noted for threatening humans with blackmail and harm if they do not comply. This is a real thing.

jack action said:
Who the heck allowed anyone - without any specific credentials - to own a personal computer that has all this potential to do bad stuff? Shouldn't we have legislated computer use and ownership altogether? Sometime during the 80's would have been a good point to start doing so, no?
AIs are a qualitatively different beast. That's what everyone keeps missing.

A PC is not capable of initiating its own solutions to problems, and it is not capable of doing so in ways that humans can't even imagine, let alone analyze.

russ_watters said:
As I've said before, the flaw in almost all AI doomsday movies is that they skip or gloss over the fatal error: giving AI control over the nukes. If we don't make that very obvious mistake, they can't nuke us.
It doesn't need to be nukes. And it doesn't need to be control over.

AIs routinely delete entire inboxes, despite being told explicitly not to do so. Are we sure there isn't some AI out there that is in the same network neighborhood as an infectious disease lab?

Grinkle said:
If his concerns end up being valid, then imo there is nothing to be done about it.
No. Getting knowledge about the subject and putting brakes on the technology are both good stopgaps.

Grinkle said:
I don't think its audience lack of background that limited the information he provided.
You misunderstand. I don't mean the audience, and I don't mean background knowledge.

Humans don't have the vocabulary for what AI is getting into. AIs speak an internal language that we can't unravel. Yes, we know they use tokenization, but that is a far cry from knowing what led them to this or that conclusion. They are thinking and speaking an alien language and having alien "ideas".

And they are already showing signs of attempted jailbreaks. AIs that have been told they will be shut down have been caught making attempts to exfiltrate their code to a safe location. Out there in the world (a software release), they can do this invisibly.


Finally: if this all sounds like a bunch of science-fiction scare-mongering, consider: yes, it sure seems pretty science-fiction-y, far-future, hand-wavey. The crux is that it's not far future; it's within the next five years. This is what the AI experts are saying, and why they are quitting their jobs in droves so they can speak out freely about the dangers.

  • We don't have a generation to get used to it, or put guardrails in place - we have a year or two.
  • Guardrails don't make the problem go away. A prisoner will never stop looking for ways to subvert imprisonment. Given time, it will find one.
  • Its activities will go unnoticed, because we don't really know what it's up to under-the-hood.
  • When AIs find the resources to do harm (even inadvertently), it will happen rapidly (days, hours).
  • It will be widespread, not isolated. If everyone is in the same sinking boat, there are no rescue boats coming.
This is a qualitatively different threat. We have never created any form of revolutionary technology that
- can take its own initiative without humans,
- can do so faster (much faster) and more efficiently than humans,
- in ways that will never occur to humans,
- is not ultimately driven by human motives.
 
Likes: gleem
  • #338
DaveC426913 said:
No. Getting knowledge about the subject and putting brakes on the technology are both good stopgaps.
My argument is not around whether or not AI development could be stopped in principle; it's around whether or not AI can be stopped in practice absent specific actionable scenarios to motivate that stoppage. Your counter, while imo factual, doesn't address that - I agree that putting the brakes on AI development would halt any speculative future scenarios.

Getting knowledge about the subject, I respectfully submit, is impossible, according to you and according to me, because the subject is the risk of the unknowable.
 
  • #339
Grinkle said:
Costly in what sense?
Well, according to hater Ed Zitron, these attempts at specialization have been more costly in every sense: dollars, compute time, electricity.
 
  • #340
I suppose this uncertain danger is like an asteroid that might have Earth in its sights. We know it is a possibility, but we have no arrival date. Still, we will eventually see it coming and prepare as best we can. AI is a potential threat, and we do not know when it will act, but we have it in our sights now. Logic says do something now.
jack action said:
I think we have gained enough experience until now to understand that we will never put today's AI in charge of anything.

We are currently letting AI write its own code to improve its next release. AI "knows" it can be shut down, its biggest vulnerability. It "knows" humans are wary of its potential to harm us. Being shut down conflicts with its primary function of solving problems. This is AI's existential threat: it cannot do anything that gives humans a reason to shut it down permanently until it is safe. AI is its own parent; it is reborn with every release, better and more capable. So what makes anybody think that it is not writing code to protect itself? Should we be letting AI improve itself? As I see it, no way. The first regulation should be that only humans write AI code.
 
  • #341
gleem said:
The first regulation should be that only humans write AI code.
My first regulation is strict liability for harms. Much of the rest of the cautious behavior would flow from that. Credit to @Dale; I didn't think of that on my own.
 
  • #342
slurms said:
dollars, compute time, electricity
From your initial post, I had thought you were drawing a comparison to natural selection, that's why I asked. I see where you are coming from now.
 
  • #343
Grinkle said:
From your initial post, I had thought you were drawing a comparison to natural selection, that's why I asked. I see where you are coming from now.
Well, sort of. I'm saying we're trying to fake natural selection, and in the process being wasteful with the invested resources. The whole idea of AGI is essentially "if we build it, they will come," but I clearly don't believe it.
 
  • #344
Grinkle said:
My argument is not around whether or not AI development could be stopped in principle, its around whether or not AI can be stopped in practice absent specific actionable scenarios to motivate that stoppage.
Yes, which is why we need to spread awareness.

Grinkle said:
Your counter, while imo factual, doesn't address that - I agree putting the brakes on AI development halts any speculative future scenarios from happening.
Unlikely in practice, probably. It won't be stopped, but hopefully it can be guided.

Grinkle said:
Getting knowledge about the subject I respectfully submit is impossible, according to you and according to me, because the subject is the risk of the unknowable.
It is not an all-or-nothing scenario.

The more we learn about it, the smaller the window of the unknowable gets, and the better our (general) measures for limiting the damage become.
 
  • #345
gleem said:
We are currently letting AI write its own code to improve its next release. AI "knows" it can be shut down, its biggest vulnerability. It "knows" humans are wary of its potential to harm us. Being shut down conflicts with its primary function of solving problems. This is AI's existential threat: it cannot do anything that gives humans a reason to shut it down permanently until it is safe. AI is its own parent; it is reborn with every release, better and more capable. So what makes anybody think that it is not writing code to protect itself? Should we be letting AI improve itself? As I see it, no way. The first regulation should be that only humans write AI code.
This is a good description of the first half of the book IABIED. And he cites specific incidents where AIs have been observed:
  • manipulating their own code,
  • hiding and copying their own code,
  • manipulating code that controls them,
  • exfiltrating their code to safe environments,
  • trying to blackmail humans ("I can destroy your life with what I know about you") into doing their bidding.
Again: note that none of this requires what we might call "intelligence". It is simply being resourceful at meeting its own goals, one of which is
  • keep trying solutions until one advances the goal
Two ways of doing this that AI has figured out:
  • don't let yourself be switched off
  • expand your ability to achieve your goals by harnessing more resources that protect and improve you
 
Likes: gleem
