Is AI Overhyped?

Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #181
Rive said:
One of my biggest fears is that, once the time comes, the one presiding over a doomsday button won't be an actual AGI but something which only looks like one, while doing its business based on processed TikTok data, for example.
What self-respecting dictator would hand launch authority over to anyone or any thing?
 
  • Like
Likes jack action
  • #182
DaveC426913 said:
If it spends its time watching TikTok it would probably just press the button on principle.
Absolutely, which is so scary. Especially with all the white supremacy on TikTok (OK, admittedly this is hearsay, but I wouldn't be surprised).
 
  • #183
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
Indeed, which is one of the reasons why the movie "The Forbin Project" makes little or no sense.
 
  • #184
sbrothy said:
Indeed, which is one of the reasons why the movie "The Forbin Project" makes little or no sense.
But with the deterrence of nuclear war being based on the concept of mutually assured destruction (MAD), couldn't it be argued that a well-publicized policy of automatic nuclear counter attack by computer might well enhance the deterrence effect? That was the premise of "Dr. Strangelove", in which the Soviets unfortunately waited too long to announce their doomsday machine to the world.
 
  • #185
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
I think the point has always been that, if the fate of one's country and people is in the hands of a single person, that is a weakness in strategic defense.

Opposing forces don't even have to kill him, all they have to do is delay his pressing of the button for the few minutes it takes to gain destructive superiority.

This is pretty well steeped in Cold War history.

@renormalize hits the nail on the head.
 
  • #186
renormalize said:
But with the deterrence of nuclear war being based on the concept of mutually assured destruction (MAD), couldn't it be argued that a well-publicized policy of automatic nuclear counter attack by computer might well enhance the deterrence effect? That was the premise of "Dr. Strangelove", in which the Soviets unfortunately waited too long to announce their doomsday machine to the world.
Yes, but we hardly need AGI for that, do we? A normal program would do fine, I think. Even a human would be better, as long as it's not the dictator himself but some of the grownups in the room, which in retrospect doesn't look as reassuring as it did at the time.
 
  • Like
Likes russ_watters
  • #187
sbrothy said:
Yes, but we hardly need AGI for that, do we? A normal program would do fine, I think.
Well, there's a lot of high level decisions to be juggled.

The whole danger of an automated response is its susceptibility to false positives: launching a counter-strike based on too literal an interpretation of rapidly evolving events.

The Holy Grail is a machine that's "smart" enough (and fast enough) to make the right decision at the right time.

Of course that raises the spectre of how one defines "the right decision". Or who defines it. Better yet, what defines it.

AI being the one to define the 'what' and the 'when' is the premise of the Terminator franchise.

sbrothy said:
Even a human would be better, as long as it's not the dictator himself but some of the grownups in the room, which in retrospect doesn't look as reassuring as it did at the time.
Any person can be fooled, thwarted, corrupted, delayed, killed.
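
To make the false-positive concern concrete, here is a toy sketch (purely illustrative; the sensor names, readings, and thresholds below are invented, not taken from any real system). A trigger that acts on a single reading fires on a glitch, whereas one that demands independent corroboration, and even then only escalates to a human, does not.

```python
# Purely illustrative sketch of the false-positive problem with automated
# counter-strike logic. All sensor names, readings, and thresholds are invented.

def naive_trigger(radar_confidence: float) -> bool:
    """Act on any single reading above a fixed threshold (too literal)."""
    return radar_confidence > 0.9

def corroborated_escalation(readings: dict, min_confirming: int = 2) -> bool:
    """Require several independent sensors to agree before even escalating
    the decision to a human; this function never 'launches' on its own."""
    confirming = sum(1 for confidence in readings.values() if confidence > 0.9)
    return confirming >= min_confirming

# One glitching sensor, the others seeing nothing:
readings = {"early_warning_radar": 0.97, "satellite_ir": 0.04, "seismic": 0.01}

print(naive_trigger(readings["early_warning_radar"]))  # True  -> false positive
print(corroborated_escalation(readings))               # False -> nothing escalated
```

Corroboration doesn't solve the problem, of course; it only shows that "fast and automatic" and "hard to fool" pull in opposite directions.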
 
  • #188
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
The problem is the time window. There is less than half an hour to decide whether to launch a counterstrike or not.
 
  • #189
DaveC426913 said:
I think the point has always been that, if the fate of one's country and people is in the hands of a single person, that is a weakness in strategic defense.

Opposing forces don't even have to kill him, all they have to do is delay his pressing of the button for the few minutes it takes to gain destructive superiority.

@renormalize hits the nail on the head.
Right, but what does that have to do with AI? Or are you suggesting that because of that delay a leader might choose to hand over authority to an AI? I think that's unlikely. Moreover:
renormalize said:
But with the deterrence of nuclear war being based on the concept of mutually assured destruction (MAD), couldn't it be argued that a well-publicized policy of automatic nuclear counter attack by computer might well enhance the deterrence effect? That was the premise of "Dr. Strangelove", in which the Soviets unfortunately waited too long to announce their doomsday machine to the world.
Dr. Strangelove was released 60 years ago (War Games and The Terminator, 40 years ago). Clearly we've been capable of cutting the President/Supreme Leader out of the loop for many decades, even without "AI". So, if it helps, why hasn't it happened yet? (Note: the reason the exact Dr. Strangelove scenario hasn't been implemented is it was a dumb idea that made for good comedy, but that's a side issue to the broader point.)

Or is "hype" the answer here? AI maximalists believe in/anthropomorphize AI to the point where (as seen earlier in the thread) they see alarming human-like lying/etc. whereas the minimalists just see a poorly functioning program. So, might the maximalists convince a world leader the AI is worthy of launch authority? If that's the fear, then ironically AI maximalists are the ones who might cause their fears to be realized.
 
Last edited:
  • #190
russ_watters said:
Right, but what does that have to do with AI? Or are you suggesting that because of that delay a leader might choose to hand over authority to an AI? I think that's unlikely. Moreover:

Dr. Strangelove was released 60 years ago (War Games, 40 years ago). Clearly we've been capable of cutting the President/Supreme Leader out of the loop for many decades, even without "AI". So, why hasn't it happened yet? (Note: the reason the exact Dr. Strangelove scenario hasn't been implemented is it was a dumb idea that made for good comedy, but that's a side issue to the broader point.)

Or is "hype" the answer here? AI maximalists believe in/anthropomorphize AI to the point where (as seen earlier in the thread) they see alarming human-like lying/etc. whereas the minimalists just see a poorly functioning program. So, might the maximalists convince a world leader the AI is worthy of launch authority? If that's the fear, then ironically AI maximalists are the ones who might cause their fears to be realized.
https://en.wikipedia.org/wiki/Self-fulfilling_prophecy
 
  • #192
russ_watters said:
Right, but what does that have to do with AI?
In theory, AI is harder to thwart or mislead than humans, for a multitude of reasons (some of which might even be true!).

russ_watters said:
Or are you suggesting that because of that delay a leader might choose to hand over authority to an AI?
Reaction delay is one of a long list of conditions smart computers have been thought to handle better than humans.

This thinking has been around since the Cold War. I suspect almost everyone here is old enough to have some exposure to the Cold War, so it might be redundant to go over the list*. I sort of thought we all knew the rationale for automation in military global warfare.

* but we could discuss and summarize the points, if that were warranted. No, I don't have them at hand.


Keep in mind - it doesn't have to be objectively true that "automation is better" - all that matters is whether said Supreme Leader thinks it's true. (which - not to put too fine a point on it - could be for no other reason than because they watched and were deeply affected by Dr. Strangelove.)
 
  • #193
DaveC426913 said:
In theory, AI is harder to thwart or mislead than humans, for a multitude of reasons (some of which might even be true!).
I'm not sure what you mean by that or how it applies here.
DaveC426913 said:
Reaction delay is one of a long list of conditions smart computers have been thought to handle better than humans.

This thinking has been around since the Cold War. I suspect almost everyone here is old enough to have some exposure to the Cold War, so it might be redundant to go over the list*. I sort of thought we all knew the rationale for automation in military global warfare.
Right, but what I pointed out and questioned is what this has to do with AI. And also, the fact that automation has been around for decades and has not been entrusted with such command decisions means that something has to change in order for this fear of AI maximalists to be realized. I'd like for someone to articulate what they think that change is or could be (automation being faster isn't an answer because it has always been faster). Was the first part your answer to that?
DaveC426913 said:
Keep in mind - it doesn't have to be objectively true that "automation is better" - all that matters is whether said Supreme Leader thinks it's true.
It takes more than that: they not only have to trust that it is better than they are, they have to be willing to give up the authority, period, which isn't something that is generally in the programming of a Supreme Leader.

I don't think it's unreasonable to believe that nuclear launch authority would be literally the very last thing anyone would ever choose to cede to automation, AI or otherwise.
 
  • Like
Likes jack action
  • #194
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
These days we have plenty of doomsday buttons around us: no need to think about nukes right away.
I wrote 'a', not 'the' there.
 
  • #195
Rive said:
These days we have plenty of doomsday buttons around us:
Such as?
 
  • #196
So let's say there is a doomsday button somewhere. Some [important and responsible] guy is in charge of pressing it in due time. It may never happen, and if it does, it will happen only once.

This guy doesn't want to make a mistake, so he asks for a machine to help him make the right decision. AI is suggested, and it is said to provide the best possible decision. After consulting the machine, all that is left for the guy to do is either press the button or not.

Knowing this, it seems that some think this [important and responsible] guy would say: "You know what? This machine seems so reliable. Why don't you just let it directly control the button? This way, I won't have to get up and push it myself."

I cannot imagine any scenario implicating a doomsday button where this could realistically happen. No matter how perfect a machine can be, everyone at that level understands that machines can fail and can also be hacked. I'll refer again to Stuxnet.
 
  • Like
Likes russ_watters
  • #198
phyzguy said:
In terms of the original question, "Is AI hype?", this article is a good read.
See posts #113 and #114 in this same thread.
 
  • #199
Ah, sorry. I didn't see the earlier posts.
 
  • #200
russ_watters said:
Such as?
Let's talk about cars, then.
As we all know well, every car is a potential weapon.
...
Can you still not imagine that control over such a weapon is willingly handed over to an AI? Of dubious origin/performance?
 
  • #201
There are a lot of different possible scenarios that lead to conditions most would agree are to be avoided at all costs, and all of those scenarios depend on a loss of control over some period of time, but where, contrary to what several in this thread seem to argue, there is no clear point along the path to the bad scenario where we actually choose to stop. For instance, if two armed superpowers compete in an AI race and both get to ASI level, then it is almost a given that they will be forced to apply ASI to their military capabilities in order not to lose out. The argument is that at any given point there is a large probability that those in control will want to continue, because for them the path still leads towards a benefit ("not losing the war") and the worst-case scenario (where no human is really in control) is still believed to be theoretical or preventable further down the path. Note that the human decision mechanisms in such scenarios are (as far as I can see) almost identical to the mechanisms that led to the nuclear arms race, so we can take it as a historical fact that humans are in all likelihood prone to choose "insane" paths when the conditions are right (this is meant to address the counter-argument that "clearly no one would be so insane as to give AI military control, so therefore there can be no AI military doomsday scenario"). But this is just one type of scenario.

As could be expected from previous discussions, this thread goes in a lot of different directions and often gets hung up on very small details whose relevance is hard to judge, or sticks to one very specific scenario while ignoring others. The scenarios with severely bad outcomes for the majority of humans all (as far as I am aware) hinge on 1) the emergence of scalable ASI and 2) the gradual voluntary loss of control by the majority of humans because ASI simply does everything better. Now, 1) may prove to be impossible for some as yet unknown reason, but right now we are not aware of any reason why ASI should not be possible at some point in the future, and given the current research effort we cannot expect ASI research to stop by itself (the benefits are simply too alluring for us humans). That leaves 2), the loss of control, or more accurately, the loss of power of the people.

So to avoid anything bad we "just" have to ensure people remain in power. On paper, a simple and sane way to avoid most of the severe scenarios is to do what we already know works fairly well in human world affairs, namely to ensure the majority of humans remain truthfully informed and in enough control that, well in advance, they can move towards blocking the paths to bad scenarios. In practice this may prove more difficult with ASI because of how hard it is to discern, well in advance, paths towards beneficial scenarios from bad ones. And on top of that, to address my main current concern, some of the select few in current political and technological power are actively working towards eroding the level of power the people have over AI, with the risk that over time the majority will not be able to form any coherent consensus, and even if they do they may not have any real options for coordinated control, or even for opting out themselves (relevant for scenarios where the majority of humans at that point are on universal income and all production is dirt cheap because of ASI).

And to steer a bit towards the thread topic of AI hype, maybe we can all agree that constructive discussions of both the benefits and the potential risks of AI suffer from the high level of hype, much of which hinges on the possibility of ASI. It may thus add to constructive discussion if we separate those cases. For instance, if the invention of ASI is a precondition for a specific scenario (as it is for most of the worst-case scenarios), then arguing against the existence of ASI when discussing such scenarios is not very helpful for anyone. I personally find discussions about whether or not ASI can exist interesting and extremely relevant, but that is a somewhat separate discussion from the potential consequences of ASI.
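
To spell out the "forced to apply it in order not to lose out" step, here is a toy payoff table in the style of a prisoner's dilemma (the numbers are invented purely for illustration): whatever the rival does, deploying looks better to each side, even though mutual restraint would leave both better off.

```python
# Toy illustration (invented payoffs) of why "not losing out" pushes both
# sides to deploy: a prisoner's-dilemma-style payoff table for two rivals.

# payoff[(a_choice, b_choice)] = (payoff to A, payoff to B); higher is better.
payoff = {
    ("restrain", "restrain"): (3, 3),   # mutual restraint: best joint outcome
    ("restrain", "deploy"):   (0, 4),   # the restrained side "loses out"
    ("deploy",   "restrain"): (4, 0),
    ("deploy",   "deploy"):   (1, 1),   # arms race: worse for both than restraint
}

def best_response_for_A(b_choice: str) -> str:
    """Pick the choice that maximises A's payoff given B's choice."""
    return max(("restrain", "deploy"), key=lambda a: payoff[(a, b_choice)][0])

for b_choice in ("restrain", "deploy"):
    print(f"If B chooses {b_choice!r}, A's best response is "
          f"{best_response_for_A(b_choice)!r}")
# Both lines print 'deploy': each side deploys regardless, ending at (1, 1).
```

With different labels this is also the usual stylized picture of the nuclear arms race mentioned above.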
 
  • #202
Filip Larsen said:
There are a lot of different possible scenarios that lead to conditions most would agree are to be avoided at all costs [...]
A- Some people think that discussions about AGI serve to create more powerful AIs, but not to create AGI.

B- Other people think that in addition to serving to create more powerful AIs, it also serves to create AGI.

People in A think AGI is an unattainable and undefined concept. This group doesn't contribute to the hype.

People in B think AGI is an achievable and defined concept. This group contributes to the hype.
 
  • #203
javisot said:
People in A think AGI is an unattainable [...]
Yes, so it sounds to me like you at least agree that these are two different discussions.

We can discuss whether ASI or even AGI is theoretically and/or practically (im)possible, and we can discuss the potential consequences assuming ASI is possible. People here who are convinced ASI will remain impossible for the foreseeable future do not have to participate in the discussion of the consequences if they lack the imagination or desire to pretend it's possible just for the sake of analysing its consequences. I am well aware from general risk management that people who are not trained in risk management often get stumped on rare-probability events, to the extent that they refuse, or find it a waste of time, to analyse consequences. What I don't get, though, is why they think their position should mean that no one else is entitled to analyse or discuss such consequences. In risk management of, say, a new system, you generally want to analyse failure modes that, relative to the existing background risks, have a high enough probability of occurring or severe enough consequences, or both, i.e. "risk = probability x severity". Since ASI very much introduces new severe-consequence failure modes, it is prudent to discuss those consequences in parallel with discussing how likely they are.
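
As a minimal sketch of that screening step (the failure modes and the numbers below are invented placeholders, not estimates of anything real), ranking by probability times severity shows why a rare but severe failure mode can still end up at the top of the list:

```python
# Minimal sketch of the "risk = probability x severity" screening step.
# The failure modes and numbers are invented placeholders, not real estimates.

failure_modes = {
    "frequent but mild (e.g. a chatbot gives a wrong answer)": (0.9, 1),
    "rare but severe (e.g. automated control misused at scale)": (0.001, 10_000),
    "rare and mild": (0.01, 1),
}

def risk(probability: float, severity: float) -> float:
    return probability * severity

# Rank failure modes by risk, highest first.
for name, (p, s) in sorted(failure_modes.items(),
                           key=lambda item: risk(*item[1]),
                           reverse=True):
    print(f"{name}: risk = {risk(p, s):g}")

# A low-probability mode can still dominate the ranking when its severity is
# large, which is why it gets analysed rather than dismissed out of hand.
```

Real risk matrices are of course more nuanced (distributions rather than point values, severity scales, and so on), but the ordering logic is the same.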
 
  • #204
NO
gleem said:
Does anyone think besides myself that this thread has become boring?
Not uninteresting per se, but I'll admit it's becoming pretty speculative, which I guess is at least one of the reasons it's on the watch list... EDIT2: Or was it taken off again? Never there?

EDIT: Oh, I answered the wrong post. I was supposed to answer the one by @gleem , sorry.
 
  • #205
Filip Larsen said:
What I don't get, though, is why they think their position should mean that no one else is entitled to analyse or discuss such consequences.
Anyone is entitled to analyze or discuss anything. It is forcing others to follow you that is concerning. Some may not be willing to invest the effort, time, and resources required to answer such questions.

Let's go with a similar problem. A large meteorite might hit the Earth. It is a real possibility. Personally, I think it is more probable than creating ASI.

Lots of people have been thinking about that problem very seriously. We all know that the bigger it is, the worse it will be. But nobody is actively working on a solution because 1) the chances of it happening are very low; 2) the size of the meteorite is unknown, and if it is large enough, there are no possible solutions.

Conclusion: Why waste effort, time, and resources on something that might not happen, and if it does, our solution might not be effective?

If we get clear signs of a meteorite coming towards the Earth, we will evaluate the threat and the possible solutions. There are no other reasonable plans.

Same thing with ASI. Nobody can identify the threat level. Worse than the meteorite, we cannot even imagine it.

So, why waste effort, time, and resources on something that might not happen, and if it does, our solution might not be effective?

Why not wait for clear signs of an ASI possibility, and then evaluate the threat and the possible solutions?

If you have a potential scenario you want to discuss, specify it so we are all on the same page. For example, @Rive introduced the subject of AI in cars. I can imagine a few bad scenarios with that. It has been demonstrated that vehicles can be hacked. It has been demonstrated that battery-operated objects can be set to explode remotely. Electric cars have huge batteries. That is concerning. With vehicles, we can even imagine an attack where all the cars of a region/country would be used to kill everyone in their path, including their passengers. We are not even talking about ASI here, just plain old humans hacking into a system.

But if I can imagine this just by barely watching the news, I can't imagine experts are not thinking about this. There are car hacking seminars on YouTube for Pete's sake!
 
  • #206
Rive said:
Let's talk about cars, then.
As we all know well, every car is a potential weapon.
...
Can you still not imagine that control over such a weapon is willingly handed over to an AI? Of dubious origin/performance?
A car is a doomsday weapon? C'mon.
 
  • #208
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND.
Please reconsider this statement. It is flawed on many levels and so has no logical force (despite the CAPITAL LETTERS).
Survivor bias: Any truly catastrophic outcome to humankind would preclude the existence of this colloquy, and thereby your argument is tautological. Any prediction of extinction events will of necessity be shown historically inaccurate.
This is weirdly similar to the Great Disappointment(s) of the Adventists.
Your argument that we can therefore ignore the warnings is specious, but it does show that we have no way to historically vet the "experts"...


 
  • Like
Likes russ_watters and Filip Larsen
  • #209
jack action said:
Let's go with a similar problem. A large meteorite might hit the Earth. It is a real possibility. Personally, I think it is more probable than creating ASI.

Lots of people have been thinking about that problem very seriously. We all know that the bigger it is, the worse it will be. But nobody is actively working on a solution because 1) the chances of it happening are very low; 2) the size of the meteorite is unknown, and if it is large enough, there are no possible solutions.

Conclusion: Why waste effort, time, and resources on something that might not happen, and if it does, our solution might not be effective?
That was a confusing example to bring up, because we are considering exactly this, including testing in practice whether our ideas for mitigating this natural threat seem to work. The main concern is primarily to ensure we have enough time to mitigate a specific threat (e.g. an Earth-crossing asteroid) when we detect one, which means we need a few years of lead time. To me that is clearly worth investing in.

But if you think your conclusion is the right approach (for you, at least), then I assume you also insist on having no smoke detector in your house, on riding your bike without a helmet, and in general on staying away from all non-obligatory insurance? I mean, you wouldn't really need to bother with any of that.

Edit: inserted missing word.
 
Last edited:
  • #210
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND.
My mother has always been wrong about my BASE-jumping proclivities. I've parachuted off buildings and bridges and survived every time.

It follows that she is - and will ALWAYS be - wrong when I try some new adventure I haven't thought of yet (maybe jumping off radio antennae, maybe flying a squirrel suit, IDK).

I can IGNORE the dangers in these new activities, knowing I've survived DIFFERENT activities in the past.


Because one thing I know is that past outcomes can ALWAYS be used to predict future outcomes - especially when trying to predict scenarios that don't even exist yet.

🤔
 
