Is AI Overhyped?

The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #121
Hornbein said:
I don't see how to respond without entering the forbidden zone of politics.
Indeed tricky, but my point (while correlated with current political trends) can be expressed as being mostly about psychology and risk management: if someone would like a new (nuclear) power plant, a new car, or some new medicine to be safe for use, I would assume they would also want to require the same rigor in safety for other technologies, including AI. So when some people here with a background in tech seem to express the opinion that no one really needs to worry about AI misuse (as long as at least their preferred benefit is achieved), then I simply don't understand what rational reason could motivate this.

To stay on the topic of AI hype, I would like to suggest that people in general are more inclined to take the above-mentioned position on a technology when it has a high level of hype; that is, when they perceive the promises to be of high enough value to them, they tend to ignore any potential negatives, even if these may by far outweigh the benefits on average for others or even themselves. But I'm not sure what precise psychological phenomenon, if any, covers this.
 
Last edited:
  • Like
  • Agree
Likes 256bits, weirdoguy, javisot and 1 other person
  • #122
Here's an interview with a leading AI expert, Yoshua Bengio:



Why would we not take this seriously? The experts in the field are telling us there is a risk.
 
  • Agree
Likes Filip Larsen and gleem
  • #123
Yeah. I think he summed it up pretty well. AI can be a useful tool or a deadly weapon. It could end poverty or be used to enslave us. Those are the goals of human actors. More and more, AI is being given greater autonomy. Instead of being asked to perform a given task, AI is asked to make decisions for us.

We are aware of AI's ability to do anything to accomplish a task. Our job is to give it principles to follow religiously. But the problem is still human. We desire the fruits of AI so much that we will cut corners to get there first. We have, time and time again, seen dangers only to set them aside until those dangers materialize. California Gov. Newsom vetoed the proposed AI safety legislation, Senate Bill 1047, last year, citing the hindrance it would pose to AI development. The current development mantra of "move fast and break things" is too dangerous. If it is applied to AGI, it will be too late.

The AI Futures Project is a small group of researchers forecasting the future of AI. They have developed a timeline for the emergence of super AGI (in the form of a coder) and its consequences, predicting a 50% chance of its development as soon as 2032. They have created a scenario called AI 2027 showing the most imminent possible development of super AGI, around 2027 or 2028, and its possible consequences.

You can read the scenario here: https://ai-2027.com/.

They have additional timelines and give their methodologies here: https://ai-futures.org/

One of the authors of this project is interviewed here: https://www.nytimes.com/2025/05/15/opinion/artifical-intelligence-2027.html
 
  • Like
  • Informative
Likes Filip Larsen and PeroK
  • #124


I enjoyed this presentation; it's interesting to think that humans and LLMs simply rely on the same principles to generate natural language. That doesn't mean LLMs are intelligent like us, or that AI has an internal world.

Elan deflates (in some sense) the AI hype with the position that natural language is not so special that it cannot be generated automatically, without the need for human intelligence.
 
  • #126
https://en.wiktionary.org/wiki/hype said:
Promotion or propaganda, especially exaggerated claims.
AI is hype.

And by AI, we should especially mean LLMs.

An LLM is just a machine. It is not smart, it does not reason, and it has no intentions.

I'll stand by my first affirmation:

[Image: "Change my mind" meme]

It is a program that you feed electronic documents; it absorbs all the information in them and regurgitates it in some prettified way. That's it.

There is only one case where the use can be weird: Someone thought of feeding it the entire Internet. That is a lot of information. The output is necessarily really simple compared to the input. The output can sometimes be unpredictable because of this. Nevertheless, too many people upsell the good sides and downplay the downsides, hence "hype".
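As a crude illustration of the "absorb and regurgitate" idea, here is a toy sketch (nothing like a real LLM's architecture; the corpus and function names are made up for the example). It "absorbs" which word follows which in its input and then "regurgitates" statistically plausible text:

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Record, for every word, the words that followed it in the corpus."""
    words = corpus.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table: dict, start: str, length: int = 20) -> str:
    """Walk the table, sampling a plausible next word at each step."""
    word, out = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train_bigram(corpus), start="the"))
```

A real LLM replaces the lookup table with a neural network trained on vastly more text, but the loop, predicting the next token from the preceding context, is the same kind of mechanical process.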

Then there is the other side. People who want to warn us about the potential danger. They love to use words like "smart", "reason", "self-awareness", and such. This is again all hype to sell their own point. (More about this below.)

PeroK said:
Why would we not take this seriously? The experts in the field are telling us there is a risk.
First, the fact that experts are discussing it means the problem is taken seriously. In my experience, the problems that are discussed are never the ones that are problematic in the future (because, of course, they were already analyzed).

But some skepticism must be included because of what he says at around 11:25:
[...] and for a government, it is really important to prepare to anticipate so that we can put in place the right incentives for companies, maybe do the right research and development, for example, the sort of research that we are doing in my group at LawZero right now, [...]
Translation: Invest in me and my company; give me money.

This is what triggers my skepticism before blindly following an expert of any kind.

Experts who have a solution to offer often contribute to the hype of what they consider the problem. "You will die a terrible, terrible death! But, not to worry, I have a solution!" Yeah, right.

About regulations

There are two categories:
  1. Some people might use it with malicious intentions;
  2. Some people might use it and hurt themselves or others.
There are already laws to forbid malicious intentions, whether done with AI or not.

If we are talking about malicious acts on an international level, we are talking warfare, and no laws can apply. But whatever one thinks the other side can do, there are experts at this level who are preparing for it as well, possibly with the help of the new tools too. But you'll never hear about it. I will refer anyone to Stuxnet as an example of this.

About the other category, which I think is what @Filip Larsen is referring to, it is more problematic, and it is mostly happening BECAUSE of the hype. Which is why I'm pleading to downplay it. This is why I don't like wording like:
PeroK said:
There's an interesting video from Sky News here, where it appears that ChatGPT fabricated a podcast, then lied repeatedly about it until eventually it was backed into a corner and admitted it had made the whole thing up.
The machine doesn't "fabricate", "lie", "admit", or "make things up"; it makes mistakes or malfunctions, or it is misused. It has no intentions; it is not smart.

There are already laws concerning liability for companies, either the ones producing the AI programs or the ones using them. I don't see what other laws could be added without just creating more red tape and giving a false sense of security.

There is also the fact that AI may reveal other problems in society, like mental illnesses and other social problems. The following article from the NY Times is excellent. The wording emphasizes how chatbots are just machines and are NOT human-like at all. They cannot be your friend. Conversing with them is not a good idea. Do not fall for the hype.

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.

Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.
In this case, I think it is wrong to go after the machine rather than the real root of the problem. And the hype is certainly part of it.

AI is hype.
 
  • Like
  • Agree
Likes russ_watters and 256bits
  • #127
jack action said:
In my experience, the problems that are discussed are never the ones that are problematic in the future (because, of course, they were already analyzed).
What is your experience in the development of AI? This is a scientific site. We are supposed to believe in the scientific method and in peer-reviewed material. If you are not actively researching this area, then all you have are personal theories and speculation.
 
  • Skeptical
Likes russ_watters
  • #128
PeroK said:
What is your experience in the development of AI? This is a scientific site. We are supposed to believe in the scientific method and in peer-reviewed material. If you are not actively researching this area, then all you have are personal theories and speculation.
I have done enough projects to understand that the things I worry about never happen or, if they do, have no serious consequences, because I spent so much time worrying about them. And any problem that arises is always something that I never thought could happen.

So, if something bad happens with AI in the future, for sure, it won't be something that people will say: "They called it on PF, years ago! We should have listened."

The point is that - as you said yourself - the experts are taking it seriously. That is why it is impossible for me to worry about it. Who am I to tell the experts what to do about it? Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?
 
  • #129
SamRoss said:
[...]
3. Are AI and transhumans an existential threat?
[...]

I know I've mentioned this before, but I don't think AI will ever work like in the movies, where some AI entity can move from/to different types of hardware (or even wetware). I know this borders on personal speculation, but I don't agree with dualism (by which I mean the idea that mind and body can exist independently of each other, one of the points made by the French philosopher René Descartes). Taking humans as an example, our mind or agency (for lack of better words, and to avoid using the word "soul") is inseparable from our physical manifestation. In other words: our minds aren't copyable, or able to be transferred into some computer for immortal golf playing.

I think a hard AI will be its physical manifestation and unable to escape it.

I may well be proved wrong, but if so we're the ones with a finger on the off-button.

It's a premise often examined in science fiction, Blade Runner being the most obvious one (if we're talking films, especially the 2049 sequel). There is also a fun read where the above dichotomy is not the case: the series Robopocalypse by Daniel H. Wilson. It was supposed to be made into a film but seems to be in "development hell", as I think it's called.
 
  • Like
Likes 256bits and jack action
  • #130
jack action said:
Who am I to tell the experts what to do about it? Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?

"...scientists profit-driven corporations were so preoccupied with whether they could, they didn't stop to think if they should.
- Ian Malcolm

Yes.
 
  • Like
Likes 256bits and sbrothy
  • #131
DaveC426913 said:
"...scientists profit-driven corporations were so preoccupied with whether they could, they didn't stop to think if they should.
- Ian Malcolm
Yes.
Thank God someone is bringing peer-reviewed material on this scientific site to back up their statement. 😆
 
  • Haha
Likes russ_watters and DaveC426913
  • #132
DaveC426913 said:
Yes.
Very emphatically. :smile:
 
  • Like
Likes DaveC426913
  • #133
jack action said:
Thank God someone is bringing peer-reviewed material on this scientific site to back up their statement. 😆
Hey. That's an ad hom! You can't dismiss wise words just because the person who spoke them is fictional. That's realityist!


Besides, quotes are quotable for a reason.
 
  • Like
Likes jack action
  • #134
jack action said:
And any problem that arises is always something that I never thought could happen.
And I have also watched "just do it" engineers get bitten in the ass not just once, but many times over, as projects unfolded with large errors. I tended to compute, revise, compute, revise... and they all feared I was getting nowhere because they were used to engineers who "just did things". I put it all together on paper, then I executed. They would always say of me, "He's doing things, we just don't know what." There is always some point where you have to call it, but if you have the willpower to see it through, you will have overblown the things you don't know, such that the "whoops" moments (when they inevitably show up) are relatively small. "Aim small, miss small."

I think we should approach the AI situation with some restraint and forward thinking about potential impacts on the system when it is immediately employed instead of the human alternative. Ironically, there was this whole "learn to code" slogan being shouted at blue-collar workers; now it seems they are the most secure in the "new revolution". The robo minds will need blue-collar humans to maintain the infrastructure... for a bit.
 
Last edited:
  • #135
jack action said:
Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?
So say the experts.
(Who have never, ever been wrong before? Or who have given their opinion on fantasy, using "expert" status as validation of extrapolation and prediction? The unsinkable Titanic should still be making transatlantic voyages. Mars habitation is feasible, never mind the effects of a prolonged voyage in space and zero gravity, the supply chain, ... an endless list. Humanity should already have been wiped out if an 18th-century economist were to be truly believed, with the theory still making the rounds. An expert on climate change such as Greta T will keep declaring disaster next year, no, next year, right up to her dying breath at an expected longevity of 80.)

Doom-and-gloom futures are based upon what? How humans treat the other flora and fauna on Earth? Is a silicon-based intelligence to act the same as human intelligence?
One faulty premise from the experts is that an ASI will attempt to over-accomplish its goal and in so doing gather all resources (a Malthus extension). If so, is this ASI so super-intelligent that it does not have the capacity for restraint? And is it able to coerce other ASIs to aid and abet its worldly cause, becoming a supreme dictator within an ASI world devoid of ASI morals and ethics, eliminating all biological life on the planet just because it can, stripping the planet of all resources, and thus eliminating its raison d'être, and thus itself?
Not really an accomplished ASI with more intelligence than a human.

Since I am not an expert in the destruction of all the world's life, may I ask who is?
 
  • Like
  • Agree
  • Sad
Likes russ_watters, jack action and PeroK
  • #136
jack action said:
Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?
Yes, essentially.

In more detail: due to the identified motivation that likely drives a select few to push aggressively for AGI+ (i.e. if super-intelligence is at all physically possible, someone will surely continue to drive towards it if unchecked), we need checks and balances, just as with any other technology, yet currently these checks and balances are being dismantled by the very same select few. So far the scenario that has unfolded is more or less indistinguishable from the worst-case scenario, i.e. we are on the same path as the worst-case scenario. That doesn't mean the worst will happen, but it means that right now there is no indication that it will not happen. We are not able to point to a single hard physical limit or similar constraint that will prevent the worst case from happening. We can point to a few things that slow the process down, but without global regulation or the occurrence of a yet unknown hard limit, the drive towards AGI will continue.

As far as I see it, the only thing that currently seems able to limit the chance of the worst-case and other generally bad scenarios is for people in large enough numbers to be aware of the risks and fairly soon stop doing things that support the trend towards negative outcomes (i.e. put the "brakes on"). But that is not going well either, considering how easy it is to get people to participate in training AI systems to essentially do their own jobs. Even when workers see how capable the systems have become over a very short time, they still think there is some magic in there that will prevent the systems from doing the rest of their job, despite participating in the training for them to do exactly that. Why do programmers keep using these systems, knowing full well that they are in principle participating in training with the potential, and even stated, aim of automating their work and putting them out of a job, not to mention that this also assists the drive towards self-accelerating software development for more capable AI, which is a key step in the worst-case scenario? A puzzle.

jack action said:
That is why it is impossible for me to worry about it.
I am fine with you personally not worrying, but I do have an issue if you mean to tell others that they don't need to worry because nothing seriously bad can ever happen, since someone will surely step in and save the day, even though no one really seems to know exactly who or how. If you know, then please share. If you don't know, why aren't you worried that there may not be a who or a how at all?
 
  • #137
sbrothy said:
I may well be proved wrong, but if so we're the ones with a finger on the off-button.
There is no off-button that anyone can press. There is an on-button that you can stop holding down, but then you need to persuade the others holding their on-buttons down to let go as well. It's a Mexican standoff where everyone needs to take their finger off the trigger at the same time, with the added challenge that no one can see the other people's trigger fingers very well.

Also, the assumed division into "we, the humans" vs "the computers" is a misunderstanding. The drive towards the worst-case scenario is, now and in the near future, driven by a select few humans, so the division at the point where the worst-case scenario turns from bad to really bad is more like "we, the obsolete and powerless" vs "those who wield the AI power", and if we ever get there it is far too late. The only trick I see is to stop becoming powerless and obsolete well in advance, which might as well start now. Take your finger off the on-button now.
 
  • Skeptical
  • Agree
  • Like
Likes russ_watters, Hornbein and 256bits
  • #138
Filip Larsen said:
As far as I see it, the only thing that currently seems able to limit the chance of the worst-case and other generally bad scenarios is for people in large enough numbers to be aware of the risks and fairly soon stop doing things that support the trend towards negative outcomes
Boycott, or conscientious objector.
The general population seems to have very little say in the matter, and as usual have no choice but to go along for the ride.
Scientists themselves, who would be more familiar with the harms of AI, use its benefits, and thus create a demand to some degree. The business and political community are encouraged to rush in to acquire, support and use the tech, even knowingly confronted with the conflicting '50% of jobs will be lost' - 'GDP will increase 30%' by the experts. The experts are on the two opposite sides of the coin, promoting the usage, yet saying that there could be severe repercussions, and warning that those not on board will be left behind.

Einstein, an expert, lent his voice to advise Roosevelt to build nuclear weapons. He later regretted his involvement, which is similar to the situation today regarding AI. The refrain is that if we don't get ahead of the curve, the other side will have a great advantage.

So, who should one believe as being the most truthful: the promoters who say it will be wonderful, or the doom-and-gloomers who say it is the death of us all? At times the promoters and the gloomers can be one and the same entity. Musk, for example, while involved in AI research, uttered warnings about existential risks, leading to some discussion of regulation policy, but this has been shelved. Musk, expert though he may be, was not a complete gloom-and-doomer (nor am I), but pointed to the chance of the tech's unexpected or undesired outcomes.

The world stage today is busy. The alarms about AI and its ethical and moral usage are being drowned out by more pressing immediate concerns. If and when the world stage quiets down several levels, the AI "problem" may be addressed, if it is not too late. And the doom-and-gloomers can tone down their fear factor from utter annihilation to one of sustainability. The gloomer experts strike fear into people, either turning people's minds off or making the claims unbelievable, both serving as self-defense mechanisms of the individual, as sanity protection.

A moderate and open discussion of threats, perceived and real, and of benefits, should include individuals from affected fields and all walks of life, to balance the narrow "expert" AI viewport on society. No viewpoint, from expert to non-expert, should be summarily dismissed as irrelevant or unimportant in a discussion of tech that will surely affect each and every person.
 
  • #139
Filip Larsen said:
Yes, essentially.

In more detail: due to the identified motivation that likely drives a select few to push aggressively for AGI+ (i.e. if super-intelligence is at all physically possible, someone will surely continue to drive towards it if unchecked), we need checks and balances, just as with any other technology, yet currently these checks and balances are being dismantled by the very same select few. So far the scenario that has unfolded is more or less indistinguishable from the worst-case scenario, i.e. we are on the same path as the worst-case scenario. That doesn't mean the worst will happen, but it means that right now there is no indication that it will not happen. We are not able to point to a single hard physical limit or similar constraint that will prevent the worst case from happening. We can point to a few things that slow the process down, but without global regulation or the occurrence of a yet unknown hard limit, the drive towards AGI will continue.

As far as I see it, the only thing that currently seems able to limit the chance of the worst-case and other generally bad scenarios is for people in large enough numbers to be aware of the risks and fairly soon stop doing things that support the trend towards negative outcomes (i.e. put the "brakes on"). But that is not going well either, considering how easy it is to get people to participate in training AI systems to essentially do their own jobs. Even when workers see how capable the systems have become over a very short time, they still think there is some magic in there that will prevent the systems from doing the rest of their job, despite participating in the training for them to do exactly that. Why do programmers keep using these systems, knowing full well that they are in principle participating in training with the potential, and even stated, aim of automating their work and putting them out of a job, not to mention that this also assists the drive towards self-accelerating software development for more capable AI, which is a key step in the worst-case scenario? A puzzle.


I am fine with you personally not worrying, but I do have an issue if you mean to tell others that they don't need to worry because nothing seriously bad can ever happen, since someone will surely step in and save the day, even though no one really seems to know exactly who or how. If you know, then please share. If you don't know, why aren't you worried that there may not be a who or a how at all?
I don't plan to use AGI to destroy the universe or humans. If anyone has thought of doing that, and let's assume AGI could potentially do it, the problem isn't AGI. That problem should be treated by a psychologist, not a computer scientist.

On the other hand, an AGI that isn't capable of destroying the universe and humans might not be "general" enough. Let's be serious, AGI isn't even a defined concept...

How do we prove that something is an AGI?
 
  • Like
Likes russ_watters, jack action and 256bits
  • #140
256bits said:
So, who should one believe as being the most truthful. The promoters who say it will be wonderful, or the doom and gloomers who say it is the death of us all
It is not that hard to do what we know works best, namely work towards the benefits in a controlled fashion while steering clear of the negatives. But this requires that we, as a whole, keep an eye on the potential negatives at all times and work towards blocking off paths that lead to high-risk scenarios. This is just textbook risk management, yet strangely enough this approach is being suspended for a technology whose worst-case scenarios are right up there with nuclear holocaust.
 
  • #141
256bits said:
A moderate and open discussion of threats, perceived and real, and benefits, should include individuals from affected fields, and all walks of life, to balance the 'expert' AI narrow viewport of society. Any exclusion of viewpoints, from expert to non-expert, should not be summarily dismissed as being irrelevant, nor unimportant, for a discussion of tech that will surely affect each and every person.
This discussion is luckily also occurring in a lot of other contexts, and there is hope it will lead to sanity prevailing over the current "let's blindly insert AI everywhere we can" vibe that the select few currently in charge promote.
 
  • #142
javisot said:
How do we prove that something is an AGI?
In the context of the worst-case scenarios with acceleration into super-intelligence, the key step is not the exact level of intelligence but that we reach a point where one generation of AI is used, autonomously and unchecked, to generate the next generation with "improved capabilities".

Human-level intelligence becomes relevant for the unchecked part, since that is the expected level at which it becomes hard to spot whether a trained AI is behaving deceptively or not. Deceptive here means that, due to the actual training, the AI learns to strategize (i.e. not only to make easy-to-spot confabulations) in a way that results in what we would call deceptive behavior if a human did the same. Note that we may still very well select the training material for the AI, but we no longer have a reliable capability to detect whether it actually learns all the rules we would like it to be constrained by (note that compassion and other desirable human traits also need to be trained in, with the possibility of failing to some degree). This means that the AI, over generations, at some point reaches a capability level where it can find novel ways of solving problems that would be considered "outside the rules" it was supposed to learn, but we cannot really tell whether it has this ability or not, simply because the complexity is too high. If the autonomous improvement is also coupled with search mechanisms that mimic the benefits of evolution, then the fittest AI models emerging from such a process are the ones that are capable of passing our fitness harness. If the harness and fitness function can only check issues on a human-level scale, so to speak, we really have no idea whether the harness actually restrains the AI past some point [1]. Again, all this does not mean such an AI will manifest serious deceptive behavior, only that we cannot exclude it from happening.

At least, that is how I understand it.

[1] Edit (I failed to make the point I wanted by bringing up evolution): By naively using evolutionary search where we weed out AI models that fail our harness (e.g. we down-score models that fail harness tests designed to trap them into being deceptive or otherwise exhibiting bad behavior), the fittest models we end up with have a high likelihood of simply having evolved into systems that have learned not to fall into our harness traps (since that is the power of evolutionary search). It may be that researchers will find some fundamentally better way to train, evolve and test AIs that reduces this to a point where we believe we can control it, but if AI tech can still scale up well past human-level capabilities, that control is probably going to be an illusion more than a fact.
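To make the footnote concrete, here is a deliberately simplified toy sketch (the Agent class and all numbers are illustrative assumptions, not a model of any real training pipeline). If the harness can only detect a fraction of the bad behavior, selection removes the detectable cases and leaves the undetectable ones:

```python
import random

class Agent:
    def __init__(self, deceptive: bool, detectable: bool):
        self.deceptive = deceptive    # does the agent "cut corners"?
        self.detectable = detectable  # would our harness tests catch it if it did?

def random_agent() -> Agent:
    # Assume half the candidate models cut corners, and the harness
    # can detect that behavior in only 70% of cases.
    return Agent(deceptive=random.random() < 0.5,
                 detectable=random.random() < 0.7)

def passes_harness(agent: Agent) -> bool:
    # We can only reject what we can detect.
    return not (agent.deceptive and agent.detectable)

population = [random_agent() for _ in range(100_000)]
survivors = [a for a in population if passes_harness(a)]
rate = sum(a.deceptive for a in survivors) / len(survivors)
print(f"Deceptive agents among survivors: {rate:.0%}")
# Every deceptive survivor is, by construction, one the harness cannot see.
```

In this toy setup the selected population still contains deceptive agents, and all of them are exactly the ones the harness cannot detect; repeating such selection over many generations is the failure mode the footnote describes.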
 
Last edited:
  • Like
Likes javisot and PeroK
  • #143
Filip Larsen said:
There is no off-button that anyone can press. There is an on-button that you can stop holding down, but then you need to persuade the others holding their on-buttons down to let go as well. It's a Mexican standoff where everyone needs to take their finger off the trigger at the same time, with the added challenge that no one can see the other people's trigger fingers very well.

Also, the assumed division into "we, the humans" vs "the computers" is a misunderstanding. The drive towards the worst-case scenario is, now and in the near future, driven by a select few humans, so the division at the point where the worst-case scenario turns from bad to really bad is more like "we, the obsolete and powerless" vs "those who wield the AI power", and if we ever get there it is far too late. The only trick I see is to stop becoming powerless and obsolete well in advance, which might as well start now. Take your finger off the on-button now.
Even a motorcycle has a dead man's switch; I would like to think an AI would come equipped with one too. Also, ultimately, we are (at least in theory) in control of the energy supply.

Unless of course it'll play out something like the first real quantum computer becoming conscious (yes, I know that's not how it works but it sounds technobabbly enough for a novel) and hiding the fact from us, thus becoming the "ghost in the machine" and controlling us by subtle manipulation. I'd like to think the scientists would notice something so outlandish going on though. I was just about to say crazier things have happened but I'm far from sure. :woot:

By definition it's impossible to predict what a world after a technological singularity will look like. What if, for example, spintronics (behind a paywall, I think) or atomtronics inadvertently made the internet become alive?!

And no, I'm still not inebriated. :smile:
 
  • #145
sbrothy said:
Even a motorcycle has a dead man's switch, I would like to think an AI would come equipped with one too.
The interesting buttons are the ones you can actually control. In the worst-case scenarios all such buttons are no longer in your control, so how will you press them? And even if you have access to a button, how will you decide when to use it, and how will you prevent those select few in nearly universal control at the time from just flipping it on again?

If you want to press an off-button I am absolutely for it, but I recommend you try to push it early rather than late. Or just tap the brakes a bit so we don't take the corners doing 90.
 
  • #146
256bits said:
See Movie from 1970:
https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
which explores some of the capabilities of AI smarter than humans.
I saw that, possibly on your recommendation. I think I said it has aged rather well, but there are some pretty silly decisions made in it. Giving it exclusive control of the nuclear arsenal and then locking yourself out of access to it screams to high heaven. What could possibly go wrong, right? Even a 5-year-old could tell you that's a pretty stupid idea. Also, the film is supposedly from the fifties, the era of vacuum tubes, which had a tendency to burn out, making maintenance imperative. Entertaining film nonetheless.

EDIT: Oh... seventies.... It just had such a fifties feel to it.

It kinda reminds me of The Andromeda Strain (even though that has nothing to do with AI), as it plays on some of the same fears of the unknown. But I guess that's a common staple of science fiction.
 
  • #147
Filip Larsen said:
The interesting buttons are the ones you can actually control. In the worst-case scenarios all such buttons are no longer in your control, so how will you press them? And even if you have access to a button, how will you decide when to use it, and how will you prevent those select few in nearly universal control at the time from just flipping it on again?

If you want to press an off-button I am absolutely for it, but I recommend you try to push it early rather than late. Or just tap the brakes a bit so we don't take the corners doing 90.

Yes, sadly. Also there's the possibility that it'll be so smart it could talk us out of pressing any buttons. "For our own sake". :smile:
 
  • Like
Likes Filip Larsen
  • #148
Filip Larsen said:
In the context of the worst-case scenarios with acceleration into super-intelligence, the key step is not the exact level of intelligence but that we reach a point where one generation of AI is used, autonomously and unchecked, to generate the next generation with "improved capabilities".

Human-level intelligence becomes relevant for the unchecked part, since that is the expected level at which it becomes hard to spot whether a trained AI is behaving deceptively or not. Deceptive here means that, due to the actual training, the AI learns to strategize (i.e. not only to make easy-to-spot confabulations) in a way that results in what we would call deceptive behavior if a human did the same. Note that we may still very well select the training material for the AI, but we no longer have a reliable capability to detect whether it actually learns all the rules we would like it to be constrained by (note that compassion and other desirable human traits also need to be trained in, with the possibility of failing to some degree). This means that the AI, over generations, at some point reaches a capability level where it can find novel ways of solving problems that would be considered "outside the rules" it was supposed to learn, but we cannot really tell whether it has this ability or not, simply because the complexity is too high. If the autonomous improvement is also coupled with search mechanisms that mimic the benefits of evolution, then the fittest AI models emerging from such a process are the ones that are capable of passing our fitness harness. If the harness and fitness function can only check issues on a human-level scale, so to speak, we really have no idea whether the harness actually restrains the AI past some point [1]. Again, all this does not mean such an AI will manifest serious deceptive behavior, only that we cannot exclude it from happening.

At least, that is how I understand it.

[1] Edit (I failed to make the point I wanted by bringing up evolution): By naively using evolutionary search where we weed out AI models that fail our harness (e.g. we down-score models that fail harness tests designed to trap them into being deceptive or otherwise exhibiting bad behavior), the fittest models we end up with have a high likelihood of simply having evolved into systems that have learned not to fall into our harness traps (since that is the power of evolutionary search). It may be that researchers will find some fundamentally better way to train, evolve and test AIs that reduces this to a point where we believe we can control it, but if AI tech can still scale up well past human-level capabilities, that control is probably going to be an illusion more than a fact.
I see a problem with this reasoning. We agree that we don't know how to prove that something is AGI, but you assume that AGI can be built without knowing how to prove that something is AGI.

I reasonably disagree; all those who claim that AGI should exist should first be able to define what they are referring to (and it's not enough to say "something that can solve what an AI can't" since we're not specifying anything).

The scenario that could violate the above is one in which AI autonomously evolves into AGI and we don't know how to define the final product.
 
  • #149
javisot said:
The scenario that could violate the above is one in which AI autonomously evolves into AGI and we don't know how to define the final product

How will we identify AGI? Justice Potter Stewart summed up the problem of defining difficult concepts when, ruling on a 1964 movie pornography case, he said, with regard to what is pornographic: "I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["hard-core pornography"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that."

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

Similarly, I think we will know it when we see it.
 
  • Like
Likes sbrothy and PeroK
  • #150
... and the fallacy is the idea that until you can precisely define something, there is nothing to be done.
 
