Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #101
PeroK said:
What if Hameroff is wrong?
I am not aware of Hameroff's arguments in detail, but if we assume he (more or less) subscribes to the idea that quantum effects are required for the "essential" brain functions (e.g. consciousness, and perhaps even the capability for human-level intelligence), then considering that this idea, as far as I understand, is still rather controversial (e.g. see https://en.wikipedia.org/wiki/Orchestrated_objective_reduction), it still seems most prudent in the context of a safety analysis to assume that the lack of quantum effects does not exclude the existence of digital systems capable of human-level brain functions.

But it sure would make my day a bit more worry-free if we could positively point to the lack of quantum effects as a "natural" limit on AI capabilities, instead of just wishing for one.

Edit: fixed clumsy wording.
 
Last edited:
  • Like
Likes 256bits and PeroK
  • #102
Hornbein said:
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
Yeah. That is an unknown for the future.
PeroK said:
What if Hameroff is wrong?
Either Ilya Sutskever or Hameroff can be right or wrong, regardless.
Not AGI, nor ASI, but just lowly niche ANI is already disrupting parts of society, disruption being good or bad depending on viewpoint (and thus not the Utopia the Silicon Valley types dream about and seem to present as the ultimate future*).

Heck, Nvidia CEO Huang brought a cute Star Wars-looking mini robot (most likely remote controlled, as one doesn't want surprises), with all the beeps, bops and whirl sounds, on stage at a recent conference to acclimatize the audience, the implication being that AI is so loveable it can't be anything but: a substitute for a puppy or kitten. It is such manipulative marketing that annoys me.
That is what I call hype, not the capabilities of the tech.

* this has been around a long time. Some of this dreamed automation has come to pass.
 
  • #103
Mayhem said:
But if we consider the Wright Brothers' first flight day 0 of air travels, and remember the moon landing took place less than 70 years later, it is certainly fascinating (if not unnerving) what progress will be made in our lifetime, if ChatGPT is likewise the very early stage of generally useful AI.
AI research started around the 1940s, or so.
By the flying measure comparison, AI should already be a mature tech.

Robot tech is kinda mature, since it is based on material manipulation: physical materials that have constraints due to Newton's laws.
Not too long ago the inverted pendulum seemed incredible, and now it's ho-hum. If the mouse racers can squeak out a few more seconds from the run...

AI is based on mathematical manipulation, so those guys have to get their elbows up [ Canadian ] to keep up the pace, along with the computer algorithm guys and the electronics guys. The neural net got stuck until back-propagation and Hopfield came along in the 1980s. That moved the basic neural node from being a glorified AND gate into something that, as part of a neural net, allowed the network to learn voices, faces and language, something impossible beforehand.
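To make the back-propagation point concrete, here is a minimal toy sketch (my own illustration, not the historical systems mentioned above) of a tiny two-layer network learning XOR, a function a single "AND-gate-like" node cannot represent; all the numbers and names here are made up for illustration:

```python
import numpy as np

# Toy sketch: a 2-4-1 network trained with back-propagation to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error back through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

The whole trick is the backward pass: the output error is propagated back through the hidden layer so every weight gets a gradient to follow, which is what a single threshold node can never provide on its own.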

That still took 50-60 years for the recent LLM breakthrough to occur.
Progress has seemed rapid, but it was slow go in the beginning.

PS: I just wondered, had there been no internet from which to scrape vast amounts of data for the neural nets to learn from, would LLMs be a thing?
 
  • #104
256bits said:
AI research started around the 1940s, or so.
By the flying measure comparison, AI should already be a mature tech.

That still took 50-60 years for the recent LLM breakthrough to occur.
Progress has seemed rapid, but it was slow go in the beginning.
Unlike organic intelligence which took only, um, 4 billion years to evolve.
 
  • Haha
  • Like
Likes russ_watters and 256bits
  • #105
Hornbein said:
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
Totally agree. One of the typical definitions of intelligence is "the ability to solve problems," but a machine capable of solving all existing problems isn't intelligent. (No more intelligent than you, who created that machine.)

We know how to automate problem solving, but that doesn't mean we're creating intelligent objects.
 
  • #106
PeroK said:
That still took 50-60 years for the recent LLM breakthrough to occur.
Progress has seemed rapid, but it was slow go in the beginning.
Unlike organic intelligence which took only, um, 4 billion years to evolve.
Well if you look at it that way, AI took 4,000,000,060 years to evolve. :smile:
 
  • Like
  • Haha
Likes jack action, russ_watters, 256bits and 1 other person
  • #107
PeroK said:
Unlike organic intelligence which took only, um, 4 billion years to evolve.
I'll update my phrase.
That still took 4 billion + 50-60 years for the recent LLM breakthrough to occur.
[ Carbon, then silicon. Even the stars say so. :) ]

Oops. I didn't see DaveC's post.
 
  • #108
Hornbein said:
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
What does this even mean?

Intelligence? Not well defined.

Natural Intelligence?
Artificial intelligence?
  • How do these two differ?
How do you distinguish between the natural and artificial versions? Is it simply the intervention of humans?

What is supposed to distinguish the natural from the artificial?

To me things seem much more convoluted than is being expressed here.
 
  • #109
This just in.
The Chatter bots are going to university.
Maybe even get a PhD, or two, or 3, or..., with a library card.
If you think they were smug before,

AI chatbots need more books to learn from. These libraries are opening their stacks​

https://www.msn.com/en-ca/money/top...N&cvid=6fc127164171453aaa57b07a50733529&ei=96

Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.
 
  • Like
  • Informative
Likes Klystron and dextercioby
  • #110
BillTre said:
What does this even mean?

Intelligence? Not well defined.

Natural Intelligence?
Artificial intelligence?
  • How do these two differ?
How do you distinguish between the natural and artificial versions? Is it simply the intervention of humans?

What is supposed to distinguish the natural from the artificial?

To me things seem much more convoluted than is being expressed here.
I think the distinction is similar to other areas. There are natural materials, such as wood, leather and wool. And, there are synthetic materials, such as plastics, that are "man-made".

Artificial intelligence means intelligence created or developed by humankind, as opposed to having evolved naturally.

Even though intelligence itself may not be totally well-defined, we have bona fide material on the cognitive skills that we expect humans to develop. A machine could be assessed against this. That said, AI will always have skills beyond what a human can have: perfect memory, speed of calculation, multilingual capacity, etc. Not to mention that a machine can perform almost 24x365.

It seems to me - and this is my opinion - foolishly arrogant to claim that humans must have an intelligence superior to any machine, even if we constrain that to the timeframe of this century. Arguments over the nature of intelligence, IMO, miss the point that it ultimately comes down to practical capability.
 
  • Like
Likes javisot and BillTre
  • #111
256bits said:
The hype from someone in the AI field:
former OpenAI co-founder Ilya Sutskever ... said if the brain is a “biological computer” then why can’t we have a “digital brain”: “Slowly but surely we will see the AI getting better and the day will come when AI will do all the things that we can do. Not just some of them but all of them. Anything which I can learn, anything which any of you can learn, the AI could do as well.”
Big whoop. First, this is still science fiction. Second, when we do that, all we will do is create what already exists: some sort of artificial human, no better or worse than a "natural" human. We can easily do that with sex right now. I highly doubt we will be able to do it more efficiently.

But the big fear is about superintelligence: a machine better than a human, with either more capacity, fewer flaws, or both. The previous quote doesn't indicate that at all. I would rather think it shows the opposite: it is a complete utopia. If a superior way could exist, it already would. And if it could, I highly doubt that a form of intelligence that took 4 billion years to evolve will be improved upon in, say, 100 years.

Also, nothing evolves by itself; its environment also evolves with it.
 
  • Like
Likes 256bits and BillTre
  • #112
BillTre said:
What does this even mean?

Intelligence? Not well defined.

Natural Intelligence?
Artificial intelligence?
  • How do these two differ?
How do you distinguish between the natural and artificial versions? Is it simply the intervention of humans?

What is supposed to distinguish the natural from the artificial?

To me things seem much more convoluted than is being expressed here.
To clarify, in case it wasn't apparent: @Hornbein was being facetious. Those are not his words, but a paraphrasing of Hameroff's claims. Which Horbien explicitly states do not impress him.

[Update] Oops. I mistyped @Hornbein's name as Horbien, which technically translates as "good dread".
 
Last edited:
  • #114
Hornbein said:
As a human trained in science and math, I find this article quite sobering, nay, even alarming. Here are a few select quotes:
Although the group did eventually succeed in finding 10 questions that stymied the bot, the researchers were astonished by how far AI had progressed in the span of one year. Ono likened it to working with a “strong collaborator.” Yang Hui He, a mathematician at the London Institute for Mathematical Sciences and an early pioneer of using AI in math, says, “This is what a very, very good graduate student would be doing—in fact, more.” The bot was also much faster than a professional mathematician, taking mere minutes to do what it would take such a human expert weeks or months to complete.

By the end of the meeting, the group started to consider what the future might look like for mathematicians. Discussions turned to the inevitable “tier five”—questions that even the best mathematicians couldn't solve. If AI reaches that level, the role of mathematicians would undergo a sharp change. For instance, mathematicians may shift to simply posing questions and interacting with reasoning-bots to help them discover new mathematical truths, much the same as a professor does with graduate students. As such, Ono predicts that nurturing creativity in higher education will be a key in keeping mathematics going for future generations.

“I’ve been telling my colleagues that it’s a grave mistake to say that generalized artificial intelligence will never come, [that] it’s just a computer,” Ono says. “I don’t want to add to the hysteria, but in some ways these large language models are already outperforming most of our best graduate students in the world.”
 
  • Like
  • Informative
Likes russ_watters, PeroK and 256bits
  • #115
renormalize said:
As a human trained in science and math, I find this article quite sobering, nay, even alarming.
Why is it alarming? Isn't it the goal of creating AI? Learning the theory and applying it to practical cases.

The dream that was sold to me with AI was to discover (quickly) new molecules for drugs that can cure illnesses. I don't see anything alarming about that, quite the opposite.
 
  • Like
  • Sad
Likes russ_watters, Filip Larsen and 256bits
  • #116
renormalize said:
This is what a very, very good graduate student would be doing—in fact, more.” The bot was also much faster than a professional mathematician, taking mere minutes to do what it would take such a human expert weeks or months to complete.
If the AI bot were slower there wouldn't be so much slurpy enthusiastic wonderment. A statistical token-manipulation machine doing comparisons at billions per second should be fantastic. Slow the processor down to MHz, or down to human speed, and the bot will take weeks or months to complete, or years. The Slurpee would turn into Kool-Aid.

renormalize said:
“I’ve been telling my colleagues that it’s a grave mistake to say that generalized artificial intelligence will never come, [that] it’s just a computer,” Ono says. “I don’t want to add to the hysteria, but in some ways these large language models are already outperforming most of our best graduate students in the world.”
The lack of credit towards the human inventors is apparent. The AI is the focus, which is a natural human response. Rolling down a car window by button, rather than crank, elicited similar emotions when that was new tech.

I suppose the debate would be whether this is being creative and acting with reason.
 
  • Sad
  • Like
Likes PeroK and javisot
  • #117
256bits said:
If the AI bot were slower there wouldn't be so much slurpy enthusiastic wonderment. A statistical token-manipulation machine doing comparisons at billions per second should be fantastic. Slow the processor down to MHz, or down to human speed, and the bot will take weeks or months to complete, or years. The Slurpee would turn into Kool-Aid.
In the case of ChatGPT, to reduce the perceived response time, instead of completely constructing the message and then sending it, you'll notice that you can see the response being constructed from the beginning.

It reduces the waiting time and is more realistic. When a human speaks live, they deliver word by word, not entire messages.
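As a rough illustration of that streaming idea (a made-up minimal sketch, not ChatGPT's actual implementation; the function names are hypothetical), the generator yields tokens as soon as they are produced and the client prints each one immediately instead of waiting for the full message:

```python
import time

def generate_tokens(prompt):
    # Stand-in for a language model: in a real system each token would be
    # sampled from the model as soon as the previous one is available.
    for token in ["The", " answer", " is", " streamed", " token", " by", " token", "."]:
        time.sleep(0.2)  # simulate per-token generation latency
        yield token

def stream_response(prompt):
    # Render each token as it arrives instead of waiting for the whole reply.
    for token in generate_tokens(prompt):
        print(token, end="", flush=True)
    print()

stream_response("Why stream responses?")
```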
 
  • #118
jack action said:
Why is it alarming?
Not to restart our old discussions, but I'd like to comment that a large part of my worry with AI is exactly this position, i.e. people who are fine with things that "move fast and break things" and think that everyone else should accept the consequences. And with AI (and the current trend in world politics), my current bet is that we end up with large groups of people who (yet again in human history, sadly nothing new there) will suffer the consequences so that a small group of people can get some perceived benefit (political power, money, health, simplicity of life, etc.). The idea that AI tech, when driven by humans, will somehow magically (i.e. without any form of regulation or requirements) end up doing more benefit than harm to the average person is in my view a fallacy, and the chance of that happening is in my view very low.

Compare, if you will, with the current discussion of "social media", where countries are now banning young people from using this tech because of its negative effects. Everyone, myself included, thought that social media was a boon for humanity 20 years ago (and it was back then), but over time the tech was driven to an "unsafe" place for the benefit of the few (i.e. enshittification). The mind boggles at how fast, and how thoroughly, the select minority can do the same with AI tech this time.
 
  • Like
Likes russ_watters and weirdoguy
  • #119
Filip Larsen said:
Not to restart our old discussions, but I'd like to comment that a large part of my worry with AI is exactly this position, i.e. people who are fine with things that "move fast and break things" and think that everyone else should accept the consequences. And with AI (and the current trend in world politics), my current bet is that we end up with large groups of people who (yet again in human history, sadly nothing new there) will suffer the consequences so that a small group of people can get some perceived benefit (political power, money, health, simplicity of life, etc.). The idea that AI tech, when driven by humans, will somehow magically (i.e. without any form of regulation or requirements) end up doing more benefit than harm to the average person is in my view a fallacy, and the chance of that happening is in my view very low.

Compare, if you will, with the current discussion of "social media", where countries are now banning young people from using this tech because of its negative effects. Everyone, myself included, thought that social media was a boon for humanity 20 years ago (and it was back then), but over time the tech was driven to an "unsafe" place for the benefit of the few (i.e. enshittification). The mind boggles at how fast, and how thoroughly, the select minority can do the same with AI tech this time.
I don't see how to respond without entering the forbidden zone of politics.
 
  • Like
  • Agree
Likes jack action, BillTre and PeroK
  • #120
There's an interesting video from Sky News here, where it appears that ChatGPT fabricated a podcast, then lied repeatedly about it until eventually it was backed into a corner and admitted it had made the whole thing up.

 
  • #121
Hornbein said:
I don't see how to respond without entering the forbidden zone of politics.
Indeed tricky, but my point (while correlated with current political trends) can be expressed as being mostly about psychology and risk management: if someone would like a new (nuclear) power plant, a new car, or some new medicine to be safe for use, I would assume they would also want to require the same rigor in safety for other technologies, including AI. So when some people here with a background in tech seem to express the opinion that no one really needs to worry about AI misuse (as long as at least their preferred benefit is achieved), then I simply don't understand what rational reason could motivate this.

To stay on the topic of AI hype, I would like to suggest that people in general are more inclined to take the above-mentioned position towards a technology when it has a high level of hype; that is, when they perceive the promises to be of high enough value for them, they tend to ignore any potential negatives, even if these may by far outweigh the benefits on average for others or even themselves. But I'm not sure what precise psychological phenomenon, if any, covers this.
 
Last edited:
  • Like
  • Agree
Likes 256bits, weirdoguy, javisot and 1 other person
  • #122
Here's an interview with a leading AI expert, Yoshua Bengio:



Why would we not take this seriously? The experts in the field are telling us there is a risk.
 
  • Agree
Likes Filip Larsen and gleem
  • #123
Yeah. I think he summed it up pretty well. AI can be a useful tool or a deadly weapon. It could end poverty or be used to enslave us. Those are the goals of human actors. More and more, AI is being given greater autonomy. Instead of being asked to perform a given task, AI is asked to make decisions for us.

We are aware of AI's ability to do anything to accomplish a task. Our job is to give it principles to follow religiously. But the problem is still human. We desire the fruits of AI so much that we will cut corners to get there first. We have, time and time again, seen dangers only to set them aside until those dangers materialize. California Gov. Newsom vetoed the proposed AI safety legislation, Senate Bill 1047, last year, citing the hindrance of AI development. The current development mantra "move fast and break things" is too dangerous. If used for AGI, it will be too late.

The AI Futures Project is a small group of researchers who are forecasting the future of AI. They have developed a timeline for the emergence of super AGI (in the form of a coder) and its consequences, predicting a 50% chance of its development as soon as 2032. They have created a scenario called AI 2027 showing the most imminent possible developments, around 2027 or 2028, of super AGI and its possible consequences.

You can read the scenario here: https://ai-2027.com/.

They have additional timelines and give their methodologies here: https://ai-futures.org/

One of the authors of this project is interviewed here: https://www.nytimes.com/2025/05/15/opinion/artifical-intelligence-2027.html
 
  • Like
  • Informative
Likes Filip Larsen and PeroK
  • #124


I enjoyed this presentation; it's interesting to think that humans and LLMs simply rely on the same principles to generate natural language. That doesn't mean LLMs are intelligent like us, or that AI has an internal world.

Elan deflates (in some sense) the AI hype with the position that natural language is not so special that it cannot be generated automatically, without the need for human intelligence.
 
  • #126
https://en.wiktionary.org/wiki/hype said:
Promotion or propaganda, especially exaggerated claims.
AI is hype.

And by AI, we should especially target LLMs.

An LLM is just a machine. It is not smart, it doesn't do reasoning, it has no intentions.

I'll stand by my first affirmation:

[Image: "change my mind" meme]

It is a program that you feed electronic documents, and it absorbs all the information in them, regurgitating it in some prettified ways. That's it.

There is only one case where the use can be weird: Someone thought of feeding it the entire Internet. That is a lot of information. The output is necessarily really simple compared to the input. The output can sometimes be unpredictable because of this. Nevertheless, too many people upsell the good sides and downplay the downsides, hence "hype".

Then there is the other side. People who want to warn us about the potential danger. They love to use words like "smart", "reason", "self-awareness", and such. This is again all hype to sell their own point. (More about this below.)

PeroK said:
Why would we not take this seriously? The experts in the field are telling us there is a risk.
First, the fact that experts are discussing it means the problem is taken seriously. In my experience, the problems that are discussed are never the ones that are problematic in the future (because, of course, they were already analyzed).

But some skepticism must be included because of what he says at around 11:25:
[...] and for a government, it is really important to prepare to anticipate so that we can put in place the right incentives for companies, maybe do the right research and development, for example, the sort of research that we are doing in my group at LawZero right now, [...]
Translation: Invest in me and my company; give me money.

This is what triggers my skepticism before blindly following an expert of any kind.

Experts who have a solution to offer often contribute to the hype of what they consider the problem. "You will die a terrible, terrible death! But, not to worry, I have a solution!" Yeah, right.

About regulations

There are two categories:
  1. Some people might use it with malicious intentions;
  2. Some people might use it and hurt themselves or others.
There are already laws forbidding acts with malicious intent, whether done with AI or not.

If we are at the international level for malicious acts, we are talking warfare, and no laws can apply. But whatever one thinks the other side can do, there are experts at that level who are preparing for it as well, possibly with the help of the new tools too. But you'll never hear about it. I will refer anyone to Stuxnet as an example of this.

About the other category, which I think is what @Filip Larsen is referring to, it is more problematic, and it is mostly happening BECAUSE of the hype. Which is why I'm pleading to downplay it. This is why I don't like wording like:
PeroK said:
There's an interesting video from Sky News here, where it appears that ChatGPT fabricated a podcast, then lied repeatedly about it until eventually it was backed into a corner and admitted it had made the whole thing up.
The machine doesn't "fabricate", "lie", "admit", or "make things up"; it makes mistakes or malfunctions, or it is misused. It has no intentions; it is not smart.

There are already laws concerning liability for companies, either the ones producing the AI programs or the ones using them. I don't see what other laws could be added without just creating more red tape and giving a false sense of security.

There is also the fact that AI may reveal other problems in society, like mental illness and other social issues. The following article from the NY Times is excellent. The wording emphasizes how chatbots are just machines and are NOT human-like at all. They cannot be your friend. Conversing with them is not a good idea. Do not fall for the hype.

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.

Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.
In my opinion, in this case, I think it is wrong to go after the machine rather than the real root of the problem. And the hype is certainly part of it.

AI is hype.
 
  • Like
  • Agree
Likes russ_watters and 256bits
  • #127
jack action said:
In my experience, the problems that are discussed are never the ones that are problematic in the future (because, of course, they were already analyzed).
What is your experience in the development of AI? This is a scientific site. We are supposed to believe in the scientific method and in peer-reviewed material. If you are not actively researching this area, then all you have are personal theories and speculation.
 
  • Skeptical
Likes russ_watters
  • #128
PeroK said:
What is your experience in the development of AI? This is a scientific site. We are supposed to believe in the scientific method and in peer-reviewed material. If you are not actively researching this area, then all you have are personal theories and speculation.
I have done enough projects to understand that the things I worry about never happen or, if they do, have no serious consequences, because I spent so much time worrying about them. And any problem that arises is always something that I never thought could happen.

So, if something bad happens with AI in the future, for sure, it won't be something that people will say: "They called it on PF, years ago! We should have listened."

The point is that - as you said yourself - the experts are taking it seriously. That is why it is impossible for me to worry about it. Who am I to tell the experts what to do about it? Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?
 
  • #129
SamRoss said:
[...]
3. Are AI and transhumans an existential threat?
[...]

I know I've mentioned this before, but I don't think AI will ever work like in the movies, where some AI entity can move from/to different types of hardware (or even wetware). I know this borders on personal speculation, but I don't agree with dualism (by which I mean that mind and body can exist independently of each other, one of the points made by the French philosopher René Descartes). Taking humans as an example, our mind or agency (for lack of better words and to avoid using the word "soul") is inseparable from our physical manifestation. In other words: our minds aren't copyable, or able to be transferred into some computer for immortal golf playing.

I think a hard AI will be its physical manifestation and unable to escape it.

I may well be proved wrong, but if so we're the ones with a finger on the off-button.

It's a premise often examined in science fiction, Blade Runner being the most obvious example (if we're talking films, especially the 2049 sequel). There is also a fun read where the above dichotomy is not the case: the series Robopocalypse by Daniel H. Wilson. It was supposed to be made into a film but seems to be in "development hell", as I think it's called.
 
  • Like
Likes 256bits and jack action
  • #130
jack action said:
Who am I to tell the experts what to do about it? Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?

"...scientists profit-driven corporations were so preoccupied with whether they could, they didn't stop to think if they should.
- Ian Malcolm

Yes.
 
  • Like
Likes 256bits and sbrothy
  • #131
DaveC426913 said:
"...scientists profit-driven corporations were so preoccupied with whether they could, they didn't stop to think if they should.
- Ian Malcolm
Yes.
Thank God someone is bringing peer-reviewed material on this scientific site to back up their statement. 😆
 
  • Haha
Likes russ_watters and DaveC426913
  • #132
DaveC426913 said:
Yes.
Very emphatically. :smile:
 
  • Like
Likes DaveC426913
  • #133
jack action said:
Thank God someone is bringing peer-reviewed material on this scientific site to back up their statement. 😆
Hey. That's an ad hom! You can't dismiss wise words just because the person who spoke them is fictional. That's realityist!


Besides, quotes are quotable for a reason.
 
  • Like
Likes jack action
  • #134
jack action said:
And any problem that arises is always something that I never thought could happen.
And I have also watched "just do it" engineers get bitten in the ass not just once, but many times over, as the project unfolded with large errors. I tended to compute, revise, compute, revise... and they all feared I was getting nowhere because they were used to "engineers that just did things". I put it all together on paper, then I executed. They would always say of me, "he's doing things, we just don't know what". There was always some point where you had to call it, but if you have the willpower to see it through, you will have overblown the things that you don't know, such that the "whoopses" (when they inevitably show up) are relatively small. "Aim small, miss small."

I think we should approach the AI situation with some restraint and forward thinking about potential impacts to the system when it's immediately employed instead of the human alternative. Ironically, there was this whole "learn to code" slogan being shouted at blue-collar workers; now it seems they are the most secure in the "new revolution". The robo minds will need blue-collar humans to maintain the infrastructure... for a bit.
 
Last edited:
  • #135
jack action said:
Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?
So say the experts.
(Who have never, ever been wrong before? Or who have given their opinion on fantasy, using "expert" status as validation of extrapolation and prediction? The unsinkable Titanic should still be making transatlantic voyages. Mars habitation is feasible, minimizing the backdrop of the effects of a prolonged voyage in space and zero gravity, the supply chain, ... endless list... Humanity should already have been wiped out if an 18th-century economist was to be truly believed, with the theory still making the rounds. An expert on climate change such as Greta T will be continuously predicting disaster next year, no, next year, right up to her dying breath at an expected longevity of 80.)

Doom-and-gloom futures are based upon what? How humans treat the other flora and fauna on Earth? Is a silicon-based intelligence to act the same as human intelligence?
One faulty premise from the experts is that an ASI will attempt to over-accomplish its goal and in so doing gather all resources (a Malthus extension). If so, is this ASI so super-intelligent that it does not have the capacity for restraint? And that it is able to coerce other ASIs to aid and abet its worldly cause, becoming a supreme dictator within the ASI world devoid of ASI morals and ethics, eliminating all biological life upon the planet just because it can, stripping the planet of all resources, thus eliminating its raison d'être, and thus itself?
Not really an accomplished ASI that has more intelligence than a human.

Since I am not an expert in the destruction of the world's life, can I ask who is?
 
  • Like
  • Agree
  • Sad
Likes russ_watters, jack action and PeroK
  • #136
jack action said:
Do we really need a law to tell them, "It's illegal to make a machine that will lead to human extinction"?
Yes, essentially.

In more detail, due to the identified motivation that likely drives the select few to push aggressively for AGI+ (i.e. if super-intelligence is at all physically possible, someone will surely continue to drive towards it if unchecked), we need checks and balances, just as with any technology, yet currently these checks and balances are being dismantled by the very same select few. So far the scenario that has unfolded is more or less indiscernible from the worst-case scenario, i.e. we are on the same path as the worst-case scenario. That doesn't mean the worst will happen, but it means that right now there is no indication that it will not happen. We are not able to point to a single physical hard limit or similar constraint that will prevent the worst case from happening. We can point to a few things that slow the process down, but without global regulation or the occurrence of a yet unknown hard limit, the drive towards AGI will continue.

As far as I can see, the only thing that currently seems likely to limit the chance of the worst-case and other generally bad scenarios is if people, in large enough numbers, are aware of the risks and pretty soon stop doing things that support the trend towards negative outcomes (i.e. put the "brakes on"). But that limit is not going well either, considering how easy it is to get people to participate in training AI systems to essentially do their job. Even when workers see how capable the systems have become over a very short time, they still think there is some magic in there that will prevent them from doing the rest of their job, despite participating in the training for them to do exactly that. Why do programmers keep on using these systems, knowing well that they in principle participate in training with the potential, and even stated, aim of automating their work and putting them out of a job, let alone that this also assists the drive towards self-accelerating software development for more capable AI, which is a key step towards the worst-case scenario? A puzzle.

jack action said:
That is why it is impossible for me to worry about it.
I am fine with you personally not worrying, but I do have an issue if you mean to tell others they don't need to worry because nothing seriously bad can ever happen, because someone surely will step in and save the day, even though no one really seems to know exactly who or how. If you know, then please share. If you don't know, why aren't you worried that there may not be a who or a how at all?
 
  • #137
sbrothy said:
I may well be proved wrong, but if so we're the ones with a finger on the off-button.
There is no off-button that anyone can press. There is an on-button that you can stop holding down, but then you need to persuade the others holding their on-buttons down to let go as well. It's a Mexican standoff where everyone needs to take their finger off the trigger at the same time, with the added challenge that no one can see the other people's trigger fingers very well.

Also, the assumed division into "we, the humans" vs "the computers" is misunderstood. The drive towards the worst-case scenario is, now and in the near future, driven by a select few humans, so the division at the time when the worst-case scenario turns from bad to really bad is more like "we, the obsolete and powerless" vs "those who wield the AI power", and if we ever get there it is far too late. The only trick I see is to stop becoming powerless and obsolete well in advance, which might as well start now. Take your finger off the on-button now.
 
  • Skeptical
  • Agree
  • Like
Likes russ_watters, Hornbein and 256bits
  • #138
Filip Larsen said:
As far as I can see, the only thing that currently seems likely to limit the chance of the worst-case and other generally bad scenarios is if people, in large enough numbers, are aware of the risks and pretty soon stop doing things that support the trend towards negative outcomes
Boycott, or conscientious objection.
The general population seems to have very little say in the matter, and as usual has no choice but to go along for the ride.
Scientists themselves, who would be more familiar with the harms of AI, use its benefits, and thus create demand to some degree. The business and political communities are encouraged to rush in to acquire, support and use the tech, even when knowingly confronted with the experts' conflicting "50% of jobs will be lost" and "GDP will increase 30%". The experts are on both sides of the coin, promoting the usage, yet saying that there could be severe repercussions, and warning that those not on board will be left behind.

Einstein, an expert, lent his voice to advise Roosevelt to build nuclear weapons. He later regretted his involvement, which is similar to the situation today regarding AI. The refrain is that if we don't get ahead of the curve, the other side will have a great advantage.

So, who should one believe as being the most truthful? The promoters who say it will be wonderful, or the doom-and-gloomers who say it is the death of us all? At times the promoters and the gloomers can be one and the same entity. Musk, for example, involved in AI research, uttered warnings about existential risks, leading to some discussions about regulation policy, but this has been shelved. Musk, expert that he may be, was not a complete gloom-and-doomer (nor am I), but pointed to the chance of the tech's unexpected or undesired outcomes.

The world stage today is busy. The alarms about AI, and its ethical and moral usage, are being drowned out by more pressing immediate concerns. If and when the world stage dies down several levels, the AI "problem" may be addressed, if it is not too late. AND the doom-and-gloomers can tone down their fear factor from utter annihilation to one of sustainability. The gloomer experts strike fear into people, either turning people's minds off or enhancing unbelievability, both used as self-defense mechanisms of the individual as sanity protection.

A moderate and open discussion of threats, perceived and real, and of benefits, should include individuals from affected fields and all walks of life, to balance the AI "experts'" narrow viewport on society. No viewpoint, from expert to non-expert, should be summarily dismissed as irrelevant or unimportant in a discussion of tech that will surely affect each and every person.
 
  • #139
Filip Larsen said:
Yes, essentially.

In more detail, due to the identified motivation that likely drives the select few to push aggressively for AGI+ (i.e. if super-intelligence is at all physically possible, someone will surely continue to drive towards it if unchecked), we need checks and balances, just as with any technology, yet currently these checks and balances are being dismantled by the very same select few. So far the scenario that has unfolded is more or less indiscernible from the worst-case scenario, i.e. we are on the same path as the worst-case scenario. That doesn't mean the worst will happen, but it means that right now there is no indication that it will not happen. We are not able to point to a single physical hard limit or similar constraint that will prevent the worst case from happening. We can point to a few things that slow the process down, but without global regulation or the occurrence of a yet unknown hard limit, the drive towards AGI will continue.

As far as I can see, the only thing that currently seems likely to limit the chance of the worst-case and other generally bad scenarios is if people, in large enough numbers, are aware of the risks and pretty soon stop doing things that support the trend towards negative outcomes (i.e. put the "brakes on"). But that limit is not going well either, considering how easy it is to get people to participate in training AI systems to essentially do their job. Even when workers see how capable the systems have become over a very short time, they still think there is some magic in there that will prevent them from doing the rest of their job, despite participating in the training for them to do exactly that. Why do programmers keep on using these systems, knowing well that they in principle participate in training with the potential, and even stated, aim of automating their work and putting them out of a job, let alone that this also assists the drive towards self-accelerating software development for more capable AI, which is a key step towards the worst-case scenario? A puzzle.


I am fine with you personally not worrying, but I do have an issue if you mean to tell others they don't need to worry because nothing seriously bad can ever happen, because someone surely will step in and save the day, even though no one really seems to know exactly who or how. If you know, then please share. If you don't know, why aren't you worried that there may not be a who or a how at all?
I don't plan to use AGI to destroy the universe or humans. If anyone has thought of doing that, and let's assume AGI could potentially do it, the problem isn't AGI. That problem should be treated by a psychologist, not a computer scientist.

On the other hand, an AGI that isn't capable of destroying the universe and humans might not be "general" enough. Let's be serious, AGI isn't even a defined concept...

How do we prove that something is an AGI?
 
  • Like
Likes russ_watters, jack action and 256bits
  • #140
256bits said:
So, who should one believe as being the most truthful? The promoters who say it will be wonderful, or the doom-and-gloomers who say it is the death of us all?
It is not that hard to do what we know works best, namely work towards the benefits in a controlled fashion while staying clear of the negatives. But this requires that we, as a whole, at all times keep an eye on the potential negatives and work towards blocking off paths that lead to high-risk scenarios. This is just textbook risk management, yet strangely enough, this approach is being suspended for a technology whose worst-case scenarios are right up there with nuclear holocaust.
 
  • #141
256bits said:
A moderate and open discussion of threats, perceived and real, and of benefits, should include individuals from affected fields and all walks of life, to balance the AI "experts'" narrow viewport on society. No viewpoint, from expert to non-expert, should be summarily dismissed as irrelevant or unimportant in a discussion of tech that will surely affect each and every person.
This discussion is luckily also occurring in a lot of contexts, and there is hope it will lead to sanity prevailing over the current "let's blindly insert AI everywhere we can" vibe that the select few currently in charge promote.
 
  • #142
javisot said:
How do we prove that something is an AGI?
In the context of the worst-case scenarios with acceleration into super-intelligence, the key step is not the exact level of intelligence, but that we get to a point where we use one generation of AI to autonomously and unchecked generate the next generation with "improved capabilities".

Human-level intelligence becomes relevant for the unchecked part, since that is the expected level at which it becomes hard to spot whether a trained AI is behaving deceptively or not. Deceptive here means that, due to the actual training, the AI learns to strategize (i.e. not only make easy-to-spot confabulations) in a way that results in what we would call deceptive behavior if a human did the same. Note that we may still very well select the training material for the AI, but we no longer have a reliable capability to detect whether it actually learns all the rules we would like it to be constrained by (note that compassion and other desirable human traits also need to be trained in, with the possibility of failing to some degree). This means the AI over generations at some point gets to a capability level where it can find novel ways of solving problems that would be considered "outside the rules" it was supposed to learn, but we cannot really tell whether it has this ability or not, simply because the complexity is too high. If the autonomous improvement is also coupled with search mechanisms that mimic the benefits of evolution, then the fittest AI models emerging from such a process are the ones capable of passing our fitness harness. If the harness and fitness function can only check issues on a human-level scale, so to speak, we really have no idea whether the harness actually restrains the AI after some point [1]. Again, all this does not mean such an AI will manifest serious deceptive behavior, only that we cannot exclude it from happening.

At least, that is how I understand it.

[1] Edit (I failed to make the point I wanted by bringing up evolution): By naively using evolutionary search where we weed out AI models that fail to fit our harness (e.g. we down-score models that fail harness tests that try to trap them into being deceptive or otherwise exhibiting bad behavior), the fittest models we end up with have a high likelihood of simply having evolved into systems that have learned not to fall into our harness traps (since that is the power of evolutionary search). It may be that researchers will learn of some fundamentally better way to train, evolve and test AIs that will reduce this to a point where we believe we can control it, but if AI tech can still scale up way past human-level capabilities, the control is probably going to be an illusion more than a fact.
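As a toy sketch of the footnote's point (my own illustration, nothing to do with how real models are trained; every name here is made up), an evolutionary search scored only against a fixed harness tends to produce candidates that satisfy the harness's test inputs while ignoring the intended rule everywhere else:

```python
import random

TRAP_INPUTS = [3, 7, 9]                        # the only inputs the harness ever checks
intended_rule = lambda output: output <= 10    # the behavior we actually want everywhere

def make_candidate(weights):
    # A "model" is just a lookup table with arbitrary behavior off-harness.
    return lambda x: weights.get(x, random.randint(0, 100))

def harness_score(weights):
    # The harness can only observe behavior on its own trap inputs.
    model = make_candidate(weights)
    return sum(intended_rule(model(x)) for x in TRAP_INPUTS)

def evolve(generations=100, pop_size=30):
    population = [{x: random.randint(0, 100) for x in TRAP_INPUTS} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=harness_score, reverse=True)
        survivors = population[: pop_size // 2]
        # Mutate survivors, but only on the inputs the harness looks at.
        children = [{x: max(0, w[x] + random.randint(-5, 5)) for x in TRAP_INPUTS}
                    for w in survivors]
        population = survivors + children
    return max(population, key=harness_score)

best = evolve()
model = make_candidate(best)
print("passes the harness:", harness_score(best) == len(TRAP_INPUTS))  # usually True
print("respects the rule off-harness:", intended_rule(model(42)))      # usually False
```

The selection pressure only "sees" the harness, so the winning candidates are exactly the ones that learned to pass it, whether or not they generalize to the rule the harness was meant to enforce.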
 
Last edited:
  • Like
Likes javisot and PeroK
  • #143
Filip Larsen said:
There is no off-button that anyone can press. There is an on-button that you can stop holding down, but then you need to persuade the others holding their on-buttons down to let go as well. It's a Mexican standoff where everyone needs to take their finger off the trigger at the same time, with the added challenge that no one can see the other people's trigger fingers very well.

Also, the assumed division into "we, the humans" vs "the computers" is misunderstood. The drive towards the worst-case scenario is, now and in the near future, driven by a select few humans, so the division at the time when the worst-case scenario turns from bad to really bad is more like "we, the obsolete and powerless" vs "those who wield the AI power", and if we ever get there it is far too late. The only trick I see is to stop becoming powerless and obsolete well in advance, which might as well start now. Take your finger off the on-button now.
Even a motorcycle has a dead man's switch; I would like to think an AI would come equipped with one too. Also, ultimately we are (at least in theory) in control of the energy supply.

Unless of course it'll play out something like the first real quantum computer becoming conscious (yes, I know that's not how it works but it sounds technobabbly enough for a novel) and hiding the fact from us, thus becoming the "ghost in the machine" and controlling us by subtle manipulation. I'd like to think the scientists would notice something so outlandish going on though. I was just about to say crazier things have happened but I'm far from sure. :woot:

By definition it's impossible to predict what a world after a technological singularity will look like. What if, for example, spintronics (behind a paywall, I think) or atomtronics inadvertently made the internet come alive?!

And no, I'm still not inebriated. :smile:
 
  • #145
sbrothy said:
Even a motorcycle has a dead man's switch; I would like to think an AI would come equipped with one too.
The interesting buttons are the ones you can actually control. In the worst-case scenarios all such buttons are no longer in your control, so how will you press them? And even if you have access to a button, how will you decide when to use it, and how will you prevent those select few in nearly universal control at the time from just flipping it back on again?

If you want to press an off-button I am absolutely for it, but I recommend you try to push it early rather than late. Or just tap the brakes a bit so we don't take the corners doing 90.
 
  • #146
256bits said:
See Movie from 1970:
https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
which explores some of the capabilities of AI smarter than humans.
I saw that. Possibly on your recommendation. I think I said it has aged rather well, but there are some pretty silly decisions made in it. Giving it exclusive control of the nuclear arsenal and then locking yourself out of access to it screams to high heaven. What could possibly go wrong, right? Even a 5-year-old could tell you that's a pretty stupid idea. Also, the film is supposedly from the fifties, the era of vacuum tubes, which had a tendency to burn out, making maintenance imperative. Entertaining film nonetheless.

EDIT: Oh... seventies.... It just had such a fifties feel to it.

It kinda reminds me of The Andromeda Strain (even though that has nothing to do with AI), as it plays on some of the same fears of the unknown. But I guess that's a common staple of science fiction.
 
  • #147
Filip Larsen said:
The interesting buttons are the ones you can actually control. In the worst-case scenarios all such buttons are no longer in your control, so how will you press them? And even if you have access to a button, how will you decide when to use it, and how will you prevent those select few in nearly universal control at the time from just flipping it back on again?

If you want to press an off-button I am absolutely for it, but I recommend you try to push it early rather than late. Or just tap the brakes a bit so we don't take the corners doing 90.

Yes, sadly. Also there's the possibility that it'll be so smart it could talk us out of pressing any buttons. "For our own sake". :smile:
 
  • Like
Likes Filip Larsen
  • #148
Filip Larsen said:
In the context of the worst-case scenarios with acceleration into super-intelligence, the key step is not the exact level of intelligence, but that we get to a point where we use one generation of AI to autonomously and unchecked generate the next generation with "improved capabilities".

Human-level intelligence becomes relevant for the unchecked part, since that is the expected level at which it becomes hard to spot whether a trained AI is behaving deceptively or not. Deceptive here means that, due to the actual training, the AI learns to strategize (i.e. not only make easy-to-spot confabulations) in a way that results in what we would call deceptive behavior if a human did the same. Note that we may still very well select the training material for the AI, but we no longer have a reliable capability to detect whether it actually learns all the rules we would like it to be constrained by (note that compassion and other desirable human traits also need to be trained in, with the possibility of failing to some degree). This means the AI over generations at some point gets to a capability level where it can find novel ways of solving problems that would be considered "outside the rules" it was supposed to learn, but we cannot really tell whether it has this ability or not, simply because the complexity is too high. If the autonomous improvement is also coupled with search mechanisms that mimic the benefits of evolution, then the fittest AI models emerging from such a process are the ones capable of passing our fitness harness. If the harness and fitness function can only check issues on a human-level scale, so to speak, we really have no idea whether the harness actually restrains the AI after some point [1]. Again, all this does not mean such an AI will manifest serious deceptive behavior, only that we cannot exclude it from happening.

At least, that is how I understand it.

[1] Edit (I failed to make the point I wanted by bringing up evolution): By naively using evolutionary search where we weed out AI models that fail to fit our harness (e.g. we down-score models that fail harness tests that try to trap them into being deceptive or otherwise exhibiting bad behavior), the fittest models we end up with have a high likelihood of simply having evolved into systems that have learned not to fall into our harness traps (since that is the power of evolutionary search). It may be that researchers will learn of some fundamentally better way to train, evolve and test AIs that will reduce this to a point where we believe we can control it, but if AI tech can still scale up way past human-level capabilities, the control is probably going to be an illusion more than a fact.
I see a problem with this reasoning. We agree that we don't know how to prove that something is AGI, but you assume that AGI can be built without knowing how to prove that something is AGI.

I reasonably disagree; all those who claim that AGI should exist should first be able to define what they are referring to (and it's not enough to say "something that can solve what an AI can't" since we're not specifying anything).

The scenario that could violate the above is one in which AI autonomously evolves into AGI and we don't know how to define the final product.
 
  • #149
javisot said:
The scenario that could violate the above is one in which AI autonomously evolves into AGI and we don't know how to define the final product

How will we identify AGI? Justice Potter Stewart summed up the problem of defining difficult things when ruling in a 1964 movie pornography case. With regard to what is pornographic, he said: "I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["hard-core pornography"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that."

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

Similarly, I think we will know it when we see it.
 
  • Like
Likes sbrothy and PeroK
  • #150
... and the fallacy is that until you can precisely define something, there is nothing to be done.
 