Is AI Overhyped?

Summary:
The discussion centers on the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #91
PeroK said:
There are three things I distrust when dealing with a complex issue. The first is statements that say that X must happen, that X is inevitable. The second is that X cannot possibly happen. The third is the statement that people who think X might happen have general deficiencies in their power of thought that invalidate their arguments.

This is supposed to be a scientific site where even when dealing with non-science topics we are supposed to exhibit evidence-based reasoning.

You might have been better simply to say that you do not believe that AI can possibly be a threat to humanity, because that's what your gut instinct tells you - and anyone who disagrees is a doom-mongering zealot. That's the gist of what you've said.
There is the human element of the issue, and there is the AI element of the issue. Without either, the issue is non-existent.
The doom-and-gloom discourse is very much one-sided, as if humans were not players but passive observers. I am saying that the gloomers' one-sided approach surely carries a bias of assumed human non-interaction.
No. I do not say that AI cannot be a threat to society. It already is, with regard to the disruptions, present and predicted, real or not, from ANI (no AGI or ASI yet available, if ever). If the future is to evolve as pleasantly as possible, some discourse has to be available in response to that bias. At present, humans are making the decisions regarding the utilization of AI, with outcomes positive or negative as the case may be. Doomsters are assuming or implying that AI will overtake human agency and subsequently determine the human fate. Disagreeing with that premise is surely worth consideration.

Quote:
There are three things I distrust when dealing with a complex issue. The first is statements that say that X must happen, that X is inevitable. The second is that X cannot possibly happen. The third is the statement that people who think X might happen have general deficiencies in their power of thought that invalidate their arguments.
/ Unquote
A sound argument, but it is deficient. There are ample cases where complex issues went astray, especially with the human element involved, and the scientific solution was later discredited.
One, for which a Nobel Prize was awarded, is the procedure of lobotomization to cure personality disorders, which left many recipients in a vegetative state. A cure I suppose it was, but quite a drastic one, especially for those who expressed only mild anxiety, or who were put into care by loved ones. One could come in with a problem, be diagnosed, treated, and released by mid-morning.
The deficiency became apparent with increased knowledge of the brain, the influx of chemical treatments, and proper diagnosis.
The lobotomizers were the "X must happen and is inevitable" camp (everyone must get a lobotomy).
The non-lobotomizers were the "X cannot happen" camp (those who did not receive the treatment).
The neuroscientists and pharmacologists were the nay-sayers to X. I do not consider these people to be inferior in thought.
 
  • #92
Rive said:
Though - you'd better keep in mind that even if something is hype now, it still may become an essential part of life later on.
Unless you are talking about a long time frame, by definition it's not hype!

Hype is perhaps not a well-defined word, but I would say it's not hype if it's a serious possibility in the time frame being considered.
 
  • #93
russ_watters said:
Sorry
Don't be, I'm not offended at all.
 
  • #94
I find it handy as a summariser
 
  • #95
SamRoss said:
"Is AI hype?"
I just asked Perplexity (AI) that question for fun: 😁

[Screenshot of Perplexity's answer]

Here is the full answer:
https://www.perplexity.ai/search/is-ai-hype-Kc9tumxtSIKZOOulzcB54Q
 
  • Like
  • Agree
  • Haha
Likes 256bits, russ_watters and PeroK
  • #96
The experts do not agree.

Ex-OpenAI Scientist Says Brain ‘Biological Computer.’ Astrobiologist Disagrees

https://www.msn.com/en-ca/money/tec...p&cvid=96f1948d96d04e38917dfb07493359d7&ei=33

The hype from someone in the AI field:
former OpenAI co-founder Ilya Sutskever ... said if the brain is a “biological computer” then why can’t we have a “digital brain”: “Slowly but surely we will see the AI getting better and the day will come when AI will do all the things that we can do. Not just some of them but all of them. Anything which I can learn, anything which any of you can learn, the AI could do as well.”

The reply from someone who knows more about the brain:
Stuart Hameroff, an astrobiologist and the Director of the Centre for Consciousness Studies:
“Ilya Sutskever is wrong on this. The brain is not a digital computer and not a computer at all, more like a quantum orchestra. Biology is based on organic carbon, which supports quantum processes and self-similar dynamics in hertz, kilohertz, megahertz, gigahertz, and terahertz in microtubules, composed of tubulin, the brain’s most abundant protein,”
“And while computers can learn, they’re not conscious, cannot feel, and have no intrinsic motivation. That’s why no AGI.”
 
  • Skeptical
Likes PeroK and BillTre
  • #97
256bits said:
The experts do not agree.

Ex-OpenAI Scientist Says Brain ‘Biological Computer.’ Astrobiologist Disagrees

https://www.msn.com/en-ca/money/tec...p&cvid=96f1948d96d04e38917dfb07493359d7&ei=33

The hype from someone in the AI field:
former OpenAI co-founder Ilya Sutskever ... said if the brain is a “biological computer” then why can’t we have a “digital brain”: “Slowly but surely we will see the AI getting better and the day will come when AI will do all the things that we can do. Not just some of them but all of them. Anything which I can learn, anything which any of you can learn, the AI could do as well.”

The reply from someone who knows more about the brain:
Stuart Hameroff, an astrobiologist and the Director of the Centre for Consciousness Studies:
“Ilya Sutskever is wrong on this. The brain is not a digital computer and not a computer at all, more like a quantum orchestra. Biology is based on organic carbon, which supports quantum processes and self-similar dynamics in hertz, kilohertz, megahertz, gigahertz, and terahertz in microtubules, composed of tubulin, the brain’s most abundant protein,”
“And while computers can learn, they’re not conscious, cannot feel, and have no intrinsic motivation. That’s why no AGI.”
There's a case that consciousness and emotions may require biological stimuli. But Conway's Game of Life proved that even a simple algorithm can produce self-replicating structures. A complex, self-adapting system would have no biological consciousness, but it could have an artificial motivation for self-preservation and other emergent functionality that is not inherent in its design.
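
For concreteness, here is a minimal sketch of those rules in plain Python (illustrative only; the glider below merely travels across the grid, and genuine self-replicating patterns are enormously larger constructions, but the point that rich behaviour emerges from trivial rules is the same):
```python
# Conway's Game of Life: two rules, nothing about "motion" anywhere.
# A live cell survives with 2 or 3 live neighbours; a dead cell is
# born with exactly 3. Edges are treated as dead.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbours of cell (r, c).
            n = sum(grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                    if (rr, cc) != (r, c))
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# Seed a glider: every 4 steps it moves one cell diagonally, emergent
# behaviour that is nowhere written into the rules above.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1

for _ in range(4):
    grid = step(grid)
print("\n".join("".join(".#"[v] for v in row) for row in grid))
```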

Hameroff's argument is essentially that he is so clever that he knows what is possible and what is not. We don't have to do experiments to know what computers can and can't ultimately do. We need no safeguards against AGI, because he personally can assure us a priori that it is not possible. This is the sort of person we should never trust.

Hameroff's argument boils down to: a computer can never develop biological intelligence, therefore a computer can never develop artificial intelligence. But, artificial is the operative word. AGI does not have to be literally a carbon-copy of the brain. AGI might emerge from complex non-biological algorithms. Or, it might not in the foreseeable future.

The question is whether computers - in this century, say - can do enough to replicate almost all human intelligence-based activities. AGI doesn't have to be able to do absolutely everything perfectly before it becomes a major competitor to humans.

What if Hameroff is wrong?
 
  • Like
  • Agree
Likes BillTre, DaveC426913, 256bits and 2 others
  • #98
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
 
  • Like
  • Agree
Likes javisot, 256bits and PeroK
  • #99
I am under 30, so I'm young enough to have had the internet for the majority of my childhood (and by extension, adulthood), and I remember chatbots from that time. They were so comically bad that people made "Let's Play" style YouTube videos of themselves interacting with them, sometimes to great amusement.

Then, many years later, ChatGPT was released. I had heard "GPT" in the context of image generation before, but otherwise it came out of the blue to me, and I imagine to much of the non-technical population as well.

As an individual, I am always skeptical of hype and trends, so I only learned of ChatGPT's capabilities through the news, social media and friends who talked about its capability. Some were afraid it would take over jobs due to its coding capabilities. Most people seemed to use it for their own entertainment rather than for practical gain.

Now, some time later, I have started using it for my research and ... I'm not impressed. It's excellent for compiling resources on relatively well-known subjects, but it must also be held to scrutiny, as I have been given wrong results several times, despite the bot's utmost confidence.

It is also great for preparing routine calculations, and unlike a script or spreadsheet, it can be tweaked to implement small differences simply by writing an instruction. For simple tasks, I have found it fairly reliable.

But if we consider the Wright Brothers' first flight day 0 of air travel, and remember that the moon landing took place less than 70 years later, it is certainly fascinating (if not unnerving) what progress will be made in our lifetime, if ChatGPT is likewise the very early stage of generally useful AI.
 
  • Like
Likes russ_watters and javisot
  • #100
Mayhem said:
But if we consider the Wright Brothers' first flight day 0 of air travel, and remember that the moon landing took place less than 70 years later, it is certainly fascinating (if not unnerving) what progress will be made in our lifetime, if ChatGPT is likewise the very early stage of generally useful AI.
Yes, considering the resources being poured into this very popular product we can expect rapid progress.

With AlphaGo and AlphaZero I knew the revolution was here but I didn't know which direction it would take.
 
  • #101
PeroK said:
What if Hameroff is wrong?
I am not aware of Hameroff's arguments in detail, but if we assume he (more or less) subscribes to the idea that quantum effects are required for the "essential" brain functions (e.g. consciousness, and perhaps even the capability for human-level intelligence), then, considering that this idea as far as I understand is still rather controversial (e.g. see https://en.wikipedia.org/wiki/Orchestrated_objective_reduction), it still seems most prudent in the context of a safety analysis to assume that the lack of quantum effects does not exclude the existence of digital systems capable of human-level brain functions.

But it sure would make my day more worry-free if we could positively point to the lack of quantum effects as a "natural" limit on AI capabilities, instead of just wishing for it.

Edit: fixed clumsy wording.
 
Last edited:
  • Like
Likes 256bits and PeroK
  • #102
Hornbein said:
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
Yeah. That is an unknown for the future.
PeroK said:
What if Hameroff is wrong?
Either of Ilya Sutskever or Hameroff can be right or wrong, regardless.
Not AGI, nor ASI, but just the lowly niche ANI is already disrupting parts of society, disruption being good or bad depending upon viewpoint (and thus not the Utopia the Silicon Valley types dream about and seem to present as being the ultimate future*).

Heck, Nvidia CEO Huang brought a cute Star Wars-looking mini robot (most likely remote-controlled, as one doesn't want surprises), with all the beep, bop and whirl sounds, on stage at a recent conference to acclimatize the audience to the inference that AI is so lovable it can't be anything but. A substitute for a puppy or kitten. It is such manipulative marketing that annoys me.
That is what I call hype, not the capabilities of the tech.

* This has been around a long time. Some of this dreamed-of automation has come to pass.
 
  • #103
Mayhem said:
But if we consider the Wright Brothers' first flight day 0 of air travel, and remember that the moon landing took place less than 70 years later, it is certainly fascinating (if not unnerving) what progress will be made in our lifetime, if ChatGPT is likewise the very early stage of generally useful AI.
AI research started around the 1940s or so.
By the flight-progress comparison, AI should already be a mature tech.

Robot tech is kind of mature, since it is based on material manipulation, physical materials which have constraints due to Newton's laws.
Not too long ago the inverted pendulum seemed incredible, and now it's ho-hum. If the micromouse races can squeak out a few more seconds from the run...

AI is based on mathematical manipulation, so those guys have to get their elbows up [ Canadian ] to keep up the pace, along with the computer-algorithm guys and the electronics guys. The neural net was stuck until the 1980s or so, when along came backpropagation and Hopfield. That moved the basic neural node from being a glorified AND gate into something that, as part of a neural net, allowed the net to learn voices, faces and language, something impossible beforehand.
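
To make that concrete, here is a toy sketch (my own illustration in Python/NumPy, not anyone's historical code) of exactly the step described above: a single-layer perceptron provably cannot learn XOR, but two layers trained with backpropagation can:
```python
# A 2-4-1 network learning XOR by backpropagation, something a single
# "glorified AND gate" (one-layer perceptron) cannot do.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule applied layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates (learning rate 0.5).
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

# Should print roughly [0, 1, 1, 0]; exact convergence depends on the
# random initialization.
print(out.round(2).ravel())
```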

That still took 50-60 years for the recent LLM breakthrough to occur.
Progress has seemed rapid, but it was slow going in the beginning.

PS: I just wondered, had there been no internet from which to scrape vast amounts of data for the neural nets to learn from, would LLMs be a thing?
 
  • #104
256bits said:
AI research started around the 1940s or so.
By the flight-progress comparison, AI should already be a mature tech.

That still took 50-60 years for the recent LLM breakthrough to occur.
Progress has seemed rapid, but it was slow going in the beginning.
Unlike organic intelligence which took only, um, 4 billion years to evolve.
 
  • Haha
  • Like
Likes russ_watters and 256bits
  • #105
Hornbein said:
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
Totally agree. One of the typical definitions of intelligence is "the ability to solve problems," but a machine capable of solving all existing problems isn't intelligent (no more intelligent than you, who created that machine).

We know how to automate problem solving, but that doesn't mean we're creating intelligent objects.
 
  • #106
PeroK said:
That still took 50-60 years for the recent LLM breakthrough to occur.
Progress has seemed rapid, but it was slow going in the beginning.
Unlike organic intelligence which took only, um, 4 billion years to evolve.
Well if you look at it that way, AI took 4,000,000,060 years to evolve. :smile:
 
  • Like
  • Haha
Likes jack action, russ_watters, 256bits and 1 other person
  • #107
PeroK said:
Unlike organic intelligence which took only, um, 4 billion years to evolve.
I'll update my phrase:
That still took 4 billion + 50-60 years for the recent LLM breakthrough to occur.
[ Carbon, then silicon. Even the stars say so. :) ]

Oops. I didn't see DaveC's post.
 
  • #108
Hornbein said:
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
What does this even mean?

Intelligence? Not well defined.

Natural Intelligence?
Artificial intelligence?
  • How do these two differ?
How do you distinguish between the natural and artificial versions? Is it simply the intervention of humans?

What is supposed to distinguish the natural from the artificial?

To me things seem much more convoluted than is being expressed here.
 
  • #109
This just in.
The chatterbots are going to university.
Maybe they'll even get a PhD, or two, or three, or..., with a library card.
If you think they were smug before...

AI chatbots need more books to learn from. These libraries are opening their stacks

https://www.msn.com/en-ca/money/top...N&cvid=6fc127164171453aaa57b07a50733529&ei=96

Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.
 
  • Like
  • Informative
Likes Klystron and dextercioby
  • #110
BillTre said:
What does this even mean?

Intelligence? Not well defined.

Natural Intelligence?
Artificial intelligence?
  • How do these two differ?
How do you distinguish between the natural and artificial versions? Is it simply the intervention of humans?

What is supposed to distinguish the natural from the artificial?

To me things seem much more convoluted than is being expressed here.
I think the distinction is similar to other areas. There are natural materials, such as wood, leather and wool. And, there are synthetic materials, such as plastics, that are "man-made".

Artificial intelligence means intelligence created or developed by humankind, as opposed to having evolved naturally.

Even though intelligence itself may not be totally well-defined, we have bona fide material on the cognitive skills that we expect humans to develop, and a machine could be assessed according to this. That said, AI will always have skills beyond what a human can have: perfect memory, speed of calculation, multilingual capacity, etc. Not to mention that a machine can perform almost 24x365.

It seems to me - and this is my opinion - foolishly arrogant to claim that humans must have an intelligence superior to any machine, even if we constrain that to the timeframe of this century. Arguments over the nature of intelligence, IMO, miss the point that it ultimately comes down to practical capability.
 
  • Like
Likes javisot and BillTre
  • #111
256bits said:
The hype from someone in the AI field:
former OpenAI co-founder Ilya Sutskever ... said if the brain is a “biological computer” then why can’t we have a “digital brain”: “Slowly but surely we will see the AI getting better and the day will come when AI will do all the things that we can do. Not just some of them but all of them. Anything which I can learn, anything which any of you can learn, the AI could do as well.”
Big whoop. First, this is still science fiction. Second, when we do that, all we will do is create what already exists: some sort of artificial human, no better or worse than a "natural" human. We can easily do that with sex right now. I highly doubt we will be able to do it more efficiently.

But the big fear is about superintelligence: a machine better than a human, with either more capacity, fewer flaws, or both. The previous quote doesn't indicate that at all. I would rather think it shows the opposite: it is a complete utopia. If a superior way could exist, it already would. And if it could, I highly doubt that a form of intelligence that took 4 billion years to evolve will be surpassed in, say, 100 years.

Also, nothing evolves by itself; its environment also evolves with it.
 
  • Like
Likes 256bits and BillTre
  • #112
BillTre said:
What does this even mean?

Intelligence? Not well defined.

Natural Intelligence?
Artificial intelligence?
  • How do these two differ?
How do you distinguish between the natural and artificial versions? Is it simply the intervention of humans?

What is supposed to distinguish the natural from the artificial?

To me things seem much more convoluted than is being expressed here.
To clarify, in case it wasn't apparent: @Hornbein was being facetious. Those are not his words, but a paraphrasing of Hameroff's claims. Which Horbien explicitly states do not impress him.

[Update] Oops. I mistyped @Hornbein's name as Horbien, which technically translates as "good dread".
 
Last edited:
  • #114
Hornbein said:
As a human trained in science and math, I find this article quite sobering, nay, even alarming. Here are a few select quotes:
Although the group did eventually succeed in finding 10 questions that stymied the bot, the researchers were astonished by how far AI had progressed in the span of one year. Ono likened it to working with a “strong collaborator.” Yang Hui He, a mathematician at the London Institute for Mathematical Sciences and an early pioneer of using AI in math, says, “This is what a very, very good graduate student would be doing—in fact, more.” The bot was also much faster than a professional mathematician, taking mere minutes to do what it would take such a human expert weeks or months to complete.

By the end of the meeting, the group started to consider what the future might look like for mathematicians. Discussions turned to the inevitable “tier five”—questions that even the best mathematicians couldn't solve. If AI reaches that level, the role of mathematicians would undergo a sharp change. For instance, mathematicians may shift to simply posing questions and interacting with reasoning-bots to help them discover new mathematical truths, much the same as a professor does with graduate students. As such, Ono predicts that nurturing creativity in higher education will be a key in keeping mathematics going for future generations.

“I’ve been telling my colleagues that it’s a grave mistake to say that generalized artificial intelligence will never come, [that] it’s just a computer,” Ono says. “I don’t want to add to the hysteria, but in some ways these large language models are already outperforming most of our best graduate students in the world.”
 
  • Like
  • Informative
Likes russ_watters, PeroK and 256bits
  • #115
renormalize said:
As a human trained in science and math, I find this article quite sobering, nay, even alarming.
Why is it alarming? Isn't it the goal of creating AI? Learning the theory and applying it to practical cases.

The dream that was sold to me with AI was to discover (quickly) new molecules for drugs that can cure illnesses. I don't see anything alarming with that, quite the opposite.
 
  • Like
  • Sad
Likes russ_watters, Filip Larsen and 256bits
  • #116
renormalize said:
This is what a very, very good graduate student would be doing—in fact, more.” The bot was also much faster than a professional mathematician, taking mere minutes to do what it would take such a human expert weeks or months to complete.
If the AI bot were slower there wouldn't be so much slurpy enthusiastic wonderment. A statistical token-manipulation machine doing comparisons at billions per second should be fantastic. Slow the processor down to MHz, or down to human speed, and the bot would take weeks or months to complete, or years. The Slurpee would turn into Kool-Aid.

renormalize said:
“I’ve been telling my colleagues that it’s a grave mistake to say that generalized artificial intelligence will never come, [that] it’s just a computer,” Ono says. “I don’t want to add to the hysteria, but in some ways these large language models are already outperforming most of our best graduate students in the world.”
The lack of credit towards the human inventors is apparent. The AI is the focus, which is a natural human response. Rolling down a car window by button, rather than crank, elicited similar emotions when that was new tech.

I suppose the debate would be whether this is being creative and acting with reason.
 
  • Sad
  • Like
Likes PeroK and javisot
  • #117
256bits said:
If the AI bot was slower there wouldn't be so much slurpy enthusiastic wonderment. A statistical token manipulation machine doing comparisons at billions per second should be fantastic. Slow the processor down to MHz, or down to human speed, and the bot will take weeks or months to complete, or years. The slurpy would turn into cool-aid.
In the case of ChatGPT, to reduce the perceived response time, the message is not completely constructed and then sent; instead, you'll notice that you can see the response being constructed from the beginning.

It reduces the wait and is more realistic. When a human speaks live, they deliver word by word, not entire messages.
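
For what it's worth, a minimal sketch of that streaming idea (a plain Python generator standing in for the model; this shows the pattern, not OpenAI's actual implementation):
```python
# Streaming a reply token by token instead of sending it all at once:
# the reader sees the first words immediately while the rest is still
# being "generated".
import time

def generate_tokens(prompt):
    # Stand-in for a real model, which would sample one token at a time.
    for token in "AI is a tool with real strengths and real limits .".split():
        time.sleep(0.3)  # pretend each token takes time to compute
        yield token

for token in generate_tokens("Is AI hype?"):
    print(token, end=" ", flush=True)
print()
```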
 
  • #118
jack action said:
Why is it alarming?
Not to restart our old discussions, but I'd like to comment that a large part of my worry with AI is this position, i.e. people who are fine with things that move fast and break stuff and think that everyone else should suffer the same consequences. And with AI (and the current trend in world politics), it's my current bet that we end up with large groups of people who (yet again in human history, sadly nothing new there) will suffer the consequences so that a small group of people can get some perceived benefits (political power, money, health, simplicity of life, etc.). The idea that AI tech, when driven by humans, will somehow magically (i.e. without any form of regulation or requirements) end up doing more benefit than harm to the average person is in my view a fallacy, and the chances of that happening are in my view very low.

Compare, if you will, with the current discussion of "social media", where countries are now banning young people from using this tech because of the negative effects. Everyone, myself included, thought that social media was a boon for humanity 20 years ago (and it was back then), but over time the tech was driven to an "unsafe" place for the benefit of the few (i.e. enshittification). The mind boggles at how fast and how thoroughly a select minority can this time do the same with AI tech.
 
  • Like
Likes russ_watters and weirdoguy
  • #119
Filip Larsen said:
Not to restart our old discussions, but I'd like to comment that a large part of my worry with AI is this position, i.e. people who are fine with things that move fast and break stuff and think that everyone else should suffer the same consequences. And with AI (and the current trend in world politics), it's my current bet that we end up with large groups of people who (yet again in human history, sadly nothing new there) will suffer the consequences so that a small group of people can get some perceived benefits (political power, money, health, simplicity of life, etc.). The idea that AI tech, when driven by humans, will somehow magically (i.e. without any form of regulation or requirements) end up doing more benefit than harm to the average person is in my view a fallacy, and the chances of that happening are in my view very low.

Compare, if you will, with the current discussion of "social media", where countries are now banning young people from using this tech because of the negative effects. Everyone, myself included, thought that social media was a boon for humanity 20 years ago (and it was back then), but over time the tech was driven to an "unsafe" place for the benefit of the few (i.e. enshittification). The mind boggles at how fast and how thoroughly a select minority can this time do the same with AI tech.
I don't see how to respond without entering the forbidden zone of politics.
 
  • Like
  • Agree
Likes jack action, BillTre and PeroK
  • #120
There's an interesting video from Sky News here, where it appears that ChatGPT fabricated a podcast, then lied repeatedly about it until eventually it was backed into a corner and admitted it had made the whole thing up.

 
