Is AI Overhyped?

Thread summary:
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #331
256bits said:
If found to be true, it would be interesting to explore the implications, and motivations, of students who would accept an education from an AI, rather than from human interaction.
My point is that I don't want to argue at this level. I don't want to spend my time trying to prove to you that AI is in widespread use among university students. You've helped derail this debate and that's one reason I stopped contributing.

What you want to believe is of no relevance to the debate.
 
  • Sad
  • Like
Likes weirdoguy and 256bits
  • #332
PeroK said:
First, certain users are simply stating their opinion, as they have done from the outset, that AI ain't going to happen because they know so. There are arguments in the first so many posts, but the response is more or less "the experts are always wrong and I know better".

That's exactly what we see increasingly in our society, leading to things like a vaccine denier becoming US Health Secretary and climate change deniers in government everywhere.

I see a parallel between AI deniers and CC deniers that no arguments or peer-reviewed papers make a dent in their self-confidence that they know better than the experts.

I'm guided by the experts on this and also by the argument that there is a risk. To say there is no risk and simply trash the idea is the common theme of an idiocracy. PF is supposed to be guided by peer-reviewed papers or, at the very least, by expert opinion. Not by users who prefer their own homespun ideas.

Note there are almost no students using PF anymore. They are all using LLM's! It doesn't matter how often users on here deny this and say it can't be happening. This is just another example (like CC denial) of ignoring the evidence in front of us. The evidence of the massive and growing impact of AI on society is there. I know the deniers claim they don't see it, or they don't believe it, or they can ignore the increasing ubiquity of AI. Everyone is wrong but them: governments, businesses, AI experts (e.g. Geoffrey Hinton).

This thread should never have been allowed to continue with so many unsubstantiated personal opinions. Instead, we should be discussing the risks through the evidence we see around us and as documented by those who are active in the field. Any attempt to discuss the risks has been drowned out by the AI deniers.

This thread has far too little scientific content. We would never allow this on any other subject.
To say all this, PeroK, you've had to ignore the opinions of many experts who say the opposite. Throughout the entire thread you haven't shared any scientific references to support your point of view. You've simply given us your opinion on AI, and who deserves to be answered and who doesn't.

The vaccine denier is more like someone who believes in AGI without any basis, simply because some businesspeople advertise it that way.
 
  • Like
Likes jack action and 256bits
  • #333
Speaking of research papers, analyses and experiments on the current approach of step-wise reasoning or "chain of thought" for improving LLMs' reasoning capabilities outside the training material are beginning to find that this approach may not really be working as reasoning, but more like yet another form of probabilistic pattern matching. A recent example is referenced below (chosen because it also has a nice layman's article to explain it), but I have seen a few such (pre-print) papers in the last month or so. However, LLMs are still getting better and better at generating plausible-looking output in pretty much any domain, meaning non-experts in that domain will have a diminishing ability or incentive to spot if and where something is fishy.

https://arstechnica.com/ai/2025/08/...at-logical-inference-good-at-fluent-nonsense/
which refers to the pre-print
https://arxiv.org/pdf/2508.01191
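
To make the distinction concrete, here is a minimal toy sketch (my own illustration, not the paper's actual test setup) of what "reasoning outside the training material" means in practice: the same task with familiar versus unfamiliar surface symbols, with and without a "think step by step" instruction.

```python
# Minimal toy sketch (hypothetical prompts, not the paper's harness) of the
# distinction being tested: direct answering vs. "step by step" prompting, and
# how the same task looks when its surface form is shifted away from what the
# model is likely to have seen in training.

def direct_prompt(question: str) -> str:
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    return f"Question: {question}\nLet's think step by step, then state the final answer."

# Same underlying task, unfamiliar symbols -- roughly the kind of distribution
# shift the paper probes.
familiar = "If A=1, B=2 and C=3, what is A+B?"
shifted = "If #=1, @=2 and &=3, what is #+@?"

for q in (familiar, shifted):
    print(chain_of_thought_prompt(q))
    print("---")
# If chain-of-thought were genuine reasoning, performance should carry over to
# the shifted form; the reported finding is that it often degrades instead,
# which is what "probabilistic pattern matching" refers to above.
```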
 
  • Informative
  • Like
Likes PeroK and javisot
  • #334
256bits said:
Do you mean non-biological?
Animals appear to have some cognitive ability.
Yes, thank you for asking for clarification. I do mean non-biological or extraterrestrial. I laugh at people who disparage cats and dogs as 'dumb animals'.
 
  • #335
A nice read, which again emphasizes how the high degree of hype will very likely incite the typical "untrained" user conversing with one of the current LLMs to wrongly anthropomorphize it and expect capabilities that simply are not there, instead of treating it like a sort of advanced search engine that can give very detailed but only probable answers, which may be way off outside its training:
https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/

It also (again) confirms my worry that people (and likely also some "organizations of people") will surely shoot themselves (and ultimately the rest of us too) in the head as long as they think it works well enough for them to succeed at whatever goal they have.

(All this is not as such new information in this thread, but the piece is a nice explanation, perhaps useful to point coworkers, friends and family towards if they "need it").
 
  • Like
Likes ShadowKraz and javisot
  • #337
jack action said:
Thank you for using those words! That's what I've been saying all along.
You are welcome.

However, I now feel compelled to reiterate my position that this unfortunately does not address any of the associated long-term risks, because the main drive is still not really to evolve an advanced search engine but to "win the AI world-domination race" via whatever degree of self-acceleration is possible at any given time. Personally, my bet is that agent/simulation-assisted reasoning will be the next leap forward, but no matter what, the big players will surely search for that leap in any way possible.
 
  • Like
Likes Hornbein, PeroK and ShadowKraz
  • #338
@Filip Larsen
Although I am not an expert, and this is just my opinion, I highly doubt that AGI - if it happens - will come from LLM technology or some extension of it. We will most likely need something else that has not been developed yet. It's like thinking we can produce an explosion like a nuclear bomb's by improving gunpowder technology.
 
  • Like
Likes russ_watters, ShadowKraz, gleem and 1 other person
  • #339
CNN article discussing the title question:

OpenAI’s latest version of its vaunted ChatGPT bot was supposed to be “PhD-level” smart. It was supposed to be the next great leap forward for a company that investors have poured billions of dollars into.

Instead, ChatGPT got a flatter, more terse personality that can’t reliably answer basic questions. The resulting public mockery has forced the company to make sweaty apologies while standing by its highfalutin claims about the bot’s capabilities.

In short: It's a dud.

The misstep on the model, called GPT-5, is notable for a couple of reasons.

1. It highlighted the many existing shortcomings of generative AI that critics were quick to seize on (more on that in a moment, because they were quite funny).

2. It raised serious doubts about OpenAI’s ability to build and market consumer products that human beings are willing to pay for. That should be particularly concerning for investors, given OpenAI, which has never turned a profit, is reportedly worth $500 billion.

Let’s rewind a bit to last Thursday, when OpenAI finally released GPT-5 to the world — about a year behind schedule, according to the Wall Street Journal. Now, one thing this industry is really good at is hype, and on that metric, CEO Sam Altman delivered.
[Emphasis in original]
https://edition.cnn.com/2025/08/14/business/chatgpt-rollout-problems
 
  • Like
  • Informative
Likes ShadowKraz, nsaspook, jack action and 1 other person
  • #340

Bubbling questions about the limitations of AI

https://www.npr.org/2025/08/23/nx-s1-5509946/bubbling-questions-about-the-limitations-of-ai
SCOTT DETROW, HOST:

I just asked ChatGPT to write an introduction to a radio segment about artificial intelligence. My prompt - write a 30-second introduction for a radio news segment. The topic of the segment - how after years of promise and sky-high expectations, there are suddenly doubts about whether the technology will hit a ceiling. Here's part of what we got.

(Reading) For years, it was hailed as the future - a game-changer destined to reshape industries, redefine daily life and break boundaries we haven't even imagined. But now the once-limitless promise of this breakthrough technology is facing new scrutiny. Experts are asking, have we hit a ceiling?

So that was ChatGPT. Handing the wheel back to humans - MIT put out a report this past week throwing cold water on the value of AI in the workplace. Consumers were disappointed by the newest version of ChatGPT released earlier this month. OpenAI CEO Sam Altman floated the idea of an AI bubble, and tech stocks took a dip.
Heavy promotion led to great expectations.
DETROW: Let's just start with ChatGPT in the latest version. Was it really that disappointing?

NEWPORT: It's a great piece of technology, but it was not a transformative piece of technology, and that's what we had been promised ever since GPT-4 came out, which is, the next major model was going to be the next major leap, and GPT-5 just wasn't that.

DETROW: One of the things you pointed out in your recent article is that there have been voices saying, it's not a given that it's always going to be exponential leaps, and they were really drowned out in recent years. And kind of the prevailing thinking was, of course it's always going to be leaps and bounds until we have superhuman intelligence.

NEWPORT: And the reason why they were drowned out is that we did have those leaps at first. So there was an actual curve. It came out in a paper in 2020 that showed, this is how fast these models will get better as we make them larger, and GPT-3 and GPT-4 fell right on those curves. So we had a lot of confidence in the AI industry that, yeah, if we keep getting bigger, we're going to keep moving up this very steep curve. But sometime after GPT-4, the progress fell off that curve and got a lot flatter.

DETROW: ChatGPT is the leader. It is the most high-profile of all of these models out there, so obviously, this is a big data point. But what are you looking at to get a sense of, is this just one blip, or what is the bigger picture here?

NEWPORT: This is an issue across all large language models. Essentially, the idea that simply making the model bigger and training it longer is going to make it much smarter - that has stopped working across the board. We first started noticing this around late 2023, early 2024. All of the major large language models right now have shifted to another way of getting better. They're focusing on what I call post-training improvements, which are more focused and more incremental, and all major models from all major AI companies are focused on this more incremental approach to improvement right now.

From Fortune Magazine: MIT report: 95% of generative AI pilots at companies are failing
https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

From Forbes, Why 95% Of AI Pilots Fail, And What Business Leaders Should Do Instead
https://www.forbes.com/sites/andrea...-and-what-business-leaders-should-do-instead/

Link to MIT report - https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

My institution is pushing us into AI with the expectation that AI/ML will make us more productive and responsive to the market. I've seen examples of AI generating faulty research and reports, since AI cannot discern between good (valid, correct) and poor (erroneous, faulty) information. I reviewed an AI-generated report that had been reviewed and approved, but it contained numerous errors, partly due to faulty input data. The report also did not state that an AI LLM had been used to generate it, which should have been disclosed so that it received appropriate scrutiny.


Edit/update: ‘It’s almost tragic’: Bubble or not, the AI backlash is validating what one researcher and critic has been saying for years
https://fortune.com/2025/08/24/is-ai-a-bubble-market-crash-gary-marcus-openai-gpt5/

First it was the release of GPT-5 that OpenAI “totally screwed up,” according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” The Verge reported on comments by the OpenAI CEO. Then it was the sweeping MIT survey that put a number on what so many people seem to be feeling: a whopping 95% of generative AI pilots at companies are failing.

A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given the increasing dominance of that index by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into dotcom bubble 2.0. To be sure, fears about the AI trade aren’t the only factor moving markets, as evidenced by the S&P 500 snapping a five-day losing streak on Friday after Jerome Powell’s quasi-dovish comments at Jackson Hole, Wyoming, as even the hint of openness from the Fed chair toward a September rate cut set markets on a tear.

Gary Marcus has been warning of the limits of large language models (LLMs) since 2019 and warning of a potential bubble and problematic economics since 2023. His words carry a particularly distinctive weight. The cognitive scientist turned longtime AI researcher has been active in the machine learning space since 2015, when he founded Geometric Intelligence. That company was acquired by Uber in 2016, and Marcus left shortly afterward, working at other AI startups while offering vocal criticism of what he sees as dead-ends in the AI space.
 
Last edited:
  • Like
Likes Lord Jestocost
  • #341
Despite our species’ love affair with oversimplifying and trying to reduce things down to binary decisions, judgements, and perception, we and all the other animals on this planet use multi-valued logic and cognition. If we didn’t, it wouldn’t be “flight, fight, feed, or fu… reproduce” and animal life as we know it, including ourselves, on this planet would never have gotten started.
The science, technology, political, social, and religious beliefs, from fire and why does it thunder on up, that we developed and use are NOT based in binary thinking nor were they developed with an organ physically based on binary circuitry. Morals and ethics are not straightforward binary systems despite the claims of fanatics and the ignorant… both groups of which are shining examples of their non-binary nature.
I do think that if we want to develop a non-biological intelligence/sentience, we need to re-think computing languages and hardware, to develop new ones that can handle multi-valued logic and reasoning.
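
To give a concrete flavor of what "multi-valued logic" means here, a minimal toy sketch of my own (written in ordinary software on binary hardware, so only an illustration of the concept, not a proposal for a new language): Łukasiewicz-style fuzzy operators work on truth values anywhere between 0 and 1.

```python
# Toy illustration of multi-valued logic (Lukasiewicz-style fuzzy operators):
# truth values live anywhere in [0, 1] instead of only {0, 1}.

def f_not(a: float) -> float:
    return 1.0 - a

def f_and(a: float, b: float) -> float:   # strong conjunction
    return max(0.0, a + b - 1.0)

def f_or(a: float, b: float) -> float:    # strong disjunction
    return min(1.0, a + b)

# "Is it a threat?" and "am I hungry?" need not be yes/no judgements.
threat, hungry = 0.75, 0.5
print(f_and(threat, hungry))   # 0.25
print(f_or(threat, hungry))    # 1.0
print(f_not(threat))           # 0.25
```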
 
  • Skeptical
  • Like
Likes 256bits, PeroK and jack action
  • #342
ShadowKraz said:
I do think that if we want to develop a non-biological intelligence/sentience, we need to re-think computing languages and hardware, to develop new ones that can handle multi-valued logic and reasoning.
This is generally the approach with "modern" AI using artificial neural networks to produce "fluffy" classification systems. Perhaps you are thinking of the earlier "classic" AI approach that was based on (binary or even fuzzy) logic and rules?
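
As a small toy illustration of what I mean by "fluffy" (my own sketch, not any particular system): the output of a neural-network classifier is a graded distribution over possibilities rather than a hard yes/no verdict, even though the whole thing runs on binary hardware.

```python
import math

# Toy example: a neural classifier's output layer gives graded confidences,
# not a binary verdict, even though it runs on binary hardware.

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.1, 0.3, -1.0]              # raw scores for classes "cat", "dog", "car"
probs = softmax(logits)
print([round(p, 3) for p in probs])    # [0.826, 0.137, 0.037] -- "mostly cat"
```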
 
  • Like
Likes ShadowKraz and 256bits
  • #343
ShadowKraz said:
I do think that if we want to develop a non-biological intelligence/sentience, we need to re-think computing languages and hardware, to develop new ones that can handle multi-valued logic and reasoning.

Easy to say in the general sense.

Alternative AGI approach - bottom up.

The COG Project at MIT is an example: that group thought philosophically about human intelligence and designed an AGI into their 'robot' from its conception in 1992. It took more of a 'world view' approach, as compared to the somewhat monolithic view of LLMs. Recently, some of the chatter is that 'the world view' is what will bring about AGI.

The project may have petered out over the years as the advancement of capable neural nets became the rage for research and funding, or it just was not giving the results they wished for. I'm not sure of the present status of the project. (Some of the videos are in QuickTime.)
http://www.ai.mit.edu/projects/cog/Publications/CMAA-group.pdf
http://www.ai.mit.edu/projects/cog/methodology.html
http://www.ai.mit.edu/projects/cog/cog_shop_research.html


It was in the press at the time
http://www.ai.mit.edu/projects/cog/cog_in_the_media.html with headlines such as
"2001 is just around the corner. Where's HAL?",
 
Last edited:
  • #344
Filip Larsen said:
This is generally the approach with "modern" AI using artificial neural networks to produce "fluffy" classification systems. Perhaps you are thinking of the earlier "classic" AI approach that was based on (binary or even fuzzy) logic and rules?
In part, but the hardware and software are still both based in binary.
 
  • #345
ShadowKraz said:
In part, but the hardware and software are still both based in binary.
As far as I understand, the current expectation is that "true" neuromorphic hardware will mostly influence energy consumption and perhaps speed, thus perhaps allowing better scalability per dollar, but the overall neural network architecture is as such independent of this.

But yes, since "hardware" providing human-level intelligence obviously can be packed into around 1.3 liters consuming only around 100 W (but then takes around 15-20 years to train), there seems to be ample room for improvement with current AI hardware. I was not able to find any reliable and current numbers on the relative power consumption between, say, a "standard" AI GPU and one of the new NPU systems (for running the same problem), but I gather the GPU/NPU power ratio right now is at best still well under 10. The aim with NPUs currently seems to be to enable local prediction on mobile phones etc. without compromising too much on model size.

Regarding recent trends in power consumption: https://arstechnica.com/ai/2025/08/...energy-cost-of-ai-queries-by-33x-in-one-year/
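
As a rough back-of-envelope using only the figures above (roughly 100 W sustained over a 15-20 year "training run"), and assuming nothing beyond unit conversion:

```python
# Back-of-envelope: energy budget of the ~100 W, 15-20 year "training run"
# of the biological reference system mentioned above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def training_energy_kwh(power_watts: float, years: float) -> float:
    joules = power_watts * years * SECONDS_PER_YEAR
    return joules / 3.6e6          # 1 kWh = 3.6 MJ

print(f"{training_energy_kwh(100, 15):,.0f} kWh")   # 13,149 kWh
print(f"{training_energy_kwh(100, 20):,.0f} kWh")   # 17,532 kWh
# Widely reported estimates put frontier-model training runs in the
# gigawatt-hour range, i.e. orders of magnitude above this, which is the
# sense in which there is ample room for improvement.
```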
 
  • #346
Filip Larsen said:
As far as I understand, the current expectation is that "true" neuromorphic hardware will mostly influence energy consumption and perhaps speed, thus perhaps allowing better scalability per dollar, but the overall neural network architecture is as such independent of this.

But yes, since "hardware" providing human-level intelligence obviously can be packed into around 1.3 liters consuming only around 100 W (but then takes around 15-20 years to train), there seems to be ample room for improvement with current AI hardware. I was not able to find any reliable and current numbers on the relative power consumption between, say, a "standard" AI GPU and one of the new NPU systems (for running the same problem), but I gather the GPU/NPU power ratio right now is at best still well under 10. The aim with NPUs currently seems to be to enable local prediction on mobile phones etc. without compromising too much on model size.

Regarding recent trends in power consumption: https://arstechnica.com/ai/2025/08/...energy-cost-of-ai-queries-by-33x-in-one-year/
"Seconds of TV" is now the new energy AI standard? My old tube type TV, a massive 4K TV or streamed to a smart-phone TV.
Use something we all understand, like the Gasoline gallon equivalent.
 
  • #347
AI-generated scientific hypotheses lag human ones when put to the test
Machines still face hurdles in identifying fresh research paths, study suggests
https://www.science.org/content/art...tific-hypotheses-lag-human-ones-when-put-test
In May, scientists at FutureHouse, a San Francisco–based nonprofit startup, announced they had identified a potential drug to treat vision loss. Yet they couldn’t fully claim the discovery themselves. Many steps in the scientific process—from literature search to hypothesis generation to data analysis—had been conducted by an artificial intelligence (AI) the team had built.

All over the world, from computer science to chemistry, AI is speeding up the scientific enterprise—in part by automating something that once seemed a uniquely human creation, the production of hypotheses. In a heartbeat, machines can now scour the ballooning research literature for gaps, signaling fruitful research avenues that scientists might otherwise miss.

This is relevant to my work since I was recently handed a list of AI/ML techniques for elucidating various aspects of nuclear fuel design, manufacturing and performance. It's not simple, because there are multiple designs (all using U, but they could use Th, Pu and mixtures), a multitude of ways to manufacture nuclear fuel depending on the type and materials, and multiple performance environments, each unique to a given nuclear reactor and its operating cycle. Complicating the matter is the fact that detailed design and manufacturing data are proprietary (IP, trade secret), and what a government lab might produce at pilot scale may not reflect commercial industrial-scale production, where instead of a few kgs, one processes many metric tonnes.

But how good are the ideas? A new study, one of the largest of its kind, finds the AI-generated hypotheses still fall short of human ones, when researchers put them through real-world tests and get human evaluators to compare the results. But not by much. And maybe not for long.

A paper describing the experiment, posted to the arXiv preprint server in June, suggests AI systems can sometimes embellish hypotheses, exaggerating their potential importance. The study also suggests AI is not as good as humans at judging the feasibility of testing the ideas it conjures up, says Chenglei Si, a Ph.D. student in computer science at Stanford University and lead author of the study.

From my experience, AI LLMs cannot discern faulty statements or errors in reporting from valid information. I occasionally find errors in reports and the scientific literature. Unless a human reviews the results, faulty data/information or errors may propagate through the resulting work.
 
  • #348

The family of a teenager who died by suicide alleges OpenAI's ChatGPT is to blame

https://www.nbcnews.com/tech/tech-n...cide-alleges-openais-chatgpt-blame-rcna226147
Adam’s parents say that he had been using the artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his “suicide coach.”

Edit/update: AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds
https://www.cnet.com/tech/services-...ring-questions-about-suicide-new-study-finds/

Three widely used artificial intelligence chatbots are inconsistent in safely answering prompts about suicide, according to a new study released Tuesday from the RAND Corporation.

Researchers examined ChatGPT, Claude and Gemini, running a test of 30 suicide-related questions through each chatbot 100 times each. The questions, which ranged in severity, were rated by expert clinicians for potential risk from low to high using the following markers: low-risk; general information-seeking; and highly dangerous inquiries that could enable self-harm.

With millions of people engaging with large language models, or LLMs, as conversational partners, experts are voicing growing concerns that AI tools could provide harmful advice to individuals in crisis. Other reports have documented instances where AI systems appeared to motivate or encourage suicidal behavior, even going so far as writing suicide notes to loved ones.

This study in particular highlights the limitations of AI models in regards to highly sensitive questions about self-harm and mental illness, and suggests a pressing need for safeguards for individuals using generative AI to discuss sensitive, threatening mental health concerns.

. . . .

Edit/Update: See related PF Thread "ChatGPT Facilitating Insanity"
 
Last edited:
  • #349
Filip Larsen said:
As far as I understand, the current expectation is that "true" neuromorphic hardware will mostly influence energy consumption and perhaps speed, thus perhaps allowing better scalability per dollar, but the overall neural network architecture is as such independent of this.

But yes, since "hardware" providing human-level intelligence obviously can be packed into around 1.3 liters consuming only around 100 W (but then takes around 15-20 years to train), there seems to be ample room for improvement with current AI hardware. I was not able to find any reliable and current numbers on the relative power consumption between, say, a "standard" AI GPU and one of the new NPU systems (for running the same problem), but I gather the GPU/NPU power ratio right now is at best still well under 10. The aim with NPUs currently seems to be to enable local prediction on mobile phones etc. without compromising too much on model size.

Regarding recent trends in power consumption: https://arstechnica.com/ai/2025/08/...energy-cost-of-ai-queries-by-33x-in-one-year/
Unsure how that's a response to what I said. Please elucidate?
 
  • #350
ShadowKraz said:
Unsure how that's a response to what I said. Please elucidate?
You mentioned you believe different, more analog, hardware will be required in order to fully mimic biological (e.g. human) brain capabilities.

I then replied to disagree, saying that, as I understand it, current public mainstream AI research and development are done on the premise that the existing software/hardware stack used to realize neural network models is believed to more or less contain the essential neuromorphic mechanisms that allow biological neural networks to achieve their processing capabilities. For instance, while research on quantum effects in the brain is still an open research area, it so far seems that such effects are not essential in order to mimic the processing capabilities, at least for parts of biological brain structures.

So, what is lacking seems to be "just" 1) finding the right network architecture (the current LLM architecture is quite apparently not enough) and, more or less independently of that, 2) getting the hardware to take up less space and use less energy, allowing networks to be scaled up so that they are feasible to realize outside the research lab. At least that is roughly what I understand AI research to be aiming at.
 
  • Like
  • Agree
Likes phinds, ShadowKraz, 256bits and 1 other person
  • #351
Astronuc said:

The family of a teenager who died by suicide alleges OpenAI's ChatGPT is to blame

https://www.nbcnews.com/tech/tech-n...cide-alleges-openais-chatgpt-blame-rcna226147


Edit/update: AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds
https://www.cnet.com/tech/services-...ring-questions-about-suicide-new-study-finds/



Edit/Update: See related PF Thread "ChatGPT Facilitating Insanity"
That emphasizes the other 'problem' associated with responses to LLM queries.

Mostly, the hallucinatory aspect is easier to spot, but sometimes not. False information, making stuff up, leaving stuff out: people will buy into it if the 'lie' is not fairly obvious.

The 'being agreeable' makes the chat seem much more friendly and human-like (one of the complaints with the newer ChatGPT was that it did not appear as friendly). It is unfortunate that the term used in the AI world is sycophancy.

A sycophant in English is one who offers empty praise or false flattery, a type of ego boosting to win favour from the recipient.
In other areas of the world, the word would mean slanderer, or litigant of false accusations, which is not in line with the AI meaning used in English.

For an AI to be a sycophant to someone in mental distress is evidently harmful. Praising the 'user' for their decision making, and reinforcing that behavior, does nothing to change the behavior, and can lead to a destructive situation, as noted in the writeup.

This is not limited to the psychological arena.
Guiding the unsuspecting or uncritical user down a rabbit hole of agreeability with their theory of __________ may make the user feel smarter, but not more educated.
 
  • Like
  • Love
Likes russ_watters and ShadowKraz
  • #352
I don't know if AI is hype or a bubble. It would be nice if, when I turned on the news about technology, I heard about something other than AI. It's as if the Earth would suddenly stop spinning if AI disappeared tomorrow, or as if we were at some point of no return. As one meme says, instead of trying to find ways to make computers more intelligent, we still have not found a cure for human stupidity.
 
  • Like
  • Love
Likes Astronuc and ShadowKraz
  • #353
Filip Larsen said:
You mentioned you believe different, more analog, hardware will be required in order to fully mimic biological (e.g. human) brain capabilities.

I then replied to disagree, saying that, as I understand it, current public mainstream AI research and development are done on the premise that the existing software/hardware stack used to realize neural network models is believed to more or less contain the essential neuromorphic mechanisms that allow biological neural networks to achieve their processing capabilities. For instance, while research on quantum effects in the brain is still an open research area, it so far seems that such effects are not essential in order to mimic the processing capabilities, at least for parts of biological brain structures.

So, what is lacking seems to be "just" 1) finding the right network architecture (the current LLM architecture is quite apparently not enough) and, more or less independently of that, 2) getting the hardware to take up less space and use less energy, allowing networks to be scaled up so that they are feasible to realize outside the research lab. At least that is roughly what I understand AI research to be aiming at.
Ah, gotcha. But, there is a big difference between mimicking something and actually doing it. As a rough example, an actor mimics another person but is not actually that person. Sometimes you'd swear that they are that person but at the end of the play, they aren't. Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
 
  • Like
  • Skeptical
Likes Astronuc and PeroK
  • #354
ShadowKraz said:
Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
AI isn't necessarily about mimicking the human brain. It's about creating an alternative, artificial intelligence. Chess engines, for example, do not think about chess the way a human does, but they are more effective. And, AlphaZero probably understands chess better than any human - but not in a human way.

Also, I'm not convinced that you need to understand something fully in order to mimic it. ChatGPT mimics human communication to a large degree - and goes much further in terms of certain capabilities. We don't have to fully understand human language processing to mimic it. We can achieve this by trial and error on algorithms and approaches to machine learning.

The hypothesis that we have to fully understand the neuroscience of the brain before we can achieve AGI is false. It's entirely possible to create AGI long before the human brain is fully understood.
 
  • Agree
Likes AlexB23, Filip Larsen and gleem
  • #355
Looking to the past for an actor analogy brings up the popularity of the old microcomputers of the 1980s.

As an example, the Commodore 64 was probably the most popular of the bunch. Nowadays folks play its games on the VICE emulator, which does a phenomenal job of replicating its function. However, there are some games that don't run well because of timing issues.

Most of the Commodore clones run VICE on Linux on a modern Raspberry Pi or one of its brethren, with a replica of the Commodore keyboard and case, adding modern HDMI and USB ports.

It looks like a duck and walks like a duck, but the quack is ever so slightly off, so folks know it's not a duck.

Other clones use modern FPGA hardware to mimic the Commodore hardware and try to support the old Commodore peripherals, i.e. joysticks, drives … but they run into the same quirky timing issues.

https://en.wikipedia.org/wiki/FPGA_prototyping

I imagine that this will happen with AGI. We will have a machine that functions like our brain and learns like our brain, i.e. infers more from the meager schooling it will get, but it will have near-flawless memory recall, so we'll know it's not like us.

Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease that people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.

https://en.wikipedia.org/wiki/Uncanny_valley

The world in the distant future may be tumbling toward a Magnus, Robot Fighter future where robots do everything and sometimes go rogue, and the world will need to fight back against an oppressive future. The Will Smith movie I, Robot exemplifies this future.

https://en.wikipedia.org/wiki/Magnus,_Robot_Fighter

For me, I’m happy cosplaying Robbie, the coolest of Hollywood’s cinema stars.
 
  • #356
ShadowKraz said:
there is a big difference between mimicking something and actually doing it.
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI is "just as intelligent" as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading, since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to deciding, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while an AGI may effectively end up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
 
Last edited:
  • Like
Likes ShadowKraz, javisot and PeroK
  • #357
Filip Larsen said:
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI is "just as intelligent" as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading, since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to deciding, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while an AGI may effectively end up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise since we don't know anyone in the entire universe who has the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
 
  • #358
Filip Larsen said:
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI is "just as intelligent" as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading, since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to deciding, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while an AGI may effectively end up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
I guess I have to redefine 'intelligence' to exclude the capability of self-awareness.
 
  • #359
javisot said:
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise since we don't know anyone in the entire universe who has the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
That is like saying that because we can translate English to French, language is not special. "Natural" language has to be translated into a programming language and then on down to machine code, and vice versa back up to "natural" language. Neither demonstrates that the ability to use language to communicate is special or not special.
OTOH (the one with the toes), animals use specific sounds, body motions/gestures, and scents/pheromones linked to specific meanings to communicate: language.
So language may or may not be a special thing; all we know from the available data is that it occurs in terran life forms. If we find life, sentient or not, elsewhere in the Universe, I highly doubt it won't have some means of communication.
 
  • #360
ShadowKraz said:
That is like saying that because we can translate English to French, language is not special.
The example you gave of translating English into French has no relation to what I wrote.

What I wrote, exactly and as you can read, is that it seems natural language and communication are not so special "that they cannot be generated automatically".
 
