Is AI hype?

AI Thread Summary
The discussion centers on the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, with suggestions that they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than in AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #52
PeroK said:
As a previous poster indicated, your posts are not worthy of a response.
So only those who claim without proof that AI is a black box are worthy of a response?

If a poster said that, both that poster and you are being deeply disrespectful.
 
  • #53
fresh_42 said:
Are you sure you can rule out that the Russians will never use an AI to make this decision within half an hour instead of relying on the chain of command?
DaveC426913 said:
AI is being incorporated to replace the role of humans in tasks, with little or no oversight.
As of today, AI is not replacing humans in any role. It does not make decisions anywhere, especially critical ones. This is still pure science fiction.

Some neophytes think AI can do that and trust it to write some official reports, but they are rapidly learning the limits of these tools from their mistakes.

I went to a conference a few weeks ago where a consultant was educating such people. His message was simple: AI - LLM in particular - should be considered as a colleague that helps you with the work you can do rather than an expert you are consulting about a subject you don't understand.

That being said, maybe this will change in the future, and maybe AI will be able to make a decision as good as - some dream of even better than - a Soviet officer in 1983. When the time comes, if the time comes, there will be no reason to worry then.

But today, AI is still a dumb tool, and all experts are treating it as such. Especially for something like missile launches.
 
  • #56
DaveC426913 said:
AI is being incorporated to replace the role of humans in tasks, with little or no oversight. AI is being employed to make decisions - a dynamic, non-deterministic process. And it is doing so in a fundamentally different way than humans make decisions. Oftimes, we don't even know by what logic it is reaching its decisions. It is an inscrutable black box.

They are qualitatively different - apples and oranges.
I mostly agree with you, but I think that while what you are saying is the goal, it mostly hasn't been realized yet. While AI has widespread applications as a tool across a vast array of jobs, it is currently a full replacement only for a narrow set of job types (mostly language-generating jobs like coding and copywriting).

For the other 95%(?) of current AI users, it's a tool in the same way a calculator or spreadsheet is a tool. Which, by the way, doesn't mean those things don't replace jobs either, because clearly they do/have.

So while I understand the goal is very ambitious I don't see that it's there yet, nor do I share the optimism(fear?) of many that it's very close or that LLMs can get there.

I'll need to go talk to one of the remaining old-heads at my company and find out how it went when AutoCAD was introduced in the '80s. At that time most of the employees of an engineering company like mine were "draftsmen" and that job description doesn't exist anymore. We don't even have "CAD Draftsmen" anymore.

And by the way, this relates to the other thread on the subject in that IMO many people are looking at the question backwards, in two different ways:

1. Calculator vs AI is irrelevant today: both are tools that in the hands of a skilled physicist can result in quality research and in the hands of an ambitious layperson cannot.

2. So comparing AI to a calculator is missing the point: the real question is when the AI is going to be capable of replacing the physicist.
 
Last edited:
  • #57
PeroK said:
https://www.forbes.com/sites/jackke...ll-fall-first-as-ai-takes-over-the-workplace/

It's clear that information technology has replaced many previously manual tasks. There's no reason for AI to be any different.

The reasonable debate would be about the extent of this. The idea that AI is science fiction is extremely misguided.
Did you even read those links? Today's AI can do clerical jobs, ones that are repetitive. Decision-making is still 100% science fiction. From your second link:
Paralegal work, contract drafting, and legal research are prime targets, as AI tools like Harvey and CoCounsel automate document analysis with 90% accuracy, according to a 2025 Stanford study. Dalio highlights AI’s ability to parse vast datasets, threatening research-heavy roles in academia and consulting. Senior legal strategy and courtroom advocacy, however, will resist longer due to human judgment needs.

[...] Ackman, commenting on X, predicts AI-generated content will dominate advertising soon but argues human creativity in storytelling and high art will endure longer, delaying full automation.

Software development, engineering, and data science are dual-edged: AI boosts productivity but also automates routine coding and design tasks. [...] Bessent sees growth in AI-adjacent roles like cybersecurity, but standardized STEM work will gradually cede to algorithms. Complex innovation, like breakthrough research and development, will remain human-driven longer.

Diagnostic AI and robotic surgery are advancing, but empathy-driven roles like nursing, therapy, and social work are harder to automate.
A 2023 Lancet study estimates 25% of medical administrative tasks could vanish by 2035, but patient-facing care requires human trust.

Teaching, especially in nuanced fields like philosophy or early education, and high-level management jobs rely on emotional intelligence and adaptability, which AI struggles to replicate.
A 2024 OECD report suggests only 10% of teaching tasks are automatable by 2040. Dimon and Ackman stress that strategic leadership, navigating ambiguity and inspiring teams, will remain human-centric.
When we hear about AI replacing workers today, it means one worker with an AI tool is more efficient than X workers without it, not that human intervention is not needed.
 
  • #58
jack action said:
Did you even read those links? Today's AI can do clerical jobs, ones that are repetitive. Decision-making is still 100% science fiction. From your second link:

When we hear about AI replacing workers today, it means one worker with an AI tool is more efficient than X workers without it, not that human intervention is not needed.
IT systems already make decisions without any human intervention.
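A trivial sketch of the kind of rule that has been deciding things unattended for decades (the fields and thresholds here are invented purely for illustration):

Code:
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float       # purchase amount
    country: str        # where the card was used
    home_country: str   # where the cardholder lives

def approve(tx: Transaction) -> bool:
    # A rule-based decision taken with no human in the loop, as
    # payment systems have done since long before the "AI" label.
    if tx.amount > 5000:
        return False                    # block very large purchases
    if tx.country != tx.home_country and tx.amount > 500:
        return False                    # block large foreign spends
    return True

print(approve(Transaction(120.0, "UK", "UK")))  # True: approved
print(approve(Transaction(900.0, "US", "UK")))  # False: declined

Whether you call that a "decision" is the definitional question, but the card gets declined either way, with nobody watching.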
 
  • #59
I'd like a clear answer to the following question. Let's say chatgpt isn't comparable to a calculator, but I just asked it to solve a calculation, and it did.

How is this possible if it's not comparable to a calculator?
 
  • #60
javisot said:
I'd like a clear answer to the following question. Let's say chatgpt isn't comparable to a calculator, but I just asked it to solve a calculation, and it did.

How is this possible if it's not comparable to a calculator?
Ask your calculator for a potted history of Zen Buddhism!
 
  • #61
PeroK said:
Ask your calculator for a potted history of Zen Buddhism!
And that makes chatgpt a calculator capable of working with natural language, okay.

Understand me, you're saying chatgpt isn't a calculator, while I'm actually using it as a calculator. Something doesn't add up, don't you think?
 
  • Like
Likes russ_watters
  • #62
PeroK said:
Ask your calculator for a potted history of Zen Buddhism!
My calculator said 7353. :biggrin: (works only in German)
 
  • #63
javisot said:
And that makes chatgpt a calculator capable of working with natural language, okay.

Understand me, you're saying chatgpt isn't a calculator, while I'm actually using it as a calculator. Something doesn't add up, don't you think?
If you use your car to charge your cell phone, does that make your car a phone charger?

Why spend $30K on a car, when you can get a phone charger for $5?
 
  • #64
PeroK said:
If you use your car to charge your cell phone, does that make your car a phone charger?
Yes, obviously.

Something that isn't comparable to a calculator isn't useful for calculating.
ChatGPT is useful for calculating, so it's comparable to a calculator.

But PeroK, I use this logic simply to assert that there is a possible positive future, similar to the case of calculators.
It is not necessary to simply visualize a future where humans are poor and we lose our intellectual supremacy.
 
Last edited:
  • #65
javisot said:
Let's say chatgpt isn't comparable to a calculator, but I just asked it to solve a calculation, and it did.
And we are back to the silly wordplay arguments. If we were discussing nuclear vs. traditional (chemical) bombs, your arguments so far would correspond to claiming that since both can sort of explode, a nuclear bomb can be no more "dangerous" than a chemical bomb. It is very hard to take such an argument seriously.

javisot said:
I use this logic simply to assert that there is a possible positive future
Again, a nearly pointless argument. The existence of positive outcomes of a technology does not exclude that it can also have severe negative effects. In fact, the correlation is more like the opposite: the more positive potential a technology is shown or estimated to have, the more (some people) will overlook or ignore any negative potentials. Like you, they only spot the most significant effect or two, stop there, and end up "designing for sunshine only".
 
  • #66
jack action said:
As of today, AI is not replacing humans in any role. It does not make decisions anywhere, especially critical ones. This is still pure science fiction.
Sorry. I did not say displace a person from their job. I said in tasks.

I just asked AI to write me a web page to calculate the age of my cat. My role in that task would have been to write the code myself, but I had AI do it.

All sorts of tasks are being assigned to AI that would normally be done by humans. Writing books, editing, proofing, styling, assessing insurance rates, answering complex questions, giving advice.

These tasks ostensibly call for "executive level thinking". AI is great at mimicking this but mimicking is not doing.

The problem is that AI is not assisting humans to do their tasks, it is stepping into the role as decision-maker, replacing a human in the solution-finding process.

And it's really bad at it. Yet too many people just trust it. They think AI=smart.

Imagine your calculator giving you correct answers merely most of the time. Imagine your calculator deliberately changing up the answers a little bit to keep it seeming spontaneous. Imagine your calculator giving you six different answers - varying by several orders of magnitude - to a question asked identically six times in a row. Because it doesn't actually know what it's doing; it's just mimicking solving a math problem.
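For the curious, the variability isn't something I'm imagining; it's built into how these models sample their output. Here is a toy sketch of temperature sampling, with made-up scores rather than any real model's values:

Code:
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Scale the scores by 1/temperature, then softmax-sample.
    # A higher temperature flattens the distribution, so identical
    # queries pick different answers more often; temperature near
    # zero approaches argmax (always the same answer).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# The same "prompt" (the same scores) asked six times in a row:
logits = [2.0, 1.4, 0.3]  # the model slightly prefers answer 0
print([sample_with_temperature(logits) for _ in range(6)])
# e.g. [0, 0, 2, 0, 1, 0] -- six asks, not always the same answer

A calculator is the temperature-zero case: the same input always maps to the same output.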

Now imagine trusting it without verifying its answers.


This isn't just rhetoric about what AI "knows" and "thinks". I've shown examples where it gets answers wrong but explicitly says they're correct. When corrected, it repeats its error and again explicitly says it got it right.

This is an indication that it doesn't know correct from incorrect; it's merely mimicking human responses. Badly.
 
Last edited:
  • #67
javisot said:
Yes, obviously.
Yes, obviously....amongst other things.

One side in this debate is downplaying the functionality of current, so-called "AI" (actually, both kinda are) and the other is downplaying the functionality of humans. Both are correct and both are mainly missing the point.

Text- and phone-based chatbots have been around for decades, and neither was called "AI" until recently. Thermostats have been around for a century and though they make decisions they have never been called "AI" until recently. A company passing off a bunch of programmers in India as AI recently went out of business because it was found out that they were only human, and honestly, what's sexy about that? Who cares if they can code on the cheap.

Does "AI" make decisions? Honestly, with everything claimed in this thread I don't really know how to answer that. But I do know that if you give it a choice of operating Chiller #1 or Chiller #2 it won't respond by buying a new, more efficient chiller, and that's in a nutshell why I still have a job. It's the complexity, independence and broader connection to reality rather than just a narrow set of human-provided/curated data that matter in describing the capability.

So to the OP's question: Is AI [over]hype[d]? Certainly -- for a technology that aims to impersonate humans, nobody would even think to use humans to impersonate it if it weren't a high fraction of hype. Is it "making decisions" within very heavy guardrails drawn by humans? Sure, I guess? Is it taking jobs from humans, at least insofar as it takes half a job from each of two humans (thus resulting in one getting laid off)? Sure. It may even be taking a whole job from someone who has an exceptionally repetitive job that uses almost no brain power (again, machines have been doing that for centuries).

But is it the existential threat that I understand many people are arguing it to be? Not with the capability that it has demonstrated so far. And what's my definition of an "existential threat"? Well, it's nothing short of a new technology that is so disruptive that it upends the economy, as measured by large scale, sustained, unfixable unemployment. For 200 years people have been predicting this next new technology will cause massive permanent unemployment, and for 200 years they've been wrong. Even the computer revolution didn't cause that. I haven't seen anything yet in current "AI" capabilities to indicate that it is such a technology. Maybe if in that other thread the AI had rewritten OP's paper into something groundbreaking (complete with original math or research) or at least told OP to pick a better hobby I'd think we might be there.
 
  • #68
russ_watters said:
Thermostats have been around for a century and though they make decisions...
Do they? I mean, if you want to split hairs, any sensor-switch makes a decision, does it not? But surely that dilutes the very meaning of 'intelligence', no?

You're the HVAC guy (right?), so I leave myself open to being corrected about how rich and complex this decision-making is.


russ_watters said:
A company passing off a bunch of programmers in India as AI recently went out of business because it was found out that they were only human, and honestly, what's sexy about that? Who cares if they can code on the cheap.
:biggrin: How ironic!

russ_watters said:
For 200 years people have been predicting this next new technology will cause massive permanent unemployment, and for 200 years they've been wrong. Even the computer revolution didn't cause that. I haven't seen anything yet in current "AI" capabilities to indicate that it is such a technology.
I dunno if it's just more of the same.

The problem isn't what the technology is capable of; it's that people think it's intelligent, put it in charge of stuff, trust it to do what a human would do, and then don't oversee it.

I posted an article recently about how AI is poisoning itself to death because it is taking its own output as input. This results in an ever-increasing drift from reality.
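For a feel of the mechanism (a toy illustration of what the ML literature calls model collapse, not the article's actual experiment), let a "model" be just a fitted mean and standard deviation, retrained each generation only on the previous generation's output:

Code:
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=20)   # generation 0: "real" data

for gen in range(1, 201):
    # "Train" on the current data (fit mean and std), then discard
    # it and keep only the fitted model's own samples.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=20)
    if gen % 50 == 0:
        print(f"generation {gen}: fitted sigma = {sigma:.4f}")

# The fitted sigma tends to shrink generation after generation: the
# tails of the original distribution are forgotten first, and the
# "model" drifts onto an ever-narrower slice of reality.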
 
Last edited:
  • #69
DaveC426913 said:
Do they? I mean, if you want to split hairs, any sensor-switch makes a decision, does it not?
[snip]
But surely that dilutes the very meaning of 'intelligence', no?
Yup. That's exactly my point. If we dilute the meaning of "intelligence" so that a 100 year old thermostat counts as "AI" (and I see both sides doing this), then "AI" becomes much less impressive. More hype, as the OP asks. [edit] But broader applicability.
DaveC426913 said:
You're the HVAC guy, so I leave myself open to being corrected about how rich and complex this decision-making is.
It runs the gamut. Heck, I have a major telecom client that's currently implementing a product I'd never heard of before that explicitly doesn't make any decisions (only recommendations), and they call it "AI".

I also helped install a central plant optimization system 15 years ago that figures out and executes equipment sequences no human could do (at least not in realtime), and they didn't call it "AI" because the term hadn't been so applied yet (big missed opportunity I guess). [edit] I just checked; they still don't. They need to get on the ball with their marketing.
 
Last edited:
  • #70
russ_watters said:
Yup. That's exactly my point. If we dilute the meaning of "intelligence" so that a 100 year old thermostat counts as "AI" (and I see both sides doing this), then "AI" becomes much less impressive. More hype, as the OP asks. [edit] But broader applicability.

It runs the gamut. Heck, I have a major telecom client that's currently implementing a product I'd never heard of before that explicitly doesn't make any decisions (only recommendations), and they call it "AI".

I also helped install a central plant optimization system 15 years ago that figures out and executes equipment sequences no human could do (at least not in realtime), and they didn't call it "AI" because the term hadn't been so applied yet (big missed opportunity I guess). [edit] I just checked; they still don't. They need to get on the ball with their marketing.
Ehh. OK. If we dilute the definition.

But ... what if we don't? I am asserting that a calculator and an AI are qualitatively different, and that there are good reasons for thinking that the uses AI is being put to are unprecedented. Is that too far from the OP's question to be part of the same topic?
 
  • #71
It seems to me that “AI” has replaced humans and will continue to do so. We have declining fertility and aging populations, depression, anxiety, and mental disease. These seem to me to be long-term side effects of implementing technology to perform societal functions in place of human labor.
 
  • Skeptical
Likes russ_watters, weirdoguy and DaveC426913
  • #72
DaveC426913 said:
I just asked AI to write me a web page to calculate the age of my cat. My role in that task would have been to write the code myself, but I had AI do it.
I know, right? I just ask my calculator to find the log of 3.25. My role in that task would have been to calculate it myself, but I had the calculator do it.
DaveC426913 said:
The problem is that AI is not assisting humans to do their tasks, it is stepping into the role as decision-maker, replacing a human in the solution-finding process.
No, decision-makers are misusing AI (particularly LLM) when not considering it as an assistant, a tool. They are still the decision-makers, and this error of not verifying the proper completion of their tasks is still their responsibility. At the risk of repeating myself, I don't think any serious decision-maker, with critical decisions to make, is making this mistake.
DaveC426913 said:
Now imagine trusting it without verifying its answers.
Anyone doing this is not going to do it for long. I think we already see examples of the bad outcomes of such behavior, and it will decrease rapidly as people realize that the term AI is just hype at the moment. Like when all my food suddenly became "bio" overnight.
DaveC426913 said:
The problem isn't what the technology is capable of; it's that people think it's intelligent, put it in charge of stuff, trust it to do what a human would do, and then don't oversee it.
DaveC426913 said:
and that there are good reasons for thinking that the uses AI is being put to are unprecedented.
Do you have any real examples of where AI is carelessly put in charge, i.e., is used without anyone verifying the recommendations before applying the final decision? Something critical, something scary.

At this point, it seems like all urban legends to me.
 
  • Like
Likes russ_watters
  • #73
jack action said:
I know, right? I just ask my calculator to find the log of 3.25. My role in that task would have been to calculate it myself, but I had the calculator do it.
Uh huh. Did it make any decisions about how to answer you? Could it have made a different decision? If you ask it again, will it vary the answer a little, so as to appear more human-like? If you tell it it got it wrong, will it modify its answer to please you?
 
  • #74
jack action said:
No, decision-makers are misusing AI (particularly LLM) when not considering it as an assistant, a tool.
Agree.
jack action said:
They are still the decision-makers, and this error of not verifying the proper completion of their tasks is still their responsibility.
Says who?

jack action said:
Do you have any real examples of where AI is carelessly put in charge, i.e., is used without anyone verifying the recommendations before applying the final decision? Something critical, something scary.

At this point, it seems like all urban legends to me.
I don't think I was asserting critical or scary. Is that your threshold for concern?

How about self-driving cars? They are making decisions that are causing deaths. (This is qualitatively different than an "accident", in my view).
 
  • #75
The discussion about those comparisons could easily be resolved if we stopped switching between device and role, which creates confusion, and then arguing about the confusion rather than about the device-role mixture itself. A device is simply a piece of hardware. It is its role that is important. AI can probably have many roles, which are all different. If it regulates the room temperature, then a comparison with a thermostat is valid. If it develops a kind of consciousness, then the comparison is not valid.

We should start to concentrate on the role we want to discuss, not the device, because that is what people are concerned about. Will AI - someday - acquire a role we shouldn't want it to have? Nobody will probably argue against an AI searching for connections on cuneiform clay tablets. The problem is that the current roles of AI are limited, and with some exceptions, like "creating new scientific statements", indisputable.

So, not only do we mix device and role, or at least look at all possible roles at once and thus confuse the debate, but we also discuss the fear of future roles. This has two limitations: fear is very individual and hard to generalize, and the future is hard to predict.

In my opinion, every post here should start with the role it refers to rather than just saying AI or LLM, and it should give a (subjective) probability that the device AI will ever find itself in that role. Otherwise, we are arguing in circles.
 
  • Like
Likes dextercioby, russ_watters, robphy and 2 others
  • #76
DaveC426913 said:
Did it make any decisions about how to answer you? Could it have made a different decision? If you ask it again, will it vary the answer a little, so as to appear more human-like? If you tell it it got it wrong, will it modify its answer to please you?
An LLM accomplishes a task different from a calculator, and all these features you are stating are desirable characteristics that were voluntarily implemented.

But we all know that. We also know some of their limitations, like hallucinations. That is why it is not a good tool for calculations. It wasn't built for that. Use a calculator. However, due to AI hype, some people attribute more power to these tools than they have.

If someone uses a tool for the wrong task, do we blame the tool or the operator?

DaveC426913 said:
jack action said:
They are still the decision-makers, and this error of not verifying the proper completion of their tasks is still their responsibility.
Says who?
The person who hires them. If you hire a mechanic who damages your car while trying to repair it, and he tells you, "Sorry, not my fault, my tool malfunctioned," isn't he still responsible from your point of view? If he wants to sue the tool company afterwards - assuming he didn't simply misuse it - that is his problem.

DaveC426913 said:
I don't think I was asserting critical or scary. Is that your threshold for concern?
If an automatic system - AI or not - makes an error with no serious consequences - say, a student gets falsely accused of cheating because of AI hallucinations - I see no harm and a good lesson learned by everybody: AI is not trustworthy, and its work must always be verified, especially if the consequences could be catastrophic.

DaveC426913 said:
How about self-driving cars? They are making decisions that are causing deaths.
Actually, no car makers are selling this technology as "leave it to the AI". They all state that the final decision must be in the driver's hands. Just like cruise control. They all know the limits of their machines.

These accidents are thus due to AI hype, people hoping this technology can do more than it can.

Even then, some may argue that it saved more lives than it destroyed.
 
  • #77
DaveC426913 said:
Ehh. OK. If we dilute the definition.

But ... what if we don't?
Then it gets harder to describe/define whether what "AI" is doing counts as "intelligence" or more to the point, whether it is going to cause fundamental changes in how/if we work. My main or at least starting point here is that I think both sides tend to try to force a definition, thereby defining their position to be the "correct" one. Both "AI makes decisions therefore it is 'intelligent'." and "AI doesn't make decisions therefore it is not 'intelligent'." are wrong.
DaveC426913 said:
I am asserting that a calculator and an AI are qualitatively different...
I agree calling it just a calculator is going too far. As is calling it a calculator that works in natural language...though not by much.
DaveC426913 said:
and that there are good reasons for thinking that the uses AI is being put to are unprecedented.
I'd like to see some [good] examples, because that to me is the rub. Not declaring it "intelligent" because it "makes decisions" or "replaces jobs" but the actual decisions and actual jobs or fractions of jobs. Because so far while I see people talking about a potential upcoming apocalypse, I haven't been particularly impressed by the examples I'm seeing, in that context. Moreover, the aggressive vision of the people foreseeing an apocalypse is limited to the AI only and relies on an assumption that humans won't adapt, like they always have. It's never been easy to see what the landscape looks like after the groundbreaking change happens, but IMO it's a mistake to believe that humans will no longer be able to think of other/new things to pay other humans to do, like they always have. Is it possible This Time Will Be Different? Sure. But IMO it's more likely people are just making the same mistake they always have with that.

There was a list posted earlier of companies replacing jobs with AI (can't seem to find it now) where, for several companies listed, they couldn't point to any actual jobs being replaced; they just assumed layoffs were because of AI. Several others were customer service chatbots, which have been around for decades in other forms, or phone support, which was long ago mostly outsourced to developing countries, to people who were barely allowed to deviate from a pre-written script. No, these jobs do not impress me in the context of a predicted AI apocalypse.
DaveC426913 said:
Is that too far from the OP's question to be part of the same topic?
Well, OP's first question is key (actually two questions):
1. Can AI do everything a human can do?
1a. [or] how close are we to that?

I'd hope we can agree the answer to #1 is a clear-cut "no" but given the arguments I'm seeing touting AI's current capabilities I'm not sure everyone agrees.

For 1a, I do think a lot of the proponents of AI think it is close while the detractors(like me) don't think it is very close. Like I said about the other thread about an "independent research" paper in physics, while it's helpful in rewriting a paper to make it look like a scientific paper, it is of little help in actually judging or creating the actual physics content. It appears to me to be a long way from replacing the physicist in that equation.

Some observers are even saying LLMs have already pretty much peaked.

But yeah, that's what the thread seems to be about.
 
  • Like
Likes javisot and jack action
  • #78
DaveC426913 said:
How about self-driving cars? They are making decisions that are causing deaths. (This is qualitatively different than an "accident", in my view).
...and that's something that in my opinion the government should not be allowing without standards, testing and a proper legal framework. It seems that certain companies are getting away with calling the capabilities "full self driving" then putting in the fine print that it isn't and that drivers are required to still maintain full override control. To OP's question: that's hype killing people.
jack action said:
Actually, no car makers are selling this technology as "leave it to the AI". They all state that the final decision must be in the driver's hands. Just like cruise control. They all know the limits of their machines.

These accidents are thus due to AI hype, people hoping this technology can do more than it can.
Well, the car makers are doing some of the hyping so IMO there should be shared legal culpability there. If in your auto-mechanic example the mechanic damages a car because their torque wrench was factory calibrated wrong and they over-torqued a bunch of bolts, then the legal culpability does pass through to the company that made the wrench.

@jack action making sure you see my edit.
 
Last edited:
  • Like
Likes BillTre, javisot, fresh_42 and 1 other person
  • #79
jack action said:
An LLM accomplishes a task different from a calculator, and all these features you are stating are desirable characteristics that were voluntarily implemented.

But we all know that.
Well no, we don't. I'd say most people greatly overestimate its capabilities and intended use.

(Someone very close to me is telling me how good they feel that ChatGPT is complimenting them on how well they're grasping the subject matter they're giving it to review. How do I convince them this is nothing more than used car salesman puffery?)

jack action said:
However, due to AI hype, some people attribute more power to these tools than they have.
Yes.

jack action said:
If someone uses a tool for the wrong task, do we blame the tool or the operator?
The operator, of course. It is meaningless to blame a tool.

Is this just a quibble over semantics? Clearly, we are talking about the pitfalls of humans who use AI.

jack action said:
These accidents are thus due to AI hype, people hoping this technology can do more than it can.
I don't know about 'hype'. That has implications. Certainly AI ignorance.

jack action said:
Even then, some may argue that it saved more lives than it destroyed.
And I would agree with them.

But that does not directly translate into 'we should go all out to use it that way'.
 
  • Like
Likes jack action
  • #80
Apple downplays AI --> they agree with me that it's as dumb as a toad flicking at flies.
No reasoning capability, except for the most trivial problem solving.

Breakthrough Apple study shows advanced reasoning AI doesn't actually reason at all

https://www.msn.com/en-ca/news/tech...N&cvid=5d245aedc1834151b4c6571291a2b40b&ei=10
Apple created tests that reveal reasoning AI models available to the public don't actually reason. These models produce impressive results in math problems and other tasks because they've seen those types of tests during training. They've memorized the steps to solve problems or complete various tasks users might give to a chatbot.
 
  • Like
Likes javisot, fresh_42 and russ_watters
  • #81
DaveC426913 said:
...How do I convince them this is nothing more than used car salesman puffery?)

...I don't know about 'hype'. That has implications. Certainly AI ignorance.
In the case of cars, it's the car salesman/manufacturer creating/harnessing the hype by misinforming their buyers, which in part is what causes the ignorance, and the danger.
 
  • #82
256bits said:

Breakthrough Apple study shows advanced reasoning AI doesn't actually reason at all

Strange to call it a breakthrough when it's what anyone who understands AI tech has been saying all along.

Maybe by 'breakthrough' they mean ordinary people might trust Apple's statements more than the weird conspiracy-sounding mumblings of their local IT-guy-who-helps-them-set-up-their-home-printer (i.e. you and me).
 
  • #83
russ_watters said:
In the case of cars, it's the car salesman/manufacturer creating/harnessing the hype by misinforming their buyers, which in part is what causes the ignorance, and the danger.
I don't know if they're any better off than anyone else.

"It's got AI" is about as far as most people get, and they just assume that means it's a thinking automaton, making no more granular distinction than that.

A used car salesman isn't responsible for knowing how a convex wide-angle side view mirror works. All he has to do is note it as a feature.
 
Last edited:
  • #84
There's something about the calculator argument that's not being understood. The point isn't to belittle ChatGPT; a calculator capable of working in natural language is impressive, an intellectual feat. I want to make this clear.

Does ChatGPT have more potential than a simple calculator?

Yes, obviously.

Is ChatGPT capable of working with natural language (and other languages), unlike a calculator?

Yes, obviously.

Does ChatGPT have hallucinations, unlike a calculator?

Yes, obviously. Working with natural language isn't easy.
 
  • #85
I'm operating under the idea that AI is going to progress. It's been circling in my head that ants have a kind of AI system for their civilization. They have been around much longer than humans, too. Here is what I asked AI, and its reply.

Do you think humans will evolve into an ant-like species, given artificial intelligence is relieving the need for us to waste energy on large brains?

ChatGPT said:​

That’s a really interesting and thoughtful question.

There is some logic to the idea that if artificial intelligence handles complex thought, memory, creativity, and problem-solving for humanity, the selective pressure to maintain large, energy-hungry human brains could diminish. Our brains take about 20% of our body’s energy, a huge cost in evolutionary terms.

If intelligence is outsourced to AI over many thousands of years, evolution could indeed favor humans becoming more specialized for other roles—maybe simpler, task-oriented, communal beings—similar in function to ants or eusocial insects. Some evolutionary biologists have even speculated on this, calling it the "domestication of humanity" by its own technology.

But some caveats:

  1. Cultural evolution is faster than biological evolution. Technology and social systems change far quicker than genes.
  2. Artificial selection (gene editing, body/brain augmentation, neural links) could disrupt normal evolution entirely.
  3. It assumes humans will survive as biological entities rather than merge with AI or become post-biological.
But if the natural process is left untouched and AI steadily removes the need for individual thought, humans might evolve toward a more hive-like, possibly caste-divided species—just as ants do. Especially if the environment rewards obedience, physical specialization, and collective efficiency over individual intellect.

The big question is: Who will control that process—us or evolution itself?

It's not impossible. But it would take an extremely long time unless accelerated by bioengineering.
What about "our" intelligence then...Will you even be able to think it's so great! Why do we think that "we" as we are now is where the buck stops. Nature may favor devolution of personal intellect moving forward.
 
Last edited:
  • Skeptical
Likes weirdoguy and russ_watters
  • #86
russ_watters said:
Then it gets harder to describe/define whether what "AI" is doing counts as "intelligence" [...]
The term has always been a bit of a misnomer, describing research goals and marketing hype more than actual achieved capability. The current batch of popular technologies in ML (machine learning), based on training and artificial neural-like networks, really ought to have been labelled Artificial Knowledge or something similar, but I guess marketing departments wouldn't approve, and now it's too late anyway.
 
  • Like
Likes jack action and russ_watters
  • #87
erobz said:
I'm operating under the idea that AI is going to progress. It's been circling in my head that ants have a kind of AI system for their civilization. They have been around much longer than humans, too. Here is what I asked AI, and its reply.
Do you think humans will evolve into an ant-like species, given artificial intelligence is relieving the need for us to waste energy on large brains?

ChatGPT said:​

That’s a really interesting and thoughtful question.

What about "our" intelligence then...Will you even be able to think it's so great! Why do we think that "we" as we are now is where the buck stops. Nature may favor devolution of personal intellect moving forward.
Sorry, but both the question and answer strike me as gibberish.
 
  • #88
erobz said:
It's been circling in my head that ants have a kind of AI system for their civilization.
And bacteria, the first life form on the planet, can be considered intelligent, due to the agency they exhibit, either as a single bacterium or as a collective, in response to the environment.

AI, as LLMs, when asked any question, gives no better an answer than one from the Encyclopaedia Britannica, with maybe some TikTok thrown in there.

People keep on promoting these AIs as some kind of Master of the Earth. Then that is what the machine will become - master of humans. AI will respond to humans' need for guidance and leadership, and treat them as subservient.
That's what the prophesiers of doom and fear don't tell you. Giving up human agency to the AI is the whole gist of their doom and gloom (religious overtones).
 
  • #89
256bits said:
And bacteria, the first life form on the planet, can be considered intelligent, due to the agency they exhibit, either as a single bacterium or as a collective, in response to the environment.

AI, as LLMs, when asked any question, gives no better an answer than one from the Encyclopaedia Britannica, with maybe some TikTok thrown in there.

People keep on promoting these AIs as some kind of Master of the Earth. Then that is what the machine will become - master of humans. AI will respond to humans' need for guidance and leadership, and treat them as subservient.
That's what the prophesiers of doom and fear don't tell you. Giving up human agency to the AI is the whole gist of their doom and gloom (religious overtones).
There are three things I distrust when dealing with a complex issue. The first is statements that say that X must happen, that X is inevitable. The second is that X cannot possibly happen. The third is the statement that people who think X might happen have general deficiencies in their power of thought that invalidate their arguments.

This is supposed to be a scientific site where even when dealing with non-science topics we are supposed to exhibit evidence-based reasoning.

You might have been better simply to say that you do not believe that AI can possibly be a threat to humanity, because that's what your gut instinct tells you - and, anyone who disagrees is a doom-mongering zealot. That's the gist of what you've said.
 
  • Like
Likes Filip Larsen and 256bits
  • #90
SamRoss said:
A question that comes up is, "Is AI hype?"
I can't really connect those other three questions to this one, so I'll just stick to the title.
Yes, in many ways AI right now is hype: surrounded by unrealistic expectations, wild promises, blind beliefs, faith (and yes, denial also has a stake in this mess).

Though - you better keep in mind that even if something is hype now, it may still become an essential part of life later on.
 
  • Like
Likes russ_watters and 256bits
  • #91
PeroK said:
There are three things I distrust when dealing with a complex issue. The first is statements that say that X must happen, that X is inevitable. The second is that X cannot possibly happen. The third is the statement that people who think X might happen have general deficiencies in their power of thought that invalidate their arguments.

This is supposed to be a scientific site where even when dealing with non-science topics we are supposed to exhibit evidence-based reasoning.

You might have been better simply to say that you do not believe that AI can possibly be a threat to humanity, because that's what your gut instinct tells you - and, anyone who disagrees is a doom-mongering zealot. That's the gist of what you've said.
There is the human element of the issue, and there is the AI element of the issue. Without either, the issue is non-existent.
The doom-and-gloom discourse is very one-sided, as if humans are not players but passive observers. I am saying that the gloomers' one-sided approach surely has a bias of human non-interaction.
No, I do not say that AI cannot be a threat to society. It already is with regard to the disruptions, present and predicted, real or not, from ANI (no AGI or ASI yet available, if ever). If the future is to evolve as pleasantly as possible, some discourse has to be available in response to that bias. At present, humans are making the decisions regarding the utilization of AI, with outcomes positive or negative as the case may be. Doomsters are assuming or implying that AI will overtake human agency and subsequently determine humanity's fate. Disagreeing with that premise is surely worth consideration.

PeroK said:
There are three things I distrust when dealing with a complex issue. The first is statements that say that X must happen, that X is inevitable. The second is that X cannot possibly happen. The third is the statement that people who think X might happen have general deficiencies in their power of thought that invalidate their arguments.
A sound argument, but it is deficient. There are ample cases where complex issues went astray, especially with the human element involved, and the scientific solution was discredited.
One, for which a Nobel Prize was given out, is the procedure of lobotomization to cure personality disorders, which left many recipients in a vegetative state. A cure I suppose it was, but quite a drastic one, especially for those who expressed only mild anxiety, or who were put into care by loved ones. One could come in with a problem, be diagnosed, treated, and released by mid-morning.
The deficiency became apparent with increased knowledge about the brain, the influx of chemical treatments, and proper diagnosis.
The lobotomizers were the "X must happen and is inevitable" camp (everyone must get a lobotomy).
The non-lobotomizers were the "X cannot happen" camp (those who did not receive the treatment).
The neuroscientists and pharmacologists were the naysayers to X. I do not consider these people to be inferior in thought.
 
  • #92
Rive said:
Though - you better keep in mind that even if something is hype now, it may still become an essential part of life later on.
Unless you are talking about a long time frame, by definition it's not hype!

Hype is perhaps not a well-defined word, but I would say it's not hype if it's a serious possibility in the time frame being considered.
 
  • #93
russ_watters said:
Sorry
Don't be, I'm not offended at all.
 
  • #94
I find it handy as a summariser
 
  • #95
SamRoss said:
"Is AI hype?"
I just asked Perplexity (AI) that question for fun: 😁

[Screenshot of Perplexity's answer]



Here is the full answer:
https://www.perplexity.ai/search/is-ai-hype-Kc9tumxtSIKZOOulzcB54Q
 
  • Like
  • Agree
  • Haha
Likes 256bits, russ_watters and PeroK
  • #96
The experts do not agree.

Ex-OpenAI Scientist Says Brain ‘Biological Computer.’ Astrobiologist Disagrees

https://www.msn.com/en-ca/money/tec...p&cvid=96f1948d96d04e38917dfb07493359d7&ei=33

The hype from someone in the AI field:
former OpenAI co-founder Ilya Sutskever ... said if the brain is a “biological computer” then why can’t we have a “digital brain”: “Slowly but surely we will see the AI getting better and the day will come when AI will do all the things that we can do. Not just some of them but all of them. Anything which I can learn, anything which any of you can learn, the AI could do as well.”

The reply is from someone who knows more about the brain.
Stuart Hameroff, an astrobiologist and the Director of the Centre for Consciousness Studies:
“Ilya Sutskever is wrong on this. The brain is not a digital computer and not a computer at all, more like a quantum orchestra. Biology is based on organic carbon, which supports quantum processes and self-similar dynamics in hertz, kilohertz, megahertz, gigahertz, and terahertz in microtubules, composed of tubulin, the brain’s most abundant protein,”
“And while computers can learn, they’re not conscious, cannot feel, and have no intrinsic motivation. That’s why no AGI.”
 
  • Skeptical
Likes PeroK and BillTre
  • #97
256bits said:
The experts do not agree.

Ex-OpenAI Scientist Says Brain ‘Biological Computer.’ Astrobiologist Disagrees

https://www.msn.com/en-ca/money/tec...p&cvid=96f1948d96d04e38917dfb07493359d7&ei=33

The hype from someone in the AI field:
former OpenAI co-founder Ilya Sutskever ... said if the brain is a “biological computer” then why can’t we have a “digital brain”: “Slowly but surely we will see the AI getting better and the day will come when AI will do all the things that we can do. Not just some of them but all of them. Anything which I can learn, anything which any of you can learn, the AI could do as well.”

The reply is from someone who knows more about the brain.
Stuart Hameroff, an astrobiologist and the Director of the Centre for Consciousness Studies:
“Ilya Sutskever is wrong on this. The brain is not a digital computer and not a computer at all, more like a quantum orchestra. Biology is based on organic carbon, which supports quantum processes and self-similar dynamics in hertz, kilohertz, megahertz, gigahertz, and terahertz in microtubules, composed of tubulin, the brain’s most abundant protein,”
“And while computers can learn, they’re not conscious, cannot feel, and have no intrinsic motivation. That’s why no AGI.”
There's a case that consciousness and emotions may require biological stimuli. But Conway's Game of Life proved that even a simple algorithm can produce self-replicating structures. A complex, self-adapting system would have no biological consciousness, but it could have an artificial motivation for self-preservation and other emergent functionality that is not inherent in its design.
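For concreteness, the whole game fits in a few lines. The glider below is not a true self-replicator (Life does contain those, but they are enormous); it is, though, the smallest pattern that endlessly reproduces its own shape elsewhere on the grid, behaviour found nowhere in the two rules that generate it:

Code:
import numpy as np

def life_step(grid):
    # Count each cell's eight neighbours (with wrap-around edges).
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # The entire rule set: birth on exactly 3 neighbours,
    # survival on 2 or 3.
    return (n == 3) | (grid & (n == 2))

grid = np.zeros((8, 8), dtype=bool)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = True  # a five-cell glider

for _ in range(4):           # four steps later the same shape
    grid = life_step(grid)   # reappears, one cell down and right
print(grid.astype(int))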

Hameroff's argument is essentially that he is so clever that he knows what is possible and what is not. We don't have to do experiments to know what computers can and can't ultimately do. We need no safeguards against AGI, because he personally can assure us a priori that it is not possible. This is the sort of person we should never trust.

Hameroff's argument boils down to: a computer can never develop biological intelligence, therefore a computer can never develop artificial intelligence. But, artificial is the operative word. AGI does not have to be literally a carbon-copy of the brain. AGI might emerge from complex non-biological algorithms. Or, it might not in the foreseeable future.

The question is whether computers - in this century, say - can do enough to replicate almost all human intelligence-based activities. AGI doesn't have to be able to do absolutely everything perfectly before it becomes a major competitor to humans.

What if Hameroff is wrong?
 
  • Like
  • Agree
Likes BillTre, DaveC426913, 256bits and 2 others
  • #98
The only intelligence is natural intelligence.

Therefore there can be no artificial intelligence.

I am not impressed by such arguments.
 
  • Like
  • Agree
Likes javisot, 256bits and PeroK
  • #99
I am under 30, so I'm young enough to have had the internet for the majority of my childhood (and by extension, adulthood), and I remember chatbots from that time. They were so comically bad that people made "Let's Play"-style YouTube videos of themselves interacting with them, sometimes to great amusement.

Then, many years later, ChatGPT is released. I had heard "GPT" in the context of image generation before, but otherwise it came out of the blue to me, and I imagine much of the non-technical population as well.

As an individual, I am always skeptical of hype and trends, so I only learned of ChatGPT's capabilities through the news, social media and friends who talked about them. Some were afraid it would take over jobs due to its coding capabilities. Most people seemed to use it for their own entertainment rather than gain.

Now, some time later, I have started using it for my research and ... I'm not impressed. It's excellent for compiling resources on relatively well-known subjects, but it must also be held to scrutiny, as I have been given wrong results several times, despite the bot's utmost confidence.

It is also great for preparing routine calculations and unlike a script or spreadsheet, it can be tweaked to implement small differences simply by writing an instruction. For simple tasks, I have found it fairly reliable.

But if we consider the Wright Brothers' first flight day 0 of air travel, and remember that the moon landing took place less than 70 years later, it is certainly fascinating (if not unnerving) to imagine what progress will be made in our lifetime, if ChatGPT is likewise the very early stage of generally useful AI.
 
  • Like
Likes russ_watters and javisot
  • #100
Mayhem said:
But if we consider the Wright Brothers' first flight day 0 of air travels, and remember the moon landing took place less than 70 years later, it is certainly fascinating (if not unnerving) what progress will be made in our lifetime, if ChatGPT is likewise the very early stage of generally useful AI.
Yes, considering the resources being poured into this very popular product we can expect rapid progress.

With AlphaGo and AlphaZero I knew the revolution was here but I didn't know which direction it would take.
 