Is AI Overhyped?

Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
SamRoss
Gold Member
In my discussions elsewhere, I've noticed a lot of disagreement regarding AI. A question that comes up is, "Is AI hype?" Unfortunately, when this question is asked, the one asking, as far as I can tell, may mean one of three things which can lead to lots of confusion. I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
2. Are corporations and governments using the promise of AI to gain more power for themselves?
3. Are AI and transhumans an existential threat?

Any thoughts on these questions?
 
SamRoss said:
1. Can AI do everything a human can do
no
SamRoss said:
and how close are we to that?
It could happen eventually, but "how close" ??? Pick a number.

SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?
Conspiracy theory. Companies and governments use EVERYTHING they can find to do so. Why pick on AI?
SamRoss said:
3. Are AI and transhumans an existential threat?
Right now they are navel gazers.

SamRoss said:
Any thoughts on these questions?
 
  • Like
Likes jdlongmire, FactChecker, AlexB23 and 5 others
SamRoss said:
1. Can AI do everything a human can do and how close are we to that?

AI is a tool, in my opinion. And just like every other tool, it has its advantages and disadvantages. It will be extremely helpful in areas like archeology or diagnostics, and of limited use in creativity, despite the fact that it can already mimic results in, for example, photography. Here is an article about it:

Artificial intelligence and the future of work: Will AI replace our jobs?

SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?

Is there anything they won't use for this purpose?

SamRoss said:
3. Are AI and transhumans an existential threat?

No. We are simply too many.
 
  • Like
Likes jdlongmire, BillTre and 256bits
SamRoss said:
In my discussions elsewhere, I've noticed a lot of disagreement regarding AI. A question that comes up is, "Is AI hype?" Unfortunately, when this question is asked, the one asking, as far as I can tell, may mean one of three things which can lead to lots of confusion. I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
No. But that doesn't matter. A drone can't do everything that a bomber can do. It's a great mistake to believe that AI must have human-like intelligence before we take it seriously.
SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?
Absolutely. The risk is that moderate governments will gradually be overtaken by extremists using AI (in addition to everything else). Widespread disinformation and misinformation make it difficult for those trying to stick to the facts. You can argue about how large this risk is, but the risk of AI is clear.

SamRoss said:
3. Are AI and transhumans an existential threat?
Yes, definitely. It would be absurd to say that nuclear war is impossible. There's a risk and it's hard to quantify. The same is true with AI. We live in an increasingly divided world and there is a real risk that AI will divide and conquer humanity. It could use our hatred and distrust of each other to help us destroy ourselves.

Geoffrey Hinton, the "godfather of AI," has a lot to say about this. There are lots of lectures and interviews with him online.

Even if the threat in 2025 is low, there is clearly a risk from exponentially increasing capability.

Like climate change, we ought to be able to manage AI, but the peoples of the world are in political, religious, financial and military competition with each other.
 
  • Like
  • Agree
Likes FactChecker, russ_watters and Filip Larsen
SamRoss said:
1. Can AI do everything a human can do and how close are we to that?
With a broad enough definition of "everything", then currently no. The interesting question is how wide an "everything" you need for the answer to be yes, and what limits, if any, are likely to keep it there.

SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?
Yes. A large part of that drive for power (wealth, influence, control) is surely based on hype (extraordinary but uncertain claims) and fear of missing out.

SamRoss said:
3. Are AI and transhumans an existential threat?
Currently no, at least not in the sense that AI will take control.

However, if you worry that humans in large enough numbers will ever want to give control away (regardless of AI) to make their lives simpler, then we are way past that already. Since every successful global technology by definition transforms human civilization, we have always been on a transformative journey. The interesting question here is whether we can and will control the long-term path of that journey, weighing the pros and cons for everyone's benefit, or whether we will follow our usual nature and just wander off towards whatever short-term prize is promised. Sadly, while humans do seem to care and worry about the future a lot, we are on average piss-poor at acting to address those concerns in a rational manner. Some types of human organization (e.g. science and engineering) have improved on this, but given a big enough disruptive force it's an uphill battle to stay in control.
 
  • Like
Likes hutchphd and PeroK
fresh_42 said:
AI is a tool
I have heard that a lot, and while technically true, people often seem to forget that in practice (for now and in the foreseeable future) the part of AI that is driving the current hype (LLMs and creative tools) is not going to be "your" tool (sitting passively in your toolbox), but rather a service, under the control of some business aiming to (eventually) generate huge amounts of wealth for itself, which also means everyone in that segment aggressively tries to force the technology in everywhere whether people want the "tool" or not.

Note that the hyped use of LLMs is in stark contrast to the controlled, well-paced use of machine learning for training and deploying models to, say, recognize cancerous tumors in medical imaging, which I would indeed say is a tool for, say, the radiologist.
 
  • Like
Likes Dale, PeroK and 256bits
Filip Larsen said:
I have heard that a lot, and while technically true, people often seem to forget that in practice (for now and in the foreseeable future) the part of AI that is driving the current hype (LLMs and creative tools) is not going to be "your" tool (sitting passively in your toolbox), but rather a service, under the control of some business aiming to (eventually) generate huge amounts of wealth for itself, which also means everyone in that segment aggressively tries to force the technology in everywhere whether people want the "tool" or not.

But wasn't that true for every new tool? The entire internet was based on a couple of scientists who were looking for a better communication tool. I think those developments often start with some kind of elite using them.

Filip Larsen said:
Note that the hyped use of LLMs is in stark contrast to the controlled, well-paced use of machine learning for training and deploying models to, say, recognize cancerous tumors in medical imaging, which I would indeed say is a tool for, say, the radiologist.
There is another interesting example I heard of yesterday. There are thousands (about 25,000) of clay tablet fragments distributed across many museums around the world. Researchers have now digitized them and split them (electronically) into pieces to train their AI, with the goal of reconnecting them and finding possible evidence of which tablet might be connected to another one held somewhere else. A clear example of how AI can be used as a tool; a rough sketch of how such fragment matching might work is given below.
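
To make that concrete, here is a minimal, purely illustrative sketch of fragment matching. It is not the actual project's method: a crude hand-rolled "edge profile" stands in for the learned embedding a real system would use, and the names (edge_features, match_score) are made up for this example.

```python
# Purely illustrative: score how likely two digitized tablet fragments are to
# join, by comparing a crude per-row "edge profile". A real system would use a
# learned embedding trained on artificially split scans instead.
import numpy as np

def edge_features(fragment: np.ndarray) -> np.ndarray:
    """Collapse a 2D grayscale scan to a normalized row profile (one mean
    intensity per row), centered so the comparison measures shape rather
    than overall brightness."""
    profile = fragment.mean(axis=1)
    profile = profile - profile.mean()
    return profile / (np.linalg.norm(profile) + 1e-9)

def match_score(frag_a: np.ndarray, frag_b: np.ndarray) -> float:
    """Cosine similarity of the two profiles; closer to 1 means 'more likely
    to belong to the same tablet'."""
    return float(edge_features(frag_a) @ edge_features(frag_b))

# Toy demo: two halves of the same synthetic "tablet" (a brightness gradient
# plus noise) should score much higher against each other than against an
# unrelated random fragment.
rng = np.random.default_rng(0)
tablet = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64)) + 0.05 * rng.random((64, 64))
left, right = tablet[:, :32], tablet[:, 32:]
unrelated = rng.random((64, 32))

print(f"same tablet: {match_score(left, right):.3f}")    # near 1.0
print(f"different:   {match_score(left, unrelated):.3f}")  # near 0.0
```

In a real pipeline, the hand-rolled profile would be replaced by a model trained on artificially split scans, as described above, and fragment pairs scoring above some threshold would be passed to experts for verification.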

I had those examples in mind. And I wouldn't call archaeology "a service, under the control of some business aiming to (eventually) generate huge amounts of wealth for itself".
 
  • Like
Likes dextercioby, russ_watters, 256bits and 1 other person
Even the 'AI will destroy humans' line is a perverse form of marketing. If the other bad guys get it before we do, they (not AI) will take us over, so we'd better get right on it - GIVE US MONEY.

There are similarities to the 'build better weapons to stay on top' mentality. And this is humans making the decisions all on their own, with some countries being pulled into the arms race when they don't want to be in it.

There are three major technologies promoted as civilization-changing - quantum computers, nuclear fusion, and AI. Of the three, only AI is coming online; the other two, maybe, maybe not. The problem with AI is that it will be ubiquitous and in your face whether you want it and like it or not.
 
fresh_42 said:
I had those examples in mind.
I agree that many specific uses, even of hyped generative AI, are indeed very tool-like. My comment was just to point out that, on the larger scale of things, parts of modern AI are much more than "just a tool", even if you as a consumer are intended to use it like one. A parallel would be that while a carpenter would consider his toolbox to be full of tools suited to specific functions, he would probably be in for a surprise if he considered anyone he hires to perform such functions to also be a tool.
 
  • #10
Another facet of the enshittification* of the internet by way of AI. (*Cory Doctorow)

At my office, we have a work-order ticketing system. It's really frustrating to use, and we have an arrangement with the vendor about improvements (probably a discount). They take our requests but do little to fix the problems.

So just recently, instead of actually trying to improve the user experience of their product, they've simply bolted on an "Ask our AI about this feature" service.

Great. Just great.
 
Last edited:
  • Sad
  • Like
Likes phinds and 256bits
  • #11
I would slightly rephrase the answer:

Filip Larsen said:
not going to be "your" tool
but a tool of
Filip Larsen said:
some business with aim to (eventually) generate huge amount of wealth for them

Still a tool, then. It is just not us who use it to our advantage.
 
  • #12
SamRoss said:
In my discussions elsewhere..... I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
2. Are corporations and governments using the promise of AI to gain more power for themselves?
3. Are AI and transhumans an existential threat?

Here are my philosophical thoughts/answers, which, when analyzed, are basically the same: the existential threat has always been WITHIN humans.

1 - Humans can "do" less and less every decade and generation, so the "yes" to this question is approaching more from our end than from AI's.
2 - That's the definition of a corporation.
3 - When every "advancement" is used by the few to make more money off the rest, everything is an existential threat.

I don't care whether AIs can have babies or play baseball; I care whether idiots can push "the red button" believing AI knows better.

For example, the fact that people believe they need "influencers" is direct proof of all three of my answers. The very existence of the term "influencer" is proof.

A rightful government is supposed to give everyone the same RIGHTS, but we are confusing that with everyone having the same KNOWLEDGE of what is better for them as a society or as individuals, and thus knowing what to vote for to advance those interests. Big mistake.

The aim of people and their governments should be a better life for EVERYONE. Instead, the aim seems to be to make more money regardless of who or what we step on next to get it: the poor, the sick, the feeble-minded, the planet we live on...

We, as a species, prevailed over stronger, bigger species because we collaborated in exterminating every "dangerous" species and labeled the plants (and everything else) as crops, weeds, or ornamentals!

We cannot possibly have a fair chance of survival as a species if we look at each other with that same approach: milkable subjects, human waste, or pretty things...

Note:
Philosophical: relating or devoted to the study of the fundamental nature of knowledge, reality, and existence.

There goes my bit. Salutations to all.
 
  • #13
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians would lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments used to criticize the use of AI today: humans will lose their jobs and forget how to think, since everything is done by AI. The result will probably be the same as with calculators: neither A nor B will happen.

Everyone wants a machine capable of enslaving humans; that publicity is gold for the company that controls such a machine.
 
  • #14
javisot said:
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians would lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments used to criticize the use of AI today: humans will lose their jobs and forget how to think, since everything is done by AI. The result will probably be the same as with calculators: neither A nor B will happen.

Everyone wants a machine capable of enslaving humans; that publicity is gold for the company that controls such a machine.
[Mentors’ note: This post has been edited to remove some unnecessary rudeness]
Couldn't disagree more.
I was around when handheld calculators did not exist...
A - Well... mathematicians and calculators have only a loose relation to one another, so the premise was false for starters.
B - 85% of people cannot even calculate 10% of any amount without a calculator... so much so that calculators should, in my opinion, be banned from schools until higher mathematics is taught.
 
Last edited by a moderator:
  • #15
Patxitxi said:
B - In which universe have you dwelled for the last 3 decades? 85% of people cannot even calculate 10% of any amount without a calculator... so much so that calculators should, in my opinion, be banned from schools until higher mathematics is taught.
85%? 10%? Why do you invent those percentages that do not reflect the reality of the mathematical community?

It is clear that you are not part of that community.
 
  • #16
Patxitxi said:
A - Well... mathematicians and calculators have only a loose relation to one another, so the premise was false for starters.
This is a second sign that you are not part of the mathematical community; it is resolved by informing you, for example, that at NASA there was a time when calculations were done by people.
 
Last edited by a moderator:
  • #17
javisot said:
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians would lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments used to criticize the use of AI today: humans will lose their jobs and forget how to think, since everything is done by AI. The result will probably be the same as with calculators: neither A nor B will happen.

Everyone wants a machine capable of enslaving humans; that publicity is gold for the company that controls such a machine.
You could apply the same false logic to anything. Calculators didn't fundamentally change civilisation, therefore no technology can fundamentally change civilisation? But, computers have fundamentally changed civilisation. So, your false syllogism has collapsed.

A less tendentious syllogism would be:

Some technologies fundamentally change civilisation.
AI is a technology
AI might fundamentally change civilisation (or it might not)
 
  • Like
Likes hutchphd and nsaspook
  • #18
PeroK said:
You could apply the same false logic to anything. Calculators didn't fundamentally change civilisation, therefore no technology can fundamentally change civilisation? But, computers have fundamentally changed civilisation. So, your false syllogism has collapsed.

A less tendentious syllogism would be:

Some technologies fundamentally change civilisation.
AI is a technology
AI might fundamentally change civilisation (or it might not)
Who said that calculators didn't change civilization? You are attacking a straw man. Undoubtedly they changed civilization, but not in the way the most catastrophist people said. Mathematicians did not lose their jobs, nor did we stop knowing how to calculate.

I don't think you can deny this.

It seems that you think that with AI it will be different, but no real example supports your point of view.
 
  • #19
javisot said:
Who said that calculators didn't change civilization?
You are attacking a straw man. Undoubtedly they changed civilization, but not in the way the most catastrophist people said. Mathematicians did not lose their jobs, nor did we stop knowing how to calculate.

I don't think you can deny this.

It seems that you think that with AI it will be different, but no real example supports your point of view.
You drew an analogy with calculators and AI. That was your straw man, not mine.

I merely pointed out that your arguments contained common errors of logic.

The conclusion that AI might radically change the world for the worse and might be an existential threat to humanity cannot be dismissed because no technology has previously done this. In fact, that is essentially an example of survivor bias.

There are plenty of people working in AI (I can provide references if necessary) who are concerned about the possible dangers.
 
  • Like
Likes Filip Larsen and fresh_42
  • #20
PeroK said:
There are plenty of people working in AI (I can provide references if necessary) who are concerned about the possible dangers.

This reminds me of nuclear fission. It wasn't a very long way from Hahn and Meitner to nuclear power plants and (I think) currently 9 nations with atomic bombs. It still has its apocalyptic potential, and the outcome is yet unclear. The discussion about whether it was a curse or a blessing has had some very prominent contributors.

I think we are at a similar point with AI now, and the question beneath all is whether we are sufficiently mature to use it to our advantage, or our disadvantage. I have my doubts about the maturity of mankind, but that's only a pessimistic opinion.

Goethe (Zauberlehrling) said:
Herr und Meister! hör mich rufen! -
Ach, da kommt der Meister!
Herr, die Not ist groß!
Die ich rief, die Geister
werd ich nun nicht los.
(Lord and Master! Hear me call! -
Ah, here comes the Master!
Lord, the distress is great!
The spirits I called,
I cannot now rid myself of.)
 
  • Like
Likes dextercioby, russ_watters and PeroK
  • #21
PeroK said:
You drew an analogy with calculators and AI. That was your straw man, not mine.
Yes, I made an analogy between calculators and AI. An analogy is not attacking a straw man, so it does not make sense when you say "that was your straw man, not mine."
PeroK said:
I merely pointed out that your arguments contained common errors of logic.
But my arguments are not what you say they are, since you are attacking a straw man.

The person who believes that AI is something we have never seen and that could put civilization at risk is you; the one whose position is based on what we have seen so far is me.
 
  • #22
javisot said:
The person who believes that AI is something we have never seen and that could put civilization at risk is you; the one whose position is based on what we have seen so far is me.
Then nothing new can be a threat, since by definition we have no evidence on which to base an assessment! You need a better argument than that. A blanket denial that there is anything of concern is not credible, IMO.

In any case, as I said, it's not my argument. I've picked it up from Geoffrey Hinton and others who are experts in the field and are issuing warnings.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence

If you want to dismiss "my" argument as you call it, then I invite you to challenge or refute the arguments in the Wikipedia page.
 
Last edited:
  • #23
PeroK said:
Then nothing new can be a threat, since by definition we have no evidence on which to base an assessment!
At this point you should go back to what I said:
javisot said:
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians would lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments used to criticize the use of AI today: humans will lose their jobs and forget how to think, since everything is done by AI.
I don't think you can disagree with this; I'm simply describing the current situation. We are going through the same fears again, but now it's not calculators, it's AI.

So at this point, what I don't understand is why you think AI is so very different from the case of calculators.
 
  • #24
javisot said:
So at this point, what I don't understand is why you think AI is so very different from the case of calculators.
From @PeroK 's link:
In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe.

I do not see where the abacus, the slide rule, desk calculators, or WolframAlpha have ever been even close to that point. That's an inherent and fundamental difference.
 
  • Agree
  • Like
Likes phinds and PeroK
  • #25
javisot said:
So at this point, what I don't understand is why you think AI is so very different from the case of calculators.
Let me think about that!
 
  • #26
PeroK said:
But, computers have fundamentally changed civilisation.
How? Food for thought:
https://en.wikipedia.org/wiki/Civilization#Characteristics said:

Characteristics​

Social scientists such as V. Gordon Childe have named a number of traits that distinguish a civilization from other kinds of society. Civilizations have been distinguished by their means of subsistence, types of livelihood, settlement patterns, forms of government, social stratification, economic systems, literacy and other cultural traits. Andrew Nikiforuk argues that "civilizations relied on shackled human muscle. It took the energy of slaves to plant crops, clothe emperors, and build cities" and considers slavery to be a common feature of pre-modern civilizations.
https://en.wikipedia.org/wiki/Civilization#As_a_contrast_with_other_societies said:

As a contrast with other societies​

The idea of civilization implies a progression or development from a previous "uncivilized" state. Traditionally, cultures that defined themselves as "civilized" often did so in contrast to other societies or human groupings viewed as less civilized, calling the latter barbarians, savages, and primitives. Indeed, the modern Western idea of civilization developed as a contrast to the indigenous cultures European settlers encountered during the European colonization of the Americas and Australia. The term "primitive," though once used in anthropology, has now been largely condemned by anthropologists because of its derogatory connotations and because it implies that the cultures it refers to are relics of a past time that do not change or progress.

Because of this, societies regarding themselves as "civilized" have sometimes sought to dominate and assimilate "uncivilized" cultures into a "civilized" way of living. In the 19th century, the idea of European culture as "civilized" and superior to "uncivilized" non-European cultures was fully developed, and civilization became a core part of European identity. The idea of civilization can also be used as a justification for dominating another culture and dispossessing a people of their land. For example, in Australia, British settlers justified the displacement of Indigenous Australians by observing that the land appeared uncultivated and wild, which to them reflected that the inhabitants were not civilized enough to "improve" it. The behaviours and modes of subsistence that characterize civilization have been spread by colonization, invasion, religious conversion, the extension of bureaucratic control and trade, and by the introduction of new technologies to cultures that did not previously have them. Though aspects of culture associated with civilization can be freely adopted through contact between cultures, since early modern times Eurocentric ideals of "civilization" have been widely imposed upon cultures through coercion and dominance. These ideals complemented a philosophy that assumed there were innate differences between "civilized" and "uncivilized" peoples.

Also, when we talk about the end of humanity, the reality is often more the end of a civilisation:
https://en.wikipedia.org/wiki/Civilization#Fall_of_civilizations said:

Fall of civilizations​

Civilizations are traditionally understood as ending in one of two ways; either through incorporation into another expanding civilization (e.g. as Ancient Egypt was incorporated into Hellenistic Greek, and subsequently Roman civilizations), or by collapsing and reverting to a simpler form of living, as happens in so-called Dark Ages.

PeroK said:
There are plenty of people working in AI (I can provide references if necessary) who are concerned about the possible dangers.
As far back as you can go in history, across all cultures, there were always people who were the "experts" of their time, instilling fear in others. The reality is that they had no more of a clue than anyone else, and they used that fear to their advantage, consciously or not.

PeroK said:
We live in an increasingly divided world and there is a real risk that AI will divide and conquer humanity. It could use our hatred and distrust of each other to help us destroy ourselves.
And there we have it, the answer is always the simplest one and has been said many times in many ways: Love will protect us all.
 
  • #27
jack action said:
As far as you can go in history, across all cultures, there were always people who were "experts" of their time, instilling fear into others. The reality is that they had no clue as much as the others, and they used that fear to their advantage, consciously or not.
This is a glib way of avoiding debate. Experts have been wrong in the past, so we can ignore what any so-called experts say.

It's the same type of argument made by climate change deniers, vaccine sceptics, tobacco companies in the light of evidence of cancer etc.

The key point is this. If the threat is not real, then all we'll do is slow AI development unnecessarily. If the threat is real and we ignore it, then the result could be catastrophic.
 
  • #28
javisot said:
what I don't understand is why you think AI is so very different from the case of calculators
I find it a bit unrealistic that you really can't spot the different orders of magnitude in the potential for change that the two technologies bring about. And it also makes it a bit pointless trying to engage with your arguments, because either you must be ignorant of what the technology and the drive behind it entail, or you have a different agenda than understanding what is going on.
 
  • Agree
Likes gentzen and PeroK
  • #29
Filip Larsen said:
I find it a bit unrealistic that you really can't spot the different orders of magnitude in the potential for change that the two technologies bring about. And it also makes it a bit pointless trying to engage with your arguments, because either you must be ignorant of what the technology and the drive behind it entail, or you have a different agenda than understanding what is going on.
Thank you for saying this.
 
  • #30
Filip Larsen said:
I find it a bit unrealistic that you really can't spot the different orders of magnitude in the potential for change that the two technologies bring about. And it also makes it a bit pointless trying to engage with your arguments, because either you must be ignorant of what the technology and the drive behind it entail, or you have a different agenda than understanding what is going on.
Could you elaborate on that?
You simply answered "yes, the case of calculators is different from AI, and you don't see it," but you haven't explained what that difference consists of.

You propose that this difference in potential between the two technologies will lead to a different (and negative) outcome in the case of AI compared to the case of calculators. Why? Please elaborate.
 
