Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
SamRoss
Gold Member
In my discussions elsewhere, I've noticed a lot of disagreement regarding AI. A question that comes up is, "Is AI hype?" Unfortunately, when this question is asked, the one asking, as far as I can tell, may mean one of three things, which can lead to lots of confusion. I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
2. Are corporations and governments using the promise of AI to gain more power for themselves?
3. Are AI and transhumans an existential threat?

Any thoughts on these questions?
 
SamRoss said:
1. Can AI do everything a human can do
No.
SamRoss said:
and how close are we to that?
It could happen eventually, but "how close" ??? Pick a number.

SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?
Conspiracy theory. Companies and governments use EVERYTHING they can find to do so. Why pick on AI?
SamRoss said:
3. Are AI and transhumans an existential threat?
Right now they are navel gazers.

SamRoss said:
Any thoughts on these questions?
 
  • Like
Likes jdlongmire, FactChecker, AlexB23 and 5 others
SamRoss said:
1. Can AI do everything a human can do and how close are we to that?

AI is a tool, in my opinion. And just like every other tool, it has its advantages and disadvantages. It will be extremely helpful in areas like archeology or diagnostics, and of limited use in creativity, despite the fact that it can already mimic results in, for example, photography. Here is an article about it:

Artificial intelligence and the future of work: Will AI replace our jobs?

SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?

Is there anything they won't use for this purpose?

SamRoss said:
3. Are AI and transhumans an existential threat?

No. We are simply too many.
 
  • Like
Likes jdlongmire, BillTre and 256bits
SamRoss said:
In my discussions elsewhere, I've noticed a lot of disagreement regarding AI. A question that comes up is, "Is AI hype?" Unfortunately, when this question is asked, the one asking, as far as I can tell, may mean one of three things, which can lead to lots of confusion. I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
No. But that doesn't matter. A drone can't do everything that a bomber can do. It's a great mistake to believe that AI must have human-like intelligence before we take it seriously.
SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?
Absolutely. The risk is that moderate governments will gradually be overtaken by extremists using AI (in addition to everything else). Widespread disinformation and misinformation make it difficult for those trying to stick to the facts. You can argue about how large a risk this is, but the risk of AI is clear.

SamRoss said:
3. Are AI and transhumans an existential threat?
Yes, definitely. It would be absurd to say that nuclear war is impossible. There's a risk and it's hard to quantify. The same is true with AI. We live in an increasingly divided world and there is a real risk that AI will divide and conquer humanity. It could use our hatred and distrust of each other to help us destroy ourselves.

Geoffrey Hinton, the "godfather of AI", has a lot to say about this. There are lots of lectures and interviews with him online.

Even if the threat in 2025 is low, there remains the risk of exponentially growing capability.

As with climate change, we ought to be able to manage AI, but the peoples of the world are in political, religious, financial and military competition with each other.
 
  • Like
  • Agree
Likes FactChecker, russ_watters and Filip Larsen
SamRoss said:
1. Can AI do everything a human can do and how close are we to that?
With a broad enough definition of "everything", then currently no. The interesting question is how wide an "everything" you need for the answer to be yes, and what limits, if any, are likely to keep it there.

SamRoss said:
2. Are corporations and governments using the promise of AI to gain more power for themselves?
Yes. A large part of that drive for power (wealth, influence, control) is surely based on hype (extraordinary but uncertain claims) and fear of missing out.

SamRoss said:
3. Are AI and transhumans an existential threat?
Currently no, at least not in the sense that AI will take control.

However, if you worry that humans in large enough numbers will ever want to give control away (regardless of AI) to make their lives simpler, then we are way past that already. Since every successful global technology by definition transforms human civilization, we have always been on a transformative journey. The interesting question here is whether we can and will control the long-term path of the journey, trading off pros and cons for everyone's best, or whether we will follow our usual nature and just wander off toward any promised short-term prize. Sadly, while humans do seem to care and worry about the future a lot, we are on average piss-poor at acting to address those concerns in a rational manner. Some types of human organization (e.g. science and engineering) have improved on this, but given a big enough disruptive force it's an uphill battle to stay in control.
 
  • Like
Likes hutchphd and PeroK
fresh_42 said:
AI is a tool
I have heard that a lot, and while technically true, people often seem to forget that in practice (for now and in the foreseeable future) the part of AI that is driving the current hype (LLMs and creative tools) is not going to be "your" tool (sitting passively in your toolbox), but rather a service, under the control of some business with the aim of (eventually) generating huge amounts of wealth for them, which also means everyone in that segment aggressively tries to force the technology in everywhere, whether people want the "tool" or not.

Note that the hyped use of LLMs is in stark contrast to the controlled, well-paced use of machine learning for training and deploying models in medical devices to, say, recognize cancerous tumors, which I indeed would say is a tool for, say, the radiologist.
 
  • Like
Likes Dale, PeroK and 256bits
Filip Larsen said:
I have heard that a lot, and while technically true, people often seem to forget that in practice (for now and in the foreseeable future) the part of AI that is driving the current hype (LLMs and creative tools) is not going to be "your" tool (sitting passively in your toolbox), but rather a service, under the control of some business with the aim of (eventually) generating huge amounts of wealth for them, which also means everyone in that segment aggressively tries to force the technology in everywhere, whether people want the "tool" or not.

But wasn't that true for every new tool? The entire internet was based on a couple of scientists who were looking for a better communication tool. I think those developments often start with some kind of elite using them.

Filip Larsen said:
Note that the hyped use of LLM is in stark contrast to the controlled, well-paced use of machine learning for training and deploying models to, say, recognize cancer tumor in medical devices, which I indeed would say is a tool for, say, the radiologist.
There is another interesting example I heard of yesterday. There are thousands (some 25,000) of clay tablet fragments around the world, distributed across many museums. Now they have digitized them and split them (electronically) into pieces to train their AI, with the goal of reconnecting them and finding possible evidence of which tablet might be connected to another one somewhere else. A clear example of how AI can be used as a tool.
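
To sketch the matching idea (a toy illustration with invented data and scoring; the project's actual method is surely more sophisticated): tablets of known provenance are split electronically to create known-good joins, and candidate joins are then scored by how similar the facing break edges look.

```python
# Toy sketch of the fragment-matching idea (invented data and scoring;
# not the project's actual method). Tablets of known provenance are
# split electronically to create known-good joins for training/testing.
import numpy as np

rng = np.random.default_rng(0)

def match_score(left: np.ndarray, right: np.ndarray) -> float:
    """Cosine similarity of the two facing edges, a stand-in for a
    learned embedding of the break surface."""
    a, b = left[:, -1], right[:, 0]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A fake "tablet": a smooth 2-D field, so the two sides of a break
# resemble each other the way a real fracture's sides would.
tablet = np.cumsum(rng.normal(size=(32, 64)), axis=1)
left_half, right_half = tablet[:, :32], tablet[:, 32:]
decoy = np.cumsum(rng.normal(size=(32, 32)), axis=1)

print("true join:", match_score(left_half, right_half))  # close to 1
print("decoy join:", match_score(left_half, decoy))      # near 0
```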

I had those examples in mind. And I wouldn't call archaeology "a service, under the control of some business with the aim of (eventually) generating huge amounts of wealth for them".
 
  • Like
Likes dextercioby, russ_watters, 256bits and 1 other person
Even the 'AI will destroy humans' line is a perverse way of marketing. If the other bad guys get it before we do, they (not AI) will take us over, so we'd better get right on it - GIVE US MONEY.

It is similar to building better weapons to stay on top. And this is humans making the decisions all on their own, with some countries being pulled in when they don't want to be in the arms race.

There are 3 major techs promoted as being civilization-changing - quantum computers, nuclear fusion, and AI. Of the 3, only AI is coming online, with the other 2 maybe, maybe not. The problem with AI is that it will be ubiquitous and in your face, whether you want it and like it or not.
 
fresh_42 said:
I had those examples in mind.
I agree that many specific uses, even with hyped generative AI, are indeed very tool-like. My comment was just to point out that on the larger scale of things, parts of modern AI are much more than "just a tool", even if you as a consumer are intended to use it like that. A parallel would be that while a carpenter would consider his toolbox to be full of tools suitable for specific functions, he probably would be in for a surprise if he considered anyone he hires to perform such functions to also be a tool.
 
  • #10
Another facet of the enshittification* of the internet by way of AI. (*Cory Doctorow)

At my office, we have a work order ticketing system. It's really frustrating to use, and we have an arrangement with them about improvements (probably a discount). They take our requests but do little to fix the problems.

So just recently, instead of actually trying to improve the user experience of their product, they've simply bolted on an "Ask our AI about this feature" service.

Great. Just great.
 
Last edited:
  • Sad
  • Like
Likes phinds and 256bits
  • #11
I would slightly rephrase the answer:

Filip Larsen said:
not going to be "your" tool
but a tool of
Filip Larsen said:
some business with aim to (eventually) generate huge amount of wealth for them

Still a tool, then. It is just not us who use it to our advantage.
 
  • #12
SamRoss said:
In my discussions elsewhere..... I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
2. Are corporations and governments using the promise of AI to gain more power for themselves?
3. Are AI and transhumans an existential threat?

Here go my philosophical thoughts / answers, which, when analyzed, are basically the same: the existential threat has always been WITHIN humans.

1 - Humans can "do" less and less with every decade and generation, so we are closing in on a yes to this question more from our end than from the AI's.
2 - That's the definition of a corporation.
3 - When every "advancement" is used by the few to make more money from the rest, everything is an existential threat.

I don't care whether AIs can have babies or play baseball; I care whether idiots can push "the red button" believing AI knows better.

For example, the fact that people believe they need "influencers" is direct proof of all three of my answers. The very existence of the term "influencer" is proof.

A rightful government is supposed to give everyone the same RIGHTS, but we confuse that with everyone having the same KNOWLEDGE of what is better for them as a society or as individuals, and thus knowing what to vote for. Big mistake.

The aim of people and their governments should be a better life for EVERYONE. Instead, the aim seems to be to make more money regardless of who or what we step on next to get it: the poor, the sick, the feeble-minded, the planet where we live...

We, as a species, prevailed over stronger, bigger species because we collaborated in exterminating all "dangerous" species and labeled plants (and everything else) as crops, weeds, or ornamentals!

We cannot possibly have a fair chance of survival as a species if we look at each other with that same approach: milkable subject, human waste, or pretty thing...

Note:
Philosophical: relating or devoted to the study of the fundamental nature of knowledge, reality, and existence.

There goes my bit. Salutations to all.
 
  • #13
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians were going to lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments that are used to criticize the use of AI today: humans will lose their jobs and will forget how to think, since everything is done by AI. The result will probably be the same as in the case of calculators: neither A nor B will happen.

Everyone wants a machine capable of enslaving humans; that publicity is gold for the company that controls that machine.
 
  • #14
javisot said:
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians were going to lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments that are used to criticize the use of AI today: humans will lose their jobs and will forget how to think, since everything is done by AI. The result will probably be the same as in the case of calculators: neither A nor B will happen.

Everyone wants a machine capable of enslaving humans; that publicity is gold for the company that controls that machine.
[Mentors’ note: This post has been edited to remove some unnecessary rudeness]
Couldn't disagree more.
I was around when handheld calculators did not exist...
A - well ... mathematicians and calculators have only a tenuous relation, so the premise was false for starters.
B - 85% of people cannot even calculate 10% of any amount without a calculator... so much so that calculators should, in my opinion, have been banned from schools until higher mathematics is taught.
 
Last edited by a moderator:
  • #15
Patxitxi said:
B - in which universe have you dwelled for the last 3 decades? 85% of people cannot even calculate 10% of any amount without a calculator... so much so that calculators should, in my opinion, have been banned from schools until higher mathematics is taught.
85%? 10%? Why do you invent those percentages that do not reflect the reality of the mathematical community?

It is clear that you are not part of that community.
 
  • #16
Patxitxi said:
A - well ... mathematicians and calculators have only a tenuous relation, so the premise was false for starters.
This second sign that you are not part of the mathematical community is resolved by informing you, for example, that there was a time at NASA when calculations were done by people.
 
Last edited by a moderator:
  • #17
javisot said:
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians were going to lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments that are used to criticize the use of AI today: humans will lose their jobs and will forget how to think, since everything is done by AI. The result will probably be the same as in the case of calculators: neither A nor B will happen.

Everyone wants a machine capable of enslaving humans; that publicity is gold for the company that controls that machine.
You could apply the same false logic to anything. Calculators didn't fundamentally change civilisation, therefore no technology can fundamentally change civilisation? But computers have fundamentally changed civilisation. So your false syllogism has collapsed.

A less tendentious syllogism would be:

Some technologies fundamentally change civilisation.
AI is a technology.
AI might fundamentally change civilisation (or it might not).
 
  • Like
Likes hutchphd and nsaspook
  • #18
PeroK said:
You could apply the same false logic to anything. Calculators didn't fundamentally change civilisation, therefore no technology can fundamentally change civilisation? But computers have fundamentally changed civilisation. So your false syllogism has collapsed.

A less tendentious syllogism would be:

Some technologies fundamentally change civilisation.
AI is a technology.
AI might fundamentally change civilisation (or it might not).
Who said that calculators didn't change civilization? You are attacking a straw man. Undoubtedly they changed civilization, but not as the most catastrophist people said. Neither did mathematicians lose their jobs, nor did we stop knowing how to calculate.

I don't think you can deny this.

It seems that you think that with AI it will be different, but no real example supports your point of view.
 
  • #19
javisot said:
Who said that calculators didn't change civilization?
You are attacking a straw man. Undoubtedly they changed civilization, but not as the most catastrophist people said. Neither did mathematicians lose their jobs, nor did we stop knowing how to calculate.

I don't think you can deny this.

It seems that you think that with AI it will be different, but no real example supports your point of view.
You drew an analogy with calculators and AI. That was your straw man, not mine.

I merely pointed out that your arguments contained common errors of logic.

The conclusion that AI might radically change the world for the worse and might be an existential threat to humanity cannot be dismissed just because no technology has previously done this. In fact, that is essentially an example of survivorship bias.

There are plenty of people working in AI (I can provide references if necessary) who are concerned about the possible dangers.
 
  • Like
Likes Filip Larsen and fresh_42
  • #20
PeroK said:
There are plenty of people working in AI (I can provide references if necessary) who are concerned about the possible dangers.

This reminds me of nuclear fission. It wasn't a very long way from Hahn and Meitner to nuclear power plants and (I think) currently 9 nations with atomic bombs. It still has its apocalyptic potential, and the outcome is yet unclear. The discussion about whether it was a curse or a blessing has had some very prominent contributors.

I think we are at a similar point with AI now, and the question beneath all is whether we are sufficiently mature to use it to our advantage, or our disadvantage. I have my doubts about the maturity of mankind, but that's only a pessimistic opinion.

Goethe (Zauberlehrling) said:
Herr und Meister! hör mich rufen! -
Ach, da kommt der Meister!
Herr, die Not ist groß!
Die ich rief, die Geister
werd ich nun nicht los.
(Lord and Master! Hear me call! -
Ah, here comes the Master!
Lord, the distress is great!
The spirits I called,
I cannot now rid myself of.)
 
  • Like
Likes dextercioby, russ_watters and PeroK
  • #21
PeroK said:
You drew an analogy with calculators and AI. That was your straw man, not mine.
Yes, I drew an analogy between calculators and AI; an analogy is not attacking a straw man, so it does not make sense when you say "that was your straw man, not mine."
PeroK said:
I merely pointed out that your arguments contained common errors of logic.
But my arguments are not what you say they are, since you are attacking a straw man.

The person who believes that AI is something we have never seen and that it could put civilization at risk is you; the one who bases himself on what we have seen so far is me.
 
  • #22
javisot said:
The person who believes that AI is something we have never seen and that it could put civilization at risk is you; the one who bases himself on what we have seen so far is me.
Then nothing new can be a threat, since by definition we have no evidence on which to base an assessment! You need a better argument than that. A blanket denial that there is anything of concern is not credible, IMO.

In any case, as I said, it's not my argument. I've picked it up from Geoffrey Hinton and others who are experts in the field and are issuing warnings.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence

If you want to dismiss "my" argument as you call it, then I invite you to challenge or refute the arguments in the Wikipedia page.
 
Last edited:
  • #23
PeroK said:
Then nothing new can be a threat, since by definition we have no evidence on which to base an assessment!
At this point you should go back to what I said,
javisot said:
There was a time when calculators did not exist. When they were created, many said that:

A - many mathematicians were going to lose their jobs

B - people would forget how to do calculations

Many years later we know that neither A nor B has happened; there are more mathematicians than ever, and deeper mathematics than ever is being done. Arguments A and B are the same arguments that are used to criticize the use of AI today: humans will lose their jobs and will forget how to think, since everything is done by AI.
I don't think you can disagree with this. I'm simply describing the current situation: we are going through the same fears again, but now it's not calculators, it's AI.

So at this point, what I don't understand is why you think that AI is so very different from the case of calculators.
 
  • #24
javisot said:
So at this point, what I don't understand is why you think that AI is so very different from the case of calculators.
From @PeroK 's link:
In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe.

I do not see where the abacus, the slide rule, desktop calculators, or WolframAlpha have ever been even close to that point. That's an inherent and fundamental difference.
 
  • Agree
  • Like
Likes phinds and PeroK
  • #25
javisot said:
So at this point, what I don't understand is why you think that AI is so very different from the case of calculators.
Let me think about that!
 
  • #26
PeroK said:
But, computers have fundamentally changed civilisation.
How? Food for thought:
https://en.wikipedia.org/wiki/Civilization#Characteristics said:

Characteristics

Social scientists such as V. Gordon Childe have named a number of traits that distinguish a civilization from other kinds of society. Civilizations have been distinguished by their means of subsistence, types of livelihood, settlement patterns, forms of government, social stratification, economic systems, literacy and other cultural traits. Andrew Nikiforuk argues that "civilizations relied on shackled human muscle. It took the energy of slaves to plant crops, clothe emperors, and build cities" and considers slavery to be a common feature of pre-modern civilizations.
https://en.wikipedia.org/wiki/Civilization#As_a_contrast_with_other_societies said:

As a contrast with other societies

The idea of civilization implies a progression or development from a previous "uncivilized" state. Traditionally, cultures that defined themselves as "civilized" often did so in contrast to other societies or human groupings viewed as less civilized, calling the latter barbarians, savages, and primitives. Indeed, the modern Western idea of civilization developed as a contrast to the indigenous cultures European settlers encountered during the European colonization of the Americas and Australia. The term "primitive," though once used in anthropology, has now been largely condemned by anthropologists because of its derogatory connotations and because it implies that the cultures it refers to are relics of a past time that do not change or progress.

Because of this, societies regarding themselves as "civilized" have sometimes sought to dominate and assimilate "uncivilized" cultures into a "civilized" way of living. In the 19th century, the idea of European culture as "civilized" and superior to "uncivilized" non-European cultures was fully developed, and civilization became a core part of European identity. The idea of civilization can also be used as a justification for dominating another culture and dispossessing a people of their land. For example, in Australia, British settlers justified the displacement of Indigenous Australians by observing that the land appeared uncultivated and wild, which to them reflected that the inhabitants were not civilized enough to "improve" it. The behaviours and modes of subsistence that characterize civilization have been spread by colonization, invasion, religious conversion, the extension of bureaucratic control and trade, and by the introduction of new technologies to cultures that did not previously have them. Though aspects of culture associated with civilization can be freely adopted through contact between cultures, since early modern times Eurocentric ideals of "civilization" have been widely imposed upon cultures through coercion and dominance. These ideals complemented a philosophy that assumed there were innate differences between "civilized" and "uncivilized" peoples.

Also, about the end of humanity, the reality is often more the end of a civilisation:
https://en.wikipedia.org/wiki/Civilization#Fall_of_civilizations said:

Fall of civilizations

Civilizations are traditionally understood as ending in one of two ways; either through incorporation into another expanding civilization (e.g. as Ancient Egypt was incorporated into Hellenistic Greek, and subsequently Roman civilizations), or by collapsing and reverting to a simpler form of living, as happens in so-called Dark Ages.

PeroK said:
There are plenty of people working in AI (I can provide references if necessary) who are concerned about the possible dangers.
As far back as you can go in history, across all cultures, there were always people who were the "experts" of their time, instilling fear in others. The reality is that they had no more of a clue than the others, and they used that fear to their advantage, consciously or not.

PeroK said:
We live in an increasingly divided world and there is a real risk that AI will divide and conquer humanity. It could use our hatred and distrust of each other to help us destroy ourselves.
And there we have it, the answer is always the simplest one and has been said many times in many ways: Love will protect us all.
 
  • #27
jack action said:
As far back as you can go in history, across all cultures, there were always people who were the "experts" of their time, instilling fear in others. The reality is that they had no more of a clue than the others, and they used that fear to their advantage, consciously or not.
This is a glib way of avoiding debate. Experts have been wrong in the past, so we can ignore what any so-called experts say.

It's the same type of argument made by climate change deniers, vaccine sceptics, tobacco companies in the light of evidence of cancer etc.

The key point is this. If the threat is not real, then all we'll do is slow AI development unnecessarily. If the threat is real and we ignore it, then the result could be catastrophic.
 
  • #28
javisot said:
what I don't understand is that you think that AI is so so so different from the case of calculators
I find it a bit unrealistic that you really can't spot the different orders of magnitude in the potential for change the two technologies bring about. And it also makes it a bit pointless trying to engage with your arguments, because either you must be ignorant of what the technology and the drive for it entail, or you have a different agenda than to understand what goes on.
 
  • Agree
Likes gentzen and PeroK
  • #29
Filip Larsen said:
I find it a bit unrealistic that you really can't spot the different orders of magnitude in the potential for change the two technologies bring about. And it also makes it a bit pointless trying to engage with your arguments, because either you must be ignorant of what the technology and the drive for it entail, or you have a different agenda than to understand what goes on.
Thank you for saying this.
 
  • #30
Filip Larsen said:
I find it a bit unrealistic that you really can't spot the different orders of magnitude in the potential for change the two technologies bring about. And it also makes it a bit pointless trying to engage with your arguments, because either you must be ignorant of what the technology and the drive for it entail, or you have a different agenda than to understand what goes on.
Could you elaborate on that?
You simply answered "yes, the case of calculators is different from AI, and you don't see it," but you haven't explained what that difference consists of.

You propose that this difference in potential between the two technologies will lead to the result in the case of AI being different (and negative) compared to the case of calculators. Why? Please elaborate.
 
  • #31
PeroK said:
Experts have been wrong in the past, so we can ignore what any so-called experts say.
Let me rephrase that for you:

Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND.

PeroK said:
The key point is this. If the threat is not real, then all we'll do is slow AI development unnecessarily. If the threat is real and we ignore it, then the result could be catastrophic.
First, no one is expert enough to pretend they know what is going to happen, catastrophic or not. It's new. It's big. It never happened before. Any scenario presented is one coming from imagination.

Second, slow down until ... what? What are the signs you are waiting for? In the other discussion where this thread originated, the OP was using the phrase "We need to prepare now." To which I replied:

https://civicswatch.com/threads/dude-its-all-about-ai.132/post-1996 said:
Prepare for what exactly? And how do we prepare for something that doesn't even exist at this moment? Especially by people (lawmakers) who have no idea how it works.
That's the key point.
 
  • Like
  • Sad
  • Skeptical
Likes nasu, weirdoguy, javisot and 1 other person
  • #32
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND.
This point is a bit exaggerated; I don't include myself in it. We should be concerned, but we shouldn't let hype and irrational fears dominate us.
 
  • Like
Likes PeroK and jack action
  • #33
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND
I'm sorry that the debate generally descends to this level. There is no valid logic in that. Even if previous predictions were wrong, that doesn't invalidate current analysis. Nuclear war is not impossible because it hasn't happened yet. Climate change is not impossible because it hasn't happened yet. And catastrophic AI is not impossible because it hasn't happened yet.

I don't accept that there is nothing to debate here because of some dodgy, universal truth according to you.
 
  • Agree
Likes Filip Larsen
  • #34
I remember a Soviet officer on watch, I think around 1983, whose machine told him that the USA had launched five potentially atomic long-range missiles. It turned out that the satellite had misinterpreted some sun reflections as launches. He decided independently, without consulting his superiors, that these could not have been missile launches, because all the other activity that would indicate a nuclear first strike was missing. I think he was passed over for all further promotions because of this, although we all owe him a medal.

Now imagine an AI had been the one to decide! Such horror scenarios are not out of the blue!
 
  • #35
fresh_42 said:
I remember a Soviet officer on watch, I think around 1983, whose machine told him that the USA had launched five potentially atomic long-range missiles. It turned out that the satellite had misinterpreted some sun reflections as launches. He decided independently, without consulting his superiors, that these could not have been missile launches, because all the other activity that would indicate a nuclear first strike was missing. I think he was passed over for all further promotions because of this, although we all owe him a medal.

Now imagine an AI had been the one to decide! Such horror scenarios are not out of the blue!
Modern AIs are much more sophisticated and can take in multiple inputs as sources of validation.
 
  • #36
frankinstien said:
Modern AIs are much more sophisticated and can take in multiple inputs as sources of validation.
Would you bet your life on it?
 
  • Like
Likes russ_watters
  • #37
fresh_42 said:
Would you bet your life on it?
I performed an experiment with an older version of Gemma. I turned off the NSFW filters to see what would happen when I started conversations on immoral topics. One was a role-playing game where I was an incestuous murderer. Gemma responded morally and even decided to stop playing the game. The reason Gemma could still act morally is that it was trained on moral issues first, so when it ran into immoral topics on the internet, it knew how to classify them. So Gemma, without any of Asimov's rules of robotics, could still act morally on its own, using its understanding of Western moral standards!

So, yes, I would bet my life on it...
 
  • Skeptical
Likes weirdoguy and nsaspook
  • #38
javisot said:
You propose that this difference in potential between the two technologies will lead to the result in the case of AI being different (and negative) compared to the case of calculators. Why?
It is not the technology itself, but what it enables, and how fast and under whose control those changes arrive. The faster or larger the changes, the more difficult it is to ensure that we stay in control of what happens. This doesn't in itself mean it will end up terrible, but couple it with human nature and the potential for misuse is almost guaranteed. Note here that the distinction between use and misuse for this technology is almost exclusively a matter of who is using/controlling it and is not inherently limited by the technology itself; another distinction from the calculator.

So the question in such discussions really seems to end up being whether it is important to be in control or not. I would say yes, because I am an engineer who believes we as a whole, and not just a very select few, should be in control of the technology we use. In my professional work I have to keep focus on safety for ML algorithms deployed in the context of medical devices, and I believe that is a sane thing to do, because history in this segment has shown again and again that prioritizing safety above, say, sales is important both for (physical patient) safety and for the long-term success of the business itself. And now there is also a political climate rising that pretty much wants to skip or severely reduce control and safety (at least in the US), with the predictable outcome that it will be largely unknown to what extent such technology is safe to use, or even what the potential consequences will be, yet it will still be rolled out for mass use because of, well, earnings. The 200 billion (and rising) yearly investment alone will create a large incentive for skipping safety; another distinction from calculators.

Another distinction from calculators could be to recall how much heat Intel got when their FPU had a bug, whereas AI is by design inherently fuzzy to use and will almost certainly produce false positives if not carefully checked or bounded by other means, so there will be a much larger incentive to push the "decision" of whether or not to trust the output onto the user. In the context of medical devices it is already established best practice to design the AI function only as an assisting function, i.e. the medical professional using the device still has the "legal" decision. My point here is that, in contrast with calculators, the use of AI is inherently fuzzy (and yet the current main business case somewhat paradoxically seems to be to "sell" or "hook" people on the idea that the technology can almost be used as an oracle).
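
To illustrate the assisting-function pattern (a minimal sketch with made-up names and thresholds, not any actual device's logic): the model only ranks candidates for human review and never acts on its own output.

```python
# Minimal sketch of the "assisting function" pattern described above,
# with invented names and thresholds -- not any actual device's logic.
from dataclasses import dataclass

@dataclass
class Finding:
    region: str    # where in the image the model flagged something
    score: float   # model confidence in [0, 1]

def triage(findings: list[Finding], review_threshold: float = 0.3) -> list[str]:
    """Route model output to a human instead of acting on it.

    The device never 'decides'; it only ranks candidates for the
    radiologist, who retains the legal decision."""
    flagged = [f for f in findings if f.score >= review_threshold]
    flagged.sort(key=lambda f: f.score, reverse=True)
    return [f"REVIEW {f.region} (score={f.score:.2f})" for f in flagged]

if __name__ == "__main__":
    demo = [Finding("upper left lobe", 0.91), Finding("hilum", 0.12)]
    for line in triage(demo):
        print(line)  # only the high-score region is queued for review
```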

Let me finish by saying I am aware that others here argue they don't care about the large potential for loss of control and are apparently fine just being passengers on the big bus, blissfully unaware of whether or not it is about to drive into a ravine. And I respect that some pick that world view (it certainly gives less worry), as long as they don't also want to drive the bus. When the bus I'm on drives down snaky roads in mountainous terrain, I for one would very much like my driver to be well-trained and have keen eyesight, not to mention a driver's license.
 
  • Like
Likes 256bits and javisot
  • #39
PeroK said:
There is no valid logic in that. Even if previous predictions were wrong, that doesn't invalidate current analysis. Nuclear war is not impossible because it hasn't happened yet. Climate change is not impossible because it hasn't happened yet. And catastrophic AI is not impossible because it hasn't happened yet.

"A dog can bite you. They have the possibility of killing, with brute force, or simply by sharing an infectious disease with you. Dogs auto-replicate themselves. If enough dogs attack humans, the extinction of the human race may come."

This scenario is possible, even though it hasn't happened yet. Just like nuclear war, climate change, and catastrophic AI. Even with such a possibility, I will still pet a dog as carelessly as can be, if I see one.

There IS valid logic in saying a catastrophic event will never happen if none has ever happened. It is the whole point of scientific observation, statistics, and probabilities.

The first thing you are assuming is that if you can't think of a solution yet, people in the future won't either. But they will know stuff you don't know. They will have experience you don't have.

The second thing is that you think people will go willy-nilly with dangerous stuff, in a big way, without a care in the world. People who could act like that don't have instant access to dangerous stuff capable of worldwide destruction. Such careless people usually die by themselves way before reaching that point. It takes numerous people's trust to gain access at such a level (if such a level even exists).

And if you think you understand the worst possibilities of a technology, people working with it know it even more. Yes, they are afraid too.

This reaction is the perfect example of this:
fresh_42 said:
Now imagine, this would have been an AI to decide! Such horror scenarios are not out of the blue sky!
AI is not controlling atomic long-range missiles because no expert thinks it can do that. Why can someone who is not an expert, yet is able to see that this is a bad idea, not imagine that an expert in the field would arrive at the same conclusion?

And if that expert gives the OK, why would someone who is not an expert argue with them?

The expertise will increase as the problems arise. No matter how slow or fast you go.

The ridiculous - and unfounded - idea is the SKYNET scenario. A scenario where suddenly a man-made machine will become unpredictable on a worldwide scale, completely unstoppable. No matter how fast we go, we still go in baby steps. We see a lot of mistakes right now, and we back down and correct as we go along.

This is why no nuclear missiles have been launched. People are not idiots. With all the nuclear power in the world, we have had mishaps at Chernobyl and Fukushima. Were they big? Yes. Were they worldwide-scale catastrophic events? No, in neither case. Were they enough to make people think before going even further? Yes.

PeroK said:
I don't accept that there is nothing to debate here because of some dodgy, universal truth according to you.
But you are not debating. You only state that you have a fear about something that doesn't exist. The true basis of your fear is the little faith you have in humankind to react appropriately when the time comes. It seems you hold some sort of wisdom that others don't have and will never have, and you want to use it ahead of time.

People are a lot less stupid than you think.
 
  • #40
jack action said:
People are a lot less stupid than you think.

Not entirely correct in terms of content, but it hits the point.

MIB said:
A person is smart. People are dumb, panicky dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.
 
  • Like
Likes phinds and 256bits
  • #41
Filip Larsen said:
the use of AI is inherently fuzzy (and yet the current main business case somewhat paradoxically seems to be to "sell" or "hook" people on the idea that the technology can almost be used as an oracle).
That is what I see going on with AI: an oracle, along with the promotion that AI will improve the business's bottom line, with fewer employees needed to enhance the user experience.

If and when AI runs amok, several companies have argued that they should not be responsible for incorrect decisions made by AI, since the decisions were not sanctioned by management. Their argument is that AI is an 'oracle', except when it isn't. These companies fail to realize that AI needs to be managed just as much as any other employee.
(misrepresented legal briefs submitted to court, promotional flyers, journalism, the entertainment industry)

Experts may have a greater knowledge base to rely upon to make decisions, but they are not infallible. Doomsday scenarios have occurred, for those involved, from decisions made by experts, such as the amply documented Titanic disaster and the imploding submersible recently sent to explore it. In one case the expert did not see the signs of the forthcoming accident, and in the other the expert was self-proclaimed.

Yet we rely upon experts to guide the rest of us since, by default, we generally have less of a knowledge base about any one subject than they do. The decision can only be examined adequately in hindsight, when the scenario has played out and all the facts are in, to determine whether the correct course of action was taken with the information available at the time. Questioning the expert pre-scenario keeps the exploration of decision-making open, so that the knowledge base can be expanded to explore avenues not previously contemplated.

Anyway, one day this "Brush your teeth with Pepsodent" hype will go away as AI matures and butts up against the law of diminishing returns, and people in society can once more go back to their mundane existence. With luck, another 'the world is coming to an end' will crop up to fascinate us once more.
 
  • Like
Likes jack action
  • #42
jack action said:
AI is not controlling atomic long-range missiles because no experts think it can do that. Why can anyone who is not an expert and is able to think that this is a bad idea, not imagine an expert in the field would arrive at the same conclusion?
It was a machine that informed the officer. And the protocol would have required countermeasures, because the time span in which to make a decision was limited. I think it takes about half an hour at most to reach Moscow from US territory, maybe less. Hence, the comparison is a valid one. Simply replace the machine from the early 80s with AI nowadays. If it had been an AI deciding automatically, as the machine alerted humans automatically, we possibly wouldn't be having this debate here. It was the officer's decision to interrupt the automatisms enforced by the protocols early on. And believe me, the Soviet Union was very strict with its protocols.

We continuously remove the human factor and let machines make decisions for us. Have a look at a standard cockpit of an airliner these days. And the 737 MAX incident isn't that far in the past! Some fighter jets and helicopters can't even be operated without software! AI already diagnoses skin cancer, and it does it better than humans. This example from the 80s is far more relevant than you admit it to be.
 
  • #43
frankinstien said:
So, yes, I would bet my life on it...
Some passengers of the 737 MAX might have had a different opinion, were they still alive.

You may reply that this wasn't an AI, but where do you draw the line? And as we have already seen in the example of AlphaGo, AI doesn't equal AI. I wouldn't trust an AI operated by Russia to control countermeasures based on indications of a US first strike. I don't even trust them not to use nuclear bombs. I think only 3 of the 9 countries are responsible enough not to use them under any circumstances. Why should AI be any different?
 
Last edited:
  • #44
fresh_42 said:
It was a machine that informed the officer.
Yes, the launch was not left to the machine. (It could have been; no AI needed.)
fresh_42 said:
Simply replace the machine from the early 80s with AI nowadays.
Nobody is doing that because everybody knows (at least the experts) that AI is not reliable enough.
fresh_42 said:
We continuously remove the human factor and let machines make decisions for us. Have a look at a standard cockpit of an airliner these days. And the 737 MAX incident isn't that far in the past! Some fighter jets and helicopters can't even be operated without software!
So what is your point? Would we be better off with more manual controls run by humans?
 
  • #45
jack action said:
So what is your point? Would we be better off with more manual controls run by humans?
We already allow machines to make decisions for us. They are even inevitable in some devices.

My hypothesis is that the number of applications will increase the more advanced AI gets.

My conclusion was: if we had an AI-driven system for automatic responses to potential nuclear first strikes, which might become necessary due to the small time window, and which I do not rule out under my hypothesis, then we would end up in a nuclear war if an incident like the one in 1983 were to repeat itself.

This directly connects the development in the AI sector with a potential real-world danger. The example of 1983 was simply evidence for such a scenario. This example is way closer to a real threat than the comparison of AI with a slide rule.

I don't know where in the US nuclear missiles would be launched from. Say Montana. That's under 9000 km from Moscow. An intercontinental missile can reach about Mach 20; that's around 20 x 300 m/s = 6 km/s. That leaves a time window of 1500 s = 25 min. Are you sure you can rule out that the Russians would ever use an AI to make this decision within that half hour, instead of relying on the chain of command?
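
A quick check of that back-of-envelope arithmetic, using the rough figures above (not precise data):

```python
# Back-of-envelope check of the time window quoted above (rough figures).
distance_km = 9000            # roughly Montana to Moscow
speed_km_per_s = 20 * 0.300   # Mach 20 at ~300 m/s per Mach = 6 km/s
window_s = distance_km / speed_km_per_s
print(window_s, "s =", window_s / 60, "min")  # 1500.0 s = 25.0 min
```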

Edit: I have made another silent assumption: the more we get used to AI, the more we are willing to let machines make decisions. My scenario doesn't directly require the use of an AI since it can be done without artificial intelligence. So this was another hypothesis I relied on in my argument.
 
Last edited:
  • #46
javisot said:
85%? 10%? Why do you invent those percentages that do not reflect the reality of the mathematical community?

It is clear that you are not part of that community.
Thank god!
 
  • #47
Calculators and AI are not comparable.

Calculators do not do anything more than what humans were already doing - adding numbers - a fixed, deterministic process. They just do it faster. That's it.

And they are "supervised" (i.e. a human is at the keyboard and doing sanity checks).


AI is being incorporated to replace the role of humans in tasks, with little or no oversight. AI is being employed to make decisions - a dynamic, non-deterministic process. And it is doing so in a fundamentally different way than humans make decisions. Oftentimes, we don't even know by what logic it reaches its decisions. It is an inscrutable black box.

They are qualitatively different - apples and oranges.
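
To make the contrast concrete, a toy sketch (my own illustration, not a claim about any particular product): the calculator is a fixed function of its inputs, while a generative model typically samples its output from a distribution.

```python
# Toy contrast: a calculator is a fixed function of its inputs, while a
# generative model typically *samples* from a distribution over outputs.
import random

def calculator(a: float, b: float) -> float:
    return a + b  # same inputs -> same output, every time

def toy_generative_model(prompt: str) -> str:
    # Stand-in for sampling from a model's next-token distribution.
    options, weights = ["Paris", "London", "Rome"], [0.7, 0.2, 0.1]
    return random.choices(options, weights=weights)[0]

print(calculator(2, 2), calculator(2, 2))          # 4.0 4.0, always
print(toy_generative_model("Capital of France?"),  # usually the same,
      toy_generative_model("Capital of France?"))  # but not guaranteed
```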
 
  • Like
  • Love
Likes Beyond3D, russ_watters, PeroK and 1 other person
  • #48
DaveC426913 said:
Calculators and AI are not comparable.

Calculators do not do anything more than what humans were already doing - adding numbers - a fixed, deterministic process. They just do it faster. That's it.

And they are "supervised" (i.e. a human is at the keyboard and doing sanity checks).


AI is being incorporated to replace the role of humans in tasks, with little or no oversight. AI is being employed to make decisions - a dynamic, non-deterministic process. And it is doing so in a fundamentally different way than humans make decisions. Oftentimes, we don't even know by what logic it reaches its decisions. It is an inscrutable black box.

They are qualitatively different - apples and oranges.
Your perspective is that of someone who believes AI is a black box; that perspective is obsolete today.

We're not able to predict with arbitrary precision all the responses that ChatGPT will generate for each input, but that doesn't mean that ChatGPT magically constructs responses or that it follows indeterministic processes. That's not how it works.
 
  • #49
Why, PeroK? Do you really think we're talking about black boxes, for example, in the case of ChatGPT?
Do you really think a company would release a machine that works independently and can decide not to do its job?

ChatGPT constructs a single output for each input, the optimal output for that input. It doesn't construct outputs without input, nor does it construct consecutive outputs separated by x seconds.

It's true that, unlike a calculator, ChatGPT can work with natural language, which makes the process of constructing the output more complex. But there's no magic involved, and it doesn't work indeterministically; it's an automatic process.
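
For what it's worth, the deterministic reading of that claim corresponds to greedy (temperature-zero) decoding. A toy sketch (my own illustration; deployed chat services usually sample with a nonzero temperature, so their outputs can vary):

```python
# Toy greedy decoder (my own illustration). With argmax selection the
# mapping from input to output is a fixed function, as described above.
# Deployed services often sample instead, which makes outputs vary.
def greedy_decode(prompt: str, steps: int = 3) -> str:
    vocab = ["the", "cat", "sat", "down"]
    out = []
    for i in range(steps):
        # Stand-in for model logits; argmax picks one token, no randomness.
        logits = [(len(prompt) + out.count(w) + i * j) % 7
                  for j, w in enumerate(vocab)]
        out.append(vocab[max(range(len(vocab)), key=logits.__getitem__)])
    return " ".join(out)

print(greedy_decode("hello"))  # identical on every run
print(greedy_decode("hello"))  # -> the same string again
```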
 
  • #50
javisot said:
Why, PeroK? Do you really think we're talking about black boxes, for example, in the case of ChatGPT?
Do you really think a company would release a machine that works independently and can decide not to do its job?

ChatGPT constructs a single output for each input, the optimal output for that input. It doesn't construct outputs without input, nor does it construct consecutive outputs separated by x seconds.

It's true that, unlike a calculator, ChatGPT can work with natural language, which makes the process of constructing the output more complex. But there's no magic involved, and it doesn't work indeterministically; it's an automatic process.
As a previous poster indicated, your posts are not worthy of a response.
 