Why is 'AI' often so negatively portrayed?

  • Thread starter: n01
  • Tags: AI
Summary
The portrayal of AI often leans negative in popular media, primarily due to societal fears of the unknown and the potential consequences of advanced technology. This negativity is fueled by the ease of creating compelling narratives around villainous AI, which resonate with audiences and reflect historical anxieties about advanced societies displacing less advanced ones. While there are positive representations of AI in fiction, such as benevolent characters in various films and books, these are often overshadowed by dystopian themes. Discussions emphasize the importance of context when evaluating AI's portrayal, as well as the need to consider specific applications and their potential impacts in the real world. Overall, the conversation highlights a complex relationship between AI's fictional depictions and societal perceptions, underscoring the necessity for balanced discourse.
  • #31
ZapperZ said:
Aren't R2D2 and C3PO depictions of AI? Are they portrayed negatively?
Not sure about R2D2 being AI -- I can't understand him well enough to know what (if?) he's thinking!
Culturally, anything that is unfamiliar, new, unknown, and not understood tends to be demonized. This includes aliens, bugs, creatures from the sea, gays and lesbians, computer programs gone mad, people who look different than you, etc... etc. Why shouldn't AI, if it truly has been negatively portrayed, suffer from the same prejudices and ignorance that have been imposed on others?
There's a thread in the sci fi section asking why there are few movies that have mostly or entirely non-human characters. I think the answer is less nefarious: it's because Hollywood has been unable to get any market penetration in the Galactic Republic (though clearly all Star Wars characters are non-human; they were just made to look human). A similar problem exists for robots (they don't buy movie tickets). So I think it is more business practicality than prejudice.

I don't want to debate the mixture of practicality and prejudice for those other groups though...
 
  • #32
UsableThought said:
Nice mention of a film that definitely fits this thread. I forget the computer's "name" - but was it evil, or good? Answer, neither; it was humans who were the bad guys & also the good guys.
I had to look up the plot since it has been a while: it looks to me like the computer was neither good nor evil, but rather was not self-aware enough to realize it was mixing simulation with reality. And the people using it were also more clueless than evil.

I think it is a good example that in today's world, it isn't self-aware computers that are the problem, but rather hacked or buggy or improperly designed ones that are.
 
  • Likes: UsableThought
  • #33
russ_watters said:
In War Games, the NORAD computer was accessed via a phone line. That's obsolete today, but I would think that today the computers controlling nuclear weapons are not on the internet.
Ask Iran if nuclear facilities are immune to computer attacks ...

What some people are worried about concerning AI is: could a computer program - one able to re-program itself or create other programs by itself - build such a program, even with a «good intention» behind producing it?
 
  • #34
jack action said:
Ask Iran if nuclear facilities are immune to computer attacks ...

What some people are worried about concerning AI is: could a computer program - one able to re-program itself or create other programs by itself - build such a program, even with a «good intention» behind producing it?

I see no limiting principle that would let us answer no to that question. It has to be yes.

A better question is: will machines replace biological life as the next major step in evolution? I'm sure that dinosaurs (if they could think) would fear being overtaken by mammals. The world could benefit from AI life, but Homo sapiens would not. The conflict here is the word "we." Does "we" include us plus our creations? Or is it us versus our creations?
 
  • #35
anorlunda said:
I'm sure that dinosaurs (if they could think) would fear being overtaken by mammals.
Mammals did not «overtake» dinosaurs; they survived an event (choose your theory) that dinosaurs didn't. In fact, dinosaurs are not really extinct, as birds are considered a type of dinosaur.

IMHO, life as a whole is extremely difficult to destroy, much more so than people often imagine.
anorlunda said:
The world could benefit from AI life, but Homo sapiens would not.
This brings us back to the original OP: Why do you think AI has to destroy humans, and why would AI necessarily be better for the world?

I really don't understand when people see themselves as the enemy of life and believe that the world would be a better place without them in it.
 
  • Likes: russ_watters
  • #36
jack action said:
This brings us back to the original OP: Why do you think AI has to destroy humans, and why would AI necessarily be better for the world?

Do you think humans would be content being the second most intelligent beings on Earth? How do humans treat dolphins?
War sounds inevitable. Superior beings would see human presence as a threat. But this talk about beings is contrary to my point in #27.
 
  • #37
anorlunda said:
Do you think humans would be content being the second most intelligent beings on Earth?
What makes you think AI would be smarter than human beings? (How do we define «smarter»?)
anorlunda said:
How do humans treat dolphins?
AFAIK, there is no war between humans and dolphins.
anorlunda said:
War sounds inevitable. Superior beings would see human presence as a threat.
So do human beings see ants and dolphins as a threat?

Which brings up another question: Are lions smarter than gazelles because they seek to kill them? I know, it's a straw man argument :biggrin:, but linking intelligence with destruction is an argument that I don't understand. If that were the case, then I must be a terrible human being, as I don't seek to destroy every life form that I meet (or maybe I'm not smart enough? :)):oops::frown::woot:o0)).
 
  • Likes: Drakkith
  • #38
russ_watters said:
In War Games, the NORAD computer was accessed via a phone line. That's obsolete today, but I would think that today the computers controlling nuclear weapons are not on the internet.

Nuclear weapons are not controlled by computers in such a way as to allow a completely remote launch. All ICBM launches have to be initiated locally by two officers sitting in a launch facility somewhat near the silos. You'd have to physically re-wire the entire system to allow for a completely remote launch.

Sub-launched missiles are even more disconnected from a remote launch. Subs are manned craft which can't even be communicated with unless they purposely trail a huge antenna behind them or float an antenna on a buoy.

Air-dropped/launched weapons are similar to the subs except that the aircraft is easier to communicate with. Still, like a sub, the aircraft cannot be controlled remotely at all (for now at least), so there is no way to launch a nuclear weapon remotely. Heck, you need hundreds of people just to get the aircraft ready for takeoff and to load the weapons in the first place.

anorlunda said:
Do you think humans would be content being the second most intelligent beings on Earth? How do humans treat dolphins?
War sounds inevitable. Superior beings would see human presence as a threat.

The wants and needs of an AI truly superior to humans are impossible to predict right now. Perhaps it would be content to get lost in its own thoughts as it takes in data from the internet. Perhaps it would choose to completely ignore us and go on its merry way. Perhaps it would see us as children and decide it is morally wrong to do us any harm. Who knows? I think it's important to keep in mind that human beings think the way that we do because evolution drove us to be this way. It was beneficial given the conditions we evolved under. The same is not true for an AI. The conditions will be very different and there is little reason that I can see to think that conflict is a likely outcome.
 
Last edited:
  • #39
n01 said:
Are people just afraid of what AI might entail?
Perhaps, but still .... I consider
 
Last edited:
  • Likes: symbolipoint and n01
  • #40
Drakkith said:
The wants and needs of an AI truly superior to humans are impossible to predict right now. Perhaps it would be content to get lost in its own thoughts as it takes in data from the internet. Perhaps it would choose to completely ignore us and go on its merry way. Perhaps it would see us as children and decide it is morally wrong to do us any harm.

"Wants and needs of an AI truly superior to humans" - plot driver (theme?) of William Gibson's Neuromancer.

(And yet another fictional representation of a non-evil AI, contrary to the OP's initial supposition.)
 
  • #41
At the risk of possibly being off topic, here's a link to a paper that might serve as some background reading.

A video about the paper:


And a link to the actual paper:
https://arxiv.org/pdf/1606.06565.pdf
 
  • #42
Some more possible background videos from Computerphile:
(These might not address the OP's question directly, but are at least indirectly relevant.)

At the very least they are fun topics to ponder.
 
Last edited:
  • #43
It appears we may be outsourcing science to AI before long, another "Brave New World" of technology.
http://www.sciencemag.org/news/2017/07/new-breed-scientist-brains-silicon
"I want to be very clear," says Zymergen CEO Joshua Hoffman, heading off a persistent misunderstanding. "There is a human scientist in the loop, looking at the results and reality checking them." But for interpreting data, generating hypotheses, and planning experiments, he says, the ultimate goal is "to get rid of human intuition."
 
  • #45
I think it's simply because we're on the precipice of a mind-boggling social disruption, but we haven't quite gone over it yet. It's simply new and untested. It has the potential for destruction on an unimaginable scale, or it could ferry us into a new golden age. Humans had the same reservations about unlocking the power of the atom. The main horror is that we don't know where the major breakthrough will come from, and we don't like being out of control. It's understood that a truly intelligent machine could outsmart every banker and investor in the world and have total control over the stock market before we can even notice.

War is an even scarier proposition. If two advanced states end up warring, the AI race will heat up. It's a paradigm-shifting technology, and the side that gets there first will overwhelm everyone else. If Hitler had figured out the bomb before us, I'm not sure the Allies would still have won the war. It's a terrifying thought that we didn't get there first by very much, but doing so completely changed the world order.

I've thought a bunch about the long-term effects of AI on a planet. I've come to the conclusion that AI will be the masters of the universe. If we continue to build benign AI, we will become more and more dependent on it. Over generations, it'll just take on a more and more important role in society. It'll control the economy, entertain us, serve us, and shape our society. As society gets more complex, the need for humans to work will become less and less. Humans and AI will at first work together, but eventually the work will get too complex for humans and the AI will take over. There is a history of this. There are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up never working much at all: humans and dogs. I think we'll eventually become more like pets to god-like machines. I think we'll be perfectly okay with that. Most humans currently believe that we are subservient to one or more gods.
 
  • Likes: anorlunda
  • #46
gleem said:
Elon Musk warns governors to regulate AI development before it is too late.

https://www.inverse.com/article/342...ernors-ai-is-fundamental-risk-to-civilization
When I hear the sayings of Elon Musk and the like, I'm always wondering: Is he overestimating AI or underestimating humankind?
newjerseyrunner said:
There are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up never working much at all: humans and dogs. I think we'll eventually become more like pets to god-like machines.
There are a lot of people that think dogs have «won» that «battle»: Humans treat dogs like gods and fulfill their every need.
 
  • #47
jack action said:
There are a lot of people that think dogs have «won» that «battle»: Humans treat dogs like gods and fulfill their every need.
Actually, that's the opposite. The gods are the ones that provide to the faithful. In your analogy, we are the gods and they are our worshipers.
 
  • #48
Gods-Dogs, an anagram?
 
  • Likes: jack action
  • #49
newjerseyrunner said:
Actually, that's the opposite. The gods are the ones that provide to the faithful. In your analogy, we are the gods and they are our worshipers.
I'm not sure I agree. If we're the gods, why aren't they following us around, picking up our feces?
 
  • Likes: 1oldman2
  • #50
russ_watters said:
I'm not sure I agree. If we're the gods, why aren't they following us around, picking up our feces?
I'm pretty sure the sewage technology up until very recently was "dump it in the ocean and let Poseidon/God deal with it."
 
Last edited:
  • #51
newjerseyrunner said:
I think it's simply because we're on the precipice of a mind-boggling social disruption, but we haven't quite gone over it yet. It's simply new and untested. It has the potential for destruction on an unimaginable scale, or it could ferry us into a new golden age. Humans had the same reservations about unlocking the power of the atom. The main horror is that we don't know where the major breakthrough will come from, and we don't like being out of control. It's understood that a truly intelligent machine could outsmart every banker and investor in the world and have total control over the stock market before we can even notice.

War is an even scarier proposition. If two advanced states end up warring, the AI race will heat up. It's a paradigm-shifting technology, and the side that gets there first will overwhelm everyone else. If Hitler had figured out the bomb before us, I'm not sure the Allies would still have won the war. It's a terrifying thought that we didn't get there first by very much, but doing so completely changed the world order.

I've thought a bunch about the long-term effects of AI on a planet. I've come to the conclusion that AI will be the masters of the universe. If we continue to build benign AI, we will become more and more dependent on it. Over generations, it'll just take on a more and more important role in society. It'll control the economy, entertain us, serve us, and shape our society. As society gets more complex, the need for humans to work will become less and less. Humans and AI will at first work together, but eventually the work will get too complex for humans and the AI will take over. There is a history of this. There are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up never working much at all: humans and dogs. I think we'll eventually become more like pets to god-like machines. I think we'll be perfectly okay with that. Most humans currently believe that we are subservient to one or more gods.

@newjerseyrunner, for the scenario you present to come to fruition, are you not assuming that progress in AI development will proceed more or less smoothly? Because, from my vantage point, that is far from obvious. Yes, we have made considerable advances in areas of machine learning, such as neural networks/deep learning, but it is far from clear to me that "strong" AI will necessarily emerge out of deep learning (from what I've read, the most impressive results from deep learning came about more through advances in computing power than from anything particularly groundbreaking in the specific algorithms or theoretical underpinnings, much of which had already been laid out by the early 1990s, if not before then).
 
  • #52
newjerseyrunner said:
I'm pretty sure the sewage technology up until very recently was "dump it in the ocean and let Poseidon/God deal with it."
A stray dog doesn't need a human to live. It will just «work» to find its food and change territory as it gets soiled by its feces (which a «god» will clean up within a certain time). Finding new territories will most likely require fighting with others (offense and defense).

But if the dog «acts» cute enough, it might convince a human being to find the food, clean the territory and do the fighting with others instead of doing it itself. That's one way of answering the question «Who's leading who?» in the human/dog relationship.

Humans might just not be the gods of dogs; they might just have been led (outsmarted?) to believe they were.
 
  • #53
StatGuy2000 said:
@newjerseyrunner, for the scenario you present to come to fruition, are you not assuming that progress in AI development will proceed more or less smoothly? Because, from my vantage point, that is far from obvious. Yes, we have made considerable advances in areas of machine learning, such as neural networks/deep learning, but it is far from clear to me that "strong" AI will necessarily emerge out of deep learning (from what I've read, the most impressive results from deep learning came about more through advances in computing power than from anything particularly groundbreaking in the specific algorithms or theoretical underpinnings, much of which had already been laid out by the early 1990s, if not before then).
The major breakthrough recently has been in how neural networks are connected to each other, but yes, hardware has mostly pushed it along. The thing is that neural networks are perfect for quantum computers. If quantum computers work out, the potential of neural networks explodes.

And no, I'm not saying it'll be steady. In fact, I leave open the possibility that it may not even be our global civilization that does it. We could go all the way back to a dark age and have to start it all over again. I propose that eventually we will get there. It's even possible that an AI god would end up destroying everything, set us back to more dark ages, and start the whole thing over again. But some should be stable enough to last many human generations. And if one is really self-stabilizing, it could live for millions of years as our benign overlord. Those would likely be the dominant beings of the universe, not green men in ships. All of this I find to be a very likely progression for any creature capable of developing technology as advanced as itself, over cosmological timescales.
 
  • #54
Borek said:
n01 said:
Are people just afraid of what AI might entail?
That's my bet.
Mine too.
And AI will be faster, for sure.
And, just like us.



But it's a long way away, IMHO.
If not, I'll find it fascinating to watch.

They won't have the weakness of panic.
 
  • #55
OmCheeto said:
But it's a long way away, IMHO.

Don't be so sure. There are people like me who are working hard to make it happen before my dissertation defense, or actually for my dissertation defense...hopefully.

The reason AI gets a bad rap sometimes is twofold. One, it never delivered what it promised. The idea of AI has been around since the war (and not the Vietnam war), and, to make a gross understatement, it hasn't lived up to the hype. This is the AI insiders' frustration, however, and I think the OP was referring more to the general bad rap AI can get in the popular culture.

The reason for the popular bad rap is simply that people don't understand it. And because they don't understand it, i.e., understand what's "under the hood" of AI architectures, they fear it. Basic fear of the unknown. As far as the average Joe or Joanne out there, a human-derived AI creation might as well be an alien from another galaxy. We know nothing about these aliens and they can be good or bad. But fear is stronger than tolerance, and it's much easier to just say AI is dangerous than to take the time to explore the research that is going on in this field. It's fascinating. I run dozens of computer simulations a day on spiking neuron network populations in order to simulate mammalian cortical processes, in the attempt to develop a tractable architecture we can exploit for advanced information processing capabilities. These networks I deal with are like little children: they're naughty sometimes, they don't cooperate, and they don't make any sense. But they're learning... They're coming along and growing more cooperative with some love and attention. That's the way I look at it. They'll become what we make them become. From there, sure, they may take on a mind of their own, but so what? That's what evolution is all about.
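
(Aside, purely for illustration: the sketch below is a single leaky integrate-and-fire neuron in Python, with invented parameter values. It is only a toy picture of what "spiking" means, not the cortical architecture described above.)

Code (Python):
# Minimal leaky integrate-and-fire (LIF) neuron, purely illustrative.
# All parameter values below are made up for the example.
def simulate_lif(i_input=1.5, v_rest=0.0, v_thresh=1.0, tau_ms=20.0,
                 dt_ms=0.1, t_max_ms=200.0):
    """Return the spike times (in ms) of a single LIF neuron."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max_ms / dt_ms)):
        # The membrane potential leaks toward rest and integrates the input.
        v += (-(v - v_rest) + i_input) * (dt_ms / tau_ms)
        if v >= v_thresh:              # threshold crossed: the neuron "spikes"
            spike_times.append(step * dt_ms)
            v = v_rest                 # reset after the spike
    return spike_times

print(simulate_lif()[:5])  # first few spike times, roughly one every ~22 ms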

You can't stop it. You can't stop what's coming...

Hahhaha
 
  • Likes: OmCheeto
  • #56
DiracPool said:
The reason for the popular bad rap is simply that people don't understand it. And because they don't understand it, i.e., understand what's "under the hood" of AI architectures, they fear it. Basic fear of the unknown. As far as the average Joe or Joanne out there, a human-derived AI creation might as well be an alien from another galaxy. We know nothing about these aliens and they can be good or bad. But fear is stronger than tolerance, and it's much easier to just say AI is dangerous than to take the time to explore the research that is going on in this field.
This point of view speaks more to me. It makes sense that someone who is in the field speaks like that.

What I don't understand is why people like Musk and Hawking speak of it with such fear. Aren't they in the field too (anyway, closer to it than I can be)? I'm curious to hear your thoughts about the speeches of these people.
 
  • #57
jack action said:
What I don't understand is why people like Musk and Hawking speak of it with such fear.

I would say that people who have more means than the average layperson to see both the possible positive and negative outcomes that the current level of research and use of AI can lead to on a global scale will find it natural to point out that we currently are not able to discern the technological preconditions that separate desirable scenarios from undesirable ones, and thus that we are unable to ensure that we do not all end up in one of the really bad scenarios. I am not surprised that knowledgeable people like Musk and Hawking (and many others) consider it prudent to point this out.

Some people seem to evaluate risk in the context of AI (or even in general) by trying to guess or estimate the most likely scenarios, and if these are all desirable scenarios then they apparently find it unnecessary to analyse or even acknowledge the possibility of less likely scenarios. And in the context of AI they do this even when the actual probabilities are very hard to estimate correctly. To me, this is not prudent risk management.
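
(A toy numerical illustration of that point, with completely made-up probabilities and losses: an expected-loss estimate built only from the "likely" scenarios can look very different once a rare catastrophic scenario is included.)

Code (Python):
# Invented numbers, for illustration only.
scenarios = [
    (0.70, 0),       # things go well
    (0.29, 10),      # moderate disruption
    (0.01, 10_000),  # rare catastrophic outcome
]

likely_only = sum(p * loss for p, loss in scenarios[:2])   # ignores the tail
including_tail = sum(p * loss for p, loss in scenarios)    # includes it

print("Expected loss, likely scenarios only:", likely_only)     # 2.9
print("Expected loss, including the tail:   ", including_tail)  # 102.9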

Also, the use of the words "such fear" sounds to me like an attempt to portray Musk and Hawking's statements as the result of a phobia (irrational fear). However, if fear in this context is taken to mean a rational perception of danger, then I will consider it an appropriate label.
 
  • #58
It's because it is a huge game changer. For good or for bad.

For instance, say AI becomes so advanced that it can do most human tasks. This could lead to:
1) Massive unemployment/social strife

Or

2) Utopia, in which the machines/programs do and build everything for us and, with the cost of items reduced to the cost of raw materials, no one would need to work anymore.

Or 1) followed by 2).

Or none of the above.

These are just some examples of what could happen. I'm sure there are more.

Another possibility is that the rules would change. Different societies in history have had different economic systems. Hunter-gatherers had a different system of trade and followed different economics. The current economic theories of society are based on the post-Industrial Revolution period. We can't assume that a post-AI-revolution period would follow similar economic laws, considering just how drastic an impact runaway AI development would have on human society.
 
  • #59
Here's a case where a little more AI wouldn't have hurt.
http://www.bbc.com/news/technology-40642968
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.

"Steps are our best defence against the Robopocalypse," commented Peter Singer - author of Wired for War, a book about military robotics.
 
  • #60
1oldman2 said:
Here's a case where a little more AI wouldn't have hurt.
http://www.bbc.com/news/technology-40642968
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.

"Steps are our best defence against the Robopocalypse," commented Peter Singer - author of Wired for War, a book about military robotics.

We were promised flying cars, but we got better than that. We got the internet.
 
  • Likes: 1oldman2
