Why is 'AI' often so negatively portrayed?

  • Thread starter: n01
  • Tags: AI
In summary, people portray AI negatively because it sells well and people are afraid of what it might entail.
  • #1
n01
I just have a simple question, which a psychologist could probably answer better; but here goes...

Why is AI so badly portrayed by people? I'm kind of tired of all the negativity surrounding AI, which seems entirely unwarranted.

Why do people, often well educated and intelligent people/scientists, portray AI in such a negative manner? I'm talking about movies, books, and sci-fi in general. Is it that portraying AI negatively sells well? Are people just afraid of what AI might entail? I rarely hear people talking about the positives of having general AI, and there are arguably more positive than negative things to say about functional general AI.

So what is all the hot air and speculation about AI being detrimental or doom-laden? If I become irrelevant within my lifetime, then all the better that I get to witness an interesting future in which I get to participate; otherwise, can all the naysayers and neo-Luddites remain calm and quiet until that day arrives?
 
  • #2
n01 said:
Are people just afraid of what AI might entail?

That's my bet.

n01 said:
If I become irrelevant within my lifetime, then all the better that I get to witness an interesting future

My understanding is that some people are afraid of witnessing this interesting future from quite an uninteresting position: being poor, hungry and unemployed. Hard to say whether they are right or not. I admit I have long since stopped believing in a bright, happy future for everyone; it has already been predicted several times in my lifetime and so far has not materialized.
 
  • #3
Wait a second. Perry Mason would say "Objection! Assumes facts not in evidence!"

AI is portrayed both positively and negatively. If your question is "Why is it portrayed more often negatively", I'd like to see some proof of that. If your question is "When you don't consider the times it's portrayed positively, it's always portrayed negatively", I would agree with that, but it's kind of true by construction.
 
  • #4
Following up on @Vanadium 50's comment - isn't context needed to even discuss this? Beginning with fantasy vs. real-world, say?

AI presented in sci fi movies & books is pure fantasy; it's part of the genre. So although it might bother you personally, objecting to a villainous AI in a sci fi movie makes as much sense as objecting to a villainous monster in a sci fi movie. And if you still wanted to tilt at this particular windmill, you'd need to somehow attempt a survey of positive vs. negative vs. neutral instances in movies & books - pretty much an impossible task given the size of the literature & the impossibility of catching all instances. Moreover, it's easy to recall positive or neutral instances, or even ambivalent instances with both positive and negative implications - e.g. the galaxy-wide "General Information" AI in Stars in My Pocket Like Grains of Sand.

And on the other hand, if you're going to talk about real-world possibilities for AI, context becomes hugely important: What specific application? Who is talking about this application? What are they saying? Further, if you are going to discuss all this in a serious way, you'd have to be willing to acknowledge that all technologies have the potential for doing harm; AI is not exempted.
 
  • #5
Regarding whether AI is depicted positively or negatively: that's a highly culture-dependent question. For the most part, I think most readers here are of Western origin. I can only speak for myself and mention some notable examples of AI in cinema: 2001, the Terminator series, The Matrix, Blade Runner, A.I. (although A.I. takes a rather positive view of AI in general). As for books, I am only familiar with works like Solaris by Stanislaw Lem. I never read much Isaac Asimov, but the man went so far as to try formulating rules of ethical conduct for artificial intelligence. I've read only Rendezvous with Rama by Arthur C. Clarke, which had little mention of AI in it.

I think even some quick, informal research on Google will show results portraying AI, in the minds of ordinary folk, as a potential threat to humans.

Instead of nitpicking, I'm going to say that in general AI is viewed with some uncertainty, and that uncertainty is a breeding ground for speculation and hype.

Elon Musk, who is idolized by many, has his own serious concerns about AI. Before I go off on a tangent: is it agreeable that, in general, AI is viewed with some uncertainty, to put it mildly?
 
  • #6
n01 asks:
Why is AI so badly portrayed by people? I'm kind of tired of all the negativity surrounding AI, which seems entirely unwarranted.

Why do people, often well educated and intelligent people/scientists, portray AI in such a negative manner? I'm talking about movies, books, and sci-fi in general. Is it that portraying AI nega...

H.A.L. 9000
 
  • #7
symbolipoint said:
H.A.L. 9000

Which movie? In the first HAL was evil, in the sequel HAL was good.
 
  • #8
UsableThought said:
Which movie? In the first HAL was evil, in the sequel HAL was good.
The original movie, 2001: A Space Odyssey
 
  • #9
symbolipoint said:
The original movie, 2001: A Space Odyssey

You're missing my point. You gave a negative example; I pointed out there is a positive example to counterbalance it. This goes to the OP's unsupported claim that AI is portrayed negatively in sci-fi w/out significant exception.
 
  • #10
UsableThought said:
You're missing my point. You gave a negative example; I pointed out there is a positive example to counterbalance it. This goes to the OP's unsupported claim that AI is portrayed negatively in sci-fi w/out significant exception.

I think I understand your point. However, for some peculiar reason, people seem to fear the unknown; case in point, AI.

My question is twofold. Is there really anything worth speculating about? And, can anything meaningful be said without hyperbole and fear mongering about AI in general?
 
  • #11
n01 said:
My question is twofold. Is there really anything worth speculating about? And, can anything even be said without hyperbole and fear mongering about AI in general?

If your goal is to rescue AI, I'd suggest avoiding criticism of depictions in sci fi. As I tried to point out, there are many wild & crazy villains in sci fi; AI gone mad is just one of them. You're not going to change this.

What really matters is how AI would be implemented in the real world in the near term & whether people object to specific implementations. That's the only territory where your effort to encourage a positive view could make a difference. But if you go that route, you will then honestly have to look at a specific implementation & the possible consequences. This gets very messy, like any social/technological discussion, of course.

I suppose one thing you could do, if you have enough background in AI, is propose an Insights article in which you cite positive examples of hypothetical or likely applications?
 
  • #12
UsableThought said:
If your goal is to rescue AI, I'd suggest avoiding criticism of depictions in sci fi. As I tried to point out, there are many wild & crazy villains in sci fi; AI gone mad is just one of them. You're not going to change this.

What really matters is how AI would be implemented in the real world in the near term & whether people object to specific implementations. That's the only territory where your effort to encourage a positive view could make a difference. But if you go that route, you will then honestly have to look at a specific implementation & the possible consequences. This gets very messy, like any social/technological discussion, of course.

I suppose one thing you could do, if you have enough background in AI, is propose an Insights article in which you cite positive examples of hypothetical or likely applications?
I'll just leave the open-ended question for now in case anyone more knowledgeable comes around and happens to stumble on my wanderings, but I do appreciate the advice:

Can anything meaningful be said without hyperbole and fear mongering about AI in general?
 
  • #13
symbolipoint said:
H.A.L. 9000

On the other side, C3PO and R2D2. Tom Servo and Crow. The Jetsons' Rosie. R. Daneel Olivaw.
 
  • #14
There are plenty of examples of AI being represented in a positive light; two that jump to mind are Data from Star Trek and Vision from the Avengers films (who counterbalances the evil AI Ultron). Even in some of the negative examples listed above, like Terminator or The Matrix, there were AIs who were allies, like the reprogrammed Arnie or the Oracle.

I think there are a few reasons why evil AIs are a common enemy in fiction:

1) It's an easy enemy. Everyone can understand it, and it allows you to have a lot of violence in visual media without becoming an adult-rated film. It also takes some of the moral ambiguity out of things as our heroes mow down waves of machinery rather than flesh-and-blood beings.

2) Human history has seen plenty of examples of more advanced societies displacing, enslaving and otherwise destroying less advanced ones. So there's always an undercurrent of that fear in a lot of our fiction; when AIs are your antagonists they can easily be cast in the role of the more advanced aggressor (the same can be said of aliens).

3) It's "realistic" in the sense that a thinking machine appears to be more plausible than demons, monsters or aliens. We're living in an age of increasingly sophisticated thinking machines so it plugs into the current narrative.

As to whether or not we should be afraid, I think it's the same as with anything: you have to be wary of the possible negative consequences, whether it's being economically outcompeted, accidental war crimes as drones misinterpret orders, or a straight-up Terminator situation.
 
  • #15
UsableThought said:
You're missing my point. You gave a negative example; I pointed out there is a positive example to counterbalance it. This goes to the OP's unsupported claim that AI is portrayed negatively in sci-fi w/out significant exception.
I think there is a visibility problem kind of inherent to human-created media, in that it is human-centric. As a result, heroes are almost always human whereas villains may be AI. But if people put a bit more thought into it, with that in mind, I'm sure they can think of numerous examples of benevolent AI as minor characters.
 
  • #16
russ_watters said:
I think there is a visibility problem kind of inherent to human-created media, in that it is human-centric. As a result, heroes are almost always human whereas villains may be AI. But if people put a bit more thought into it, with that in mind, I'm sure they can think of numerous examples of benevolent AI as minor characters.

Easily. This comes to me immediately: Heinlein's The Moon Is A Harsh Mistress, where "Mike" is a huge computer running the moon; it becomes sentient and is the hero's best friend.

Or what about "Jarvis" in the Iron Man movies? There may be one in the series I've missed where Jarvis is a villain, but in the three or four IM/Avengers movies I've seen, he's Stark's best buddy and definitely a "good guy."

Or turning to AI as embodied in androids: The "synthetics" in the Alien movies started out as evil; then with Bishop in the second movie became good; then evil in the third movie with a different Bishop; then good again in the fourth movie. I haven't seen the latest prequel, but I understand even in that there is both a good synthetic & a bad synthetic. Same with Star Trek: The Next Generation: we had good Data and bad Lore. In Voyager, AI as embodied in the holographic entity known as the Doctor was also good. And in the Matrix movies, the AI ("programs") started out as all bad (agents; the AI system overall); then we started seeing good instances as of the second movie, e.g. the Oracle was revealed to have been a program.
 
  • #17
Vanadium 50 said:
Wait a second. Perry Mason would say "Objection! Assumes facts not in evidence!"

AI is portrayed both positively and negatively. If your question is "Why is it portrayed more often negatively", I'd like to see some proof of that. If your question is "When you don't consider the times it's portrayed positively, it's always portrayed negatively", I would agree with that, but it's kind of true by construction.

Well, I would think it was somewhat connected to a comparison of dystopian vs. non-dystopian films.

Here's a list of top 500 dystopian films:

https://en.wikipedia.org/wiki/List_of_dystopian_films

Here's a list of non-dystopian films:
http://www.imdb.com/list/ls053734935/

Exercise for the reader: How many of each feature AI?

There are 17; 10 of them are Star Trek.

Now I am not claiming either of these lists is exhaustive. But that's a pretty stark difference.

Here is a list of films specific to A.I.
https://www.theguardian.com/culture...-20-artificial-intelligence-films-in-pictures

But it's pretty selective. I haven't gone through and checked how many I would consider dystopian.

We could bring data to this question. :D

-Dave K
 
  • #18
n01 said:
Can anything meaningful be said without hyperbole and fear mongering about AI in general?
Let's start by answering this question: why would anyone think AI can be a good thing?

Right now we have machines where, for every action, a reaction has been programmed by a human being. Although - with the amazing calculation and storage power of today's computers - it may mimic intelligence, that is not AI.

The problem with that concept is this: what will the machine do when it faces a situation not anticipated by the programmers? That can be scary, and one solution that seems promising for this problem is AI.

AI is a machine that can learn from its actions, meaning it will do things in the future that weren't anticipated by the programmers. This is great because, like a human faced with an unlikely situation, it can choose to do the «right thing» instead of «what is most probably the right thing» as given by a set of probabilities and statistics, for example. There you have the positive.
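To make the contrast concrete, here is a minimal sketch (my own toy illustration in Python, not anyone's actual system) of the difference between a machine whose every reaction is pre-programmed and one that adjusts its behaviour from the feedback it receives:

```python
import random

# 1) Every reaction pre-programmed: the machine can only handle what was anticipated.
FIXED_RULES = {"obstacle ahead": "stop", "path clear": "advance"}

def fixed_machine(situation):
    return FIXED_RULES.get(situation, "???")  # unanticipated input -> undefined behaviour

# 2) A very simple learner: it keeps an estimated value for each action and nudges
# that estimate toward the reward it actually receives, so its future choices are
# not spelled out explicitly by the programmer.
values = {"stop": 0.0, "advance": 0.0}

def learning_machine(reward_of):
    if random.random() < 0.1:                      # occasionally explore
        action = random.choice(list(values))
    else:                                          # otherwise pick the best estimate so far
        action = max(values, key=values.get)
    reward = reward_of(action)
    values[action] += 0.1 * (reward - values[action])  # incremental update from experience
    return action

print(fixed_machine("wet floor"))  # -> "???": the programmers never anticipated this

# Hypothetical reward signal, for illustration only.
for _ in range(100):
    learning_machine(lambda a: 1.0 if a == "advance" else -1.0)
print(values)  # "advance" ends up with the higher estimated value
```

The second machine ends up favouring whatever the reward signal happened to encourage, which is exactly where the «right thing» question below comes in.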

But once you think about it, the next question is: what is the «right thing»? How will a machine, on its own, identify the «right thing» to do? It is reprogramming itself, by itself. Will it correctly interpret the original intentions of the programmers? Will it wander off on its own, doing other things not originally planned?

The catastrophic scenarios of today are mainly based on the idea that we wouldn't be able to turn off such a machine if it goes crazy, or that «crazy» will sneak up on us once we have given such machines full control over our lives. I think such scenarios are highly pessimistic and unrealistic. But how to program a program that reprograms itself is still a valid and unanswered question where all aspects must be evaluated.
 
  • #19
n01 said:
So what is all the hot air and speculation about AI being detrimental or doom-laden? If I become irrelevant within my lifetime, then all the better that I get to witness an interesting future in which I get to participate; otherwise, can all the naysayers and neo-Luddites remain calm and quiet until that day arrives?

AI is not considered evil or malevolent in general; like fire or nuclear energy, it is recognized as useful and beneficial, but considered dangerous if not properly managed. In fact, the likes of Stephen Hawking consider it one of the likely causes of the demise of humanity if we take a cavalier attitude toward its implementation. The 2017 Asilomar conference on AI gave 23 principles to be applied to prevent an unintentional AI catastrophe.
 
  • #20
Here's the thought that I always have:

Every awful thing that's been done in history has been done for irrational reasons. (Greed, jealousy, posturing, religious zealotry, etc.) So my question is, are machines rational? If so, why would we be worried?

-Dave K
 
  • #21
dkotschessaa said:
Every awful thing that's been done in history has been done for irrational reasons. (Greed, jealousy, posturing, religious zealotry, etc.) So my question is, are machines rational? If so, why would we be worried?

Eugenics? Not irrational, but nonetheless awful.
 
  • #22
gleem said:
Eugenics? Not irrational, but nonetheless awful.

On what basis can you judge it "awful" but not "irrational"?
 
  • #23
Can't we make a parallel between AI and immigration? It can be portrayed as a threat or as a blessing. Threats make much better drama.

I prefer a very broad definition of AI. I think that machines have been getting smarter for a long time. For example, James Watt's 1784 steam engine was self-regulating via the flyball governor in the drawing below. Between then and now, what is our verdict? Has AI been harmful or beneficial?
[Image: Boulton & Watt steam engine, 1784, showing the flyball governor]
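For anyone unfamiliar with the flyball governor: it is a purely mechanical negative-feedback loop. Here is a rough toy model of the idea in Python (my own made-up numbers, nothing to do with Watt's actual dynamics):

```python
# Spinning too fast -> the governor closes the throttle; too slow -> it opens it.
# The engine then regulates its own speed with no operator in the loop.
setpoint = 100.0          # desired shaft speed (arbitrary units)
speed = 60.0              # starting speed

for step in range(40):
    error = setpoint - speed
    throttle = min(1.0, max(0.0, 0.5 + 0.01 * error))  # governor opens/closes the valve
    speed += 5.0 * throttle - 0.03 * speed             # crude engine response plus load

print(round(speed, 1))  # holds roughly steady just below the setpoint
```

It never quite reaches the setpoint (the classic offset of proportional-only control), but it needs no intelligence at all to keep the engine from running away.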
 
  • #24
anorlunda said:
I prefer a very broad definition of AI. I think that machines have been getting smarter for a long time. For example, James Watt's 1784 steam engine was self-regulating via the flyball governor in the drawing below. Between then and now, what is our verdict? Has AI been harmful or beneficial?
Interesting take, and definitely a valid question: what is AI? I think it is generally taken to mean self-awareness or passing the Turing test (are we that arrogant?), but I don't think self-awareness or acting convincingly human is a major component of what constitutes AI or of what could be dangerous. Reacting to their environment? The steam engine did that. Computers that learn? They exist too. What makes a computer dangerous is when it controls a dangerous weapon and isn't programmed well. The computer controlling a chilled water plant? There is just no way for it to destroy the world, no matter how smart it is.
 
  • #25
russ_watters said:
The computer controlling a chilled water plant? There is just no way for it to destroy the world, no matter how smart it is.

Unless it's "smart," i.e. networked; in which case it can get hacked and taken over & along with an army of its fellows start doing bad things to more important computers . . . which apparently is happening these days.
 
  • #26
anorlunda said:
Threats make much better drama.

That was my exact thought as well. Writers will go to great lengths to up the drama of even mundane situations, let alone the fantastic and futuristic.
 
  • #27
Neural networks are considered AI. The most everyday application of neural nets today that I'm aware of is speech recognition. That AI is useful, but far from what the word intelligent implies.

In #23, I looked back in time. Looking forward in time to the forecasted 2043 arrival of the technological singularity: I think that we could get all the way to the singularity, including all the risks and rewards, without ever coming close to a machine with something comparable to human-like intelligence. IMO, comparisons of AI to humans are a red herring in the debate about future risks and benefits. Comparison of machines to people is another way to make better drama.

Consider scientific research. I can visualize (a) a hypothesis generator based on symbolic logic, and (b) a hypothesis-screening app that searches the entire archive of experimental raw data from all sources, looking for serendipitous evidence that might be relevant to the hypothesis. Together, those apps would greatly magnify the productivity of a human researcher, but there is no hint of human-like intelligence in those apps. It is precisely that kind of magnification that the technological singularity envisions.

Similarly, I can envision an AI app that creates better AI apps via hypothesis and experiment, or genetic algorithms and experiment. That creates positive feedback using nothing that resembles human intelligence.
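A minimal sketch of that positive-feedback loop, assuming nothing more than a fitness test standing in for the «experiment» (the target values and parameters below are purely hypothetical):

```python
import random

def fitness(design):
    """Hypothetical 'experiment': designs closer to an unknown optimum score higher."""
    target = [0.3, -1.2, 0.8]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def mutate(design, scale=0.1):
    """Produce a slightly altered copy of a design."""
    return [d + random.gauss(0, scale) for d in design]

# Start from random candidate "designs" and let selection plus mutation do the work.
population = [[random.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                       # keep the best designs
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(max(population, key=fitness))  # best design found so far
```

The designs improve generation after generation, yet nowhere in the loop does anything resembling human-like understanding appear.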

If we could eliminate the word intelligence from AI, we could have better debates. Like the phrase "Big Bang" in cosmology, and "observable" in QM, it is an unfortunate choice of words that acts like a magnet, attracting people's thoughts down the wrong path.
 
  • #28
Aren't R2D2 and C3PO depictions of AI? Are they portrayed negatively?

Culturally, anything that is unfamiliar, new, unknown, and not understood tends to be demonized. This includes aliens, bugs, creatures from the sea, gays and lesbians, computer programs gone mad, people who look different than you, etc. Why shouldn't AI, if it truly has been negatively portrayed, suffer from the same prejudices and ignorance that have been imposed on others?

Zz.
 
  • #29
UsableThought said:
Unless it's "smart"; i.e. networked; in which case it can get hacked and taken over & along with an army of its fellows start doing bad things to more important computers . . . which apparently is happening these days.
Being smart and being networked are totally different things, and due to vulnerability to hacking, that's exactly why some of the most important (and least important?) computers aren't connected to the internet -- including those chiller optimization systems. Not everyone takes good enough care, though; the entry point for Target's hack was, if I remember correctly, through the HVAC system.

In War Games, the NORAD computer was accessed via a phone line. That's obsolete today, but I would think the computers controlling nuclear weapons are likewise not on the internet.

Either way, clearly being networked has a more immediate downside than enabling an evil AI to communicate with its friends.
 
  • #30
russ_watters said:
In War Games, the NORAD computer was accessed via a phone line.

Nice mention of a film that definitely fits this thread. I forget the computer's "name" - but was it evil, or good? Answer, neither; it was humans who were the bad guys & also the good guys.
 
  • #31
ZapperZ said:
Aren't R2D2 and C3PO depictions of AI? Are they portrayed negatively?
Not sure about R2D2 being AI -- I can't understand him well enough to know what (if?) he's thinking!
ZapperZ said:
Culturally, anything that is unfamiliar, new, unknown, and not understood tends to be demonized. This includes aliens, bugs, creatures from the sea, gays and lesbians, computer programs gone mad, people who look different than you, etc. Why shouldn't AI, if it truly has been negatively portrayed, suffer from the same prejudices and ignorance that have been imposed on others?
There's a thread in the sci fi section asking why there are few movies that have mostly or entirely non-human characters. I think the answer is less nefarious: it's because Hollywood has been unable to get any market penetration in the Galactic Republic (though clearly all Star Wars characters are non-human; they were just made to look human). A similar problem exists for robots (they don't buy movie tickets). So I think it is more business practicality than prejudice.

I don't want to debate the mixture of practicality and prejudice for those other groups though...
 
  • #32
UsableThought said:
Nice mention of a film that definitely fits this thread. I forget the computer's "name" - but was it evil, or good? Answer, neither; it was humans who were the bad guys & also the good guys.
I had to look up the plot since it has been a while: it looks to me like the computer was neither good nor evil, but rather was not self-aware enough to realize it was mixing simulation with reality. And the people using it were also more clueless than evil.

I think it is a good example that in today's world, it isn't self-aware computers that are the problem, but rather hacked, buggy, or improperly designed ones.
 
  • #33
russ_watters said:
In War Games, the NORAD computer was accessed via a phone line. That's obsolete today, but I would think the computers controlling nuclear weapons are likewise not on the internet.
Ask Iran if nuclear facilities are immune to computer attacks ...

What some people are worried about concerning AI is this: could a computer program - able to reprogram itself or create other programs by itself - build such a program, even if there was a «good intention» behind producing it?
 
  • #34
jack action said:
Ask Iran if nuclear facilities are immune to computer attacks ...

What some people are worried about concerning AI is this: could a computer program - able to reprogram itself or create other programs by itself - build such a program, even if there was a «good intention» behind producing it?

I see no limiting principle that would let us answer no to that question. It has to be yes.

A better question is: will machines replace biological life as the next major step in evolution? I'm sure that dinosaurs (could they think) would fear being overtaken by mammals. The world could benefit from AI life, but Homo sapiens would not. The conflict here is in the word "we." Does "we" include us plus our creations, or is it us versus our creations?
 
  • #35
anorlunda said:
I'm sure that dinosaurs (could they think) would fear being overtaken by mammals.
Mammals did not «overtake» dinosaurs; they survived an event (choose your theory) that dinosaurs didn't. In fact, dinosaurs are not really extinct, as birds are considered a type of dinosaur.

IMHO, life as a whole is extremely difficult to destroy, much more so than people often imagine.
anorlunda said:
The world could benefit from AI life, but Homo sapiens would not.
This brings us back to the original OP: why do you think AI has to destroy humans, and why would AI necessarily be better for the world?

I really don't understand it when people see themselves as the enemy of life and think the world would be a better place without them in it.
 