Why is 'AI' often so negatively portrayed?

  • Thread starter: n01
  • Tags: AI

Summary
The portrayal of AI often leans negative in popular media, primarily due to societal fears of the unknown and the potential consequences of advanced technology. This negativity is fueled by the ease of creating compelling narratives around villainous AI, which resonate with audiences and reflect historical anxieties about advanced societies displacing less advanced ones. While there are positive representations of AI in fiction, such as benevolent characters in various films and books, these are often overshadowed by dystopian themes. Discussions emphasize the importance of context when evaluating AI's portrayal, as well as the need to consider specific applications and their potential impacts in the real world. Overall, the conversation highlights a complex relationship between AI's fictional depictions and societal perceptions, underscoring the necessity for balanced discourse.
  • #91
Every time I dip my toes in a PF General Discussions thread, I wind up regretting it.
 
  • Like
Likes gleem and russ_watters
  • #92
https://www.bls.gov/opub/mlr/2016/a...disrupt-occupations-but-it-wont-kill-jobs.htm
But here’s where the Monthly Labor Review (MLR) can provide some needed historical perspective. In the 50th anniversary issue in 1965, Lloyd Ulman included this quote from a former Secretary of Labor: “In the long run, new types of industries have always absorbed the workers displaced by machinery, but of late, we have been developing new machinery at a faster rate than we have been developing new industries…At the same time we must ask ourselves, is automatic machinery, driven by limitless power, going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress?” This was from the Secretary of Labor in 1927, "Puddler” Jim J. Davis.

In short, there has long been a national worry about technology-driven unemployment, and that fear seems to spike either in recessions or in periods of particularly robust innovation. But to date that fear has always been misplaced. And despite the claims of current techno-dystopians, these claims will be wrong going forward. We can rest easy that in 25 years the unemployment and labor force participation rates will be similar to today’s.[emphasis mine]
The bolded part is the flaw - the self-contradiction - in the pessimistic view; the "new machinery" is the new industry.
 
  • Like
Likes jack action
  • #93
I am also often unhappy with the way these discussions go. But I hope this thread can remain unlocked so an exchange of views and ideas may continue.

russ_watters said:
The only thing actually being proposed is fear.

Referring to the Asilomar Principles?

anorlunda said:
Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.

Have you read and evaluated the list of the signers of the Asilomar Principles, and in particular the researchers? One thing about controversial issues is that the discussions can send one deeper into the issues than one otherwise might have gone.

Even now, AI-related problems with significant social and legal implications are being identified, such as algorithmic bias:

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
 
  • #94
gleem said:
russ_watters said:
The only thing actually being proposed is fear.
Referring to the Asilomar Principles?
No, it's related to predictions of what AI will be able to do and how much importance it will take on in people's lives, even though all of that is speculation at this point. The fear comes from the fact that it seems only negative outcomes (up to the destruction of the civilized world as we know it) are possible.

The problem is that we have so little knowledge about this AI future, that no one can either prove or disprove that these negative outcomes can actually happen.
gleem said:
Even now AI related problems are being identified with significant social/legal implications. Algorithmic Bias.

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
Although it is an important problem, I don't think this is related to AI specifically. You can create a simple model with algorithmic bias and solve it with a pencil and a piece of paper. It is more closely linked to people's ignorance of the limits of mathematical models.
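The pencil-and-paper point above can be sketched in a few lines of Python. The loan-approval setting and all of the data below are invented purely for illustration: a trivially simple "model" fitted to biased historical decisions reproduces the bias, with no AI involved at all.

```python
# Hypothetical sketch: algorithmic bias without any "AI".
# Historical loan decisions: (neighborhood, repaid_loan, was_approved).
# Approvals historically favored neighborhood "A" regardless of repayment.
history = [
    ("A", True,  True), ("A", False, True), ("A", True,  True),
    ("B", True,  False), ("B", True,  False), ("B", False, False),
]

def approval_rate(records, neighborhood):
    """Fraction of applicants from a neighborhood who were approved."""
    rows = [r for r in records if r[0] == neighborhood]
    return sum(1 for r in rows if r[2]) / len(rows)

# The simplest possible "model": imitate the historical approval rate
# per neighborhood. You could compute this by hand.
def model(neighborhood):
    return approval_rate(history, neighborhood) >= 0.5

print(model("A"))  # True: inherits the historical favoritism
print(model("B"))  # False: reliable repayers from "B" are still rejected
```

A fancier learner trained on the same records would pick up the same pattern; the bias lives in the data and the objective, not in any particular algorithm.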

I don't think it should be discussed in this thread (But if you start another one, I might jump in :wink::smile:).
 
  • Like
Likes russ_watters
  • #95
jack action said:
The problem is that we have so little knowledge about this AI future, that no one can either prove or disprove that these negative outcomes can actually happen.

I think everybody agrees; in fact, there are some who think true "general AI" is not even possible. The signers of the Asilomar Principles seem to be totally in favor of the development of AI and are implementing it as fast as they can, with prudence. The history of automation has shown that it has been by and large beneficial and that "Luddite"-type fears have been unfounded. But many reasonable people think that AI is a game changer of extreme importance, not to be ignored. Nonetheless, there is extreme optimism among AI researchers about its beneficial value, while realizing that this does not come without some negative consequences, which should be addressed early on.

jack action said:
Although an important problem, I don't think this is related to AI specifically.

I agree, and almost did not include it, but I think it demonstrates a lack of concern or attention to how some values might find their way into AI systems, intentionally or not, which many AI researchers seem to see as a distinct problem.
 
  • #96
gleem said:
Referring to the Asilomar Principles?
No, those are just generic guiding principles. They have little relevance to the subject of the thread, which is doom and gloom predictions and legislation to prevent them. I took engineering ethics in college - that isn't what this is about.
Have you read and evaluated the list of the signers of the Asilomar Principles, and in particular the researchers?
No, why should I do that?

Edit: I'm not being coy or flip here: you're putting the cart before the horse. If I don't think the list of principles has any relevance, then the signatories don't matter.
 
Last edited:
  • #97
The doom and gloom is not from the signers of the Asilomar conference. It is from the popular media. To answer the OP: the reason for this outlook is simple: all news about death, destruction, crime, sex, violence, apocalypse scenarios, etc. sells. Amen. Anything to which any of those items can be attached is fair game: radiation, genetic engineering, weather, the stock market, war...

Good, fun AI?

1oldman2 said:
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter

There are currently companies beginning to produce various types of flying cars, and AI will be vital in implementing this mode of transportation, especially in metropolitan areas.

And then there are AI companions: http://www.newsweek.com/ai-ageing-companion-alexa-old-people-elliq-artificial-intelligence-541481
You don't need an imaginary friend anymore, or even a dog or cat.

And what about those cleaning robots? All without negative sociological or economic effect. (I think.)
 
  • #98
gleem said:
And what about those cleaning robots? All without negative sociological or economic effect. (I think.)

Whoops. I just recently heard that iRobot's cleaning robot, the Roomba, has been doing more than cleaning floors. Apparently it has also been surveying the house as it cleans. iRobot was intending to sell this info to whoever wanted it. They said, of course, that they would not have released the info without the homeowner's consent.

Of course we already know Alexa and Siri are collecting data.
 
