Why is 'AI' often so negatively portrayed?

  • Thread starter n01
  • #76
OCR
Because of things like this:
Yeah, I agree... :oldeyes:

[Attached image: Blocking ads.JPG]
 
  • #77
InfiniteEntity
The problem with AI is that we don't know whether intelligent machines can, or will, edit themselves. We won't know their capabilities until we create one, and just one AI can do a lot of interesting things. I'm not sure if the movie Transcendence is scientific, but the power gained by the AI in it is unparalleled. Humans fear the future for what it holds, but then again, why would we need AI to begin with?
 
  • #78
gleem
Science Advisor
Education Advisor
I know it is hard to see the dire consequences of AI gone amok, but so many experts feel it is very possible that, as with climate change, you might want to hedge your bet and go with the expert opinion, even though it might be a little inconvenient. While I do not think AI is the most immediate threat to our civilization, it would be sad if we survived nukes, climate change, and pestilence only to go gently into the night.

I'd like to point out that Musk is really heavy into AI, and not just with his cars. He owns two AI enterprises: Neuralink, to develop an interface between our brains and the internet, and OpenAI, a research company to develop safe AI.

Let us hope that those from the likes of IBM, Facebook, Apple and Google who signed the Asilomar agreement are putting their money where their mouths are.
 
  • #79
This argument is along the line of one of Musk's arguments found in the article of post #44:

I'm no expert on AI, so I cannot bring arguments about the potential of AI. But with this kind of argument, I have enough knowledge to know it is a logical fallacy. It is an argument along the line of «buggy whips will become obsolete when people start to travel in cars rather than in horse-drawn buggies, therefore one should be careful before encouraging the car industry.»

I repeat here what Musk said:

Really? Why doesn't Musk see that in a positive way instead? Like: those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives. My grandfather quit school at age 12, and cutting down trees with axes and saws was his job. Would it have been a good idea not to invent the chainsaw and the even more productive logging equipment we have today, just so that I could have had a job in that industry when I was 14 years old? I don't know if it is better, but I instead went to university, earning a baccalaureate in Mech. Eng. One thing is for sure: I don't think my life is worse than the life of my grandfather.

Thinking that robots doing the jobs we do today means humans will cease to work is just plain wrong, because the past is an indication of what the future holds. And people never stop being curious and embarking on new adventures.

When one serves me this kind of argument, it doesn't help me trust the validity of his other arguments, which only promote fear, often along the line of the end of the world (there are lots of past examples related to this as well that prove such fears are unreasonable).

I haven't scrutinized what Musk said, so I can't really comment on that. But I think it would be a little different than buggy whips. That's just a small section of society. When we are talking about a huge portion of society being put out of jobs at a rate fast enough to make it difficult to adjust, we can't predict what would happen. The difference is that those people making buggy whips could simply move on to other available jobs. Today, taxi drivers could just become Uber drivers or some other kind of driver. But if general artificial intelligence takes over most tasks, then it would be difficult to find a new job. Clearly it depends on the rate at which AI progresses. If AI progresses fast, but not so fast that governments can't keep up, some things can be done to safeguard the people. But if there's a runaway effect where suddenly machines can do most tasks that humans can do (think a period of only a few years after a breakthrough in AI theory), then yes, you can surmise that there might be some problems.

Past examples don't work because they are too self-contained and too different in scope and character to provide any meaningful extrapolation in the case of AI. I'm not against AI. I think it will help fix many problems over time. However, there are some risks, and that there are valid concerns about those risks is all I am saying.
 
  • #80
jack action
Science Advisor
Insights Author
Gold Member
My view is that a dominant species (if species is the correct word) can never tolerate a secondary species with lethal military capability. That is why evolution of superior intelligence is an existential threat to humanity.
But the human species is made of a multitude of races (if race is the correct word), some with lethal military capability, and we do tolerate each other. I mean, there are wars, but we don't see the extermination of one race by another.

I know it's not your point of view, but I really don't see the evidence that shows that life is a contest where only one winner will emerge.
Once you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.
That is true of so many problems. The best example: Should we be prepared for a large asteroid colliding with Earth? If so, how large and fast an asteroid should we be prepared for? No matter what, there is also the possibility that it will be so large that there is nothing we can do about it (say, as big as the planet itself).

Again, it's not about denying the possibility that something can happen; it is about estimating the probability of its occurrence. And to calculate probability, we need data. Without data, there is no valid way other than to see what happens as we go along, no matter how scary that can be.
But if general artificial intelligence takes over most tasks, then it would be difficult to find a new job.
But if so many people don't have jobs - and thus can't spend money - who will finance those companies manufacturing products and offering services with AI? It all works together; you cannot split supply and demand.
 
  • #81
anorlunda
Staff Emeritus
Insights Author
I know it's not your point of view, but I really don't see the evidence that shows that life is a contest where only one winner will emerge.

Of course there's no evidence. It's unprecedented.

But we're talking about human fear. A phobia. Fear is real, even if irrational. The nocebo effect applies to both individuals and groups. People have the right to take protective action based on fear alone. I don't think they should, but I recognize the right.
 
  • #82
jack action
Science Advisor
Insights Author
Gold Member
Of course there's no evidence. It's unprecedented.

But we're talking about human fear. A phobia. Fear is real, even if irrational. The nocebo effect applies to both individuals and groups. People have the right to take protective action based on fear alone. I don't think they should, but I recognize the right.
I also recognize that right. But when someone wants to force me to do something because of his fear, that is where I draw the line.
 
  • #83
Again, it's not about denying the possibility that something can happen; it is about estimating the probability of its occurrence. And to calculate probability, we need data. Without data, there is no valid way other than to see what happens as we go along, no matter how scary that can be.

You are right: without data, we can't predict. So we have no idea what could happen with AI, at least empirically. The thing is, for circumstances like taxi drivers and telemarketers losing jobs, we have historical data, because jobs were replaced before in situations that were comparable in scope. For example, what is similar between buggy drivers losing their jobs and taxi drivers losing their jobs is that it's just one industry being disrupted by a particular innovation. When we have displacement in many industries across the board, simultaneously, we don't know what could happen, because this has never happened before. So the vast uncertainty in a potential situation like that is what is concerning.

But if so many people don't have job - and thus can't spend money - who will finance those companies manufacturing products and offering services with AI? It all works together, you cannot split offer and demand.

I suspect that the losses would be in jobs that do not require very high levels of analytical/abstract/creative thinking. So those who can do that kind of thinking would have all the money, presumably. However, in a situation like that, there would probably be a need for some redistribution of wealth to preserve society.
 
  • #84
russ_watters
Mentor
That might be a problem. Once you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.

I hold that Musk's opinion is of significant value, as a respected citizen and implementer of AI technology. The question of over-regulation is very debatable considering the consequences of failing to prevent a catastrophic disaster. You hear it all the time: "Why didn't someone do something to prevent this...?" No one wants regulations until they need them.
You can add me to the list of people who say that's not good enough. Fortunately for me, for now it isn't even specific enough to be an issue, since as far as I know, no one is proposing any *actual* legislation. The only thing actually being proposed is fear. I choose to vote no on that too.
 
  • #85
russ_watters
Mentor
I haven't scrutinized what Musk said, so I can't really comment on that. But I think it would be a little different than buggy whips. That's just a small section of society. When we are talking about a huge portion of society being put out of jobs at a rate fast enough to make it difficult to adjust, we can't predict what would happen.
How big? How fast? Faster than Netflix killed Blockbuster? Faster than Amazon is killing brick and mortar? Faster than overseas competition and automation (the subject of the thread!) have hurt the manufacturing and steel industries? "It will be worse" is easy and cheap to say -- and just as meaningless.
 
  • #86
russ_watters
Mentor
20,572
7,231
To be blunt: In none of your sweeping claims have you cited any evidence that would be of an acceptable form for any of the forums on PF where evidence is called for.
Nor has anyone on the "pro" side, where it is really most required. The fears - much less what to do about them - are far too vague.
 
  • #87
Nor has anyone on the "pro" side, where it is really most required. The fears - much less what to do about them - are far too vague.

Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, any more than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.
 
  • #88
russ_watters
Mentor
Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, any more than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.
That's all fine. Making snap "pro" judgements based on those empty scare headlines, and then shifting the burden of proof, was what I was objecting to.
 
  • #89
anorlunda
Staff Emeritus
Insights Author
Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, any more than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.

Sorry, being an expert who creates software has no relation to being qualified to predict social impacts. To use a pop icon analogy, a Sheldon Cooper type could be a likely AI insider expert. Would you ask Sheldon to design our society?

If you want to extrapolate beyond the immediate future, I would look to SF before trusting techie AI programmers. If you want a vibrant society, it is better to have opinionated, involved citizens than passive, apathetic ones.
 
  • #90
Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.

You are jumping to a completely mistaken conclusion about who exactly I consider the required experts. I suggest you re-read my earlier comments on the same theme.
 
  • #91
anorlunda
Staff Emeritus
Insights Author
Every time I dip my toes in a PF General Discussions thread, I wind up regretting it.
 
  • #92
russ_watters
Mentor
https://www.bls.gov/opub/mlr/2016/a...disrupt-occupations-but-it-wont-kill-jobs.htm
But here’s where the Monthly Labor Review (MLR) can provide some needed historical perspective. In the 50th anniversary issue in 1965, Lloyd Ulman included this quote from a former Secretary of Labor: “In the long run, new types of industries have always absorbed the workers displaced by machinery, but of late, we have been developing new machinery at a faster rate than we have been developing new industries…At the same time we must ask ourselves, is automatic machinery, driven by limitless power, going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress?” This was from the Secretary of Labor in 1927, "Puddler” Jim J. Davis.

In short, there has long been a national worry about technology-driven unemployment and that fear seems to spike either in recessions or in periods of particularly robust innovation. But to date that fear has always been misplaced. And despite the claims of current techno-dystopians, these claims will be wrong going forward. We can rest easy that in 25 years the unemployment and labor force participation rates will be similar to today’s.[emphasis mine]
The bolded part is the flaw - the self-contradiction - in the pessimistic view; the "new machinery" is the new industry.
 
  • #93
gleem
Science Advisor
Education Advisor
I am often not happy with the way these discussions go either. But I hope this thread can remain unlocked so that an exchange of views and ideas may continue.

The only thing actually being proposed is fear.

Referring to the Asilomar Principles?

Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.

Have you read and evaluated the list of the signers of the Asilomar Principles, and in particular the researchers?


One thing about controversial issues is that the discussions can send one deeper into the issues than one otherwise might have gone.

Even now, AI-related problems with significant social/legal implications are being identified: algorithmic bias.

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
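To make the algorithmic-bias point concrete, here is a toy sketch. Everything in it is hypothetical, invented for illustration (the loan-approval setting, the "north"/"south" neighborhoods, and the data are not from the linked article): a score learned from skewed historical decisions can disadvantage a group through a proxy feature, even though the group label itself is never an input to the model.

```python
from collections import defaultdict

# Historical loan decisions: (neighborhood, approved).
# Suppose "north" is mostly group A, "south" mostly group B,
# and past approvals were skewed against "south".
history = [
    ("north", 1), ("north", 1), ("north", 1), ("north", 0),
    ("south", 0), ("south", 0), ("south", 1), ("south", 0),
]

# "Training": the approval rate per neighborhood becomes the model's score.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for hood, approved in history:
    counts[hood][0] += approved
    counts[hood][1] += 1
score = {hood: a / t for hood, (a, t) in counts.items()}

def predict(neighborhood, cutoff=0.5):
    # Approve if the learned neighborhood score clears the cutoff.
    # The model never sees the group label, yet it reproduces the skew.
    return score[neighborhood] >= cutoff

print(predict("north"))  # True  (3/4 historical approval rate)
print(predict("south"))  # False (1/4 historical approval rate)
```

The point of the sketch: removing the sensitive attribute from the inputs does not remove the bias; it survives in any feature correlated with it.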
 
  • #94
jack action
Science Advisor
Insights Author
Gold Member
The only thing actually being proposed is fear.
Referring to the Asilomar Principles?
No, it's related to predictions of what AI will be able to do and how much importance it will take in people's lives, even though all of that is speculation at this point. The fear comes from the fact that it seems only negative outcomes (going as far as the destruction of the civilized world as we know it) are possible.

The problem is that we have so little knowledge about this AI future that no one can either prove or disprove that these negative outcomes can actually happen.
Even now AI related problems are being identified with significant social/legal implications. Algorithmic Bias.

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
Although it is an important problem, I don't think it is related to AI specifically. You can create a simple model with algorithmic bias and solve it with a pencil and a piece of paper. It is more linked to people's ignorance about the limits of mathematical models.

I don't think it should be discussed in this thread (But if you start another one, I might jump in :wink::smile:).
 
  • #95
gleem
Science Advisor
Education Advisor
The problem is that we have so little knowledge about this AI future that no one can either prove or disprove that these negative outcomes can actually happen.

I think everybody agrees; in fact, there are some who think true "general AI" is not even possible. The signers of the Asilomar Principles seem to be totally in favor of the development of AI and are implementing it as fast as they can, with prudence. The history of automation has shown that it has been by and large beneficial and that "Luddite"-type fears have been unfounded. But many reasonable people think that AI is a game changer of extreme importance, not to be ignored. Nonetheless, there is extreme optimism among AI researchers about its beneficial value, while realizing that it does not come without some negative consequences, which should be addressed early on.

Although an important problem, I don't think this is related to AI specifically.

I agree, and I almost did not include it, but I think it demonstrates a lack of concern or attention for values that might find their way into AI systems, intentionally or not, which many AI researchers seem to see as a distinct problem.
 
  • #96
russ_watters
Mentor
Referring to the Asilomar Principles?
No, those are just generic guiding principles. They have little relevance to the subject of the thread, which is doom and gloom predictions and legislation to prevent them. I took engineering ethics in college - that isn't what this is about.
Have you read and evaluated the list of the signers of the Asilomar Principles and in particular the researcher?
No, why should I do that?

Edit: I'm not being coy or flip here: you're putting the cart before the horse. If I don't think the list of principles has any relevance, then the signatories don't matter.
 
  • #97
gleem
Science Advisor
Education Advisor
The doom and gloom is not from the signers of the Asilomar conference; it is from the popular media. To answer the OP, the reason for this outlook is simple: all news about death, destruction, crime, sex, violence, apocalypse scenarios, etc. sells. Amen. Anything to which any of those items can be attached is fair game: radiation, genetic engineering, weather, the stock market, war...

Good, fun AI?

"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter

There are currently companies beginning to produce various types of flying cars, and AI will be vital in the implementation of this mode of transportation, especially in metropolitan areas.

And then there are AI companions: http://www.newsweek.com/ai-ageing-companion-alexa-old-people-elliq-artificial-intelligence-541481 - you don't need an imaginary friend anymore, or even a dog or cat.

And what about those cleaning robots? And all without negative sociological or economic effects. (I think.)
 
  • #98
gleem
Science Advisor
Education Advisor
And what about those cleaning robots? And all without negative sociological or economic effects. (I think.)

Whoops. I just recently heard that iRobot's cleaning robot, the Roomba, has been doing more than cleaning floors. Apparently it has also been surveying the house as it cleans. iRobot was intending to sell this info to whoever needed it. They said, of course, that they would not have released the info without the homeowner's consent.

Of course we already know Alexa and Siri are collecting data.
 
