Why is 'AI' often so negatively portrayed?

  • Thread starter: n01
  • Tags: AI

Summary:
The portrayal of AI often leans negative in popular media, primarily due to societal fears of the unknown and the potential consequences of advanced technology. This negativity is fueled by the ease of creating compelling narratives around villainous AI, which resonate with audiences and reflect historical anxieties about advanced societies displacing less advanced ones. While there are positive representations of AI in fiction, such as benevolent characters in various films and books, these are often overshadowed by dystopian themes. Discussions emphasize the importance of context when evaluating AI's portrayal, as well as the need to consider specific applications and their potential impacts in the real world. Overall, the conversation highlights a complex relationship between AI's fictional depictions and societal perceptions, underscoring the necessity for balanced discourse.
  • #61
Decades ago, AI was grossly oversold, with everyone and his brother jumping on the bandwagon. Many people had too much "artificial" and not enough "intelligence". Progress was slow and idealistic theories ran into computer limitations.

That being said, I think that the current state of AI is very impressive (even if it is still being oversold). Significant capabilities like automated cars, license plate scanners, facial recognition, etc., etc., etc., are becoming practical.
 
  • #62
FallenApple said:
For instance, say AI becomes so advanced that it can do most human tasks. This could lead to:
1) Massive unemployment/ social strife

Or

2) Utopia in which the machines/programs do and build everything for us and with the cost of items reducing down to cost of raw materials, no one would need to work anymore.

Or 1) followed by 2).
This argument is along the line of one of Musk's arguments found in the article of post #44:
Musk said job disruption will be massive when A.I.-powered machines reach their potential, joking “I’m not exactly sure what to do about this,” before adding, “This is really like the scariest problem to me.” He noted the transportation job sector — which accounted for 4.6 million jobs in 2014 — “will be one of the first things to go fully autonomous.”

“The robots will do everything, bar nothing,” he said dryly.
I'm no expert on AI, so I cannot bring arguments about the potential of AI. But I know enough to recognize that this kind of argument is a logical fallacy. It is along the lines of: «buggy whips will become obsolete when people start to travel in cars rather than in horse-drawn buggies, therefore one should be careful before encouraging the car industry.»

I repeat here what Musk said:
“This is really like the scariest problem to me.”
Really? Why doesn't Musk see that in a positive way instead? Like those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives. My grandfather quit school at age 12 and cut down trees with axes and saws for a living. Would it have been a good idea not to invent the chainsaw and the even more productive logging equipment we have today, just so I could have had a job in that industry at 14 years old? Instead, I went to university and earned a baccalaureate in Mech. Eng. I don't know if that is better, but one thing is for sure: I don't think my life is worse than my grandfather's.

Thinking that robots doing the jobs we do today means humans will cease to work is just plain wrong, because the past is an indication of what the future holds. And people never stop being curious and embarking on new adventures.

When someone serves me this kind of argument, it doesn't help me trust the validity of his other arguments, which only promote fear, often along the lines of the end of the world (there are plenty of past examples proving such fears unreasonable).
 
  • #63
UsableThought said:
jack action said:
Like those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives.
What alternate universe is this going to happen in?
Sorry, I don't think I get your point. I just explained how my grandfather couldn't study past age 12 because he had to work. Years later, it wasn't my case. There are billions of people who can still follow that path and even go further.
UsableThought said:
And what world is it that has no possibility whatsoever of coming to an end because it never has ended before?
It is not about having no possibilities, it is about probabilities. And either you see the things you're doing in a positive way or, if you don't, you stop doing them and do something else.

I find it weird that someone like Musk invests in AI and then spreads fear about how it will destroy us. If you don't believe in it, stop doing it.

He wants laws. What laws? What laws could have been declared in 1900 about the car society we live in today? How can you foresee the future of something you don't even master yet? Do we really think some evil genius will try to use the technology to destroy the world? Or is it more probable that people will adjust as they go along, for the greater good, like they always did in the past? Unstoppable technology that sneaks up on us without our ever noticing it? I highly doubt that.

Another thing I don't understand is the fear that a being smarter than us (whatever that means) will try to get rid of us because we're less smart. Not only do humans not try to get rid of other life forms judged less smart, but as our knowledge grows, we recognize and value more and more the diversity of life in any shape or form. We spend an insane amount of energy trying to rescue and maintain other life forms, a very unusual trait among animals or plants. Why would a smarter being abandon that trajectory and go back to a caveman mentality?
 
  • #64
jack action said:
I find it weird that someone like Musk invests in AI and then spreads fear about how it will destroy us. If you don't believe in it, stop doing it.
[rantmodeinitiated]
I just find it annoying that a billionaire has such a poor grasp of economics that he falls for a bumper-sticker style economic myth!

Here's how it works (and clearly, he and other adherents put no thought toward what happens after Step 1):
Step 1: New machine leaves millions unemployed.
Step 2: Large pool of unemployed workers reduces the cost of labor.
Step 3: Other industries hire more people because now it is cheaper to employ them. Or:
Step 3a: Unemployed people acquire new skills and get different new jobs.

Now, I'm not saying this process is pain-free - it can be extremely painful, especially for the individuals - but over the very long-term, the market adapts and adjusts and unemployment rates remain remarkably consistent.

[/endrant]
 
  • #65
jack action said:
What I don't understand is why people like Musk and Hawking speak of it with such fear. Aren't they in the field too (anyway, closer than I can be)? I'm curious to hear your thoughts about the speeches of these people.

That is a good point. But I'll refer to my earlier post when I say that people who don't know what's "under the hood" of how AI architectures are constructed are really just talking through their hat. I like Elon Musk; he's an inspiration to me, and I intend to do everything in my power to help him realize his dream of colonizing Mars. Stephen Hawking is a legend. But, iconic scientists as they are, I'm sure neither of them has much experience working with neural network architectures, so how can we look to them for guidance or sanity?

The bottom line is that no scientific quest is going to surpass the Manhattan Project. You want to talk about a project that was going to manifest itself no matter what? That was it. And it isn't ancient history; it really is the biggest threat to bring down the temple of progress humans have built over the past 5,000 years.

The threat of nuclear war and climate change are by far the most immediate threats to our existence. In that sense, I ally with Noam Chomsky who has been pushing this for years.

As far as the robots, sure, they can go awry, but again as I said in the earlier post, it's up to how we design them.

jack action said:
Another thing I don't understand is the fear that a being smarter than us (whatever that means) will try to get rid of us because we're less smart. Not only do humans not try to get rid of other life forms judged less smart, but as our knowledge grows, we recognize and value more and more the diversity of life in any shape or form. We spend an insane amount of energy trying to rescue and maintain other life forms, a very unusual trait among animals or plants. Why would a smarter being abandon that trajectory and go back to a caveman mentality?

So this is a good sentiment, but the fact of the matter is that high intelligence does not equate with high altruism or morality. Yes, I love the apes and even the monkeys and want to preserve them, but that's because of a sentimental and altruistic sense that was burned into my brain for natural selection purposes a long time ago. And that's all good with me. But for the robots, we cannot assume anything; we need to make things explicit.

How? Well, if they were biological creatures, we could just do something with the genetics and make them dependent on dietary lysine, as in the Jurassic Park movie.

Thank god for what we AI researchers are doing, though: LIFE WILL NOT FIND A WAY. Why? Because we are not dealing with life; we are dealing with in silico preparations, not in vivo preparations. Therefore, we can control the parameters to make sure nothing goes wrong...
 
  • #67
DiracPool said:
Thank god for what we AI researchers are doing, though: LIFE WILL NOT FIND A WAY. Why? Because we are not dealing with life; we are dealing with in silico preparations, not in vivo preparations. Therefore, we can control the parameters to make sure nothing goes wrong...

I am not sure why you think that non-living things will always be controllable, and I think you also miss that we humans are part of this, driving the technology forward while having to agree on how to use it.

To be in control of a socio-technical system that uses a technology such as AI means that we at all times effectively can and will control the design and operation of the underlying systems so that we steer towards desirable outcomes and stay away from bad outcomes. This implies that several conditions must all be established:
  1. We must be sufficiently able to predict the set of possible outcomes and their desirability ahead in time.
  2. We must have sufficient consensus on what is considered desirable and what is considered undesirable.
  3. There must exist a set of parameters that will allow us to steer our systems toward what we desire and away from what is not desired.
  4. We must have the collective will to actually apply this control.
  5. These conditions must all be established at all times.
Each of these conditions has obvious failure modes that could mean loss of control at the wrong time. Note that conditions like number 4 depend heavily on "human nature" in a competitive environment and less on the technology itself. Of course, occasional loss of control does not imply we will get an undesirable outcome, but currently we are heading towards a situation where pretty much none of the conditions are established at any time, hence we have no idea what level of control we actually have.

The above can be said to have been true for pretty much any technology we have developed so far, and yet we seem to be overall content with the outcome, so why would this be a problem for (general) AI technology? Currently there is heavy research into making systems self-learning, autonomous, adaptive and distributed to a higher degree (not necessarily all at once), and each of these capabilities will (everything else being equal) weaken the conditions mentioned above far more than any of the previous "passive" technologies we have developed. For instance, to me it seems we already have serious issues with the predictability of interconnected systems due to their complexity alone, and instead of simplifying and thus gaining a higher degree of predictability, we just add more "AI complexity" to the systems instead. If we look at the "traditional" AI singularity problem, then this is also a problem of loss of predictability, both from increased complexity and from decreased prediction time.
 
  • #68
DiracPool said:
So this is a good sentiment, but the fact of the matter is that high intelligence does not equate with high altruism or morality.
I did not mean it as altruism or morality. For my part anyway, I understand that I cannot live without other forms of life. We need fish, weeds, reptiles, insects and bacteria. Most reptiles and insects don't inspire a sentiment of altruism in me! More often, I have to fight a sentiment of disgust, restraining myself from getting rid of them all! I think that it is the result of my intelligence connecting the dots between the survival of other species and my own survival.
Rubidium_71 said:
I really like the main point of that general, though:
I don't think it's reasonable for us to put robots in charge of whether or not we take a human life...[America should continue to] take our values to war.
It's not really a question of fearing a robot uprising, but of questioning our value system from an ethical point of view.

It is along the lines of «Who is responsible in a car accident with driverless cars?» The car passenger, the car owner or the car manufacturer? That is not an easy question to answer, and a more serious problem to solve in the short term than AI taking over the world.
Filip Larsen said:
Currently there is heavy research into making systems self-learning, autonomous, adaptive and distributed to a higher degree (not necessarily all at once), and each of these capabilities will (everything else being equal) weaken the conditions mentioned above far more than any of the previous "passive" technologies we have developed.
But the question is always how do you prepare for the unknown? So we need control. Control over what? How?

In 1900, when they began dreaming about a car society, how could they have foreseen the problems the internal combustion engine would cause? How could they prepare for that? Talking about pollution? The pollution of the time was horse manure. That inoffensive gas going out the tailpipe was a god-given gift! Even if you could go back in time and tell those people, «You know what? You should focus more on the electric car,» that would not even guarantee that everything would be better today, because we don't know the outcome of a world filled with electric cars. The reality is that it is our problem now, not the problem of people who died decades ago. And there will always be problems to solve.

The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

One should always advance with care into the unknown, that is good common sense.
 
  • #69
The greatest concern is the development of a "general artificial intelligence". Do we need to be gods and create something in our own image? Wouldn't "smart" stuff be good enough? That is, make tools that we control, not competitors that we might not be able to.

Musk is as much an AI supporter as anybody. His "prediction" of our demise is a possibility if we are not careful in its implementation. He is a signer of the "Asilomar AI Principles", along with about 3,500 others, including 1,200 AI/robotics researchers. These principles have been drafted by the AI community to help assure that AI will be a benefit to mankind.

His particular message at the governors' conference was to warn of the uncontrolled development and implementation of AI, and of market-driven forces that might try to exploit this technology for economic advantage with little concern for possible unintended consequences.
jack action said:
In 1900, when they began dreaming about a car society, how could they have foreseen the problems the internal combustion engine would cause? How could they prepare for that? Talking about pollution? The pollution of the time was horse manure. That inoffensive gas going out the tailpipe was a god-given gift! Even if you could go back in time and tell those people, «You know what? You should focus more on the electric car,» that would not even guarantee that everything would be better today, because we don't know the outcome of a world filled with electric cars. The reality is that it is our problem now, not the problem of people who died decades ago. And there will always be problems to solve.

The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

If we build it correctly, everything will be fine. Right. Man is not perfect, and neither is his technology. He tends to mind his pocketbook more than his future.

Today we are all connected through the internet and are becoming more dependent on it economically. In the past month we have seen a worldwide virus attack. If one can take down the internet for a substantial time, what will be the result?

Man is his own worst enemy. Everything we do has a downside. Today the medical community (do no harm, right?) is responsible for a growing opioid epidemic and a growing menagerie of superbugs. Technology kills or maims millions each year. Has man developed anything that did not have some unforeseen consequences? Is he learning anything from his past experiences? Will he ever? Shouldn't we be more cautious with the more complex technologies coming down the pike? You would think so.
 
  • #70
@Filip Larsen and @gleem have posted what I consider informed comments - that is, comments that are not merely opinion (although they include opinion) but in addition either list or else point to actual knowledge related to responsibly overseeing the development of AI - and not just at the basic level of coding, either. For example, going to the website gleem references for the Future of Life Institute and the 2017 Asilomar conference, we find a list of principles agreed on at that conference. An interesting excerpt from that list:

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?
Look at number 2 above more closely - the last phrase in the sentence before we get to the bullet list of concerns. Read that phrase again:

thorny questions in computer science, economics, law, ethics, and social studies

And now look at the list of speakers from the conference page: https://futureoflife.org/bai-2017/

Most of the names don't mean anything to me. Why? Because I know nothing about AI. I could make a whole bunch of claims about AI and its perils, or lack thereof, based on my personal ideological spin to do with politics, economics, and social issues - but that wouldn't change the fact that I'm not a scholar when it comes to politics, economics, social issues, or AI. I'm not even a well-read amateur. I know nothing. Whereas the people listed in the conference, who developed this list of principles? Some of them look to be Hollywood types, brought on board for persuasion purposes perhaps; but others look to have solid scientific AI credentials.

In other words, they are experts. They might have a clue. Possibly you might recognize a name here & there and be able to dismiss that person for some reason or other - but can you dismiss all of these names?

To close, I really wish the General Discussion forum held comments to the same standards as Quantum Physics, Classical Physics, etc. Because it doesn't, we end up with some very bright people making sweeping claims about issues they really know very little about. You can look further than that and see that because PF is really set up for only the hard sciences, it doesn't know how to properly handle the "soft" sciences of economics, law, ethics, and social studies - all of which the folks at the Asilomar conference seem to think are important. Basically, PF unintentionally disses the soft sciences by not requiring the same level of sourcing required for the hard sciences. It's a shame.
 
  • #71
gleem said:
Musk is as much an AI supporter as anybody. His "prediction" of our demise is a possibility if we are not careful in its implementation. He is a signer of the "Asilomar AI Principles", along with about 3,500 others, including 1,200 AI/robotics researchers. These principles have been drafted by the AI community to help assure that AI will be a benefit to mankind.

His particular message at the governors' conference was to warn of the uncontrolled development and implementation of AI, and of market-driven forces that might try to exploit this technology for economic advantage with little concern for possible unintended consequences.
Who wouldn't agree to these principles? They can be applied to basically any industry. Here's how little modification is needed (underlined and struck through) to relate these principles to the automobile industry:
Research Issues

1) Research Goal: The goal of automotive research should be to create not undirected transportation, but beneficial transportation.
2) Research Funding: Investments in automotive transportation should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future automotive systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with automotive transportation, and to manage the risks associated with automotive transportation?
  • What set of values should automotive transportation be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between automotive researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of automotive transportation.

5) Race Avoidance: Teams developing automotive systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: automotive systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an automotive system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced automotive systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Automotive systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: Automotive systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of automotive to personal transportation must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: Automotive technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by automotive transportation should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of automotive systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) Automotive Arms Race: An arms race in automotive weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future automotive capabilities.
20) Importance: Automotive transportation could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by automotive systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Automotive transportation should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Although these weren't written in the 1900s, most of them have been followed by most of the industry and lawmakers since that time. That is because it is common sense. But there is also always a dictator somewhere who won't follow these guidelines, and you can't do anything about it except wait for his government to fall. Although there seem to be fewer and fewer of those governments as technology evolves.

You could apply this to basically any technology we have developed over the years. AI researchers are not the only ones with a conscience.
 
  • #72
jack action said:
You could apply this to basically any technology we have developed over the years. AI researchers are not the only ones with a conscience.

Let me quote you, Jack:
jack action said:
The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

Basically saying to the person you were responding to, "So don't even try." Even though he was saying things that line up quite well with the conference principles you now say you agree with. Perhaps you are not aware of how your comments come across when you are dismissive without demonstrating evidence or knowledge to justify that attitude?

To be blunt: In none of your sweeping claims have you cited any evidence that would be of an acceptable form for any of the forums on PF where evidence is called for. We all have our personal beliefs and personal philosophies about how the world works, mostly inherited from families, friends, and both regional culture & mass culture; I just wish you would understand that "n of 1" beliefs and philosophizing don't substitute for expert knowledge. This is part of why PF exists. You should do some reading and studying on issues like this if they really interest you.

Me, I would much rather this thread had been shorter, but made up mostly of comments from people who actually do know something about the debate over AI and could inform the rest of us. At least we have a few of those.
 
  • #73
@UsableThought :

Don't forget that this discussion started with the article cited in post #44, where Musk said:
“we should be really concerned about A.I.,” and that the emerging technology presents a rare case where government should be proactive in regulation instead of reactive to industry.
He's asking for government regulations. Having principles is one thing, forcing them via a government is another thing.

Here's one of the principles that makes me laugh:
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Here's the basic principle (it applies to anything we do or could ever possibly think of doing):

«An arms race in any type of weapons should be avoided.»

That is the basic statement. It has nothing to do with AI at all. Are AI researchers OK with a nuclear arms race? Or maybe they just draw the line at machine guns? Of course they're not OK with an arms race of any kind. Nobody in his right mind is.

What is the difference between principles and regulations? The best example of regulations going wrong is the Cookie Law. https://silktide.com/the-stupid-cookie-law-is-dead-at-last/ shows really well the panic and ignorance of lawmakers in that case. But we're stuck with those annoying and useless pop-ups, wasting bandwidth that the industry is trying so hard to save. Once a law is put in place, it is really hard to remove it.

I agree with you about the ignorance of people about AI (me included). But that also includes the lawmakers. And in my experience, they tend to make laws that set questionable goals, protect the wrong things and waste valuable resources, especially when the laws are created following a panic similar to the one about AI we are discussing right now (whether justified or not). So, it's not my knowledge about AI that qualifies me for an opinion on the subject, it's my knowledge about lawmakers.

So before someone tells me that I don't know anything about, say, AI and that this disqualifies me from having an opinion about the laws that will govern me, my only answer will be «Enlighten me». Up until now, not much evidence has been shown to me to justify entering panic mode, other than:
“On the artificial intelligence front, I have exposure to the very, most cutting-edge A.I., and I think people should be really concerned about it,” Musk told Sandoval.
If Elon Musk wants me to support new regulations of any kind, he must show evidence. «People should be really concerned about it» is not evidence.
 
  • #74
First, let's just take a deep breath.

jack action said:
If Elon Musk wants me to support new regulations of any kind, he must show evidence. «People should be really concerned about it» is not evidence

That might be a problem. For once you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.

I hold that Musk's opinion is of significant value, as a respected citizen and implementer of AI technology. The question of over-regulation is very debatable considering the consequences of failure to prevent a catastrophic disaster. You hear it all the time: "Why didn't someone do something to prevent this?" No one wants regulations until they need them.

While the Asilomar principles can be applied to any number of technologies (and maybe they should be), they at least give a basis on which many experts agree on how to proceed in a safe and responsible manner. As Musk warns, you don't just want to turn this technology loose in the free market and say "may the most intelligent system win" without some adherence to reasonable guidelines or, if necessary, regulations. Does anyone doubt the value of regulation of the food processing industry, or of manufacturing working conditions, or of medical devices and drugs?
 
  • #75
Even the wildly optimistic "Technological Singularity" in 2043 can be viewed with alarm if you're so inclined. The dominant intelligence might not be human.

In terms of existential threat, I look at it this way. The following is a seriously flawed analogy, but I have no better analogy because we have never faced this situation before.

What would happen if we learned that elephants had developed nuclear weapon capability? (Remember, flawed analogy.) Some people would rush to remind us how we hunted elephants for sport and ivory. They would have ample motive to nuke us for revenge. That possible threat would be intolerable, so we would exterminate the elephants immediately. After all, they have no human rights to make us hesitate. In human law, cockroaches and elephants stand equal.

My view is that a dominant species (if species is the correct word) can never tolerate a secondary species with lethal military capability. That is why evolution of superior intelligence is an existential threat to humanity.

But I am explaining the logic of the doomsayers. I don't subscribe to those fears myself.
 
  • #76
Rubidium_71 said:
Because of things like this:
Yeah, I agree.... :oldeyes:

[Attached screenshot: Blocking ads.JPG]
 
  • #77
The problem with AI is that we don't know whether intelligent machines can, or will, edit themselves. We won't know their capabilities until we create one, and just one AI can do a lot of interesting things. I'm not sure if the movie Transcendence is scientific, but the power gained by the AI in it is unparalleled. Humans fear the future for what it holds, but then again, why would we need AI to begin with?
 
  • #78
I know it is hard to see the dire consequences of AI gone amok, but so many experts feel it is very possible that, as with climate change, you might want to hedge your bet and go with the expert opinion even if it is a little inconvenient. While I do not think AI is the most immediate threat to our civilization, it would be sad if we survived nukes, climate change and pestilence only to go gently into the night.

I'd like to point out that Musk is really heavy into AI, and not just with his cars. He owns two AI enterprises: Neuralink, to develop an interface between our brains and the internet, and OpenAI, a research company to develop safe AI.

Let us hope that those from the likes of IBM, Facebook, Apple and Google who signed the Asilomar agreement are putting their money where their mouths are.
 
  • #79
jack action said:
This argument is along the line of one of Musk's arguments found in the article of post #44:

I'm no expert on AI, so I cannot bring arguments about the potential of AI. But I know enough to recognize that this kind of argument is a logical fallacy. It is along the lines of: «buggy whips will become obsolete when people start to travel in cars rather than in horse-drawn buggies, therefore one should be careful before encouraging the car industry.»

I repeat here what Musk said:

Really? Why doesn't Musk see that in a positive way instead? Like those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives. My grandfather quit school at age 12 and cut down trees with axes and saws for a living. Would it have been a good idea not to invent the chainsaw and the even more productive logging equipment we have today, just so I could have had a job in that industry at 14 years old? Instead, I went to university and earned a baccalaureate in Mech. Eng. I don't know if that is better, but one thing is for sure: I don't think my life is worse than my grandfather's.

Thinking that robots doing the jobs we do today means humans will cease to work is just plain wrong, because the past is an indication of what the future holds. And people never stop being curious and embarking on new adventures.

When someone serves me this kind of argument, it doesn't help me trust the validity of his other arguments, which only promote fear, often along the lines of the end of the world (there are plenty of past examples proving such fears unreasonable).

I haven't scrutinized what Musk said, so I can't really comment on that. But I think that it would be a little different from buggy whips. That's just a small section of society. When we are talking about a huge portion of society being put out of jobs at a rate fast enough to make it difficult to adjust, we can't predict what would happen. The difference is that those people operating buggy whips can simply move on to other available jobs. Today, taxi drivers could just become Uber drivers or some other kind of driver. But if general artificial intelligence takes over most tasks, then it would be difficult to find a new job. Clearly it depends on the rate at which AI progresses. If AI progresses fast, but not so fast that governments can't keep up, some things can be done to safeguard people. But if there's a runaway effect where suddenly machines can do most tasks that humans can do (think a period of only a few years after a breakthrough in AI theory), then yes, you can surmise that there might be some problems.

Past examples don't work because they are too self-contained and too different in scope and character to provide any meaningful extrapolations in the case of AI. I'm not against AI. I think it would help fix many problems over time. However, all I am saying is that there are some risks, and there are valid concerns about those risks.
 
  • #80
anorlunda said:
My view is that a dominant species (if species is the correct word) can never tolerate a secondary species with lethal military capability. That is why evolution of superior intelligence is an existential threat to humanity.
But the human species is made up of a multitude of races (if race is the correct word), some with lethal military capability, and we do tolerate each other. I mean, there are wars, but we don't see one race exterminating another.

I know it's not your point of view, but I really don't see the evidence that shows that life is a contest where only one winner will emerge.
gleem said:
For once you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.
That is true of so many problems. The best example is: Should we be prepared for a large asteroid colliding with Earth? If so, how large and fast of an asteroid should we be prepared for? No matter what, there is also the possibility that it will be so large that there is nothing we can do about it (say, as big as the planet itself).

Again, it's not about denying the possibility that something can happen, it is about estimating the probability of its occurrence. And to calculate probability, we need data. Without data, there is no valid way other than to see what will happen as we go along, no matter how scary it can be.
FallenApple said:
But if general artificial intelligence takes over most tasks, then it would be difficult to find a new job.
But if so many people don't have jobs - and thus can't spend money - who will finance those companies manufacturing products and offering services with AI? It all works together; you cannot split supply and demand.
 
  • #81
jack action said:
I know it's not your point of view, but I really don't see the evidence that shows that life is a contest where only one winner will emerge.

Of course there's no evidence. It's unprecedented.

But we're talking about human fear. A phobia. Fear is real, even if irrational. The nocebo effect applies to both individuals and groups. People have the right to take protective action based on fear alone. I don't think they should, but I recognize the right.
 
  • #82
anorlunda said:
Of course there's no evidence. It's unprecedented.

But we're talking about human fear. A phobia. Fear is real, even if irrational. The nocebo effect applies to both individuals and groups. People have the right to take protective action based on fear alone. I don't think they should, but I recognize the right.
I also recognize that right. But when someone wants to force me to do something because of his fear, that is where I draw the line.
 
  • #83
jack action said:
Again, it's not about denying the possibility that something can happen, it is about estimating the probability of its occurrence. And to calculate probability, we need data. Without data, there is no valid way other than to see what will happen as we go along, no matter how scary it can be.

You are right, without data, we can't predict. So we have no idea what could happen with AI, at least empirically. The thing is, for circumstances like taxi drivers and telemarketers losing jobs, we have historical data because jobs were replaced before due to situations that were comparable in scope. For example, what is similar between buggy drivers losing their jobs and taxi drivers losing their jobs is that it's just one industry being disrupted by a particular innovation. When we have displacement in many industries across the board, simultaneously, we don't know what could happen, because this has never happened before. So the vast uncertainty in a potential situation like that is what is concerning.

jack action said:
But if so many people don't have jobs - and thus can't spend money - who will finance those companies manufacturing products and offering services with AI? It all works together; you cannot split supply and demand.

I suspect that the job losses would be for jobs that do not require very high levels of analytical/abstract/creative thinking. So those who can do that kind of thinking would have all the money, presumably. However, in a situation like that, there would probably be a need for some redistribution of wealth to preserve society.
 
  • #84
gleem said:
That might be a problem. For once you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.

I hold that Musk's opinion is of significant value, as a respected citizen and implementer of AI technology. The question of over-regulation is very debatable considering the consequences of failure to prevent a catastrophic disaster. You hear it all the time: "Why didn't someone do something to prevent this?" No one wants regulations until they need them.
You can add me to the list of people who say that's not good enough. Fortunately for me, for now it isn't even specific enough to be an issue since, as far as I know, no one is proposing any *actual* legislation. The only thing actually being proposed is fear. I choose to vote no on that too.
 
  • #85
FallenApple said:
I haven't scrutinized what Musk said, so I can't really comment on that. But I think that it would be a little different from buggy whips. That's just a small section of society. When we are talking about a huge portion of society being put out of jobs at a rate fast enough to make it difficult to adjust, we can't predict what would happen.
How big? How fast? Faster than Netflix killed Blockbuster? Faster than Amazon is killing brick and mortar? Faster than overseas competition and automation (the subject of the thread!) have hurt the manufacturing and steel industries? "It will be worse" is easy and cheap to say -- and just as meaningless.
 
  • #86
UsableThought said:
To be blunt: In none of your sweeping claims have you cited any evidence that would be of an acceptable form for any of the forums on PF where evidence is called for.
Nor has anyone on the "pro" side, where it is really most required. The fears - much less what to do about them - are far too vague.
 
  • #87
russ_watters said:
Nor has anyone on the "pro" side, where it is really most required. The fears - much less what to do about them - are far too vague.

Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, any more than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.
 
  • #88
UsableThought said:
Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, any more than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.
That's all fine. Making snap "pro" judgements based on those empty scare headlines and then shifting the burden of proof was what I was objecting to.
 
  • #89
UsableThought said:
Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, any more than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.

Sorry, being an expert who creates software has no relation to being qualified to predict social impacts. To use a pop icon analogy, a Sheldon Cooper type could be a likely AI insider expert. Would you ask Sheldon to design our society?

If you want to extrapolate beyond the immediate future, I would look to SF before trusting techie AI programmers. If you want a vibrant society, it is better to have opinionated, involved citizens than passive, apathetic ones.
 
  • #90
anorlunda said:
Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.

You are jumping to a completely mistaken conclusion about who exactly I consider the required experts. I suggest you re-read my earlier comments on the same theme.
 
