Why is 'AI' often so negatively portrayed?

  • Thread starter: n01
  • Tags: AI
AI Thread Summary
The portrayal of AI often leans negative in popular media, primarily due to societal fears of the unknown and the potential consequences of advanced technology. This negativity is fueled by the ease of creating compelling narratives around villainous AI, which resonate with audiences and reflect historical anxieties about advanced societies displacing less advanced ones. While there are positive representations of AI in fiction, such as benevolent characters in various films and books, these are often overshadowed by dystopian themes. Discussions emphasize the importance of context when evaluating AI's portrayal, as well as the need to consider specific applications and their potential impacts in the real world. Overall, the conversation highlights a complex relationship between AI's fictional depictions and societal perceptions, underscoring the necessity for balanced discourse.
  • #51
newjerseyrunner said:
I think it's simply because we're on the precipice of a mind-boggling social disruption, but we haven't quite gone over it yet. It's simply new and untested. It has the potential for destruction on an unimaginable scale, or it could ferry us into a new golden age. Humans had the same reservations about unlocking the power of the atom. The main horror is that we don't know where the major breakthrough will come from, and we don't like being out of control. It's understood that the invention of a truly intelligent machine could outsmart every banker and investor in the world and have total control over the stock market before we can even notice.

War is an even scarier proposition. If two advanced states end up warring, the AI race will heat up. It's a paradigm-shifting technology, and the side that gets there first will overwhelm everyone else. If Hitler had figured out the bomb before us, I'm not sure the Allies would still have won the war. It's a terrifying thought that we didn't get there first by very much, but doing so completely changed the world order.

I've thought a bunch about the long-term effects of AI on a planet. I've come to the conclusion that AI will be the masters of the universe. If we continue to build benign AI, we will become more and more dependent on it. Over generations, it'll play a more and more important role in society. It'll control the economy, entertain us, serve us, and shape our society. As society gets more complex, the need for humans to work will become less and less. Humans and AI will at first work together, but eventually the work will get too complex for humans and the AI will take over. There is a history of this: there are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up hardly working at all: humans and dogs. I think we'll eventually become more like pets to god-like machines, and I think we'll be perfectly okay with that. Most humans currently believe that we are subservient to one or more gods.

@newjerseyrunner, for the scenario you present to come to fruition, are you not assuming that progress in AI development will proceed more or less smoothly? Because, from my vantage point, that is far from obvious. Yes, we have made considerable advances in areas of machine learning, such as neural networks/deep learning, but it is far from clear to me that "strong" AI will necessarily emerge out of deep learning (from what I've read, the most impressive results from deep learning came about more through advances in computing power rather than anything particularly groundbreaking in the specific algorithms or theoretical underpinnings, much of which has been already laid out since the early 1990s, if not before then).
 
  • #52
newjerseyrunner said:
I'm pretty sure the sewage technology up until very recently was "dump it in the ocean and let Poseidon/God deal with it."
A stray dog doesn't need a human to live. It will just «work» to find its food and change territory as it gets soiled by its feces (which a «god» will clean up within a certain time). Finding new territories will most likely require fighting with others (offense and defense).

But if the dog «acts» cute enough, it might convince a human being to find the food, clean the territory and do the fighting with others instead of doing it itself. That's one way of answering the question «Who's leading who?» in the human/dog relationship.

Humans might just not be the gods of dogs; they might just have been led (outsmarted?) into believing they were.
 
  • #53
StatGuy2000 said:
@newjerseyrunner, for the scenario you present to come to fruition, are you not assuming that progress in AI development will proceed more or less smoothly? Because, from my vantage point, that is far from obvious. Yes, we have made considerable advances in areas of machine learning, such as neural networks/deep learning, but it is far from clear to me that "strong" AI will necessarily emerge out of deep learning (from what I've read, the most impressive results from deep learning came about more through advances in computing power rather than anything particularly groundbreaking in the specific algorithms or theoretical underpinnings, much of which has been already laid out since the early 1990s, if not before then).
The major breakthrough recently has been in how neural networks are connected to each other, but yes, hardware has mostly pushed it along. The thing is that neural networks are perfect for quantum computers. If quantum computers work out, neural networks' potential explodes.

And no, I'm not saying it'll be steady. In fact, I leave open the possibility that it may not even be our global civilization that does it. We could go all the way back to a dark age and have to start it all over again. I propose that eventually we will get there. It's even possible that an AI god would end up destroying everything, set us back to more dark ages, and start the whole thing over again. But some should be stable enough to last many human generations. And if one is really self-stabilizing, it could live for millions of years as our benign overlord. Those would likely be the dominant beings of the universe, not green men in ships. All of this I find to be a very likely progression for any creature capable of developing technology as advanced as itself over cosmological timescales.
 
  • #54
Borek said:
n01 said:
Are people just afraid of what AI might entail?
That's my bet.
Mine too.
And AI will be faster, for sure.
And, just like us.

But it's a long way away, IMHO.
If not, I'll find it fascinating to watch.

They won't have the weakness of panic.
 
  • #55
OmCheeto said:
But it's a long way away, IMHO.

Don't be so sure. There are people like me who are working hard to make it happen before my dissertation defense, or actually for my dissertation defense... hopefully.

The reason AI gets a bad rap sometimes is twofold. One, it never delivered what it promised. The idea of AI has been around since the war (and not the Vietnam war), and, to make a gross understatement, it hasn't lived up to the hype. This is the AI insiders' frustration, however, and I think the OP was referring more to the general bad rap AI can get in popular culture.

The reason for the popular bad rap is simply that people don't understand it. And because they don't understand it, i.e., understand what's "under the hood" of AI architectures, they fear it. Basic fear of the unknown. As far as the average Joe or Joanne out there is concerned, a human-derived AI creation might as well be an alien from another galaxy. We know nothing about these aliens, and they can be good or bad. But fear is stronger than tolerance, and it's much easier to just say AI is dangerous than to take the time to explore the research that is going on in this field. It's fascinating. I run dozens of computer simulations a day on spiking neuron network populations in order to simulate mammalian cortical processes, in an attempt to develop a tractable architecture we can exploit for advanced information processing capabilities. These networks I deal with are like little children: they're naughty sometimes, they don't cooperate, and they don't make any sense. But they're learning... They're coming along and growing more cooperative with some love and attention. That's the way I look at it. They'll become what we make them become. From there, sure, they may take on a mind of their own, but so what? That's what evolution is all about.
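For readers wondering what a "spiking neuron" simulation even looks like, here is a minimal sketch in Python of a single leaky integrate-and-fire neuron, the textbook entry point to spiking models. It is purely illustrative: the function name and every parameter value are my own choices for this sketch, not anything taken from the post.

```python
# Toy leaky integrate-and-fire (LIF) neuron, the simplest common
# spiking-neuron model. Illustrative only: these are not the poster's
# networks, and all parameter values here are arbitrary.

def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau=10.0, dt=1.0):
    """Simulate one LIF neuron; return (membrane trace, spike times)."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (dt / tau) * (-(v - v_rest) + i_in)
        if v >= v_thresh:      # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_reset        # ...and reset the membrane potential
        trace.append(v)
    return trace, spikes

# Constant drive for 100 time steps produces regular spiking.
trace, spikes = simulate_lif([20.0] * 100)
```

Under constant input the membrane potential charges toward threshold, fires, resets, and repeats, producing a regular spike train; populations of such units with learned connections between them are the kind of networks the post describes.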

You can't stop it. You can't stop what's coming...

Hahhaha
 
  • #56
DiracPool said:
The reason for the popular bad rap is simply that people don't understand it. And because they don't understand it, i.e., understand what's "under the hood" of AI architectures, they fear it. Basic fear of the unknown. As far as the average Joe or Joanne out there, a human-derived AI creation might as well be an alien from another galaxy. We know nothing about these aliens and they can be good or bad. But fear is stronger than tolerance, and it's much easier to just say AI is dangerous than to take the time to explore the research that is going on in this field.
This point of view speaks to me more. It makes sense coming from someone who is in the field.

What I don't understand is why people like Musk and Hawking speak of it with such fear. Aren't they in the field too (anyway, closer to it than I can be)? I'm curious to hear your thoughts about these people's statements.
 
  • #57
jack action said:
What I don't understand is why people like Musk and Hawking speak of it with such fear.

I would say this: people with more means than the average lay person to see both the possible positive and negative outcomes that the current level of AI research and use can lead to on a global scale will find it natural to point out that we currently are not able to discern the technological preconditions separating desirable scenarios from undesirable ones, and thus that we are unable to ensure we do not all end up in one of the really bad scenarios. I am not surprised that knowledgeable people like Musk and Hawking (and many others) consider it prudent to point this out.

Some people seem to evaluate risk in the context of AI (or even in general) by trying to guess or estimate the most likely scenarios, and if these are all desirable scenarios then they apparently find it unnecessary to analyse or even acknowledge the possibility of less likely ones. And in the context of AI they do this even when the actual probabilities are very hard to estimate correctly. To me, this is not prudent risk management.

Also, the use of the words "such fear" sounds to me like an attempt to portray Musk and Hawking's statements as the result of a phobia (irrational fear). However, if fear in this context is taken to mean a rational perception of danger, then I would consider it an appropriate label.
 
  • #58
It's because it is a huge game changer. For good or for bad.

For instance, say AI becomes so advanced that it can do most human tasks. This could lead to:
1) Massive unemployment/ social strife

Or

2) Utopia in which the machines/programs do and build everything for us and with the cost of items reducing down to cost of raw materials, no one would need to work anymore.

Or 1) followed by 2).

Or none of the above.

These are just some examples of what could happen. I'm sure there are more.

Another possibility is that the rules would change. Different societies in history have had different economic systems. The hunter-gatherers had a different system of trade and followed different economics. The current economic theories of society are based on the post-Industrial Revolution period. We can't assume that the post-AI-revolution period would follow similar economic laws, considering just how drastically significant an impact runaway AI development would have on human society.
 
  • #59
Here's a case where a little more AI wouldn't have hurt.
http://www.bbc.com/news/technology-40642968
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.

"Steps are our best defence against the Robopocalypse," commented Peter Singer - author of Wired for War, a book about military robotics.
 
  • #60
1oldman2 said:
Here's a case where a little more AI wouldn't have hurt.
http://www.bbc.com/news/technology-40642968
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.

"Steps are our best defence against the Robopocalypse," commented Peter Singer - author of Wired for War, a book about military robotics.

We were promised flying cars, but we got better than that. We got the internet.
 
  • #61
Decades ago, AI was grossly oversold, with everyone and his brother jumping on the bandwagon. Many people had too much "artificial" and not enough "intelligence". Progress was slow and idealistic theories ran into computer limitations.

That being said, I think that the current state of AI is very impressive (even if it is still being oversold). Significant capabilities like automated cars, license plate scanners, facial recognition, etc., etc., etc., are becoming practical.
 
  • #62
FallenApple said:
For instance, say AI becomes so advanced that it can do most human tasks. This could lead to:
1) Massive unemployment/ social strife

Or

2) Utopia in which the machines/programs do and build everything for us and with the cost of items reducing down to cost of raw materials, no one would need to work anymore.

Or 1) followed by 2).
This argument is along the line of one of Musk's arguments found in the article of post #44:
Musk said job disruption will be massive when A.I.-powered machines reach their potential, joking “I’m not exactly sure what to do about this,” before adding, “This is really like the scariest problem to me.” He noted the transportation job sector — which accounted for 4.6 million jobs in 2014 — “will be one of the first things to go fully autonomous.”

“The robots will do everything, bar nothing,” he said dryly.
I'm no expert on AI, so I cannot bring arguments about the potential of AI. But with this kind of argument, I have enough knowledge to know it is a logical fallacy. It is an argument along the line of «buggy whips will become obsolete when people start to travel in cars rather than in horse-drawn buggies, therefore one should be careful before encouraging the car industry.»

I repeat here what Musk said:
“This is really like the scariest problem to me.”
Really? Why doesn't Musk see that in a positive way instead? Like those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives. My grandfather quit school at age 12, and he cut down trees with axes and saws for a job. Would it have been a good idea not to invent the chainsaw and the even more productive logging equipment we have today, just so that I could have had a job in that industry when I was 14 years old? I don't know if it is better, but I instead went to university and earned a bachelor's degree in mechanical engineering. One thing is for sure: I don't think my life is worse than the life of my grandfather.

Thinking that robots doing the jobs we do today means humans will cease to work is just plain wrong, because the past is an indication of what the future holds. And people never stop being curious and embarking on new adventures.

When someone serves me this kind of argument, it doesn't help me trust the validity of his other arguments, which only promote fear, often along the line of the end of the world (there are lots of past examples related to this as well that suggest those fears are unreasonable).
 
Last edited:
  • #63
UsableThought said:
jack action said:
Like those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives.
What alternate universe is this going to happen in?
Sorry, I don't think I get your point. I just explained how my grandfather couldn't study past age 12 because he had to work. Years later, that wasn't my case. There are billions of people who can still follow that path and even go further.
UsableThought said:
And what world is it that has no possibility whatsoever of coming to an end because it never has ended before?
It is not about having no possibilities, it is about probabilities. And either you see the things you're doing in a positive way or, if you don't, you stop doing them and do something else.

I find it weird that someone like Musk invests in AI and then spreads fear about how it will destroy us. If you don't believe in it, stop doing it.

He wants laws. What laws? What laws could have been declared in 1900 about the car society we live in today? How can you foresee the future of something you don't even master yet? Do we really think some evil genius will try to use the technology to destroy the world? Or is it more probable that people will adjust as they go along, for the greater good, like they always have in the past? Unstoppable technology that sneaks up on us without our ever noticing it? I highly doubt that.

Another thing I don't understand is the fear that a being smarter than us (whatever that means) will try to get rid of us because we're less smart. Not only do humans not try to get rid of other life forms judged less smart, but as our knowledge grows, we recognize and value more and more the diversity of life in any shape or form. We spend an insane amount of energy trying to rescue and maintain other life forms, a very unusual trait among animals or plants. Why would a smarter being depart from that trajectory and go back to a caveman mentality?
 
  • #64
jack action said:
I found weird someone like Musk who invests in AI and then spread fear about how it will destroy us. If you don't believe in it, stop doing it.
[rantmodeinitiated]
I just find it annoying that a billionaire has such a poor grasp of economics that he falls for a bumper-sticker style economic myth!

Here's how it works (and clearly, he and other adherents put no thought toward what happens after Step 1):
Step 1: New machine leaves millions unemployed.
Step 2: Large pool of unemployed workers reduces the cost of labor.
Step 3: Other industries hire more people because now it is cheaper to employ them. Or:
Step 3a: Unemployed people acquire new skills and get different new jobs.

Now, I'm not saying this process is pain-free - it can be extremely painful, especially for the individuals - but over the very long term, the market adapts and adjusts and unemployment rates remain remarkably consistent.

[/endrant]
 
  • #65
jack action said:
What I don't understand is why people like Musk and Hawking speak of it with such fear. Aren't they in the field too (Anyway closer than I can be)? I'm curious to hear your thoughts about the speeches of these people.

That is a good point. But I'll refer to my earlier post when I say that people who don't know what's "under the hood" of how AI architectures are constructed are really just talking through their hats. I like Elon Musk; he's an inspiration to me, and I intend to do everything in my power to help him realize his dream of colonizing Mars. Stephen Hawking is a legend. But, iconic scientists as they are, I'm sure neither of them has much experience working with neural network architectures, so how can we look to them for guidance or sanity?

The bottom line is that no scientific quest is going to outdo the Manhattan Project. You want to talk about a project that was going to manifest itself no matter what? That was it. And it ain't ancient history; it really is our biggest threat, capable of bringing down the temple of progress humans have built over the past 5,000 years.

The threat of nuclear war and climate change are by far the most immediate threats to our existence. In that sense, I ally with Noam Chomsky who has been pushing this for years.

As far as the robots go, sure, they can go awry, but again, as I said in the earlier post, it's up to how we design them.

jack action said:
Another thing I don't understand is that fear that a being smarter than us (whatever that means) will try to get rid of us because we're less smart. Not only humans don't try to get rid of other life forms judged less smart, but as our knowledge grows, we recognize and value more and more the diversity of life in any shape or form. We spend an insane amount of energy trying to rescue and maintain other life forms, a very unusual trait among any other animals or plants. Why would a smarter being part ways from that tangent to go back to a cavemen mentality?

So this is a good sentiment, but the fact of the matter is that high intelligence does not equate with high altruism or morality. Yes, I love the apes and even the monkeys and want to preserve them, but that's because of a sentimental and altruistic sense that was burned into my brain for natural selection purposes a long time ago. And that's all good with me. But for the robots, we cannot assume anything; we need to make things explicit.

How? Well, if they were biological creatures, we could just do something with the genetics and make them dependent on dietary lysine, like in the Jurassic Park movie.

Thank god for what we AI researchers are doing, though: LIFE WILL NOT FIND A WAY. Why? Because we are not dealing with life; we are dealing with in silico preparations, not in vivo preparations. Therefore, we can control the parameters to make sure nothing goes wrong...
 
  • #67
DiracPool said:
Thank god for what we AI researchers are doing, though, LIFE WILL NOT FIND A WAY. Why? Becuase we are not dealing with life, we are dealing with in silico preparations, not in vivo preparations. Therefore, we can control the parameters to make sure nothing goes wrong...

I am not sure why you think that non-living things will always be controllable, and I think you also miss that we humans are part of this, driving the technology forward while having to agree on how to use it.

To be in control of a socio-technical system that uses a technology such as AI means that we at all times effectively can and will control the design and operation of the underlying systems, so that we steer towards desirable outcomes and stay away from bad ones. This implies that several conditions must all be established:
  1. We must be sufficiently able to predict the set of possible outcomes and their desirability ahead in time.
  2. We must have sufficient consensus on what is considered desirable and what is considered undesirable.
  3. There must exist a set of parameters that will allow us to steer our systems toward what we desire and away from what is not desired.
  4. We must have the collective will to actually apply this control.
  5. These conditions must all be established at all times.
Each of these conditions has obvious failure modes that could mean loss of control at the wrong time. Note that conditions like number 4 depend heavily on "human nature" in a competitive environment and less on the technology itself. Of course, occasional loss of control does not imply we will get an undesirable outcome, but currently we are heading towards a situation where pretty much none of the conditions are established at any time, hence we have no idea what level of control we actually have.

The above can be said to have been true for pretty much any technology we have developed so far, and yet we seem to be overall content with the outcome, so why would this be a problem for (general) AI technology? Currently there is heavy research in making systems self-learning, autonomous, adaptive and distributed to a higher degree (not necessarily all at once), and each of these capabilities will (everything else being equal) weaken the conditions mentioned above far more than any of the previous "passive" technologies we have developed. For instance, to me it seems we already have serious issues with the predictability of interconnected systems due to their complexity alone, and instead of simplifying, and thus gaining a higher degree of predictability, we just add more "AI complexity" to the systems. If we look at the "traditional" AI singularity problem, then this is also a problem of loss of predictability, both from increased complexity and from a decrease in prediction time.
 
  • #68
DiracPool said:
So this is a good sentiment, but the fact of the matter is that high intelligence does not equate with high altruism or morality.
I did not mean it as altruism or morality. For my part anyway, I understand that I cannot live without other forms of life. We need fish, weeds, reptiles, insects and bacteria. Most reptiles and insects don't inspire a good sentiment of altruism in me! More often, I have to fight a sentiment of disgust, restraining myself from getting rid of them all! I think that it is the result of my intelligence connecting the dots between the survival of other species and my own survival.
Rubidium_71 said:
I really like the main point of that general, though:
I don't think it's reasonable for us to put robots in charge of whether or not we take a human life...[America should continue to] take our values to war.
It's not really a question of fearing a robot uprising, but questioning our value system from an ethical point of view.

It is along the line of «Who is responsible in a car accident with driverless cars?» The car passenger, the car owner or the car manufacturer? That is not an easy question to answer, and a more serious problem to solve in the short term than AI taking over the world.
Filip Larsen said:
Currently there is heavy research in making systems self-learning, autonomous, adaptive and distributed to a higher degree (not necessary all at once) and each of these capabilities will (everything else being equal) weaken the conditions mentioned above far more than any of the previous "passive" technologies we have developed.
But the question is always how do you prepare for the unknown? So we need control. Control over what? How?

In 1900, when they began dreaming about a car society, how could they have foreseen the problems the internal combustion engine would cause? How could they prepare for that? Talking about pollution? The pollution of the time was horse manure. That inoffensive gas going out the tailpipe was a god-given gift! Even if you could go back in time and tell those people, «You know what? You should focus more on the electric car,» that is not even a guarantee that everything would be better today, because we don't know the outcome of a world filled with electric cars. The reality is that it is our problem now, not the problem of people who died decades ago. And there will always be problems to solve.

The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

One should always advance with care into the unknown, that is good common sense.
 
Last edited by a moderator:
  • #69
The greatest concern is the development of an "artificial general intelligence". Do we need to be gods and create something in our own image? Wouldn't "smart" stuff be good enough? That is, make tools that we control, not competitors that we might not be able to.

Musk is as much an AI supporter as anybody. His "prediction" of our demise is a possibility if we are not careful in its implementation. He is a signer of the "Asilomar AI Principles", along with about 3,500 others, including 1,200 AI/robotics researchers. These principles were drafted by the AI community to help ensure that AI will be a benefit to mankind.

His particular message at the governors' conference was to warn of the uncontrolled development and implementation of AI: market-driven forces might try to exploit this technology for economic advantage with little concern for possible unintended consequences.
jack action said:
In 1900 when they begun dreaming about a car society, how could they have foreseen the problem the internal combustion engine would cause? How could they prepare for that? Talking about pollution? The pollution of the time was horse manure. That inoffensive gas going out the tailpipe was a god given gift! Even if you could go back in time and tell those people: «You know what? You should focus more on the electric car.» That is not even a guarantee that everything would be better today because we don't know what is the outcome of a world filled with electric cars. The reality is that it is our problem now, not the problem of people who died decades ago. And there will always be problems to solve.

The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

If we build it correctly, everything will be fine. Right. Man is not perfect, and neither is his technology. He tends to mind his pocketbook more than his future.

Today we are all connected through the internet and are becoming more dependent on it economically. In the past month we have seen a worldwide virus attack. If one can take down the internet for a substantial time, what will be the result?

Man is his own worst enemy. Everything we do has a downside. Today the medical community (do no harm, right?) is responsible for a growing opioid epidemic and for a growing menagerie of superbugs. Technology kills or maims millions each year. Has man developed anything that did not have some unforeseen consequences? Is he learning anything from his past experiences? Will he ever? Shouldn't we be more cautious with the more complex technologies coming down the pike? You would think so.
 
  • #70
@Filip Larsen and @gleem have posted what I consider informed comments - that is, comments that are not merely opinion (although they include opinion) but in addition either list or else point to actual knowledge related to responsibly overseeing the development of AI - and not just at the basic level of coding, either. For example, going to the website gleem references for the Future of Life Institute and the 2017 Asilomar conference, we find a list of principles agreed on at that conference. An interesting excerpt from that list:

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?
Look at number 2 above more closely - the last phrase in the sentence before we get to the bullet list of concerns. Read that phrase again:

thorny questions in computer science, economics, law, ethics, and social studies

And now look at the list of speakers from the conference page: https://futureoflife.org/bai-2017/

Most of the names don't mean anything to me. Why? Because I know nothing about AI. I could make a whole bunch of claims about AI and its perils, or lack thereof, based on my personal ideological spin to do with politics, economics, and social issues - but that wouldn't change the fact that I'm not a scholar when it comes to politics, economics, social issues, or AI. I'm not even a well-read amateur. I know nothing. Whereas the people listed in the conference, who developed this list of principles? Some of them look to be Hollywood types, brought on board for persuasion purposes perhaps; but others look to have solid scientific AI credentials.

In other words, they are experts. They might have a clue. Possibly you might recognize a name here & there and be able to dismiss that person for some reason or other - but can you dismiss all of these names?

To close, I really wish the General Discussion Forum held comments to the same standards as Quantum Physics, Classical Physics, etc. Because it doesn't, we end up with some very bright people making sweeping claims about issues they really know very little about. Look a little further and you can see that because PF is really set up only for the hard sciences, it doesn't know how to properly handle the "soft" sciences of economics, law, ethics, and social studies - all of which the folks at the Asilomar conference seem to think are important. Basically, PF unintentionally disses the soft sciences by not requiring the same level of sourcing required in the hard sciences. It's a shame.
 
  • #71
gleem said:
Musk is as much an AI supporter as anybody. His "predictions" of our demise are a possibility if we are not careful in its implementation. He is a signer of the "Asilomar AI Principles" along with about 3500 others, including 1200 AI/robotics researchers. These principles have been drafted by the AI community to help assure that AI will be a benefit to mankind.

His particular message at the governors' conference was to warn of the uncontrolled development and implementation of AI by market-driven forces that might try to exploit this technology for economic advantage with little concern for possible unintended consequences.
Who wouldn't agree to these principles? They can be applied to basically any industry. Here's how few modifications are needed (underlined and struck through) to relate these principles to the automobile industry:
Research Issues

1) Research Goal: The goal of automotive research should be to create not undirected transportation, but beneficial transportation.
2) Research Funding: Investments in automotive transportation should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future automotive systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with automotive transportation, and to manage the risks associated with automotive transportation?
  • What set of values should automotive transportation be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between automotive researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of automotive transportation.

5) Race Avoidance: Teams developing automotive systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: Automotive systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an automotive system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced automotive systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Automotive systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: Automotive systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given automotive systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of automotive technology to personal transportation must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: Automotive technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by automotive transportation should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to automotive systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of automotive systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) Automotive Arms Race: An arms race in automotive weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future automotive capabilities.
20) Importance: Automotive transportation could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by automotive systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: Automotive systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Automotive transportation should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Although these weren't written in the 1900s, most of them have been followed by most of the industry and lawmakers since that time. That is because they are common sense. But there is also always a dictator somewhere who won't follow these guidelines, and you can't do anything about it except wait for his government to fall. Although there seem to be fewer and fewer of those governments as technology evolves.

You could apply this to basically any technology we have developed over the years. AI researchers are not the only ones with a conscience.
 
  • Like
Likes russ_watters
  • #72
jack action said:
You could apply this to basically any technology we have developed over the years. AI researchers are not the only ones with a conscience.

Let me quote you, Jack:
jack action said:
The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

Basically, you were saying to the person you were responding to, "So don't even try." Even though he was saying things that line up quite well with the conference principles you now say you agree with. Perhaps you are not aware of how your comments come across when you are dismissive without demonstrating evidence or knowledge to justify that attitude?

To be blunt: In none of your sweeping claims have you cited any evidence that would be of an acceptable form for any of the forums on PF where evidence is called for. We all have our personal beliefs and personal philosophies about how the world works, mostly inherited from families, friends, and both regional culture & mass culture; I just wish you would understand that "n of 1" beliefs and philosophizing don't substitute for expert knowledge. This is part of why PF exists. You should do some reading and studying on issues like this if they really interest you.

Me, I would much rather this thread had been shorter, but made up mostly of comments from people who actually do know something about the debate over AI and could inform the rest of us. At least we have a few of those.
 
Last edited:
  • #73
@UsableThought :

Don't forget that this discussion started with the article cited in post #44, where Musk said:
“we should be really concerned about A.I.,” and that the emerging technology presents a rare case where government should be proactive in regulation instead of reactive to industry.
He's asking for government regulations. Having principles is one thing, forcing them via a government is another thing.

Here's one of the principles that makes me laugh:
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Here's the basic principle (it applies to anything we do or could ever possibly think of doing):

«An arms race in any type of weapons should be avoided.»

That is the basic statement. It has nothing to do with AI at all. Are AI researchers OK with a nuclear arms race? Or maybe they just draw the line at machine guns? Of course they're not OK with an arms race of any kind. Nobody in his right mind is.

What is the difference between principles and regulations? The best example of regulations gone wrong is the Cookie Law. https://silktide.com/the-stupid-cookie-law-is-dead-at-last/ shows really well the panic and ignorance of lawmakers in that case. But we're stuck with those annoying and useless pop-ups, wasting bandwidth that the industry is trying so hard to save. And once a law is put in place, it is really hard to remove.

I agree with you about the ignorance of people about AI (me included). But that also includes lawmakers. And in my experience, they tend to make laws that set questionable goals, protect the wrong things, and waste valuable resources - especially when the laws are created following a panic similar to the one about AI that we are discussing right now (whether justified or not). So it's not my knowledge about AI that qualifies me for an opinion on the subject, it's my knowledge about lawmakers.

So before someone tells me that I don't know anything about, say, AI and that this disqualifies me from having an opinion about the laws that will govern me, my only answer will be «Enlighten me». Up until now, there is not much evidence shown to me to justify entering panic mode other than:
“On the artificial intelligence front, I have exposure to the very, most cutting-edge A.I., and I think people should be really concerned about it,” Musk told Sandoval.
If Elon Musk wants me to support new regulations of any kind, he must show evidence. «People should be really concerned about it» is not evidence.
 
  • Like
Likes russ_watters
  • #74
First, let's just take a deep breath.

jack action said:
If Elon Musk wants me to support new regulations of any kind, he must show evidence. «People should be really concerned about it» is not evidence

That might be a problem, for by the time you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.

I hold that Musk's opinion is of significant value, as a respected citizen and implementer of AI technology. The question of over-regulation is very debatable considering the consequence of failure to prevent a catastrophic disaster. You hear it all the time: "Why didn't someone do something to prevent this...?" No one wants regulations until they need them.

While the Asilomar principles can be applied to any number of technologies - and maybe they should be - they at least give a basis on which many experts agree for how to proceed in a safe and responsible manner. As Musk warns, you don't just want to turn this technology loose in the free market and say may the most intelligent system win, without some adherence to reasonable guidelines or, if necessary, regulations. Does anyone doubt the value of regulation of the food processing industry, or of manufacturing working conditions, or of medical devices and drugs?
 
  • #75
Even the wildly optimistic "Technological Singularity" predicted for 2045 can be viewed with alarm if you're so inclined. The dominant intelligence might not be human.

In terms of existential threat, I look at it this way. The following is a seriously flawed analogy, but I have no better analogy because we have never faced this situation before.

What would happen if we learned that elephants had developed nuclear weapon capability? (Remember, flawed analogy.) Some people would rush to remind us how we hunted elephants for sport and ivory. They would have ample motive to nuke us for revenge. That possible threat would be intolerable, so we would exterminate the elephants immediately. After all, they have no human rights to make us hesitate. In human law, cockroaches and elephants stand equal.

My view is that a dominant species (if species is the correct word) can never tolerate a secondary species with lethal military capability. That is why evolution of superior intelligence is an existential threat to humanity.

But I am explaining the logic of the doom sayers. I don't subscribe to those fears myself.
 
  • Like
Likes russ_watters
  • #76
Rubidium_71 said:
Because of things like this:
Yeah, I agree.... :oldeyes:

Blocking ads.JPG
 
  • Like
Likes Rubidium_71
  • #77
The problem with AI is we don't know whether intelligent machines can, or will, edit themselves. We won't know their capabilities until we create one, and just one AI can do a lot of interesting things. I'm not sure whether the movie Transcendence is scientific, but the power gained by the AI in it is unparalleled. Humans fear the future for what it holds. But then again, why would we need AI to begin with?
 
  • #78
I know it is hard to see the dire consequences of AI gone amok, but so many experts feel it is very possible that, as with climate change, you might want to hedge your bets and go with the expert opinion even though it might be a little inconvenient. While I do not think AI is the most immediate threat to our civilization, it would be sad if we survived nukes, climate change, and pestilence only to go gently into the night.

I'd like to point out that Musk is really heavy into AI, and not just with his cars. He owns two AI enterprises: Neuralink, to develop an interface between our brain and the internet, and OpenAI, a research company to develop safe AI.

Let us hope that those from the likes of IBM, Facebook, Apple and Google who signed the Asilomar agreement are putting their money where their mouths are.
 
  • #79
jack action said:
This argument is along the line of one of Musk's arguments found in the article of post #44:

I'm no expert on AI, so I cannot bring arguments about the potential of AI. But with this kind of argument, I have enough knowledge to know it is a logical fallacy. It is an argument along the lines of «buggy whips will become obsolete when people start to travel in cars rather than in horse-drawn buggies, therefore one should be careful before encouraging the car industry.»

I repeat here what Musk said:

Really? Why doesn't Musk see that in a positive way instead? Like: those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives. My grandfather quit school at age 12 and cut down trees with axes and saws for a living. Would it have been a good idea not to invent the chainsaw and the even more productive logging equipment that we have today, such that I could have had a job in the industry when I was 14 years old? I don't know if it is better, but I instead went to university and earned a baccalaureate in Mech. Eng. One thing is for sure: I don't think my life is worse than the life of my grandfather.

Thinking that robots doing the jobs we do today means that humans will cease to work is just plain wrong, because the past is an indication of what the future holds. And people never stop being curious and embarking on new adventures.

When one serves me this kind of argument, it doesn't help me trust the validity of his other arguments, which only promote fear, often along the lines of the end of the world (there are lots of past examples related to this as well that prove those fears unreasonable).

I haven't scrutinized what Musk said, so I can't really comment on that. But I think it would be a little different than buggy whips. That's just a small section of society. When we are talking about a huge portion of society being put out of jobs at a rate fast enough to make it difficult to adjust, we can't predict what would happen. The difference is that those people making buggy whips could simply move on to other available jobs. Today, taxi drivers could just become Uber drivers or some other kind of driver. But if general artificial intelligence takes over most tasks, then it would be difficult to find a new job. Clearly it depends on the rate at which AI progresses. If AI progresses fast, but not so fast that governments can't keep up, some things can be done to safeguard people. But if there's a runaway effect where suddenly machines can do most tasks that humans can do (think a period of only a few years after a breakthrough in AI theory), then yes, you can surmise that there might be some problems.

Past examples don't work because they are too self-contained and too different in scope and character to provide any meaningful extrapolation in the case of AI. I'm not against AI. I think it would help fix many problems over time. However, there are some risks, and valid concern about those risks is all that I am expressing.
 
  • #80
anorlunda said:
My view is that a dominant species (if species is the correct word) can never tolerate a secondary species with lethal military capability. That is why evolution of superior intelligence is an existential threat to humanity.
But the human species is made up of a multitude of races (if race is the correct word), some with lethal military capability, and we do tolerate each other. I mean, there are wars, but we don't see the extermination of one race by another.

I know it's not your point of view, but I really don't see the evidence that shows that life is a contest where only one winner will emerge.
gleem said:
For by the time you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.
That is true of so many problems. The best example is: Should we be prepared for a large asteroid colliding with Earth? If so, how large and fast of an asteroid should we be prepared for? No matter what, there is also the possibility that it will be so large that there is nothing we can do about it (say, as big as the planet itself).

Again, it's not about denying the possibility that something can happen; it is about estimating the probability of its occurrence. And to calculate probability, we need data. Without data, there is no valid way other than to see what happens as we go along, no matter how scary that can be.
FallenApple said:
But if general artificial intelligence takes over most tasks, then it would be difficult to find a new job.
But if so many people don't have jobs - and thus can't spend money - who will finance those companies manufacturing products and offering services with AI? It all works together; you cannot separate supply and demand.
 
  • #81
jack action said:
I know it's not your point of view, but I really don't see the evidence that shows that life is a contest where only one winner will emerge.

Of course there's no evidence. It's unprecedented.

But we're talking about human fear. A phobia. Fear is real, even if irrational. The nocebo effect applies to both individuals and groups. People have the right to take protective action based on fear alone. I don't think they should, but I recognize the right.
 
  • #82
anorlunda said:
Of course there's no evidence. It's unprecedented.

But we're talking about human fear. A phobia. Fear is real, even if irrational. The nocebo effect applies to both individuals and groups. People have the right to take protective action based on fear alone. I don't think they should, but I recognize the right.
I also recognize that right. But when someone wants to force me to do something because of his fear, that is where I draw the line.
 
  • Like
Likes russ_watters
  • #83
Again, it's not about denying the possibility that something can happen; it is about estimating the probability of its occurrence. And to calculate probability, we need data. Without data, there is no valid way other than to see what happens as we go along, no matter how scary that can be.

You are right: without data, we can't predict. So we have no idea what could happen with AI, at least empirically. The thing is, for circumstances like taxi drivers and telemarketers losing jobs, we have historical data, because jobs were replaced before due to situations that were comparable in scope. For example, what is similar between buggy drivers losing their jobs and taxi drivers losing theirs is that it's just one industry being disrupted by a particular innovation. When we have displacement in many industries across the board, simultaneously, we don't know what could happen, because this has never happened before. So the vast uncertainty in a potential situation like that is what is concerning.

But if so many people don't have jobs - and thus can't spend money - who will finance those companies manufacturing products and offering services with AI? It all works together; you cannot separate supply and demand.

I suspect that the job losses would be in jobs that do not require very high levels of analytical/abstract/creative thinking. So those who can do that kind of thinking would have all the money, presumably. However, in a situation like that, there would probably be a need for some redistribution of wealth to preserve society.
 
Last edited:
  • #84
gleem said:
That might be a problem, for by the time you identify the evidence, it might be too late. Once we are sure of impending climate change, it might be too late to stop it.

I hold that Musk's opinion is of significant value, as a respected citizen and implementer of AI technology. The question of over-regulation is very debatable considering the consequence of failure to prevent a catastrophic disaster. You hear it all the time: "Why didn't someone do something to prevent this...?" No one wants regulations until they need them.
You can add me to the list of people who say that's not good enough. Fortunately for me, for now it isn't even specific enough to be an issue, since as far as I know no one is proposing any *actual* legislation. The only thing actually being proposed is fear. I choose to vote no on that too.
 
Last edited:
  • Like
Likes anorlunda
  • #85
FallenApple said:
I haven't scrutinized what Musk said, so I can't really comment on that. But I think it would be a little different than buggy whips. That's just a small section of society. When we are talking about a huge portion of society being put out of jobs at a rate fast enough to make it difficult to adjust, we can't predict what would happen.
How big? How fast? Faster than Netflix killed Blockbuster? Faster than Amazon is killing brick-and-mortar retail? Faster than overseas competition and automation (the subject of the thread!) have hurt the manufacturing and steel industries? "It will be worse" is easy and cheap to say -- and just as meaningless.
 
  • Like
Likes 256bits
  • #86
UsableThought said:
To be blunt: In none of your sweeping claims have you cited any evidence that would be of an acceptable form for any of the forums on PF where evidence is called for.
Nor has anyone on the "pro" side, where it is really most required. The fears - much less what to do about them - are far too vague.
 
  • Like
Likes anorlunda
  • #87
russ_watters said:
Nor has anyone on the "pro" side, where it is really most required. The fears - much less what to do about them - are far too vague.

Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, anymore than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.
 
Last edited:
  • #88
UsableThought said:
Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, anymore than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.
That's all fine. Making snap "pro" judgements on those empty scare headlines and then burden of proof shifting was what I was objecting to.
 
  • Like
Likes UsableThought
  • #89
UsableThought said:
Which is the whole point of actually reading up on the field & what the experts inside that field are actually talking about. Scare headlines about Musk etc. aren't sufficient to educate ourselves, anymore than for any other complex economic, political, social or technological issue.

Most of us simply don't have the time to read up on all possible fields and issues; but we still want to hold opinions so we can argue them. This is fine so long as we hold such opinions lightly.

Sorry, but being an expert who creates software has no relation to being qualified to predict social impacts. To use a pop-icon analogy: a Sheldon Cooper type could well be an AI insider expert. Would you ask Sheldon to design our society?

If you want to extrapolate beyond the immediate future, I would look to SF before trusting techie AI programmers. If you want a vibrant society, it is better to have opinionated, involved citizens than passive, apathetic ones.
 
  • Like
Likes russ_watters
  • #90
anorlunda said:
Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.

You are jumping to a completely mistaken conclusion about who exactly I consider the required experts. I suggest you re-read my earlier comments on the same theme.
 
Last edited:
  • #91
Every time I dip my toes in a PF General Discussions thread, I wind up regretting it.
 
  • Like
Likes gleem and russ_watters
  • #92
https://www.bls.gov/opub/mlr/2016/a...disrupt-occupations-but-it-wont-kill-jobs.htm
But here’s where the Monthly Labor Review (MLR) can provide some needed historical perspective. In the 50th anniversary issue in 1965, Lloyd Ulman included this quote from a former Secretary of Labor: “In the long run, new types of industries have always absorbed the workers displaced by machinery, but of late, we have been developing new machinery at a faster rate than we have been developing new industries…At the same time we must ask ourselves, is automatic machinery, driven by limitless power, going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress?” This was from the Secretary of Labor in 1927, “Puddler” Jim J. Davis.

In short, there has long been a national worry about technology-driven unemployment and that fear seems to spike either in recessions or in periods of particularly robust innovation. But to date that fear has always been misplaced. And despite the claims of current techno-dystopians, these claims will be wrong going forward. We can rest easy that in 25 years the unemployment and labor force participation rates will be similar to today’s.[emphasis mine]
The bolded part is the flaw - the self-contradiction - in the pessimistic view; the "new machinery" is the new industry.
 
  • Like
Likes jack action
  • #93
I am often not happy with the way these discussions go either. But I hope this thread can remain unlocked so that an exchange of views and ideas may continue.

russ_watters said:
The only thing actually being proposed is fear.

Referring to the Asilomar Principles?

anorlunda said:
Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.

Have you read and evaluated the list of the signers of the Asilomar Principles, and in particular the researchers? One thing about controversial issues is that the discussions can send one deeper into the issues than one otherwise might have gone.

Even now, AI-related problems with significant social/legal implications are being identified: algorithmic bias.

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
 
  • #94
gleem said:
russ_watters said:
The only thing actually being proposed is fear.
Referring to the Asilomar Principles?
No, it's related to predictions of what AI will be able to do and how much importance it will take on in people's lives, even though all of that is speculation at this point. The fear comes from the fact that it seems only negative outcomes (going as far as the destruction of the civilized world as we know it) are possible.

The problem is that we have so little knowledge about this AI future, that no one can either prove or disprove that these negative outcomes can actually happen.
gleem said:
Even now AI related problems are being identified with significant social/legal implications. Algorithmic Bias.

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
Although it is an important problem, I don't think it is related to AI specifically. You can create a simple model with algorithmic bias and solve it with a pencil and a piece of paper. It is more linked to people's ignorance about the limits of mathematical models.
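As an aside, the point that algorithmic bias needs no sophisticated AI can be made with a toy example (the loan data and group names below are invented purely for illustration - this is a sketch, not anything from the article):

```python
# A "model" that merely learns approval rates from historical data will
# reproduce whatever bias that data contains - computable by hand.

# Hypothetical loan decisions, biased against group "B"
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(data):
    """'Learn' the historical approval rate per group."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [approved for g, approved in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(model, group, threshold=0.5):
    """Approve whenever the group's historical rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(predict(model, "A"))  # True: group A keeps getting approved
print(predict(model, "B"))  # False: the historical bias is now policy
```

The whole model is two averages, yet it turns past bias into future policy; nothing about this requires machine learning, let alone general AI.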

I don't think it should be discussed in this thread (But if you start another one, I might jump in :wink::smile:).
 
  • Like
Likes russ_watters
  • #95
jack action said:
The problem is that we have so little knowledge about this AI future, that no one can either prove or disprove that these negative outcomes can actually happen.

I think everybody agrees; in fact, there are some who think true "general AI" is not even possible. The signers of the Asilomar Principles seem to be totally in favor of the development of AI and are implementing it as fast as they can, with prudence. The history of automation has shown that it has been by and large beneficial and that the "Luddite" type fears have been unfounded. But many reasonable people think that AI is a game changer of extreme importance, not to be ignored. Nonetheless, there is extreme optimism among AI researchers about its beneficial value, while realizing that this does not come without some negative consequences, which should be addressed early on.

jack action said:
Although an important problem, I don't think this is related to AI specifically.

I agree, and I almost did not include it, but I think it demonstrates a lack of concern or attention for some values that might find their way into AI systems, intentionally or not, which many AI researchers seem to see as a distinct problem.
 
  • #96
gleem said:
Referring to the Asilomar Principles?
No, those are just generic guiding principles. They have little relevance to the subject of the thread, which is doom and gloom predictions and legislation to prevent them. I took engineering ethics in college - that isn't what this is about.
Have you read and evaluated the list of the signers of the Asilomar Principles and in particular the researcher?
No, why should I do that?

Edit: I'm not being coy or flip here: you're putting the cart before the horse. If I don't think the list of principles has any relevance, then the signatories don't matter.
 
Last edited:
  • #97
The doom and gloom is not from the signers of the Asilomar conference; it is from the popular media. To answer the OP: the reason for this outlook is simple. All news about death, destruction, crime, sex, violence, apocalypse scenarios, etc. sells. Amen. Anything to which any of those items can be attached is fair game: radiation, genetic engineering, weather, the stock market, war ...

Good, fun AI?

1oldman2 said:
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter

There are currently companies beginning to produce various types of flying cars, and AI will be vital in the implementation of this mode of transportation, especially in metropolitan areas.

And then there are AI companions: http://www.newsweek.com/ai-ageing-companion-alexa-old-people-elliq-artificial-intelligence-541481
You don't need an imaginary friend anymore, or even a dog or cat.

And what about those cleaning robots? All without negative sociological or economic effects. (I think.)
 
  • #98
gleem said:
And what about those cleaning robots? All without negative sociological or economic effects. (I think.)

Whoops. I just recently heard that iRobot's cleaning robot, the Roomba, has been doing more than cleaning floors. Apparently it has also been mapping the house as it cleans. iRobot was intending to sell this info to whoever wanted it. They said, of course, that they would not have released the info without the homeowner's consent.

Of course we already know Alexa and Siri are collecting data.
 