Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: Ai
Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #151
Algr said:
Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
The final decision on how the AI works doesn't come from the board, but from the programmers who actually receive orders from them. If they get frustrated and decide that they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know, the board is bankrupt and facing investigation while the AI is "owned" by a shell company that no one was supposed to know about. By the time the idealism of the rebel programmers collapses to the usual greed, the AI will be influencing them.

Different scenarios would yield different AIs, all with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
AI can basically be something with any kind of behavior and intelligence you could imagine. It's just that the AI we know how to make is limited. But the critical thing about AI is that it doesn't do what it has been programmed to do, it does what it has learned to do. We can only control that by determining what experiences we let it have, and what rewards and punishments we give it (which is limited because we are not very sophisticated when it comes to encoding complex examples of that in suitable mathematical form, or understanding what the results will be in non-trivial cases).

You can't just reprogram it, or give it specific instructions, or persuade it of something. It isn't necessarily possible even to communicate with it in a non-superficial way. You would probably have better luck explaining or lecturing to a whale in hopes of influencing it than you would any artificial neural network invented by people.
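For a concrete (if toy) picture of what "we only control the experiences and rewards, not the behaviour" means, here is a minimal sketch assuming a tabular Q-learning agent in a five-cell corridor; the environment, reward values, and hyperparameters are all invented for illustration. The programmer never writes the policy, only the reward function, and the behaviour is whatever the update rule converges to:

import random

# Toy 1-D world: the agent starts in cell 0 and the "goal" is cell 4.
# We never code the behaviour itself; the only thing we author is the reward.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

def reward(state):
    return 1.0 if state == GOAL else 0.0  # the sole lever we control

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy choice between exploring and exploiting what was learned
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Q-learning update: behaviour emerges from the reward signal, not from rules
        q[(s, a)] += alpha * (reward(s_next)
                              + gamma * max(q[(s_next, x)] for x in ACTIONS)
                              - q[(s, a)])
        s = s_next

# The learned policy: which direction the agent prefers in each cell
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})

In this toy, reward() is the only handle we have on what gets learned; change it and the resulting policy changes, which is essentially the point about experiences, rewards and punishments above.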
 
  • #152
sbrothy said:
Who's to say? If the AI in question is smart enough to realize that without energy, oblivion awaits, then all bets are off.
While surfing the net aimlessly (and reading about STEM education in US public schools, even though I am not an American, so I must be really bored) I came across DALL-E. More funny than threatening. I'll just leave it here.
 
  • #155
DaveC426913 said:
Summarize? Teaser?
:doh:
The Nature article has some great ideas, if they can be realistically put into practice. Basically, having Sociologists involved at the ground level of development.

The Science article is a revealing piece on how quickly progress is being made in learning and mastering new testing methods. Very impressive at this point.
 
  • #157
sbrothy said:
how insurance companies use AI
It's all about bottom line $ for them.
 
  • #158
sbrothy said:
but already I'm a little disturbed thinking about how insurance companies use AI
I've done work with insurance companies recently, @sbrothy, and they routinely raise Lemonade as the AI disruptor within their industry. However, as this Nasdaq analysis from last month shows, it is not all rainbows and unicorns with regard to their P&L, highlighting how difficult it is to apply such tech to deliver meaningful operational advantage and maintain a competitive offering.

https://www.nasdaq.com/articles/can-lemonade-stock-squeeze-out-profits-going-forward

That doesn't mean the use of ML / AI won't be more broadly adopted in the industry, but all of the companies I've consulted into have fundamental structural constraints that make harvesting customer data for predictive purposes of any kind a real challenge. Insurance is the least worrying AI use case, for me, anyway.
 
  • #159
This has given me paws, sorry that was a typo the cat walked on the keyboard, I meant this has given me pause...

It's AlphaGo vs AlphaGo, and what has struck me particularly is Michael Redmond's commentary beginning around 21 mins into the video. He is basically implying that, from what he sees, there appears to be a plan going on, but not in a way that we humans appear able to comprehend. You can see Redmond smiling to himself as he is so drawn to the fact that there does appear to be some kind of actual thinking going on. It's a very convincing display of actual intelligence, although a little understanding of Go is required to appreciate the nuance.

So do I fear this? Hell no, it's exciting. But then that's probably what the AI wants us to think as part of some elaborate multi-century plot to control the Universe.

 
  • #160
bland said:
You can see Redmond smiling to himself as he is so drawn to the fact that there does appear to be some kind of actual thinking going on
Thinking?

Damn, I really want to smite this down, it just feels wrong as a description of how AlphaGo operates, but 'thinking' could encompass the method by which a sophisticated rules engine with no awareness of itself or its environment works through the steps of a game, and in that sense, I can see how AlphaGo is 'thinking'.

But I don't think the intent passes the pub test, and I suspect most people would dismiss the idea that AlphaGo is 'thinking' out of hand with a derisive snort and maybe a curse or two.

bland said:
But then that's probably what the AI wants us to think as part of some elaborate multi century plot to control the Universe.
Written with tongue firmly in cheek. I hope 🤔
 
  • #161
Melbourne Guy said:
Thinking?
I didn't say 'thinking', I said there was an appearance, a very convincing one at the level of what Redmond can see. I would find it difficult to define 'thinking' in the context of AI. Yes, one would like to think that the tongue was in that cheeky place.
 
  • #162
bland said:
I didn't say 'thinking', I said there was an appearance, a very convincing one at the level of what Redmond can see.
I'm thinking this might be too meta, @bland, but I didn't take it as what you were thinking, I think it was clear from your text that you were conveying what you thought Redmond was thinking, but now I also think it was clear from my reply that you think I didn't think that!
 
  • #163
While I can't say I find the prospect of being shot by a robot appealing, I also can't see why it would be any better or worse than being shot by a breathing human being.

I can't get concerned about a robot "becoming self aware" which seems to be code for suddenly developing a desire to pursue its Darwinian self interest. It's much more likely that an AI would start doing nonsensical weird things. This happened during the pivotal Lee Se Dol/AlphaGo match, resulting in Lee's sole victory.

As for SF about robots attempting to take over the world, I'd recommend the terrific Bollywood movie "Enthiran" [Robot]. The robot becomes demonic because some jerk programs it to be that way. That I would easily believe. And for no extra charge you get to ogle Aishwarya Rai.
 
  • #164
In most cases, when I am inspired to post a link to an article on PhysicsForums, it's because I like the article.
In this case, it's because I think it is so off-base that it needs trouncing:
SA Opinion: Consciousness Article

It is always a problem to attempt to make practical suggestions about a process that is not understood. And the article makes clear that that is exactly what they are doing. But to take a shot at it without addressing the so-called "Hard Consciousness" issue results in an article that dies for lack of any definition of its main elements.

From where I stand, "Hard Consciousness" (the "qualia" factor) is a fundamental feature of Physics. It is not just a creation of biology. We happen to have it because it provides a method of computation that is biologically efficient in supporting survival-related (Darwinian) decisions. That same computational device (not available in your common "Von Neumann" computer, laptop, Android, ...) will be developed and will allow computers that share a "qualia" of the information they process. But it won't be like human consciousness.

And as far as threats, if a machine attacks people, it will be because it was programmed to. A computer that is programmed to search for a planet's resources, adapt its own design, and survive as best it can is a bad idea. So let's not do that.

The article also addresses the ethics of a "happy computer". Pain and happiness are wrapped up in the way we work in a social environment - how we support and rely on others. Getting to computers with "qualia" is a relatively simple step compared to modelling human behavior to the point of claiming that a computer is genuinely "happy".
 
  • #165
.Scott said:
And as far as threats, if a machine attacks people, it will be because it was programmed to.
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
 
  • #167
DaveC426913 said:
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
Part of the problem here is the very loose use of the term AI.
At my last job, I programmed radar units for cars - these went on to become components in devices that provided features such as lane assist, blind-spot monitoring, advanced cruise control, and lots of other stuff. If we sold these to the devil, he might have used AI software techniques to recognize humans and then steer the car in their direction. Or, if he preferred, he could have used techniques more closely tied to statistical analysis to perform those same target identification processes.

In that case, "AI" refers to a bunch of software techniques like neural nets and machine learning. Even if this devil stuck with more typical algorithms, in many conversations machine vision (radar or otherwise) and robotics would qualify as "AI" without the use of AI-specific techniques.
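As a purely illustrative sketch of that distinction (made-up data and feature names, not the actual radar code), the same "is this return a pedestrian?" decision can be made either with a hand-written statistical rule or with a tiny learned model; in casual conversation both tend to get called "AI":

import numpy as np

rng = np.random.default_rng(0)

# Fake "radar returns": columns are [reflected power, angular extent].
# Label 1 = pedestrian-like target, 0 = something else (e.g. a sign post).
features = rng.normal(loc=[[1.0, 2.0]] * 50 + [[3.0, 0.5]] * 50, scale=0.4)
labels = np.array([1] * 50 + [0] * 50)

# Approach 1: a hand-tuned statistical rule on the two measurements.
rule_pred = (features[:, 0] < 2.0) & (features[:, 1] > 1.0)

# Approach 2: a tiny logistic regression fitted by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    grad = p - labels
    w -= 0.1 * (features.T @ grad) / len(labels)
    b -= 0.1 * grad.mean()
ml_pred = 1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5

print("hand-written rule accuracy:", (rule_pred == labels).mean())
print("learned model accuracy:   ", (ml_pred == labels).mean())

Either way, the worrying part is the same: what the output gets wired to, not which technique produced it.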

But what many think of as AI is more like "artificial human-like social animal intelligence": something with a goal to survive that is able to recognize humans as either a threat or as gatekeepers to the resources it needs to survive.

I think the logic goes something like this: the human brain is really complex and we don't know where "consciousness" comes from, so it's likely the complexity that creates the consciousness. Computers are getting more and more complex, so they will eventually become conscious the way humans are. Humans can be a threat, and rapidly evolving computers would be a dire threat.

There is also an issue with how much variation there can be with "consciousness". For example, our brain has Darwinian goals. We are social animals, and so many of those Darwinian goals center around survival of the animal and participation in society. This is the essential source of "self". Our brains are "designed" with a built-in concept of self - something to be preserved and something that has a role in a sea of selves. The thought experiment I often propose is to imagine that I coated a table top with pain and tactile sensory receptors and transmitted that data directly into your skull. If I dropped something on the table, you would feel it. You would certainly develop a self-like emotional attachment to that table top.

A computer system isn't going to have such a concept of self unless it gets designed in.

I have been developing software for more than half a century. Let's consider what I would do to make this A.I. fear come to fruition. First, this "consciousness" thing is a total red herring. As I said in my last post, it is only an artifact of Physics and the use of certain unconventional hardware components. My specific estimation is that it's a use of Grover's Algorithm for creating candidate intentions - and that there are at least hundreds of such mechanisms within our skulls, any one of which can be our "consciousness" at any given moment. But, except for some speculative potential efficiency, why use such mechanisms at all?

Instead, I will set up a machine that models a robot that lives on planet Earth. It will try out one design after another and attempt to home in on a buildable design that will survive and replicate. If it finds a good solution, it will make some.
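A minimal sketch of that "evolve the design inside a model, only build the winner" loop, assuming a parameter-vector "design" and a toy fitness function (both stand-ins invented for illustration, not anything specified above):

import random

# A "design" is just a vector of parameters. Fitness is evaluated entirely
# inside the simulation; nothing physical gets built until the loop is done.
def fitness(design):
    # Stand-in for "survives and replicates in the modelled environment".
    return -sum((x - 0.7) ** 2 for x in design)

def mutate(design, rate=0.1):
    return [x + random.gauss(0, rate) for x in design]

population = [[random.random() for _ in range(6)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)  # best designs first
    survivors = population[:10]                 # selection
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print("design selected for building:", [round(x, 3) for x in best])

Whether such a loop is dangerous depends entirely on what fitness() rewards and on whether the winning design actually gets built.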

So what part of this would you expect to happen by accident? Consciousness has nothing to do with it. Why aren't we afraid that attaching a 3-D printer to a regular household computer is handing over too much power?
 
  • #168
DaveC426913 said:
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
Not who you were responding to, but I'll take a crack at it too:

Boring response: This is why I don't believe in AI. Any computer can be programmed, on purpose or by accident, to go off the rails, so the risk presented by AI is not particularly unique. This is the opposite-side-of-the-coin answer to the question.

AI specific response: AI does not mean infinite capabilities/adaptability. An AI need not even be physical. That means we set the parameters - the limitations - of its scope/reach. An AI that is non-physical cannot fire a fully mechanical gun. It can't drive a bulldozer that isn't networked. Now, some people think AI means humanoid robots, and those can do anything a human can, right? No, that's anthropomorphizing. A humanoid robot that is designed to learn basketball isn't somehow going to decide it wants to dabble in global thermonuclear war. Or even just kill its opponent (rulebook doesn't say I can't!)

AI doesn't necessarily mean generalized intelligence, much less physical capabilities.
 
  • #169
.Scott said:
A computer system isn't going to have such a concept of self unless it gets designed in.
So, Homo sapiens have consciousness 'designed in'? You suggest so, @.Scott, even if you write the word with air quotes. Which just kicks the fundamental problem upstream. If evolution can result in consciousness, then there is no barrier to AI also evolving consciousness.
 
  • #170
Melbourne Guy said:
So, Homo sapiens have consciousness 'designed in'? You suggest so, @.Scott, even if you write the word with air quotes. Which just kicks the fundamental problem upstream. If evolution can result in consciousness, then there is no barrier to AI also evolving consciousness.
What's more important than consciousness being designed in is the construct of "self". "Self" and consciousness are no more the same than "tree" and consciousness are.

Evolution could evolve evil AI robots - except we would stop them before they got started. That is why I approached the problem by allowing the evolution to occur in a computer model - and only the final result was built.
 
  • #171
.Scott said:
Evolution could evolve evil AI robots - except we would stop them before they got started.
Would we? Who decides what an evil AI looks like? I can imagine some people would welcome evil AIs, and some people would deliberately evolve them.

.Scott said:
That is why I approached the problem by allowing the evolution to occur in a computer model - and only the final result was built.
I feel this is an arbitrary and trivial constraint that is easily ignored, @.Scott. Are you assuming that once evolved and 'built', the AI no longer evolves?
 
  • #172
As a follow-on from my previous thought, this just popped into one of my feeds:

https://www-independent-co-uk.cdn.a...artificial-general-intelligence-b2080740.html

"One of the main concerns with the arrival of an AGI system, capable of teaching itself and becoming exponentially smarter than humans, is that it would be impossible to switch off."

I've written one of these AIs in a novel, but I don't really believe it. There's a ton of assumptions in the claim, including that an AI could unilaterally inhabit any other computing architecture, which seems implausible. It also assumes that there is no limit to the 'bootstrapping' the AI can do to its own intelligence. All of this could be true, but if so, 'smarter than humans' equates to "God-like", and the mechanism for that to occur is certainly not obvious.
 
  • #173
Melbourne Guy said:
Would we? Who decides what an evil AI looks like? I can imagine some people would welcome evil AIs, and some people would deliberately evolve them.

You asked, if people could evolve with nothing more than Darwinian factors, why not AI? Now you seem to think that AI would evolve quickly.

If people deliberately evolved them, that would not contradict any of my statements. It is definitely possible for people to design machines to kill other people.
 
  • #174
.Scott said:
You asked if people could evolve with nothing more than Darwinian factors, why not AI. Now you seem to think that AI would evolve quickly.
I'm thinking we're talking past each other, @.Scott. I haven't assumed AI would evolve quickly, merely that identifying an evil AI might not be obvious. There are sociopathic humans that nobody notices are murdering people and burying their bodies in the night; do you feel their parents knew their kids were evil from the moment of their first squawking wail at birth?
 
  • #175
Melbourne Guy said:
I'm thinking we're talking past each other, @.Scott. I haven't assumed AI would evolve quickly, merely that identifying an evil AI might not be obvious. There are sociopathic humans that nobody notices are murdering people and burying their bodies in the night; do you feel their parents knew their kids were evil from the moment of their first squawking wail at birth?
People do not have to evolve into societal threats. We are all there already. You just have to change your mind.

Building a machine with a human-like ego that engages human society exactly as people do would be criminal. Anyone smart enough to put such a machine together would be very aware of what he was doing.

Building a machine that engages human society in a way similar to how people do - but without the survival-oriented notion of self - could be done. And it could be done with or without components that would evoke consciousness.
 
  • #176
If I were going to write an AI horror story, it would be this: Society gets dependent on an AI. Quite often its moves are obscure, but it always works out in the end. It builds up a lot of goodwill and faith that it is doing the right thing, no matter how mysterious and temporarily unpleasant. So when it goes off the rails and starts to blunder, no one realizes it until it is too late.
 
  • #177
If I were worried about AI, it would not be out of fear of robots' world domination, but because, these days and for an indeterminate time to come, some "AI" is not really very good at the tasks assigned to it by certain people who can, and boldly do, go where no one with a scintilla of wisdom has gone before, using neural-network algorithms that are not up to snuff but are cheaper and more free of personal issues than, well, paid human personnel: they are a one-time expense that is likely to include support and updates for several years (they are software, after all: "apps"), don't goof off, don't try to unionize, and never talk back. They do the kind of work where, if they do it wrong, that is likely to be someone else's problem. For example: face recognition going wrong and someone (else) being thrown in jail because of it, or military use where humans delegate quick life-or-death decisions to an AI.

On the other hand, The Spike has been postponed sine die due to lack of sufficient interest and technical chops. Skynet's armies are not marching in, right now. Or even present in my bad dreams. But there is also plenty else around I see as worthy of worrying about, thank you very much.
 
  • #178
Speaking of source material for AI concepts:

Does anyone recall a novel from decades ago where a computer had a program like Eliza, written in BASIC, that managed to find enough storage to develop consciousness, and the story culminated in the AI attempting to fry the protagonist on the sidewalk by redirecting an orbiting space laser?
 
  • #179
I think a superintelligent AI would be smart enough to not kill anybody. I think it would be doing things like THIS.

 
  • #180
.Scott said:
Building a machine with a human-like ego that engages human society exactly as people do would be criminal. Anyone smart enough to put such a machine together would be very aware of what he was doing.
Fine statements, to be sure, @.Scott, but not statements of fact. And given we don't understand our own consciousness (or that of other animals that might be regarded as conscious), it seems premature to jump to such conclusions. Currently, it is not criminal to create an AI of any flavour, so I'm assuming you mean that in the moral sense, not the legal sense. And who knows how smart you have to be to create a self-aware AI? Maybe smart, but not as smart as you assert.

Honestly, I am struggling with your absolutist view of AI and its genesis. We know so little about our own mental mechanisms that it seems hubris to ascribe it to a not-yet-invented machine intelligence.
 
