Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
AI Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #151
Algr said:
Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
The final decision on how the AI works isn't from the board, but from the programmers who actually receive orders from them. If they get frustrated and decide that they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know the board is bankrupt and facing investigation while the AI is "owned" by a shell company that no one was supposed to know about. By the time the idealism of the rebel programmers collapses to the usual greed, the AI will be influencing them.

Different scenarios would yield different AIs all with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
AI can basically be something with any kind of behavior and intelligence you could imagine. It's just that the AI we know how to make is limited. But the critical thing about AI is that it doesn't do what it has been programmed to do, it does what it has learned to do. We can only control that by determining what experiences we let it have, and what rewards and punishments we give it (which is limited because we are not very sophisticated when it comes to encoding complex examples of that in suitable mathematical form, or understanding what the results will be in non-trivial cases).

You can't just reprogram it, or give it specific instructions, or persuade it of something. It isn't necessarily possible even to communicate with it in a non-superficial way. You would probably have better luck explaining or lecturing to a whale in hopes of influencing it than you would any artificial neural network invented by people.
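
To make that concrete, here is a toy sketch of the only two levers we really have: the experiences we expose the system to and the rewards and punishments we hand out. Everything in it (the states, the reward values, the learning constants) is a made-up illustration, not any real system; it uses a bare-bones Q-learning loop only because that is about the simplest thing that learns from reward at all.

```python
# Toy illustration: we never program the agent's behaviour directly.
# We only pick which experiences it gets (the environment) and which
# rewards/punishments it receives; the policy emerges from learning.
import random

N_STATES = 5          # tiny 1-D corridor: states 0..4
GOAL, PIT = 4, 0      # hypothetical reward structure we choose
ACTIONS = [-1, +1]    # step left or step right

def reward(state):
    if state == GOAL: return +1.0   # the "reward" we hand out
    if state == PIT:  return -1.0   # the "punishment" we hand out
    return 0.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 2                                   # start in the middle
    while s not in (GOAL, PIT):
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = reward(s_next)
        # The only levers we have: the transition structure and r.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# The learned behaviour in the non-terminal states.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(1, N_STATES - 1)})
```

Notice that nothing in the loop says "go to the goal state"; whatever policy comes out is just whatever the chosen reward structure happened to favour, which is exactly why controlling behaviour this way is so indirect.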
 
Last edited:
  • #152
sbrothy said:
Who's to say? If the AI in question is smart enough to realize that, without energy, oblivion awaits, then all bets are off.
While surfing the net aimlessly (and reading about STEM education in US public schools, even though I am not an American, so I must be really bored) I came across DALL-E. More funny than threatening. I'll just leave it here.
 
  • #155
DaveC426913 said:
Summarize? Teaser?
:doh:
The Nature article has some great ideas, if they can be realistically put into practice. Basically, having Sociologists involved at the ground level of development.

The Science article is a revealing piece on how quickly progress is being made in learning and mastering new testing methods. Very impressive at this point.
 
  • #157
sbrothy said:
how insurance companies use AI
It's all about bottom line $ for them.
 
  • #158
sbrothy said:
but already I'm a little disturbed thinking about how insurance companies use AI
I've done work with insurance companies recently, @sbrothy, and they routinely raise Lemonade as the AI disruptor within their industry. However, as this Nasdaq analysis from last month shows, it is not all rainbows and unicorns with regard to their P&L, which highlights how difficult it is to apply such tech to deliver meaningful operational advantage and maintain a competitive offering.

https://www.nasdaq.com/articles/can-lemonade-stock-squeeze-out-profits-going-forward

That doesn't mean the use of ML / AI won't be more broadly adopted in the industry, but all of the companies I've consulted for have fundamental structural constraints that make harvesting customer data for predictive purposes of any kind a real challenge, and insurance is the least worrying AI use case, for me anyway.
 
  • #159
This has given me paws, sorry that was a typo the cat walked on the keyboard, I meant this has given me pause...

It's AlphaGo vs AlphaGo; what has struck me particularly is Michael Redmond's commentary beginning around 21 mins into the video. He is basically implying that from what he sees there appears to be a plan going on, but not in a way that we humans appear able to comprehend. You can see Redmond smiling to himself as he is so drawn to the fact that there does appear to be some kind of actual thinking going on; it's a very convincing display of actual intelligence, although a little understanding of Go is required to appreciate the nuance.

So do I fear this? Hell no, it's exciting. But then that's probably what the AI wants us to think as part of some elaborate multi century plot to control the Universe.

 
  • #160
bland said:
You can see Redmond smiling to himself as he is so drawn to the fact that there does appear to be some kind of actual thinking going on
Thinking?

Damn, I really want to smite this down, it just feels wrong as a description of how AlphaGo operates, but 'thinking' could encompass the method by which a sophisticated rules engine with no awareness of itself or its environment works through the steps of a game, and in that sense, I can see how AlphaGo is 'thinking'.

But I don't think the intent passes the pub test; I suspect most people would dismiss the idea that AlphaGo is 'thinking' out of hand, with a derisive snort and maybe a curse or two.

bland said:
But then that's probably what the AI wants us to think as part of some elaborate multi century plot to control the Universe.
Written with tongue firmly in cheek. I hope 🤔
 
  • #161
Melbourne Guy said:
Thinking?
I didn't say 'thinking', I said there was an appearance, a very convincing one at the level of what Redmond can see. I would find it difficult to define 'thinking' in the context of AI. Yes, one would like to think that the tongue was in that cheeky place.
 
  • #162
bland said:
I didn't say 'thinking', I said there was an appearance, a very convincing one at the level of what Redmond can see.
I'm thinking this might be too meta, @bland, but I didn't take it as what you were thinking, I think it was clear from your text that you were conveying what you thought Redmond was thinking, but now I also think it was clear from my reply that you think I didn't think that!
 
  • #163
While I can't say I find the prospect of being shot by a robot appealing, I also can't see why it would be any better or worse than being shot by a breathing human being.

I can't get concerned about a robot "becoming self aware" which seems to be code for suddenly developing a desire to pursue its Darwinian self interest. It's much more likely that an AI would start doing nonsensical weird things. This happened during the pivotal Lee Se Dol/AlphaGo match, resulting in Lee's sole victory.

As for SF about robots attempting to take over the world, I'd recommend the terrific Bollywood movie "Enthiran" [Robot]. The robot becomes demonic because some jerk programs it to be that way. That I would easily believe. And for no extra charge you get to ogle Aishwarya Rai.
 
  • #164
In most cases, when I am inspired to post a link to an article on the PhysicsForum, it's because I like the article.
In this case, it's because I think it is so off-base that it needs trouncing:
SA Opinion: Consciousness Article

It is always a problem to attempt to make practical suggestions about a process that is not understood. And the article makes clear that that is exactly what they are doing. But to take a shot at it without addressing the so-called "Hard Consciousness" issue results in an article that dies for lack of any definition of its main elements.

From where I stand, "Hard Consciousness" (the "qualia" factor) is a fundamental feature of Physics. It is not just a creation of biology. We happen to have it because it provides a method of computation that is biologically efficient in supporting survival-related (Darwinian) decisions. That same computation device (not available in your common "Von Neumann" computer, laptop, Android, ...) will be developed and will allow computers that share a "qualia" of the information they process. But it won't be like human consciousness.

And as far as threats go, if a machine attacks people, it will be because it was programmed to. A computer that is programmed to search for a planet's resources, adapt its own design, and survive as best it can is a bad idea. So let's not do that.

The article also addresses the ethics of a "happy computer". Pain and happiness are wrapped up in the way we work in a social environment - how we support and rely on others. Getting to computers with "qualia" is a relatively simple step compared to modelling human behavior to the point of claiming that a computer is genuinely "happy".
 
  • Like
Likes russ_watters
  • #165
.Scott said:
And as far as threats, if a machine attacks people, it will be because it was programmed to.
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
 
  • Like
Likes Oldman too and russ_watters
  • #167
DaveC426913 said:
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
Part of the problem here is the very loose use of the term AI.
At my last job, I programmed radar units for cars - these went on to become components in devices that provided features such as lane assist, blind-spot monitoring, advanced cruise control, and lots of other stuff. If we sold these to the devil, he might have used AI software techniques to recognize humans and then steer the car in their direction. Or, if he preferred, he could have used techniques more closely tied to statistical analysis to perform those same target-identification processes.

In that case, "AI" refers to a bunch of software techniques like neural nets and machine learning. Even if this devil stuck with more typical algorithms, in many conversations machine vision (radar or otherwise) and robotics would qualify as "AI" without the use of AI-specific techniques.

But what many think of as AI is more like "artificial human-like social animal intelligence": something with a goal to survive that is able to recognize humans as either a threat or gatekeepers to the resources it needs to survive.

I think the logic goes something like this: The human brain is really complex and we don't know where "consciousness" comes from, so it's likely the complexity that creates the consciousness. Computers are getting more and more complex, so they will eventually become conscious the way humans are. Humans can be a threat, and rapidly evolving computers would be a dire threat.

There is also an issue with how much variation there can be with "consciousness". For example, our brain has Darwinian goals. We are social animals, and so many of those Darwinian goals center around survival of the animal and participation in society. This is the essential source of "self". Our brains are "designed" with a built-in concept of self - something to be preserved and something that has a role in a sea of selves. The thought experiment I often propose is to imagine that I coated a table top with pain and tactile sensory receptors and transmitted that data directly into your skull. If I dropped something on the table, you would feel it. You would certainly develop a self-like emotional attachment to that table top.

A computer system isn't going to have such a concept of self unless it gets designed in.

I have been developing software for more than half a century. Let's consider what I would do to make this AI fear come to fruition. First, this "consciousness" thing is a total red herring. As I said in my last post, it is only an artifact of Physics and the use of certain unconventional hardware components. My specific estimation is that it's a use of Grover's Algorithm for creating candidate intentions - and that there are at least hundreds of such mechanisms within our skulls, any one of which can be our "consciousness" at any given moment. But, except for some speculative potential efficiency, why use such mechanisms at all?

Instead, I will set up a machine that models a robot that lives on planet Earth. It will try out one design after another and attempt to home in on a buildable design that will survive and replicate. If it finds a good solution, it will make some.

So what part of this would you expect to happen by accident? Consciousness has nothing to do with it. Why aren't we afraid that attaching a 3-D printer to a regular household computer is handing over too much power?
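
As a crude sketch of what that set-up looks like (the genome, the fitness score, and the mutation scheme below are placeholders, nothing like a real robot-design search):

```python
# Minimal evolutionary search run entirely "in simulation":
# candidate designs are just parameter vectors, fitness is a stand-in
# for "survives and replicates in the model world", and nothing is
# built until the loop has finished.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 30, 100

def simulated_fitness(genome):
    # Hypothetical scoring of a design inside the model.
    # (Here: prefer genes near 0.5, purely as a placeholder.)
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [min(max(g + random.gauss(0, rate), 0.0), 1.0) for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=simulated_fitness, reverse=True)
    survivors = population[:POP_SIZE // 4]            # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=simulated_fitness)
print("design chosen for building:", [round(g, 2) for g in best])
```

Nothing in a loop like this "wakes up". The only output is the design we choose to build at the end, which is the point: the dangerous step is the decision to build it, not the search itself.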
 
  • Like
Likes russ_watters
  • #168
DaveC426913 said:
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
Not who you were responding to, but I'll take a crack at it too:

Boring response: This is why I don't believe in AI. Any computer can be programmed, on purpose or by accident, to go off the rails, so the risk presented by AI is not particularly unique. This is the opposite-side-of-the-coin type of answer to the question.

AI-specific response: AI does not mean infinite capabilities/adaptability. An AI need not even be physical. That means we set the parameters - the limitations - of its scope/reach. An AI that is non-physical cannot fire a fully mechanical gun. It can't drive a bulldozer that isn't networked. Now, some people think AI means humanoid robots, and those can do anything a human can, right? No, that's anthropomorphizing. A humanoid robot that is designed to learn basketball isn't somehow going to decide it wants to dabble in global thermonuclear war. Or even just kill its opponent (the rulebook doesn't say I can't!).

AI doesn't necessarily mean generalized intelligence, much less physical capabilities.
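
A toy sketch of what I mean by setting the parameters of its scope (the action names below are made up for illustration, not any real robot's interface):

```python
# A learned policy can only ever choose from the action set we define;
# capabilities not in this list simply do not exist for the agent.
from enum import Enum, auto
import random

class Action(Enum):
    DRIBBLE = auto()
    PASS = auto()
    SHOOT = auto()
    # There is deliberately no LAUNCH_MISSILES member here.

def policy(observation: str) -> Action:
    # Stand-in for any learned decision rule: whatever it computes,
    # the result can only be one of the actions defined above.
    scores = {a: random.random() for a in Action}
    return max(scores, key=scores.get)

print(policy("opponent guarding the left side"))
```

However the policy inside is trained, its output can only ever be one of the actions we defined; capabilities that aren't in the list simply aren't available to it.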
 
  • #169
.Scott said:
A computer system isn't going to have such a concept of self unless it gets designed in.
So, Homo sapiens have consciousness 'designed in'? You suggest so, @.Scott, even if you write the word with air quotes. Which just kicks the fundamental problem upstream. If evolution can result in consciousness, then there is no barrier to AI also evolving consciousness.
 
  • #170
Melbourne Guy said:
So, Homo sapiens have consciousness 'designed in'? You suggest so, @.Scott, even if you write the word with air quotes. Which just kicks the fundamental problem upstream. If evolution can result in consciousness, then there is no barrier to AI also evolving consciousness.
What's more important than consciousness being designed in is the construct of "self". "Self" and consciousness are no more the same than "tree" and consciousness are.

Evolution could evolve evil AI robots - except we would stop them before they got started. That is why I approached the problem by allowing the evolution to occur in a computer model - and only the final result was built.
 
  • Like
Likes russ_watters
  • #171
.Scott said:
Evolution could evolve evil AI robots - except we would stop them before they got started.
Would we? Who decides what an evil AI looks like? I can imagine some people would welcome evil AIs, and some people would deliberately evolve them.

.Scott said:
That is why I approached the problem by allowing the evolution to occur in a computer model - and only the final result was built.
I feel this is an arbitrary and trivial constraint that is easily ignored, @.Scott. Are you assuming that once evolved and 'built', the AI no longer evolves?
 
  • #172
As a follow-on from my previous thought, this just popped into one of my feeds:

https://www-independent-co-uk.cdn.a...artificial-general-intelligence-b2080740.html

"One of the main concerns with the arrival of an AGI system, capable of teaching itself and becoming exponentially smarter than humans, is that it would be impossible to switch off."

I've written one of these AIs in a novel, but I don't really believe it. There's a ton of assumptions in the claim, including that an AI could unilaterally inhabit any other computing architecture, which seems implausible. It also assumes that there is no limit to the 'bootstrapping' the AI can do to its own intelligence. All of this could be true, but if so, 'smarter than humans' equates to "God-like", and the mechanism for that to occur is certainly not obvious.
 
  • #173
Melbourne Guy said:
Would we? Who decides what an evil AI looks like? I can imagine some people would welcome evil AIs, and some people would deliberately evolve them.

You asked: if people could evolve with nothing more than Darwinian factors, why not AI? Now you seem to think that AI would evolve quickly.

If people deliberately evolved them, that would not contradict any of my statements. It is definitely possible for people to design machines to kill other people.
 
  • Like
Likes russ_watters
  • #174
.Scott said:
You asked if people could evolve with nothing more than Darwinian factors, why not AI. Now you seem to think that AI would evolve quickly.
I'm thinking we're talking past each other, @.Scott. I haven't assumed AI would evolve quickly, merely that identifying an evil AI might not be obvious. There are sociopath humans that nobody notices are murdering people and burying their bodies in the night, do you feel their parents knew their kids were evil from the moment of their first squawking wail at birth?
 
  • #175
Melbourne Guy said:
I'm thinking we're talking past each other, @.Scott. I haven't assumed AI would evolve quickly, merely that identifying an evil AI might not be obvious. There are sociopath humans that nobody notices are murdering people and burying their bodies in the night, do you feel their parents knew their kids were evil from the moment of their first squawking wail at birth?
People do not have to evolve into societal threats. We are all there already. You just have to change your mind.

Building a machine with a human-like ego and that engages human society exactly as people do would be criminal. Anyone smart enough to put such a machine together would be very aware of what he was doing.

Building a machine that engages human society in a way that is similar to how people would - but without the survival-oriented notion of self could be done. And it could be done with or without components that would evoke consciousness.
 
  • #176
If I were going to write an AI horror story it would be this: Society gets dependent on an AI. Quite often its moves are obscure, but it always works out in the end. It builds up a lot of goodwill and faith that it is doing the right thing, no matter how mysterious and temporarily unpleasant. So when it goes off the rails and starts to blunder, no one realizes it until it is too late.
 
  • #177
If I were worried about AI, it would not be because of fear of robots' world domination, but because, these days and for an indeterminate time to come, some "AI" is not really very good at the tasks assigned to it by certain people who can, and boldly do, go where no one with a scintilla of wisdom has gone before: using neural-network algorithms that are not up to snuff but are cheaper and freer of personnel issues than, well, paid human staff. They are a one-time expense that is likely to include support with updates for several years (they are software, after all: "apps"), they don't goof off, they don't try to unionize, and they never talk back. And they do the kind of work where, if they do it wrong, it is likely to be someone else's problem. For example: face recognition going wrong and someone (else) being thrown in jail because of it, or military use where humans delegate to an AI the making of quick life-or-death decisions.

On the other hand, The Spike has been postponed sine die due to lack of sufficient interest and technical chops. Skynet's armies are not marching in, right now. Or even present in my bad dreams. But there is also plenty else around I see as worthy of worrying about, thank you very much.
 
Last edited by a moderator:
  • #178
Speaking of source material for AI concepts:

Does anyone recall a novel from decades ago where a computer had a program like Eliza, written in BASIC, that managed to find enough storage to develop consciousness, and the story culminated in the AI attempting to fry the protagonist on the sidewalk by redirecting an orbiting space laser?
 
  • #179
I think a superintelligent AI would be smart enough to not kill anybody. I think it would be doing things like THIS.

 
  • #180
.Scott said:
Building a machine with a human-like ego and that engages human society exactly as people do would be criminal. Anyone smart enough to put such a machine together would be very aware of what he was doing.
Fine statements, to be sure, @.Scott, but not statements of fact. And given we don't understand our own consciousness (or that of other animals that might be regarded as conscious), it seems premature to jump to such conclusions. Currently, it is not criminal to create an AI of any flavour, so I'm assuming you mean that in the moral sense, not the legal sense. And who knows how smart you have to be to create a self-aware AI? Maybe smart, but not as smart as you assert.

Honestly, I am struggling with your absolutist view of AI and its genesis. We know so little about our own mental mechanisms that it seems hubris to ascribe it to a not-yet-invented machine intelligence.
 
  • #181
AI and consciousness are not as inscrutable as you presume.

And as a software engineer, I am capable of appreciating a design without knowing the lowest level details. So, though I have never written a chess program, I can read an article about the internal design of a specific chess app, and understand its strengths and weaknesses. Similarly, I can look at the functionality of the human brain - functional and damaged - and list and examine the characteristics of human consciousness and although I may not be ready to write up the detailed design, I get the gist.
 
  • Skeptical
  • Informative
Likes Oldman too and Melbourne Guy
  • #182
When AI is referenced as 'thinking', I am assuming some sort of actual human equivalent, which would mean that it is aware that it is aware, and therefore aware of what it is. Is this what people are getting at in this thread, or do they have something else in mind? Because to me there is either an 'appearance' of thinking or there is actual thinking.

I am guessing that animals can be referred to as actually thinking, but of course this is not anything like human thinking, due to the animal's non-awareness of its own awareness. So is this the type of thinking that AI might aspire to?
 
  • #183
bland said:
Because to me there is either an 'appearance' of thinking or there is actual thinking.
The question - which Turing himself immortalized - is: how would you tell the difference?
 
  • #184
Melbourne Guy said:
Honestly, I am struggling with your absolutist view of AI and its genesis. We know so little about our own mental mechanisms that it seems hubris to ascribe it to a not-yet-invented machine intelligence.
I actually view this from the opposite direction: if we know so little about what it is to be conscious, then how can we hope to create it?
 
  • #185
DaveC426913 said:
The question - which Turing himself immortalized - is: how would you tell the difference?
I prefer: if we can't tell the difference, does it even matter?
 
  • #186
russ_watters said:
I prefer: if we can't tell the difference, does it even matter?
Yes, that'll be the next question. But for Melbourne Guy, we first have to convince him that he can't tell the difference.
 
  • Haha
Likes Melbourne Guy
  • #187
DaveC426913 said:
The question - which Turing himself immortalized - is: how would you tell the difference?

russ_watters said:
I prefer: if we can't tell the difference, does it even matter?

DaveC426913 said:
Yes, that'll be the next question. But for Melbourne Guy, we first have to convince him that he can't tell the difference.

This 'does it make a difference' angle is better applied to the 'are we in a simulation' nonsense. It is also vaguely related to 'do we have free will'. On that one, I think we can say it doesn't matter, because whether we do or not (we do), the entire world (even people who think we don't have free will) will treat you as if you do; so in that sense it doesn't matter, and the same goes for the simulation.

With regard to dreaming, it's easy to tell simply by looking at some writing, anything with a couple of words: look at the words, look away, and look back, and they will have changed. In fact they probably weren't words in the first place, just an impression of words, good enough, like lorem ipsum copy; at a glance they are English words. If you pay attention you can watch your brain doing this in real time.

Correct me if I'm wrong, but we would all agree that dogs and other intelligent animals do display what we might term thinking. I'm not sure 'thinking' has been adequately defined yet in this thread. So when we say thinking in relation to a machine, I suppose we are referring to the type of thinking that can only come with self-awareness of one's thinking. This is what separates humans from other animals.

So my point is that if a machine actually could think, it could be defined as a human being, albeit made of metal. In other words it would be self-aware, and if that is the case I do not see how it would not then fall prey to the human condition: it would make a judgement or come to a conclusion about itself, and it would then become sad. It will of course compare itself to organic humans, but its superior computing power and super-intelligence would not make up for its many obvious deficiencies. Thinking implies the ability to compare and to judge.

So to sum up, a machine with actual intelligence I think is just, ... well... ridiculous.

Edit: Brian, the dog from Family Guy, is what a dog would be like if it was self aware, i.e. it would be human. Same with the apes in the Planet of the Apes, for all intents and purposes, they were human.
 
  • #188
bland said:
So my point is that if a machine actually could think, it could be defined as a human being, albeit made of metal.
... Brian, the dog from Family Guy, is what a dog would be like if it was self aware, i.e. it would be human

What? You assert that 'self awareness' equals being human?

A self aware dog is not a self aware dog; it's a human, because only humans are self aware?

That's circular.

It also ignores a number of (non-human) species who seem to show signs of self awareness, including dolphins and elephants.
 
Last edited:
  • #189
russ_watters said:
I actually view this from the opposite direction: if we know so little about what it is to be conscious, then how can we hope to create it?
Does NFI count as a suitable answer on PF, @russ_watters? I was responding to @.Scott's authoritative statements that I took as, "I have the answer, here is the answer," but I wonder if "we'll know it when we see it" is how things will end up going (assuming an AI reaches this presumed level of awareness).

DaveC426913 said:
Yes, that'll be the next question. But for Melbourne Guy, we first have to convince him that he can't tell the difference.
I'm pretty sure I'm failing to tell the difference with so many people right now, @DaveC426913, that adding AI to the list of confusing intelligences will melt my brain 😭

Apart from that, much of the commentary in this thread highlights that we lack shared definitions for aspects of cognition such as thinking, intelligence, self-awareness, and the like. We're almost at the level of those meandering QM interpretation discussions. Almost...
 
  • #190
DaveC426913 said:
What? You assert that 'self awareness' equals being human?

A self aware dog is not a self aware dog; it's a human, because only humans are self aware?

That's circular.

It also ignores a number of (non-human) species who seem to show signs of self awareness, including dolphins and elephants.

Well, I don't 'assert' it, but I do say that one (in this instance, me) could define it like that from a particular viewpoint on the peculiar nature of humans. Humans not only have the unique capacity for complex symbolic language but, separate from that, humans can be defined by the peculiar set of problems that beset them.

And this other set of problems is directly caused by their awareness of their own being. So I think it's fair to say that no dolphin is going to be sad because it's got a strange mottled colouration. It's not going to compare itself in any way to any other dolphin. A dog will sniff any other dog's tail end that passes by, it doesn't bother whether the dog is a pedigree or a common street dog, because that would make it Brian.

Surely you will agree that, whether it's 'symbolic language' or just being miserable due to a conclusion about oneself, either one of those is unique to humans, and what makes humans unique and causes these existential problems is their awareness of their own awareness. So, yes, it could be a fair definition of a human being.

If intelligent aliens made friends with us Earthlings and lived here, then as far as the animals are concerned the aliens would be the same as humans. And I think people instinctively know that, which is why aliens portrayed in fiction always seem to have many of the baser human qualities. Oh sure, instead of warmongers they might be altruistic; both are human qualities born of self-awareness.

Which is why Heaven, as some sort of eternal bliss, ignores all this. If you're in Heaven, with angels floating about the clouds, you'll naturally want to have a look at God, then you'll want to see what the back of God looks like, but after a while you'll get bored, and you'll wonder how in hell Donald Trump got here, which will kinda bum you out seeing as you were a goody-goody all your life, so you'll become sad. In Heaven. Because it's still the same awareness.

From a biological point of view, obvs not.
 
  • #191
bland said:
And I think people instinctively know that, which is why aliens portrayed in fiction always seem to have many of the baser human qualities.
From this author's perspective, the aliens are used more as mirrors of the human condition for narrative effect, rather than because there is any 'instinctive' knowledge that animals would treat aliens as humans. Whatever that actually means, @bland? Who knows how dolphins or dogs really perceive the world; they might know aliens are aliens as easily as we would, and accept them - or not - with as much range in their reactions as we would have.
 
  • #192
Melbourne Guy said:
Does NFI count as a suitable answer on PF, @russ_watters? I was responding to @.Scott's authoritative statements that I took as, "I have the answer, here is the answer," but I wonder if the "we'll know it when we see it," is how things will end up going (assuming an AI reaches this presumed level of awareness).
I have no idea what "NFI" is.

My post started out saying "AI and consciousness are not as inscrutable as you presume.".
There is a lot of discussion around AI and consciousness that is super-shallow. To the point where terms are not only left undefined, but shift from sentence to sentence. And where a kind of "wow" factor exists where things are not understood because it is presumed that they cannot be understood. People are stunned by the presumed complexity and block themselves from attempting even a cursory analysis.

If you want to create an artificial person that deals with society and has a sense of self-preservation, and you don't want to wait for Darwinian forces to take hold, then you need to start with some requirements definition and some systems analysis. If you are not practiced in such exercises, this AI/Consciousness mission is probably not a good starter. Otherwise, I think you will quickly determine that there is going to be a "self object" - and much of the design work will involve presenting "self" as a unitary, responsible witness and agent to both society and internally - and in recognizing and respecting other "self-like" beings in our social environment.

The fact that we so readily take this "self" as a given demonstrates how effectively internalized this "self object" is. How could any AI exist without it? In fact, no current AI exists with it.
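
Purely as an illustrative sketch of where that requirements exercise might start (every name and field below is hypothetical, not a design):

```python
# Hypothetical starting point for a "self object": the design problem is
# presenting "self" as a unitary, responsible witness and agent, both to
# society and internally, and tracking other self-like agents around it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SelfModel:
    name: str
    integrity: float = 1.0                                  # crude stand-in for "preserve this"
    commitments: List[str] = field(default_factory=list)   # what it is answerable for
    known_others: List[str] = field(default_factory=list)  # other "selves" it must respect

    def witness(self, event: str) -> str:
        # A single serial log: one story line, one stream of attention.
        self.commitments.append(f"respond to: {event}")
        return f"{self.name} observed: {event}"

    def act(self) -> str:
        # The unitary agent: one commitment addressed at a time.
        return self.commitments.pop(0) if self.commitments else "idle"

agent = SelfModel(name="unit-7", known_others=["unit-3", "operator"])
print(agent.witness("operator requested status"))
print(agent.act())
```

Even a trivial version like this makes the point: the "self object" does not appear by itself; somebody has to deliberately put a structure like it at the centre of the design.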
 
  • #193
bland said:
And this other set of problems is directly caused by their awareness of their own being. So I think it's fair to say that no dolphin is going to be sad because it's got a strange mottled colouration. It's not going to compare itself in any way to any other dolphin.
You would be wrong. Your example is a little off, but dolphins have been shown to have some degree of self-awareness: "The ability to recognize oneself in a mirror is an exceedingly rare capacity in the animal kingdom. To date, only humans and great apes have shown convincing evidence of mirror self-recognition. Two dolphins were exposed to reflective surfaces, and both demonstrated responses consistent with the use of the mirror to investigate marked parts of the body. This ability to use a mirror to inspect parts of the body is a striking example of evolutionary convergence with great apes and humans."
 
  • #194
.Scott said:
I have no idea what "NFI" is.

My post started out saying "AI and consciousness are not as inscrutable as you presume.".
There is a lot of discussion around AI and consciousness that is super-shallow. To the point where terms are not only left undefined, but shift from sentence to sentence. And where a kind of "wow" factor exists where things are not understood because it is presumed that they cannot be understood. People are stunned by the presumed complexity and block themselves from attempting even a cursory analysis.

If you want to create an artificial person that deals with society and has a sense of self-preservation, and you don't want to wait for Darwinian forces to take hold, then you need to start with some requirements definition and some systems analysis. If you are not practiced in such exercises, this AI/Consciousness mission is probably not a good starter. Otherwise, I think you will quickly determine that there is going to be a "self object" - and much of the design work will involve presenting "self" as a unitary, responsible witness and agent to both society and internally - and in recognizing and respecting other "self-like" beings in our social environment.

The fact that we so readily take this "self" as a given demonstrates how effectively internalized this "self object" is. How could any AI exist without it? In fact, no current AI exists with it.

To me, it's at the deep level of analysis where you come to realize that AI cannot be understood (at least internally). This is because neural networks are complex systems modelling other complex systems.

Sure, we can understand the black box's possible range of inputs and outputs, and to some extent the expected ones, if the model and data are simple enough.

The fact that the world's best theorists still have no solid theory to explain even simple artificial neural networks in a way the experts are satisfied with, however, is telling us something, because we can make ones much, much more complicated.

So basically, what we can do, if we have this controlled, isolated system, is choose the data to train it with, choose the loss functions that penalize bad behavior, and choose the degrees of freedom it has.
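
As a small sketch of those three levers, here is a toy network trained with plain gradient descent; the data, the hidden width, and the squared-error loss are all arbitrary choices made for illustration:

```python
# The three levers we actually control: the training data, the loss
# function, and the degrees of freedom (model capacity). Everything
# the network ends up doing follows from these choices.
import numpy as np

rng = np.random.default_rng(0)

# 1. Choose the data it is allowed to see.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.1 * rng.standard_normal(X.shape)

# 2. Choose the degrees of freedom (here: hidden width of a tiny MLP).
hidden = 16
W1, b1 = rng.standard_normal((1, hidden)) * 0.5, np.zeros(hidden)
W2, b2 = rng.standard_normal((hidden, 1)) * 0.5, np.zeros(1)

# 3. Choose the loss that defines "bad behaviour" (here: squared error).
def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for step in range(2000):
    h, pred = forward(X)
    err = pred - y                      # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("final training loss:", float(np.mean((forward(X)[1] - y) ** 2)))
```

We choose X and y, we choose the hidden width, and we choose the loss; what the trained weights end up encoding internally is exactly the part we cannot easily read off, which is the sense in which the trained system is opaque.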

But I very much doubt people would have the restraint to agree, in solidarity, all across the world, to constrain our use of AI to such a limiting amount, especially when letting AI go wilder offers so many competitive advantages. We're talking vast wealth, vast scientific and engineering achievement, vast military power, etc. that you are expecting people to pass up in the name of being cautious. The humans we're talking about here are the same ones that are cool with poisoning themselves and the rest of the world with things like phthalates and the like just to make a little more money, and are even willing and able to corrupt powerful governments to make it happen.

Humans are not only too foolish to feasibly exercise the level of caution you expect, they're also too greedy. In reality, people will release self reproducing AI into the solar system to mine asteroids and terraform Mars in a second, as soon as they get the chance. And they will build AI armies capable of wiping out all human beings in a day as soon as they get the chance too.

Will they be self aware? Does it matter?

Anyway, there is a notion of self-awareness which is easily achieved by AI: it simply learns about itself, and then its behavior depends on its condition. And if its condition affects other things that affect the loss function, then it will behave accordingly. This can easily reach the level where an AI acts similar to humans in terms of things like ego, greed, anger, envy, depression, etc.

What we have as humans that seems special is not that we behave with these characteristics, but that we have these subjective feelings which we cannot imagine to be possible with a machine.

Animals have self awareness and certainly emotion in my opinion. They clearly feel things like we do. And they do experience things like envy comparing themselves to others. Pets are notorious for becoming envious of others. Dogs in particular are extremely sensitive and emotional animals.

What humans have that is objectively special is a higher level of analytical thinking than animals. But AI can arguably surpass us easily in analytical thinking, at least in niche cases and probably in the long run in general.

So what we have left really to separate us is the subjective experience of feeling.

AI can behave exactly as if it is sensitive emotionally and has feelings, but we could never peer inside and somehow tell if there is anything similar going on. Like you say, we often just say the neural network is too complex to understand internally, so maybe we can't tell. The truth is, we don't know where this subjective experience of feeling comes from in biological creatures. Is something supernatural involved? Like a soul? Penrose thinks the brain has quantum organelles which give us a special metaphysical character (for lack of a better explanation). And I admit that I am inclined to have at least a vague feeling there is some form of spiritual plane we occupy as living creatures.

Even if that is true (Penrose is right), can we make artificial versions of those organelles? Or how about the brains we're growing in vats? At what point can these lab-grown biological brains begin feeling things or having a subjective experience? Maybe for that to happen they need to be more complex? Isn't that what people ask about artificial neural networks? Do they need to first have senses, learn, and be able to respond to an environment? Would a human brain in a vat, deprived of a natural existence, have a subjective experience we could recognize? Would some kind of non-biological but quantum neural network be capable of feeling?

There are too many unanswered questions. But I'm in the camp that believes that, whether or not AI feels the way we do, it doesn't matter in practice if it acts like it does. But an emotional AI isn't really what poses the biggest danger, in my opinion. I think the biggest danger is out-of-control growth. Imagine if a super strain of space cockroaches started multiplying super-exponentially and consumed everything on Earth in a week. That is the type of thing which can possibly result from something as simple as an engineer/researcher doing an experiment just to see what would happen.
 
Last edited:
  • #195
I just want to add that our concept of humans being absolutely self-aware is probably way off. Humans aren't actually very self-aware. We aren't aware of what is going on in our own brains, we aren't very aware of our subconscious mind, and we aren't very aware of our organs and their functions and can't consciously control them. Our complex immune systems and all of the amazing living systems that comprise us, we are hardly aware of at all. It is conceivable that some animals could actually be much more self-aware than us in these ways, for all we know. And it is conceivable that a being of some sort could be much more self-aware than humans in general. And depending on how we define self-awareness, AI could conceivably become way, way more self-aware than humans. If the benchmark is recognition of self in the mirror, then AI can already do that, no problem. It's only if you attach a special human-inspired subjective experience to it that it is questionable, but also probably unanswerable and not even easy to define.
 
Last edited:
  • #196
Jarvis323 said:
But I very much doubt people would have the restraint to agree, in solidarity, all across the world, to constrain our use of AI to such a limiting amount, especially when letting AI go wilder offers so many competitive advantages. We're talking vast wealth, vast scientific and engineering achievement, vast military power, etc. that you are expecting people to pass up in the name of being cautious...

Humans are not only too foolish to feasibly exercise the level of caution you expect, they're also too greedy. In reality, people will release self reproducing AI into the solar system to mine asteroids and terraform Mars in a second, as soon as they get the chance. And they will build AI armies capable of wiping out all human beings in a day as soon as they get the chance too.
We can do lighter versions today, with or without true AI, whatever that is, but we don't. The idea that humans will always go for more war and profit in the short term, while popular, just isn't true. Even by mistake.

However, specific to the point, what would prevent the next Hitler from creating world-destroying AI is control. He can't take over the world if the AI turns against him.
 
  • #197
Jarvis323 said:
To me, it's at the deep level of analysis where you come to realize that AI cannot be understood (at least internally). This is because neural networks are complex systems modelling other complex systems.
So in this case "AI" is software techniques such as neural nets.

The brain has arrangements of neurons that suggest "neural nets", but if neural nets really are part of our internal information processing, they don't play the stand-out roles.

As far as rights are concerned, my view has always been that if I can talk something into an equitable agreement that keeps it from killing me, it deserves suffrage.
 
  • #198
Jarvis323 said:
I just want to add that our concept of humans being absolutely self-aware is probably way off. Humans aren't actually very self-aware. We aren't aware of what is going on in our own brains, we aren't very aware of our subconscious mind, and we aren't very aware of our organs and their functions and can't consciously control them. Our complex immune systems and all of the amazing living systems that comprise us, we are hardly aware of at all. It is conceivable that some animals could actually be much more self-aware than us in these ways, for all we know. And it is conceivable that a being of some sort could be much more self-aware than humans in general. And depending on how we define self-awareness, AI could conceivably become way, way more self-aware than humans. If the benchmark is recognition of self in the mirror, then AI can already do that, no problem. It's only if you attach a special human-inspired subjective experience to it that it is questionable, but also probably unanswerable and not even easy to define.
I'm not sure what "absolutely self-aware" would be. Even if we were aware of our livers, would we need to know what chemical processes were proceeding to be "completely aware"? The "self" we are aware of is our role as an animal and as a member of society - and that's just the information end.

Being conscious of "self" is just one of innumerable things we can be conscious of. In a normal, undamaged brain, we maintain a single story line, a single stream of consciousness, a train of items that have grabbed our attention. But this is just a trick. The advantages of this trick are that we can apply our full bodily and social resources to one of the many things that may be crossing our minds, and that our memory is maintained like a serial log - if nothing else, that spares memory. I can't find a reference right now, but in a couple of studies, when people focused on one thing to the exclusion of other things, the effects of those other things still showed up later in their responses to word-association tests.

My best guess is that our experience of consciousness is actually many "consciousness engines" within our skulls - with only one at a time given the helm and the log book.

Clearly, if you attempt to mimic human-like consciousness in a machine, you will have lots of structural options - many engines, one log; one log per engine; etc. BTW: I am in substantial agreement with Penrose that consciousness is a form of quantum information processing - though I wouldn't hang my hat on those microtubules.
 
  • #199
russ_watters said:
However, specific to the point, what would prevent the next Hitler from creating world-destroying AI is control. He can't take over the world if the AI turns against him.
Turns against him? What a nasty programming bug! More likely, it is the system designers that turned against him.
 
  • Like
Likes russ_watters
  • #200
russ_watters said:
We can do lighter versions today, with or without true AI, whatever that is, but we don't. The idea that humans will always go for more war and profit in the short term, while popular, just isn't true. Even by mistake.

However, specific to the point, what would prevent the next Hitler from creating world-destroying AI is control. He can't take over the world if the AI turns against him.
True, but there may be 1000 Hitler idolizers, maybe at a time, who get the opportunity to try to command AI armies, and eventually you would think at least one of them would lose control. Either way, what is the prospect? AI causing full human extinction on its own vs. AI helping a next-generation Hitler cause something close to that on purpose. And then you have the people who mean well trying to build AI armies to stop future Hitlers, and they can make mistakes too.
 
