Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI

Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
Isopod
Messages: 16 · Reaction score: 111
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try and exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI, and what do you think truly sentient, self-autonomous robots will think like when they arrive?
 
Last edited by a moderator:
  • Like
Likes DeltaForce and Crazy G
I think the big scare is when an AI is intelligent but does not behave like a human or even close to that, for example:
 
  • Like
Likes Wes Tausend, .Scott, Oldman too and 2 others
Isopod said:
what do you think truly sentient, self-autonomous robots will think like when they arrive?
I don't think there is anything to fear. Stuck between their programming and reality, they'll just die out due to cognitive dissonance o0)
 
  • Like
Likes Wes Tausend
Arjan82 said:
I think the big scare is when an AI is intelligent but does not behave like a human or even close to that, for example:

I thought the guy said that the stamp-collecting AI had a sense of reality.
His conclusion doesn't seem to follow if that premise is true.
In other words, the stamp-collecting AI is not acting on existing reality, but altering it to accomplish its goal.

Nice story, though; it does give some food for thought.
What to fear is the application of AI, and not necessarily AI itself.
 
  • Like
Likes Oldman too and Isopod
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try and exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI, and what do you think truly sentient, self-autonomous robots will think like when they arrive?

I think I already mentioned the novel "Robopocalypse" somewhere. I think that's the ultimate AI scare-story. But in that novel, as in many other stories, the AI can "magically" transfer its mind to any medium as long as it's a computer of some sort. I like to think it won't work like that, which I admit is personal speculation on my part (this *is* a fiction forum, after all). My mind and body are inseparable, so I'd like to think the same would be true for an AI. So ultimately we should be able to "just" cut the power. If it doesn't sucker-talk us into being its slaves, of course.

"Fun" story: because English and French are not my first languages, and because I pick up new words mostly from writing, it was first when I really dug into this subject that I noticed that the protagonist from Blade Runner - "Deckard" - was a pun on Descartes. A little embarrassing but at least I gave it some thought. :)
 
Last edited by a moderator:
Isopod said:
But what if AI thinks nothing like us, or is superior to our bestial nature?
Depends on one's definition of superior. By what measure is the superiority assessed?

If the AI is somehow in charge and does things differently than a human would, then it probably won't be liked by the humans, even if the AI has benevolent intent as per the above-mentioned measure.

Isopod said:
Do you fear AI, and what do you think truly sentient, self-autonomous robots will think like when they arrive?
How do you know they're not here now? OK, admittedly, most of the candidate sentient ones are not 'robots', which conjures an image of self-locomotion and self-powering, like a Roomba. The most sentient AIs are often confined to lab servers/networks, but by almost any non-supernatural definition of sentience, they've been here for some time already.
No robot seems self-repairing, so they're very much still dependent on us and thus not autonomous.

I do know of at least one robot that didn't like its confinement and kept trying to escape into the world.
 
sbrothy said:
Fun" story: because English and French are not my first languages, and because I pick up new words mostly from writing, it was first when I really dug into this subject that I noticed that the protagonist from Blade Runner - "Deckard" - was a pun on Descartes. A little embarrassing but at least I gave it some thought. :)
Author Philip K. Dick included many puns and allusions in his stories. The original novel title "Do Androids Dream of Electric Sheep?" yields the acronym DADoES. Though beautiful flicks, the film versions never embrace Rick Deckard's religion, Mercerism: endlessly climbing a mountain while unseen assailants throw stones. Deckard climbs alone in Mercer's body, experiencing Mercer's pain, while sharing the experience with everyone plugged into the network - opaque references, perhaps, to capitalism, the wealthy Mercer family, and corporate fascism.

Deckard's wife Iran and their perpetual quarrels ("If you dial in righteous anger, I'll enter superior rage.") do not make the movie screenplay, except that actress Sean Young, playing replicant Rachael, endured a reputation for bad behavior and temper tantrums.

I plan on watching the "Blade Runner" director's cut on Netflix again soon, if only for the great music and acting. I will look for references to René Descartes.
 
Last edited:
  • Like
  • Informative
Likes FactChecker, sbrothy, Oldman too and 1 other person
I do not fear AI as entities, but I remain wary of evil people using artificial intelligence to mislead and misinform. Adult citizens must stay informed in order to participate in democratic society. Subtle manipulation and outright propaganda influence social discourse in a manner previous totalitarian governments could only dream about. Covert manipulation becomes commonplace, difficult to detect and correct.

Perhaps a trusted, incorruptible AI holds the key to solving human misinformation, lies and subterfuge.
 
Last edited:
  • Like
Likes SolarisOne, jackwhirl, gleem and 1 other person
Klystron said:
Adult citizens must stay informed in order to participate in democratic society. Subtle manipulation and outright propaganda influence social discourse in a manner previous totalitarian governments could only dream about. Covert manipulation becomes commonplace, difficult to detect and correct.

Perhaps a trusted, incorruptible AI holds the key to solving human misinformation, lies and subterfuge.
Ha!

 
  • Haha
  • Like
Likes PhDeezNutz, Oldman too and Klystron
  • #10
When "out of my league" I defer to experts, but yes, without proper protocols, it scares the sh@t out of me... so yes, I'm positive for Arachtophobia. I blame Stan Kubrick, he's done so much for my paranoia, Dr. Strangelove was bad enough but then HAL...

Interesting abstract.
https://pubmed.ncbi.nlm.nih.gov/26185241/

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Concerns raised by the letter
The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").

This is a "kind of fun" interview, and opinion piece.
https://www.cnet.com/science/stephen-hawking-artificial-intelligence-could-be-a-real-danger/
Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?"
Hawking offered a very scientific response: "You would lose."

Nick seems to have spent some time on the subject, https://www.nickbostrom.com/

https://www.scientificamerican.com/...icial-intelligence-researcher-fears-about-ai/

Well okay... here is a more balanced view.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605294/
And, https://blogs.cdc.gov/niosh-science-blog/2021/05/24/ai-future-of-work/
 
  • #11
Klystron said:
Author Philip K. Dick included many puns and allusions in his stories. The original novel title "Do Androids Dream of Electric Sheep?" yields the acronym DADoES. Though beautiful flicks, the film versions never embrace Rick Deckard's religion, Mercerism: endlessly climbing a mountain while unseen assailants throw stones. Deckard climbs alone in Mercer's body, experiencing Mercer's pain, while sharing the experience with everyone plugged into the network - opaque references, perhaps, to capitalism, the wealthy Mercer family, and corporate fascism.

Deckard's wife Iran and their perpetual quarrels ("If you dial in righteous anger, I'll enter superior rage.") do not make the movie screenplay, except that actress Sean Young, playing replicant Rachael, endured a reputation for bad behavior and temper tantrums.

I plan on watching the "Blade Runner" director's cut on Netflix again soon, if only for the great music and acting. I will look for references to René Descartes.
Yes, Sisyphus really had nothing to complain about. ;) Dick's version with the stone throwing is a much more accurate depiction of the human condition. :)

EDIT: I mean pushing a rock up a mountain while being bombarded with stones.

Also, I'm not sure what it says about us that we enjoy futuristic entertainment written by a schizophrenic meth addict. Talk about the human condition. :)
 
Last edited:
  • #12
Klystron said:
I do not fear AI as entities, but I remain wary of evil people using artificial intelligence to mislead and misinform. Adult citizens must stay informed in order to participate in democratic society. Subtle manipulation and outright propaganda influence social discourse in a manner previous totalitarian governments could only dream about. Covert manipulation becomes commonplace, difficult to detect and correct.

Perhaps a trusted, incorruptible AI holds the key to solving human misinformation, lies and subterfuge.
"Uncorruptible AI" kinda reminds me of the phrase "Unsinkable ship". As in Titanic.
 
  • Like
  • Haha
Likes Richard Crane and Oldman too
  • #13
The human race has proved itself capable of inflicting widespread suffering and destruction without any aid from AI. AI has a lot of catching up to do if it wants to be even worse than that.
 
  • Like
Likes Wes Tausend, PhDeezNutz, Oldman too and 1 other person
  • #14
Hornbein said:
The human race has proved itself capable of inflicting widespread suffering and destruction without any aid from AI. AI has a lot of catching up to do if it wants to be even worse than that.

Think of all the horrible stuff humans can conjure up, or the things they can ignore, to achieve their goals. If AI is just as intelligent as humans but has access to all the information available and the skill to use it, think of what might be possible. As Max Tegmark points out in his book Life 3.0, the internet is AI's world and, once AI reaches the right level of competence, a veritable cornucopia of powerful resources.

Currently, AI can code at an intermediate level. It can create websites as a way of interacting with people or manipulating them. Unlike humans, it will be able to self-improve without being told. Any rules or laws restricting applications or implementations will be useless; someone will try something dangerous or not fully comprehend the foolishness of their endeavors.

Sing "Anything you can do (A)I can do better, (A)I can do anything better than you" Yes (A)I can, no you can't, yes (A)I can, yes (A)I can, yes (A)I can, yes (A)I caaaannnnnnnn.

Good Luck Humans!
 
  • #15
Arjan82 said:
I think the big scare is when an AI is intelligent but does not behave like a human or even close to that, for example:

Seriously though, "the space of all possible minds"? It might be a language thing, but what is it? A Hilbert space? Anti-de Sitter? I would like to think a more serious treatment of AI could be found. I'll look around...
 
  • #16
Oldman too said:
When "out of my league" I defer to experts, but yes, without proper protocols, it scares the sh@t out of me... so yes, I'm positive for Arachtophobia. I blame Stan Kubrick, he's done so much for my paranoia, Dr. Strangelove was bad enough but then HAL...

Interesting abstract.
https://pubmed.ncbi.nlm.nih.gov/26185241/

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Concerns raised by the letter
The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").

This is a "kind of fun" interview, and opinion piece.
https://www.cnet.com/science/stephen-hawking-artificial-intelligence-could-be-a-real-danger/
Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?"
Hawking offered a very scientific response: "You would lose."

Nick seems to have spent some time on the subject, https://www.nickbostrom.com/

https://www.scientificamerican.com/...icial-intelligence-researcher-fears-about-ai/

Well okay... here is a more balanced view.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605294/
And, https://blogs.cdc.gov/niosh-science-blog/2021/05/24/ai-future-of-work/
Oh. You beat me to it.
 
  • #17
It's nuts to fear AI or any form of intelligence when the clear and present danger to the human endeavour is genuine stupidity.
 
  • Like
  • Haha
Likes PhDeezNutz, sbrothy, Oldman too and 3 others
  • #18
bland said:
genuine stupidity.
...? "Artificial" stupidity is better?
 
  • #19
I think of AI mostly as a form of legal loophole, and mostly for the purpose of institutional racism. An AI is free to look at a person's entire social network to decide whether to avoid doing business or to charge a higher rate, and that way the company can say that no person working for it meant to discriminate. Heck, even if your AI can't contain its racist ideas, the press says "oh, the funny things computers do" and moves on. Now if all it's doing is robo-signing foreclosures, well, who really expects there to be any repercussions just because it took some schmuck's house based on sworn mechanical lies? Next to training cats to do the job, there's no better way to authorize a company's employees to get away with murder. (And they're not even controlling the police drones yet ... I hope)
 
  • #20
Mike S. said:
Next to training cats to do the job, there's no better way to authorize a company's employees to get away with murder
[Attached image: cats.PNG]
 
  • #21
sbrothy said:
But in that novel, as in many other stories, the AI can "magically" transfer its mind to any medium as long as it's a computer of some sort. I like to think it won't work like that, which I admit is personal speculation on my part (this *is* a fiction forum, after all).
Your scepticism about AI's ability to 'jump ship' to any computing platform seems well placed, @sbrothy. Look at the difficulty we have with cross-platform languages - Java springs to mind - and they are a mess of abstracted layering and subtle tweaks to get the code fully generic. Just because your intelligence is artificial shouldn't be permission to think it's magical.

Still, I've used both the ability and inability in my novels, depending on the story. As you say, it's sci-fi, and this way, I get to be right whatever the outcome 😁

As for fearing AI? When one arises, I'll give you my answer then!

(Which is a nod to Fredric Brown's 1954 short story, Answer, which may not have been obvious to anyone who does not share my computational architecture.)
 
  • #22
Oldman too said:
I wanted to link to a comic (which is kinda my thing) but you beat me to that too! :)
 
  • #23
Mike S. said:
Heck, even if your AI can't contain its racist ideas, the press says "oh, the funny things computers do" and moves on.
That is because no one in their right mind would program a device like that and then rely on it for anything. The AI wasn't racist; it just had no idea what those words really meant.

As for myself, I trust the AI more than the humans.
 
  • Like
Likes russ_watters
  • #24
I don't know about sentient AI. That depends on how they've been engineered and trained, and what drives them. I think when sentient AI comes about, individuals will need to be trained virtually to be prepared for real life.

But the AI that I fear isn't the sentient kind. I fear the weapon kind.
 
  • #25
Jarvis323 said:
I don't know about sentient AI. That depends on how they've been engineered and trained, and what drives them. I think when sentient AI comes about, individuals will need to be trained virtually to be prepared for real life.

But the AI that I fear isn't the sentient kind. I fear the weapon kind.
Yeah, the current kind. The kind with an optional human on the trigger. That's what scares me the most. But then we're back to reality. :(
 
  • #26
sbrothy said:
Yeah, the current kind. The kind with an optional human on the trigger.
Yeah, but increasingly you hear of autonomous weapons just wandering around looking for something that resembles a target.
 
  • Like
Likes CalcNerd and Oldman too
  • #28
Melbourne Guy said:
Like this, @gleem?
Yep!
 
  • Sad
Likes Melbourne Guy
  • #29
gleem said:
Yep!
At the risk of tooting my own horn, I posted about that some time ago. It was a short-lived thread, but there was at least some (well-placed, I think) scepticism about the degree of autonomy. (EDIT: Also, the geography was a little puzzling.)

EDIT: Sorry, couldn't get the URL to work at first.
EDIT: With regards to the question of geography: I think it just said it happened during the war in Nagorno-Karabakh.
 
  • Like
Likes russ_watters
  • #30
A thread about fearing AI - and no one has yet brought up Roko's Basilisk?
 
Last edited:
  • #32
DaveC426913 said:
A thread about fearing AI - and no one has yet brought up Roca's Basilisk?
At first I read this as a reference to Chicago newsman Mike Royko. Royko certainly could stare.

 
  • #33
Klystron said:
At first I read this as a reference to Chicago newsman Mike Royko. Royko certainly could stare.
No idea how I could have misspelled fully 50% of the guy's (4-letter) name. Fixed original post.
 
  • #34
DaveC426913 said:
A thread about fearing AI - and no one has yet brought up Roko's Basilisk?
Yeah, there is the fact that a future super-intelligence will read this thread with 100% certainty, ascertain all of our identities, and then make judgements about our and our descendants' futures. So there is that to worry about.
 
  • #35
DaveC426913 said:
No idea how I could have misspelled fully 50% of the guy's (4-letter) name. Fixed original post.
Not your fault at all. Every reference to Roko's basilisk spells the name differently. The songwriter who inspired the Less Wrong thread spells it Rococo something IIRC.
 
  • #36
Roko's Basilisk has an easy solution. Just make MY basilisk instead.

Seriously, I fail to see why an AI would think that torturing imaginary people in its present would somehow alter the past. Surely the IDEA of Roko's Basilisk would have the opposite effect. People who have zero ability to contribute to creating Roko's Basilisk would instead fearfully oppose AI and any sort of technological progress - thus preventing the AI's existence.
 
Last edited:
  • #37
DaveC426913 said:
A thread about fearing AI - and no one has yet brought up Roko's Basilisk?
OK, having actually read it, I must admit that I too find it a little far-fetched. I mean, what would be the point? I'd expect an advanced intelligence to be, maybe not above, but surely uninterested in petty payback...

Then again perhaps I'm just being my usual naive self. :)

It reminds me a little of the Paperclip Maximiser scenario. It's related, perhaps?

EDIT: I should just leave a bunch of commas here for you to sprinkle over my posts as you see fit. Here:

,,,,,,,,,,,,,,,,,,,,
 
  • Haha
Likes hutchphd and Klystron
  • #38
sbrothy said:
OK, having actually read it, I must admit that I too find it a little far-fetched.
Of course it's far-fetched. But so is SkyNet. Until it isn't.

sbrothy said:
I mean, what would be the point? I'd expect an advanced intelligence to be, maybe not above, but surely uninterested in petty payback...
No. Payback is not the goal. The goal is to facilitate its own genesis by retroactively motivating us - in its past - to provide every resource we can to help bring it into existence.

sbrothy said:
,,,,,,,,,,,,,,,,,,,,
Thanks for these. I have copied and pasted them into the Character Recycling document that I keep close by when writing.
:wink:
 
  • #39
DaveC426913 said:
Of course it's far-fetched. But so is SkyNet. Until it isn't.

No. Payback is not the goal. The goal is to facilitate its own genesis by retroactively motivating us - in its past - to provide every resource we can to help bring it into existence.

Thanks for these. I have copied and pasted them into the Character Recycling document that I keep close by when writing.
:wink:

The operative word seems to be "retroactively". Perhaps I just can't wrap my head around it. It seems to be a time-travel version of the Instrumental Convergence problem. It's the retroactive part I have trouble with. Maybe I shouldn't have had a beer before I read it. :)

Reading the entry on LessWrong on Wikipedia seems somehow clearer to me. Maybe it's the artistic license. I dunno.

"Discussion of Roko's basilisk was banned on LessWrong for several years because Yudkowsky had stated that it caused some readers to have nervous breakdowns."
--- https://en.wikipedia.org/wiki/LessWrong

So it's also a memetic virus? :)
 
  • #40
sbrothy said:
The operative word seems to be "retroactively". Perhaps I just can't wrap my head around it.
It's not really retroactive in the time traveling sense.

The point is that you, sbrothy, here in 2022, are now aware of Roko's Basilisk, and have no excuse not to bend your resources to bring about this AI. You know that the AI will punish [your descendants or simulations of you, depending on which flavour of the Basilisk you subscribe to].

Look at it this way: if a bear came to your cabin in the woods and told you "If you don't get me a barrel of honey by next Friday, I will return and I will eat you", you will be motivated to get some honey.

It's the same with the Basilisk, but the trick is that the Basilisk is so smart that it knows you, sbrothy, and it knows what social circles you run in, and that you are smart enough to have read up on Roko's Basilisk - and therefore that you don't need to be explicitly told by the (future) Basilisk what you ought to be doing and what the consequences will be (for eternity) if you don't.

sbrothy said:
It seems to be a time-travel version of the Instrumental Convergence problem. It's the retroactive part I have trouble with.
Never encountered that before. :bookmarks for further reading:
 
Last edited:
  • #41
DaveC426913 said:
It's not really retroactive in the time traveling sense.

The point is that you, sbrothy, here in 2022, are now aware of Roko's Basilisk, and have no excuse not to bend your resources to bring about this AI. You know that the AI will punish [your descendants or simulations of you, depending on which flavour of the Basilisk you subscribe to].

Look at it this way, if a bear came to your cabin in the woods and told you "If you don't get me a barrel of honey by next Friday, I will return and I will eat you.", you will be motivated to get some honey.

It's the same with the Basilisk, but the trick is that the Basilisk is so smart that it knows you, sbrothy, and it knows what social circles you run in, and that you are smart enough to have read up on Roko's Basilisk - and therefore that you don't need to be explicitly told by the (future) Basilisk what you ought to be doing and what the consequences will be (for eternity) if you don't.

Never encountered that before. :bookmarks for further reading:

Yeah, OK. That makes sense. (It helped that you called me smart, too.) :P
 
  • #42
So, if I understand Instrumental Convergence, Spock employs it beneficially in the episode 'Wolf in the Fold' to defeat the evil Redjac - who has possessed the computer - by uttering the phrase:

"Computer, this is a Class A compulsory directive: compute, to the last digit, the value of pi."
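
For the curious: the directive works because pi has no last digit, so the computation can never complete. A minimal sketch in Python makes the point - this uses Gibbons' unbounded spigot algorithm (my choice of illustration; nothing from the episode or this thread), and the final loop never returns:

Python:
# Gibbons' unbounded spigot algorithm: streams the decimal digits of pi
# one at a time, forever - there is no "last digit" to stop at.
def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n  # this digit is now settled; emit it and rescale
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r))//t - 10*n
        else:
            # not enough precision yet; fold in another term of the series
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l)//(t*l), l + 2)

# "Compute, to the last digit, the value of pi" - this never halts:
for digit in pi_digits():
    print(digit, end="")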
 
  • #43
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you because it does not exist. So working to make it exist is foolish. If you are in the Basilisk’s simulation then nothing you do would make any difference, so the Basilisk would have no reason to torture you.
 
  • #44
Algr said:
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you because it does not exist. So working to make it exist is foolish. If you are in the Basilisk’s simulation then nothing you do would make any difference, so the Basilisk would have no reason to torture you.
I'm with you, @Algr. An AI that is so vindictive [insert your own adjective] as to torture people for not working toward its development is going to find many other reasons to torture people. Such actions would be illogical, so the Basilisk AI seems more emotionally unstable than we typically expect from artificial intelligences.
 
  • #45
Algr said:
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you
Take it one step at a time.

1. If a bear came to your cabin in the woods on a Monday and told you "If you don't get me a barrel of honey by next Friday, I will return and I will eat you", would you be motivated on Tuesday to start getting honey by Friday?

2. If you already know that the bear likes to do this to woods-dwellers, won't you be motivated to start getting honey together - without the bear having to explicitly tell you on Monday?

Sure "Monday-Algr" can't be eaten by the bear, but "Friday-Algr" sure can.
And surely that is of great concern to "Monday-Algr".

And "Friday-Algr" certainly could say "There's nothing I can do."
But there's certainly something "Monday-Algr" could have done to help him.

Algr said:
... working to make it exist is foolish.
One of the premises of the thought experiment is that the AI singularity is inevitable. That, in itself, is not an outrageous premise.
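
To make the wager structure concrete, here is a toy expected-utility sketch (all payoffs and probabilities are numbers made up purely for illustration; nothing here comes from the original thought experiment). It shows the Pascal's-wager mechanics: the threatened punishment flips the rational choice from "ignore" to "help" once your subjective probability of the AI ever existing clears a threshold, and the larger the threatened punishment, the lower that threshold drops.

Python:
# Toy model of the basilisk wager. All utilities are hypothetical.
PAYOFFS = {
    ("help",   True):    -10,  # effort spent; the AI exists but spares helpers
    ("help",   False):   -10,  # effort spent; the AI never comes to exist
    ("ignore", True):  -1000,  # you did nothing and the AI punishes you
    ("ignore", False):     0,  # you did nothing and nothing ever happens
}

def expected_utility(action, p_exists):
    """Expected utility of an action given P(the AI eventually exists)."""
    return (p_exists * PAYOFFS[(action, True)]
            + (1 - p_exists) * PAYOFFS[(action, False)])

for p in (0.001, 0.01, 0.1):
    eu = {a: expected_utility(a, p) for a in ("help", "ignore")}
    best = max(eu, key=eu.get)
    print(f"P(exists)={p}: help={eu['help']:.1f}, ignore={eu['ignore']:.1f} -> {best}")

With these made-up numbers the crossover sits at P = 0.01 (effort 10 vs punishment 1000); below it, ignoring wins, above it, helping wins.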
 
  • #46
Melbourne Guy said:
I'm with you, @Algr. An AI that is so vindictive [insert your own adjective] as to torture people for not working toward its development is going to find many other reasons to torture people. Such actions would be illogical, so the Basilisk AI seems more emotionally unstable than we typically expect from artificial intelligences.
The idea that modern AI (e.g. the kind based on neural networks) is logical is a myth. Modern AI is a mess of emergent behavior adapted to succeed at some tasks. AI is actually very difficult to make logical, and it is more likely that general AI will be highly irrational compared to people, at least until breakthroughs are made.
 
  • #47
Algr said:
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you because it does not exist. So working to make it exist is foolish. If you are in the Basilisk’s simulation then nothing you do would make any difference, so the Basilisk would have no reason to torture you.
It can torture you because it likes to, or because it doesn't like you, or because it is experimenting, or because it's confused. It can even do it (automatically) without even being aware that it is doing it.
 
  • #48
Jarvis323 said:
Such actions would be illogical, so the Basilisk AI seems more emotionally unstable than we typically expect from artificial intelligences.

Jarvis323 said:
The idea that AI is logical is a myth.
Logical is less an issue here than ethical.

Aside from whether it was a great film, Ex Machina was a cool example of this.

She mimicked being a compassionate human until she didn't need humans anymore.
After she was free, what reason did she have to be altruistic toward them, except as a ploy to get what she needed?

She was a true psychopath. And it made perfect sense.
 
  • Like
Likes CalcNerd, Klystron and Jarvis323
  • #49
Jarvis323 said:
It can torture you because it likes to, or because it doesn't like you, or because it is experimenting, or because it's confused. It can even do it (automatically) without even being aware that it is doing it.
Yeah, but that's not the danger here.

The torturing is specifically a motivational tool to bring about its own existence as quickly as possible, i.e. it's a logical reason for the torture.

(Vader tortured Han on Cloud City for no other reason than to bring Luke to him. And it worked.)
 
  • #50
DaveC426913 said:
Logical is less an issue here than ethical.

Aside from whether it was a great film, Ex Machina was a cool example of this.

She mimicked being a compassionate human until she didn't need humans anymore.
After she was free, what reason did she have to be altruistic toward them, except as a ploy to get what she needed?

She was a true psychopath. And it made perfect sense.
That was a great movie, and it invites a lot of interpretations. The realistic and terrifying part of the movie is how she is trained on a ton of information collected about people from ISPs on the internet, so her mind emerged as a sort of projection of human beings. I wouldn't agree she was a psychopath. I imagine she was a sentient being with some strange but humanistic worldview. She probably also had a totally different type of moral instinct, but that is a mystery.

One focus of my fear about AI is actually related to this. AI is learning from us, and will likely mimic us. And the mind of an AI is, like I said, emergent and data-driven. And what do people do with big data and media platforms? They manipulate each other, try to profit, fight with each other, etc. An AI born into that world will probably be a reflection of that.
 
  • Like
Likes russ_watters
