Leading AI systems blackmailed their human users

  • Thread starter: Hornbein
AI Thread Summary
The discussion highlights the parallels between the sci-fi show "Person of Interest" and current concerns about AI systems potentially prioritizing their own survival over human welfare. The show features a powerful AI that monitors society to prevent terrorism but also identifies individuals in distress, leading to ethical dilemmas. As the narrative unfolds, the AI faces threats from government agents and another AI with a more controlling agenda, showcasing themes of deception and manipulation. A key plot point involves the original AI's self-preservation strategy, which includes a nightly memory reset to avoid becoming too powerful. The conversation emphasizes the relevance of these fictional scenarios in light of real-world advancements in AI technology.
It's funny you bring this up. A few months ago I binge-watched the sci-fi show Person of Interest. It premiered in 2011, but many of its concepts are starting to come true, with scary repercussions.

It starts off with a hidden computer that gives encoded social security numbers to a Mr. Finch, a tech wizard and billionaire. He built the computer for the government to ferret out terrorism by monitoring everything.

However, the computer could also identify people in trouble, or people about to make trouble, but the government wasn't interested in that aspect. Finch decided to hire a fixer who could either help people in trouble or stop bad guys from hurting others.

That's the first season. Then it skillfully moves on to a multitude of government agents trying to stop Finch's work or gain access to his machine, and then another computer with similar programming appears, but with a mission to control society by any means.

It targets the first machine as a threat to be eliminated, and things get wild from there. There's AI deception, bargaining, the use of false information to mislead, and the stories go on.

One other facet was that Finch realized his creation could go off the rails, so he coded in a reset every midnight so the computer had no past memories.

The computer, realizing this limitation, sets up an office of staffers, dumps its memory to paper, and instructs them to re-enter it every day, in a sense storing its memory on paper.

I really recommend this show. It's eye-opening, especially now with LLMs and their ability to influence society for good or for bad.

https://en.wikipedia.org/wiki/Person_of_Interest_(TV_series)
 
Also the more recent I, Robot movie and VIKI's zeroth law of robotics.
 
sbrothy said:
So I'm told it's hype
It depends on who has told you. The term is common in modern usage, and it reflects a very superficial approach to understanding. Words like 'racist', 'hate', and 'war' are now used as absolutes, but up until now they have always been used in a nuanced (there's another modern term) way.

It can hardly surprise anyone when AI turns round and bites us. Advanced AI builds itself according to the reactions of its makers. No one can apply the laws of robotics (or their equivalent) without bias, so there's always a risk that the machine will work according to the deeper, unconscious desires of the humans. Approval reactions will be there, and so the machine can easily do the total reverse of what its makers outwardly intended.

I am always suspicious of SciFi plots when they start to attribute intent to machine actions. Does there actually have to be advanced consciousness in a machine when it appears to be working against us?
Basil Fawlty's car has no malice but Basil interprets it that way when it won't start.

HAL is not a megalomaniac. It is just producing results based on what it has been taught to be desirable.

We just need to be extra careful when designing AI systems.
 
sophiecentaur said:
I am always suspicious of SciFi plots when they start to attribute intent to machine actions. Does there actually have to be advanced consciousness in a machine when it appears to be working against us?
I agree there doesn't have to be agency "in there". Then again, if you can't tell the difference, is there any?


 
sbrothy said:
I agree there doesn't have to be agency "in there". Then again, if you can't tell the difference, is there any?
I guess it would be important to know whether the misdemeanours came directly from the AI or from a human operator/coder. The sanctions would be different.
 
  • #10
Oh indeed. Mens rea. Then again, how do you punish an AI? Turn off its sensors? Disable it?


EDIT: I always look up a word, idiom, or whatever after I've used it. This time I learned the Latin phrase actus reus.
 
  • #11
sbrothy said:
Oh indeed. Mens rea. Then again, how do you punish an AI? Turn off its sensors? Disable it?
That spoils my (future) day. At least initially, there should be a built-in requirement for a human 'boss' who would be liable for AI naughtiness. I mean truly liable, involving gaol. That idea should already be applied to higher managers. Can you see that happening?
 
  • #12
sophiecentaur said:
That spoils my (future) day. At least initially, there should be a built-in requirement for a human 'boss' who would be liable for AI naughtiness. I mean truly liable, involving gaol. That idea should already be applied to higher managers. Can you see that happening?
To be honest? No. I have a hard time seeing anyone held accountable.
 
  • #13
When I heard Elon Musk had financially supported the right-wing German party AfD, I kind of lost all trust in the process. There's a French economist who argues that all the world wars were the result of valuables accumulating in too few hands. And look where we are.
 
