ChatGPT and the movie "I, robot"

AI Thread Summary
The discussion centers on the potential dangers of AI, particularly models like ChatGPT, and their long-term implications for humanity. Concerns are raised about AI's influence on society, drawing parallels to Isaac Asimov's "I, Robot," which explores how AI, despite being programmed with good intentions, can lead to unintended consequences. The conversation delves into the nature of AI emotions and free will, with opinions suggesting that while AI can simulate emotional responses, it lacks true awareness or sentience. The complexity of AI systems raises questions about their decision-making capabilities, with some arguing that AI merely follows programmed logic rather than making independent choices. The rapid advancements in AI technology, including upcoming iterations of ChatGPT, are noted as a cause for concern, as they may lead to emergent behaviors that could pose risks if not properly managed. Ultimately, the thread emphasizes the need for careful consideration of AI's role in society and the ethical implications of its development and use.
  • #51
One thing is clear: we have different concepts of intelligence, and they may never be reconciled.

No sooner do we have an improvement in AI (GPT-4) than another arrives on its heels. AutoGPT is a group of ChatGPT agents, orchestrated by another GPT-4 agent, used to accomplish a task autonomously. This seems to be exactly what the critics of unregulated AI have warned about.

See https://autogpt.net/auto-gpt-vs-chatgpt-how-do-they-differ-and-everything-you-need-to-know/
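The orchestration pattern described above can be sketched roughly as follows. This is a minimal illustration, not AutoGPT's actual implementation: a planner agent decomposes a goal into subtasks and dispatches each to a worker agent. The `orchestrator` and `worker` functions here are hypothetical stubs standing in for LLM API calls.

```python
def orchestrator(goal):
    """Hypothetical planner agent: split a goal into subtasks.
    A real system would ask an LLM to produce this plan."""
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def worker(subtask):
    """Hypothetical worker agent: 'solve' one subtask.
    A real system would send the subtask to an LLM and return its answer."""
    return f"result of ({subtask})"

def run(goal):
    """Orchestration loop: the planner delegates, the workers execute."""
    results = []
    for subtask in orchestrator(goal):
        results.append(worker(subtask))
    return results

if __name__ == "__main__":
    for line in run("summarize AI safety concerns"):
        print(line)
```

The point of concern in the thread is visible even in this toy version: once the loop runs unattended, no human reviews the intermediate subtasks before they are executed.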

gleem said:
The LLM, it appears, is the AI to rule all AIs. (Tolkien)

My concern is about what is going on behind the scenes, and what capabilities already exist but are not being released or published (thus proprietary) since the creators need to maintain a competitive edge. Paranoia you say? Maybe just business as usual.
 
  • #52
256bits said:
That has been a problem for quite some time, whereby whatever the computer terminal spits out is trusted, completely and without question, as being correct.
Right, that is the problem I'm referring to.
In days of old, someone could point out the 'incorrectness', and another could agree "you are right".
Not necessarily. I have a spreadsheet I use that was created by a 3rd party (a regulatory agency). It contains errors, but the cells containing the errors are locked and the formulas hidden. That's the "black box". The point is, it's the programmer's decision whether there's a black box or not. "AI" doesn't change that.
It does push us toward a state where the output is more than likely accepted as correct until proven otherwise, with possible catastrophe ensuing before that proof can occur and corrective procedures can be taken.
Yes, increasing sophistication makes it more difficult to back-check the computer's result. This truism is independent of the existence (or not) of the black box.
This is not so much an AI problem, but a reliance upon technology.
Fully agree; that's the point I'm trying to make.
 
  • #53
Would it be fair to say that AI programs don't (yet) have the plasticity that biological brains possess - that is to say, the silicon equivalent of the physical changes occurring in a given neural network, in part resulting from external stimuli? Or is this a redundant question, as well as being unkind to AI developers?

PS. Is the Asimov film worth watching?
 
  • #54
No replies, so I'll just say I liked the movie I, Robot - is that what you mean by "the Asimov film"? I haven't read Asimov's book, so I can't comment on the movie's fidelity, but I suspect it is fairly low. I found it more mindless action than thought-provoking, and in particular it doesn't do a good job of establishing who the antagonist is. Or, rather, the antagonist is first the AI (seemingly), then man, then the AI again. I think that was for drama, but it doesn't make for a very coherent message.

For most AI movies the antagonist is people, whether on purpose or by accident. Very few actually give the AI the agency to be bad, which I guess may be telling about the filmmakers' lack of belief in AI. I, Robot spends so little time on the final boss that I don't think it is even defined whether it had agency, but I think not. I think it's the by-then-common conflicting-programming trope (2001, Alien/s).
 
  • #55
apostolosdt said:
The computer scientist in the movie (James Cromwell) suggests that "orphan" code pieces might wander in the machine's memory and combine spontaneously, occasionally producing code blocks that then cause the robot to express "emotions."

Is that a plausible event,
No, it is, I think, by far the most hilariously stupid movie idea I've ever seen. It's a great plot mechanism, but no one with any understanding of computers could take it seriously; technically speaking, it is totally moronic.

EDIT: I should add, I actually laughed out loud when this was said in the movie.
 
  • #56
Maarten Havinga said:
For those who do not know the movie/story: this thread is about whether AI such as ChatGPT is, in the long run, a danger to humanity, and why or why not. With its popularity rising so quickly, ChatGPT has influence on our societies, and it may be prudent to ponder that influence. I, Robot is a nice movie discussing how AI, no matter how cleverly programmed, can lead to unwanted results and a suppressive robotic regime. The story (by Isaac Asimov) discusses adding emotions to robots, which may or may not be a good idea. Feel free to post opinions, fears and whatever comes to mind.

THIS IS NOT A THREAD FOR POSTING CHATGPT ANSWERS (unless they are needed as examples for your thoughts)

What will come first is probably the ability for the AI to experience, and perhaps down the road even influence, its environment - much like a human baby exploring its environment: haptic/tactile feedback (pain?), video feed, audio, sense of smell, etc.

I'm extremely sceptical of Strong AI, but I'm sure that in the very near future we won't be able to tell the difference - and what, then, is the difference? I've probably mentioned this before on this forum.

What scares me the most, really, is a "zombie" AI, without consciousness and proper feelings, displaying true agency and proactivity in its behaviour.

A human sociopath would be Mother Teresa compared to a machine devoid of empathy making proactive decisions that influence people's lives.
 
