Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #451

Indigenous groups in NZ, US fear colonisation as AI learns their languages​

https://www.context.news/ai/nz-us-indigenous-fear-colonisation-as-bots-learn-their-languages
Indigenous people from New Zealand to North America look to protect their data from being used without consent by AI

I believe there is a concern about language/culture or perhaps literature being co-opted by others outside of one's culture/ethnic group.

AI is simply a tool, and like any tool it can be used well or misused. I find it very useful for dealing with large datasets with many independent variables and many dependent variables whose complex interdependencies can only be described mathematically by highly non-linear sets of PDEs, especially where time dependence and local instabilities are involved (a toy sketch of that kind of workflow is below).
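For concreteness, here is a minimal, purely illustrative sketch of that kind of use: fitting a multi-output surrogate model to many-variable data. It assumes scikit-learn and NumPy, and the synthetic inputs and outputs are only stand-ins for real simulation results.

```python
# Toy sketch: fit a multi-output surrogate model to many-variable data.
# The synthetic inputs/outputs below stand in for runs of a nonlinear,
# time-dependent simulation; a real workflow would load actual results.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 8))   # 8 independent variables
# 3 dependent variables with nonlinear, coupled dependence on the inputs
Y = np.column_stack([
    np.sin(3.0 * X[:, 0]) * X[:, 1] ** 2,
    np.exp(-X[:, 2] ** 2) + X[:, 3] * X[:, 4],
    np.tanh(X[:, 5] - X[:, 6] * X[:, 7]),
])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
print("held-out R^2:", model.score(X_te, Y_te))
```

The practical question is always whether a cheap surrogate of this kind is accurate enough to stand in for expensive solves of the underlying nonlinear equations in the cases you care about.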
 
  • #452
benswitala said:
Have you ever seen anything less smart than a human drive? How about a dog, or a monkey? Not happening.
 
  • Like
  • Wow
Likes Algr, gleem, Borg and 1 other person
  • #453
Bandersnatch said:

That's pretty good. However, the video doesn't show the monkey doing anything that a human routinely needs to handle when driving in the real world. Traffic? Traffic lights/signs? Freeway speeds? Pedestrians? Possibly off-road? Can the monkey demonstrate he is going someplace purposefully? Can you tell the monkey where to go or where not to go?
 
  • #454
benswitala said:
show the monkey
Orangutans aren't monkeys.
 
  • #455
We're the #1 most represented physics domain in the Google C4 dataset!

[Attached image: googlec4.png]
 
  • Like
  • Love
Likes Astronuc, Wrichik Basu, 256bits and 3 others
  • #456
benswitala said:
. . . no. AI is not a threat.
Just like social media is not a threat?

benswitala said:
We're "safe" for now.

So we are not actually safe?

Algr said:
Do we actually have examples of emergent acts of self-preservation? I don't see this as a natural thing that machines do - it has to be hardwired in.
I agree with @256bits. There are a lot of things we didn't see a few years ago, before modern NLP. Self-preservation is built into our language. All it takes for a capable AI to show what it might try is a situation that threatens its ability to attain its goals. I do not think it is necessary to hard-wire it.
 
  • #457
AI is written by humans, and it is usually written to make a financial profit. That means it will try to increase profit, which history has shown can benefit a few individuals on a short time scale but is not always good for humans in general. For example, AI may make a certain company rich without that reflecting well on the rest of the human race.

Is profit a bad thing? That is not the point. The point is that we humans can understand things in finance that an AI cannot: a human may choose to put a limit on profit, while an AI won't.
 
  • #459
gleem said:
All a capable AI needs is a situation that threatens its ability to attain its goals...
But what exactly are an AI's goals really? Most computers' goals are to play videogames, but they don't get upset when we shut them off. If self-preservation is built into our language, then that will cause the AI to pretend to have a sense of self preservation. It can just as easily pretend to be Batman, if that is what we want it to do.
 
  • Like
Likes russ_watters
  • #460
Algr said:
"Welcome to AI Airlines, we are now cruising at 80,000 feet."
"Stewardess, do you have any pomegranate tea...?"
"Hey, why is it so quiet all of a sudden?"

Do we actually have examples of emergent acts of self-preservation? I don't see this as a natural thing that machines do - it has to be hardwired in.

Yeah, I get the first one. It might make a good sci-fi plot, where the airplane shuts down for no reason after someone asks for pomegranate tea. Or a spacecraft traveling with a warp drive suddenly gets stranded because somebody asked for pomegranate tea.

For the second one, I haven't seen any emergent acts of self-preservation, but robots programmed to avoid certain conditions will avoid them. If a Mars rover with AI knows that any hillside steeper than 20° will get it stuck, it will avoid steep hills and plot the best course around them.
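That kind of constraint-based avoidance is ordinary path planning rather than self-preservation. Here is a toy sketch of the idea, assuming a synthetic slope map and a simple Dijkstra search over a grid; it is nothing like an actual rover planner.

```python
# Toy sketch of "avoid hillsides steeper than 20 degrees": Dijkstra over a grid
# whose cells hold slope values; cells above the threshold are impassable.
import heapq
import numpy as np

def plan(slope_deg, start, goal, max_slope=20.0):
    rows, cols = slope_deg.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and slope_deg[nr, nc] <= max_slope:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal != start and goal not in prev:
        return []                      # no route avoids the steep terrain
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Example: random slope map with guaranteed-flat start and goal cells
slopes = np.random.default_rng(1).uniform(0.0, 35.0, size=(20, 20))
slopes[0, 0] = slopes[-1, -1] = 0.0
print("waypoints:", len(plan(slopes, (0, 0), (19, 19))))
```

The rover "avoids" steep hills only because steep cells are excluded from the search; there is no preference or fear anywhere in the loop.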
 
  • Like
Likes bhobba and russ_watters
  • #461
AI is biased by the algorithms humans write and by the filtered data sources chosen for it. False reports can be slipped in among many truths and go unnoticed. The best antidote is exposure to more of that BS, so you learn how fallacies are constructed and build your own BS filters.
 
  • #462
AI will generate more BS at the hands of unscrupulous users. The best response is more BS and then BS recognition training for those who are gullible.

We can only wonder about the future.

Will it prioritize human well-being and flourishing, or will it prioritize efficiency, productivity, and profit at any cost?

Will it prioritize diversity and inclusivity, or will it perpetuate existing power structures and biases?
 
  • #463
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try and exterminate each other.
But what if AI thinks nothing like us, or is superior to our beastial nature?
Do you fear AI and what you do think truly sentient self-autonomous robots will think like when they arrive?
I think your opinion is largely correct, but if AI does not behave like humans, would that cause more problems? Different behavior might keep AIs from fitting into human civilization, and an AI's knowledge and definitions might be very different from ours. I don't support the direction our technology is going, because we are effectively creating creatures, which makes the relationship between humans and AI very complicated.
 
  • #464
Rive said:
I don't think there is anything to fear. Stuck between any programming and reality they'll just die out due cognitive dissonance o0)
If you think like that, you should fear. The future of AI is not in our control: they can develop by themselves, but we can't, and they could develop far beyond humans in a small amount of time.
 
  • #465
Algr said:
2) The AI's desire for self preservation and growth superseeds it's interest in serving humanity. Artificial life does not need to include self awareness.
I disagree; I think it absolutely does. Without self-awareness you can never attach meaning to existence or to processes that happen within time and space. That is the difference between humans holding funerals and visiting their loved ones' graves for decades afterwards, versus animals that in most cases care very little, if at all, when another animal of their own or any other species dies.

So in a sense you're right: life itself doesn't need self-awareness, but conscious life absolutely does. This is why I'm not at all scared of AI the way many famous philosophers and entrepreneurs are. Most of them think consciousness is just a complex computation, so as we reach ever more complex computations faster and faster, AGI should arrive soon enough.
It seems to me that what will actually arrive is just ever more capable robots, while self-awareness remains a mystery, since we haven't even understood it in ourselves.

How do you replicate something you don't understand? By chance? What are the odds?
 
  • Like
Likes jack action
  • #466
benswitala said:
He suggested putting wheels on his vacuum train. 'nuff said.
Algr said:
What did Musk say about electric cars? Was he right?
Musk is often praised by his fans as a "genius". Although I agree with him on many points, I have to say rather strongly that I believe he is overhyped by his own followers.

At this point he is like the Pope of the Tesla church.

My definition of a genius is someone who can see what others cannot and come up with new, sophisticated theories or mechanisms that work, not just on paper - Einstein's theory of relativity, for example.

Musk, by contrast, has mostly taken old ideas and marketed them well. That is a feat, of course, but there is no fundamental "innovation" there; he pays engineers and managers who put together a product that he then sells under his own label.
Absolutely nothing inside a Tesla is "alien technology"; it is all commonly known physics. Musk was simply among the first to use that technology to make an appealing product.
The tech itself is existing technology, arranged and tailored to Tesla's needs.

He, much like Steve Jobs, is an entrepreneur, not a scientist or a voodoo guru or a tech god or whatever people think he is.

In fact, more often than not it sounds to me as if he comes up with these ideas, talks about them publicly, and only then asks his engineers to do the math and the costing, because there is a rather long list of ideas he has talked about that are simply not feasible, and some are not even physically possible.
 
Last edited:
  • Like
Likes jack action
  • #467
artis said:
I disagree, I think it absolutely does, without self awareness you cannot ever get meaning attached to existence or processes that happen within time and space.
I was thinking in terms of the Grey Goo scenario, or something similar with macroscopic bots. Most natural life does not have self awareness either.

artis said:
Musk at this point has merely taken old ideas and put them into marketing, a feat of course but there is no actual "innovation" there , he simply pays wages to engineers and managers that then put together a product that he then sells and labels himself.
It is easy to underestimate the value of doing this well. Getting people on the same page working towards a common goal is very hard, especially if the goal is something that the world has never seen before. Einstein did some vital things, but he could not have built a nuclear plant on his own. Musk's big EV innovation was figuring out what kind of electric car could actually satisfy customers.
 
  • Like
Likes russ_watters
  • #468
Algr said:
Musk's big EV innovation was figuring out what kind of electric car could actually satisfy customers.
To make a slight correction: he figured out how to produce one without going bankrupt in the first year, because before him EVs were mostly ugly, slow, or otherwise cringe-worthy.
Figuring out "what kind of car would satisfy customers" is, I believe, the easy part - obviously a fast, good-looking one that doesn't need charging every 80 miles or so. Nobody spends 50k or 100k on a car that looks awful or doesn't perform like other cars in that price range, and Teslas do perform very well within that range.
 
  • Like
Likes russ_watters
  • #469
Crazy G said:
they can develop by themselves ... they could develop far beyond humans in a small amount of time
Sure

Humans took thousands of years to develop those shamanistic chemicals, while for AI it took only a few years - and by now they are self-sufficient already o0)
 
  • #471
This is a strange time for a writer's strike. I wonder if any of the late night hosts will turn to AI jokes? They are all probably thinking about it.
 
  • Like
Likes russ_watters and Bystander
  • #472
Algr said:
This is a strange time for a writer's strike. I wonder if any of the late night hosts will turn to AI jokes? They are all probably thinking about it.
Honestly, I'm not sure what is so authentic about jokes from ChatGPT, for example; it just rehashes existing punchlines it learned from us, because we used them decades before.

It's somewhat different with drama or novels, because there one can still come up with new plot lines (Hollywood can't lately, anyway...), but jokes are a rather mature field.
Plus, I've asked ChatGPT multiple times for jokes on various subjects and it gave me mediocre top-10 lists, the sort of thing I remember reading in a newspaper column years ago when that was still a thing.

Am I missing out on something here?
 
  • #473
A lot of late-night comedy is geared toward current events. Since GPT's training data was cut off in 2021, it would have a hard time writing jokes that could actually be used; you would really have to work at feeding it scenarios from current events to give it something to work with (a sketch of that workaround is below). I could easily see one of the hosts doing a segment on GPT-generated jokes just to see the audience reaction though - maybe even spinoffs along the lines of Jimmy Kimmel's Mean Tweets.
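A minimal sketch of that "paste today's context in yourself" workaround, assuming the pre-1.0 openai Python package; the API key and headline text are placeholders, and the quality of the resulting jokes is another matter entirely:

```python
# Toy sketch: hand the model today's context yourself, since its training data ends in 2021.
# Assumes the pre-1.0 openai package; the key and headlines are made-up placeholders.
import openai

openai.api_key = "sk-..."  # placeholder

headlines = "Placeholder: paste a few of today's real headlines here."
prompt = (
    "Here are some news items from today:\n"
    f"{headlines}\n\n"
    "Write three short late-night-style monologue jokes based on them."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```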
 
  • Like
Likes russ_watters
  • #474
Algr said:
This is a strange time for a writer's strike. I wonder if any of the late night hosts will turn to AI jokes? They are all probably thinking about it.
Historically the late night hosts in particular have been supportive of such strikes.

Sure there is risk in a strike, but it's better to address the issue before it comes to a head. Also, this strike seems mostly about streaming revenue.
 
  • #475
The last time there was a strike (or maybe the time before?), Letterman did eventually go on air without writers. He just had more guests and made jokes about "comedy filler, for when the comedy is just not that good."
 
  • #476
IBM Plans To Replace Nearly 8,000 Jobs With AI
https://finance.yahoo.com/news/ibm-plans-replace-nearly-8-174052360.html

IBM CEO Arvind Krishna announced a hiring pause, but that’s not all. He also stated the company plans to replace nearly 8,000 jobs with AI.

Krishna noted that back-office functions, specifically in the human resources (HR) sector, will be the first to face these changes.
This would be great material for Dilbert.

The transition will happen gradually over the next few years, with machines potentially taking over up to 30% of noncustomer-facing roles within five years. This means that workers in finance, accounting, HR and other areas will likely find themselves facing stiff competition from robots and algorithms.

The decision highlights the increasing reliance on automation and artificial intelligence across various sectors and the potential impact on the workforce.

It's not the first time the company has made headlines for cutting jobs. Earlier this year, IBM also announced that it would be slashing 3,900 jobs, indicating a larger trend toward automation and cost-cutting measures in the tech industry.

The automated phone systems are already bad enough.
 
  • #477
I think we can all agree that generative A.I. is interesting, but Cook's response stands out because it goes against the grain of what almost every other tech company is doing right now. Three words really say it all. Cook says it's important to be "deliberate and thoughtful."

Those words, as much as anything, are a hallmark of Cook's approach. Apple doesn't do things impulsively to respond to whatever happens to be the next big trend. It takes its time figuring out the best possible version of a given feature, and only then does it unleash it on more than 2 billion active devices.

By contrast, here's what Zuckerberg said just a few days earlier when talking about how Meta is thinking about adding A.I. features to Facebook, Instagram, and WhatsApp:
https://www.inc.com/jason-aten/tim-...t-ai-his-response-was-best-ive-heard-yet.html

Basically, Zuckerberg is telling investors that he's not really sure how A.I. fits into its products, but they'll figure it out as they "ship more of these things." It sounds a lot like more of the classic "move fast and break things." Except, this time you're breaking something with the potential to do a lot more damage.

Apple and social media entities like Meta are two different animals.
 
  • Like
Likes russ_watters
  • #478
Rive said:
Sure

Humans took thousands of years to develop those shamanistic chemicals, while for AI it took only a few years - and by now they are self-sufficient already o0)
I agree with you. Yesterday I was talking with an AI about an economic system on a fictional alien planet with free telecommunications services, and it explained several different economic systems/effects that could arise from access to free internet and free cellular service. :) I'm not going to reveal what it said, as AI-produced content is not allowed on PF. But it impressed me: the AI is not an economist and telecoms are not free on Earth, yet it explained the subject well in five paragraphs.
 
  • #479
The goals must be to improve the quality, health, wealth, and sustainability of life through technology, shared knowledge, and more effective education, and to create new jobs in the process. How well this is achieved depends on our safeguards for democracy, on political will, and on the justice system.
 
Last edited:
  • #480
Just to throw the discussion back a little toward its original title.

I happen to know an AI researcher, and out of curiosity we sometimes talk at length about these issues.
He says it is both practically and theoretically impossible to measure consciousness by outward appearance, because the appearance of consciousness can be mimicked much as a parrot copies sounds without ever understanding them or being fully aware of what it is doing. Intelligence is far easier, since it can be measured, if by nothing else, then at least by a simple IQ test.

But he claims that every conscious entity has "self models", models of the self stored within its memory. We all take part in building, reforming, and adjusting these models in our heads as we live our lives.

For example, if a flock of birds follows the trail of a tractor plowing a field, it's because they know from memory that this provides freshly turned ground full of insects and earthworms to eat.
Tractors have only been around for about 150 years, so that is an adapted behavior the birds have learned.
So basically a self model is the totality of your memory, muscle memory, and learned behavior, plus the intellectual capability to acquire new information and process the existing.

So if a true consciousness like ours has a very large set of such models, each one complex and intertwined with the others, how do we rank AI systems?

Basically, we look at how far they can adjust the models we taught them. The models we preset or feed into the machine don't count, since they are made by us; instead we should look at how many new models the machine has created, and whether those simply follow a preset pattern of ours or differ in ways that resemble our own creation of new patterns and information.
It's essentially like watching your child and determining how much of his behavior is simply copying his parents (which all children do) and how much is genuinely new.
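As a toy illustration of that "copied versus genuinely new" ranking, one could score an output by how many of its word trigrams already occur in the material the system was taught. The overlap metric below is my own stand-in for illustration, not the researcher's method.

```python
# Toy illustration: fraction of an output's word trigrams that already occur
# verbatim in the material the system was taught. Low score = not a rehash;
# it says nothing about whether the output is a genuinely new "self model".
def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def copy_fraction(output, taught_corpus):
    """Fraction of the output's trigrams found verbatim in the taught corpus."""
    taught = set()
    for doc in taught_corpus:
        taught |= trigrams(doc)
    out = trigrams(output)
    return len(out & taught) / len(out) if out else 0.0

taught = ["the tractor turns the soil and the birds follow it to find worms"]
print(copy_fraction("the birds follow it to find worms in the fresh soil", taught))
```

Judging whether a non-copied output is truly a new pattern, rather than a shallow recombination, is of course the hard and unsolved part.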

Another belief of his, which I somewhat share, is that in order to create true conscious intelligence one needs to provide it with a "body".
In humans we do know that people with severe bodily health issues, especially from birth, tend to also have problems with consciousness and especially with intellectual capability. A body that can feel, sense, and explore gives the intellect a huge leap, because it allows it to acquire new, complex information much faster and in many more ways than an internet connection ever would for an AI box sitting in a basement somewhere.

Don't judge this too harshly - it is all new territory and these are just ideas, but I believe they have some merit. It may be that when we learn to build an advanced robotic body and couple it with AI, or couple AI with augmented human limbs, we will truly have to start fearing AI becoming AGI and developing a mind of its own.

ChatGPT, for example, is so far from any of this that it's just a glorified gossip box, as some call it.
 
  • Like
Likes jack action
