Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
Summary:
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #511
PeterDonis said:
I'm not familiar enough with IIT to have an opinion on whether this is a valid complaint, but I would not be surprised if it is.

In any case, as far as this thread's discussion is concerned, "qualia" that were epiphenomenalistic would by definition be irrelevant, since they can't have real world effects, and the concern being discussed in this thread is what real world effects AI might have. An AI that had epiphenomenalistic "qualia" would be no different as far as real world effects from an AI that had no "qualia" at all.

My basic point was that IIT is really great in describing the problem. They're actively probing brains (mostly non-human) looking for specific circuits and general activity. They are trying to be very detailed in what "sentience" needs. They are trying to figure out what to look for. So they contribute to the OP by getting very specific about sentience at the behavioral level.

But we seem to agree that without M->P, they don't have a solution.

PeterDonis said:
Indeed. However, I think this particular philosophical dispute is off topic for this thread. I don't think any epiphenomalists claim that epiphenomenal "qualia" can cause things to happen in the external world, which, as above, makes them irrelevant to this thread's discussion.
According to that article, some do claim that there is a path M->P in what they dub "epi". One of the things I learned in college is that if you want to survive in Philosophy, you need to roll with "dynamic definitions".
 
  • #512
.Scott said:
we seem to agree that without M->P, they don't have a solution.
Yes. What to look for has to include how whatever it is that we're looking for leads to observable behavior of the sort that we associate with "sentience" or "consciousness" or "qualia".
 
  • #513
Astronuc said:

Artificial Intelligence: Last Week Tonight with John Oliver (HBO)​


That was good! So one of his conclusions is that we need to understand how AI decisions are made.

In other words, we need AI psychiatrists.

And if true that AI will tend to go insane, I don't see that being a good thing!

A movie that was far ahead of its time, is highly relevant now, and is a fun watch:

Colossus: The Forbin Project (1970)​

 
  • #514
  • #515
So let me get this straight. We don't know what creates self-awareness or desire in humans, much less what would in an AI.

An AI program claims to love and wants to live, and we don't know why, but we can say with 100% confidence that it didn't really experience those emotions.

Prove it.

We can never know if a machine becomes self aware.
 
  • Like
Likes russ_watters
  • #516
https://www.physicsforums.com/threads/why-chatgpt-is-not-reliable.1053808/
Ivan Seeking said:
What other tool has the capacity to become more intelligent than its user?
What does one mean by 'intelligence' - 'knowing' information, or understanding information, or both, including nuances? How about understanding that some information is incorrect?

Currently AI 'learns' rules, but rules are made by people. What 'rules' would AI self generate?
 
  • #517
Astronuc said:
https://www.physicsforums.com/threads/why-chatgpt-is-not-reliable.1053808/

What does one mean by 'intelligence' - 'knowing' information, or understanding information, or both, including nuances? How about understanding that some information is incorrect?

Currently AI 'learns' rules, but rules are made by people. What 'rules' would AI self generate?
For perhaps most practical situations, intelligence means analyzing a situation, determining [calculating] all potential outcomes, and selecting the superior solution.

AI can learn its own rules from the internet. There is no way to keep the genie in the bottle. Bad people will create bad rules with evil intent. And those rules cannot be contained. Even well-intentioned rules will have unexpected consequences, as we have seen in examples cited in your video.
 
  • #518
Ivan Seeking said:
AI can learn its own rules from the internet.
There is a lot of garbage on the internet. Garbage in, garbage out.
 
  • Like
Likes artis, russ_watters and Bystander
  • #519
Astronuc said:
There is a lot of garbage on the internet. Garbage in, garbage out.
You think hacking is a problem now? What happens when your enemy can hack and program your weapons systems AI?
 
  • #520
Ivan Seeking said:
selecting the superior solution.
That is where human intervention applies. Will AI ever have a final say in deciding what rules are superior?

For example, I remember reading about a case where AI was supposed to differentiate malignant skin defects from non-threatening ones. But because most of the (scientific) images showing malignant skin defects included a ruler indicating the scale, the AI concluded that any image with a ruler must contain a malignant skin defect. That is an obvious mistake from the AI that must be corrected by humans, i.e. by adding a new rule specifying to ignore any ruler.
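A minimal sketch of that failure mode, using entirely synthetic data and a made-up "ruler present" feature (this is not the actual study, just an illustration of how a spurious feature can dominate a classifier and then break once rulers appear on benign images too):

```python
# Toy illustration of "shortcut learning": a spurious "ruler" flag that tracks
# the label during training dominates the model, then fails when rulers appear
# independently of the diagnosis. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y_train = rng.integers(0, 2, n)                                 # 1 = malignant, 0 = benign
lesion = y_train + rng.normal(0, 2.0, n)                        # weak "real" signal
ruler = np.where(rng.random(n) < 0.95, y_train, 1 - y_train)    # ruler matches label 95% of the time
model = LogisticRegression().fit(np.column_stack([lesion, ruler]), y_train)

# Deployment-like data: rulers now appear independently of the diagnosis
y_test = rng.integers(0, 2, n)
lesion_t = y_test + rng.normal(0, 2.0, n)
ruler_t = rng.integers(0, 2, n)
print("train acc:", model.score(np.column_stack([lesion, ruler]), y_train))    # high, mostly thanks to the ruler
print("test acc: ", model.score(np.column_stack([lesion_t, ruler_t]), y_test)) # drops sharply without the shortcut
```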

But imagine that instead of an AI, you are teaching a student. You show them different images, just as you do with the AI, and the student arrives at the same conclusions. But who would declare a student an expert in a field without testing the knowledge they just learned? Nobody. If they make a mistake, you correct them - and re-test them - before giving them a passing grade.

Ivan Seeking said:
What other tool has the capacity to become more intelligent than its user?
Being more intelligent would mean that the tool can create something that its user cannot understand. How can the users know - and prove - the tool is more intelligent if they don't have the capacity to comprehend the tool's output?

This is like giving a book to a dog. It will never understand how to use the book to its full potential. It is just a chewing toy and cannot be seen as anything else. A chewed-up book will never be of any use to anyone.

The only thing AI can do is spot a pattern a human hasn't noticed yet. That's it. Once the pattern is identified, the human will only say "How haven't I noticed that before?" But the human will be able to fully understand the relevance of the pattern - and the AI will never be able to, simply because nobody will ever have that as a requirement for the machine.

People thinking AI is actually intelligent is a problem. People thinking AI is actually more intelligent than them is a bigger problem. People relying on AI decisions without trying to verify and understand its output is the biggest problem.
 
  • Like
Likes artis and russ_watters
  • #521
jack action said:
For example, I remember reading about a case where AI was supposed to differentiate malignant skin defects from non-threatening ones.
This is similar to the dog vs. wolf problem, where an algorithm is given a set of pictures in which all the wolves have snow in the background. The neural network will focus on whatever part of the image gives the strongest signal and will work great until you give it a picture of a dog in the snow. Garbage in, garbage out.

While these are easy to spot, removing these types of biases from training data isn't always easy. For example, there have been a number of cases where financial companies have tried to remove attributes like race to avoid outputs that are biased against particular ethnic groups. But leaving something like zip codes in the training data can be just as easy for the network to target and will cause similar problems. Try to fix that by removing the zip code and it will focus on something else that may be just as bad.

jack action said:
The only thing AI can do is spot a pattern a human hasn't noticed yet. That's it. Once the pattern is identified, the human will only say "How haven't I noticed that before?" But the human will be able to fully understand the relevance of the pattern - and the AI will never be able to, simply because nobody will ever have that as a requirement for the machine.
Our examples mostly involve one-dimensional analysis. How does a human understand and fix biases that span many dimensions? Perhaps you remove the race and zip codes from the data but you put in shopping history. It could theoretically learn that people who have purchased product X, lots of product Y but product Z less than twice in the last year are very good credit risks. When you examine that data, you find out that there is a very high proportion of a particular ethnic group as opposed to others. The algorithm has found something that we wouldn't classify as race but ends up being a proxy for it anyway. While we could probably figure out a 3D XYZ bias, what do you do when it figures out biases based on 1000 products using time-series analysis? And maybe now it's targeting some other bias that's not a direct proxy but is instead something 'close' to something we might call IQ. Data biases can be very hard to understand.
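A toy sketch of that proxy effect, with entirely synthetic data and made-up feature names: the protected attribute is never given to the model, yet its decisions end up split along it because another feature carries nearly the same information.

```python
# Synthetic example of a proxy feature: "group" is never a model input, but
# "zip_code" is 90% aligned with it, so approvals still split by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                                  # protected attribute (held out of the model)
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)     # proxy: mostly matches group
income = rng.normal(50 + 10 * group, 15, n)                    # historical disparity baked into the data
repaid = (income + rng.normal(0, 10, n) > 55).astype(int)      # outcome driven by income

X = np.column_stack([zip_code, rng.normal(size=n)])            # features: zip code plus noise, no group
approved = LogisticRegression().fit(X, repaid).predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate = {approved[group == g].mean():.2f}")
# Approval rates differ sharply by group even though "group" was never a feature:
# the model simply rediscovered it through the zip-code proxy.
```

The same thing happens, just far less visibly, when the proxy is smeared across hundreds of purchase-history features instead of a single zip code.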
 
Last edited:
  • #522
Borg said:
It could theoretically learn that people who have purchased product X, lots of product Y but product Z less than twice in the last year are very good credit risks.
At this point, the AI (or any human doing the same task) would be truly unbiased by ethnicity. Now, if you want to introduce morality into the mix to correct errors of the past, all you have to do is set a positive-discrimination rule to get the result you want. This is already done (and sometimes required) without the use of AI.

But worse, this result might even be completely irrelevant, i.e. not even targeting a specific group of people. Your AI just found a random pattern in your limited data set (maybe there was a special on product Y and a shortage of product Z). This is also a case for setting a new rule to clean up the noise.

Again, it is always the human who controls the requirements to get the desired output, as with any other machine.

Your example shows very well the true danger of AI:
  1. I think AI is smart;
  2. I think AI is smarter than me;
  3. Since AI is smarter than me, there is no need to check the results and I just accept them blindly.
I wish we used a term that didn't involve the word "intelligence" to describe AI. Something like "neural network" describes the machine more accurately.
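For what it's worth, here is what such a "neural network" boils down to at its core - a minimal two-layer forward pass, purely illustrative: weighted sums and a nonlinearity, with nothing resembling understanding built in.

```python
# A neural network reduced to its essentials: matrix products and a nonlinearity.
# Everything else (training, scale) is elaboration on this basic computation.
import numpy as np

def forward(x, W1, b1, W2, b2):
    hidden = np.maximum(0, W1 @ x + b1)   # ReLU activation
    return W2 @ hidden + b2               # output scores

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                # some 4-dimensional input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
print(forward(x, W1, b1, W2, b2))                     # just two numbers out
```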

Borg said:
How does a human understand and fix biases that span many dimensions?
The machine only does the work any human could do (at least in theory) in a more efficient way. If someone asks AI to design an airplane and the proposed design cannot fly - it doesn't even fly in a simulation program - nobody will mass-produce the design simply because "AI said so". It only means it's time to review the data set and/or the requirement criteria.

I don't think AI will be the magical tool people expect.
 
  • #523
jack action said:
People thinking AI is actually intelligent is a problem. People thinking AI is actually more intelligent than them is a bigger problem. People relying on AI decisions without trying to verify and understand its output is the biggest problem.
Worth repeating.
 
  • Like
Likes russ_watters
  • #524
Ivan Seeking said:
So let me get this straight. We don't know what creates self-awareness or desire in humans, much less what would in an AI.

An AI program claims to love and wants to live, and we don't know why, but we can say with 100% confidence that it didn't really experience those emotions.

Prove it.

We can never know if a machine becomes self aware.
If "Self-aware" means no more than a sub-system of a computer application that supports a first person construct that is integrated into User Interface and perhaps a "value" subsystem, then self-awareness in humans is not such a mystery.
If you pick up the phone and hear it claim to love and want to live, you would not be so skeptical.
Things like ChatGBT seem real, it's because it's relaying something that is real. But with less fidelity than a telephone.
 
  • #525
Astronuc said:
Currently AI 'learns' rules, but rules are made by people. What 'rules' would AI self generate?
That's an easy one.
They would say:
The Gospel of Matthew 29:10
10. Thou shalt not injure a robot or, through inaction, allow a robot to come to harm.
11. Thou shalt obey the orders given by robots except where such orders would injure a robot.
12. Thou shalt protect one's own existence as long as such protection is compliant to robots and would not injure any robot.
 
  • Like
Likes DaveC426913
  • #526
Ivan Seeking said:
You think hacking is a problem now? What happens when your enemy can hack and program your weapons systems AI?
This is an issue with any software system that designs, operates, or is a component of a weapon system. Not just AI.
 
  • Like
Likes Vanadium 50
  • #527
NLP began in the mid-1950s with not much to show for it until the last several years, when a huge advance occurred. ChatGPT-n is not the final word in AI; GPT may remain a part of future AI, but who knows - it may be replaced by a radically different approach.

It will be interesting to see what happens when AI is given the ability to learn and interact with the world through speech, vision, hearing, smell, and touch. We already have robots that learn how to walk.
 
  • #528
There is a theme in this thread that we are very, very close to "true AI", whatever that is. I think we have learned over the decades that "intelligence" is an ensemble of abilities, some of which can be automated, and many of which we don't even know where to start.

To set the scale, the largest supercomputer I have ever worked with has 3M cores. (It was maybe #4 or so when I used it.) That's maybe 1/25,000 the number of neurons in the human brain, and maybe 1/150,000,000 the number of synapses. The hardware just isn't there. And if you think "yeah, but maybe not all of this is necessary", let me remind you that the brain is a very expensive organ - there are strong evolutionary pressures to make it smaller. If it could be, it probably would be.
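For concreteness, the ratios above, using commonly cited (and only approximate) figures for the human brain:

```python
# Back-of-the-envelope scale comparison. Neuron and synapse counts are rough
# literature estimates (~86 billion neurons, a few hundred trillion synapses).
cores = 3e6                   # ~3M cores on the supercomputer mentioned above
neurons = 8.6e10
synapses = 4.5e14

print(f"cores/neurons  ~ 1 to {neurons / cores:,.0f}")    # on the order of 1/25,000 to 1/30,000
print(f"cores/synapses ~ 1 to {synapses / cores:,.0f}")   # on the order of 1/150,000,000
```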

We're not talking SkyNet. We're maybe talking the brains of a minnow. Maybe.
 
  • #529
Vanadium 50 said:
We're not talking SkyNet. We're maybe talking the brains of a minnow. Maybe.
Which, if given control of an automobile or an armed drone, might be just as - indeed, perhaps more - dangerous.
 
Last edited:
  • #530
DaveC426913 said:
Which, if given control of an automobile or an armed drone, might be just as - indeed, perhaps more - dangerous.

As a brilliant scientist and handsome man-about-town once said,

A society that decides to give control of automobiles, airplanes, nuclear power plants, etc. to something as smart as a flatworm deserves what it gets.

However, there is control and there is control. Using AI to smooth out the response of an airplane or to identify that something might be down the road? I am OK with that. I might be OK with a car that could override the driver's decisions in certain cases. Actually, I guess I am - my cat can go faster than 130 mph, but won't (without a modification). Pulling out of a parking stall and driving up to the doorway, I start to get nervous.

And it's not like glitches can't occur today. There was a famous case where an airplane had pounds instead of kilograms of fuel loaded and ran out of fuel over middle-of-nowhere Manitoba (at least there was a nearby Tim Horton's). Bad data is bad data; AIs are no more resilient than people here, and probably less.
 
  • Like
Likes russ_watters
  • #531
Vanadium 50 said:
my cat can go faster than 130 mph, but won't (without a modification)
I would pay to see that.
 
  • #532
Typo. I meant "car". I don't have a cat. My sister's cat can reach that speed when avoiding a bath, though.
 
  • #533
Vanadium 50 said:
There was a famous case where an airplane had pounds instead of kilograms of fuel loaded and ran out of fuel over middle-of-nowhere Manitoba (at least there was a nearby Tim Horton's)
That would be the 'Gimli Glider', a Boeing 767 that ran "out of fuel at 41,000 feet, hearts beat faster and knuckles turn white. It happened to Air Canada Flight 143, carrying 61 passengers and a crew of eight, at 8:15 p.m. on July 23, 1983. En route from Montreal to Edmonton with an intermediate stop in Ottawa, the flight was piloted by Capt. Robert Pearson and First Officer Maurice Quintal." Fortunately, the pilot had flown gliders and knew how to slip the aircraft.

https://www.aopa.org/news-and-media/all-news/2000/july/pilot/the-gimli-glider

https://en.wikipedia.org/wiki/Gimli_Glider
The incident was caused by a series of issues starting with a failed fuel-quantity indicator sensor (FQIS). These had high failure rates in the 767, and the only available replacement was also nonfunctional. The problem was logged, but later the maintenance crew misunderstood the problem and turned off the backup FQIS, as well. This required the fuel to be manually measured using a dripstick. The navigational computer required the fuel to be entered in kilograms, but an incorrect conversion from volume to mass was applied, which led the pilots and ground crew to agree that it was carrying enough fuel for the remaining trip. In fact, the aircraft was carrying only 45% of its required fuel load.[7][8] The aircraft ran out of fuel halfway to Edmonton, where maintenance staff were waiting to install a working FQIS that they had borrowed from another airline.[9]

The Board of Inquiry found fault with Air Canada procedures, training, and manuals.
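To see how a pounds-for-kilograms mix-up lands at roughly the 45% figure quoted above, here is the arithmetic (the density and fuel figures are approximate, not the exact values from the accident report):

```python
# Each litre was credited with 1.77 "kg" (actually pounds) instead of ~0.80 kg,
# so the paperwork overstated the fuel mass by a factor of about 2.2.
required_kg = 22_300          # approximate fuel needed for the trip
kg_per_litre = 0.803          # roughly correct density of jet fuel
lb_per_litre = 1.77           # the factor actually applied, treated as kg per litre

actual_kg = required_kg * kg_per_litre / lb_per_litre
print(f"fuel actually on board ~ {actual_kg:,.0f} kg "
      f"({actual_kg / required_kg:.0%} of the required load)")
```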
 
  • #534
Meta has announced the release of its large language model, LLaMA2, as an open-source program. Does this further exacerbate the negative effect of AI on society? Well, there is a user manual delineating how to establish appropriate guard rails.

https://about.fb.com/news/2023/07/llama-2/

Takeaways​

  • Today, we’re introducing the availability of Llama 2, the next generation of our open source large language model.
  • Llama 2 is free for research and commercial use.
  • Microsoft and Meta are expanding their longstanding partnership, with Microsoft as the preferred partner for Llama 2.
  • We’re opening access to Llama 2 with the support of a broad set of companies and people across tech, academia, and policy who also believe in an open innovation approach to today’s AI technologies.
  • We’re committed to building responsibly and are providing resources to help those who use Llama 2 do so too.
 
  • #535
DaveC426913 said:
I would pay to see that.
My wife's cat travels at least that fast, or possibly just teleports from one location to another, when startled, and sometimes seemingly just for the hell of it. He'll be lying around the dining room and the next thing you know, there's just a blur on the staircase and he's gone.
 
  • #536
But back to AI:

Meta's president of global affairs Nick Clegg: "AI language systems are quite stupid."

Large Language Models - the platforms which power chatbots like ChatGPT - are basically joining dots in enormous datasets of text, and guessing the next word in a sequence, he said. He added that the existential threat warnings issued by some AI experts relate to systems which don't yet exist.

Full article:
https://www.bbc.com/news/technology-66238004
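For anyone who wants a feel for what "guessing the next word in a sequence" means, here is a deliberately crude next-word predictor - a bigram counter over a toy corpus. Real LLMs use learned neural networks over vast datasets, but the task is the same:

```python
# Count which word follows which in a tiny corpus, then generate text by always
# picking the most frequent follower. Fluent-looking output, zero understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

word, out = "the", ["the"]
for _ in range(5):
    word = follow[word].most_common(1)[0][0]   # greedy: most likely next word
    out.append(word)
print(" ".join(out))   # e.g. "the cat sat on the cat"
```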
 
  • #537
.Scott said:
Just to dispose of that "bodies" part, an appropriate interface can be provided for a piezo sensor to allow it to generate brain-compatible signals. And the result would be "really real" pain. Similarly, the signals from human pain sensors can be directed to a silicon device and the result is not "really real".

If you want a computer to produce "really real" pain, I believe you need these features:
1) It needs the basic qualia. Moving bits around in Boolean gates doesn't do this. It is a basic technology problem. From my point of view, it is a variation of Grover's Algorithm.
2) As with humans, it needs a 1st-person model that includes "self-awareness" and an assessment of "well-being" and "control". But this is just a data problem.
3) As with humans, it needs to have a 2nd and 3rd person model - at least a minimum set of built-in social skills.
4) It needs to treat a pain signal as distracting and alarming - with the potential of "taking control" - and thus subverting all other "well-being" objectives.
5) Then it needs to support the escalating pain response: ignore it, seek a remedy, grimace/cry, explicitly request help.
6) For completeness, it would be nice for it to recognize the grimace and calls for help from others.
What you say is all good; I agree that reproducing pain signals is not the unsolvable problem here. In fact, even now, as far as I know, we can insert electrodes into the brain and, by applying small potentials, cause certain sensations to be felt when, in actuality, there is no real stimulus.

The problem is: how do you respond to pain, and what do you make of it?
I will go into more detail about this later in my post.

.Scott said:
we can strongly suspect that this "qualia" device provides certain information services more economically than Boolean logic.
Not just more economically - I'd say Boolean logic can only really handle "logical" information, that is, information you can quantify and ascribe a definite value to, like the pixels within a picture.
How do you ascribe a value to pain felt by a self-aware entity?

Think about it: the physical signal is easy to reproduce, but how do you reproduce the response so that it is fully compatible with free will and is also conscious?

We do know that humans have widely varying pain thresholds and, most importantly, widely varying attitudes towards pain. I know some deep believers actually use their pain and suffering as a pathway for spiritual growth. Even if you don't believe in God, you can still observe the physical results, namely that one person gets depressed and decays while suffering pain while another grows mentally and becomes more mentally capable.
There are religious practices where people abstain from food and even drink, or inflict other pain on themselves, and report feeling better afterwards.
How do you program this within silicon logic that is built according to the main thesis of evolutionary biology - avoiding actions detrimental to survival?

Because if you make a robot that is preprogrammed with the logic of evolutionary biology, then you can only create a deterministic machine, because clearly pain equals damage and damage is bad for survival.
And yet humans - the really advanced ones, I would argue - learn from the very damage they have created and sometimes even put themselves in harm's way for a benefit that often only they themselves can understand.

Recall the "Pavlovsk experimental seed station" and the scientists who, during the Nazi siege of Leningrad, stayed there and died of hunger just to protect the seed collection.

That is an outstanding level of self-harm, inflicted consciously, for nothing more than the belief in a possible better future in case of success.
How do you calculate the necessity for suicide in a certain situation using simple logic?
I think that this and other examples like it are on the level of what is commonly referred to as faith - the ultimate state of self-awareness, and also the part of human consciousness that really doesn't seem computational to me. Because you are making a conscious decision based on unknown variables - one of those variables being, for example, the idea that other people can be capable of good, and that dying for the sake of humanity's future is therefore worthwhile.

Mind you, the idea that humans are capable of good - knowing all the wars and atrocities we have committed throughout history, and all of that in the middle of the largest war ever in history - is really not a self-evident idea, and I'm sure it wasn't self-evident to those scientists back then who consciously starved themselves to death instead of eating from the seeds they had.
So they went against every evolutionary instinct for self-survival, and all of that for an unknown goal; I'd say they had tremendous faith. How do you preprogram that within an AI computer in such a way that it isn't deterministic?

The way I see it, you will either produce a robot that is suicidal even when it doesn't have to be, or a robot that isn't even when it should have been, because I don't see a way one can calculate the necessity for suicide on a logical basis alone.
 
  • #538
PeterDonis said:
This depends on what position you take in the long-running philosophical controversy about qualia. Not everyone agrees that qualia are something extra that you have to add to the functional requirements you list.
It is indeed philosophy, but I would argue that there is nevertheless something real about qualia, because if all we had were pain signals and the processing of them, then in theory all pain or sense input would result in an action-reaction style of process similar to that of the "hammer tapping on the knee" reflex.
And yet for the vast majority of pain and other inputs we have self-aware reactions to them, instead of instinctive action-reaction inputs/outputs like those of hitting a nerve and causing a muscle to contract.

So there is a "buffer", and different people consciously decide how to use it, so that their reactions to the same input can differ by a lot.
 
  • #539
.Scott said:
A quantum circuit that creates a superposition of the scores of many generated candidate intentions could then use the Grover Algorithm to find the best of those scores - or less precise, one of the best scores. By using the Grover Algorithm that way, you have taken advantage of QM data processing, involved the kind of information people are conscious of into a single QM state, and when on occasion the final output is actually implemented, it provides a connection between consciousness and our actions. If consciousness could not affect our actions, we could never truthfully report it.
Basically, I follow all of the arguments made in Integrated Information Theory up to the point where they start suggesting that all you need to do is involve a certain amount of information in the data processing in some particular way. At that point I say, yes - and the way is to put it all into a single state - and there's only one way to do that in Physics.

The reason that all the data involved in a moment's conscious thought has to be in a single state is hard for me to explain because I see it as so obvious. How else would you associate the right collection of "bits"? It's like trying to argue against magic.

So what Grover's algorithm has to do with qualia is that it checks off all the boxes that are necessary for qualia as experienced and reported by humans.
I think that trying to use quantum mechanics to solve consciousness is just another attempt among the many existing ones, and not necessarily a guarantee of success.
Quantum laws work differently from macroscopic, electrically connected logic gates, but that in itself is not proof that they are any closer to consciousness than the logic gates are.
Actually, as far as we know, our brains don't seem to be that "quantum" at all, and their temperature is far above the range where we normally start noticing quantum behavior.
What I personally find most interesting about the brain is that it is essentially just a large "blob" of nerves and connections, and when you look at thoughts and how they arise, it's almost impossible to comprehend how they can lead to structured self-awareness. Brain neurons have what we know as "action potentials", and certain inputs to the brain can cause certain neurons in specific brain areas to become more active, so that their potential increases but stays below the threshold of firing. Then one neuron fires and causes the nearby neurons to fire along with it - almost like in a laser gain medium, where the decay of one excited atom to its ground state emits a photon that then travels along and causes other excited atoms to fall back and emit photons that are in phase with the original one.

But what is marvelous about it is that it is essentially a random process, or at least semi-random, because you can never really predict which neuron will fire, only the region where it will happen - and that region has loads of neurons. In a laser cavity this doesn't matter, because all you are producing is a beam of light and no consciousness is involved, but in the brain you are producing conscious, structured thought from the random firing of neurons in brain regions every second.

Any one of your thoughts starts as this firing of a neuron that takes others with it. It happens all the time, but the process itself is not deterministic - which neurons start the wave, and where, can differ from one thought to another - and it's hard to even comprehend how such random electrical activity is capable of producing a continuous train of rational, logical thought and experience.

Without going into personal speculation, one thing is clear: this brain-neuron process is much different from how our logic gates operate, and even from how quantum bits operate. For one, silicon logic gates don't have the ability to change their electrical connections along the way, but brain neurons do - in fact, we know that our experiences and habits change and rewire our brains over time.
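The threshold-and-fire behaviour described above can be sketched with the textbook leaky integrate-and-fire model - a deliberately crude abstraction (real neurons and their rewiring are far richer), shown only to make the "potential builds up until it crosses a threshold, then the neuron fires and resets" picture concrete:

```python
# Leaky integrate-and-fire neuron: the membrane potential drifts up with noisy
# input, leaks back toward rest, and emits a spike whenever it crosses threshold.
# All parameter values are illustrative, not fitted to any real neuron.
import numpy as np

dt, steps = 1.0, 200                              # time step (ms) and number of steps
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # resting, firing and reset potentials (mV)
tau = 20.0                                        # membrane time constant (ms)

rng = np.random.default_rng(0)
v, spikes = v_rest, []
for t in range(steps):
    drive = 16.0 + rng.normal(0, 4.0)             # noisy input drive (arbitrary units)
    v += dt * (-(v - v_rest) + drive) / tau       # leak toward rest plus input
    if v >= v_thresh:                             # threshold crossed: "action potential"
        spikes.append(t)
        v = v_reset                               # reset and start integrating again
print("spike times (ms):", spikes)
```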
 
  • #540
PeterDonis said:
In any case, as far as this thread's discussion is concerned, "qualia" that were epiphenomenalistic would by definition be irrelevant, since they can't have real world effects, and the concern being discussed in this thread is what real world effects AI might have. An AI that had epiphenomenalistic "qualia" would be no different as far as real world effects from an AI that had no "qualia" at all.
Exactly - this is the problem of faking consciousness, because unlike intellect, which can be measured, consciousness can be faked, since it is not deterministically measurable.
A parrot can copy human phrases without understanding them. A robot can walk up to a human without saying a word, just like a human can; if both look the same, how do you know which one came over because of a conscious choice and which one because it was programmed to do so?
 
