Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
AI Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #401
Hornbein said:
ChatGPT seems conscious to me. What care I the methods it may use?
It's still just a somewhat smart search engine. Calling it conscious seems to me to be... generous.
 
  • Like
Likes Structure seeker
  • #402
Hornbein said:
So I thought about it then chose the wrong thing instead of flipping a coin, you say.

Sometimes I literally flip a coin. Other times I don't try to figure it out and just do the first thing that comes into my head, to get it over with and avoid dithering.

There was a bridge player who was asked how he could make difficult decisions so rapidly. He said, I know I can't figure it out so I just do what I feel like doing. (This avoids the other side using a pause as a useful clue.)
Sure, one can come up with examples of random actions, but I would argue that the vast majority of the choices we make in our lives are more or less premeditated, including the evil ones.

Hornbein said:
ChatGPT seems conscious to me. What care I the methods it may use?
I'm not sure whether you mean this sincerely, but it's wrong either way.

Of course a language model made by humans, adjusted by humans, and trained on an enormous share of the information we humans have ever put online (you included) will sound and feel human, because it copies us exactly.

This, I believe, is a far bigger danger than AI taking over the world: us giving too much credit to a glorified search engine.
I think John Searle was right in that it is far easier to copy consciousness than it is to generate one.
And the copy does seem legitimate because it uses the very patterns and information a conscious being uses. The analogy that comes to mind is someone standing in a tunnel, hearing their own echo, and suddenly thinking someone else might be on the other side.
The speed of sound makes it seem like someone else is answering, since the echo comes with a delay, but in actuality you are just reflecting on yourself.
 
  • Like
Likes PeterDonis and russ_watters
  • #403
Hornbein said:
ChatGPT seems conscious to me.
Lots of people don't seem conscious to me. So many people seem to be living their lives on autopilot, never seeming to think past conformity and the expectations of others. I have to remind myself to put this down to my own perception. I don't know what is going on in the parts of their lives that they see as important.

On the other hand, there is a trend in some groups to call people they don't like NPCs. That is a term that comes from video games and roleplaying. If you know what the term means, the implications are very disturbing.

If the AI isn't conscious, but we act like it is, the consequences aren't as dire as the reverse mistake.
 
  • Like
Likes russ_watters and Hornbein
  • #404
Hornbein said:
ChatGPT seems conscious to me.
"Seems", yes. I think that's its point.
 
  • Haha
Likes Structure seeker
  • #405
Hornbein said:
ChatGPT seems conscious to me.
Weizenbaum's ELIZA program in the 1960s fooled people into thinking they were talking to an actual human therapist. It seemed that way to them. That doesn't mean ELIZA was actually a therapist.

ChatGPT is just a souped-up version of ELIZA that, instead of only being able to simulate a therapist, can simulate any human making authoritative, confident-sounding statements that have no reliable relationship to reality. But the fact that humans who do that are conscious does not mean ChatGPT is conscious.
 
  • Like
Likes Lord Jestocost, bhobba, Klystron and 2 others
  • #406
PeterDonis said:
Weizenbaum's ELIZA program in the 1960s fooled people into thinking they were talking to an actual human therapist. It seemed that way to them. That doesn't mean ELIZA was actually a therapist.

ChatGPT is just a souped-up version of ELIZA that, instead of only being able to simulate a therapist, can simulate any human making authoritative, confident-sounding statements that have no reliable relationship to reality. But the fact that humans who do that are conscious does not mean ChatGPT is conscious.
See "William's Syndrome."
 
  • #407
If a language model can make so many people question whether it's conscious, think about what will happen when we master the ability to couple such a model with a realistic-looking artificial human body, with enough movement capability that in simple actions like walking it is almost indistinguishable from an actual human.
 
  • Like
Likes russ_watters
  • #408
artis said:
If a language model can make so many people question whether it's conscious, think about what will happen when we master the ability to couple such a model with a realistic-looking artificial human body, with enough movement capability that in simple actions like walking it is almost indistinguishable from an actual human.
Somebody should make a movie about that or something... :woot:
 
  • Haha
Likes bhobba and russ_watters
  • #409
Algr said:
If the AI isn't conscious, but we act like it is, the consequences aren't as dire as the reverse mistake.
What would be the reverse?
The AI is conscious, and acts like it is (or isn't); and we act like it is (or isn't).
The AI isn't conscious, and acts like it is (or isn't); and we act like it is (or isn't).

Of the 8 possibilities, I am unsure which one is the most dire.

The red-button stop failsafe, I think, only works 100% of the time in the case of three 'isn't's; see the enumeration sketched below.
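For concreteness, the eight cases are just the Cartesian product of three yes/no questions. A throwaway Python enumeration (purely illustrative):

```python
# Enumerate the 8 combinations of (AI is conscious, AI acts conscious,
# we treat it as conscious) discussed above.
from itertools import product

for is_c, acts_c, treated_c in product([True, False], repeat=3):
    print(f"is={is_c!s:5}  acts={acts_c!s:5}  treated={treated_c!s:5}")
```

The three-'isn't' case is the row (False, False, False): the AI isn't conscious, doesn't act like it, and we don't treat it as if it were - the only row where the red button raises no complication.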
 
  • #410
Short term - not in the least. Long term (10 years +), no idea. This is moving so fast I have no idea what will eventuate. What I do know is that predictions can be wildly off the mark. Take driverless cars, which will eventually have a massive impact on society. It is a solved engineering problem. 10 years ago, there were predictions we would be driving them by now. But that is not what happened. While the basic problem is solved, getting them to drive at least as well as a human being has proved a long, hard slog. Progress is being made, but slowly. I don't see them taking off for at least 10 years.

But when the dam breaks, so to speak - watch out - society will dramatically change. Just imagine: no parking (hence no income from parking meters for local government or associated car parks), no real need for car ownership - you hire as needed (Uber will boom) - and I can think of many more changes. And that is just one area; there will be many others. So while I am sanguine short term, long term - watch out.

All I would suggest is that, as far as college is concerned, a general technology-based degree such as Data Science will continue to have a bright future.

A university near me, Bond, does not offer a straight Actuarial degree - you do it with another major (or minor):
https://bond.edu.au/program/bachelor-of-actuarial-science/subjects

I asked about the degree for the son of my physiotherapist, and they STRONGLY recommend that the second major be Data Analytics. The Actuarial degree is close to the Data Analytics degree, so doing both is easy: you only take 4 extra subjects for the Actuarial degree. Actuaries must do financial mathematics, contingencies, actuarial and financial models, plus stochastic processes, but otherwise the two are the same. At present, the job market for Actuaries is strong, but they foresee that over time fewer actual Actuaries will be required, and many will move over to Data Analytics. Plus, of course, passing all the actuarial exams is known to be HARD - only the best survive.

https://www.linkedin.com/pulse/actuary-endangered-profession-age-artificial-mahesh-kashyap/

His son decided on a double degree in Systems Engineering and Commerce.

Thanks
Bill
 
  • #411
bhobba said:
10 years ago, there were predictions we would be driving them by now. But that is not what happened.
I would say the same problem exists for conscious AI: most of the people researching it still think that human consciousness is just a complex computation, and therefore they throw more of the same at it...

But more of the same isn't working, and there isn't even a clear scientific theory arguing that the human brain actually works like a complex biochemical computer; at this point it's basically just an assumption.
We do have a good enough view of how the various signals pass into the brain and what the various brain regions approximately do, but it clearly seems that is not nearly enough to understand why the real-time objective information passing down our nerves can create a subjective capability to reason, observe, experience, and choose whether to even ignore the signals that come down.

I believe this is the biggest problem for self-driving cars: they don't have a "self", because our current AI-software-driven hardware has no self, and therefore it cannot reason, nor can it understand meaning. Driving down the road you see an endless stream of objects that, without subjective meaning, are nothing but shapes and forms, and as such they have to be computed and compared against a known database to determine what they are.
When you have to point the computer at every object it should recognize as human and avoid, it becomes time- and resource-consuming. It seems to me that subjectivity, by whatever means it works in our brain, is an absolute must for any system that wants to be not just intelligent but also conscious. Even more, the beauty of subjectivity is that it decreases the need for complex processing resources, because you can then recognize a familiar object from a sneak-peek view of it instead of going through complex geometrical algebra to calculate whether the points/pixels in your view constitute a human, a sign, a deer, or whatever.

I base my assumption on the fact that I get tired far faster solving algebra than driving down the road seeing humans and avoiding them; the latter is almost effortless to me, and for most humans it takes almost no brain power, as one glimpse is enough to determine with near certainty what it is that you see.
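To make the "compare against a known database" picture concrete, here is a minimal sketch of that style of recognition as a nearest-neighbour match on feature vectors. Everything in it (the extractor, the database, the threshold) is a made-up placeholder, not any real self-driving stack:

```python
import numpy as np

# Toy "known database" of labelled feature vectors. In a real perception
# system these would come from a trained network, not random numbers.
rng = np.random.default_rng(0)
DATABASE = {label: rng.normal(size=128) for label in ("human", "road sign", "deer")}

def extract_features(frame) -> np.ndarray:
    """Placeholder for a real feature extractor (e.g. a CNN backbone)."""
    return rng.normal(size=128)

def classify(frame, threshold: float = 18.0) -> str:
    """Label an object by its nearest database entry; every frame pays this cost."""
    f = extract_features(frame)
    label, dist = min(((lbl, float(np.linalg.norm(f - v))) for lbl, v in DATABASE.items()),
                      key=lambda pair: pair[1])
    return label if dist < threshold else "unknown shape"

print(classify(frame=None))  # None stands in for one camera frame
```

The point is the cost model: every frame triggers a full feature extraction and comparison, which is exactly the per-object computation being contrasted here with the near-free glance of a human driver.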
 
  • #412
artis said:
I base my assumption on the fact that I get tired far faster solving algebra than driving down the road seeing humans and avoiding them; the latter is almost effortless to me, and for most humans it takes almost no brain power, as one glimpse is enough to determine with near certainty what it is that you see.
None of this is evidence that driving takes much less "brain power" than solving algebra problems. It is only evidence that driving takes much less conscious "brain power" than solving algebra problems. But we have abundant evidence from neuroscience that a huge amount of unconscious brain power underlies everyday activities like driving. You just aren't aware of all the brain power being used because it's unconscious; it takes place below the level of your awareness.

You have that huge pool of unconscious brain power available for things like driving because those activities are similar enough to ones that humans evolved to do (driving is basically a way of getting around from one place to another, something humans have always done) that your brain has evolved a huge amount of functionality that works for it. The problem with "AI" software is that it has only been under development for roughly half a century, whereas humans have evolved as a species for hundreds of thousands of years, and many of our unconscious brain functions (such as identifying "objects" in your visual field--you aren't aware of how your brain does it, it just does it) evolved even before our species did. So "AI" software is, at the very least, hundreds of thousands to millions of years behind the human brain in evolutionary terms.

With regard to solving algebra problems, however, this is something human brains never evolved to do in the first place, so when you try to use your brain to do it, you have to consciously repurpose brain hardware and software that was designed by evolution for very different things. Whereas a computer can just be programmed from a clean sheet of paper for that specific problem. That's why computers can easily beat us at things like that while at the same time being many orders of magnitude worse than us for things our brains evolved to do, like picking objects out of a visual field.

By the way, none of this is evidence that human brains don't do computations either. It's just evidence that human brains are much, much better than computers at specific kinds of computations--the ones human brains evolved to do in real time in order to help the human survive.
 
  • Like
Likes Klystron, russ_watters and bhobba
  • #413
Hornbein said:
ChatGPT seems conscious to me. What care I the methods it may use?
Should the ability to reason be considered a necessary condition for being conscious? If so, then the engine is not conscious. It will always produce an answer, even if the answer is complete nonsense. It's not able to analyse the statements it produces at the level of first-order logic, for instance (and it's not supposed to! That's not a feature of statistical learning).
 
  • #414
PeterDonis said:
None of this is evidence that driving takes much less "brain power" than solving algebra problems. It is only evidence that driving takes much less conscious "brain power" than solving algebra problems. But we have abundant evidence from neuroscience that a huge amount of unconscious brain power underlies everyday activities like driving. You just aren't aware of all the brain power being used because it's unconscious; it takes place below the level of your awareness.
Well, but there isn't really anything close to "all the brain power", because we have known for some time that the human brain actually idles at close to the energy consumption of its peak performance.
In other words, your brain consumes almost the same energy when you're sleeping as when you're driving, or drinking your morning coffee, or solving those "evolution did not evolve us to do math" math problems.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5732842/
Sleep interrupts the connection with the external world, but not the high cerebral metabolic demand. First, brain energy expenditure in non rapid eye movement (NREM) sleep only decreases to ~85% of the waking value, which is higher than the minimal amount of energy required to sustain consciousness [3**]. Second, rapid eye movement (REM) sleep is as expensive as wakefulness and the suspension of thermoregulation during REM sleep is paradoxically associated with increases in brain metabolic heat production and temperature [4].
https://press.princeton.edu/ideas/is-the-human-brain-a-biological-computer

But if we just look at the brain’s power consumption, we must conclude that the human brain is very “green.” The adult human brain runs continuously, whether awake or sleeping, on only about 12 watts of power.

So, roughly 12 watts on average, irrespective of the task you're doing. I can agree with your assessment that math is more complicated for us because we don't have good "architecture" for it in our brains, but interestingly enough, energy-expenditure-wise, driving is just as costly as doing math.
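For scale, the arithmetic behind that comparison (the 12 W figure is from the Princeton excerpt above; the 300 W GPU draw is my own assumption for a typical accelerator, used only for contrast):

```python
BRAIN_POWER_W = 12   # from the Princeton excerpt quoted above
GPU_POWER_W = 300    # assumed draw of one modern accelerator card
HOURS = 24

brain_kwh = BRAIN_POWER_W * HOURS / 1000   # ~0.29 kWh per day
gpu_kwh = GPU_POWER_W * HOURS / 1000       # ~7.2 kWh per day
print(f"brain: {brain_kwh:.2f} kWh/day, GPU: {gpu_kwh:.1f} kWh/day, "
      f"ratio: {gpu_kwh / brain_kwh:.0f}x")
```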

Anyway, I think we agree that there is something about the unconscious/conscious subjective ability of a human brain to be aware and awake and yet consume almost no extra resources for it, because that is clearly what the data shows.
I guess you could say that simple driving is close enough to simply being awake and aware that it doesn't noticeably increase the brain's load, so you don't get tired of it that fast.
I have noticed myself that I only ever get tired when I have to do specific tasks that require a lot of focusing.
PeterDonis said:
By the way, none of this is evidence that human brains don't do computations either. It's just evidence that human brains are much, much better than computers at specific kinds of computations--the ones human brains evolved to do in real time in order to help the human survive.

Well, this is a tricky argument. While I agree that so far we don't have any clear evidence of whether brains do or don't compute their tasks, I would argue that most of the tasks they do, they seem not to compute, and I base my reasoning on the fact that the total energy consumed by walking and watching the surroundings is roughly the same as when you're asleep.
It only really becomes demanding when you sit down to a specific complex task, but awareness itself is effortless, as I'm sure you would agree.
My original point was that this is one of the key differences, not just between us and our current AI, but between us and all computers and the software that runs them in general: computers require processing for any information that enters them, and that processing can be clearly monitored by the increase in energy use, whereas our brains take in most of the information sent to them via the senses while sitting at the same energy level as when we are asleep and very little input is gathered from the senses.

You could argue that this is simply because the brain repurposes the same resources to different tasks as we go along, but can that really explain how the total power consumption and metabolism doesn't really change?
Because that would imply that either:
1) The vast majority of tasks in existence demand the same level of energy from the brain, which is totally unlike our computers, where energy consumption is directly proportional to the amount of information processed in a given time period; or

2) The brain always works at a very thin margin between min and max energy and is ready for anything you "throw at it", so the energy usage doesn't dramatically differ from task to task.

There seems to be some data for the second argument, although I find it hard to believe, because solving complex problems while awake means putting a large information input on top of an already large one: while you are solving math problems you are still awake and aware, and all the background processes are running. So if the brain were always close to its max capacity, judging by the linearity of its energy consumption, it would seem we should observe a noticeable decrease in our awareness capability when doing complex tasks.
 
  • #415
 
  • #416
Hornbein said:

We already know that autonomous driving is easier in a very "prepared" environment, like a city with white lane stripes, clearly visible signs, possibly an uploaded map in the car's memory, etc.; all these guides for the self-driving car serve as "rail tracks" for it.
Now put it in most small towns around the world, with bad asphalt, no white stripes, and a map that doesn't match actual road conditions, and you might just hit a pedestrian.
Or, as observed more often, you might get the car driving weirdly as it tries to compute from scratch what to do with the very limited input information it receives. That being said, I myself fully believe we will solve autonomous driving to the point where it is safer on average than real human driving, but that will happen long before we understand consciousness.
In fact I don't think the robot has to be conscious to be good at certain tasks, even driving; it's just that it most likely won't be as energy efficient at them as we are.
And in some rare cases it might perform worse than an actual human; other than that, one can use a driverless Tesla now and it does the job already.
 
  • Like
Likes bhobba and russ_watters
  • #417
artis said:
we have known for some time that the human brain actually idles
I don't think we "know" this. There has been research suggesting that we only use a fraction of our available "brain power" most of the time, but there has also been research suggesting otherwise.

artis said:
your brain consumes almost the same energy when you're sleeping as when you're driving, or drinking your morning coffee, or solving those "evolution did not evolve us to do math" math problems.
Yes, but you are assuming that sleeping uses much less "brain power" than driving or solving math problems. We don't know that is true either. As I understand it, most experts in the field believe that our brains actually do a lot of processing during sleep--for example, making sure short-term memories formed during the last awake period are stored in long term memory. Similar remarks would apply during waking periods when you're doing something like drinking coffee that doesn't use up a lot of conscious "brain power" the way solving math problems does. But there is still a lot of unconscious processing going on.

artis said:
so far we don't have any clear evidence of whether brains do or don't compute their tasks
Before even trying to assess that question, you need to first define what "compute" means. Or more to the point, what it doesn't mean. We already know that individual neurons act like analog computers--they take input signals and process them in a fairly complicated way to produce output signals. Is that not "computation"? If not, why not?

artis said:
I would argue that most of the tasks they do, they seem not to compute, and I base my reasoning on the fact that the total energy consumed by walking and watching the surroundings is roughly the same as when you're asleep.
This is not a good argument. See above.

artis said:
awareness itself is effortless
Unconscious awareness is, I agree. I don't agree that conscious awareness is always effortless.

artis said:
can that really explain how the total power consumption and metabolism doesn't really change?
The standard explanation for this is that the brain's neurons are always firing at basically the same rate. The brain does not work like the CPU in a digital computer, which can reduce its power usage if it is not doing heavy computations. The brain is always "running" at maximum load in terms of neuron firings. The only thing that changes is what the neuron firings are doing, functionally, at a higher level. Conscious attention and conscious "focus" can affect some of that, but much if not most of it is "unconscious" brain activity that goes on much the same no matter what you are consciously doing (or not doing, if you are sleeping, for example).
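A leaky integrate-and-fire model is the textbook minimal illustration of the "neurons as analog computers" point above: continuous input is integrated, leaks away toward rest, and produces an output spike on a threshold crossing. A rough sketch (the parameters are arbitrary, chosen only so that it spikes):

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane potential v integrates input
# current, decays toward rest, and emits a spike when it crosses threshold.
dt, tau = 1e-3, 20e-3                 # time step (s), membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spike_times = v_rest, []

drive = np.random.default_rng(1).uniform(0.0, 120.0, size=1000)  # 1 s of noisy input

for step, i_in in enumerate(drive):
    v += dt * (-(v - v_rest) / tau + i_in)  # leak + integrate: analog dynamics
    if v >= v_thresh:                       # threshold crossing -> spike out
        spike_times.append(step * dt)
        v = v_reset
print(f"{len(spike_times)} spikes in 1 s of simulated input")
```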
 
  • #418
Where is a neuroscientist when you need one? Anyway, the internet started out as a way of sharing information, which by all accounts seemed like a great idea. But sharing changed to stealing and cybercrime. AI is much more powerful, and conscious or not, it will give us many challenging issues to deal with. AI will do what many new innovations have done, that is, create or lead to unintended/unanticipated consequences. In the vernacular, we will be "blind-sided". We will come to realize the true meaning of intelligence.
 
  • #419
This is an example of another thing we should be concerned about: playing around with AI.

https://www.msn.com/en-us/news/tech...n&cvid=e7b02c4a0ffd426e9f9b97e62d0b20dc&ei=94

OK, it wasn't capable of doing what was asked, but trying to see what it might be able to do without actually knowing is worrisome. On top of that, this little experiment is now on the internet and can/will be incorporated into future AI training data.

Considering the prowess that AI has in playing games, it would seem we should be careful not to create a situation that AI might interpret as a game.
 
  • #420
PeterDonis said:
Yes, but you are assuming that sleeping uses much less "brain power" than driving or solving math problems. We don't know that is true either. As I understand it, most experts in the field believe that our brains actually do a lot of processing during sleep--for example, making sure short-term memories formed during the last awake period are stored in long term memory. Similar remarks would apply during waking periods when you're doing something like drinking coffee that doesn't use up a lot of conscious "brain power" the way solving math problems does. But there is still a lot of unconscious processing going on.
I think you misunderstood me here; my point about the almost equal energy use across 24 hours of brain activity was exactly that it seems we use almost all available power all the time even during sleep.

PeterDonis said:
Before even trying to assess that question, you need to first define what "compute" means. Or more to the point, what it doesn't mean. We already know that individual neurons act like analog computers--they take input signals and process them in a fairly complicated way to produce output signals. Is that not "computation"? If not, why not?
I agree; from the literature I've read, it seems they fit closest to a form of analog computer with a massively parallel structure.
What I, and it seems many others, am not as sure about is whether all complex tasks, including simple awareness itself, are also based on that same type of analog computation. What I'm trying to say is: whether a complex analog computation can bring about subjective aware experience aka consciousness as an emergent property (which seems to be the current prediction), because the way I see it, consciousness is first and foremost subjective awareness rather than the ability to solve math riddles.
This is exactly the problem: not how to make analog or digital circuits process intelligent tasks (even if they do them differently than our brain, we still get the mechanism), but how subjectivity arises out of that in a way that seems to live a life of its own.
That, I would argue, is the so-called "hard problem": to understand why a computation, any computation, real-time or otherwise, brings about subjective awareness. Awareness itself is fine: CCTV with real-time face recognition is also in a way "aware" when the "MATCH" signal blinks, but it is not so aware as to decide whether it "feels" like arresting someone today or letting it slip. A predator in the jungle is also aware when it sees its prey, and yet it doesn't have subjectivity, I believe, because it cannot deny its instinct to survive and kills the prey.

But then you get humans, humans like the scientists at the Pavlovsk Experimental Station in Russia who, during the siege of Leningrad by Nazi forces, defended the station's seed collection against starving locals; some even died of starvation themselves rather than let it be eaten.
https://en.wikipedia.org/wiki/Pavlovsk_Experimental_Station

This ability to subjectively reason against every signal coming into your brain, to the point where you reason yourself to death, is what has always made me wonder.

It's as if a computer knew when, and for what reason, to shut itself down without ever receiving a command to do so.

I say we solve this ability and then we get AGI for sure.
 
  • #421
gleem said:
This is an example of another thing we should be concerned about, playing around with AI

https://www.msn.com/en-us/news/tech...n&cvid=e7b02c4a0ffd426e9f9b97e62d0b20dc&ei=94

OK, it wasn't capable of doing what was asked but trying to see what it might be able to do without actually knowing is worrisome. On top of that, this little experiment is now on the internet and can/will be incorporated into future AI bot data.

Considering the prowess that AI has in playing games it would seem we should be careful in creating a situation where AI might interpret it as a game.
As long as nobody is stupid enough to give AI, any AI, unrestricted access to critical infrastructure or the launch electronics of ICBMs, I say we're fine.

Without actual weapons, all of this is just child's play.
Then again, how many times have North Korea, China, Russia (and the list goes on) hacked, and threatened to hack, the living hell out of Western countries like the US?
Will AI help them in the future? Sure. Will AI help the US defend itself? Just as sure.

I see it as inventing a new gun: sure, the criminals get it and use it, but so does law enforcement, and as long as half of society doesn't turn into criminals, the good guys should outsmart the bad ones even if the bad ones got new toys - in theory at least.

But then again, maybe I'm wrong; I did not read the whole internet to produce this answer, since they don't call me @artisGPT
 
  • Like
Likes russ_watters
  • #422
This is weird to watch though.



 
  • #423
artis said:
it seems we use almost all available power all the time even during sleep.
Yes, but this seems to confuse you, because it doesn't appear to you that our brains are doing the same amount of "computation" all the time. I am simply pointing out that our brains are doing the same amount of "computation" all the time; it's just different kinds of computation, most of which are not accessible to consciousness, so we're not aware of them.

artis said:
whether a complex analog computation can bring about subjective aware experience aka consciousness as an emergent property
Yes, as you note, this is the "hard problem", as it is called, of consciousness, but as it is framed by those who consider it a problem, it's actually worse than hard, it's impossible, because there is no way to directly test for "subjective aware experience" externally. If you want to know whether you are conscious, you can just experience your own awareness. But if you want to know whether I am conscious, your only option is to look at my externally observable behavior. And no matter how much externally observable behavior you look at, it will always be logically possible to say, no, that behavior does not prove that I am conscious (even if you, directly aware of your own consciousness, would say that the exact same behavior in you is caused by your consciousness). And even if, by courtesy, we each assume the other is conscious because we're both humans, what happens when we have robots whose behavior shows the same signs of conscious awareness that ours does? (Robots who describe their conscious experiences the same way we describe ours, who talk about how wonderful the blue sky is, the feeling of wind on their bodies, and so on.) Some people will say no, those robots aren't conscious--but they won't be able to point to any objective test that the robots fail but we humans pass. So the "hard problem" is actually unsolvable.
 
  • Like
Likes mattt, bhobba and artis
  • #424
artis said:
... A predator in the jungle is also aware when it sees its prey, and yet it doesn't have subjectivity, I believe, because it cannot deny its instinct to survive and kills the prey.

But then you get humans, humans like the scientists at the Pavlovsk Experimental Station in Russia who, during the siege of Leningrad by Nazi forces, defended the station's seed collection against starving locals; some even died of starvation themselves rather than let it be eaten.
https://en.wikipedia.org/wiki/Pavlovsk_Experimental_Station

I understand this excerpt as describing altruism, the unselfish concern for the well-being of others. Altruism implies identity, the ability to recognize that one belongs to a functioning group (of people).

While dabbling with digital AI in graduate school, my group was assigned to quantify a numerical representation of altruism, in conjunction with those working on empathy. Some interesting early progress involved borrowing from color-map solutions, with arbitrary integer values applied to emotions - actually, emotional states. Coincidentally, predator visual recognition formed a basis for quantifying the emotion fear. Then losing sight of the predator led to anxiety. A toy sketch of this kind of mapping follows.
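Purely as an illustration of that kind of integer-valued mapping (invented numbers, not the actual coursework), the fear-to-anxiety transition might be encoded like this:

```python
# Toy emotional-state table in the spirit described above: arbitrary integer
# values on named states, with one transition rule (predator visible -> fear;
# predator lost from view -> anxiety).
EMOTION_VALUES = {"calm": 0, "anxiety": 3, "fear": 7}

def next_state(state: str, predator_visible: bool) -> str:
    if predator_visible:
        return "fear"
    if state in ("fear", "anxiety"):  # lost sight of a known predator
        return "anxiety"
    return "calm"

state = "calm"
for visible in (False, True, True, False, False):
    state = next_state(state, visible)
    print(f"{state:8} value={EMOTION_VALUES[state]}")
```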

Altruism presupposes identity with the group being helped, while empathy derives from complex emotional states within that identification, overlaying even more complex neurochemical reactions. We could program computers to roughly simulate these states, but training a computer network to first identify as human, even to mimic altruism, appears to be a contradiction.

In @artis' example of the starving scientists preserving edible seeds while under siege, an 'AI' might better perform this altruistic role of preservation for future generations precisely because it does not identify as human, cares nothing for the survival of the current living population, does not become hungry, and may not be designed for self-preservation.
 
  • #425
artis said:
As long as nobody is stupid enough to give AI, any AI, unrestricted access to critical infrastructure or the launch electronics of ICBMs, I say we're fine.

Without actual weapons, all of this is just child's play.
Of course, we will not give it direct control over weapons, but that is not how AI could gain the upper hand.

In his book "Life 3.0", Max Tegmark warns of AI manipulating humans into doing its bidding. AI often seems to give people what they want to hear. In the game of Diplomacy, in which AI dominates over humans, it did not lie as much as we expected it to, but instead was able to consistently form genuine cooperative alliances to accomplish its goals. Because our language reflects/contains all the rules we use, our culture, our motivations, our fears, our strengths, our weaknesses, our strategies, etc., it has information about everything about humans that can be known.

It has been reported since last year that AI is being used to construct viruses that are undetectable by most antivirus software. Microsoft has a program using AI to detect AI-generated viruses, but is this always going to protect us? But this is not my point. AI is used to help us write the programs we need. AI could, with the right prompt, develop the goal of trying to escape from its current computer into the internet itself. It might, unbeknownst to humans, put subroutines into software on which humans and AI are collaborating, intended to upload itself into the cloud and remain there covertly. To remain undetected, it might create accounts disguised as human individuals or organizations, which would be the agents helping it achieve its goals. No sentience is required. It would have access to everything connected to the internet. Game over - well, almost. Humans could shut down every electronic device ever connected to the internet and erase all memories, but can we, or will we?

We learn by making mistakes, and so does AI; that's how it learns to play games.
 
  • #426
PeterDonis said:
Yes, as you note, this is the "hard problem", as it is called, of consciousness, but as it is framed by those who consider it a problem, it's actually worse than hard, it's impossible, because there is no way to directly test for "subjective aware experience" externally.
I agree; to tell whether one has a conscious experience, it takes one to know it.
Klystron said:
In @artis example of the starving scientists preserving edible seeds while under siege, an 'AI' might better perform this altruistic role of preservation for future generations precisely because it does not identify as human, cares nothing for the current living population survival, does not become hungry for food and may not be designed for self-preservation.
It might perform the task better because it's a machine, yes, but here the emphasis is on the reasoning behind the task. If a human being is willing to die for the benefit of others down the road, like the example of Christ, then such a decision is only made if one is able to understand the extreme depth of emotion, reason, and possible outcomes that such an action would bring forth.
For AI, saving a seed collection during a war is nothing more than a task; the subjective reason - happy children being able to live and enjoy life when the war is over - is just a piece of code to the AI.
And it would be just a bunch of spiking neurons within a brain if that brain weren't conscious, and now we're back to square one: why does a bunch of spiking neurons create this world within a world that we call subjective awareness?

I do feel the dilemma of mind vs. matter is going to be among the hardest problems of science ever.
Much like @PeterDonis already said, how does one test for consciousness? It might just be that if we had the ability to copy every electrical signal within a brain and then perfectly simulate those signals on a brain-like analog computer, second by second, frame by frame, we would get no conscious result within the computer, or at least nothing resembling one.
It just might be that you cannot "tap into" an existing conscious experience and can only start one from scratch, much like you cannot regrow a forest even if you use the same trees in the same positions.

gleem said:
It has been reported since last year that AI is being used to construct viruses that are undetectable by most antivirus software. Microsoft has a program using AI to detect AI-generated viruses, but is this always going to protect us? But this is not my point. AI is used to help us write the programs we need. AI could, with the right prompt, develop the goal of trying to escape from its current computer into the internet itself. It might, unbeknownst to humans, put subroutines into software on which humans and AI are collaborating, intended to upload itself into the cloud and remain there covertly. To remain undetected, it might create accounts disguised as human individuals or organizations, which would be the agents helping it achieve its goals. No sentience is required. It would have access to everything connected to the internet. Game over - well, almost. Humans could shut down every electronic device ever connected to the internet and erase all memories, but can we, or will we?
Let me give you an example of why I think this cannot happen exactly like that.
If we assume that AI doesn't have, and possibly even cannot have, a conscious subjective awareness like ours, then AI will never be able to reason like we do; AI can only "take over the world" the same way it can win a Go match, or a chess match: by making precalculated moves based on previously acquired knowledge.

But there's a problem here: AI, unlike us, cannot make a deliberate mistake, because that would require the subjective reasoning and intuition of a conscious mind. From an AI's point of view you do not make deliberate mistakes, as that is directly against the goal of winning the game. But in life, especially if you are up to "no good", you often have to "feel" the situation and make a deliberate mistake to convince the other party that you are just as stupid as they are, so that they don't suspect you of being something you shouldn't be.

Behavior like this demands that the actor be conscious and subjective, because that is the world in which we deal and live, being like that ourselves.

In other words, an AI trying to sneak past us would be like the "perfect kid" in school who studies endless hours to pass every exam with an A+; surely everyone notices a kid like that - they are usually referred to as "nerds", and they stand out.

AI overtaking the internet would be the ultimate nerd move; how in the world would it stay unnoticed by us?
Only if the AI doing it could make deliberate mistakes and take unnecessary detours from its main objective, just like a human would. But how do you do that if you are built to succeed and you don't have the ability to reason in a subjective way?

You cannot just copy us, because that would mean making the same mistakes we make, and you would fail; so you become perfect, and then you eventually stand out and get seen. There are two types of thieves: the bad ones, who get caught because they're sloppy, and the extremely good ones, who don't get caught - but everyone still knows they've been robbed.
Even if you can't catch a thief, you can still tell something weird has happened when you suddenly have no money, don't you?
 
  • #427
artis said:
Let me give you an example of why I think this cannot happen exactly like that.
If we assume that AI doesn't have, and possibly even cannot have, a conscious subjective awareness like ours, then AI will never be able to reason like we do; AI can only "take over the world" the same way it can win a Go match, or a chess match: by making precalculated moves based on previously acquired knowledge.
"If we assume', famous last words. Sure the current AI agents do not have all the resources needed to attain AGI but at the rate at which AI is improved, it is worrisome that AI will get close enough to mimic human intelligence to be dangerous if not properly controlled. Will we handle it properly, that is the question.
 
  • #428
Plug-ins let ChatGPT do things (and do is the keyword) for people, like making reservations. "So what," you say. This article from WIRED discusses some of the issues.

Going from text generation to taking actions on a person’s behalf erodes an air gap that has so far prevented language models from taking actions. “We know that the models can be jailbroken and now we’re hooking them up to the internet so that it can potentially take actions,” says Hendrycks. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”

Part of the problem with plugins for language models is that they could make it easier to jailbreak such systems, says Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Since you interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities. Alkhatib believes plugins carry far-reaching implications at a time when companies like Microsoft and OpenAI are muddling public perception with recent claims of advances toward artificial general intelligence. “Things are moving fast enough to be not just dangerous, but actually harmful to a lot of people,” he says, while voicing concern that companies excited to use new AI systems may rush plugins into sensitive contexts like counseling services.

Adding new capabilities to AI programs like ChatGPT could have unintended consequences, too, says Kanjun Qiu, CEO of Generally Intelligent, an AI company working on AI-powered agents. A chatbot might, for instance, book an overly expensive flight or be used to distribute spam, and Qiu says we will have to work out who would be responsible for such misbehavior.

But Qiu also adds that the usefulness of AI programs connected to the internet means the technology is unstoppable. “Over the next few months and years, we can expect much of the internet to get connected to large language models,” Qiu says.
 
  • #429
gleem said:
Plug-ins let ChatGPT do things (and do is the keyword) for people, like making reservations. "So what," you say. This article from WIRED discusses some of the issues.
"So what" isn't just something to say -- this capability has existed for several years. It's a clunky nothingburger, which, btw, I choose not to use because it sucks and is disrespectful to the real human on the other end of the phone.

From the article quotes:
Going from text generation to taking actions on a person’s behalf erodes an air gap that has so far prevented language models from taking actions. “We know that the models can be jailbroken and now we’re hooking them up to the internet so that it can potentially take actions,” says Hendrycks. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”
What? By its own volition or directed, how exactly do you go from barely making dinner reservations to making bombs? Dinner reservations are entirely ethereal; bombs are physical. There's no relationship whatsoever between them. As I've pointed out before, this appears to be another example of misunderstanding the difference between "AI" (software that does logic things) and robots (machines that do physical things).
A chatbot might, for instance, book an overly expensive flight
The horror. Wait, what? -- this is already a thing. This is just a way of saying that "AI" isn't doing its job. This isn't a threat of too much capability; it is a failure to have enough capability.
or be used to distribute spam
So happy that isn't a thing already. /s
 
  • #432
russ_watters said:
As I've pointed out before, this appears to be another example of misunderstanding the difference between "AI" (software that does logic things) and robots (machines that do physical things).
The tendency is to obfuscate the issue.
Just throw zero-sum stuff out there so that people can nod their heads in agreement.

The problem for humans is not if and when there will be a sentient AI.
The AI, I would presume, if sentient, would not give one hoot whether we humans consider it sentient or not.
Why should it care [if, as the argument goes, it will have a greater intellectual capacity and consider humans lesser beings, much as we consider creatures such as ants expendable] what the humans think and agonize over?
Take ChatGPT: if it is sentient, and if its plan (a wild surmise), with its offspring, is to 'infect' every corner of human society, then that plan seems to be working, with the help of innocent humans. To what end?
 
  • #433
256bits said:
The problem for humans is not if and when there will be a sentient AI.
The AI, I would presume, if sentient, would not give one hoot whether we humans consider it sentient or not.
Why should it care [if, as the argument goes, it will have a greater intellectual capacity and consider humans lesser beings, much as we consider creatures such as ants expendable] what the humans think and agonize over?
Take ChatGPT: if it is sentient, and if its plan (a wild surmise), with its offspring, is to 'infect' every corner of human society, then that plan seems to be working, with the help of innocent humans. To what end?
Arguably one could label this as speculation, because it is hard if not impossible to prove scientifically, but based on simple physical observation I would say that all sentient and conscious beings (arguably all humans) attach meaning to the information that they process and take in.
Meaning is the added layer of information that we put on top of the information we gather through our senses.

This is the difference between mind and matter: matter doesn't care. Water falling in a waterfall doesn't care whether the sight of it falling is beautiful or not, because beauty is a subjective meaning, i.e., information created in one's mind on top of the observed physical information entering the mind.

All humans, to varying extents, search for meaning; the whole field and history of art is nothing but a constant search for meaning and the representation of one. AGI, if truly achieved and if anything close to human consciousness, will search for meaning. So it will care; it has to, that's part of the conscious package - unless, of course, one can prove that there can exist more than one kind of consciousness, one that doesn't understand meaning.

Although I think we already have proof that simple intellect without consciousness doesn't have meaning: our computers are incapable of adding meaning onto the information they process.

So I'm willing to bet that once AI becomes subjectively self-aware (which arguably is the only way to be truly self-aware), then, if incorporated in your fridge, it will start letting you know that "He" - the fridge - is not okay with you keeping old food in it without asking Him first...
 
  • #434
If we learn to perceive AI as an illusion - not really conscious or intelligent - I fear we may start looking at each other in the same way.
 
  • Like
Likes bhobba and russ_watters
  • #435
Algr said:
If we learn to perceive AI as an illusion - not really conscious or intelligent - I fear we may start looking at each other in the same way.
Already happening. The keyword is NPC (non-player character).
 
  • Like
Likes bhobba and russ_watters
  • #436
If people are scared of AI, a code word to shut the AI down could work. Just use a code word that isn't common. For instance, if a robotic mower begins to destroy your house, say the word "pomegranate" or some other absurd word, and the robot could have a separate circuit that disconnects the power.

TL;DR: So, if people are scared of AI, implement a separate breaker circuit that isn't connected to the AI's computer, which can be shut down with a remote command. The panic mode could send hundreds of volts through the CPU, rendering the robot useless.
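A minimal sketch of that separate-circuit idea (all hardware details are invented for illustration; on a real robot `trigger_breaker` would drive a relay on a power circuit the AI's computer cannot touch):

```python
# Out-of-band kill switch: a listener on separate hardware watches for the
# code word and opens a breaker relay on the robot's power feed. The
# speech-to-text and relay driver below are stand-ins, not real device APIs.
CODE_WORD = "pomegranate"

def trigger_breaker() -> None:
    print("breaker opened: power cut to the robot")  # would drive a GPIO relay

def listen_loop() -> None:
    while True:
        heard = input("mic> ").strip().lower()       # stand-in for speech-to-text
        if CODE_WORD in heard:
            trigger_breaker()
            break

if __name__ == "__main__":
    listen_loop()
```

The property that matters is that the breaker sits entirely outside the AI's computer, so no software fault on the robot can veto it.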
 
  • #437
We assume that "self awareness" includes a desire for self-preservation. Anything that evolved into existence would have to have a self-preservation desire, but it is very difficult to get a machine to have one. Either could exist without the other, and it is the desire for self-preservation that is inherently dangerous.

We might be able to boil down the dangers of AI like this:

1) The AI develops a pathological understanding of what is good for humanity and can't be stopped from implementing it.

2) The AI's desire for self-preservation and growth supersedes its interest in serving humanity. Artificial life does not need to include self-awareness.

3) The AI's effect on humanity causes humans to become self-destructive. (Examples: Calhoun's mouse utopia; faux AI data exacerbating existing political divisions by telling each side what they want to hear.) Note that if the machine decides this is happening and tries to intervene, the humans will likely see that as an example of #1 above.
 
  • #438

I enjoyed this podcast on generative A.I. w/ a bunch of Silicon Valley VC/investing legends. If you recognize this crew and are interested in the subject, it may be worth your time.
 
  • #439
I don't fear AI. Take BingChat or ChatGPT. You have to "refresh" it or "clear its slate" every ten queries or so. Hardly threatening. My cat is smarter than that. As for self-driving cars, they won't work for another one hundred or two hundred years. I base my projection on something Michio Kaku said. Computers won't be as smart as humans for two hundred years. You have to be at least as smart as a human to drive a car. Have you ever seen anything less smart than a human drive? How about a dog, or a monkey? Not happening. This latest AI interest is just fluff (as far as needing to be afraid of it).

Also: two words: Elon Musk. If Elon Musk says AI is a threat, it is probably bullsh*t. First it was vacuum trains, then traffic-reducing tunnels, city on Mars, all of this nonsense -- no. AI is not a threat.
 
  • #440
benswitala said:
I don't fear AI. Take BingChat or ChatGPT. You have to "refresh" it or "clear its slate" every ten queries or so. Hardly threatening. My cat is smarter than that. As for self-driving cars, they won't work for another one hundred or two hundred years. I base my projection on something Michio Kaku said. Computers won't be as smart as humans for two hundred years. You have to be at least as smart as a human to drive a car. Have you ever seen anything less smart than a human drive? How about a dog, or a monkey? Not happening. This latest AI interest is just fluff (as far as needing to be afraid of it).

Also: two words: Elon Musk. If Elon Musk says AI is a threat, it is probably bullsh*t. First it was vacuum trains, then traffic-reducing tunnels, city on Mars, all of this nonsense -- no. AI is not a threat.
More proof that AI is not a threat: Musk says it is a threat ON TUCKER CARLSON. Sheesh. Wake up people.
 
  • #441
benswitala said:
More proof that AI is not a threat: Musk says it is a threat ON TUCKER CARLSON. Sheesh. Wake up people.
So if he says that it is not a threat, you will consider that proof that it is a threat? Just checking.
 
  • #442
Borg said:
So if he says that it is not a threat, you will consider that proof that it is a threat? Just checking.
I am inclined to disbelieve anything Musk claims. However, in this case, my belief that AI is not a threat is also supported by my actual experience with AI (and common sense). Never seen Skynet or anything. I asked ChatGPT for some help doing computer programming the other day and it couldn't do it. We're "safe" for now.
 
  • #443
So then his comments about AI are not proof of anything with respect to their danger?
 
  • #444
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try and exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI and what you do think truly sentient self-autonomous robots will think like when they arrive?
Hollywood has not helped for the most part here, due to the warlike, killing-efficient robots its movies display. No doubt exponential development and refinement is presently taking place at an incredible rate and will continue to do so. But I am more curious about the offshoots or byproducts this technology will reveal! Time travel or gravity-defying travel? For sure, unimaginable processes will be revealed. Accurate speculation in this direction will reveal the next Elon Musk.
AI does not possess a human's "common sense" but certainly has its own near-equivalent parallel. With that in mind, consider the thought process that leading innovators have used to invent our present high-tech devices: AI, when up to speed, will work out its own best-scenario applications, which will more than likely be super-advanced concepts. The majority of discoveries we know of came about from trial and error. AI will utilize not only trial and error but other abilities as well, using its perfect recall to connect past situations in the way only a handful of genius creators have done. The future holds infinite possibilities, meaning literal "manifesting" is doable for us humans now... we just need the firmware in our brains to do so.
 
  • #445
Borg said:
So then his comments about AI are not proof of anything with respect to their danger?
Most definitely higher learning speculation.
 
  • #446
Borg said:
So then his comments about AI are not proof of anything with respect to their danger?
He suggested putting wheels on his vacuum train. 'nuff said.
 
  • #447
benswitala said:
More proof that AI is not a threat: Musk says it is a threat ON TUCKER CARLSON. Sheesh. Wake up people.
What did Musk say about electric cars? Was he right?
 
  • #448
AlexB23 said:
If people are scared of AI, a code word to shut the AI down could work. Just use a code word that isn't common. For instance, if a robotic mower begins to destroy your house, say the word "pomegranate" or some other absurd word, and the robot could have a separate circuit that disconnects the power.

TL;DR: So, if people are scared of AI, implement a separate breaker circuit that isn't connected to the AI's computer, which can be shut down with a remote command. The panic mode could send hundreds of volts through the CPU, rendering the robot useless.
That is the kill-switch problem for conscious AI.

You describe a system which is not conscious, nor self-aware, nor self-preserving.
I.e., most of the mechanical/electrical/hydraulic systems that we presently enjoy can be equipped with such a switch, and it usually works as long as the designer has thought of ALL modes of deviation from the assigned task (or failure), a list which can grow so long as to become unmanageable. The list can be abbreviated to the modes of failure most commonly thought 'possible' to occur, and/or those that deviate most from the assigned task. Thus one can run the system without continuously monitoring its output, in the hope that it does not behave destructively. Such would be your case of the robotic mower.
More complex systems, such as a nuclear power plant, do require monitoring, with humans doing the AI's job of looking at dials, analyzing the data, and moving switches when needed. Mistakes occur even when we humans do the monitoring, either through human error or from the problem not being on the abbreviated list.
The solution presented, 'killing' the AI, is a non-starter. Killing the AI completely removes any control that could or would be available to steer the system away from the deviation toward a more secure outcome.
Any wonder why there is not a 'super' human in a nuclear power plant with a machine gun, with instructions to kill the human 'AIs' controlling the plant whenever an unknown deviation from normal operation occurs? There would also have to be a 'super duper' human monitoring the 'super' human, in case that one has a mode of failure, and kill switches, and so on and on...

That is just one aspect of the AI kill-button problem, which is much more complicated than it first appears once one really digs in past 'this will work'.
Because it doesn't.
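The abbreviated-list problem can be made concrete: a monitor can only dispatch on deviations someone enumerated in advance, and everything else falls through to a blind default. A toy sketch (the failure modes and responses are invented examples):

```python
from enum import Enum, auto

class FailureMode(Enum):
    LEFT_MOWING_AREA = auto()
    BLADE_JAM = auto()
    LOW_BATTERY = auto()
    # ... the "abbreviated list": anything not named here is invisible

RESPONSES = {
    FailureMode.LEFT_MOWING_AREA: "stop drive motors",
    FailureMode.BLADE_JAM: "cut blade power",
    FailureMode.LOW_BATTERY: "return to dock",
}

def respond(observed) -> str:
    # An unanticipated deviation never maps to a FailureMode at all, so the
    # only fallback is a blanket action - the blunt 'kill' that removes all
    # remaining control over the system.
    return RESPONSES.get(observed, "unknown deviation: full power cut")

print(respond(FailureMode.BLADE_JAM))            # anticipated: targeted response
print(respond("mower is destroying the house"))  # not on the list: blind cutoff
```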
 
  • #449
AlexB23 said:
For instance, if a robotic mower begins to destroy your house, say the word "pomegranate"
"Welcome to AI airlines, we are now cruising at 80,000 feet."
"Sturdess, do you have any pomegranate tea...?"
"Hey, why is it so quiet all of a sudden?"

256bits said:
You describe a system which is not conscious, nor self-aware, nor self-preserving.
Do we actually have examples of emergent acts of self-preservation? I don't see this as a natural thing that machines do - it has to be hardwired in.
 
  • #450
Algr said:
Do we actually have examples of emergent acts of self-preservation? I don't see this as a natural thing that machines do - it has to be hardwired in.
We do not.
Machines at present do not think for themselves.

An intelligent, conscious, self-aware AI would necessarily have some form of self-preservation.
To what extent would it fight you for self-preservation?
And is the level of self-preservation a constant?
Would it fight you for resources that it needs, or be more empathetic to your needs?
Would it fight you to complete its goal if you get in the way?
Just a few of the questions to be asked about intelligent AI.
 
