Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
Summary:
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #331
Greg Bernhardt said:
If there is money to be made, a pause will never happen. Also, Elon is just bitter that he couldn't buy OpenAI.
I'm sure AI development will continue, but it's the implementation that must be considered. It is one thing to use AI to perform a preliminary or non-critical analysis, but it's another thing entirely to allow AI to assume command and control of a critical system.

In Quality Assurance, we have different levels (or grades) of QA for software/hardware based on whether it performs a minor, major or critical function. For example, a scoping calculation or preliminary/comparative analysis may allow a lower level of QA; however, the design and analysis of a 'critical' system requires a higher level of QA. By 'critical', I mean a system whose failure could cause serious injury or death of one or many persons.

A critical system could be control of a locomotive or a set of locomotives, an aircraft, a road vehicle (truck or car), . . . .

In genealogical research, some organizations use AI systems to try to match people with their ancestors. However, I often find garbage in the information, because one will find many instances of the same name, e.g., John Smith, in a given geographical area such as a county/shire/district/parish (or several of them), and many participants are not careful, placing unrelated people in their family trees. That's an annoyance, and I can choose to ignore it. However, if the same lack of care were applied to a medical care situation, the outcome could be life-threatening for one or more persons (e.g., mixed-up patients receiving each other's care, or a misdiagnosis).
 
  • Like
Likes Lord Jestocost
  • #333
Astronuc said:
AI could become a danger if it is put 'in control' of critical systems and the input includes erroneous data/information.

AI for analysis of large data sets can be useful, but if the input is incorrect, then erroneous or false conclusions may result. The consequences may be minor, major, or even severe.
You damn sure wouldn't want to put ChatGPT in its current form in place as a teacher; sure, it will pass the simple stuff, but on more complicated questions it will teach you that the Earth is flat with non-zero probability.

Just today I wanted to see whether it would recognize some very niche engineering apparatus that I'm dealing with, and sure enough it gave me names of models and companies that were all either scams or from the classic "free energy" perpetual-motion-machine guys writing their blogs.

I have noticed that whenever the information pool from which it can choose becomes small, it tends to run into errors. And since it is not conscious, it doesn't see the sketchy intent behind those scam articles; it simply sees the words the way a bull sees the red cloth, and it runs right in.
 
Last edited:
  • #334
Jarvis323 said:
Anyways, we obviously need to worry about the AI-human threat even in the absence of non-transparent AI decision making. There is also room to fear AI decision making whenever it becomes involved in making subjective or error prone decisions. But people make bad or threatening enough decisions as it is.
Jarvis, I get it, you think it's a threat, but then again we have had thermonuclear weapons for over 50 years now, and Putin is putting them in Belarus as we speak, from what I understand. Do you really think face recognition software will end your life?
Let's say you live in China; they have CCTV everywhere. Even without AI you couldn't cross the street in Beijing without being "caught" and your social credit score possibly lowered.
So what exactly will AI do? Decide to invade Taiwan without the supreme leader's approval?
No, it's not gonna happen. I suggest we are still in times where we have to worry about humans instead of robots.

And I would think experts and leaders around the world are smart enough not to put ChatGPT in command of nuclear reactors (even if it could do that) or any other critical infrastructure just yet.

Even Tesla is still struggling with their autopilot; it turns out driving is really hard even when you're not conscious, or maybe especially when you're not conscious. Now what does that tell me?

It tells me that a very important aspect of awareness and consciousness is the ability to subjectively interpret every detail around you to give it meaning.
Only when AI has that ability, at which point it will most likely have become AGI, will I begin to fear it.
Until then, the best it could do is mess up something critical that it's assigned to, but guess what? Humans blew up Chernobyl, humans did WW2 and dropped the first two atomic bombs; we have already done pretty much the worst stuff we can do. I do not think a fancy robot will be able to do more.

But then again I could be wrong
 
  • Like
Likes russ_watters
  • #335
Jarvis323 said:
You can read about it here.

https://en.m.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

It basically involves training the model by example. People rank the examples (e.g., good or bad), and the training algorithm uses those ranked examples to automatically update parameter values through backpropagation, to new values which (hopefully) encode some general rules implicit in the relationships between the examples and their ratings, so that the model can extrapolate and give preferred answers to new, similar prompts.
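For concreteness, here is a minimal sketch of the preference-ranking stage described above, assuming a toy setup in which each candidate answer is just a small made-up feature vector and the "reward model" is a single linear scorer; the real pipeline uses a large neural network plus an additional reinforcement-learning step, so this only illustrates the idea, not the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each candidate answer is a 4-dimensional feature vector (made up
# for this sketch). A training example is a pair (preferred, rejected) as
# judged by a human rater.
def make_pair():
    a, b = rng.normal(size=4), rng.normal(size=4)
    # Hidden "true" preference: raters happen to like a large first feature.
    return (a, b) if a[0] > b[0] else (b, a)

pairs = [make_pair() for _ in range(500)]

w = np.zeros(4)   # parameters of a linear "reward model"
lr = 0.1

for _ in range(200):
    for preferred, rejected in pairs:
        # Bradley-Terry style objective: maximize P(preferred beats rejected).
        margin = w @ preferred - w @ rejected
        p = 1.0 / (1.0 + np.exp(-margin))
        w += lr * (1.0 - p) * (preferred - rejected)   # gradient ascent step

# The fitted reward model can now score unseen answers; in full RLHF the
# language model would then be fine-tuned (e.g., with PPO) to maximize it.
print("learned weights:", np.round(w, 2))
```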
Ok, now I do recall this is the way they do it, but eventually this is just a more complicated way of doing what I said. The model is dumb as a rock; it doesn't understand the words it deals with the way we understand them, with added meaning/load. Therefore it will spew racist crap unless it is "shown" not to, by literally being "dragged by the nose", like a spoiled dog, back to the place where it messed up the carpet. And even then a dog understands better than these current language models, because for the model this "teaching" is simply a way to set "weights" so that the algorithm labels certain words as "bad" and uses them less often or not at all.

At least that is how I understand it.
No matter how many captcha-like examples you give it, it still doesn't understand the meaning; it just sees that "this and that goes there" better than elsewhere.
 
  • #336
I'm not sure if I should apologize for reviving this thread. This topic seems to carry an emotional aspect. My intention was to get opinions on the rapid increase in AI potential. There is a lot I would like to respond to, but this post might be a TLDR one. I woke up this morning with this thread on my mind, turning over what was posted last night before 10 PM.

The commonly held fear of AI, as depicted in movies, is the physical destruction of mankind, which even the wildest speculation at this time makes seem improbable. Another fear is the disruption of society or the economy due to the misuse of AI. This I believe is real, and easy to believe, because we do it all the time ourselves. Then there is the unappreciated dangerous AI agent which we think we can manage. I think there is a possibility for this. Here I am thinking of nuclear power: we embrace it with the full knowledge that it can also destroy us. This may happen, but we're confident that nothing bad will. Finally, I guess we could be looking through rose-colored glasses and saying this is the best thing that has happened to mankind. Unfortunately, we even manage to screw up the use of antibiotics.

Today the internet and social media are having untoward effects on society even as they drive our economy to higher levels. We all thought this was great, but now we have to live with the bad along with the good.


We sometimes refer to the “game of life”. AI is really good at games, being able to see many moves ahead in various scenarios, find new strategies foreign to humans, and do all this at light speed. An LLM has all the information (rules) needed to play the game built into the language. The only thing it needs is the right prompt to set it in motion. I believe AGI is absolutely attainable and will be developed sooner than many think. It doesn't have to be sentient. Remember, a few years ago AI was a single-task agent that forgot everything when you changed the task. Not so now. People are adding “plug-ins” to refine its performance for specialized tasks. Some have found ways of increasing the capability of a neural network without increasing its size.
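As a concrete (if toy) illustration of "seeing many moves ahead", here is a minimal minimax search for tic-tac-toe; the board position and scoring convention are made up for the example, and real game-playing systems pair search like this with learned evaluation networks.

```python
# A tiny minimax ("look ahead") search for tic-tac-toe. The board is a
# 9-character string of 'X', 'O' and '.', indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != '.' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Return (score for the side to move, best move index)."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    if '.' not in board:
        return 0, None                       # draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            score, _ = minimax(child, opponent)
            score = -score                   # opponent's gain is our loss
            if score > best_score:
                best_score, best_move = score, i
    return best_score, best_move

# Made-up position: X on squares 0, 1, 8 and O on 2, 4; O to move can force a win.
score, move = minimax("XXO" ".O." "..X", 'O')
print("minimax score for O:", score, "- a winning move:", move)
```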

We berate AI for making mistakes, going off the rails, and hallucinating, which I might point out are also human failings. We mix up facts, forget things, make biased statements, go off track, mislead, and make stupid statements, and on top of that we are handicapped by our emotions. AI may use us like we use one another. Paraphrasing what Yondu Udonta, the blue guy in Guardians of the Galaxy, said to Rocket, the genetically engineered raccoon: I know what AI is because it is us.
 
  • Like
Likes russ_watters
  • #337
artis said:
you really think face recognition software will end your life?
You are misidentified as a terrorist and located. Law enforcement approaches you. You reach into your pocket for your cell phone and you go down in a hail of bullets. It is possible. It has happened.

Remember the TV series "Person of Interest"? An AI system (The Machine) plugged into all surveillance monitors searches for people showing signs of possible terrorist activity.
 
  • #338
Greg Bernhardt said:
If there is money to be made, a pause will never happen.

The fact this seems true is all the more reason to try.
 
  • #339
Jarvis323 said:
Have you tried it?
No, I haven't. I've seen a bunch of samples provided and heard from people who have, and nothing sounds very compelling to me - I don't see a reason why I would try it.
 
  • #340
nuuskur said:
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.
Nor do you/I fear everyone simply because knives exist.
 
  • #341
gleem said:
You are misidentified as a terrorist and located. Law enforcement approaches you. You reach into your pocket for your cell phone and you go down in a hail of bullets. It is possible. It has happened.
Right, but that doesn't have anything to do with AI. AI doesn't create that risk and there's no reason to think AI will make it worse, is there?
 
  • #342
Jarvis323 said:
Yes. Extremely worse.
Why? I think it'll make things better by improving identification and reducing false IDs, similar to how DNA evidence is freeing wrongfully imprisoned people. As I said previously, you're largely skipping the part where you explain the actual risk. That's why this sounds like fear of the dark.
 
  • #343
russ_watters said:
AI doesn't create that risk and there's no reason to think AI will make it worse, is there?
I think it contributes to the risk. AI is efficient and tireless; it can expand surveillance, thereby increasing the number of false-positive events even if it is better than humans. If there are no terrorists, then the risk is increased with no benefit.
 
  • #344
Aperture Science said:
I think the first thing we need to understand is what a program is and what true consciousness is.
Machine learning -- the huge breakthrough that is altering our lives at a dizzying pace -- is not programming.
 
  • #345
Some people use all available technology to kill their fellow man. Machine learning will be used for such purposes, on a large scale and soon. The head of the US military, Mark Milley, just announced that in fifteen years there will be "significant" numbers of robotic vehicles on the battlefield. Criminals will avail themselves of AI. All you can do is hope that the positive uses outweigh the negative ones.

Revolutions are traditionally won when the lower ranks of the army and police refuse to support the rulers. It seems to me that a robotic army should be more obedient, solidifying the ruling class's control. I would expect this is a powerful motive for the rapid deployment of such robots.
 
Last edited:
  • #346
gleem said:
I think it contributes to the risk. AI is efficient and tireless; it can expand surveillance, thereby increasing the number of false-positive events even if it is better than humans. If there are no terrorists, then the risk is increased with no benefit.
That doesn't logically follow. We're talking about human police killing the wrong "suspect" here. The number of human police doing that can't be increased by a large number of false positives, because there is only a limited number of interventions the police can carry out. AI can only increase the number of errors if it increases the percentage of errors, i.e., if it's worse than human police at its job.

E.g., if police can investigate 1,000 leads a day with an error rate of 10% (100 false leads), and AI provides a billion leads a day with a 1% error rate (10 million false leads), the police can still only pursue 1,000 leads, including roughly 10 errors among them.

And because of this reality (high volume), screening has to happen, which means that leads aren't just pursued at random, but scored and pursued preferentially. So a 1% error rate can become a 0.1% error rate, because the lower-scored guesses aren't pursued.
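A toy simulation of this argument, with made-up and scaled-down numbers (a million AI leads rather than a billion, and a hypothetical confidence score that is simply assumed to carry some signal), might look like this:

```python
import random

random.seed(1)

POLICE_CAPACITY = 1_000     # leads the police can actually pursue per day

def pursued_false_positives(total_leads, error_rate, screened):
    # Each lead is (is_false_positive, confidence_score). The score is a
    # hypothetical AI output, assumed to be higher on average for true leads.
    leads = []
    for _ in range(total_leads):
        false_pos = random.random() < error_rate
        score = random.gauss(0.0 if false_pos else 1.0, 0.5)
        leads.append((false_pos, score))
    if screened:
        leads.sort(key=lambda lead: lead[1], reverse=True)   # best-scored first
    else:
        random.shuffle(leads)
    pursued = leads[:POLICE_CAPACITY]
    return sum(fp for fp, _ in pursued)

print("humans: 1k leads, 10% errors      ->", pursued_false_positives(1_000, 0.10, False))
print("AI: 1M leads, 1% errors, random   ->", pursued_false_positives(1_000_000, 0.01, False))
print("AI: 1M leads, 1% errors, screened ->", pursued_false_positives(1_000_000, 0.01, True))
```

Pursuing a random 1,000 of the AI-generated leads yields roughly the same ~10 false positives as the human baseline, while score-based screening pushes the pursued false positives toward zero; the conclusion of course hinges entirely on the assumption that the scores are informative.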
 
Last edited:
  • #347
Hornbein said:
Machine learning -- the huge breakthrough that is altering our lives at a dizzying pace -- is not programming.
Then what is it? And can you define it in such a way as to exclude a PID loop?
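For readers who haven't met one, a PID loop is just a fixed, hand-written control law, which is the contrast being drawn here; a minimal sketch (with gains and a setpoint invented for the example) shows that nothing in it is learned from data.

```python
# A bare-bones PID controller: a fixed, hand-written control law. The gains
# (kp, ki, kd) are chosen by a human; nothing here is learned from data.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy usage: drive a value toward 100 by repeatedly applying the controller output.
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=100.0)
value = 20.0
for step in range(50):
    value += pid.update(value, dt=1.0)
print("value after 50 steps:", round(value, 2))   # settles near the setpoint
```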
Hornbein said:
Some people use all available technology to kill their fellow man.
Agreed. But this isn't profound. The best at it haven't primarily used high-technology, they've used political power. Mundane trains were the key "technology" in what was probably the greatest murder-spree of all time.
Hornbein said:
Machine learning will be used for such purposes, on a large scale and soon.
How do you define "Machine Learning"? Is that "AI"? What, exactly, does it mean? This statement can't be evaluated without clarity of definition, otherwise it feels like hand-waving.
Hornbein said:
The head of the US military, Mark Milley, just announced that in fifteen years there will be "significant" numbers of robotic vehicles on the battlefield.
What does "significant" mean? More significant than the introduction of the Sidewinder missile in 1956? The Phalanx CIWS, introduced in 1980? Are these changes bigger/more fundamental? The descriptions are vague and feel hand-wavey (as is this thread, as I and others have complained many times).
Hornbein said:
Criminals will avail themselves of AI. All you can do is hope that the positive uses outweigh the negative ones.
Dynamite: 1866. ....Rockets: ~1000 AD.
 
Last edited:
  • #348
Honestly, right now my biggest fear is that AI will overpromise and underdeliver and we’ll get another 30+ year AI winter.
 
  • Like
Likes russ_watters
  • #349
TeethWhitener said:
Honestly, right now my biggest fear is that AI will overpromise and underdeliver and we’ll get another 30+ year AI winter.
Depends on your expectations. I work in the marketing dept for a large SaaS company and in 6 months generative AI models have changed everything we're doing.
 
  • Like
Likes TeethWhitener and russ_watters
  • #350
TeethWhitener said:
Honestly, right now my biggest fear is that AI will overpromise and underdeliver and we’ll get another 30+ year AI winter.
Agreed with Greg regarding expectations. I don't know anything about his industry (though I'm interested in what, specifically, has changed), but apparently that happened under the radar of the hype. Either way, since I am nearly always skeptical of hype, such failures don't look like technology failures to me, just marketing/hype failures, which are meaningless. I've never been holding my breath for Elon's "full self driving", so I'm not turning blue waiting for it.
 
  • Like
Likes TeethWhitener
  • #351
russ_watters said:
Then what is it [machine learning]?
It's when a machine teaches itself. No programming. All you do is tell it the rules of the game and whether it has won or lost. A training set may or may not be supplied.
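To make "tell it the rules and whether it has won or lost" concrete, here is a minimal self-play sketch for the toy game of Nim (take 1-3 stones, last stone wins). It is only a tabular Monte-Carlo learner, nothing like AlphaZero's neural network plus tree search, but the ingredients are the same: legal moves, a win/loss signal at the end, and play against itself.

```python
import random

random.seed(0)

N_STONES = 10            # players alternately take 1-3 stones; taking the last stone wins
ACTIONS = (1, 2, 3)
Q = {}                   # Q[(stones_left, action)] -> value for the player about to move

def q(s, a):
    return Q.get((s, a), 0.0)

def choose(s, eps):
    legal = [a for a in ACTIONS if a <= s]
    if random.random() < eps:
        return random.choice(legal)           # explore
    return max(legal, key=lambda a: q(s, a))  # exploit

ALPHA, EPS = 0.2, 0.2

for _ in range(20_000):
    s, history = N_STONES, []
    while s > 0:                              # the two "players" share one table
        a = choose(s, EPS)
        history.append((s, a))
        s -= a
    # Whoever made the last move took the last stone and wins (+1); the other
    # player loses (-1). Propagate that single win/loss signal back through
    # the game, flipping sign each ply (Monte-Carlo style update).
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] = (1 - ALPHA) * q(state, action) + ALPHA * reward
        reward = -reward

# Inspect the greedy policy; with enough episodes it tends to learn the known
# optimal rule of leaving the opponent a multiple of 4 stones when possible.
for s in range(1, N_STONES + 1):
    print(s, "stones left -> take", choose(s, eps=0.0))
```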

When AlphaGo defeated Lee Sedol to become Go champion of the world, I knew that was the biggest engineering breakthrough of my lifetime. It was much more impressive than chess because the game space of Go is far greater than the number of particles in the visible universe. Go cannot be mastered by brute force.

AlphaGo was given a training set of Go games played by experts. Shortly afterward AlphaGo was defeated by AlphaZero, which was given no training set whatsoever. Playing against itself, AlphaZero became world chess champion after nine hours of self-play, defeating Stockfish 8. The latter is a traditional chess engine that searches tens of millions of positions per second; AlphaZero searched only about eighty thousand per second.

It took AlphaZero 34 hours to become world Go champion entirely via self-play. As you can see, it makes little difference what sort of game the learning algorithm is applied to. The same family of methods can also play Donkey Kong, Breakout, and so forth, these being much easier than Go. Instead of alternating turns, players make their moves in real time, but this doesn't matter.

The next step was AlphaStar soundly defeating two of the very top players in the war game of StarCraft II. This game is largely about strategic planning/logistics in a situation in which most of your opponent's moves are unknown. AlphaStar achieved its mastery in fourteen days of self-play after absorbing a training set of human games. Some said the computer had a speed advantage, but the computer made its moves at about half the rate of a top human player. https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii

Surely the armed forces are hard at work applying this technology to real-world battles. For all we know it may already be in use in the field.

Only two years separated the triumphs of AlphaZero and AlphaStar. I would have thought much more time would be necessary. The revolution was going much faster than I expected. The results are apparent in the autonomous robots produced by Boston Dynamics. Simulations have been developed that are accurate enough for the machine learning to take place in the simulations. Ten years ago humanoid robots were doing the Alzheimer's shuffle. Now they can perform standing backflips.

Such revolutions are unstoppable. The cat is out of the bag. All you can do is hope that the positive results outnumber the negative.
 
Last edited:
  • #352
Well, a couple of points. If we stay rational, why would a team of engineers with regulatory oversight produce a nuclear reactor control system where AI is hardwired into the system without the capability for human intervention?
Unless that is done, any properly trained human operator team can just take over once they see the AI isn't working properly.
Unless they build a special artificial AI hand with a huge drill in its palm right into the reactor hall, so that it can drill a hole into the pressure vessel...

The way I see it, the worst that can happen is that the AI gets chaotic and, if given too much authority, causes havoc within the system that it controls.
But then again, how many times did we have a false alert of an incoming nuclear attack during the Cold War?
We have already come marginally close to causing WW3 by accident.

Actually, I think @Hornbein gave some of the most rational arguments for how AI might actually be used for harm - that is, in the hands of the military and rogue leaders.
The robot army example is a good one; sure enough, if the leaders in Moscow in 1991 had had robotized tanks, the chances of their coup failing would have been much, much lower.

Then again, I'm sure someone will find a way to hack such robots and potentially use them against their very users; as they say, a gun can always backfire.
 
  • Like
Likes russ_watters
  • #353
Hornbein said:
Surely the armed forces are hard at work applying this technology to real-world battles. For all we know it may already be in use in the field.
Apparently not in Ukraine and definitely not by the Russians...

If they ever used any AI, it was most likely WI (wrong intelligence).
 
  • #354
Hornbein said:
The next step was AlphaStar soundly defeating two of the very top players
...
Surely the armed forces are hard at work applying this technology
Erm, no. It's a grave mistake to pair those things up just like that. That AlphaStar thing is simply not of the right calibre for any actual usage.

Though I'm pretty sure that some armed forces have software (which may be categorized as, or at least contain, AI) as assistance (!only!) for logistics, strategy and data/image processing.
 
  • Like
Likes russ_watters
  • #355

AI weapons: Russia’s war in Ukraine shows why the world must enact a ban

Conflict pressures are pushing the world closer to autonomous weapons that can kill without human control. Researchers and the international community must join forces to prohibit them.

https://www.nature.com/articles/d41586-023-00511-5

This is a prescient article. Unfortunately I couldn't find a version that isn't paywalled.

Basically, the war in Ukraine is accelerating the pace at which we approach the inevitable point where people can mass produce fully autonomous slaughter-bots capable of efficient targeted mass murder.

I can already guess what someone might say: Fully autonomous slaughter-bots are no different than sling shots.

Or: Fully autonomous slaughter-bots aren't conscious or self aware, so no big deal, the worst that could happen is they make mistakes when they are killing people.

Or: Slaughter-bots can't solve P=NP, so no problem. Or, I fear humans with fully autonomous slaughter-bot swarms, not slaughter-bot swarms.

Or: First we need to figure out what human consciousness is, and whether slaughter-bots are capable of having it.

Or: Show me the blueprints for the Slaughter-bots.

Or: Where is the empirical evidence that slaughter-bot swarms are efficient at targeted mass murder?
 
Last edited:
  • Skeptical
Likes russ_watters
  • #356
Aperture Science said:
I think the first thing we need to understand is what a program is....
With respect to chatbots, one can probably understand it by programming one of the first chatbots, ELIZA:

https://en.wikipedia.org/wiki/ELIZA
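For anyone who wants to try that exercise, here is a minimal ELIZA-flavoured sketch: a handful of regex rules and crude pronoun reflection, invented for illustration (Weizenbaum's original script was much larger, but worked on the same principle):

```python
import random
import re

# A few ELIZA-style rules: a regex pattern plus canned response templates.
# {0} is filled with the captured fragment after crude pronoun "reflection".
RULES = [
    (r"i need (.*)",     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",       ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)",    ["Is that the real reason?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)",            ["Please tell me more.", "I see. Go on."]),   # catch-all
]

SWAPS = {"my": "your", "your": "my", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(SWAPS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    s = sentence.lower().strip(".!? ")
    for pattern, responses in RULES:
        match = re.match(pattern, s)
        if match:
            template = random.choice(responses)
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am afraid of AI"))      # e.g. "How long have you been afraid of ai?"
print(respond("I need a break from this thread"))
```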
 
  • #357
Greg Bernhardt said:
Depends on your expectations. I work in the marketing dept for a large SaaS company and in 6 months generative AI models have changed everything we're doing.
Agreed 100% on how useful the new capabilities are. I’ve only started playing with some of the new AI models, but I can easily see it being an amazing time saver, especially for things like literature searches and reformatting papers/presentations, etc. I’ve already used Dall-E to create interesting graphics for presentations, for instance.
 
  • Like
Likes mattt and Greg Bernhardt
  • #358
Hornbein said:
The next step was AlphaStar soundly defeating two of the very top players in the war game of StarCraft II.
AlphaStar was really good, but it would have inhuman APM spikes when it needed hard micro. Its average APM was throttled, but I'm not sure they ever addressed the APM spikes. The top humans have very high APM spikes, but their EPM is far less than their APM, as opposed to the AI, whose EPM is likely almost equal to its APM. Humans, even the top ones, spam more useless commands than machines. And even within EPM, the commands aren't necessarily beneficial. AlphaStar preferred odd units for its strategies specifically because it could micro them better than any human could ever hope to.
 
  • #359
russ_watters said:
Either way, since I am nearly always skeptical of hype, such failures don't look like technology failures to me, just marketing/hype failures, which are meaningless.
This isn't entirely true. Overheated hype tends to presage disappointment, which leads to fewer research dollars being allotted to new developments and applications. This usually happens (unfortunately) just as all the low-hanging fruit gets picked and people actually start to make headway on the truly difficult problems. As someone who's been doing research on graphene for many years, I witnessed this firsthand when a decent fraction of my sponsors basically stopped funding graphene work and moved on to the next hot thing. So the hype failure definitely has consequences, and that is what I think the biggest danger is right now in AI R&D.
 
  • Like
Likes russ_watters
  • #360
JLowe said:
AlphaStar was really good, but it would have inhuman APM spikes when it needed hard micro. Its average APM was throttled, but I'm not sure they ever addressed the APM spikes. The top humans have very high APM spikes, but their EPM is far less than their APM, as opposed to the AI, whose EPM is likely almost equal to its APM. Humans, even the top ones, spam more useless commands than machines. And even within EPM, the commands aren't necessarily beneficial. AlphaStar preferred odd units for its strategies specifically because it could micro them better than any human could ever hope to.
Care to repeat that in English?
 
  • Like
Likes gmax137 and gleem
