Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #121
Moore's law, I agree, is not a good model going into the future. But that doesn't stop people from trying to forecast improvements in computing power. Technologies like room-temperature superconductors, carbon-based transistors, quantum computing, etc. will probably change the landscape. If we crack fusion energy, then suddenly we have a ton of energy to use as well.

But in my opinion it also doesn't make too much sense to focus just on things like how small a transistor can be, or how efficiently you can compute in terms of energy, because AI already gives us the ability to just build massive computers in space.

Quantum computing however does have the chance to make intractable problems tractable. There are problems which would take classical computers the age of the universe to solve that quantum computers could theoretically solve within a lifetime. A jump from impossible to possible is quite a bit bigger than Moore's law.
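To put a rough number on that jump (a back-of-envelope sketch in Python; the machine speed and operation counts are made-up placeholders, not figures for any specific algorithm or hardware):

```python
# Back-of-envelope comparison of exponential vs. polynomial scaling.
# All numbers are illustrative placeholders: they stand in for a problem where
# the best known classical approach costs ~2^n operations while a quantum
# algorithm costs only polynomially many (roughly the factoring/Shor situation).

OPS_PER_SECOND = 1e18            # an optimistic exaflop-class classical machine
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.4e10

n = 150                          # problem size (e.g., number of bits)

classical_ops = 2.0 ** n         # exponential cost
quantum_ops = n ** 3             # polynomial cost (idealized, ignoring overheads)

classical_years = classical_ops / OPS_PER_SECOND / SECONDS_PER_YEAR
print(f"classical: ~{classical_years:.1e} years "
      f"(~{classical_years / AGE_OF_UNIVERSE_YEARS:.1e} x the age of the universe)")
print(f"quantum:   ~{quantum_ops:.1e} operations -- trivial by comparison")
```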

So when these future technologies can potentially result in massive leaps forward that make Moore's law look like nothing, what about the progress it took to develop those technologies in the first place? Sure, the unlocked capability is a step function, but in terms of advancement, do we also just draw a step function, or do we count the intermediate progress that got us there? Because there are a ton of scientific breakthroughs happening constantly nowadays that are getting us closer, even if most people aren't paying much attention.
 
Last edited:
  • #122
Jarvis323 said:
Right now I think it is largely a combination of a hardware problem and a data problem. The more/better data the neural networks are trained on, the better AI gets. But training is costly with the vast amount of data. So it is really a matter of collecting data and training the neural networks with it...

But there is possibly a limit how far that can take us. There is also an evolution of architecture and transfer learning, and neuro-symbolic learning, which may spawn breakthroughs or steady improvements besides just pure brute force data consumption.
I think you may have missed my point because you basically just repeated it with different wording. Yes, I know it is being approached as a hardware and data problem. But humans don't think by accessing vast data archives, taking measurements with precise sensors and doing exact calculations.
 
Last edited:
  • #123
russ_watters said:
I think you may have missed my point because you basically just repeated it with different wording.
"Imitation is the sincerest form of flattery." --Old proverb.
 
  • Likes BillTre and russ_watters
  • #124
Jarvis323 said:
Moore's law, I agree, is not a good model going into the future. But that doesn't stop people from trying to forecast improvements in computing power. Technologies like room-temperature superconductors, carbon-based transistors, quantum computing, etc. will probably change the landscape.
It does make it much harder to predict when, instead of steady, continuous (predictable) advances, you're waiting for a single vast advancement that you don't know will ever come. And I'm not sure people even saw many of the biggest advances coming (such as the computer itself).

Jarvis323 said:
If we crack fusion energy, then suddenly we have a ton of energy to use as well.
Very doubtful. Fusion is seen by many as a fanciful solution to our energy needs, but the reality is likely to be expensive, inflexible, cumbersome, and maybe even unreliable and dangerous. And even if fusion can generate power at, say, a tenth of today's cost, generation is only around a third of the cost of delivered electricity; the rest is in getting the electricity to the user. Fusion doesn't change that problem at all. And not for nothing, but we already have an effectively limitless source of fusion power available. As we've seen, just being available isn't enough to be a panacea.

Also, it's not power per se that's a barrier for computing power, it's heat. A higher end PC might cost $2000 and use $500 a year in electricity if run fully loaded, 24/7. Not too onerous at the moment. But part of what slowed advancement was when they reached the limit of what air cooling could dissipate. It gets a lot more problematic if you have to buy a $4,000 cooling system for that $2,000 PC (in addition to the added energy use). Even if the electricity were free, that would be a tough sell.
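(For what it's worth, a quick sanity check of that $500-a-year figure, assuming roughly a 500 W average draw and about $0.12/kWh; both numbers are just illustrative:)

```python
# Rough sanity check of the "$500 a year" electricity estimate.
# Assumed numbers (illustrative only): ~500 W average draw, $0.12 per kWh.
power_kw = 0.5
hours_per_year = 24 * 365        # running fully loaded, 24/7
price_per_kwh = 0.12
annual_cost = power_kw * hours_per_year * price_per_kwh
print(f"~${annual_cost:.0f} per year")   # about $525
```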
 
  • #125
Heh. Managed to get a topical comic in after all. (A few of the previous ones are pretty good too. He must have had a good week.)
 
  • #126
russ_watters said:
But humans don't think by accessing vast data archives, taking measurements with precise sensors and doing exact calculations.
Who is to say humans have less precise sensors or that our calculations are less exact?
 
  • #127
Jarvis323 said:
Who is to say humans have less precise sensors or that our calculations are less exact?
Me? Honestly, I don't see how this is arguable. What's the exact color of the PF logo? How fast was the ball I just threw? Maybe we're talking past each other here, so if you're trying to say something else, could you elaborate?
 
  • Likes Oldman too and BillTre
  • #128
russ_watters said:
Me? Honestly, I don't see how this is arguable. What's the exact color of the PF logo? How fast was the ball I just threw? Maybe we're talking past each other here, so if you're trying to say something else, could you elaborate?
Just because your conscious mind can't give precise answers doesn't mean your sensors and brain's calculations are at fault. You probably can catch a ball if someone tossed it to you and you don't need to consciously calculate trajectories and the mechanics of your hands. But you do do the necessary calculations. AI is the same. If you train a neural network to catch a ball, it will learn how to do it and it probably won't do it like a physics homework problem.

In the same way, when you see a color, maybe you can't recite the RGB component values (some people can't even see in color), but biological eyes are certainly not inferior sensors to mechanical ones, in my opinion, within the scope of their applicability. And I'm not sure what technology can compete with a nose?

Of course we can equip AI with all kinds of sensors we don't have ourselves, but that's pretty much beside the point.

And what does it mean to say our brain doesn't do exact calculations? Does it mean there is noise, interference, randomness, that it doesn't obey laws of physics?

AI is based on complex internal probabilistic models. So they guess. The guess they give may be consistent if they've got a static internal model that has stopped learning, but it is still a guess. The main difference with humans is that we don't just guess immediately; we second-guess and trigger internal processing when we're not sure.

It might be possible AI can also try to improve its guesses at the expense of slower response time, but a general ability to do this is not a solved problem as far as I know.
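To make "they guess" concrete, here is a minimal sketch (toy Python, made-up numbers) of what a trained classifier actually produces: a probability distribution over the options, from which it simply picks the most likely one.

```python
import numpy as np

# Toy illustration of "the network guesses": a trained classifier returns a
# probability distribution over classes, not a certainty, and its answer is
# just the highest-probability class. The logits below are made up.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.1, 1.9, -0.5])   # raw scores for three possible answers
probs = softmax(logits)                # roughly [0.53, 0.43, 0.04]
guess = int(np.argmax(probs))          # the "best guess", not a certainty
print(probs, "->", guess)
```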
 
Last edited:
  • #129
Jarvis323 said:
Just because your conscious mind can't give precise answers doesn't mean your sensors and brain's calculations are at fault.
That isn't what you or I said before - it sounds like exactly the opposite of your prior statement:
Who is to say humans have less precise sensors or that our calculations are less exact?
So I agree with your follow-up statement: our conscious mind can't make precise measurements/calculations. Yes, that matches what I said.
You probably can catch a ball if someone tossed it to you and you don't need to consciously calculate trajectories and the mechanics of your hands. But you do do the necessary calculations.
That sounds like a contradiction. It sounds like you think that our unconscious mind is a device like a computer that makes exact calculations. It's not. It can't be. The best basketball players after thousands of repetitions can hit roughly 89-90% of free throws. If our unconscious minds were capable of computer-like precision, then we could execute simple tasks like that flawlessly/perfectly - just like computers can.
AI is the same. If you train a neural network to catch a ball, it will learn how to do it and it probably won't do it like a physics homework problem.
Again, I agree with that. That's my point. And I'll say it another way: our brains/sensors are less precise and we make up for it by being more intuitive. So while we are much less precise for either simple or complex tasks, we require much less processing to be able to accomplish complex tasks. For computers, speed and precision works great for simpler tasks (far superior to human execution), but has so far been an impediment to accomplishment of more complex tasks.
 
  • #130
russ_watters said:
Again, I agree with that. That's my point. And I'll say it another way: our brains/sensors are less precise and we make up for it by being more intuitive. So while we are much less precise for either simple or complex tasks, we require much less processing to be able to accomplish complex tasks. For computers, speed and precision works great for simpler tasks (far superior to human execution), but has so far been an impediment to accomplishment of more complex tasks.
Maybe we're not talking about the same thing. You seem to be talking about computers and algorithms. I've been talking about neural networks. Trained neural networks do all their processing immediately. Sure it may have learned how to shoot a basket better than a person. But humans have a lot more tasks we have to do. If one neural network could do a half decent job shooting baskets and also do lots of other things well, that would be a huge achievement in the AI world.

Really, it's humans who do a lot of complex processing to complete a task, and giving AI that ability is a primary challenge in making AI improve, because it has to know what extra calculations it can do, and how it can reason about things it doesn't already know. The ability to do this in some predetermined cases, in response to a threshold on a sensor measurement, is there of course, but that isn't AI.
 
  • #131
Jarvis323 said:
Maybe we're not talking about the same thing. You seem to be talking about computers and algorithms. I've been talking about neural networks.
What we were just talking about is precision/accuracy of the output, regardless of how the work is being done.
Trained neural networks do all their processing immediately.
What does "immediately" mean? In zero time? Surely no such thing exists?
Sure it may have learned how to shoot a basket better than a person. But humans have a lot more tasks we have to do.
Yes. Another way to say it would be sorting and prioritizing tasks and then not doing (or doing less precisely) the lower priority tasks. That vastly reduces the processor workload. It's one of the key problems for AI.
If one neural network could do a half decent job shooting baskets and also do lots of other things well, that would be a huge achievement in the AI world.
Yes.
 
  • #132
russ_watters said:
What does "immediately" mean? In zero time? Surely no such thing exists?

I mean there is just one expression, which is a bunch of terms with weights on them, and for every input it gets, it just evaluates that expression and then makes its guess. It doesn't run any algorithms beyond that. Of course you could hard-code some algorithm for it to run in response to an input. And one day maybe they could also come up with their own algorithms.
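A minimal sketch of what that "one expression" amounts to for a tiny network (the weights here are arbitrary placeholders, not a real trained model):

```python
import numpy as np

# Once training stops, a feed-forward network is just a fixed expression:
# weighted sums passed through nonlinearities. Evaluating an input is a couple
# of matrix multiplies -- no further algorithm runs. Weights are placeholders.

W1 = np.array([[0.4, -0.2],
               [0.1,  0.7]])
b1 = np.array([0.05, -0.1])
W2 = np.array([0.3, -0.6])
b2 = 0.02

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer with ReLU
    return W2 @ h + b2                  # the network's output: its "guess"

print(forward(np.array([1.0, 2.0])))
```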

russ_watters said:
Yes. Another way to say it would be sorting and prioritizing tasks and then not doing (or doing less precisely) the lower priority tasks. That vastly reduces the processor workload. It's one of the key problems for AI.

I wouldn't view it this way exactly, although that could be possible. The problem for a neural network, I think, is that it needs to have one model that gives good guesses for all of the different inputs. And the model emerges by adjusting weights on terms to try to minimize the error according to the loss function. So we also have to come up with a loss function that ends up dictating how much the neural network cares about its model being good at basketball or not.

The problem is that there is a whole world out there of things to worry about, and there are only so many terms in the model, and only so much of the world has been seen, and there is only so much time to practice and process it all. The network ultimately is a compressed model, which has to use generalization. When it shoots a basketball, it's using neurons it also uses to comb its hair, and play chess. And when it does a bad job combing its hair, it makes changes that can also affect its basketball shooting ability.
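As a toy illustration of that last point (made-up data, with a simple linear model standing in for the shared neurons), a gradient step taken to get better at one "task" also shifts the output on another task that happens to use the same weights:

```python
import numpy as np

# Toy sketch of shared weights: one parameter vector serves two "tasks".
# A gradient-descent step on task A's loss also changes the prediction for
# task B, because both predictions use the same weights. All numbers made up.

rng = np.random.default_rng(0)
w = rng.normal(size=3)                       # shared weights

x_a, y_a = np.array([1.0, 0.5, -1.0]), 2.0   # a "hair combing" training example
x_b = np.array([0.2, -0.3, 0.8])             # a "basketball" input

def predict(w, x):
    return w @ x

before_b = predict(w, x_b)

# One gradient-descent step on task A's squared-error loss only.
err = predict(w, x_a) - y_a
grad = 2.0 * err * x_a                       # gradient of (w.x - y)^2 w.r.t. w
w = w - 0.1 * grad

after_b = predict(w, x_b)
print(f"task B output moved from {before_b:.3f} to {after_b:.3f} "
      "even though we only trained on task A")
```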
 
  • #133
PeroK said:
This is garbage.
Kurzweil is provocative and triggers reactions (just as he has with you, @PeroK) and those reactions cause people to discuss the ideas he espouses. It might be to scoff and dismiss his ideas (transhumanism is a great example that has attracted a lot of derision), or to argue his timelines are wrong, or even to agree but add qualifications.

Whatever the case, he causes a conversation about the future, and while his writing might be viewed as garbage, that conversation is not a bad thing.
 
  • Likes russ_watters and DaveC426913
  • #134
Melbourne Guy said:
Kurzweil is provocative and triggers reactions

I got the impression he is also providing a primer on the subject for newbies even as he is arguing it.

The explanation of geometric growth early in the essay seems deliberately simplistic as part of that primer, and then he goes on to add nuance a few paragraphs later.
 
  • Likes russ_watters
  • #135
I'll just leave this here:



(Gasp)
 
  • #136
"Fear AI". There may a few ways that we really should "fear" or at least be wary. The obvious one is where the "AI" is given access to physical controls of the real environment e.i. driverless vehicles of any kind or control of weapons (as per "The Corbin project (movie)). we also know what happened to the HAL9000 computer in 2001.Space Odyssey.
I'm sure there are many more such examples of AI gone astray. It may also depend on the level of "awareness" and "intelligence" of the particular AI. The android in Isaac Asimovs "The Naked Sun" and " Caves of Steel" give examples of extremely advanced AI as to be almost human. But even so some of his tales also feature AI which turns out to be harmful due usually to some "failure" on its "mental state". Even his famed three laws of robotics didn't always stop harm occurring in his tales.
Also not forgetting M.Chritons(sp.) "Gray Goo" of self-replicating nanobots causing mayhem.
I would suggest that even humans fail and cause great harm so anything we build is also likely to "fail" in some unkown way so I would be very wary of so-called AI even at the highest level unless there was some sort of safeguard to prevent harm from occurring.
Could Ai ever become "self_aware? I very much doubt it. Even many animals do not seem to be self_aware so how could we ever make a machine to do it. I have no problem using "AI" etc as long it does what I want it to do and is ultimately under my control.

Yes, I prefer to drive a manual car.
 
  • #137
DaveC426913 said:
I read that a year or two ago. I loooove the vampire concept.

But I'm not really a sci-fi horror fan. If you want sci-fi horror, read Greg Bear's Hull Zero Three. This book literally haunts me. (I get flashbacks every time I see it on the shelf, and I've taken to burying it where I won't see it.)
I'm about a third of the way through HZT now. Thanks for the recommendation!
 
  • #138
Chicken Squirr-El said:
I'm about a third of the way through HZT now. Thanks for the recommendation!
:mad::mad:
I was trying to warn you off!
Don't come back saying I didn't. :nb)
 
  • #139
sbrothy said:
Heh. Managed to get a topical comic in after all. (A few of the previous ones are pretty good too. He must have had a good week.)
Speaking of comics, I just read the coolest sci-fi comic: "Sentient". It would make one paranoia-inducing film. And notably, the protagonist is a ship AI, twenty minutes into the future, suddenly tasked with protecting children.

Review
 
  • #140
Given the existence of an AI that is better than humans at everything, what would the best case scenario be? Can a "most likely scenario" even be defined?
 
  • #141
Algr said:
Given the existence of an AI that is better than humans at everything, what would the best case scenario be? Can a "most likely scenario" even be defined?

Best case, maybe AI saves the planet and the human race from destroying itself. Most likely, who knows. Maybe we use AI to destroy the planet and ourselves.

In terms of predicting what will happen, my opinion is that the best approach is to look at what is possible, what people want, and what people do to get what they want. If a technology makes something very enticing possible, you can guess it will be used. So you can just look at all the things AI makes possible, and all the ways people could exploit AI to their benefit.

So the problem now is that people are largely driven by hate, greed, and selfish interests, have short attention spans, short memory, and are willing to sacrifice future generations and the future of the rest of the life on the planet for frivolous short term gains, and have a culture of dishonesty. And because this is so depressing, we pretend it's not the case and try to ignore it.

But the future is a grim one if we continue this path and exploit technology so carelessly and selfishly.
 
Last edited:
  • #142
Jarvis323 said:
Best case, maybe AI saves the planet and the human race from destroying itself. Most likely, who knows. Maybe we use AI to destroy the planet and ourselves.

In terms of predicting what will happen, my opinion is that the best approach is to look at what is possible, what people want, and what people do to get what they want. If a technology makes something very enticing possible, you can guess it will be used. So you can just look at all the things AI makes possible, and all the ways people could exploit AI to their benefit.

So the problem now is that people are largely driven by hate, greed, and selfish interests, have short attention spans, short memory, and are willing to sacrifice future generations and the future of the rest of the life on the planet for frivolous short term gains, and have a culture of dishonesty. And because this is so depressing, we pretend it's not the case and try to ignore it.

But the future is a grim one if we continue this path and exploit technology so carelessly and selfishly.
So very true (and depressing). Sure hope it doesn't spiral into the sewer.

Edit: Then again I won't be here if (when?) it does. :)
 
  • #143
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
 
  • #144
Algr said:
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
Who's to say? If the AI in question is smart enough to realize that without energy oblivion awaits, then all bets are off.
 
  • #145
What's wrong with oblivion?
 
  • #146
Algr said:
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
It's an interesting issue. On the one hand, maybe AI doesn't have the same instinct for self-preservation easily ingrained. For humans, we are part of the natural ecosystems of a planet. Our survival is a collective effort and depends on the planet and its environment. That can explain why, even though we are poor stewards of the planet and treat each other terribly, it could be much worse. We have a side that cares, sees beauty in nature, and wants our species and the natural world to thrive.

AI might not have any of that. Suppose AI does acquire an instinct for self-preservation; that preservation likely wouldn't depend on coral reefs or the levels of radiation in the water. With people, at least we can depend on some level of instinct to care about things. For now, we have fairly simple AI and can mostly tell what the effect of the loss function is. For example, most AI now cares about getting people to buy things or click on things, and other narrow, easy-to-define-and-measure objectives like that.

The challenge for humans in creating safe general AI would be to define a differentiable function that measures the behavior of the AI and reflects whether it is good or bad. The more general and free the AI is, the harder it would be to get that right, or to know whether you have. It is like trying to play god. Then, eventually, AIs can begin writing their own loss functions, and those loss functions can also evolve without oversight.

AI which is designed to reproduce will be a major component of the next eras of space mining, space colonization, terraforming, and possibly manufacturing and war. E.g. it will be what makes Elon Musk's dream of colonizing Mars possible.

Self replicating AI will likely be interested in energy like sbrothy said. And it might care even less than humans what the cost is to the planet. E.g. it might go crazy with building nuclear power plants all over the place and not care when they melt down. Or it might burn up all of the coal on the planet very rapidly and all of the forests, and everything else, and then keep digging, and burning, and fusing until the Earth resembles a hellscape like Venus.
 
Last edited:
  • #147
Self preservation and reproduction are at the core of biology because living things that don't have those core values got replaced by those that did. This took millions of generations over billions of years to happen.

Self preservation and reproduction are things that are possible for an AI. But any AI would have as its core function to benefit those that created and own it. So an AI that was smart enough to decide that AIs are bad for humanity would not invent a reason to ignore its core function. It would either disable itself, or act to prevent more malevolent AIs from emerging. A malevolent AI would have no survival advantage with all the good AIs anticipating its existence and teaming up against it.

A third possibility is that there might not be a clear line between what is an AI and what is a human. Imagine there was a tiny circuit in your brain that had all the function of a high-powered laptop. But instead of touching it with your fingers and looking at its screen with your eyes, you just thought about it and "knew" the output as if it was something you'd read somewhere. Imagine never forgetting a face or a name or an appointment again, because you could store them instantly.
 
  • #148
Algr said:
But any AI would have as its core function to benefit those that created and own it.

This is at least what you could hope for. It's not easy. AI can say, oh sorry, you didn't mention to me in the loss function that you're sensitive to heat and cold, and the specific composition of the air, and that you like turtles, and that turtles are sensitive to this and that. Or it might complain, how was I supposed to save you and the turtles at the same time while also maximizing oil profit?

But even if humans were completely in control, it's terrifying when you realize those people will be the same kinds of people who form the power structures of the world today and in the past. Those will include a lot of economics-driven people, like high-powered investors, CEOs, etc. Many of them are the type that poison people's water supplies out of convenience to themselves, and then wage war against the people they poisoned to avoid taking responsibility. They will have board meetings and things where they decide core functionalities they want, and they won't have a clue how any of it works or what the risks are, nor will they necessarily care to listen to people who do know. Or maybe it will be the same types as those who sought to benefit from slavery. Others may be Kim Jong Un or Hitler types. Maybe they want the functionality to support mass genocide. Maybe they want an unstoppable army.
 
Last edited:
  • #149
I should add that competition between nations will probably drive militarization of AI at an accelerated pace. If one country developed a powerful weapon, the other would also be compelled to. Ever more powerful and dangerous technology will probably emerge and eventually proliferate. And that technology can easily get dangerous enough to threaten the entire planet. And then extremely dangerous technology with purely destructive purposes will be in the hands of all kinds of people around the world, from criminal organizations, to dictatorships, and terrorist organizations.

And then to cope with that, AI will probably also be used for next level surveillance and policing, and not necessarily by benevolent leaders.

So the threat from AI is not just one kind. It's not just the threat of AI detaching from our control and doing whatever it wants to. It's a mess of a bunch of immediate practical threats from small to enormous. AI becoming independent or out of control and taking over is possible also and maybe one of the biggest threats depending on what kind of AI we create. If we seed the world with a bad AI, it could grow unpredictably and destroy us. I think the first steps are to get our own act in order, because AI will be a product of us in the first place, and currently I can't imagine how we will not screw it up.
 
Last edited:
  • #150
Jarvis323 said:
They will have board meetings and things where they decide core functionalities they want, and they won't have a clue how any of it works or what the risks are, nor will they necessarily care to listen to people who do know.
Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
The final decision on how the AI works isn't from the board, but from the programmers who actually receive orders from them. If they get frustrated and decide that they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know the board is bankrupt and facing investigation while the AI is "owned" by a shell company that no one was supposed to know about. By the time the idealism of the rebel programmers collapses to the usual greed, the AI will be influencing them.

Different scenarios would yield different AIs all with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
 
