Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI

Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #241
Astronuc said:
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?

AI can certainly be beneficial - https://www.cnn.com/videos/business...re-orig-jc.cnn/video/playlists/intl-business/

The ship navigated the Atlantic Ocean using an AI system with six cameras, 30 onboard sensors and 15 edge devices. The AI made decisions a captain would ordinarily make.

I realize it's somewhat old news, but it's not the same as the Navy version, is it?

https://www.navytimes.com/news/your...-to-expedite-integration-of-unmanned-systems/

But yeah, it all depends on the use. ;)
 
  • #242
sbrothy said:
but it's not the same as the navy version is it?
According to the article, both unmanned systems were involved in the April 2021 exercise, but the Navy remained tight-lipped about specifics, which is understandable. The performance relates to intelligence, surveillance and reconnaissance, and to extending the range of surveillance much further out.

At work, we have a group that applies AI (machine learning) to complex datasets, e.g., variations in the composition of alloys or ceramics, and in processing, both of which affect a material's microstructure (including flaws and crystalline defects), which in turn affects properties and performance. The goal is to find the composition that yields optimal performance in a given environment. That's a positive use.

Another positive use would be weather prediction and climate prediction.

A negative use would be something like manipulating financial markets or other economic systems.
 
  • #243
Astronuc said:
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?
I guess the point, @Astronuc, is that this tool has potential to write its own rules and algorithms. Currently, it's a blunt instrument in that regard, but how do you constrain AI that is self-aware and able to alter its own code?
 
  • #244
Melbourne Guy said:
Currently, it's a blunt instrument in that regard, but how do you constrain AI that is self-aware and able to alter its own code?
Self-aware in what sense? That the AI system is an algorithm or set of algorithms and rules? Or that it is a program residing on Si or other microchips and circuits?

Would the AI set the values and make value judgement? Or, otherwise, who sets the values? To what end?

Would it be modeled on humankind, which seems rather self-destructive at the moment? Or would there be some higher purpose, e.g., making the planet sustainable and moderating the climate toward a more balanced state (between extremes of temperature and precipitation)?
 
  • #245
It's important to consider that a neural network, which most AI is now based on, isn't a set of algorithms or code. It is a set of numbers (weights) in a very big and complex mathematical model. People don't set those values and don't know how to tweak them to make the model behave differently; it learns those values from data by minimizing a loss function. So discussing algorithms and code is at best a metaphor, and no more valid than thinking of human intelligence in such terms.

An AI which writes its own rules would be one which is allowed to collect its own data and/or adapt its cost functions.
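The point about learned weights can be sketched in a few lines (a toy example, not any specific framework): the final value of the weight is never written by a programmer; it emerges from the data and the loss function.

```python
# A minimal sketch: a one-"neuron" model whose weight is never set by a
# programmer. It is learned from data by nudging the weight to reduce a
# squared-error loss via gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x paired with targets y = 2x

w = 0.0       # the weight starts arbitrary; no one hand-codes its final value
lr = 0.05     # learning rate

for _ in range(200):
    for x, y in data:
        pred = w * x
        # loss L = (pred - y)^2, so dL/dw = 2 * (pred - y) * x
        grad = 2 * (pred - y) * x
        w -= lr * grad  # gradient descent step

print(round(w, 3))  # converges near 2.0, recovered from the data alone
```

Scale this single weight up to billions, and it becomes clear why nobody can point at a line of "code" that encodes a trained model's behaviour.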
 
  • #246
Astronuc said:
Self-aware in what sense?
That's essentially the crux of the concern. We can't control each other's behaviour, so if an AI reaches that level of autonomy, and is inimical to the human way of life, it might decide on some nefarious course of action to kill us off.

We don't know, of course, if an AI could even reach this dangerous point (and the AI we've built to date are laughably limited in that regard) but it is possible. As for what 'model' it adopts in terms of ethics or higher purpose, that is equally unknown.

Some say AI has the potential to go horribly wrong for us. The question is whether we should fear this or not.
 
  • #247
Another thing - are we talking AI at the level of a 4th or 8th grader, or that of one with a PhD or ScD? Quite a difference.
 
  • #248
Astronuc said:
Another thing - are we talking AI at the level of a 4th or 8th grader, or that of one with a PhD or ScD? Quite a difference.
As far as I know, AI is currently very good at, and either matches or will probably soon exceed humans (in a technical sense) at, language skills, music, art, and the understanding and synthesis of images. In these areas, it is easy to make AI advance further just by throwing more and better data and massive amounts of compute time into its training.

I am not aware of an ability for AI to do independent fundamental research in mathematics, or that type of thing. But that is something we shouldn't be surprised to see fairly soon, IMO. I think this because AI advances at a high rate, and we are now seeing leaps in natural language, which I think is a stepping stone to mathematics. And Google now has an AI that can compete at an average level in coding competitions.

Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.

AI is also very, very advanced at modeling human behavior and psychology. Making neural networks able to understand human behavior, and training them to manipulate us, is by far the biggest effort in the AI game. This is one of the biggest current threats, IMO.
 
  • #249
Jarvis323 said:
Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.
Maybe. But what happens if the algorithm becomes corrupted, or a chip or microcircuit fails? Will it self-correct?

Jarvis323 said:
if it has access to the world,
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?
 
  • #250
Astronuc said:
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?

I guess there is pretty much no limitation. We have to guess where people will draw the line. If there is a line that, once crossed, we can no longer turn back from and that will lead to our destruction, it will be hard to recognize. We could be like the lobster in a pot of water that slowly increases in temperature.
 
  • #251
Astronuc said:
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?
It is, and all of those systems are commonly cited as examples of where AI can provide better outcomes (usually lower cost and fewer errors) than people do. Certainly, clever pattern matching algorithms can reduce cost and error rates in those domains, but they are not 'intelligent' in the sense humans generally mean by the term, and it is not clear to me how or why their 'intelligence' would grow such that they became a threat (or even a help) beyond the specific parameters set by their original model.

But a "4th or 8th grader" in charge of a large real-world network or system could cause havoc "just because", and that's quite likely, even if it is via a programming bug rather than self-aware mischief-making.
 
  • #252
The danger of AI resides in how much capability we will fail to recognize it has, and in how much control we will give it. Like nuclear energy, AI will be developed by many actors. As with nuclear energy, there was an initial barrier to widespread development and implementation, but with time those barriers became lower; the same is true of AI. Initially, AI development was limited by time and the need for massive computing resources.

Recently, Cerebras, a computer chip design company, produced the largest processor ever made, obviating the need for the thousands of GPUs normally used to develop advanced AI. They claim that a computer incorporating this chip will be able to handle 100 times more parameters than current AI models such as GPT-3. A computer with this chip will reduce the cost of development by making programming much easier, reducing the power requirements, and reducing the training time for neural networks from months to minutes.
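As a rough sanity check on the scale of that claim (assuming GPT-3's widely cited ~175 billion parameters and 2-byte fp16 storage; neither figure comes from the post above):

```python
# Back-of-envelope scale of the "100x GPT-3" claim.
# Assumptions (not facts from this thread): GPT-3 has ~175 billion
# parameters, each stored as 16-bit floating point (2 bytes).
gpt3_params = 175e9
claimed_params = 100 * gpt3_params              # 1.75e13 parameters

bytes_per_param = 2                              # fp16 assumption
memory_tb = claimed_params * bytes_per_param / 1e12

print(f"{claimed_params:.2e} parameters")        # 1.75e+13
print(f"~{memory_tb:.0f} TB just to hold the weights")  # ~35 TB
```

Whatever the hardware, a 17.5-trillion-parameter model would need tens of terabytes just for its weights, which gives a sense of why wafer-scale chips are pitched at this problem.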

Should we be concerned? Will this development be like fire in the hands of a child? Will common sense prevail?
 
  • #254
gleem said:
Should we be concerned?
Yes.

gleem said:
Will this development be like fire in the hands of a child?
Probably.

gleem said:
Will common sense prevail?
No!
 
  • #255
https://www.nature.com/articles/d41586-022-01921-7#ref-CR1

"Inspired by research into how infants learn, computer scientists have created a program that can learn simple physical rules about the behaviour of objects — and express surprise when they seem to violate those rules. The results were published on 11 July in Nature Human Behaviour."
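One common way to quantify "surprise" in systems like this is surprisal: the negative log-probability the model assigned to what it actually observed. This is a generic information-theoretic sketch, not necessarily the method used in the paper.

```python
import math

# "Surprise" as surprisal: -log2 of the probability the model assigned
# to the observed outcome. Low-probability (rule-violating) events
# produce large surprisal values. Illustrative sketch only.
def surprisal_bits(p_observed: float) -> float:
    return -math.log2(p_observed)

# A model that has learned "objects don't pass through walls" assigns
# high probability to the expected outcome, low probability to a violation.
print(surprisal_bits(0.95))  # expected event: ~0.07 bits, little surprise
print(surprisal_bits(0.01))  # rule-violating event: ~6.6 bits, strong surprise
```

A threshold on this quantity is enough to make a program "express surprise" at physically implausible scenes.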
 
  • #256
So far most AI is what I call "AI in a bottle". We uncork the bottle to see what is inside. The AI "agents" as some are called are asked questions and provide answers based on the relevance of the question to words and phrases of a language. This is only one aspect of many that true intelligence has. AI as we currently experience it has no contact with the outside world other than being turned on to respond to some question.

However, researchers are giving AI more intelligent functionality. Giving it access to, or the ability to interact with, the outside world without any prompts may be the beginning of what we might fear.

Selecting a few stanzas from the song "Genie in a Bottle" might depict our fascination and caution with AI, sans the sexual innuendo. Maybe there is some reason to think AI might be a djinn.

I feel like I've been locked up tight
For a century of lonely nights
Waiting for someone to release me
You're lickin' your lips
And blowing kisses my way
But that don't mean I'm going to give it away
If you want to be with me
Baby, there's a price to pay
I'm a genie in a bottle (I'm a genie in a bottle)
You got to rub me the right way
If you want to be with me (oh)
I can make your wish come true (your wish come true oh)
Just come and set me free, baby
And I'll be with you

I'm a genie in a bottle, baby
Come come, come on and let me out
 
  • #258
It's not clear how much AI capability one can put in a robot this small. I think it would need at least the capability of the self-driving system that Tesla has to be useful. Although, on second thought, with IR vision a human target would stand out dramatically at night, making target identification less of a problem.
 
  • #259
Melbourne Guy said:
I can't recall if the Boston Dynamics-looking assault rifle toting robot has been mentioned, but it's genuinely scary!
Scary? Yes, but that's a knock-off bot; check out the Russian theme. Spot's potential was made pretty clear by MSCHF. Personally, I think the video in the TT article was a bit of sensationalism on the part of a particular country. Scary? Very, but it gets better/worse. This is what Spot's creators are showing off lately:
https://www.bostondynamics.com/atlas

It's also a pretty good bet that the DARPA dog in the video could handle the auto-fire recoil a lot better than the knock-off.
 
  • #260
gleem said:
It's not clear how much AI capability one can put in this small of a robot. I think It would need at least the capability of the auto drive system that Tesla has to make it useful. Although on second thought giving it IR vision a human target would dramatically stand out at night making target ids less of a problem.
The AIs in my novels are often distributed swarm minds; it's a pretty common theme in sci-fi. These small units could be peripherals of a larger set, communicating by RF. You'd think that would be easy to interfere with, but spread-spectrum radios can be resistant to jamming!
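The jamming resistance comes from frequency hopping: both ends share a seed, derive the same pseudo-random channel sequence, and a jammer camping on any one channel misses almost every hop. A toy sketch (the channel count and seed are illustrative; real FHSS radios are far more involved):

```python
import random

# Minimal frequency-hopping spread spectrum sketch: sender and receiver
# share a seed, so both derive the same pseudo-random channel sequence,
# while a single-channel jammer misses most hops.
CHANNELS = 79          # e.g. a Bluetooth-style channel count (assumption)
SHARED_SEED = 1234     # hypothetical pre-shared secret

def hop_sequence(seed: int, n: int) -> list[int]:
    rng = random.Random(seed)  # deterministic for a given seed
    return [rng.randrange(CHANNELS) for _ in range(n)]

sender = hop_sequence(SHARED_SEED, 1000)
receiver = hop_sequence(SHARED_SEED, 1000)
assert sender == receiver       # both sides stay in sync

jammed_channel = 7              # a jammer camping on one channel
lost = sum(ch == jammed_channel for ch in sender)
print(f"hops lost to jammer: {lost}/1000")  # roughly 1 in 79 hops
```

To block the link outright, the jammer would have to flood all 79 channels at once, which takes far more power than the hopping radios need.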

Oldman too said:
Scary? yes, but that's a knock-off bot, check out the Russian theme.
Hadn't seen Spot. I wonder if those were used in that recent War of the Worlds TV series? But that aside, it's straightforward to imagine a hostile AI either taking over bots like this or crafting its own versions. That's all in the future, of course; at the moment, we have to design and build the tools of our own downfall :nb)
 
  • #261
I've seen worse. Drone with a flamethrower.
 
  • #262
profbuxton said:
[...] Even [Asimov's] famed three laws of robotics didn't always stop harm occurring in his tales. [...]

I was under the impression that his three laws of robotics were a plot device invented to drive his stories, with the flaws presumably intended to provide drama. As insurance against out-of-control AI, they sound way too catchy. :)
 
  • #263
sbrothy said:
I was under the impression that his three laws of robotics were a plot device invented to drive his stories, with the flaws presumably intended to provide drama. As insurance against out-of-control AI, they sound way too catchy. :)
I mean, when something related to AI references itself as "laws", the way his do, you can be sure it's going to be "exciting". :)
 
  • #264
Former Google CEO Eric Schmidt gives a warning about the world's lack of preparedness to deal with AI.

 
  • #265
I've been listening to a presentation by a company that is developing autonomous machines, one application of which is construction equipment or heavy machinery, with the objective of replacing human operators with AI systems that monitor a variety of sensors, permitting the AI controller to be 'aware of the environment'. So, like autonomous cars, trucks, trains, ships and planes, (human) heavy equipment operators can be replaced by a computer system.

I reflect on locomotives, which have hundreds of sensors to monitor the condition of the prime mover, power conversion system, and traction system. Apparently, any piece of equipment can be modified to replace a human operator, and the control is much smoother, so less wear and tear on the equipment.

A number of tech companies are sponsoring the research.
 
  • #266
Astronuc said:
So, like autonomous cars, trucks, trains, ships, planes, (human) heavy equipment operators can be replaced by a computer system.
Most construction sites that I've worked on have a standing caveat: "No one is irreplaceable" (this applies to more situations than just construction). As an afterthought, I'll bet the white hats will be the first to go if AI takes over.
 
  • #267
Oldman too said:
As an afterthought, I'll bet the white hats will be the first to go if AI takes over.
What's a white hat?
 
  • #268
On any typical construction site, "white hats" denote a foreman or boss. Besides the obligatory white hard hat, they can also be identified by carrying a clipboard and having a cell phone constantly attached to one ear or the other. They are essential on a job site; however, they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
 
  • #269
Oldman too said:
they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
Do you have any idea what will replace them?
 
  • #270
gleem said:
Do you have any idea what will replace them?
Not in the least, but it will probably involve artificially intelligent algorithms.
 
