Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
  • Featured
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #246
Astronuc said:
Self-aware in what sense?
That's essentially the crux of the concern. We can't control each other's behaviour, so if an AI reaches that level of autonomy, and is inimical to the human way of life, it might decide on some nefarious course of action to kill us off.

We don't know, of course, if an AI could even reach this dangerous point (and the AI we've built to date are laughably limited in that regard) but it is possible. As for what 'model' it adopts in terms of ethics or higher purpose, that is equally unknown.

Some say AI has the potential to go horribly wrong for us. The question is whether we should fear this or not.
 
  • Like
Likes russ_watters
  • #247
Another thing - are we talking AI at the level of a 4th or 8th grader, or that of one with a PhD or ScD? Quite a difference.
 
  • #248
Astronuc said:
Another thing - are we talking AI at the level of a 4th or 8th grader, or that of one with a PhD or ScD? Quite a difference.
As far as I know, AI is currently very good at, and either already exceeds or will probably soon exceed humans (in a technical sense) in, language skills, music, art, and the understanding and synthesis of images. In these areas, it is easy to make AI advance further just by throwing more and better data and massive amounts of compute time into its learning/training.

I am not aware of AI being able to do independent fundamental research in mathematics, or that type of thing. But that is something we shouldn't be surprised to see fairly soon, IMO. I think this because AI advances at a high rate, and we are now seeing leaps in natural language, which I think is a stepping stone to mathematics. And Google has an AI now that can compete at an average level in coding competitions.

Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.

AI is also very, very advanced in understanding human behavior/psychology. Making neural networks able to understand human behavior, and training them to manipulate us, is by far the biggest effort going in the AI game. This is one of the biggest current threats, IMO.
 
Last edited:
  • #249
Jarvis323 said:
Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.
Maybe. But what happens if the algorithm becomes corrupted, or a chip or microcircuit fails? Will it self-correct?

Jarvis323 said:
if it has access to the world,
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?
 
  • Like
Likes russ_watters
  • #250
Astronuc said:
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?

I guess there is pretty much no limitation. We have to guess where people will draw the line. If there is a line that, once crossed, we can no longer turn back from and that will lead to our destruction, it will be hard to recognize. We could be like the lobster in a pot of water that slowly increases in temperature.
 
  • #251
Astronuc said:
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?
It is, and all of those systems are commonly cited as examples of where AI can provide better outcomes (usually lower cost and fewer errors) than people do. Certainly, clever pattern matching algorithms can reduce cost and error rates in those domains, but they are not 'intelligent' in the sense humans generally mean by the term, and it is not clear to me how or why their 'intelligence' would grow such that they became a threat (or even a help) beyond the specific parameters set by their original model.
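
To make "clever pattern matching" concrete, here is a minimal sketch of the kind of statistical anomaly flagging such control systems build on. The readings and the threshold are invented for illustration, not taken from any real utility:

Code:
from statistics import mean, stdev

def flag_anomalies(readings, window=24, z_threshold=3.0):
    """Flag readings that deviate sharply from the trailing window.

    Plain statistics, not 'intelligence': the detector only knows what
    the window tells it, which is the point made above about models
    staying inside their original parameters.
    """
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Invented hourly grid-load figures (arbitrary units) with one spike at the end.
loads = [100 + (i % 24) for i in range(72)] + [500]
print(flag_anomalies(loads))  # -> [72]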

But a "4th or 8th grader" in charge of a large real-world network / system, could cause havoc "just because", and that's really likely, even if it is via a programming bug rather than self-aware mischief making.
 
  • #252
The danger of AI lies in how much capability we will not recognize that it has, and how much control we will give it. Like nuclear energy, AI will be developed by anybody. Like nuclear energy, there was an initial barrier to its widespread development and implementation, but with time those barriers became lower; the same is true of AI. Initially, AI development was limited by time and the need for massive computing resources.

Recently, Cerebras, a computer chip design company, produced the largest processor ever made, obviating the need for the thousands of GPUs normally required to develop advanced AI. They claim that a computer incorporating this chip will be able to handle 100 times more parameters than current AI models such as GPT-3. A computer with this chip will reduce the cost of development by making programming much easier, reducing the power requirements, and cutting the training time for neural networks from months to minutes.
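
For a rough, back-of-the-envelope sense of what "100 times more parameters than GPT-3" would mean at the storage level (assuming GPT-3's published 175 billion parameters and 16-bit weights; these assumptions are mine, not Cerebras figures):

Code:
# Rough scale check. Assumptions: GPT-3's published 175 billion parameters
# and 2 bytes per parameter (16-bit weights). Not figures from Cerebras.
gpt3_params = 175e9
scaled_params = 100 * gpt3_params        # 1.75e13, i.e. 17.5 trillion
weight_bytes = scaled_params * 2         # bytes just to store the weights

print(f"{scaled_params:.2e} parameters")                               # 1.75e+13
print(f"{weight_bytes / 1e12:.0f} TB of weights at 16-bit precision")  # ~35 TB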

Should we be concerned? Will this development be like fire in the hands of a child? Will common sense prevail?
 
  • #254
gleem said:
Should we be concerned?
Yes.

gleem said:
Will this development be like fire in the hands of a child?
Probably.

gleem said:
Will common sense prevail?
No!
 
  • #255
https://www.nature.com/articles/d41586-022-01921-7#ref-CR1

"Inspired by research into how infants learn, computer scientists have created a program that can learn simple physical rules about the behaviour of objects — and express surprise when they seem to violate those rules. The results were published on 11 July in Nature Human Behaviour1."
 
  • Wow
Likes Melbourne Guy
  • #256
So far most AI is what I call "AI in a bottle". We uncork the bottle to see what is inside. The AI "agents", as some are called, are asked questions and provide answers based on the relevance of the question to the words and phrases of a language. This is only one of the many aspects of true intelligence. AI as we currently experience it has no contact with the outside world other than being turned on to respond to some question.

However, researchers are giving AI more intelligent functionality. Giving it access to, or the ability to interact with, the outside world without any prompts may be the beginning of what we might fear.

Selecting a few stanzas from the song "Genie in a Bottle", it might depict our fascination with, and caution about, AI, sans the sexual innuendo. Maybe there is some reason to think AI might be a djinn.

I feel like I've been locked up tight​
For a century of lonely nights​
Waiting for someone to release me​
You're lickin' your lips​
And blowing kisses my way​
But that don't mean I'm going to give it away​
If you want to be with me​
Baby, there's a price to pay​
I'm a genie in a bottle (I'm a genie in a bottle)​
You got to rub me the right way​
If you want to be with me (oh)​
I can make your wish come true (your wish come true oh)​
Just come and set me free, baby​
And I'll be with you​

I'm a genie in a bottle, baby​
Come come, come on and let me out​
 
  • Informative
Likes Oldman too
  • #258
It's not clear how much AI capability one can put in a robot this small. I think it would need at least the capability of the self-driving system that Tesla has in order to be useful. Although, on second thought, if you gave it IR vision, a human target would stand out dramatically at night, making target identification less of a problem.
 
  • #259
Melbourne Guy said:
I can't recall if the Boston Dynamics-looking assault rifle toting robot has been mentioned, but it's genuinely scary!
Scary? Yes, but that's a knock-off bot; check out the Russian theme. Spot's potential was made pretty clear by MSCHF. Personally, I think the video in the TT article was a bit of sensationalism on the part of a particular country. Scary? Very, but it gets better/worse. This is what Spot's creators are showing off lately.
https://www.bostondynamics.com/atlas

It's also a pretty good bet that the DARPA dog in the video could handle the auto-fire recoil a lot better than the knock-off.
 
  • #260
gleem said:
It's not clear how much AI capability one can put in a robot this small. I think it would need at least the capability of the self-driving system that Tesla has in order to be useful. Although, on second thought, if you gave it IR vision, a human target would stand out dramatically at night, making target identification less of a problem.
The AIs in my novels are often distributed swarm minds; it's a pretty common theme in sci-fi. These small units could be peripherals of a larger set, communicating by RF. You'd think that would be easy to interfere with, but spread-spectrum radios can be resistant to jamming!
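
A minimal sketch of why frequency-hopping spread spectrum resists jamming: both ends derive the same pseudo-random channel sequence from a shared seed, so a jammer parked on one channel only hits a small fraction of transmissions. This is a toy illustration, not a real radio protocol, and the channel count and seed are invented:

Code:
import random

SHARED_SEED = 0xC0FFEE   # secret hop key shared by transmitter and receiver
CHANNELS = 50            # hypothetical number of available channels
JAMMED_CHANNEL = 7       # a jammer parked on a single channel

def hop_sequence(seed, hops):
    """Both ends derive the same pseudo-random channel sequence from the seed."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

tx_hops = hop_sequence(SHARED_SEED, 1000)
rx_hops = hop_sequence(SHARED_SEED, 1000)
assert tx_hops == rx_hops   # the receiver always knows where to listen

lost = sum(1 for ch in tx_hops if ch == JAMMED_CHANNEL)
print(f"{lost / len(tx_hops):.1%} of hops land on the jammed channel")  # roughly 1/50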

Oldman too said:
Scary? yes, but that's a knock-off bot, check out the Russian theme.
Hadn't seen Spot; I wonder if those were used in that recent War of the Worlds TV series? But that aside, it's straightforward to imagine a hostile AI either taking over bots like this or crafting its own versions. That's all in the future, of course; at the moment, we have to design and build the tools of our own downfall :nb)
 
  • Like
Likes Oldman too
  • #261
I've seen worse. Drone with a flamethrower.
 
  • Sad
Likes Melbourne Guy
  • #262
profbuxton said:
[...] Even [Asimov's] famed three laws of robotics didn't always stop harm occurring in his tales. [...]

I was under the impression that his three laws of robotics were a plot device invented to drive his stories, with the flaws presumably intended to provide drama. As insurance against out-of-control AI, they sound way too catchy. :)
 
  • Like
Likes russ_watters
  • #263
sbrothy said:
I was under the impression that his three laws of robotics were a plot device invented to drive his stories, with the flaws presumably intended to provide drama. As insurance against out-of-control AI, they sound way too catchy. :)
I mean, when something related to AI references itself the way his "laws" do, you can be sure it's going to be "exciting". :)
 
  • #264
Former Google CEO Eric Schmidt gives a warning about the world's lack of preparedness to deal with AI.

 
  • #265
I've been listening to a presentation by a company that is developing autonomous machines, one application of which is construction equipment or heavy machinery, with the objective of replacing human operators with AI systems that monitor a variety of sensors, permitting the AI controller to be 'aware of the environment'. So, as with autonomous cars, trucks, trains, ships, and planes, (human) heavy-equipment operators can be replaced by a computer system.

I reflect on locomotives, which have hundreds of sensors to monitor the condition of the prime mover, power conversion system, and traction system. Apparently, any piece of equipment can be modified to replace a human operator, and the control is much smoother, so there is less wear and tear on the equipment.
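
As a toy illustration of why automated control tends to be smoother, consider a controller that simply rate-limits throttle changes so abrupt demands are spread over several steps. The values and the rate limit are invented, not taken from any real locomotive controller:

Code:
def rate_limited(targets, max_step=0.05):
    """Limit how fast the throttle setting may change per time step.

    A toy stand-in for smoother automated control: abrupt demands get
    spread over several steps, which is gentler on the drivetrain.
    """
    settings, current = [], 0.0
    for target in targets:
        step = max(-max_step, min(max_step, target - current))
        current += step
        settings.append(round(current, 3))
    return settings

# A human-style input: idle, then full throttle demanded all at once.
demanded = [0.0] * 3 + [1.0] * 10
print(rate_limited(demanded))  # ramps up in 0.05 increments instead of jumping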

A number of tech companies are sponsoring the research.
 
  • #266
Astronuc said:
So, as with autonomous cars, trucks, trains, ships, and planes, (human) heavy-equipment operators can be replaced by a computer system.
Most construction sites that I've worked on have a standing caveat: "No one is irreplaceable" (this applies to more situations than just construction). As an afterthought, I'll bet the white hats will be the first to go if AI takes over.
 
  • Like
Likes CalcNerd
  • #267
Oldman too said:
As an afterthought, I'll bet the white hats will be the first to go if AI takes over.
What's a white hat?
 
  • #268
On any typical construction site, a "white hat" denotes a foreman or boss. Besides the obligatory white hard hat, they can also be identified by carrying a clipboard and having a cell phone constantly attached to one ear or the other. They are essential on a job site; however, they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
 
  • Skeptical
  • Like
Likes russ_watters and Melbourne Guy
  • #269
Oldman too said:
they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
Do you have any idea what will replace them?
 
  • Like
  • Informative
Likes russ_watters and Oldman too
  • #270
gleem said:
Do you have any idea what will replace them?
Not in the least, but it will probably involve artificially intelligent algorithms.
 
  • #271
gleem said:
Do you have any idea what will replace them?
"Chimps on rollerskates?"
 
  • Like
  • Informative
  • Love
Likes russ_watters, Oldman too and gleem
  • #272
Oldman too said:
They are essential on a job site; however, they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
I agree, @Oldman too. If you are investing in ML / AI to replace labour, picking off the highest-paid, hardest-to-replace roles seems economically advantageous to the creator and the buyer of the system.
 
  • Skeptical
  • Like
Likes russ_watters and Oldman too
  • #273
If you want to save some serious money, replace the CEO, COO, CFO, CIO, and CTO. Well, maybe not the CTO, since he might be the one doing the replacing. After all, they run the company through the computer system, reading and writing reports and holding meetings, all of which AI is optimally suited to do.
 
  • Skeptical
  • Like
Likes russ_watters and Oldman too
  • #274
gleem said:
If you want to save some serious money, replace the CEO, COO, CFO, CIO, and CTO
Well, hopefully the C-suite is providing strategy, inspiration, leadership, and capital raising activities that are so far hard for AI to replicate, but yeah, eventually...
 
  • #275
Melbourne Guy said:
eventually...
That's what scares me.
 
  • Haha
Likes Melbourne Guy
  • #276
Oldman too said:
That's what scares me.
That's why I'm lowest man on the totem pole, @Oldman too. By the time Colossus comes for me, I'll be well retired :biggrin:
 
  • Like
Likes Oldman too
  • #277
Melbourne Guy said:
That's why I'm lowest man on the totem pole, @Oldman too. By the time Colossus comes for me, I'll be well retired :biggrin:
A good plan!
 
  • #278
Considering the advances in AI in the last six months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?

An update of GPT-4, GPT-4.5, is expected around October of this year.
 
  • #279
gleem said:
Considering the advances in AI in the last six months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?
Nothing; still no fear. I see a few comments from last year (above) that I disagree with too, and the succinct reply is that I think people watch too many movies.
 
Last edited:
  • Like
Likes Bystander
  • #280
gleem said:
Considering the advances in AI in the last six months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?

An update of GPT-4, GPT-4.5, is expected around October of this year.

There was an open letter released today calling for a pause.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/
 
