Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #91
PeterDonis said:
I don't understand.
I saw too late that my reply quoted more than I wanted. I only intended to quote Klystron but couldn't edit the preceding stuff out. Just disregard it.
 
  • #92
sbrothy said:
I only intended to quote Klystron but couldn't edit the preceding stuff out.
Ok. I'll use magic Mentor powers to do the edit.
 
  • #93
PeterDonis said:
Ok. I'll use magic Mentor powers to do the edit.
I could really use a preview function when posting to this forum. Other forums have this functionality. Or is it there and I can't find it?
 
  • #94
sbrothy said:
I could really use a preview function when posting to this forum. Other forums have this functionality. Or is it there and I can't find it?
It's there when you're starting a new thread, but not for individual posts as far as I know.
 
  • #95
PeterDonis said:
It's there when you're starting a new thread, but not for individual posts as far as I know.
It's there, but hard to use. . . . 😒
[attached screenshot]
 
  • Like
Likes sbrothy and Klystron
  • #96
Bystander said:
...? "Artificial" stupidity is better?
We may already have it. If artificial intelligence really is intelligent, then it would know to dumb itself down enough not to be a threat; then, when the unsuspecting 'ugly bags of water'* have their guard down...

*Star Trek
 
  • Like
  • Haha
Likes sbrothy, Oldman too and Klystron
  • #97
bland said:
We may already have it. If artificial intelligence really is intelligent, then it would know to dumb itself down enough not to be a threat; then, when the unsuspecting 'ugly bags of water'* have their guard down...

*Star Trek
OMG. Now there's a nightmare! :)
 
  • #99
  • Like
Likes russ_watters and Klystron
  • #100
DaveC426913 said:
We discourage blind links. It would be helpful to post a short description of what readers can expect if they click on that link, as well as why it is relevant to the discussion.
My bad! It's basically a long essay about how real AI wouldn't think like a human being as is usually portrayed in all the movies, etc.
 
  • Like
Likes DaveC426913
  • #101
Chicken Squirr-El said:
“as soon as it works, no one calls it AI anymore.”
- John McCarthy, who coined the term “Artificial Intelligence” in 1956

  • Cars are full of Artificial Narrow Intelligence (ANI) systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems.
  • Your phone is a little ANI factory.
  • Your email spam filter is a classic type of ANI.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
 
  • #103
PeroK said:
This is garbage.
:mad:
Poster is new. Be constructive if you have criticism.
 
  • #104
DaveC426913 said:
:mad:
Poster is new. Be constructive if you have criticism.
Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years.

In other words, he's telling us that in the 7 years since 2014 the world has changed more than it did in the entire 20th Century? By what measure could this conceivably be true? It's patently not the case. This is, as I said, garbage.

A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month.

This is also garbage. How can the world change technologically significantly several times a month? Whoever wrote this has modeled progress as a simple exponential without taking into account the non-exponential aspects like return on investment. A motor manufacturer, for example, cannot produce an entire new design every day, because they cannot physically sell enough cars in a day to get return on their investment. We are not buying new cars twice as often in 2021 as we did in 2014. This is not happening.

You can't remodernise your home, electricity, gas and water supply every month. Progress in these things, rather than change with exponential speed, has essentially flattened out. You get central heating and it lasts 20-30 years. You're not going to replace your home every month.

The truth is that most things have hardly changed since 2014. There is a small minority of things that are new or have changed significantly - but even smartphones are not fundamentally different from the ones of seven years ago.

Then, finally, just to convince us that we are too dumb to judge for ourselves the rate of change in our lives:

This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.

I'm not sure what logical fallacy that is, but it's, like I said, garbage.
 
Last edited:
  • Skeptical
  • Like
Likes Jarvis323 and Oldman too
  • #105
Here's another aspect of the fallacy. The above article equates exponentially increasing computing power with exponentially increasing change to human life. This is false.

For example, in the 1970s and '80s (when computers were still very basic by today's standards) entire armies of clerks and office workers were replaced by electronic finance, payroll, and budgeting systems. That, in a way, was the biggest change there will ever be: the initial advent of ubiquitous business IT systems.

The other big change was the Internet and web technology, which opened up access to systems. In a sense, nothing as significant as that can happen again. Instead of the impact of the Internet being an exponentially increasing effect on society, it's more like an exponentially decreasing effect. The big change has happened as an initial 10 year paradigm shift and now the effect is more gradual change. It's harder for more and more Internet access to significantly affect our lives now. The one-off sea-change in our lives has happened.

In time it becomes more difficult for changes in that technology to make a significant impact. That's why a smartphone in 2022 might have 32 times the processing power of a 2014 model, but there's no sense in which it has 32 times the impact on our lives.

Equating processing power (doubling every two years) with the rate of human societal change (definitely not changing twice as fast every two years) is a completely false comparison.

Instead, change is driven by one-off technological breakthroughs, and these seem to come along every 20 years or so. In other words, you could make a case that the change from 1900 to 1920 was comparable with the change from 2000 to 2020. Human civilization does not change out of all recognition every 20 years, but in the post-industrial era there has always been significant change every decade or two.

AI is likely to produce a massive one-off change sometime in the next 80 years. Whether that change is different from previous innovations and leads to permanent exponential change is anyone's guess.

Going only by the evidence of the past, we would assume that it will be a massive one-off change for 10-20 years and then have a steadily diminishing impact on us. That said, there is a case for AI to be different and to set off a chain reaction of developments. And, the extent to which we can control that is debatable.

Computers might be 1,000 times more powerful now than in the year 2000, but in no sense is life today unrecognisable from 20 years ago.
 
Last edited:
  • Skeptical
  • Informative
Likes Jarvis323 and Oldman too
  • #106
"Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind."

"Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey."


from: https://www.ibm.com/cloud/learn/what-is-artificial-intelligence#toc-deep-learn-md_Q_Of3
 
  • #107
PeroK said:
This is also garbage. How can the world change technologically significantly several times a month? Whoever wrote this has modeled progress as a simple exponential without taking into account the non-exponential aspects like return on investment. A motor manufacturer, for example, cannot produce an entire new design every day, because they cannot physically sell enough cars in a day to get return on their investment. We are not buying new cars twice as often in 2021 as we did in 2014. This is not happening.

You can't remodernise your home, electricity, gas and water supply every month. Progress in these things, rather than change with exponential speed, has essentially flattened out. You get central heating and it lasts 20-30 years. You're not going to replace your home every month.

You're analyzing the future in the context of its past. That just doesn't work. There may be no such thing as investment, returns, and selling, etc., as we see them now.

For example, what limitations would those constraints really impose when you require zero human labor to design, manufacture, distribute, dispose of, clean up, and recycle things, have essentially unlimited resources, and can scale up as large as you want extremely fast, limited mainly by what you have in your solar system? And then after that, how long to colonize the nearby star systems?

The fact is that near-future technology could easily, and suddenly, make these things possible. Your house and car could easily be updated weekly, or even continuously every minute, and for free, just as easily as your computer downloads and installs an update.

And AI superintelligence isn't needed for that, just a pretty good AI. The superintelligence part may be interesting too, but I'm not sure exactly what more could be done with more intelligence that couldn't be done otherwise. Probably things like math breakthroughs, medical breakthroughs, maybe immortality, maybe artificial life, or nano-scale engineering that looks like life.

Some other things to expect are cyborgs, widespread use of human genetic engineering, and ultra-realistic virtual worlds, haptics, or direct brain interfaces that people become really addicted to.

I don't know how to measure technological advancement as a scalar value, though. I think Kurzweil is probably about right in the big picture.
 
Last edited:
  • #108
Lol, this is classic. . . . :wink:

[embedded video]
 
  • Like
Likes sbrothy and Oldman too
  • #109
  • Like
  • Haha
Likes Chicken Squirr-El, Bystander and BillTre
  • #110
OCR said:
Lol, this is classic. . . . :wink:

Man. Kids and their computers. I'm flabbergasted. :)
 
  • #111
PeroK said:
This is garbage.
Agreed. It's dated 2015, but it includes a Moore's Law graph with real data ending in 2000 and projections for the next 50 years. The trend had already been dead for a decade before the post was written! (Note: that was a cost-based graph, not strictly power or transistors vs. time.)

The exponential growth/advancement projection is just lazy, bad math. It doesn't apply to everything, and even with Moore's law as the example, it's temporary. By many measures, technology is progressing rather slowly right now. Some of the more exciting things, like electric cars, are driven primarily by the mundane: cheaper batteries due to manufacturing scale.

AI is not a hardware problem (not enough power); it is a software problem. It isn't that computers think too slowly, it's that they think wrong. That's why self-driving cars are so difficult to implement. And if Elon succeeds, it won't be because he created AI; it will be because he collected enough data and built a sufficiently complex algorithm.
 
  • Like
Likes Oldman too and PeroK
  • #112
Hey, just want to say that I only posted this for "fun read" purposes, as noted by sbrothy, and I definitely don't agree with everything in it. This is the "Science Fiction and Fantasy Media" section, after all, and I did not intend to ruffle so many feathers over it.

I get irritated when fiction always has the AI behave with human psychology and the WBW post touched on that in ways I rarely see.

Slightly related (and I'm pretty sure there are plenty of threads on this already), but I'm a huge fan of this book: https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

Highly recommend!
 
  • Like
Likes russ_watters
  • #113
Chicken Squirr-El said:
I read that a year or two ago. I loooove the vampire concept.

But I'm not really a sci-fi horror fan. If you want sci-fi horror, read Greg Bear's Hull Zero Three. This book literally haunts me. (I get flashbacks every time I see it on the shelf, and I've taken to burying it where I won't see it.)
 
  • #114
russ_watters said:
It doesn't apply to everything and with Moore's law as an example, it's temporary.
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
 
  • #115
Algr said:
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
No, Moore's Law broke down* right around the time (just after) AMD beat Intel to 1 GHz in 2000. Monopoly or not, you need to sell your products to make money, and one big contributor to the decline of PC and software sales is that there's no good reason to upgrade when the next version is barely any better than the last.

*Note: there are different formulations/manifestations, but prior to 2000 for PCs it was all about clock speeds. Afterward, they started doing partial workarounds to keep performance increasing (like multiple cores).
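As a rough back-of-envelope of my own (assuming the common doubling-every-two-years framing rather than any particular formulation), here is what uninterrupted clock-speed doubling from that 2000 starting point would have implied:

```python
# Hypothetical extrapolation only: what clock speeds would look like if the
# pre-2000 doubling trend had simply continued. Real desktop CPUs instead
# plateaued in the single-digit GHz range.
start_year, start_ghz = 2000, 1.0   # roughly when AMD/Intel hit 1 GHz
doubling_period_years = 2.0         # assumed Moore's-law-style cadence

for year in range(2000, 2024, 4):
    projected = start_ghz * 2 ** ((year - start_year) / doubling_period_years)
    print(f"{year}: ~{projected:,.0f} GHz if clock-speed doubling had continued")
```

By 2020 that naive trend would have us at roughly a thousand GHz, which is exactly why the workaround era (multiple cores, wider pipelines) is a break in the trend rather than a continuation of it.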
 
  • #116
Algr said:
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
The point of the criticisms is that, in real world scenarios, nothing progresses geometrically for an unlimited duration. There always tends to be a counteracting factor that rises to the fore to flatten the curve. The article even goes into it a little later, describing such progress curves as an 'S' shape.
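To make the exponential-versus-S-curve contrast concrete, here is a tiny sketch (the growth rate and ceiling are made-up numbers for illustration, not taken from the article):

```python
# Pure exponential growth vs. a logistic (S-shaped) curve that flattens out
# once a limiting factor dominates. Both start out looking identical.
import math

ceiling = 1000.0      # assumed limit imposed by the counteracting factor
rate = 0.35           # assumed growth rate per year
x0 = 1.0              # starting value

for year in range(0, 41, 5):
    exponential = x0 * math.exp(rate * year)
    logistic = ceiling / (1 + ((ceiling - x0) / x0) * math.exp(-rate * year))
    print(f"year {year:2d}:  exponential ~ {exponential:>12,.1f}   logistic ~ {logistic:>8,.1f}")
```

Early on the two curves are nearly indistinguishable, which is exactly why naive extrapolation of the early data looks so convincing.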
 
  • Like
Likes Oldman too, Klystron and russ_watters
  • #117
OCR said:
Lol, this is classic. . . . :wink:

I really didn't do this little film justice in my first comment. The "spacetime folding" travel effects are truly amazing. And what a nightmare.
 
  • #118
OCR said:
Lol, this is classic. . . . :wink:
The crisis in that film is that the machine has final authority on deciding what constitutes "harm", and thus ends up doing pathological things, including denying the human any understanding of what is really going on.
 
  • Like
  • Informative
Likes sbrothy and Oldman too
  • #119
OCR said:
Lol, this is classic. . . . :wink:


Turing's Halting Problem, personified.
 
  • Like
  • Informative
Likes sbrothy and Oldman too
  • #120
russ_watters said:
AI is not a hardware problem (not enough power); it is a software problem. It isn't that computers think too slowly, it's that they think wrong. That's why self-driving cars are so difficult to implement. And if Elon succeeds, it won't be because he created AI; it will be because he collected enough data and built a sufficiently complex algorithm.
Right now I think it is largely a combination of a hardware problem and a data problem. The more (and better) data the neural networks are trained on, the better AI gets. But training is costly given the vast amount of data. So it is really a matter of collecting data and training the neural networks with it.

AI's behavior is not driven by an algorithm written by people; it's a neural network that has evolved over time to learn a vastly complex function telling it what to do. And that function is currently too complex for people to break down and understand. So nobody is writing complex algorithms that make AI succeed; they are just feeding data into it and coming up with effective loss functions that penalize the results they don't like.

But there is possibly a limit to how far that can take us. There is also an evolution of architectures, transfer learning, and neuro-symbolic learning, which may spawn breakthroughs or steady improvements beyond pure brute-force data consumption.
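As a toy illustration of the "feed in data and penalize what you don't like" idea, here is a minimal logistic-regression sketch in NumPy (my own example, not a description of any particular AI system):

```python
# The decision rule is never written by hand: weights start at zero and are
# nudged to reduce a loss that penalizes outputs we don't like.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "hidden" rule is y = 1 when x1 + x2 > 1, but the code below
# never states that rule; it only sees labeled examples.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(X @ w + b)                                   # current guesses
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad_w = X.T @ (p - y) / len(y)                          # gradient of the loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                                         # adjust to reduce loss
    b -= lr * grad_b

print("learned weights:", w.round(2), "bias:", round(b, 2), "loss:", round(loss, 3))
```

Scale that same loop up by many orders of magnitude in parameters and data and you get the training picture described above; nobody ever writes down the learned function itself.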
 
Last edited:
