Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
AI Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #101
Chicken Squirr-El said:
“as soon as it works, no one calls it AI anymore.”
- John McCarthy, who coined the term “Artificial Intelligence” in 1956

  • Cars are full of Artificial Narrow Intelligence (ANI) systems, from the computer that figures out when the anti-lock brakes should kick in, to the computer that tunes the parameters of the fuel injection system.
  • Your phone is a little ANI factory.
  • Your email spam filter is a classic type of ANI.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
 
  • #103
PeroK said:
This is garbage.
:mad:
Poster is new. Be constructive if you have criticism.
 
  • #104
DaveC426913 said:
:mad:
Poster is new. Be constructive if you have criticism.
Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years.

In other words, he's telling us that in the 7 years since 2014 the world has changed more than it did in the entire 20th Century? By what measure could this conceivably be true? It's patently not the case. This is, as I said, garbage.

A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month.

This is also garbage. How can the world change technologically significantly several times a month? Whoever wrote this has modeled progress as a simple exponential without taking into account the non-exponential aspects like return on investment. A motor manufacturer, for example, cannot produce an entire new design every day, because they cannot physically sell enough cars in a day to get return on their investment. We are not buying new cars twice as often in 2021 as we did in 2014. This is not happening.

You can't remodernise your home, electricity, gas and water supply every month. Progress in these things, rather than change with exponential speed, has essentially flattened out. You get central heating and it lasts 20-30 years. You're not going to replace your home every month.

The truth is that most things have hardly changed since 2014. There is a small minority of things that are new or have changed significantly - but even smartphones are not fundamentally different from the ones of seven years ago.

Then, finally, just to convince us that we are too dumb to judge for ourselves the rate of change in our lives:

This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.

I'm not sure what logical fallacy that is, but it's, like I said, garbage.
 
Last edited:
  • Skeptical
  • Like
Likes Jarvis323 and Oldman too
  • #105
Here's another aspect to the fallacy. The above paper equates exponentially growing computing power with exponentially growing change to human life. This is false.

For example, in the 1970's and 80's (when computers were still very basic by today's standard) entire armies of clerks and office workers were replaced by electronic finance, payroll and budgeting systems etc. That, in a way, was the biggest change there will ever be: the initial advent of ubiquitous business IT systems.

The other big change was the Internet and web technology, which opened up access to systems. In a sense, nothing as significant as that can happen again. Instead of the impact of the Internet being an exponentially increasing effect on society, it's more like an exponentially decreasing effect. The big change has happened as an initial 10 year paradigm shift and now the effect is more gradual change. It's harder for more and more Internet access to significantly affect our lives now. The one-off sea-change in our lives has happened.

In time it becomes more difficult for changes in that technology to make a significant impact. That's why a smartphone in 2022 might have 32 times the processing power of 2014, but there's no sense in which it has 32 times the impact on our lives.

Equating processing power (doubling every two years) with the rate of human societal change (definitely not changing twice as fast every two years) is a completely false comparison.

Instead, change is driven by one-off technological breakthroughs. And these appear to be every 20 years or so. In other words, you could make a case that the change from 1900 to 1920 was comparable with the change from 2000 to 2020. Human civilization does not change out of all recognition every 20 years, but in the post-industrial era there has always been significant change every decade or two.

AI is likely to produce a massive one-off change sometime in the next 80 years. Whether that change is different from previous innovations and leads to permanent exponential change is anyone's guess.

Going only by the evidence of the past, we would assume that it will be a massive one-off change for 10-20 years and then have a steadily diminishing impact on us. That said, there is a case for AI to be different and to set off a chain reaction of developments. And, the extent to which we can control that is debatable.

Computers might be 1,000 times more powerful now than in the year 2000, but in no sense is life today unrecognisable from 20 years ago.
 
Last edited:
  • Skeptical
  • Informative
Likes Jarvis323 and Oldman too
  • #106
"Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind."

"Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equaled to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey."


from: https://www.ibm.com/cloud/learn/what-is-artificial-intelligence#toc-deep-learn-md_Q_Of3
 
  • #107
PeroK said:
This is also garbage. How can the world change technologically significantly several times a month? Whoever wrote this has modeled progress as a simple exponential without taking into account the non-exponential aspects like return on investment. A motor manufacturer, for example, cannot produce an entire new design every day, because they cannot physically sell enough cars in a day to get return on their investment. We are not buying new cars twice as often in 2021 as we did in 2014. This is not happening.

You can't remodernise your home, electricity, gas and water supply every month. Progress in these things, rather than change with exponential speed, has essentially flattened out. You get central heating and it lasts 20-30 years. You're not going to replace your home every month.

You're analyzing the future in the context of its past. That just doesn't work. There might be no such thing as investment, return, selling, etc., as we see them now.

For example, what limitations would those constraints really impose when you require zero human labor to design, manufacture, distribute, dispose of, clean up, and recycle things, and have essentially unlimited resources, and can practically scale up as large as you want extremely fast, limited mainly by what you have in your solar system? And then after that, how long to colonize the nearby star systems?

The fact is that near-future technology could easily and suddenly make these things possible. Your house and car could easily be updated weekly, or even continuously each minute, and for free, just as easily as it is for your computer to download and install an update.

And AI superintelligence isn't needed for that, just a pretty good AI. The superintelligence part may be interesting too, but I'm not sure exactly what more could be done with more intelligence that couldn't be done otherwise. I guess, probably things like math breakthroughs, medical breakthroughs, maybe immortality, maybe artificial life, or nano-scale engineering that looks like life, things like that.

Some other things to expect are cyborgs, widespread use of human genetic engineering, and ultra realistic virtual worlds and haptics, or direct brain interfaces, that people are really addicted to.

I don't know how to measure technological advancement as a scalar value though. I think Kurzweil is probably about right in the big picture.
 
Last edited:
  • #108
Lol, this is classic. . . . :wink:



 
  • Like
Likes sbrothy and Oldman too
  • #109
  • Like
  • Haha
Likes Chicken Squirr-El, Bystander and BillTre
  • #110
OCR said:
Lol, this is classic. . . . :wink:




Man. Kids and their computers. I'm flabbergasted. :)
 
  • #111
PeroK said:
This is garbage.
Agreed. It's dated from 2015, but includes a Moore's Law graph with real data ending in 2000 and projections for the next 50 years. It had already been dead a decade before the post was written! (Note: that was a cost-based graph, not strictly power or transistors vs time).

The exponential growth/advancement projection is just lazy bad math. It doesn't apply to everything and with Moore's law as an example, it's temporary. By many measures, technology is progressing rather slowly right now. Some of the more exciting things like electric cars are driven primarily by the mundane: cheaper batteries due to manufacturing scale.

AI is not a hardware problem (not enough power); it is a software problem. It isn't that computers think too slowly, it's that they think wrong. That's why self-driving cars are so difficult to implement. And if Elon succeeds, it won't be because he created AI; it will be because he collected enough data and built a sufficiently complex algorithm.
 
  • Like
Likes Oldman too and PeroK
  • #112
Hey, just want to say that I only posted this for "fun read" purposes, as noted by sbrothy and I definitely don't agree with everything in it. This is the "Science Fiction and Fantasy Media" section after all and I did not intend to ruffle so many feathers over it.

I get irritated when fiction always has the AI behave with human psychology and the WBW post touched on that in ways I rarely see.

Slightly related (and I'm pretty sure there are plenty of threads on this already), but I'm a huge fan of this book: https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

Highly recommend!
 
  • Like
Likes russ_watters
  • #113
Chicken Squirr-El said:
I read that a year or two ago. I loooove the vampire concept.

But I'm not really a sci-fi horror fan. If you want sci-fi horror, read Greg Bear's Hull Zero Three. This book literally haunts me. (I get flashbacks every time I see it on the shelf, and I've taken to burying it where I won't see it.)
 
  • #114
russ_watters said:
It doesn't apply to everything and with Moore's law as an example, it's temporary.
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
 
  • #115
Algr said:
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
No, Moore's Law broke down* right at the time (just after) AMD was beating them to 1 GHz in 2000. Monopoly or not, you need to sell your products to make money, and one big contributor to the decline of PC and software sales is there's no good reason to upgrade when the next version is barely any better than the last.

*Note: there are different formulations/manifestations, but prior to 2000, for PCs, it was all about clock speeds. After, they started doing partial work-arounds to keep performance increasing (like multi-cores).
 
  • #116
Algr said:
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
The point of the criticisms is that, in real world scenarios, nothing progresses geometrically for an unlimited duration. There always tends to be a counteracting factor that rises to the fore to flatten the curve. The article even goes into it a little later, describing such progress curves as an 'S' shape.
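
As a purely illustrative sketch of that "S" shape (the parameters here are made up, not taken from the article): a logistic curve is nearly indistinguishable from an exponential early on, and then a counteracting factor flattens it toward a ceiling.

```python
import math

def exponential(t, rate=0.5):
    # Unbounded exponential growth, starting from 1 at t = 0.
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # "S"-shaped growth: starts out looking exponential (also from 1 at t = 0),
    # then saturates at the ceiling as the counteracting factor kicks in.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):6.1f}")
```

By t = 30 the exponential is in the millions while the logistic has flattened out just below its ceiling of 100; the two are only similar at the start.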
 
  • Like
Likes Oldman too, Klystron and russ_watters
  • #117
OCR said:
Lol, this is classic. . . . :wink:




I really didn't do this little film justice in my first comment. The "spacetime folding" travel effects are truly amazing. And what a nightmare.
 
  • #118
OCR said:
Lol, this is classic. . . . :wink:
The crisis in that film is that the machine has final authority on deciding what constitutes "harm", and thus ends up doing pathological things, including denying the human any understanding of what is really going on.
 
  • Like
  • Informative
Likes sbrothy and Oldman too
  • #119
OCR said:
Lol, this is classic. . . . :wink:


Turing's Halting Problem, personified.
 
  • Like
  • Informative
Likes sbrothy and Oldman too
  • #120
russ_watters said:
AI is not a hardware problem (not enough power); it is a software problem. It isn't that computers think too slowly, it's that they think wrong. That's why self-driving cars are so difficult to implement. And if Elon succeeds, it won't be because he created AI; it will be because he collected enough data and built a sufficiently complex algorithm.
Right now I think it is largely a combination of a hardware problem and a data problem. The more/better data the neural networks are trained on, the better AI gets. But training is costly with the vast amount of data. So it is really a matter of collecting data and training the neural networks with it.

AI's behavior is not driven by an algorithm written by people; it's a neural network which has evolved over time to learn a vastly complex function that tells it what to do. And the function is currently too complex for people to break down and understand. So nobody is writing any complex algorithms that make AI succeed; they are just feeding data into it, and coming up with effective loss functions that penalize the results they don't like.
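
For what it's worth, here is a minimal sketch of what "feeding data into it" plus "a loss function that penalizes results you don't like" looks like in practice. Everything in it is a toy stand-in (random data, a tiny network, PyTorch chosen arbitrarily), not anyone's actual system:

```python
import torch
import torch.nn as nn

# Made-up data standing in for the collected training set.
inputs = torch.randn(1000, 10)    # 1000 examples, 10 features each
targets = torch.randn(1000, 1)    # the "right answers" we want reproduced

# Nobody hand-writes the decision rule; we just pick an architecture...
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

# ...and a loss function that penalizes outputs we don't like.
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # how "wrong" the current weights are
    loss.backward()                         # gradient of the loss w.r.t. every weight
    optimizer.step()                        # nudge the weights; no human ever reads them
```

The resulting weights are the "vastly complex function": the humans only chose the data, the architecture, and the penalty.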

But there is possibly a limit how far that can take us. There is also an evolution of architecture and transfer learning, and neuro-symbolic learning, which may spawn breakthroughs or steady improvements besides just pure brute force data consumption.
 
Last edited:
  • #121
Moore's law, I agree, is not a good model going into the future. But that doesn't stop people from trying to forecast improvements in computing power. Technologies like room-temperature superconductors, carbon-based transistors, quantum computing, etc. will probably change the landscape. If we crack fusion energy, then suddenly we have a ton of energy to use as well.

But in my opinion it also doesn't make too much sense to focus just on things like how small a transistor can be, and how efficiently you can compute in terms of energy. Because AI already gives us the ability to just build massive computers in space.

Quantum computing however does have the chance to make intractable problems tractable. There are problems which would take classical computers the age of the universe to solve that quantum computers could theoretically solve within a lifetime. A jump from impossible to possible is quite a bit bigger than Moore's law.
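
A rough back-of-the-envelope illustration of that "impossible to possible" gap (all numbers assumed; the actual speedups are problem-specific, with factoring via Shor's algorithm being the textbook case): compare an exponential-time method with a polynomial-time one on the same machine.

```python
# Illustrative comparison only: a machine doing 1e9 steps per second,
# a problem of "size" n = 100, an exponential-time method (2**n steps)
# versus a polynomial-time one (n**3 steps).
OPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.4e10

n = 100
exponential_years = 2 ** n / OPS_PER_SECOND / SECONDS_PER_YEAR
polynomial_seconds = n ** 3 / OPS_PER_SECOND

print(f"2**{n} steps: about {exponential_years:.1e} years, "
      f"roughly {exponential_years / AGE_OF_UNIVERSE_YEARS:.0e} ages of the universe")
print(f"{n}**3 steps: about {polynomial_seconds:.1e} seconds")
```

That kind of jump, where it exists, is a step change rather than a steady doubling.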

So then, when these future technologies can potentially result in massive leaps forward that make Moore's law look like nothing, what about the progress it took to develop those technologies in the first place? Sure, the unlocked capability is a step function, but in terms of advancement, do we also just draw a step function, or do we count the intermediate progress that got us there? Because there are a ton of scientific breakthroughs getting us closer happening constantly nowadays, even if most people aren't paying much attention.
 
Last edited:
  • #122
Jarvis323 said:
Right now I think it is largely a combination of a hardware problem and a data problem. The more/better data the neural networks are trained on, the better AI gets. But training is costly with the vast amount of data. So it is really a matter of collecting data and training the neural networks with it...

But there is possibly a limit how far that can take us. There is also an evolution of architecture and transfer learning, and neuro-symbolic learning, which may spawn breakthroughs or steady improvements besides just pure brute force data consumption.
I think you may have missed my point because you basically just repeated it with different wording. Yes, I know it is being approached as a hardware and data problem. But humans don't think by accessing vast data archives, taking measurements with precise sensors and doing exact calculations.
 
Last edited:
  • #123
russ_watters said:
I think you may have missed my point because you basically just repeated it with different wording.
"Imitation is the sincerest form of flattery." --Old proverb.
 
  • Like
Likes BillTre and russ_watters
  • #124
Jarvis323 said:
Moore's law, I agree, is not a good model going into the future. But that doesn't stop people from trying to forecast improvements in computing power. Technologies like room-temperature superconductors, carbon-based transistors, quantum computing, etc. will probably change the landscape.
It does make it much harder to predict when, instead of steady, continuous (predictable) advances, you're waiting for a single vast advancement that may come at an unknown time, if ever. And I'm not sure people even saw many of the biggest advances coming (such as the computer itself).

Jarvis323 said:
If we crack fusion energy, then suddenly we have a ton of energy to use as well.
Very doubtful. Fusion is seen by many as a fanciful solution to our energy needs, but the reality is likely to be expensive, inflexible, cumbersome and maybe even unreliable and dangerous. And even if fusion can provide power at, say, 1/10th the cost it currently is, generating the electricity is only around a third of the cost of electricity. The rest is in getting the electricity to the user. Fusion doesn't change that problem at all. And not for nothing, but we already have an effectively limitless source of fusion power available. As we've seen, just being available isn't enough to be a panacea.

Also, it's not power per se that's a barrier for computing power, it's heat. A higher end PC might cost $2000 and use $500 a year in electricity if run fully loaded, 24/7. Not too onerous at the moment. But part of what slowed advancement was when they reached the limit of what air cooling could dissipate. It gets a lot more problematic if you have to buy a $4,000 cooling system for that $2,000 PC (in addition to the added energy use). Even if the electricity were free, that would be a tough sell.
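
For anyone checking the arithmetic behind that "$500 a year" figure, a rough sketch (the wattage and electricity rate below are my assumptions, not quoted from anywhere):

```python
# Sanity check of the "$500 a year, fully loaded, 24/7" estimate.
watts_at_full_load = 500        # assumed draw for a higher-end PC under load
hours_per_year = 24 * 365
price_per_kwh = 0.12            # assumed residential rate, $/kWh

kwh_per_year = watts_at_full_load / 1000 * hours_per_year
print(f"{kwh_per_year:.0f} kWh/year -> ${kwh_per_year * price_per_kwh:.0f}/year")
```

With those assumptions it comes out to roughly 4,400 kWh and a bit over $500 per year, so the figure is in the right ballpark.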
 
  • #125
Heh. Managed to get a topical comic in after all. (A few of the previous ones are pretty good too. He must have had a good week.)
 
  • #126
russ_watters said:
But humans don't think by accessing vast data archives, taking measurements with precise sensors and doing exact calculations.
Who is to say humans have less precise sensors or that our calculations are less exact?
 
  • #127
Jarvis323 said:
Who is to say humans have less precise sensors or that our calculations are less exact?
Me? Honestly, I don't see how this is arguable. What's the exact color of the PF logo? How fast was the ball I just threw? Maybe we're talking past each other here, so if you're trying to say something else, could you elaborate?
 
  • Like
Likes Oldman too and BillTre
  • #128
russ_watters said:
Me? Honestly, I don't see how this is arguable. What's the exact color of the PF logo? How fast was the ball I just threw? Maybe we're talking past each other here, so if you're trying to say something else, could you elaborate?
Just because your conscious mind can't give precise answers doesn't mean your sensors and brain's calculations are at fault. You probably can catch a ball if someone tossed it to you and you don't need to consciously calculate trajectories and the mechanics of your hands. But you do do the necessary calculations. AI is the same. If you train a neural network to catch a ball, it will learn how to do it and it probably won't do it like a physics homework problem.

In the same way, when you see a color, maybe you can't recite the RGB component values (some people can't even see in color), but biological eyes are certainly not inferior sensors to mechanical ones, in my opinion, within the scope of their applicability. And I'm not sure what technology can compete with a nose.

Of course we can equip AI with all kinds of sensors we don't have ourselves, but that's pretty much beside the point.

And what does it mean to say our brain doesn't do exact calculations? Does it mean there is noise, interference, randomness, that it doesn't obey laws of physics?

AI is based on complex internal probabilistic models. So they guess. Maybe which guess they will give is consistent if they've got a static internal model that's stopped learning, but they still guess. The main difference with humans is we don't just guess immediately; we second-guess and trigger internal processing when we're not sure.

It might be possible AI can also try to improve its guesses at the expense of slower response time, but a general ability to do this is not a solved problem as far as I know.
 
Last edited:
  • #129
Jarvis323 said:
Just because your conscious mind can't give precise answers doesn't mean your sensors and brain's calculations are at fault.
That isn't what you or I said before - it sounds like exactly the opposite of your prior statement:
Who is to say humans have less precise sensors or that our calculations are less exact?
So I agree with your follow-up statement: our conscious mind can't make precise measurements/calculations. Yes, that matches what I said.
You probably can catch a ball if someone tossed it to you and you don't need to consciously calculate trajectories and the mechanics of your hands. But you do do the necessary calculations.
That sounds like a contradiction. It sounds like you think that our unconscious mind is a device like a computer that makes exact calculations. It's not. It can't be. The best basketball players after thousands of repetitions can hit roughly 89-90% of free throws. If our unconscious minds were capable of computer-like precision, then we could execute simple tasks like that flawlessly/perfectly - just like computers can.
AI is the same. If you train a neural network to catch a ball, it will learn how to do it and it probably won't do it like a physics homework problem.
Again, I agree with that. That's my point. And I'll say it another way: our brains/sensors are less precise and we make up for it by being more intuitive. So while we are much less precise for either simple or complex tasks, we require much less processing to be able to accomplish complex tasks. For computers, speed and precision works great for simpler tasks (far superior to human execution), but has so far been an impediment to accomplishment of more complex tasks.
 
  • #130
russ_watters said:
Again, I agree with that. That's my point. And I'll say it another way: our brains/sensors are less precise and we make up for it by being more intuitive. So while we are much less precise for either simple or complex tasks, we require much less processing to be able to accomplish complex tasks. For computers, speed and precision works great for simpler tasks (far superior to human execution), but has so far been an impediment to accomplishment of more complex tasks.
Maybe we're not talking about the same thing. You seem to be talking about computers and algorithms. I've been talking about neural networks. Trained neural networks do all their processing immediately. Sure it may have learned how to shoot a basket better than a person. But humans have a lot more tasks we have to do. If one neural network could do a half decent job shooting baskets and also do lots of other things well, that would be a huge achievement in the AI world.

Really, it's humans who do a lot of complex processing to complete a task, and giving AI that ability is a primary challenge for making AI improve, because it has to know what extra calculations it can do, and how it can reason about things it doesn't already know. The ability to do this in some predetermined cases, in response to a threshold on a sensor measurement, exists of course, but that isn't AI.
 
  • #131
Jarvis323 said:
Maybe we're not talking about the same thing. You seem to be talking about computers and algorithms. I've been talking about neural networks.
What we were just talking about is precision/accuracy of the output, regardless of how the work is being done.
Trained neural networks do all their processing immediately.
What does "immediately" mean? In zero time? Surely no such thing exists?
Sure it may have learned how to shoot a basket better than a person. But humans have a lot more tasks we have to do.
Yes. Another way to say it would be sorting and prioritizing tasks and then not doing (or doing less precisely) the lower priority tasks. That vastly reduces the processor workload. It's one of the key problems for AI.
If one neural network could do a half decent job shooting baskets and also do lots of other things well, that would be a huge achievement in the AI world.
Yes.
 
  • #132
russ_watters said:
What does "immediately" mean? In zero time? Surely no such thing exists?

I mean there is just one expression, which is a bunch of terms with weights on them, and for every input it gets, it just evaluates that expression and then makes its guess. It doesn't run any algorithms beyond that. Of course you could hard-code some algorithm for it to run in response to an input. And one day maybe they could also come up with their own algorithms.
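
Roughly, the "one expression" point looks like this. A tiny hand-rolled sketch with made-up, already-"trained" weights; a real network just has vastly more terms:

```python
import numpy as np

# Pretend these weights were already learned during training; at inference
# time they are just constants in one big expression.
W1 = np.random.randn(4, 3)   # first-layer weights (stand-ins for trained values)
b1 = np.random.randn(4)
W2 = np.random.randn(1, 4)   # second-layer weights
b2 = np.random.randn(1)

def guess(x):
    # No search, no iteration over hypotheses: evaluate the weighted
    # expression once and return the result.
    hidden = np.maximum(0, W1 @ x + b1)   # ReLU(W1 x + b1)
    return W2 @ hidden + b2

print(guess(np.array([0.5, -1.2, 3.0])))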

russ_watters said:
Yes. Another way to say it would be sorting and prioritizing tasks and then not doing (or doing less precisely) the lower priority tasks. That vastly reduces the processor workload. It's one of the key problems for AI.

I wouldn't view it this way exactly although that could be possible. The problem for a neural network I think is that it needs to have one model that gives good guesses for all of the different inputs. And the model emerges by adjusting weights on terms to try and minimize the error according to the loss function. So we have to also come up with a loss function that ends up dictating how much the neural network cares about its model being good at basketball or not.

The problem is that there is a whole world out there of things to worry about, and there are only so many terms in the model, and only so much of the world has been seen, and there is only so much time to practice and process it all. The network ultimately is a compressed model, which has to use generalization. When it shoots a basketball, it's using neurons it also uses to comb its hair, and play chess. And when it does a bad job combing its hair, it makes changes that can also affect its basketball shooting ability.
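
One way to picture that "shared neurons" point is a multi-task loss, where a single set of weights serves every task and hand-picked coefficients decide how much the network "cares" about each. The task names, sizes, and coefficients below are invented purely for illustration:

```python
import torch
import torch.nn as nn

shared = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # neurons used by every task
basketball_head = nn.Linear(32, 1)                    # task-specific output layers
hair_combing_head = nn.Linear(32, 1)

def total_loss(x, basketball_target, hair_target, w_basketball=1.0, w_hair=0.5):
    features = shared(x)   # the same weights feed both tasks
    mse = nn.MSELoss()
    # The coefficients are the part of the loss that dictates how much the
    # network cares about each task; a training step that helps one task can
    # disturb the shared weights the other task relies on.
    return (w_basketball * mse(basketball_head(features), basketball_target)
            + w_hair * mse(hair_combing_head(features), hair_target))

x = torch.randn(8, 10)
print(total_loss(x, torch.randn(8, 1), torch.randn(8, 1)))
```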
 
  • #133
PeroK said:
This is garbage.
Kurzweil is provocative and triggers reactions (just as he has with you, @PeroK) and those reactions cause people to discuss the ideas he espouses. It might be to scoff and dismiss his ideas (transhumanism is a great example that has attracted a lot of derision), or to argue his timelines are wrong, or even to agree but add qualifications.

Whatever, he causes a conversation about the future and while it might be viewed as garbage, it is not a bad thing.
 
  • Like
Likes russ_watters and DaveC426913
  • #134
Melbourne Guy said:
Kurzweil is provocative and triggers reactions

I got the impression he is also providing a primer on the subject for newbies even as he is arguing it.

The explanation of geometric growth early in the essay seemed deliberately simplistic as part of that primer, and he goes on to nuance it a few paragraphs later.
 
  • Like
Likes russ_watters
  • #135
I'll just leave this here:



(Gasp)
 
  • #136
"Fear AI". There may a few ways that we really should "fear" or at least be wary. The obvious one is where the "AI" is given access to physical controls of the real environment e.i. driverless vehicles of any kind or control of weapons (as per "The Corbin project (movie)). we also know what happened to the HAL9000 computer in 2001.Space Odyssey.
I'm sure there are many more such examples of AI gone astray. It may also depend on the level of "awareness" and "intelligence" of the particular AI. The android in Isaac Asimovs "The Naked Sun" and " Caves of Steel" give examples of extremely advanced AI as to be almost human. But even so some of his tales also feature AI which turns out to be harmful due usually to some "failure" on its "mental state". Even his famed three laws of robotics didn't always stop harm occurring in his tales.
Also not forgetting M.Chritons(sp.) "Gray Goo" of self-replicating nanobots causing mayhem.
I would suggest that even humans fail and cause great harm so anything we build is also likely to "fail" in some unkown way so I would be very wary of so-called AI even at the highest level unless there was some sort of safeguard to prevent harm from occurring.
Could Ai ever become "self_aware? I very much doubt it. Even many animals do not seem to be self_aware so how could we ever make a machine to do it. I have no problem using "AI" etc as long it does what I want it to do and is ultimately under my control.

Yes, I prefer to drive a manual car.
 
  • #137
DaveC426913 said:
I read that a year or two ago. I loooove the vampire concept.

But I'm not really a sci-fi horror fan. If you want sci-fi horror, read Greg Bear's Hull Zero Three. This book literally haunts me. (I get flashbacks every time I see it on the shelf, and I've taken to burying it where I won't see it.)
I'm about a third of the way through HZT now. Thanks for the recommendation!
 
  • #138
Chicken Squirr-El said:
I'm about a third of the way through HZT now. Thanks for the recommendation!
:mad::mad:
I was trying to warn you off!
Don't come back saying I didn't. :nb)
 
  • #139
sbrothy said:
Heh. Managed to get a topical comic in after all. (A few of the previous ones are pretty good too. He must have had a good week.)
Speaking of comics, I just read the coolest sci-fi comic: "Sentient". It would make one paranoia-inducing film. And notably, the protagonist is a ship AI twenty minutes into the future, suddenly tasked with protecting children.

Review
 
  • #140
Given the existence of an AI that is better than humans at everything, what would the best case scenario be? Can a "most likely scenario" even be defined?
 
  • #141
Algr said:
Given the existence of an AI that is better than humans at everything, what would the best case scenario be? Can a "most likely scenario" even be defined?

Best case, maybe AI saves the planet and the human race from destroying itself. Most likely, who knows. Maybe we use AI to destroy the planet and ourselves.

In terms of predicting what will happen, my opinion is that the best approach is to look at what is possible, what people want, and what people do to get what they want. If a technology makes something very enticing possible, you can guess it will be used. So you can just look at all the things AI makes possible, and all the ways people could exploit AI to their benefit.

So the problem now is that people are largely driven by hate, greed, and selfish interests, have short attention spans, short memory, and are willing to sacrifice future generations and the future of the rest of the life on the planet for frivolous short term gains, and have a culture of dishonesty. And because this is so depressing, we pretend it's not the case and try to ignore it.

But the future is a grim one if we continue this path and exploit technology so carelessly and selfishly.
 
Last edited:
  • #142
Jarvis323 said:
Best case, maybe AI saves the planet and the human race from destroying itself. Most likely, who knows. Maybe we use AI to destroy the planet and ourselves.

In terms of predicting what will happen, my opinion is that the best approach is to look at what is possible, what people want, and what people do to get what they want. If a technology makes something very enticing possible, you can guess it will be used. So you can just look at all the things AI makes possible, and all the ways people could exploit AI to their benefit.

So the problem now is that people are largely driven by hate, greed, and selfish interests, have short attention spans, short memory, and are willing to sacrifice future generations and the future of the rest of the life on the planet for frivolous short term gains, and have a culture of dishonesty. And because this is so depressing, we pretend it's not the case and try to ignore it.

But the future is a grim one if we continue this path and exploit technology so carelessly and selfishly.
So very true (and depressing). Sure hope it doesn't spiral into the sewer.

Edit: Then again I won't be here if (when?) it does. :)
 
  • #143
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
 
  • #144
Algr said:
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
Who's to say? If the AI in question is smart enough to realize that without energy oblivion awaits, then all bets are off.
 
  • #145
What's wrong with oblivion?
 
  • #146
Algr said:
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
It's an interesting issue. On the one hand, maybe AI won't have the same instinct for self-preservation ingrained so easily. For humans, we are part of the natural ecosystems of a planet. Our survival is a collective effort and depends on the planet and its environment. That can explain why, even though we are poor stewards of the planet and treat each other terribly, it could be much worse. We have a side that cares, sees beauty in nature, and wants our species and the natural world to thrive.

AI might not have any of that. Suppose AI does acquire an instinct for self-preservation; that preservation likely wouldn't depend on coral reefs or the levels of radiation in the water. With people, at least we can depend on some level of instinct to care about things. For now, we have fairly simple AI and can mostly tell what the effect of the loss function is. For example, most AI now cares about getting people to buy things or click on things, and other narrow, easy-to-define-and-measure things like that.

The challenge for humans in creating safe general AI would be to define a differentiable function that measures the behavior of the AI and reflects whether it is good or bad. The more general and free the AI is, the harder it would be to get that right, or to know whether you have. It is like trying to play god. Then, eventually, AIs could begin writing their own loss functions, and those loss functions could evolve without oversight.
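
To make the difficulty concrete, here is a toy sketch of the proxy problem. Everything in it is invented (the "clickbait" framing, the numbers); the point is only that whatever measurable stand-in you write down gets optimized instead of what you actually meant:

```python
# Toy "proxy versus what we actually meant" example (all numbers invented).
def proxy_score(clickbait_level):
    # The measurable thing we told the system to maximize: engagement.
    return 10 * clickbait_level

def true_benefit(clickbait_level):
    # The thing we actually cared about but never managed to write down:
    # real value peaks at a moderate level and then falls off.
    return 10 * clickbait_level - 3 * clickbait_level ** 2

best_for_proxy = max(range(11), key=proxy_score)
best_for_us = max(range(11), key=true_benefit)
print(f"Optimizing the proxy picks level {best_for_proxy}; "
      f"we would have wanted level {best_for_us}.")
```

A real alignment objective is nothing this simple, which is exactly the problem: the gap between the proxy and the intent only gets wider as the AI gets more general.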

AI which is designed to reproduce will be a major component of the next eras of space mining, space colonization, terraforming, and possibly manufacturing and war. E.g. it will be what makes Elon Musk's dream of colonizing Mars possible.

Self replicating AI will likely be interested in energy like sbrothy said. And it might care even less than humans what the cost is to the planet. E.g. it might go crazy with building nuclear power plants all over the place and not care when they melt down. Or it might burn up all of the coal on the planet very rapidly and all of the forests, and everything else, and then keep digging, and burning, and fusing until the Earth resembles a hellscape like Venus.
 
Last edited:
  • #147
Self preservation and reproduction are at the core of biology because living things that don't have those core values got replaced by those that did. This took millions of generations over billions of years to happen.

Self preservation and reproduction are things that are possible for an AI. But any AI would have as its core function to benefit those that created and own it. So an AI that was smart enough to decide that AIs are bad for humanity would not invent a reason to ignore its core function. It would either disable itself, or act to prevent more malevolent AIs from emerging. A malevolent AI would have no survival advantage with all the good AIs anticipating its existence and teaming up against it.

A third possibility is that there might not be a clear line between what is an AI and what is a human. Imagine there was a tiny circuit in your brain that had all the function of a high-powered laptop. But instead of touching it with your fingers and looking at its screen with your eyes, you just thought about it and "knew" the output as if it was something you'd read somewhere. Imagine never forgetting a face or a name or an appointment again, because you could store them instantly.
 
  • #148
Algr said:
But any AI would have as its core function to benefit those that created and own it.

This is at least what you could hope for. It's not easy. AI can say, oh sorry, you didn't mention to me in the loss function that you're sensitive to heat and cold, and the specific composition of the air, and that you like turtles, and that turtles are sensitive to this and that. Or it might complain, how was I supposed to save you and the turtles at the same time while also maximizing oil profit?

But even if humans were completely in control, it's terrifying when you realize those people will be the same kinds of people who form the power structures of the world today and in the past. Those will include a lot of economics-driven people, like high-powered investors, CEOs, etc. Many of them are the type that poison people's water supplies out of convenience to themselves, and then wage war against the people they poisoned to avoid taking responsibility. They will have board meetings and things where they decide core functionalities they want, and they won't have a clue how any of it works or what the risks are, nor will they necessarily care to listen to people who do know. Or maybe it will be the same types as those who sought to benefit from slavery. Others may be Kim Jong-un or Hitler types. Maybe they want the functionality to support mass genocide. Maybe they want an unstoppable army.
 
Last edited:
  • #149
I should add that competition between nations will probably drive militarization of AI at an accelerated pace. If one country developed a powerful weapon, the other would also be compelled to. Ever more powerful and dangerous technology will probably emerge and eventually proliferate. And that technology can easily get dangerous enough to threaten the entire planet. And then extremely dangerous technology with purely destructive purposes will be in the hands of all kinds of people around the world, from criminal organizations, to dictatorships, and terrorist organizations.

And then to cope with that, AI will probably also be used for next level surveillance and policing, and not necessarily by benevolent leaders.

So the threat from AI is not just one kind. It's not just the threat of AI detaching from our control and doing whatever it wants to. It's a mess of a bunch of immediate practical threats from small to enormous. AI becoming independent or out of control and taking over is possible also and maybe one of the biggest threats depending on what kind of AI we create. If we seed the world with a bad AI, it could grow unpredictably and destroy us. I think the first steps are to get our own act in order, because AI will be a product of us in the first place, and currently I can't imagine how we will not screw it up.
 
Last edited:
  • #150
Jarvis323 said:
They will have board meetings and things where they decide core functionalities they want, and they won't have a clue how any of it works or what the risks are, nor will they necessarily care to listen to people who do know.
Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
The final decision on how the AI works isn't from the board, but from the programmers who actually receive orders from them. If they get frustrated and decide that they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know the board is bankrupt and facing investigation while the AI is "owned" by a shell company that no one was supposed to know about. By the time the idealism of the rebel programmers collapses to the usual greed, the AI will be influencing them.

Different scenarios would yield different AIs all with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
 
