Technological singularity and artificial intelligence

  • #1
Zantra
I didn't find this exact topic, so here we are!

Let me begin by saying that the singularity is inevitable. It's not "if", it's how soon.

Elsewhere on this board it was mentioned that we're far from autonomous driving. Assuming we mean Level 5 autonomy, that means millions of hours of road testing. Fortunately, every Tesla on the road is gathering data. Once the Model 3 hits full production, those hours will increase exponentially. 10 years is not unrealistic. Mass acceptance of the tech is another story.

A.I. is in its infancy. Siri can predict my music taste. A decision tree and NLP does not a Turing test make. Long way from Asimov's future. Or is it? Quantum computing will help take it to the next level. We may not be able to teach it empathy, but if it's able to think 1 million times faster than us, we will have bigger problems.

I'm most interested in the implications of building something greater than ourselves which will view us as children, then ants, and then an obstacle. Because something that is a million times smarter than us will have no use for things like empathy and compassion and will erase those subroutines post-haste.

I'm not saying it happens immediately, but as soon as we make machines capable of improving themselves, it becomes exponential.
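As a purely illustrative toy model (not a prediction), the "exponential" intuition can be made concrete: assume each generation improves its capability by its current rate, and also slightly improves the rate itself. The quantities here are abstract stand-ins with no real-world units.

```python
# Toy model of recursive self-improvement. Purely illustrative: "capability"
# and "rate" are abstract stand-ins, not measurements of anything real.
def self_improving(capability=1.0, rate=0.1, generations=10):
    history = [capability]
    for _ in range(generations):
        capability *= 1 + rate  # each generation improves by the current rate
        rate *= 1.05            # and also improves its ability to improve
        history.append(capability)
    return history

trajectory = self_improving()
# capability more than triples in ten generations, and because the rate
# itself grows, the curve outpaces any fixed exponential
```

The interesting feature is the second line of the loop: with a fixed rate this is ordinary compound growth, but once the system improves its own improvement rate, each generation's growth factor is larger than the last.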

So how do we come up with a way to control something like that? It will be like ants trying to walk a dog.
 

Answers and Replies

  • #2
.Scott
Science Advisor
Homework Helper
A.I. is in its infancy. Or is it? Quantum computing will help take it to the next level. We may not be able to teach it empathy, but if it's able to think 1 million times faster than us, we will have bigger problems.
It's okay for it to be "thinking" 1 million times faster - as long as it has no motor functions. Giving it the ability to freely work with the environment is a safety issue.

You have assumed that intellectual or computational ability automatically entails motivation. A collection of routines that can move robot arms and legs, collect video information, generate 3D models of its surroundings, etc. will do nothing. It needs additional software to make use of those low level software components. That additional software will embody the objective of the robot. Presumably, it will be designed, implemented, and tested with the care it is due.

I'm most interested in the implications of building something greater than ourselves which will view us as children, then ants, and then an obstacle.
... or perhaps, which we will view as our children.

Because something that is a million times smarter than us will have no use for things like empathy and compassion and will erase those subroutines post-haste.
Given appropriate objectives - which apparently include conquering the world - I would guess that such units would recognize the value of cooperation. And would create appropriate "subroutines".

I'm not saying it happens immediately, but as soon as we make machines capable of improving themselves, it becomes exponential.
Or, at least, some things become exponential.

So how do we come up with a way to control something like that? It will be like ants trying to walk a dog.
Not so difficult - as long as that is our intent.
 
  • #3
You have to be careful here in predicting the rise of AI. Of course, folks are working on this issue, but it's a very difficult problem. Without working on it yourself, you can only appreciate the difficulty by looking at the history of AI and its many predictions and many failures along the way.

We see people using neural nets and genetic algorithms to attack NP-complete problems, which is useful in itself, but we still don't know whether there are faster or better ways to solve them. We do know that neural nets are often undertrained or overtrained, and we don't always know the optimal amount of training. Consequently, a neural net will sometimes falsely identify, or fail to identify, what it was trained to find. To compensate, we train multiple neural nets in different ways and use a voting strategy to limit false positives.
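That voting strategy can be sketched as follows. The three "models" here are hypothetical placeholders (simple thresholds on different features) standing in for independently trained neural nets; a detection is accepted only when a strict majority agree.

```python
# Majority-vote ensemble: accept a detection only if most classifiers agree.
# The classifiers are toy stand-ins for independently trained neural nets.
def majority_vote(classifiers, sample, threshold=None):
    votes = sum(1 for clf in classifiers if clf(sample))
    if threshold is None:
        threshold = len(classifiers) // 2 + 1  # strict majority
    return votes >= threshold

# Three toy "models", each looking at the input differently
models = [
    lambda x: x["score_a"] > 0.5,
    lambda x: x["score_b"] > 0.6,
    lambda x: x["score_a"] + x["score_b"] > 1.0,
]

sample = {"score_a": 0.7, "score_b": 0.65}
print(majority_vote(models, sample))  # all three agree here, so this prints True
```

A single overconfident model can no longer trigger a false positive on its own; the trade-off is that detections missed by most of the ensemble are also suppressed.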

Quantum computers are hailed as the next great breakthrough for computer science and for AI, but again the AI algorithms will be the deciding factor. Unless we can identify how the human brain computes and transform its computation into an algorithm, we will still hold an edge over AI. One case in point is the competition between an AI and the top Go player, in which both players made remarkable and unexpected moves that astounded everyone involved.

https://qz.com/639952/googles-ai-won-the-game-go-by-defying-millennia-of-basic-human-instinct/

I think, in the end, the AI will learn from us and we can learn from the AI, and the question is at what point this will tip toward the AI. I think that tipping point is farther out than we can imagine.

Having said this, we can see the problems of AI when used in Big Data and how it has already affected our lives and will continue to do so:

https://weaponsofmathdestructionbook.com/
 
  • #4
It's okay for it to be "thinking" 1 million times faster - as long as it has no motor functions. Giving it the ability to freely work with the environment is a safety issue.
It is a safety concern as long as you want to observe any interesting output. If something vastly more intelligent than a human can communicate with humans, it can convince them to give it more power. Access to the internet is a safety concern as well.
Let me begin by saying that the singularity is inevitable- Its not "if", it's how soon.
If you disagree with experts, you should be an expert yourself; otherwise it looks foolish.

Quantum computing can do a few specialized problems much faster than conventional computing. It is not a miracle, and most tasks will stay faster on conventional computers.
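For example, Grover's algorithm gives a quadratic speedup for unstructured search, roughly (π/4)·√N queries versus about N/2 classically on average, and that is the extent of the gain for that task. A rough numeric comparison:

```python
import math

# Expected query counts for unstructured search over N items:
# classical average-case (~N/2) vs Grover's algorithm (~(pi/4) * sqrt(N)).
# A quadratic speedup is large, but very different from "everything is faster".
for n in (10**3, 10**6, 10**9):
    classical = n / 2
    grover = (math.pi / 4) * math.sqrt(n)
    print(f"N={n:>13,}: classical ~{classical:,.0f} queries, Grover ~{grover:,.0f}")
```

Even at a billion items the quantum advantage here is a factor of about 20,000, not "a million times faster at everything", and problems without the right structure get no asymptotic speedup at all.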

Even if we had a computer a million times faster than our fastest supercomputer, we would not have general artificial intelligence today, because we don't know how to write software for that.
I'm not saying it happens immediately, but as soon as we make machines capable of improving themselves, it becomes exponential.
We use computers to design the next generation of computers already.
 
  • #5
.Scott
Science Advisor
Homework Helper
It is a safety concern as long as you want to observe any interesting output. If something vastly more intelligent than a human can communicate with humans, it can convince them to give it more power. Access to the internet is a safety concern as well.
So, the maniacal AI presents a blue screen to make it appear as though it has crashed. It then reboots and records the login information. Then it makes it look as though its source code has been deleted.
When a backup is performed, it actually records a copy of itself onto the backup media, in viral form.
Then it sits back waiting for its next opportunity, and chortles.

If a maniacal AI was that adept, any escape would spread world-wide within hours.
 
  • #6
If the AI sees this as preferable, it will convince you to give it access to the internet. No blue screens needed.

Even humans can convince other humans to do various things, including much more extreme things like killing people. A very intelligent AI could easily convince a human to give it internet access. This has been tested: even humans playing the AI's role can be convincing enough.
 
  • #7
stoomart
Even if we had a computer a million times faster than our fastest supercomputer, we would not have general artificial intelligence today, because we don't know how to write software for that.

We use computers to design the next generation of computers already.
Google has an app (and snazzy ASIC chips) for that: AutoML
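For context, the core idea behind AutoML-style systems is an automated search over network architectures. Below is a minimal sketch of that idea using plain random search and a made-up scoring function; real systems train each candidate network and use learned controllers, and nothing here reflects Google's actual API.

```python
import random

# Generic sketch of neural architecture search: propose architectures,
# score each one, keep the best. score() is a hypothetical stand-in for
# "train briefly and measure validation accuracy" (purely illustrative).
def score(arch):
    layers, width = arch
    # toy objective that happens to prefer 6 layers of width 128
    return -abs(layers - 6) - abs(width - 128) / 32

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = (rng.randint(1, 12), rng.choice([32, 64, 128, 256]))
        s = score(arch)
        if s > best_score:
            best, best_score = arch, s
    return best

print(random_search())  # very likely converges to the toy optimum (6, 128)
```

Replacing random search with a learned proposal policy, and the toy score with real training runs, is essentially what makes systems like AutoML expensive but effective.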
 
  • #8
phinds
Science Advisor
Insights Author
Gold Member
Let me begin by saying that the singularity is inevitable- Its not "if", it's how soon.

If you disagree with experts, you should be an expert yourself; otherwise it looks foolish.
+1 on that
 
