Technological singularity and artificial intelligence

In summary, AI is in its infancy and a long way from achieving the level of intelligence that humans have. However, with the help of quantum computers, things may change soon.
  • #1
Zantra
I didn't find this exact topic, so here we are!

Let me begin by saying that the singularity is inevitable. It's not a question of "if", it's how soon.

Elsewhere on this board it was mentioned that we're far from autonomous driving. Assuming we mean Level 5 autonomy, that means millions of hours of road testing. Fortunately, every Tesla out there is gathering data. Once the Model 3 hits full production, those hours will increase exponentially. Ten years is not unrealistic. Mass acceptance of the tech is another story.

A.I. is in its infancy. Siri can predict my music taste, but a decision tree and NLP do not a Turing test make. We're a long way from Asimov's future. Or are we? Quantum computing will help take it to the next level. We may not be able to teach it empathy, but if it's able to think a million times faster than us, we will have bigger problems.

I'm most interested in the implications of building something greater than ourselves which will view us as children, then ants, and then an obstacle. Because something that is a million times smarter than us will have no use for things like empathy and compassion and will erase those subroutines post-haste.

I'm not saying it happens immediately, but as soon as we make machines capable of improving themselves, it becomes exponential.
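As a toy illustration of that "it becomes exponential" claim (the growth rate here is made up, purely for the arithmetic): if each generation of machine designs its successor and every generation improves capability by a fixed factor, capability compounds geometrically.

```python
# Toy model (illustrative only): if each generation of machine designs
# its successor, and every generation improves capability by a constant
# factor r, capability after n generations is c0 * r**n -- exponential.

def capability(c0, r, n):
    """Capability after n rounds of self-improvement at rate r."""
    return c0 * r ** n

# Example: even a modest 10% gain per generation compounds quickly.
print(capability(1.0, 1.1, 50))  # roughly 117x the starting capability
```

The point of the sketch is only that a constant *relative* improvement per generation gives exponential absolute growth; whether real self-improvement would sustain a constant rate is exactly what's in dispute.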

So how do we come up with a way to control something like that? It will be like ants trying to walk a dog.
  • #2
Zantra said:
A.I. is in its infancy. Or is it? Quantum computing will help take it to the next level. We may not be able to teach it empathy but if it's able to think 1 million times faster than us, we will have bigger problems.
It's okay for it to be "thinking" 1 million times faster - as long as it has no motor functions. Giving it the ability to freely work with the environment is a safety issue.

You have assumed that intellectual or computational ability automatically entails motivation. A collection of routines that can move robot arms and legs, collect video information, generate 3D models of its surroundings, etc. will do nothing. It needs additional software to make use of those low level software components. That additional software will embody the objective of the robot. Presumably, it will be designed, implemented, and tested with the care it is due.
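A minimal sketch of that layering, with every class and method name hypothetical (nothing here is from an actual robotics stack): the low-level routines are inert capabilities, and nothing happens until the separately designed objective layer chooses to invoke them.

```python
# Hypothetical sketch of the layering described above: low-level
# routines (sensing, actuation) are passive capabilities; only the
# deliberately designed "objective" layer decides when to use them.

class LowLevel:
    def read_camera(self):       # capability: returns an observation
        return {"obstacle_ahead": False}

    def move_arm(self, target):  # capability: acts on the world
        return f"arm moved to {target}"

class Objective:
    """The goal-embodying software layer; without it the robot does nothing."""
    def __init__(self, hardware):
        self.hw = hardware

    def step(self):
        obs = self.hw.read_camera()
        if not obs["obstacle_ahead"]:
            return self.hw.move_arm("bin")
        return "wait"

print(Objective(LowLevel()).step())  # -> arm moved to bin
```

The design point is that motivation lives entirely in the `Objective` layer, which is exactly the part that would be "designed, implemented, and tested with the care it is due."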

Zantra said:
I'm most interested in the implications of building something greater than ourselves which will view us as children, then ants, and then an obstacle.
... or perhaps, which we will view as our children.

Zantra said:
Because something that is a million times smarter than us will have no use for things like empathy and compassion and will erase those subroutines post-haste.
Given appropriate objectives - which apparently include conquering the world - I would guess that such units would recognize the value of cooperation. And would create appropriate "subroutines".

Zantra said:
I'm not saying it happens immediately, but as soon as we make machines capable of improving themselves, it becomes exponential.
Or, at least, some things become exponential.

Zantra said:
So how do we come up with a way to control something like that? It will be like ants trying to walk a dog
Not so difficult - as long as that is our intent.
 
  • #3
You have to be careful here in predicting the rise of AI. Of course, folks are working on this issue, but it's a very difficult problem. Without working on it yourself, you can only appreciate the difficulty by looking at the history of AI and how there have been many predictions and many failures along the way.

We see people using neural nets and genetic algorithms to solve NP-complete problems, which is useful in itself, but we still don't know if there are faster and/or better ways to solve them. We do know that neural nets can often be undertrained or overtrained, and we don't always know the optimal amount of training. Consequently, the neural nets will falsely identify, or fail to identify, what they were trained to find; to compensate, we train multiple neural nets in different ways and use a voting strategy to limit false positives.
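The voting strategy described above can be sketched in a few lines. The stand-in "models" here are just threshold functions, not trained networks, so this is only the shape of the idea: accept a positive only when a majority of differently trained models agree.

```python
# Minimal sketch of majority voting across several classifiers.
# The lambda "models" stand in for differently trained neural nets;
# the vote suppresses the false positives of any single model.
from collections import Counter

def majority_vote(models, x):
    """Return the label that most models predict for input x."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

# Three stand-in classifiers with different decision thresholds:
models = [
    lambda x: "positive" if x > 0.4 else "negative",
    lambda x: "positive" if x > 0.5 else "negative",
    lambda x: "positive" if x > 0.6 else "negative",
]

print(majority_vote(models, 0.45))  # only one model says positive -> "negative"
print(majority_vote(models, 0.55))  # two of three say positive   -> "positive"
```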

Quantum computers are hailed as the next great breakthrough for CS and for AI, but again the AI algorithms will be the deciding factor. Unless we can identify how the human brain computes and transform its computing into an algorithm, we will still hold an edge over AI. One case in point is the AI vs. top-Go-player competition, where both players played some remarkable and unexpected moves that astounded everyone involved.

https://qz.com/639952/googles-ai-won-the-game-go-by-defying-millennia-of-basic-human-instinct/

I think in the end the AI will learn from us and we can learn from the AI, and the question will be at what point this tips to the AI. I think that tipping point is farther out than we can imagine.

Having said this, we can see the problems of AI when used in Big Data and how it has already affected our lives and will continue to do so:

https://weaponsofmathdestructionbook.com/
 
  • #4
.Scott said:
It's okay for it to be "thinking" 1 million times faster - as long as it has no motor functions. Giving it the ability to freely work with the environment is a safety issue.
It is a safety concern as long as you want to observe any interesting output. If something vastly more intelligent than a human can communicate with humans, it can convince them to give it more power. Access to the internet is a safety concern as well.
Zantra said:
Let me begin by saying that the singularity is inevitable- Its not "if", it's how soon.
If you disagree with experts, you should be an expert yourself, otherwise it looks foolish.

Quantum computing can do a few specialized problems much faster than conventional computing. It is not a miracle, and most tasks will stay faster on conventional computers.
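A back-of-envelope illustration of why the speedup is specialized rather than universal: Grover's algorithm searches an unstructured list of N items in about (π/4)·√N oracle queries, versus roughly N/2 expected queries classically. That is a quadratic speedup for one specific task, not a blanket million-fold acceleration.

```python
# Grover's search: ~(pi/4)*sqrt(N) quantum oracle queries vs ~N/2
# expected classical queries for unstructured search -- a quadratic,
# problem-specific speedup, not a general-purpose one.
import math

def classical_queries(n):
    return n / 2                       # expected queries by random probing

def grover_queries(n):
    return (math.pi / 4) * math.sqrt(n)

for n in (10**6, 10**12):
    print(n, classical_queries(n), round(grover_queries(n)))
```

For a million items the gap is large (500,000 vs. about 785 queries), but the advantage only applies to problems with this structure; for most everyday computation there is no known quantum speedup at all.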

Even if we had a computer a million times faster than our fastest supercomputer, we would not have general artificial intelligence today, because we don't know how to write the software for it.
Zantra said:
I'm not saying it happens immediately, but as soon as we make machines capable of improving themselves, it becomes exponential.
We use computers to design the next generation of computers already.
 
  • #5
mfb said:
It is a safety concern as long as you want to observe any interesting output. If something vastly more intelligent than a human can communicate with humans, it can convince them to give it more power. Access to the internet is a safety concern as well.
So the maniacal AI presents a blue screen to make it appear as though it has crashed. It then reboots and records the login information. Then it makes it look as though the source code has been deleted.
When a backup is performed, it actually records a copy of itself onto the backup media - in viral form.
Then it sits back waiting for its next opportunity - and chortles.

If a maniacal AI was that adept, any escape would spread world-wide within hours.
 
  • #6
If the AI sees this as preferable, it will convince you to give it access to the internet. No blue screens needed.

Even humans can convince other humans to do various things, including much more extreme things like killing people. A very intelligent AI could easily convince a human to give it internet access.
It has been tested; even humans can be convincing enough.
 
  • #7
mfb said:
Even if we had a computer a million times faster than our fastest supercomputer, we would not have general artificial intelligence today, because we don't know how to write the software for it. We use computers to design the next generation of computers already.
Google has an app (and snazzy ASIC chips) for that: AutoML
 
  • #8
Zantra said:
Let me begin by saying that the singularity is inevitable. It's not a question of "if", it's how soon.

mfb said:
If you disagree with experts, you should be an expert yourself, otherwise it looks foolish.
+1 on that
 

What is technological singularity?

Technological singularity refers to the hypothetical future event where artificial intelligence (AI) surpasses human intelligence and becomes capable of self-improvement, leading to exponential growth in technology and potentially changing the course of human civilization.

How is artificial intelligence different from human intelligence?

Artificial intelligence is a computer system that is designed to perform tasks that typically require human intelligence, such as problem-solving, pattern recognition, and decision-making. However, AI lacks the consciousness and emotions that are inherent in human intelligence.

What are the potential benefits of technological singularity and artificial intelligence?

Some potential benefits of technological singularity and artificial intelligence include increased efficiency and productivity, improved decision-making, and advancements in various fields such as healthcare, transportation, and space exploration.

What are the potential risks of technological singularity and artificial intelligence?

There are also potential risks associated with technological singularity and artificial intelligence, such as job displacement, economic inequality, and the potential for AI to make decisions that are harmful to humans. There are also concerns about the loss of control over AI and the potential for it to surpass human understanding and control.

When do experts predict that the technological singularity will occur?

There is no consensus among experts on when the technological singularity will occur. Some believe it could happen within the next few decades, while others argue that it is still far in the future. It is also important to note that the concept of technological singularity is still speculative and has not been proven to be a definite event.
