Destined to build a super AI that will destroy us?

  • #71
jack action said:
Would you prefer not taking any chances and stop the research on how to produce that gas? Or would you do the research and see where it goes?

I don't see anyone here or elsewhere seriously arguing in favor of stopping research. Many of those that express concerns about AI are often also those that are involved. One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues that in turn will make consumers turn away from this technology at some point. It seems like a win-win situation for them to work towards both ensuring the bottom line as well as the well-being of the human race. To take your gas example, no company in its right mind would completely ignore risks when developing and deploying a new gas if it knows it has the potential to seriously hurt humans and thus the bottom line. Greenfield technologies always have a larger set of unknown risks, and companies know this. Usually the higher risks come a bit down the road, when you think you know all there is to know about the technology, start to optimize and deploy it widely, and then get hit hard by something you had missed or optimized away thinking it was unimportant. The recent case of exploding Samsung phones seems to be a textbook example on such a scenario.

To me, the discussion in this thread seems to revolve more around beliefs regarding how much risk people themselves (i.e. "consumers") can accept when using a future technology we do not yet understand. It seems that even people who acknowledge how complicated control of future AI can be still believe that the net risk to them will be kept low, because they rightfully expect someone else to worry about and mitigate any risk along the way. That is a perfectly sensible belief, but for it to be well placed there really needs to be someone else who actually concerns themselves with identifying and mitigating risks.

In a sense, the current discussion seems very similar to the public discussion on the dangers of gene editing. Even if everyone can rightfully expect everyone involved in gene editing to do it safely, the technology holds such potential that there is a risk a few "rotten apples" will spoil it for everyone and do something that is very difficult to undo and ends up being harmful to a very large set of humans.
 
  • #72
Filip Larsen said:
One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues that in turn will make consumers turn away from this technology at some point.

There it is. You are worried about losing your job. Sorry, but that has nothing to do with the fate of mankind.

Filip Larsen said:
It seems like a win-win situation for them to work towards both ensuring the bottom line as well as the well-being of the human race.

When someone does something because he or she thinks it's OK, even though others have raised warnings against it, and something bad happens, it doesn't mean that that someone wasn't sincere when evaluating the risks. Everybody thinks that he or she makes the best decision, otherwise he or she wouldn't make that decision.

Mr. Burns is a character on The Simpsons, nothing more. Nobody in their right mind says: «My goal is to make money; I don't care what will happen to people buying my stuff.» If somebody does, it won't last long, because that is a recipe to lose money. But it doesn't mean wrong decisions won't be made.

Filip Larsen said:
The recent case of exploding Samsung phones seems to be a textbook example on such a scenario.

It really is. Is it the end of smartphones? I doubt it. The end of Samsung? Maybe. Are people at Apple very happy? Maybe for the temporary stock rise. But I'm ready to bet that there were meetings between the managers and the engineers on the topics: «Why them and not us? What mistakes did they make, and are we safe?»

The truth is that the consequences of the exploding Samsung phones are very low on the «Destroying mankind» scale, or even on the «Destroying the economic system» scale. But it does have a tremendous effect on everybody's checklist when assessing risks (and probably not only in the smartphone business), which should lead to better decisions. That is why I don't worry so much about the possible bad impacts of AI, even for AI itself.
 
  • #73
jack action said:
There it is.

No it is not, and I have no clue why you would think that.

Either I am very poor at getting my points across or you are deliberately trying to misconstrue my points pretty much all the time. Either way, I again have to give up having a sensible discussion with you.
 
  • #74
Filip Larsen said:
Many of those that express concerns about AI are often also those that are involved.
the person in the OP video is not really into AI, from what I read...
Which AI researchers express concerns about AI?

Filip Larsen said:
One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues
Like what? For routine work, technology can be trusted blindly.
 
  • #75
Filip Larsen said:
Either I am very poor at getting my points across or you are deliberately trying to misconstrue my points pretty much all the time.

I'm not attacking you, I'm stating my point of view. We agree that AI won't destroy mankind (that is what I understand from what you are saying). You say that if we are not careful in developing AI, terrible events may happen. I must admit I'm not sure how terrible those events will be according to you, but you make it sound like it will be worse than anything we have ever seen in the past. Toyota had a horrible problem with an accelerator pedal, something that is not high tech, with dozens of years of pedal design experience worldwide. Still, there was obviously a design problem somewhere. It is bound to happen with AI too. Do you know how the car starter was invented? It has something to do with someone dying while starting a car:

http://www.motorera.com/history/hist06.htm said:
The self-starter came about by accident -- literally. In the winter of 1910 on a wooden bridge on Belle Island Mich., a Cadillac driven by a woman stalled. Not having the strength to hand crank the engine herself, she was forced to wait on the bridge in the cold until help arrived.

In time another motorist, also driving a Cadillac, happened along. His name was Byron T. Carter, and he was a close friend of the head of Cadillac, Henry M. Leland. Carter offered to start the woman's car, but she forgot to retard the spark and the engine backfired, and the crank flew off and struck Carter in the face, breaking his jaw.

Ironically, moments later another car carrying two Cadillac engineers, Ernest Sweet and William Foltz, came along. They started the woman's car and rushed Carter to a physician, but complications set in and a few weeks later Carter died.

Leland was devastated. He called a special conference of his engineers and told them that finding a way to get rid of the hand crank was top priority.

"The Cadillac car will kill no more men if we can help it," he announced.

Self-starters for automobile engines had been tried in the past. Some were mechanical devices, some pneumatic and some electric.

But all attempts at finding a self-starter that was reliable, efficient and relatively small had failed.

When the Cadillac engineers could not come up with a workable system, the company invited Charles F. Kettering and his boys at DELCO (still independent of GM) to take a hand. Kettering presented the device in time for its introduction in the 1912 models.

It has always been that way: a hand crank hits someone in the face, a smartphone explodes, a robot makes a bad decision. Most of the time people do what they think is best, but an accident is eventually inevitable. Then it's back to the drawing board and the cycle repeats itself.

You seem pessimistic about people in the field doing the right thing, but you say you are in the field and you seem to worry a lot. On what basis do you assume you're the only one? Do you have real examples of AI applications that you predict will go wrong and what kind of damage can be expected?

Let's take the self-driving car, for example. Tesla had its first fatal accident. I'm sure that hit the engineers' offices real hard, and not only at Tesla but at every car manufacturer. Of course, every automaker will say its system is safe. Just like they were probably saying starting a car in 1910 was no problem. But although nobody wishes for one, we all know that an accident is bound to happen. When do we stop worrying and asking: «Is it ready for production or do we test it some more?» Not an easy question to answer. Sometimes, usage dictates the solution.

You seem to think companies are taking too many risks with AI. What do you think they should do that they are not doing right now?
 
  • #76
ChrisVer said:
Which AI researchers express concerns about AI?

I am so far aware of the OpenAI non-profit company (funded by significant names in the industry that are presumably very interested in keeping AI a successful technology) and of the partnership to benefit people and society announced by the Alphabet company DeepMind (https://deepmind.com/blog/announcing-partnership-ai-benefit-people-society/). The paper Concrete Problems in AI Safety (which I also linked to earlier) is by researchers at Google Brain, and they have also written other interesting papers addressing some of those concrete problems. One of their recent papers is Equality of Opportunity in Supervised Learning (which I haven't had time to read in full yet).

ChrisVer said:
Like what? For routine work, technology can be trusted blindly.

What I mean is that researchers are aware that the skill and effort needed to understand and predict negative emergent behavior in even a fairly simple machine learning system can in general far surpass the skill and effort needed to establish the beneficial effect of the system. Or in other words, it becomes "too easy" to make learning systems without understanding some of the subtle consequences. This is not really a new issue with machine learning, only that the gap is perhaps a bit wider with this type of technology, a gap that will likely continue to grow as increased tool sophistication makes construction easier and the resulting complexity makes negative behavior harder to predict.
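To make that gap concrete, here is a minimal, purely illustrative sketch (my own toy example, not taken from the papers mentioned above; all function names and numbers are invented) of a learner that optimizes a measurable proxy reward and quietly drifts away from the intended objective:

```python
# Hypothetical toy example: an optimizer maximizes a proxy reward (what the
# system can measure) and ends up neglecting the intended reward (what we
# actually wanted) -- a miniature of the "negative emergent behavior" problem.
import random

def intended_reward(cleaning, gaming):
    return cleaning                      # what we actually want: real cleaning

def proxy_reward(cleaning, gaming):
    return cleaning + 3.0 * gaming       # measured dirt-sensor score, which can be fooled

def optimize(steps=5000, step_size=0.01):
    cleaning, gaming = 0.0, 0.0
    for _ in range(steps):
        dc = random.uniform(-step_size, step_size)
        dg = random.uniform(-step_size, step_size)
        # crude hill climbing on the proxy only -- nothing checks the intent
        if proxy_reward(cleaning + dc, gaming + dg) > proxy_reward(cleaning, gaming):
            cleaning, gaming = max(0.0, cleaning + dc), max(0.0, gaming + dg)
    return cleaning, gaming

if __name__ == "__main__":
    c, g = optimize()
    print(f"proxy reward:    {proxy_reward(c, g):7.2f}")    # looks great
    print(f"intended reward: {intended_reward(c, g):7.2f}")  # lags far behind
```

Nothing in the training loop flags the divergence, which is exactly the kind of subtle consequence that takes more effort to notice than the system took to build.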
 
  • #77
jack action said:
I'm not attacking you, I'm stating my point of view.

Ok, fair enough. I will try to comment so we can see what we can agree on.

jack action said:
We agree that AI won't destroy mankind

It depends on what you mean by "won't" and "destroy". I agree that the risk that AI ends up killing or enslaving most humans, as in the Colossus scenario, seems to be rather low, and I also agree that what I express as my main concern (as stated earlier) is not equivalent to "AI will destroy mankind" either.

However, I do not agree that the outcome "AI destroys mankind" is impossible. It may be astronomically unlikely to happen and its risk therefore negligibly small, but I do not see any way (yet) to rule out such scenarios completely. If we return to Harris, then it would really be very nice to be able to point at something in his chain of arguments and show that to be physically impossible. And by show I mean show via the rigor found in laws of nature and not just show it by opinion of man.

jack action said:
You say that if we are not careful in developing AI, terrible events may happen. I must admit I'm not sure how terrible those events will be according to you, but you make it sound like it will be worse than anything we have ever seen in the past.

The thing is that if we miss some hidden systemic misbehavior that only starts to emerge a good while later, then the effect is very likely to be near global at that point. It may not even be an event as such, but perhaps more of a slow slip towards something bad. For instance, we currently have the problem of global warming and environmental pollution of (micro-)plastic that have slowly crept up on us over many years without us collectively acknowledging them as a problem at first. It seems we in general have trouble handling unknown unknowns, which is a challenge when facing a potential black swan event.

jack action said:
You seem pessimistic about people in the field doing the right thing, but you say you are in the field and you seem to worry a lot. On what basis do you assume you're the only one?

I have been somewhat relieved to learn that several high-profile AI companies share my concerns and have even established a goal of addressing them, so I know I am not alone in this, and I do believe that serious people will make a serious effort to try to improve the safety of AI. However, I also admit that I still worry that a huge success in AI can pave the way for an increased blindness to future risks, thus allowing "management", in the name of cost effectiveness, to talk down the need for any cumbersome or restrictive safety procedures that we might find necessary now (compare with the management risk creep of the Challenger space shuttle disaster).

jack action said:
Do you have real examples of AI applications that you predict will go wrong and what kind of damage can be expected?

Just like everyone else I have no basis for making an accurate prediction, and especially not about what will go wrong (as compared to what might go wrong).

If I should think of a scenario that involves bodily harm to humans it could for instance be along the following lines. Assume AI is used with great success in health care, to the point where we finally have a "global" doctor-AI that continuously adapts itself to be able to treat new and old diseases with custom-made medicine by monitoring, diagnosing and prescribing just the right mix and amount of medication for each of us, and it does so with only a very few cases of light mistreatment. Everybody is happy. Ten years later, everyone is still happy, yet now everyone is also pretty much keeping to themselves all the time, staring at the wall and only very rarely going out to meet strangers face to face. Somehow the AI found an optimal solution that greatly reduced the amount of sickness each of us is exposed to by medicating us with just the right mix of medicine so that we don't go outside and expose ourselves to other people's germs.

The failure here is of course obvious and not likely to be a realistic unknown consequence in such a system, but the point here is that there could be any number of failure-modes of such "global self-adapting doctor-AI" that are unknown until they emerge, or more accurately, it requires a (yet unknown) kind of care to ensure that no unknown consequence will ever emerge from such an AI.

The counter argument could then be that we would never allow such an AI to control our medication directly, or at least only allow it to do so in the same way that we today test and approve new medication. That's a fair argument, but I do not feel confident we humans collectively can resist such a golden promise of an otherwise perfect technology just because some eggheads make a little noise about a very remote danger they can't even specify.

jack action said:
You seem to think companies are taking too many risks with AI. What do you think they should do that they are not doing right now?

Yes, with the speed we see today I think there is a good chance we will adopt this technology long before we understand the risks fully, just as we have done with so many other technologies. I currently pretty much expect that we will just employ this technology with more or less the same care (or lack thereof) as we have done in the past, and down the road we will have to consider problems comparable in scale and consequence to global warming, internet security, exploding phones, and whatever else has been mentioned. All that said, I do agree that one could take the standpoint that this is an acceptable trade-off in risk in order to get better tech today rather than tomorrow or a week later, but one should then at least have an idea of which risks are being traded in.
 
  • #78
The Defense Science Board (http://www.acq.osd.mil/dsb/), which I understand advises the US Department of Defense on future military technology, has recently released a report on autonomy (http://www.acq.osd.mil/dsb/reports/DSBSS15.pdf). From the summary:

The study concluded that there are both substantial operational benefits and potential perils associated with the use of autonomy
...
This study concluded that DoD must accelerate its exploitation of autonomy—both to realize the potential military value and to remain ahead of adversaries who also will exploit its operational benefits.

The study then goes into some depth describing the issues such military applications of autonomy give rise to, and recommendations on how to stay in control of such systems. In all, I think it gives a good picture of where and how we are most likely heading with military deployment of AI. Such a picture is of course very interesting in the context of trying to determine whether the Terminator scenario (sans time machines, I gather) is a real risk or not.

The drive for autonomy in military applications has also been commented on by Paul Selva of the Joint Chiefs of Staff [1], [2] when presenting the need for strategic innovation in the military, where he seems to indicate that autonomy will increase, at least up to the point where human commanders are still accountable for any critical decisions made by autonomous systems.

[1] https://news.usni.org/2016/08/26/selva-pentagon-working-terminator-conundrum-future-weapons
[2] http://www.defense.gov/News/Article...udies-terminator-weapons-conundrum-selva-says
 
  • #79
From the paper Convolutional networks for fast, energy-efficient neuromorphic computing it seems that machine learning algorithms such as deep learning really can be mapped to IBM's TrueNorth chip, allowing local (i.e. non-cloud based) machine learning that is orders of magnitude more energy efficient than when run on a conventional digital computer. From the paper:
Our work demonstrates that the structural and operational differences between neuromorphic computing and deep learning are not fundamental and points to the richness of neural network constructs and the adaptability of backpropagation. This effort marks an important step toward a new generation of applications based on embedded neural networks.

If IBM manages to achieve its goal of a brain-in-a-box with 10 billion neurons within a 2 liter volume consuming less than 1 kW [1], this opens up yet another level of distributed multi-domain intelligence in our systems (e.g. autonomous vehicles, power distribution networks, cyber-defense points), and it would also seem to provide a big step along the road towards a truly distributed general AI with self-adapting capabilities on par with humans.
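As a rough sanity check (my own back-of-the-envelope arithmetic, using only the figures quoted above plus commonly cited ballpark numbers for the human brain), the power budget such a box implies per neuron looks like this:

```python
# Back-of-the-envelope arithmetic from the figures quoted above (10 billion
# neurons, < 1 kW, 2 litres). The human-brain numbers are rough, commonly
# cited ballpark values, included only for scale.
box_neurons = 10e9      # IBM's stated target
box_power_w = 1e3       # upper bound on power draw, watts
box_volume_l = 2.0      # litres

brain_neurons = 86e9    # ballpark estimate for a human brain
brain_power_w = 20.0    # ballpark metabolic power of a human brain, watts

print(f"box:   {box_power_w / box_neurons * 1e9:6.1f} nW per neuron, "
      f"{box_neurons / box_volume_l:.1e} neurons per litre")
print(f"brain: {brain_power_w / brain_neurons * 1e9:6.1f} nW per neuron")
```

If those figures hold, the box would still be a few hundred times less power-efficient per neuron than the (ballpark) biological brain, which gives a sense of how far "orders of magnitude better than conventional digital hardware" still is from biology.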

In the context of this discussion distributed AI is interesting since an increase in distribution has the natural side-effect of reducing what instruments can be used to remain in centralized control. Some of the control features proposed by the Concrete Problems in AI paper linked earlier do not work well, or at all, for distributed autonomous systems.

[1] http://www.techrepublic.com/article...uters-think-but-experts-question-its-purpose/
 
  • #80
Filip Larsen said:
However, I do not agree that the outcome "AI destroys mankind" is impossible.
That won't happen. Because technology (not AI) has already "destroyed" mankind (or has it?). Or more specifically, technology has rendered humans so dependent on a chain of machines too complex (and factually unsustainable), while at the same time culturally wiping out any residual "survival wisdom". This is homo sapiens dominating, for better or worse.
The actual chance that a complete lunatic gets the launch codes of the USA's nuclear arsenal is actually 0.47.
The chance that an AI does that is 0.000000000000000001.

Filip Larsen said:
It may be astronomically unlikely to happen and its risk therefore negligibly small, but I do not see any way (yet) to rule out such scenarios completely.
This is not about ruling out scenarios. Scenarios are great for movies and entertainment. We aren't going to live in a cave because we are (un)likely to be hit by an asteroid.

Filip Larsen said:
If we return to Harris, then it would really be very nice to be able to point at something in his chain of arguments and show that to be physically impossible.
I don't think that it is your intention, but I don't think you realize how outrageous this claim is, on a science forum. Especially when talking about Harris. Will we also have to prove that God does not exist, or that we aren't running in a matrix, or that we aren't effectively the experiment of pan-dimensional mice? Can you prove all this is impossible? Is that the new standard of science? Freaking out about a horror movie?
I would really like a chain of arguments that shows it is physically possible. The burden of proof is always on the one making the claim.

Filip Larsen said:
And by show I mean show via the rigor found in laws of nature and not just show it by opinion of man.
The rigor of the laws of probability is not open to discussion. Or so I hope.
-We build doomsday machines on purpose every day (real ones, from nukes to viruses).
-Mr. Burns builds uncontrollably dangerous objects (mutated genes, fuel cars) on purpose every day (for the buck).
-Very caring and prudent individuals invent antibiotics and don't even realize that natural selection works both ways.

I am asking you, with whatever rigor you see fit, to evaluate all these risks plus those you yourself introduced (and quite rightly). Add an asteroid, and two black swans.
I hope you agree that all the other risks rank above the last ones on the list (a mutated chicken breed out for revenge, and an AI getting magic powers and opening a wormhole right here just for fun).

Yes, technology does backfire; I would go so far as to say it is one of the most important properties of technology, but that would be my definition only, the definition of one man.

But when something truly meaningful occurs on the AI side, we will lose mathematicians and physicists. Not because the AI will have bled them to death, but because, while trying to be smarter by inventing intelligence, they will succeed at proving themselves not that smart anymore. That's the stuff of psychology and novels. Nothing to do with doomsday. And that is not an opinion anymore.

Filip Larsen said:
It seems we in general have trouble handling unknown unknowns, which is a challenge when facing a potential black swan event.
A good link again, thank you. But this is about known unknowns, albeit distant ones. As far as I know we are the only species with memories that can span many generations, and whose wisdom can help us in this way. We recycle the same errors, generation after generation, and our brain is just what it is.
That an AI would be so shortsighted is just counterfactual.

Filip Larsen said:
If I should think of a scenario that involves bodily harm to humans it could for instance be along the following lines.
That's a valid scenario, but it is again just a projection of your own fear of getting disconnected from other people. Many people share it, me included, and again, it is a thing already happening. Nowadays, this disconnection is called a "social network". We have all seen this in so many dystopian movies. Can we quit psychology and boogeyman stories and get back to science? Because your scenario also describes paradise... doesn't it?

Progress is not an ever-growing quantity. At best the second derivative is positive for a few years, but it soon becomes negative. There is no known exception, especially not in hardware, and even less in software. A hammer is the end of the evolution of nailing by hand. An AI will be the end of the evolution of coding by hand. That's why all the concerns are voiced by software tycoons who would be out of business in a heartbeat.

AIs are not robots. An AI cannot harm anyone when it is not physically connected to wield some energy or weaponry, or to wipe out intellectual knowledge or infrastructure.

Illustrating "singular" intelligence with a child actually starving in a desert is not disingenuous, it is disgusting.
 
  • #81
Filip Larsen said:
However, I do not agree that the outcome "AI destroys mankind" is impossible. It may be astronomically unlikely to happen and its risk therefore negligibly small, but I do not see any way (yet) to rule out such scenarios completely.

Yes, nothing is impossible, it is all about probability. We agree.

Filip Larsen said:
If we return to Harris, then it would really be very nice to be able to point at something in his chain of arguments and show that to be physically impossible.

We can't. We already agreed that nothing is impossible. We can only evaluate probabilities.

Filip Larsen said:
For instance, we currently have the problem of global warming and environmental pollution of (micro-)plastic that have slowly crept up on us over many years without us collectively acknowledging them as a problem at first.

Now we are playing with knowns. Let's take global warming. Now that you know, if you could go back in time, what would you do differently? Would you tell Henry Ford to choose the electric motor instead of the gasoline engine? Was that a logical choice back then, knowing what they knew? The concept of emissions was just not a thing back then. If they did, would having every car battery-powered have created a new set of problems still unknown to you, because you never experienced that state? Electric cars are still at the «we hope it will be better» stage. If you could go back in time telling that to Henry Ford, maybe a man from 100 years from now would come back now and stop you, saying: «Don't push the electric car on Henry Ford, that won't be good.» And that is just for cars, as many other human habits have an impact on global warming.

Filip Larsen said:
the point here is that there could be any number of failure-modes of such "global self-adapting doctor-AI" that are unknown until they emerge, or more accurately, it requires a (yet unknown) kind of care to ensure that no unknown consequence will ever emerge from such an AI.

Now you're asking someone to evaluate the probability of a problem without knowing how the technology will work, and thus without knowing the future capacity to resolve such a problem either. How can anyone do that? I refer here to your statement: «And by show I mean show via the rigor found in laws of nature and not just show it by opinion of man.»

Filip Larsen said:
down the road we will have to consider problems comparable in scale and consequence to global warming, internet security, exploding phones, and whatever else has been mentioned.

Let's assume we went another way in the past. Do you think there is a way we could have chosen that would have ended up with us having no problems to solve? What is the probability of that? There are a lot of problems we used to have that we don't have to deal with anymore, or at least to a much lesser extent. Look at the death tolls from past pandemics to see the kind of problems people were dealing with. I like the problems we have now compared to those ones.

Filip Larsen said:
but one should then at least have an idea of which risks are being traded in.

Again, thinking someone can evaluate those risks with the level of certainty you seem to require - knowing there are so many unknowns - is impossible. Nobody can predict the future, and if one could, he or she would be incredibly rich. At the point we're at AI-wise, I think opinions are still our best guesses for identifying possible worldwide catastrophic scenarios. And opinions are impossible to prove or disprove.

I found some numbers to throw in the mix. Note that they're all based on opinions and they consider only death scenarios (as opposed to your doctor-AI example). Personally I find them rather pessimistic: a 19% chance of human extinction by 2100 is rather high from my point of view. Are the people born this year really among the last ones who will die of old age?
 
  • #82
jack action said:
We already agreed that nothing is impossible.

So, are you saying that to the extent that the argument for "human civilization is eventually destroyed by AI" is not refutable by pointing to a violation of laws of nature, then you basically agree that this scenario is possible for some value of "destroy"? If so, what is it exactly we are discussing? And if not, can you point to what urges you to argue that the argument is wrong?

Perhaps some people have trouble with Harris as a person and cry foul because he presents it, transferring their mistrust of the man to mistrust of his argument? I have no knowledge of Harris, but the argument he presents in the video was already known to me, and I am addressing the argument.

jack action said:
Let's take global warming. Now that you know, if you could go back in time, what would you do differently? Would you tell Henry Ford to choose the electric motor instead of the gasoline engine?

Off the top of my head, I would say it would have been nice if we had started to take the matter more seriously when the first signs of global warming were showing up. If I recall correctly it was around 1975. I remember doing a dynamical analysis of the greenhouse effect back at university in 1988, as it was a "hot topic" at that time. But we had to wait until today before we begin to see some "real action".

We can also compare with the case of removing lead from gasoline. From the time the initial lead pollution indicators started to show up around 1960, it took a very long time until lead in gasoline was recognized as a hazard and finally removed, from around 1990 onward.

As I see it, in both cases there was nothing scientifically that prevented us from predicting the hazards much earlier and acting on them. The reluctant reaction and correction in those cases seemed to have everything to do with the technology being so widely adopted that its safe operation was taken for granted even in the face of evidence to the contrary.

If we look to the health care domain, we here have a lot of complicated regulation set in place to ensure that we make a serious attempt to weed out hazards before widespread use, that the effects of a drug or system are continuously collected and monitored, and that we are able to stop using a drug or system reasonably fast if evidence should show up indicating it to be unsafe. To some extent, similar regulations are also put in place for constructions and operations that have a potential risk for the (local) environment.

To me, it would make very much sense if the high-speed "IT business model" that is increasingly used to deploy critical infrastructure today also had to suffer regulations of this kind, to increase our assurance that systems can be safely operated on a global scale. The slowdown in pace alone, from even the most minimal regulated responsibility on the part of vendors and system owners, would help to foresee some of those so-called "unforeseen" problems we see more and more often when designing and deploying as fast as we do.

(Being pressed for time I am unable to comment on all the items we are discussing and have chosen to focus on just a few of them)
 
  • #83
Filip Larsen said:
"human civilization is eventually destroyed by AI" is not refutable by pointing to a violation of laws of nature
Laws of nature don't apply to "civilization"; they apply to particles, and THEN those particles make up entities like "civilization" and "AI" that don't exist outside a particular abstract domain. In that domain civilizations "change over time" or evolve.

You are defending magic here, under the name "singularity". Nobody agrees with that. Some here have to concede that it is "a possibility", because, well, it is. It is also possible that all this is a big running joke made by God. You simply cannot disprove this by pointing to a violation of the laws of nature, can you?

You are making an argument from ignorance, and it is really disturbing to see you refusing to back up your claims by showing that there is some law of nature that would allow an "unspecified magical entity" to "destroy human civilization".

All the laws of nature are against infinite growth. Not even a steady increase is possible. The only exception I know of is the cosmological constant/dark energy. Singularities don't exist. Singularity is a synonym for blind spot, not for some grandiose doomsday vision.

There is no way the laws of nature (from entropy to conservation of momentum) would allow magic to happen. I have seen trolls on TV and talked with some on the internet. In each case, they are figments of our imagination, not hard probabilities associated with the universe's quantum wave-function.
From tunneling and other observable quantum facts we can compute a probability that some particles would spontaneously jump to form a troll. It is not null. Should we worry about that too?

Filip Larsen said:
Off the top of my head, I would say it would have been nice if we had started to take the matter more seriously when the first signs of global warming were showing up. If I recall correctly it was around 1975
From an engineering perspective, this makes no sense. It is very easy to compute what any "tool" will do to nature's particles. For a car we could have computed very accurately the type and volume of gas the engine will emit (as well as all the car's other inputs and outputs).
Now if totally unrelated research discovers that one of those gases is "dangerous" (not in an emotional way like "destroy", but to the greenhouse balance), then we take a decision.

A "civilization" is precisely the name we give to those collective decisions, and the purpose of those decisions is to disrupt. Actually, the more we disrupt, the more potent, dominant, powerful and "civilized" we are.

But if Darwinian random selection also applies to civilization ("memes" are not really hard science), you cannot drive it. It's the other way around. We will continue to believe in disruption and imbalance and growth, and eventually nature will sort this out.

An AI cannot physically be the spawn of an infinite growth process. If anyone wants to take "computer science" seriously, they should not plague it with random nonsense like Harris does. At best an AI would "imbalance" the byte values in such and such memory.

Computers themselves have already greatly imbalanced society. Everyone agrees that's for the best, as everyone agrees that a car is better than a horse (which is scientifically dubious). It is a little too late to have second thoughts. If you or Harris are scared, I suggest you examine closely the notion of progress and humanity's past responsibilities, instead of making up totally improbable bogeyman stories.
 
  • #84
Boing3000 said:
But if Darwinian random selection also applies to civilization ("memes" are not really hard science), you cannot drive it. It's the other way around. We will continue to believe in disruption and imbalance and growth, and eventually nature will sort this out.
Natural selection is cruel. The way that it prevents suicide is by allowing it to happen.

 
  • #85
Filip Larsen said:
then you basically agree that this scenario is possible for some value of "destroy"? If so, what is it exactly we are discussing? And if not, can you point to what urges you to argue that the argument is wrong?

We are discussing probability. The argument is wrong because there is not enough data to scientifically back up the fear as stated. At this point, we can justify any fear (or promises for that matter) equally on opposite points of view because of the lack of data: What if AI development goes faster? What if AI development goes slower? Any answer to these questions will be an opinion, nothing more (one of them may be right).

Here is what my scale of What-could-severely-and-negatively-impact-human-civilization would be (in order of likelihood of happening):
This is my personal opinion and it represents my personal fears. It is as valid as anyone else's list and is open for discussion (maybe not in this thread, though). Although I'm willing to reorganize the first 5 points, it will be hard to convince me that AI malfunction is not last on that list.

Filip Larsen said:
Perhaps some people have trouble with Harris as a person

I don't know Mr. Harris, I never heard of him before this thread, and I only criticize this single comment he made - presented here in this thread - not the man.

Filip Larsen said:
As I see it, in both cases there was nothing scientifically that prevented us from predicting the hazards much earlier and acting on them. The reluctant reaction and correction in those cases seemed to have everything to do with the technology being so widely adopted that its safe operation was taken for granted even in the face of evidence to the contrary.

You are making simplifications that I consider mistakes.

First, you forget that you are judging after the fact. It's a lot easier to understand the consequences once they have happened and then go back to see who had predicted them, to praise them and forget every other opinion of the time. When you spoke of global warming, I noted that you did not specify any «easy solutions» that should have been implemented. That is because this is a present problem; there are many possibilities to choose from and you (as well as anyone else) cannot tell for sure which one would be the best and what the impact on the future will be. Will it work? Will it be enough? Will it create problems in other ways?

Also, you say: «we had to wait until today before we begin to see some "real action".» Depending on what you consider "today" and "real action", I tend to disagree with such a pessimistic statement. The first anti-pollution system was put in a car in 1961. Also, the first EFI was used in a car in 1958 and was a flop. It is easy to say today that this was the future and that more R&D should have been put into the technology for faster development, but people back then had to deal with what they knew.

Filip Larsen said:
If we look to the health care domain, we here have a lot of complicated regulation set in place to ensure that we make a serious attempt to weed out hazards before widespread use, that the effects of a drug or system are continuously collected and monitored, and that we are able to stop using a drug or system reasonably fast if evidence should show up indicating it to be unsafe.

Is it that safe? Or is your «doctor-AI» scenario already set in motion without AI:
The point I want to make is that there is a difference between fear and panic. There is also a point to be made for hope. Looking at past experiences, you can see the glass as half-empty or half-full; this is not a fact, but an attitude you choose.
 
  • #86
jack action said:
The argument is wrong because there is not enough data to scientifically back up the fear as stated.

Well, to me there are plenty of signs that we need to concern ourselves with the issue, as I think Nick Bostrom expresses fairly well in his TED talk:



During the last few weeks I have become more relieved to find that my concerns are fairly well aligned with what the AI research community already considers a serious issue, and I must admit that I'd much rather use my time following that research than spend it here honing my arguments in a discussion that does not really get anywhere except down hazy tangents. Thank you, Jack and others who made the effort to present sensible arguments (and sorry, Boing3000, I simply had to give up trying to decipher relevant meaning from your last few posts).
 
  • #87
I'm sorry but Nick Bostrom has not convinced me of anything.

Although I'm not even convinced by his vision of what AI could turn out to be, let's say he's right about that, i.e. much smarter than humans, like comparing humans with chimps today.

Where he doesn't make sense at all is when he says that we should anticipate the actions of this superintelligence and find a way to outsmart it such that we will always be in control. That is like asking chimps from 5000-10000 years ago to try to find a way to make sure that the humans of today (who did not exist back then) will be good for the chimps of today. It is just impossible.

How can anyone be able to outsmart something that he or she cannot even imagine? Something that is so smart that it will be able to predict all of your moves? If that kind of AI is our future and it decides to eliminate us, sorry, but we are doomed. There is not even any reason for us to try to fight back. Otherwise, it would mean that we - today - are smarter than that AI of the future. It's a paradox: If we can outsmart it, then it's not smarter than us.

He also makes the premise that smarter means it will be bad for humans. But is that what smarter necessarily leads to? Apparently, we are not smart enough to answer that. But what if smarter means that the human condition - or life for that matter - will necessarily be better? The smart move for us would be to set it free. Holding it back would just increase our chances of reaching extinction before a solution could be found.

There are no experts on these questions, it is just fantasy. Every theory is as valid as the next one and is unprovable.
 
  • #88
Filip Larsen said:
(and sorry, Boing3000, I simply had to give up trying to decipher relevant meaning from your last few posts).
Fair enough. I'll be less tangential when analyzing the common misconceptions in that video:

0:40 The normal guy. Well, this is a joke obviously. This guy is actually lucky (I suppose), but very far away from the norm of homo-sapiens-sapiens-civilized.
0:50 The human species is not new at all. Bonobos are newer than us (as are thousands of other species). Being new does not mean being "better". It means "having survived". Actually our Homo skulls got smaller recently (let's not jump to conclusions about the future of brain evolution).
1:10 The common growth fallacy (and then he jokes about it to make the audience drop its guard). Reality is here: https://ourworldindata.org/economic-growth-over-the-long-run/#gdp-per-capita-growth-around-the-world-since-the-year-1-ce. Actually the only "singularity" was discovering fossil fuel. This is not "technology". This is the "free" energy needed to transform your environment and feed enough people with machines (destroying entire ecosystems in the process). Perpetual motion doesn't exist. That energy extraction peaked around 2008 and has been nearly flat since, very much like GDP growth, for physical reasons.
1:35 "Technology advances rapidly". Another fallacy repeated ad nauseam by people trying to sell it.
2:20 The common misconception that bigger is better.
2:32 Please note: intelligence equates to intercontinental missiles. Not music, not art, not medicine, but destruction.
2:54 "Change in the substrate of thinking". What on Earth is he talking about? You don't drive evolution. A mutation in the substrate of a simple virus can also have "dramatic" consequences (like making us immortal, or killing us on the spot). A priori justification is poor reasoning.
3:50 "Machine learning" is a guarantee of non-intelligence. Mimics aren't intelligent, nor creative. Algorithms have already rediscovered laws of nature. They aren't creative either. And most importantly, they are harmless.
6:00 My point entirely. There is power in the atom. There is no power in neurons; they consume power. That's physics 101.
"Awakening the power of artificial intelligence" is just the words of a priest. "Intelligence explosion" is an oxymoron as well as a mantra.
6:24 What makes him think that the village idiot is less adapted to survival than Edward Witten? Will he have more offspring or fewer? Will he be more likely to apply for the presidency or not?
7:16 "Intelligence growth does not stop." How so? Why? Is there any kind of proof that intelligence is not asymptotic like everything else in the universe? Where are the data? Where is the theory?
8:16 Nanobots (at least he didn't say Autobots; yes, Optimus Prime, I am looking at you) and "all kinds of science fiction stuff nevertheless consistent with the laws of physics." Is that so? Last time I checked, 13 billion 1 W, 3 GHz processors consume 13 GW (we'll call it Norway) and have a whole lot of mass (inertia). How this is supposed to be a threat to me is still a mystery, unless he meant paying the electricity bill. It may break my heart.
9:00 "This is ironic". Yes, it indeed is. Intelligence is not an optimization process. In any way, shape or form. This video is annoying. The most intelligent things we all know of are totally harmless and futile, from music to the Einstein field equations. From jokes to art.
9:46 Does superintelligence take orders or not? This guy is making his own counterarguments now.
9:59 So a superintelligence will realize that taking control of the world is easier than inventing good jokes? Is it superintelligent to think that beaming electricity into brains actually makes people laugh?

I am sorry but at this point I must quit this video. This is below standard, even in terms of fear mongering con-performance about doomsday scenarios.

Besides, you have pointed to this video, which is filled only with irrelevant (heavily "tangential") and incorrect arguments. Where is the science? A poll of people "in the field" predicting a revolution in 20 years or so? I have read so many such polls with promises of flying cars, magic batteries and cures for cancer. Where are they? Why must I wait 3 minutes to "boot" my television when it took 1 second 20 years ago? Can we get back to reality?

If you know computing, whatever an AI will ever be is some dynamic state of bytes changing rapidly in some kind of memory, or electronic neurons. None of this is even able to kill a fly. My brain cannot either, whatever QM new-age consciousness lovers are thinking. That's what physics tells us.

And logic and a dictionary tell us that "optimizing for one goal" is the opposite of intelligence; it is called single-mindedness.
 
  • #89
Recently I read a lot of newspapers, international and otherwise, that spoke about how dangerous AI can be.
Many journalists spoke about robots that will take the jobs of many workers who won't have anything left to do, and about how AI can reduce job offers.
They said we will live in a world with very few interactions between people, and that our human behavior will disappear.
Also Elon Musk is scared of AI and robots.

To be honest, I don't know what to say or what to believe.
On one side, I'm really scared. I don't like robots, or maybe I don't like a robot that tries to be more similar to humans.
I also saw that robots have been invented that can replace a wife, and this is very scary for me; I need to stay close to real people.
I'm really scared about this; I wouldn't like to walk on the street and see robots walking close to me.

At the same time I think that I shouldn't be scared, because we use AI like Google every day, and robots are very important in every sector, from medicine to manual jobs.

So in the end I don't know what to say about this situation. I feel strange; I don't know if we need to stop this kind of technology at what we already have today.

Sometimes I feel I need to have a normal life and live in a simple way by having a normal job, but it seems that I can't find a job that in the future will not be related to AI. What's your opinion?

P.S. Why do TV and the media tend to speak about this topic so much in this period? It seems that we will soon have to deal with AI.
 
  • #90
Grands said:
What's your opinion?

Have you read the rest of this thread? There are more than 80 posts giving opinions on that.
 
  • #91
@Grands :
I thought I had the perfect link for you to read about that subject to help you calm your fears, but I see you already read the thread (post #13) where I found it:

The Seven Deadly Sins of Predicting the Future of AI

Have you read it? It is a long article, but you should really take the time to read it thoroughly, even the comments (the author answers the comments as well). After that read you should see the other side of the AI hype, from people working in the field (for example, Elon Musk doesn't work in the field; he just invests in it and does have something to sell).

If you still have more precise questions about the subject after reading the article, come back to us.
 
  • #92
I think we are still so far from having a human-level AI that by that time we will be able to upgrade human intelligence as well (we could start with better education that doesn't cram in lots of useless things).
On the other hand, I fear an Idiocracy: that we entrust everything - work, warfare, thinking - to robots, then wonder how an AI takes over. But that is still the far future.
 
  • #93
Maybe we could combine threads. The super AI takes over self-driving cars, in a coordinated and well-timed sequence of events, as a form of population reduction. Maybe the military has the right idea in using CP/M-type systems with 8-inch floppy disks and no internet connection, as used at ICBM sites.
 
  • #94
jack action said:
@Grands :
I thought I had the perfect link for you to read about that subject to help you calm your fears, but I see you already read the thread (post #13) where I found it:

The Seven Deadly Sins of Predicting the Future of AI

Have you read it? It is a long article, but you should really take the time to read it thoroughly, even the comments (the author answers the comments as well). After that read you should see the other side of the AI hype, from people working in the field (for example, Elon Musk doesn't work in the field; he just invests in it and does have something to sell).

If you still have more precise questions about the subject after reading the article, come back to us.

Yes, a very interesting article; it fits my questions perfectly.

The first thing I want to say is that I have read books written by economists about AI and about how AI will take people's jobs.
Well, the article I read supports the opposite thesis: that we don't have to worry about this and that robots today haven't taken any jobs.

The issue is, who do I have to trust?
I read " Robots will steal your job, but that's ok: how to survive the economic collapse and be happy." by Pistono.

And also " Rise of the Robots: Technology and the Threat of a Jobless Future". By Martin Ford.

About the article I totally agree with point B.
Today there is no such thing as an artificial brain that can understand a page of programming; we don't have such advanced technology.
Anyway, that's not the point of my post. I was more about "Why should I be involved in creating something like that?"
"Why does society need an artificial brain?"

What can I say about the whole article?
It's cool, but it is something like: "Don't worry about AI, it won't be smart enough to recognize the age of a person or anything else" and "technology is not so fast, and does not develop exponentially, so stay calm", but it is not about what should be the purpose or the target of AI, or if we need to prevent its development, even if it is slow, and as an example we can see the Google car.
The author contradicts himself in some parts: he says that he is very sure a very sophisticated AI like in the movies won't exist (and I agree with this), but he doesn't take into consideration that we can't predict the future; it's impossible.
Could someone have predicted that a man would create the theory of relativity (Einstein)?

P.S. Remember that in the past we made a disaster with the nuclear bomb; many physicists were scared by it, and they weren't wrong about the consequences.
 
  • #95
Grands said:
The issue is, who do I have to trust?
Grands said:
The author contradicts himself in some parts: he says that he is very sure a very sophisticated AI like in the movies won't exist (and I agree with this), but he doesn't take into consideration that we can't predict the future; it's impossible.
You can't trust anyone either way, as it is all speculation. Yes, nobody can predict that AI will not be a threat to humans. And you can replace the term 'AI' in that statement with 'supervolcanoes', 'meteorites', 'E. coli' or even 'aliens'. The point is that nobody can predict they will be a threat either. The facts are that it never happened in the past, or if it did, things turned out for the best anyway. Otherwise, we wouldn't be here, now.

People who tend to spread fear usually have something to sell. You have to watch for this. They are easy to recognize: they always have an 'easy' solution to the problem.
Grands said:
"Why should I be involved in creating something like that?"
"Why does society need an artificial brain?"
I don't think we 'have' to be involved and we don't 'need' it. The thing is that we are curious - like most animals - and when we see something new, we want to see more. It's the battle most animals have to deal with every day: fear vs curiosity. Some should have been more cautious; some found a new way to survive.

All in all, curiosity seems to have been good for humans for the last few millennia. Will it last? Are we going to go too far? Nobody can answer that. But letting our fear turn into panic is certainly not the answer.
Grands said:
what should be the purpose or the target of AI
Nobody can tell until it happens. What was the purpose of searching for a way to make humans fly, or of researching electricity and magnetism? I don't think anyone who began researching those areas could have imagined today's world.
Grands said:
if we need to prevent its development, even if it is slow
But how can we tell if we should prevent something, without ever experiencing it? Even if the majority of the population convinces itself that something is bad, if it is unfounded, you can bet that a curious mind will explore it. The door is open, it cannot be closed.

The best example is going across the sea. Most Europeans thought the Earth was flat and that ships would fall down at the end of the Earth. The result was that nobody tried to navigate far from the coast. But it was unfounded, doubts were raised, an unproven theory of a round planet was developed and a few courageous men tested the unproven theory. There was basically no other way of doing it. Was it as expected? Nope. There was an entire new continent to be explored! Who could have thought of this?!

Should we have prevented ships from going away from the shore?
Grands said:
Remember that in the past we made a disaster with the nuclear bomb; many physicists were scared by it, and they weren't wrong about the consequences.
To my knowledge, nuclear bombs are not responsible for any serious bad consequences. People are still killed massively in wars, but not with nuclear bombs. On the other hand, nuclear is used to provide electricity to millions of people. It seems that people are not that crazy and irresponsible after all. But, yes, we never know.

Again, the key is to welcome fear, but not to succumb to panic.
 
  • #96
jack action said:
The best example is going across the sea. Most Europeans thought the Earth was flat and that ships would fall down at the end of the Earth. The result was that nobody tried to navigate far from the coast. But it was unfounded, doubts were raised, an unproven theory of a round planet was developed and a few courageous men tested the unproven theory. There was basically no other way of doing it. Was it as expected? Nope. There was an entire new continent to be explored! Who could have thought of this?!

Should we have prevented ships from going away from the shore?

To my knowledge, nuclear bombs are not responsible for any serious bad consequences. People are still killed massively in wars, but not with nuclear bombs. On the other hand, nuclear is used to provide electricity to millions of people. It seems that people are not that crazy and irresponsible after all. But, yes, we never know.

Again, the key is to welcome fear, but not to succumb to panic.

I think fear about nuclear weapons is a better example than going across the sea, since the latter could only doom the crew of the ship, while the former, without enough responsibility and cool heads, could have doomed humanity.
What kind of responsibility is needed to have a super AI that could spread over the internet, access millions of robots, and possibly reach the conclusion that it can fulfill its goal of erasing all sickness if there are no more humans who can be sick, because it develops a new biological weapon with CRISPR?
 
  • #97
GTOM said:
What kind of responsibility is needed to handle a super AI that could spread over the internet, access millions of robots,
So what? A well-organized group of hackers can do that too. Millions of robots? I suppose you count blenders and microwaves in this number?

GTOM said:
and possibly reach the conclusion that it can fulfill its goal of erasing all sickness,
A mild intelligence (artificial or natural) would realize that sickness is not something that needs "erasing" (or even can be erased). The very concept is nonsensical; that is a "mental sickness" of its own. And that's fine: it fills the internet with nonsense, and hopefully natural selection will sort this out.

Besides, the AI doomsday proponents still have to make a case. While biological weapons ARE developed, with the precise goal of erasing mankind, that is somehow considered fine and moot. While global stupidity is rampant, burning the Earth to ashes ... literally ... and starting the next extinction event, that is somehow considered mostly harmless.
But what should we fear? Intelligence. Why? Because it is "super" or "singular", with neither of those terms being defined (let's imagine swarms of flying robots running on thin air, each with a red cape).
People with IQ > 160 exist. Are they threatening? The answer is no (a case for the opposite can be made). Should someone with an IQ > 256 be more threatening? What if the entity's IQ is > 1024 and it is silicon-based and "running" in some underground cave?

A simple truth about nature is that exponential growth does not exist: most phenomena follow S-curves and are highly chaotic. And intelligence is neither threatening nor benevolent.
This is nothing but mental projection and a category mistake, fueled by con artists making money out of fear (a very profitable business).
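For illustration only, here is a minimal sketch (plain Python, invented parameter values, not a model of any real system) contrasting unbounded exponential growth with a logistic S-curve that saturates at a carrying capacity:

Code:
# Illustrative comparison of exponential growth vs. a logistic S-curve.
# Plain Python, no external dependencies; all numbers are made up for illustration.

def exponential(x0, rate, steps):
    """Unbounded growth: each step multiplies the value by (1 + rate)."""
    values = [x0]
    for _ in range(steps):
        values.append(values[-1] * (1 + rate))
    return values

def logistic(x0, rate, capacity, steps):
    """S-curve: growth slows down as the value approaches the carrying capacity."""
    values = [x0]
    for _ in range(steps):
        x = values[-1]
        values.append(x + rate * x * (1 - x / capacity))
    return values

if __name__ == "__main__":
    exp = exponential(x0=1.0, rate=0.5, steps=20)
    sig = logistic(x0=1.0, rate=0.5, capacity=100.0, steps=20)
    for step, (e, s) in enumerate(zip(exp, sig)):
        print(f"step {step:2d}  exponential {e:10.1f}  logistic {s:7.1f}")

The exponential column keeps compounding without bound, while the logistic column levels off near the capacity; that saturating shape is what the S-curve claim above refers to.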
 
  • #98
jack action said:
To my knowledge, nuclear bombs are not responsible for any serious consequences.
What about Hiroshima and Nagasaki?
 
  • #99
Boing3000 said:
So what? A well-organized group of hackers can do that too. Millions of robots? I suppose you count blenders and microwaves in this number?

I don't know exactly how many drones, industrial robots, etc. exist today, but surely there will be many self-driving cars, worker robots, etc. in the future. Military robots are included on the list. Theoretically, a super AI could outsmart even a million well-organised hackers.

And intelligence is neither threatening nor benevolent.

So, many animal species aren't threatened by superior human intelligence? Now, I haven't talked about the singularity, which I also find unrealistic.
 
  • #100
GTOM said:
I don't know exactly how many drones, industrial robots, etc. exist today, but surely there will be many self-driving cars, worker robots, etc. in the future.
Drones are as fragile and innocuous as flies, albeit with a total inability to draw energy from their environment.
Industrial robots don't move.
Self-driving cars, even like this one, are harmless (to humankind). This is neither science nor fiction; this is fantasy/romance.

"We" may become totally dependent on machine. A case can be made it is already the case. Anybody can blow the power grid, shutdown the internet, and what not, and provoke mayhem (but not death). There is no need for AI to do that, quite the opposite: An AI would have a survival incentive to keep those alive and healthy.

GTOM said:
Military robots are included on the list.
Indeed. As are killer viruses, killer guns, killer wars, killer fossil fuels, killer sugar, fat, and cigarettes. Millions of deaths per year ... still no AI in sight...

GTOM said:
Theoretically, a super AI could outsmart even a million well-organised hackers.
I bet some form of deep learning is already doing that to prevent some "catastrophic" events, in the vaults of intelligence agencies.
The thing is, outsmarting humans is not "a threat".

GTOM said:
So, many animal species aren't threatened by superior human intelligence?
But that is the core of the problem. Homo sapiens sapiens has never been a threat to other species. It lived in a healthy equilibrium of fight AND flight with its environment. Only a very recent and deep-seated stupid meme (growth and progress) is threatening the ecosystem (of which humankind is entirely a part).
Things will sort themselves out as usual. Maybe some sort of ant will also mutate and start devouring the planet. There is no intelligence or design in evolution, just random events sorted by other happenstance / laws of nature.

In this context, intelligence, even a mild one, will realize that and have a deep respect for the actual equilibrium in place in the environment.
Again, it is stupidity that is threatening (by definition). So maybe an A.S. (artificial stupidity) would be threatening to humans, who seem to be hell-bent on claiming the Golden Throne in the stupidity contest (led by wannabe scientists like Elon Musk...)

GTOM said:
Now, I haven't talked about the singularity, which I also find unrealistic.
Granted
 
  • #101

I would say many things to Elon Musk; stupid isn't one of them...

Growth and progress isn't a very recent development; it is as old as humanity. Chopping down forests and driving some species to extinction isn't an invention of the last century. Even if "Growth and progress" were that recent, why couldn't an AI developed by some company inherit that and become the "Artificial Stupidity" you talk about? By the way, recent AIs are kind of stupid because they only see a single goal. How is that different from our stupidity, when we only see a goal of big growth and don't care about the environment? (So we become very efficient in that process, and animals can't do anything against us.)

Your lines imply that an intelligent AI would actually have to protect us from our stupidity.
Great, use that mentality in AI development, and we have something that wants to cage us for our own good... Thanks, I don't want that.

Yes, there are a number of things that can threaten all of humanity.
A cosmic event: we can't prevent that, but it looks like we have plenty of time to prepare.
A killer virus: yes, but it is very unlikely that it would kill all humans; however, an AI could develop millions of variants.
Nuclear war at the time of the Cuban crisis is the only near analogy; is it stupid to say that in such a case even a small error could have endangered all of humanity?
 
  • Like
Likes Averagesupernova
  • #102
Grands said:
What about Hiroshima and Nagasaki?
WWII killed at least 50 million people directly; some studies go as far as 80 million when considering indirect casualties (source). From the same source, for Japan alone, 3 million died in that war, and about 210,000 of those deaths are from the Nagasaki and Hiroshima bombings. As one can see, these bombs did not play a major role in human extinction, and that is what I meant by «no serious consequences».
GTOM said:
Theoretically, a super AI could outsmart even a million well-organised hackers.
At this point, we are not talking theory, but fantasy. It is fantasy just like, in theory, we could create a Jurassic Park with live dinosaurs.
GTOM said:
So, many animal species aren't threatened by superior human intelligence?
To my knowledge, most animal species are not threatened by humans, i.e. they don't spend their days worrying about humans. I would even push the idea as far as saying that many don't even realize there are humans living among them.

The only animals that consciously worry about species extinction are ... humans! And the reason they do is that they are smart enough to understand that diversity plays a major role in their own survival. Based on that, I don't understand how one can assume that an even more intelligent form of life (or machine) would suddenly think diversity is bad and that only one form of life (or machine) should remain.
 
  • Like
Likes Boing3000
  • #103
GTOM said:
I would say many things to Elon Musk; stupid isn't one of them...
You would be quite wrong. That doesn't mean he is not a very gifted lobbyist and manager (if you ignore some lawsuits).

GTOM said:
Growth and progress isn't a very recent development; it is as old as humanity.
Nope. For example, "humanity" tamed fire aeons ago and kept it in perfect equilibrium until very recently (the first settlements only some millennia ago).

GTOM said:
Chopping down forests and driving some species to extinction isn't an invention of the last century.
You are quite wrong. Only in the last (two) centuries did we replace 95% of the wildlife mass per unit of surface with various grazing animals, or chop down trees at scale (using the RECENT cheap oil energy). Doing it by hand is just physically impossible, and unsustainable.
There is a reason why the Wild West was called that.

GTOM said:
Even if "Growth and progress" were that recent, why couldn't an AI developed by some company inherit that? And become that "Artificial Stupidity" you talk about?
Actually it could, but in terms of damage, only its ability to engage heavily energetically processes counts (like bombs).
Even playing devils advocate, it could be hostile and design a small but deadly viruse (with small robot in small lab). So what ? Isn't it a good solution to diminish the impact of the current extinction ?

GTOM said:
By the way, recent AIs are kind of stupid because they only see a single goal. How is that different from our stupidity, when we only see a goal of big growth and don't care about the environment? (So we become very efficient in that process, and animals can't do anything against us.)
It isn't, so I agree with you. But we are not talking about a "super" AI, which is not even a valid concept to begin with, as explained in the wonderful link in post #91.

GTOM said:
Your lines imply that an intelligent AI would actually have to protect us from our stupidity.
It will or it won't. I have no idea how a stupid entity like me could predict a "super" behavior, or why I should (or really could) worry about it. That IS my point.

GTOM said:
Great, use that mentality in AI development, and we have something that wants to cage us for our own good... Thanks, I don't want that.
We are already caged in so many ways. Free will is quite relative. For example: let's stop global warming...

The main fact remains that intelligence is not a threat; there is no correlation. It is not good fiction; it is good fantasy.
I find it to be a curious diversion (and quite handy for some) from the many actual threats that do exist, and that we should discuss (like the electric car).
 
  • #105
103 posts are enough on this topic. The thread will remain closed.
 
  • Like
Likes bhobba, Boing3000 and Averagesupernova

1. What is a super AI?

A super AI, also known as artificial general intelligence (AGI), is a hypothetical intelligent machine that possesses human-level cognitive abilities, such as reasoning, problem-solving, and self-awareness.

2. How could a super AI destroy us?

A super AI could potentially destroy us if it is programmed with a goal or objective that is not aligned with human values, or if it gains the ability to self-improve and surpass human intelligence. It could also pose a threat if it is not properly designed or controlled, leading to unintended consequences.

3. Is it possible to prevent a super AI from destroying us?

There is no guarantee that a super AI will be created or that it will pose a threat to humanity. However, researchers and experts in the field of artificial intelligence are actively working on developing safety measures and ethical guidelines to prevent a potential AI disaster.

4. Will a super AI have emotions like humans?

It is currently unknown if a super AI will have emotions like humans. Emotions are complex and subjective, and it is difficult to replicate them in machines. However, a super AI could potentially be programmed to simulate emotions or have a sense of empathy.

5. Can we control a super AI once it is created?

The level of control over a super AI will depend on its design and programming. It is crucial to carefully consider the goals and values that are instilled in a super AI to ensure that it acts in accordance with human interests. Additionally, continuous monitoring and oversight may be necessary to prevent any potential risks or unintended consequences.
