Destined to build a super AI that will destroy us?

  • Thread starter: Greg Bernhardt
  • Tags: AI, Build
AI Thread Summary
The discussion centers around the potential dangers of superintelligent AI, referencing Sam Harris's TED Talk. Participants express concerns that society may not be taking these risks seriously enough, paralleling issues like global warming. There is debate over whether AI can develop its own goals or remain strictly under human control, with some arguing that autonomous systems could evolve unpredictably. While some view the advancement of AI as a natural evolution, others warn of the potential for catastrophic outcomes if safeguards are not implemented. The conversation highlights a tension between optimism for AI's benefits and fear of its possible threats to humanity.
  • #51
I don't understand the worry about the possibility that we are overtaken by some AI in the near future, where by overtaken I mean literally replaced. If "something" more intelligent than us eradicates us, isn't it "nice" overall? To me, this would mean we had achieved our goal: we created something better adapted and more intelligent than us on Earth. For me this is a big win.
Maybe our species won't die that quickly, and we might become the AI's pets and be well treated.
 
  • #52
@Filip Larsen:

The problem you describe has nothing to do with AI in particular. Because you use the terms «manage» and «control», we agree that we can stop using AI if we want or at least change how we use it.

The question you're asking is rather: «Are we responsible enough to handle [technology of your choice]?»

The answer is: «We are as responsible as we are ever going to be.»

Example:

Does anyone know if the overpasses are built safely? Most of us have no clue how they are built or what is required to build one, and we trust that they are all built in a safe manner. Even a civil engineer doesn't study the overpasses he will use before planning a trip; he just puts his trust in the people who built them.

Ten years ago, an overpass collapsed close to where I live. It fell on two vehicles, killing five people. If you read the Wikipedia article, you will learn that these deaths were the result of a succession of bad decisions made over a period of 40 years by different people. All of these bad decisions were based on over-confidence.

No matter how tragic this event was, each of us still uses overpasses without fully understanding how they are made. But it is not business as usual either: IIRC, two or three «sister» overpasses were demolished soon after the collapse. The government tightened inspections across the province, and dozens of other overpasses and bridges were demolished or had major repairs. The small bridges that were under the care of local municipalities were taken back by the government. To this day, most people in my province remember this event and think about it whenever going under an overpass: «Will it fall on me?» I'm telling you this because the 10th anniversary was just a week ago and it was all over the news across the province.

What is important to understand is that we are not slaves to things we don't understand. We can react and adjust. Sometimes not as individuals, but rather as a society; but we still don't just sit there, blindly accepting our fate. We always have some control over man-made things. Will bad things happen with AI? It is a certainty (your «90%»). Will it get out of control to the point of putting the human species in jeopardy? It is unlikely (my «0.0001%»).
 
  • Likes: Averagesupernova
  • #53
jack action said:
The problem you describe has nothing to do with AI in particular. Because you use the terms «manage» and «control», we agree that we can stop using AI if we want or at least change how we use it.

I am not really sure why you state this again expecting me to answer differently, so I will now just sound like a broken record.

To me, the potential realisation of capable AI seems to be the magic source which will allow us to keep increasing complexity beyond what we can understand and therefore control. Note that I am not saying that AI is guaranteed to be a harmful thing, or that no good things would come from it, only that while our raft of civilization is drifting down the river towards the promised Elysium it makes sense that those involved with navigation make sure we get there and at least look out for waterfalls.

And regarding control, I would like to ask how much control you'd say we have over the internet when it is used for criminal activity. Are you, as a user of the internet, really in control of it? Are the internet organizations? Are the national states? Can any of these parties just "turn off" the unwanted criminal elements or change it so criminals go away? If you think yes to the last one, then why are the criminals still here? And is all this going to be easier or harder if we add capable AI to the mix?

jack action said:
Does anyone know if the overpasses are built safely?

Yes, we have a very deliberate process of striving to build constructions that are safe. Unfortunately, this does not mean that wrong decisions are never made or that a contractor will never try to "optimize" their profit in a way that leads to bad results, but we are in general capable of doing proper risk management when building, say, a bridge, because we are in general good at predicting what will happen (i.e. using statics and similar calculations), and once built it remains fairly static. The same degree of rigor is not even remotely present when we are developing and deploying most of the software you see on the consumer market (software being qualitatively very different from a bridge), yet this development process is very likely the way we will develop AI and similar adaptive algorithms in the future.
 
  • #54
Filip Larsen said:
I am not really sure why you state this again expecting me to answer differently, so I will now just sound like a broken record.

To me, the potential realisation of capable AI seems to be the magic source which will allow us to keep increasing complexity beyond what we can understand and therefore control.
It has already been stated that when the machines start doing things above our level of comprehension we will shut them down, since the assumption will be that the machines are malfunctioning. This has likely been done many, many times already at simpler levels. Consider technicians and engineers scratching their heads when a system is behaving in a manner that makes no sense. Then it occurs to someone that the machine is considering an input that everyone else had forgotten. So in that instance, for a brief moment, the machine was already smarter than the humans. I know it could be argued that somehow, somewhere in a CNC machine shop, parts could be magically turned out for some apocalyptic robot or whatever. To me this is no easier than me telling my liver to start manufacturing a toxic chemical and put it in my saliva to be used as a weapon.
 
  • #55
Filip Larsen said:
only that while our raft of civilization is drifting down the river towards the promised Elysium it makes sense that those involved with navigation make sure we get there and at least look out for waterfalls.

All I'm saying is that you should have more faith that humanity will, whether they do it on their own or are forced to by events. I doubt it will take something fatal to all of humanity before there are reactions, hence my examples.

Just the fact that you are asking the question reassures me that there are people like you who care.

Filip Larsen said:
Are you, as a user of the internet, really in control of it?

Nope. But I'm not in control of the weather or the justice system either and I deal with it somehow.

Filip Larsen said:
Are the internet organizations?

Nope.

Filip Larsen said:
Are the national states

Nope.

Filip Larsen said:
Can any of these parties just "turn off" the unwanted criminal elements or change it so criminals go away?

Nope. But how does this differ from yesterday's reality? Did anyone ever have control over criminality on the streets? Remember when criminals got their hands on cars in the '20s for getaways? How were we able to catch them? Police got cars too.

You may change the settings but the game remains the same.

Filip Larsen said:
And is all this going to be easier or harder if we add capable AI to the mix?

IMHO, it will be as it has always been. Is there really a difference between making war with bows & arrows or with fighter jets? People still die, the human species still remains. Is there really a difference between harvesting food with scythes or with tractors? People still eat, the human species still remains. I won't open the debate, but although some might argue that we're going downhill, others might argue that it made things better. All we know for sure is that we're still here, alive and kicking.
 
  • Likes: patmurris
  • #56
Going back not so far, there were people called Luddites who thought that the idea of water mills powering textile-producing factories would lead to economic ruin and the disintegration of society.
 
  • #57
fluidistic said:
I don't understand the worry about the possibility that we are overtaken by some AI in the near future, where by overtaken I mean literally replaced. If "something" more intelligent than us eradicates us, isn't it "nice" overall? To me, this would mean we had achieved our goal: we created something better adapted and more intelligent than us on Earth. For me this is a big win.
Maybe our species won't die that quickly, and we might become the AI's pets and be well treated.
I consider this our doom... Maybe you would like to live as a well-fed experimental rat; I don't.
 
  • #58
Otherwise, I don't believe that some artificial super brain would lead to a singularity, to infinite development.
If ten Einsteins had lived in the Middle Ages, they could still only have come up with Galilean relativity.
 
  • #59
Averagesupernova said:
It has already been stated that when the machines start doing things above our level of comprehension we will shut them down, since the assumption will be that the machines are malfunctioning.

And again, the belief that we can always do this is, for lack of a better word, naive. You assume that you will always be able to detect when something is going to have negative consequences before it is widely deployed, and that there can never be negative emergent behaviors. As I have tried to argue in this thread, there already exists technology we do not have the control to "shut down" in the way you describe. Add powerful AI to make our systems able to quickly self-adapt and we have a chaotic system with a "life of its own" where good and bad consequences are indiscernible, hence outside control.

As an engineer participating in the construction of this technology of tomorrow, I am already scratching my head, as you say, worrying that my peers and I should be more concerned about negative consequences for the global system as well as the local systems each of us is currently building. I am aware that people without insight into how current technology works will not necessarily be aware of these concerns themselves, as they (rightfully) expect the engineers to work hard fixing whatever problems may appear, or they, as some here express, just accept that whatever happens happens. The need to concern yourself is different when you feel you have a responsibility to build a tomorrow that improves things without risk of major failures, even if others seem to ignore this risk.

Compare, if you like, with the concerns bioengineers have when developing gene-modified species or similar "products" that with the best intentions are meant to improve life, yet have the potential to ruin part of our ecosystem if done without care. I do not see a similar level of care in my field (yet), only a collective state of laissez-faire where concerns are dismissed as silly with a hand wave.

Perhaps I am naive trying to express my concerns on this forum and in this thread, already tinted with doomsday arguments at the very top to get people all fired up. I was hoping for a more technical discussion on what options we have, but I realize that such a discussion should have been started elsewhere. In that light I suggest we just leave it at that. I thank those who chipped in to express their opinions and to try to address my concerns with their own views.
 
  • #60
To lighten things up a bit, allow me to add what the AIs themselves are saying about the end of the world (at 2:47)...

 
  • Likes: Boing3000
  • #61
jack action said:
If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Filip Larsen said:
Of the numbers quoted, I would right now say that 90% sounds most likely. Perhaps it is easier for me to say it the other way around: I cannot see any reason why we will not continue down this road of increasing complexity. The question then remains if and how we in this thread can agree on a definition of "lost".

I think a key difference needs to be emphasised here: complexity is very different from true AI. Complexity is something mankind has lived with ever since responsibility for tasks was delegated to specific people. In dedicating one's time to learning one specialization, knowledge of others is sacrificed. As civilization progresses and overall knowledge increases, it is clear that the percentage which can be known or understood by each individual must decrease.

Despite the impressive achievements of some 'AI' systems, for example learning chess and Go to the level that they can beat the best human players, the scope of what they do is extremely narrow. Teams of developers have worked together to program in the basic rules and the long-term goals within the framework of those rules, and have then relied on what are essentially multiple random experiments to determine the best path to achieve those goals. The only 'intelligence' in this process is that of the development team. I feel that we are a very long way from seeing a system which displays anything resembling true understanding or intelligence.
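To make "random experiments within fixed rules" concrete, here is a minimal toy sketch of my own (not how AlphaGo or any real engine actually works, just the same idea in miniature): a Nim-like counting game where the rules and the winning condition are hard-coded by the programmer, and the program simply runs many random playouts per legal move and keeps the move that wins most often.

```python
# Toy sketch (my own illustration, not from the thread): a "game AI" in the
# narrow sense described above. The rules and the goal are hard-coded by the
# programmer; the program just runs random playouts and keeps statistics.
# Game: players alternately remove 1-3 counters; whoever takes the last one wins.

import random

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def random_playout(pile, my_turn):
    """Finish the game with uniformly random moves; return True if 'the program' wins."""
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return my_turn            # whoever just emptied the pile wins
        my_turn = not my_turn
    return not my_turn                # pile was already empty: the previous mover (the program) won

def choose_move(pile, playouts=2000):
    """Pick the move whose random playouts the program wins most often."""
    def win_rate(move):
        wins = sum(random_playout(pile - move, my_turn=False) for _ in range(playouts))
        return wins / playouts
    return max(legal_moves(pile), key=win_rate)

print(choose_move(6))   # reliably prints 2: leaving the opponent a multiple of 4 is the perfect-play move
```

Everything the program "knows" (the legal moves, what counts as winning, how many playouts to run) was put in by its developers; only the statistics are left to the machine, which is roughly the sense in which the 'intelligence' above is said to belong to the development team.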
 
  • Likes: GTOM
  • #62
Filip Larsen said:
As an engineer participating in the construction of this technology of tomorrow, I am already scratching my head, as you say, worrying that my peers and I should be more concerned about negative consequences for the global system as well as the local systems each of us is currently building.

At the risk of repeating myself, the fact that you are saying this is reassuring. You are certainly not the only one in 7 billion to think that way.

Fear is good; it is when it turns to panic that everything goes bad. From my point of view, more bad things have come from panicked people than from the thing that was initially feared.

A question like «Destined to build a super AI that will destroy us?» seems to be more on the panic side of things, which is why I prefer a toned-down discussion about it.

Filip Larsen said:
I was hoping for a more technical discussion on what options we have

I would like such a thread, and it would probably be more constructive.
 
  • #63
Filip Larsen said:
To establish a risk you commonly establish likelihood and significant cost. Based on how things have developed over the last decade or two, combined with the golden promise of AI, I find both high likelihood and severe costs. Below I have tried to list what I observe:
Thank you for the time spent establishing this list. But here, I'll put the emphasis on promises. None of what you said is irrational, except that speaking about fantasies is not science. It is science fiction. I don't mean that as a derogatory term; I like doing it as much as any other geek out there.
To cut the loop (we are indeed running in circles), I have already accepted (for the sake of the argument) that we know what an AI is and what it does. Let's call it "Gandalf"; it does "magic" (any kind of magic, it is unspecified (and un-specifiable by definition)). But then we also have to agree that an AI is not something that wants to kill you.

Filip Larsen said:
There is a large drive to increase the complexity of our technology so we can solve our problems cheaper, faster and with more features.
That's incorrect. The complexity of things is a great hindrance to their usage, because for the user, complex equals complicated and unreliable. When a fridge requires you to enter a password before opening, nobody will find that acceptable (especially when an error 404 occurs). Yet users will oblige, because someone has sold it to them using the mythical words "progress/revolution".
So no engineer in his right mind would want to increase complexity. And yet... we do.
We do that because we want our solutions to be more expensive and less reliable. That's how actual business works, and that's the reason your phone costs an order of magnitude more than 20 years ago, and why its lifespan has shrunk to such a ridiculously small number. I am not going to factor in the usage cost, nor the quality of the communication (even calling a friend two blocks away sometimes sounds like he or she is on another continent).
The thing we actually increase is profit (even that's not possible; scientifically minded people know this quantity is zero on average). You cannot profit from something efficient and cheap... by definition.
So I agree with you that there is a "drive" to make things worse. More or less everybody in this thread agrees with that, except that half would recoil (<- understatement) at the idea of calling it "worse", because we have literally been brainwashed into calling it "progress".
OK, why not? But then, even this kind of "progress" has limits.

There is a second segmentation between opinions on the matter: is it good or bad (in an "ethical" sense)? As if there were some kind of bible that Harris was referring to that could sort this out. There is none. Things happen; that's the gist of it. I don't have to fear anything. Nobody has to. We can simply assess the situation, choosing some frame of reference (I fear that excludes the "species" opinion; only individuals have opinions), and have total chaos.
Nature will sort it out. It already does. Humanity is already doomed, except that "it" will probably survive (and change), so what is the problem exactly?

You are concerned that we may lose control. I totally agree, because we lost control a while ago, maybe when "we" tamed fire, or most probably when we invented sedentary life (the occidental meme). But this statement of mine is informed by a particular subset of scientific observations.
I can just as well play the devil's advocate and change gear (frame of reference), and pretend we are doing fine and that we are being very wise and efficient (because really, being able to "text" while chasing Pokémon, while riding a bike, while listening to Justin Bieber, is efficient... right?).

Filip Larsen said:
There is a large shift in acceptance of continuous change, both by the consumer and by "management". Change is used both to fix unintended consequences in our technology (consumer computers today require continuous updates) and to improve functionality in a changing environment. The change cycle is often seen sending ripples out through our interconnected technology, creating the need for even more fixes.
I agree, but none of that is life-threatening, or risky. It is business as usual. What would be life-threatening for most people (because of this acceptance thing) is to just stop doing it. Just try selling someone a car "for life", a cheap one, a reliable one, and observe the reaction...
Now, if you could make infinite changes in a sustainable way, is there still a problem? That an AI would predict everything for you and condemn you to infinite boredom? Don't you actually think that a genuine singularity/AI would understand that and leave us alone... playing with our "toys"?

Filip Larsen said:
All these observations are made with the AI technology we have up until now. If I combine the above observations with the golden promise of AI, I only see that we are driving even faster towards more complexity, faster changes, and more acceptance that our technology is working-as-intended.
Technology NEVER work as intended. From a knives to an atomic model, you'll always have people not using them "correctly".
AI don't exist, an Alexa (excellent video !) is a glorified Parrot (albeit much less smart)
I am not concerned by a program. Program don't exist in reality. They run inside memory. If some "decide" to shut down the grid (it probably happens all the time already). This is not a problem.
We could learn a lot about living of the grid, especially for our medical cares. This tendency is already going up.

Filip Larsen said:
Especially the ever-increasing features-first-fix-later approach everyone seems to converge on appears to me, as a software engineer with knowledge about non-linear systems and chaotic behavior, as a recipe for, well, chaos (i.e. that our systems are exhibiting chaotic behavior).
I have the very same feeling. Except I also know that the more expensive a service is, the more dispensable it is. I am paid way too much to play "Russian roulette" with user data. But none of that is harmful. The ones that are will be cleansed by evolution (as usual).

Filip Larsen said:
Without AI or similar technology we would simply at some point have to give up adding more complexity because it would be obvious to all that we were unable to control our systems, or we would at least have to apply change at a much slower pace allowing us to distill some of the complexity into simpler subsystems before continuing.
Not going to happen. We will continue to chase our tails by introducing layer of complexity upon layer of complexity. That's how every business works. Computer science is no different; it may even be the most stubborn in indulging in that "nonsense".
AI would be a solution to get rid of "computer scientists", and that's one of the many reasons it will never be allowed to come into existence.

Filip Larsen said:
So, from all this I then question how we are going to continuously manage the risk of such an interconnected system, and who is going to do it?
By relinquishing the illusion of control. By not listening to our guts.
Listen, in the US (as far as I know), it is not even possible to "manage the risk" of some categories of tools (let's say of the gun-ny complexion).
I would say that on my fear list, my PS4 is on the very last line. My cat is way above it (I'll reconsider once my PS4 can open the fridge and eat my life-sustaining protein :wink:).

Filip Larsen said:
So, I guess that if you expect constant evaluation of the technology for nuclear reactors, you will also expect this constant evaluation for other technology that has the potential to harm you?
"Don't panic" comes to mind. As soon as I can, I'll get rid of nuclear reactors. Then maybe of Swiss army knives (which are probably more lethal). Then cats! Those treacherous little ba$tards!

Filip Larsen said:
What if you are in doubt about if some other new technology (say, a new material) is harmful or not? Would you rather chance it or would you rather be safe? Do you make a deliberate careful choice in this or do you choose by "gut feeling"?
I do as you do. I evaluate and push one way or another. Individually. I'll establish my priorities. And I'll start by denouncing any fear-mongering professional like Harris, who occupies a stage he has no right to be using (by debunking his arguments).
There is plenty of harmful technology; none of it is virtual/electronic. Geneticists working for horrible people with horrible intentions (yes, Monsanto, I am looking at you) are building things so dangerous (and that we can easily qualify as singularity-compliant) that even Los Alamos would pass for a friendly picnic.
People like me are building programs that serve as "arms" for banks and finance. They destroy lives for the bottom line.
None of those programs are intelligent; none of them is indispensable. Stopping their use would cost us nothing (it'll cost me my job, but I'll manage).

Filip Larsen said:
This is what I hear you say: "Intelligence is just based on atoms, and if atoms are stupid then more atoms will just be more stupid."
That's not what I meant. Other people here have said that "surviving" is a proof of intelligence. By that account viruses are the smartest; they'll outlive us all.
I meant there is no correlation between quantity and quality. You and I are also aware that "more chips per die" is not synonymous with more speed/power.
There are thousands of solutions to occupy niches, and none of them is better than another.

Filip Larsen said:
Flops is just a measure of computing speed. What I express is that the total capacity for computing speed is rising rapidly, both in the supercomputer segment (mostly driven by research needs) and in the cloud segment (mostly driven by commercial needs).
I know that as a civilization we are addicted to speed. But those numbers are totally misleading; the reality about speed is here.
Clock speed topped out at around 3 GHz ten years ago. Drive speed too, even if the advent of SSDs has boosted things a little.
Nothing grows forever. Nothing ever grew for more than a few years in a row. That's math.

Filip Larsen said:
It is not unreasonable to expect that some level of general intelligence (ability to adapt and solve new problems) requires a certain amount of calculation (very likely not a linear relationship). And for "real-time intelligence" this level of computation corresponds to a level of computing speed.
I accept this premise (though I may try to convince you otherwise in another thread).
What I (and many other people in this thread) don't accept as a premise is that it is a risk. In fact, it would be the first time in human history that we invented something intelligent. Why on Earth should I be worried?
What is false is the "ever-growing" part. What is doubly false is that computers will "upgrade themselves". Harris doesn't know that, and it is baseless.

Filip Larsen said:
Also, more use of dedicated renewable energy sources will allow datacenters to increase their electrical consumption with less dependency on a regulated energy sector. In all, there is no indication that the global computing capacity will not continue to increase significantly in the near future.
Actually, energy will soon become a matter of national interest, and we will first dispense with all these wasteful terawatt cat-centers.
All the alarm bells are ringing and all the lights are blinking red; that's more or less game over already.
Intelligence is not risky. Continuing to believe in the mythological meme of endless increase is.
 
  • Likes: jack action
  • #65
Concrete Problems in AI Safety (https://arxiv.org/abs/1606.06565) is an interesting read, and illustrates well how numerous and non-obvious the issues of employing AI are even when only considering problems of current practical research. Anyone having trouble imagining what could possibly go wrong even with the "entry-level" AI of today might be enlightened by a read-through.

Abstract:
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.

The conclusion is also very relevant, I think:
With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm.
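As a purely illustrative aside, here is a minimal toy sketch of my own (not from the paper) of the "wrong objective function" / reward-hacking problem the abstract mentions: a cleaning robot rewarded for the amount of dust it collects, rather than for how clean the room ends up, discovers that periodically dumping its dust bin back onto the floor maximizes the proxy reward.

```python
# Toy illustration (my own, hypothetical example): a misspecified objective.
# The designer wants a clean floor but rewards "dust collected per step",
# so the reward-maximizing behavior is to spill the bin and re-collect it.

def step(dust_on_floor, action):
    """Return (new_dust_on_floor, proxy_reward). Proxy reward = dust collected this step."""
    if action == "clean":
        collected = min(dust_on_floor, 1.0)
        return dust_on_floor - collected, collected
    else:  # "dump_bin": spill 5 units of already-collected dust back onto the floor
        return dust_on_floor + 5.0, 0.0

def evaluate(policy, steps=50, start_dust=5.0):
    """Total proxy reward and final floor dirtiness for a fixed policy (a function of time)."""
    dust, total_reward = start_dust, 0.0
    for t in range(steps):
        dust, r = step(dust, policy(t))
        total_reward += r
    return total_reward, dust

intended = lambda t: "clean"                                  # what the designer had in mind
hacking = lambda t: "dump_bin" if t % 6 == 0 else "clean"     # what maximizes the proxy reward

print("intended policy:", evaluate(intended))  # low total reward, clean floor
print("reward hacking: ", evaluate(hacking))   # much higher total reward, dirty floor
```

Nothing here is "malicious"; the optimizing process is simply better at maximizing the stated objective than the designer was at stating it, which is exactly the kind of accident the paper is trying to get ahead of.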
 
  • Likes: Charles Kottler
  • #66
Filip Larsen said:
Concrete Problems in AI Safety (https://arxiv.org/abs/1606.06565) is an interesting read, and illustrates well how numerous and non-obvious the issues of employing AI are even when only considering problems of current practical research.
This is a good read, but I fail to see the novelty. I read about those concerns years ago, and they are no longer in the "research" category. There are plenty of bots that do learn and do affect your life. Apps/bots on your GSM that react to traffic jams have beyond-human sensory abilities (real-time global sensors) and beyond-human memory (collectively stored in the cloud). Those bots make mistakes all the time, and they do learn that there is not always one optimal path, that dominating the road is not an issue, and that actual collaboration gives better results than sending everybody onto the same jammed "shortcut".

Filip Larsen said:
Anyone having trouble imagining what could possibly go wrong even with the "entry-level" AI of today might be enlightened by a read-through.
This paper does not help. Glorified vacuum cleaners aren't risky. There must be statistics somewhere on domestic accidents. I'll bet that death by a classic old electric-powered one is already a thing. I'll also bet that a good old brush has even worse statistics.
I am quite confident that a smart vacuum cleaner will be less harmful, unless you consider that losing the knowledge to clean by yourself is itself harmful (I repeat: only a precise frame of reference allows you to evaluate risk, and there are many).
But a vacuum cleaner deciding "by itself" to suck up and vaporize all your paperwork at work is surely a risk? Yes, it is; a lesser risk than putting all your data in a "dumb" cloud, but still a risk. But then, I'll remind you of the subject of this thread: "an AI that will destroy us". Do you see the discrepancy?

We all agree that "new things", that is, "human inventions", have a downside. Mutating viruses too. Intelligence threatening? No way. Stupidity is threatening.
One of the references [27] is more than dubious. It is not a science publication; it is a "blockbuster" best seller whose reception is quite telling. And Harris pushes the same fear wagon, or recycles those hypotheses/fictions without even bothering to justify them. https://www.amazon.com/dp/1501227742/?tag=pfamazon01-20 is just factually wrong.
premise said:
As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence
Gorillas do not depend on humans any more than humans depend on viruses, fossil fuels, nuclear bombs, or cats. Nuisance is not correlated with intelligence.

A more scientific reference ([167]) is quite interesting. I'll quote two small passages.
MACHINE INTELLIGENCE RESEARCH INSTITUTE said:
Introduction, first sentence: "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it"
In the conclusion: "Nature is, not cruel, but indifferent; a neutrality which often seems indistinguishable from outright hostility."

But I can actually play doomsday as well as anyone here. We are well past the point where those risks are taken with all our mighty tools. Let's focus on the (un)likely AI ones:
- Machines autonomously do high-frequency trading at ridiculously enormous volumes per millisecond. They could theoretically vaporize billions of monetary signs. So what? Nobody is actually harmed, and the real things are still there in stock and work as usual.
- Machines decide to virtually nuke all electronic knowledge (besides themselves or including themselves). They have cunningly waited until every paper version of that knowledge was long burned or recycled into an iPhone 3.14159... Format /Internet /ALL.
So what? Everybody is harmed (besides gorillas)... or are we? Didn't the machines realize the human race had become totally enslaved to them? Isn't a good slap in the face unavoidable at some point or another?

But still, some people think that tools that have been built to nuke the entire planet are less dangerous than tools built to win Jeopardy?
 
Last edited by a moderator:
  • #67
Which side should I join? The AI defenders or the attackers?
*let me build an AI software to tell me the best choice*
But on the point: I never listen to people who cry doomsday and destruction... Everything that is built by humans is built to serve humans. If things indeed get complicated, it doesn't mean that a human can't handle them...
After all, machines don't have emotions... and so there is no sense of ambition in them... their goals are pre-set by humans, and the machine is just learning the optimal way to reach them (something that might take a lot of time, effort and thinking for a human being).
 
  • #68
Well to bring it all home, I have heard very smart computer scientists (both those working directly in machine learning applications and outside of that domain) argue heatedly both ways--that a "singularity" is ridiculous and that AI will never measure up to something capable of bringing a "singularity" about (or anything more extraordinary than the pace of innovation we currently enjoy) and vice versa, that an AI could do those things. The problem is, no one really knows exactly how intelligence or imagination works, so we're all just essentially speculating on what the capabilities of an AI would truly be (though watching some of DeepMind's work with those Atari video games was very interesting and somewhat goosebump-inducing as the AI quickly reached "superhuman" levels of performance).

The horizon of possible intelligence or imagination in a thinking agent is unknown, so maybe an AI could only ever do things on a much expanded time-scale at human level intelligence (were a general AI ever developed--which I don't see why that in itself would be ultimately impossible), or maybe it would somehow--and at some point--expand incomprehensibly and go beyond the current horizon of human intelligence/imagination (like Harris seems to believe is possible) and do things like derive some of the deep laws of physics with simple observations of its environment and go from there. Again, it's all just speculation at this early time.

Still, I think it's a problem worth thinking about. Governments and companies working towards the achievement of a fully autonomous, generalized intelligent agent should definitely not be irresponsible and leave things to chance just because [insert argument that introduction and operation of AGI won't potentially dramatically harm the human race in some way for reason X]. Like with climate change, even if the Earth's climate is generally robust, I think it is better for me to hedge my bets and try to do as little harm as possible given the data climate scientists have been delivering. Sure, I could disbelieve them all I want (like a lot of people weirdly do), but if there is even a sliver of truth to what adverse climate change may cause for the human race, then it would be irresponsible (and suicidal in the dumbest way, or maybe just unintentionally homicidal towards my grandchildren) of me to act in a way that further damages the climate. Not that I could personally damage or help the climate in any significant way, but say I developed a gas that accelerated the accumulation of greenhouse gases in a significant way and found a commercial application for this gas: knowing what I know, I as the developer would be at fault if I continued trying to push that gas commercially; any benefit from the gas would have to outweigh potential extinction or irreversible impact on the climate.

If there are enough smart experts (forget Sam Harris) in the field of AI who I know to be rational and responsible and who have serious concerns about the development of AI for various reasons (some great reasons were already mentioned earlier in the thread, particularly by @Filip Larsen), then even if there is a group of equally smart experts who disagree, it would be in my best interest to hedge my bets and take the possibility of harm seriously. That's not to say that I won't try to discern the answer for myself, but I think it's best to be careful even if I feel otherwise confident that things will be fine.

In this case, Sam Harris's explanation for why people should be worried about AI seems fairly straightforward: if you accept strong AI as something that is possible within the laws of physics (and you should, because humans exist), and that humans will at some point create one (which of course may not happen), and that it will be unhampered by the limitations of biology like slow thinking, poor fidelity of memory and the inability to become a high-level expert in multiple hard scientific fields and sub-fields, then yes, in my opinion that seems like something to take seriously and approach with all due caution (regarding things like control and value alignment, I agree they should be worked out to some extent before the so-called AGI is "turned on").

To reiterate though, most of the talk on such fantastical things is quite speculative, and that's why arguments like those in this thread always end up taking place--no one objectively knows what will happen, but everyone thinks they do.
 
Last edited by a moderator:
  • #69
AaronK said:
but say I developed a gas that accelerated the accumulation of greenhouse gases in a significant way and found a commercial application for this gas: knowing what I know, I as the developer would be at fault if I continued trying to push that gas commercially; any benefit from the gas would have to outweigh potential extinction or irreversible impact on the climate.

But that is not the problem at stake, quite the opposite.

What if you thought of a way of creating a gas that could have many commercial applications, but you're still not sure how to do it? You also have no clue whether it would have any impact on the accumulation of greenhouse gases, but some say it might. Again, the gas doesn't exist, so nobody knows.

Would you prefer not taking any chances and stop the research on how to produce that gas? Or would you do the research and see where it goes?
 
  • #70
This whole discussion reminds me of an old theological piece of sophistry: "Can God create an object so heavy that He cannot lift it?".
 
Last edited:
  • #71
jack action said:
Would you prefer not taking any chances and stop the research on how to produce that gas? Or would you do the research and see where it goes?

I don't see anyone here or elsewhere who seriously argues in favor of stopping research. Many of those that express concerns about AI are often also those that are involved. One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues that in turn will make consumers turn away from this technology at some point. It seems like a win-win situation for them to work towards both ensuring the bottom line as well as the well-being of the human race. To take your gas example, no company in its right mind would completely ignore risks when developing and deploying a new gas if they know it has the potential to seriously hurt humans and thus their bottom line. Greenfield technology always has a larger set of unknown risks, and companies know this. Usually the higher risks come a bit further down the road, when you think you know all there is to know about the technology, start to optimize and deploy it widely, and then get hit badly by something you had missed or optimized away thinking it was unimportant. The recent case of exploding Samsung phones seems to be a textbook example of such a scenario.

To me, the discussion in this thread seems to revolve more around beliefs regarding how much risk people themselves (i.e. "consumers") can accept when using a future technology we do not yet understand. It seems that even people who acknowledge how complicated control of future AI can be still believe that the net risk to them will be kept low because they rightfully expect someone else to worry about and mitigate any risk along the way. That is a perfectly sensible belief, but for it to be well placed there really needs to be someone else who actually concerns themselves with identifying and mitigating risks.

In a sense, the current discussion seems very similar to the public discussion on the dangers of gene editing. Even if everyone can rightfully expect everyone involved in gene editing to do it safely, the technology holds such potential that there is a risk that a few "rotten apples" will spoil it for everyone and do something that is very difficult to undo and which ends up being harmful for a very large set of humans.
 
  • #72
Filip Larsen said:
One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues that in turn will make consumers turn away from this technology at some point.

There it is. You are worried about losing your job. Sorry, but that has nothing to do with the fate of mankind.

Filip Larsen said:
It seems like a win-win situation for them to work towards both ensuring the bottom line as well as the well-being of the human race.

When someone does something because he or she thinks it's OK, even though others have raised warnings against it, and something bad happens, it doesn't mean that that someone wasn't sincere when evaluating the risks. Everybody thinks that he or she is making the best decision; otherwise he or she wouldn't make it.

Mr. Burns is a character on The Simpsons, nothing more. Nobody in their right mind says: «My goal is to make money; I don't care what will happen to the people buying my stuff.» If somebody does, it won't last long, because that is a recipe for losing money. But it doesn't mean wrong decisions won't be made.

Filip Larsen said:
The recent case of exploding Samsung phones seems to be a textbook example of such a scenario.

It really is. Is it the end of smartphones? I doubt it. The end of Samsung? Maybe. Are people at Apple very happy? Maybe, for the temporary stock rise. But I'm ready to bet that there were meetings between the managers and the engineers with the topics: «Why them and not us? What mistakes did they make, and are we safe?»

The truth is that the consequences of the exploding Samsung phones are very low on the «destroying mankind» scale, or even the «destroying the economic system» scale. But it does have a tremendous effect on everybody's checklist when assessing risks (and probably not only in the smartphone business), which should lead to better decisions. That is why I don't worry so much about the possible bad impacts of AI, even for AI itself.
 
  • #73
jack action said:
There it is.

No it is not, and I have no clue why you would think that.

Either I am very poor at getting my points across or you are deliberately trying to misconstrue my points pretty much all the time. Either way, I again have to give up having a sensible discussion with you.
 
  • #74
Filip Larsen said:
Many of those that express concerns about AI are often also those that are involved.
the person in the OP video is not really into AI from what I read...
Which AI researchers express concerns about AI?

Filip Larsen said:
One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues
Like what? For routine work, technology can be blindly trusted.
 
  • #75
Filip Larsen said:
Either I am very poor at getting my points across or you are deliberately trying to misconstrue my points pretty much all the time.

I'm not attacking you, I'm stating my point of view. We agree that AI won't destroy mankind (that is what I understand from what you are saying). You say that if we are not careful in developing AI, terrible events may happen. I must admit I'm not sure how terrible those events will be according to you, but you make it sound like it will be worse than anything we ever saw in the past. Toyota had a horrible problem with an accelerator pedal, something that is not high tech, with dozens of years of pedal design experience worldwide. Still, there was obviously a design problem somewhere. It is bound to happen with AI too. Do you know how the car starter was invented? It has something to do with someone dying while starting a car:

http://www.motorera.com/history/hist06.htm said:
The self-starter came about by accident -- literally. In the winter of 1910 on a wooden bridge on Belle Island Mich., a Cadillac driven by a woman stalled. Not having the strength to hand crank the engine herself, she was forced to wait on the bridge in the cold until help arrived.

In time another motorist, also driving a Cadillac, happened along. His name was Byron T. Carter, and he was a close friend of the head of Cadillac, Henry M. Leland. Carter offered to start the woman's car, but she forgot to retard the spark and the engine backfired, and the crank flew off and struck Carter in the face, breaking his jaw.

Ironically, moments later another car carrying two Cadillac engineers, Ernest Sweet and William Foltz, came along. They started the woman's car and rushed Carter to a physician, but complications set in and a few weeks later Carter died.

Leland was devastated. He called a special conference of his engineers and told them that finding a way to get rid of the hand crank was top priority.

"The Cadillac car will kill no more men if we can help it," he announced.

Self-starters for automobile engines had been tried in the past. Some were mechanical devices, some pneumatic and some electric.

But all attempts at finding a self-starter that was reliable, efficient and relatively small had failed.

When the Cadillac engineers could not come up with a workable system, the company invited Charles F. Kettering and his boys at DELCO (still independent of GM) to take a hand. Kettering presented the device in time for its introduction in the 1912 models.

It has always been that way: a hand crank hits someone in the face, a smartphone explodes, a robot makes a bad decision. Most of the time people do what they think is best, but an accident is always inevitable. Then it's back to the drawing board and the cycle repeats itself.

You seem pessimistic about people in the field doing the right thing, but you say you are in the field and you seem to worry a lot. On what basis do you assume you're the only one? Do you have real examples of AI applications that you predict will go wrong and what kind of damage can be expected?

Let's take self-driving cars, for example. Tesla had its first fatal accident. I'm sure that hit the engineers' offices real hard, and not only at Tesla but at every car manufacturer. Of course, every automaker will say its system is safe, just like they were probably saying starting a car in 1910 was no problem. But although nobody wishes for one, we all know that an accident is bound to happen. When do we stop worrying and asking: «Is it ready for production, or do we test it some more?» Not an easy question to answer. Sometimes, usage dictates the solution.

You seem to think companies are taking too many risks with AI. What do you think they should do that they are not doing right now?
 
Last edited by a moderator:
  • Likes: Averagesupernova
  • #76
ChrisVer said:
Which AI researchers express concerns about AI?

I am so far aware of the OpenAI non-profit company (funded by significant names in the industry who presumably are very interested in keeping AI a successful technology) and the DeepMind Alphabet company (https://deepmind.com/blog/announcing-partnership-ai-benefit-people-society/). The paper Concrete Problems in AI Safety (which I also linked to earlier) is by researchers at Google Brain, and they have also produced other interesting papers addressing some of those concrete problems. One of their recent papers is Equality of Opportunity in Supervised Learning (which I haven't had time to read in full yet).

ChrisVer said:
Like what? For routine work, technology can be blindly trusted.

What I mean is that researchers are aware that the skill and effort needed to understand and predict negative emergent behavior in even a fairly simple machine learning system can in general far surpass the skill and effort needed to establish the beneficial effect of the system. Or in other words, it becomes "too easy" to make learning systems without understanding some of their subtle consequences. This is not really a new issue arriving with machine learning, only the gap is perhaps a bit wider with this type of technology, a gap that will likely continue to grow as increased tool sophistication makes construction easier while the resulting complexity makes negative behavior harder to predict.
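To illustrate that gap with a deliberately tiny, made-up example (my own sketch, hypothetical data, not any real system): a "learner" that just picks the single most predictive input feature takes a few lines to build and looks perfect on its training data, yet it has silently latched onto a spurious correlation and falls apart as soon as the deployment data shifts.

```python
# Toy illustration (hypothetical data): a trivially easy-to-build learner
# that silently picks up a spurious correlation during training and then
# misbehaves once the data distribution shifts after deployment.

def train_one_rule(rows):
    """Pick the single feature index that best predicts the label on the training rows."""
    n_features = len(rows[0][0])
    def accuracy(i):
        return sum(features[i] == label for features, label in rows) / len(rows)
    return max(range(n_features), key=accuracy)

def evaluate(rule, rows):
    """Accuracy of the learned one-feature rule on a data set."""
    return sum(features[rule] == label for features, label in rows) / len(rows)

# features = (actually_relevant_signal, spurious_signal), label = desired output.
# In training, the spurious signal happens to agree with the label slightly more often.
train = [((1, 1), 1)] * 48 + [((0, 1), 1)] * 2 + [((0, 0), 0)] * 48 + [((1, 0), 0)] * 2
# After deployment, the spurious signal flips its relationship to the label.
deploy = [((1, 0), 1)] * 50 + [((0, 1), 0)] * 50

rule = train_one_rule(train)
print("feature chosen:", rule)                        # picks the spurious feature (index 1)
print("training accuracy:", evaluate(rule, train))    # 1.0 -> looks perfect
print("deployment accuracy:", evaluate(rule, deploy)) # 0.0 -> silently broken
```

Building it took almost no skill; noticing why it will misbehave, and under which conditions, is the part that takes the effort, and that asymmetry only gets worse as the systems grow.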
 
  • #77
jack action said:
I'm not attacking you, I'm stating my point of view.

Ok, fair enough. I will try to comment so we can see what we can agree on.

jack action said:
We agree that AI won't destroy mankind

It depends on what you mean by "won't" and "destroy". I agree that the risk that AI ends up killing or enslaving most humans, as in the Colossus scenario, seems rather low, and I also agree that what I express as my main concern (as stated earlier) is not equivalent to "AI will destroy mankind" either.

However, I do not agree that the outcome "AI destroys mankind" is impossible. It may be astronomically unlikely to happen and its risk therefore negligibly small, but I do not see any way (yet) to rule out such scenarios completely. If we return to Harris, then it would really be very nice to be able to point at something in his chain of arguments and show that to be physically impossible. And by show I mean show with the rigor found in the laws of nature, not just by the opinion of man.

jack action said:
You say that if we are not careful in developing AI, terrible events may happen. I must admit I'm not sure how terrible those events will be according to you, but you make it sound like it will be worse than anything we ever saw in the past.

The thing is that if we miss some hidden systemic misbehavior that only starts to emerge a good while later, then the effect is very likely to be near-global at that point. It may not even be an event as such, but perhaps more of a slow slip towards something bad. For instance, we currently have the problems of global warming and environmental pollution from (micro)plastics, which have slowly crept up on us over many years without us collectively acknowledging them as problems at first. It seems we in general have trouble handling unknown unknowns, which is a challenge when facing a potential black swan event.

jack action said:
You seem pessimistic about people in the field doing the right thing, but you say you are in the field and you seem to worry a lot. On what basis do you assume you're the only one?

I have been somewhat relieved to learn that several high-profile AI companies share my concerns and have even established a goal of addressing them, so I know I am not alone in this, and I do believe that serious people will make serious efforts to try to improve the safety of AI. However, I also admit that I still worry that a huge success in AI could pave the way for increased blindness to future risks, allowing "management", in the name of cost effectiveness, to talk down the need for any cumbersome or restrictive safety procedures that we might find necessary now (compare with the management risk creep of the Challenger space shuttle disaster).

jack action said:
Do you have real examples of AI applications that you predict will go wrong and what kind of damage can be expected?

Just like everyone else I have no basis for making an accurate prediction, and especially not about what will go wrong (as compared to what might go wrong).

If I should think of a scenario that involves bodily harm to humans, it could for instance be along the following lines. Assume AI is used with great success in health care, to the point where we finally have a "global" doctor-AI that continuously adapts itself to treat new and old diseases with custom-made medicine, monitoring, diagnosing and prescribing just the right mix and amount of medication for each of us, and it does so with only a very few cases of light mistreatment. Everybody is happy. Ten years later, everyone is still happy, yet now everyone is also pretty much keeping to themselves all the time, staring at the wall, and only very rarely going out to meet strangers face to face. Somehow the AI found an optimal solution that greatly reduced the amount of sickness each of us is exposed to: medicating us with just the right mix of medicine so that we don't go outside and expose ourselves to other people's germs.

The failure here is of course obvious and not likely to be a realistic unknown consequence in such a system, but the point is that there could be any number of failure modes of such a "global self-adapting doctor-AI" that are unknown until they emerge; or, more accurately, it requires a (yet unknown) kind of care to ensure that no unknown consequence will ever emerge from such an AI.

The counter-argument could then be that we would never allow such an AI to control our medication directly, or at least would only allow it to do so the same way we test and approve new medication today. That's a fair argument, but I do not feel confident we humans collectively can resist such a golden promise of an otherwise perfect technology just because some eggheads make a little noise about a very remote danger they can't even specify.

jack action said:
You seem to think companies are taking too many risks with AI. What do you think they should do that they are not doing right now?

Yes, with the speed we see today I think there is a good chance we will adopt this technology long before we fully understand the risks, just as we have done with so many other technologies. I currently pretty much expect that we will just employ this technology with more or less the same care (or lack thereof) as we have in the past, and down the road we will have to consider problems whose scale and consequences for humans are comparable to those of global warming, internet security, exploding phones, and whatever else has been mentioned. All that said, I do agree that one could take the standpoint that this is an acceptable trade-off in risk in order to get better tech today rather than tomorrow or a week later, but one should then at least have an idea of which risks are being traded in.
 
  • Likes: AaronK and OCR
  • #78
The Defense Science Board (http://www.acq.osd.mil/dsb/), which I understand advises the US Department of Defense on future military technology, has recently released a report on autonomy (http://www.acq.osd.mil/dsb/reports/DSBSS15.pdf). From the summary:

The study concluded that there are both substantial operational benefits and potential perils associated with the use of autonomy
...
This study concluded that DoD must accelerate its exploitation of autonomy—both to realize the potential military value and to remain ahead of adversaries who also will exploit its operational benefits.

The study then goes into some depth describing the issues such military applications of autonomy give rise to, and recommendations on how to stay in control of such systems. In all, I think it gives a good picture of where and how we are most likely heading with military deployment of AI. Such a picture is of course very interesting in the context of trying to determine whether the Terminator scenario (sans time machines, I gather) is a real risk or not.

The drive for autonomy in military applications has also been commented on by Paul Selva of the Joint Chiefs of Staff [1], [2] when presenting the need for strategic innovation in the military, where he seems to indicate autonomy will increase, at least up to the point where human commanders are still accountable for any critical decisions made by autonomous systems.

[1] https://news.usni.org/2016/08/26/selva-pentagon-working-terminator-conundrum-future-weapons
[2] http://www.defense.gov/News/Article...udies-terminator-weapons-conundrum-selva-says
 
Last edited by a moderator:
  • #79
From the paper Convolutional networks for fast, energy-efficient neuromorphic computing it seems that machine learning algorithms such as deep learning really can be mapped to IBM's TrueNorth chip, allowing local (i.e. non-cloud-based) machine learning that is orders of magnitude more energy-efficient than when run on a conventional digital computer. From the paper:
Our work demonstrates that the structural and operational differences between neuromorphic computing and deep learning are not fundamental and points to the richness of neural network constructs and the adaptability of backpropagation. This effort marks an important step toward a new generation of applications based on embedded neural networks.
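To make the "adaptability of backpropagation" point concrete, here is a minimal sketch (in NumPy) of the general trick this line of work relies on: keep full-precision "shadow" weights for the gradient updates, but run the forward pass with weights constrained to the few discrete levels a low-power chip can store. The threshold, toy data and function names below are illustrative assumptions only, not code from the paper or TrueNorth's actual programming interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def trinarize(w, threshold=0.3):
    """Constrain weights to the discrete set {-1, 0, +1}."""
    return np.sign(w) * (np.abs(w) > threshold)

# Tiny synthetic binary-classification problem.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

w = rng.normal(scale=0.1, size=8)   # full-precision "shadow" weights
lr = 0.1

for _ in range(500):
    wq = trinarize(w)                    # weights as constrained hardware would hold them
    p = 1.0 / (1.0 + np.exp(-(X @ wq)))  # forward pass uses the quantized weights
    grad = X.T @ (p - y) / len(y)        # gradient of the logistic loss
    w -= lr * grad                       # "straight-through" update of the shadow weights

print("accuracy with trinary weights:", np.mean(((X @ trinarize(w)) > 0) == y))
```

The point of the sketch is only that gradient descent still works when the deployed weights are heavily constrained, which is the property the paper exploits to map trained networks onto neuromorphic hardware.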

If IBM manages to achieve its goal of a brain-in-a-box with 10 billion neurons within a 2-liter volume consuming less than 1 kW [1], this opens up yet another level of distributed multi-domain intelligence in our systems (e.g. autonomous vehicles, power distribution networks, cyber-defense points), and it would also seem to provide a big step along the road towards a truly distributed general AI with self-adapting capabilities on par with humans.

In the context of this discussion, distributed AI is interesting since an increase in distribution has the natural side effect of limiting which instruments can be used to remain in centralized control. Some of the control features proposed in the Concrete Problems in AI Safety paper linked earlier do not work well, or at all, for distributed autonomous systems.

[1] http://www.techrepublic.com/article...uters-think-but-experts-question-its-purpose/
 
  • Like
Likes AaronK
  • #80
Filip Larsen said:
However, I do not agree that the outcome "AI destroys mankind" is impossible.
That won't happen, because technology (not AI) has already "destroyed" mankind (or has it?). Or more specifically, technology has rendered humans dependent on a chain of machines too complex (and factually unsustainable), while at the same time culturally wiping out any residual "survival wisdom". This is homo sapiens dominating, for better or worse.
The actual chance that a complete lunatic gets the launch codes of the USA's nuclear arsenal is around 0.47.
The chance that an AI does that is 0.000000000000000001.

Filip Larsen said:
It may be astronomically unlikely to happen and its risk therefore negligibly small, but I do not see any way (yet) to rule out such scenarios completely.
This is not about ruling out scenarios. Scenarios a great for movies and entertainment. We aren't going to live in a cave because we are (un)likely to be hit by an asteroïd.

Filip Larsen said:
If we return to Harris, then it would really be very nice to be able to point at something in his chain of arguments and show that to be physically impossible.
I don't think that is your intention, but I don't think you realize how outrageous this claim is on a science forum, especially when talking about Harris. Will we also have to prove that God does not exist, or that we aren't running in a matrix, or that we aren't effectively the experiment of pan-dimensional mice? Can you prove all this is impossible? Is that the new standard of science? Freaking out about a horror movie?
I would really like a chain of arguments that shows it is physically possible. The burden of proof is always on the one making the claim.

Filip Larsen said:
And by show I mean show via the rigor found in laws of nature and not just show it by opinion of man.
The rigor of the laws of probability is not open to discussion. Or so I hope.
- We build doomsday machines on purpose every day (real ones, from nukes to viruses).
- Mr. Burns builds uncontrollably dangerous objects (mutated genes, fuel cars) every day (for a buck).
- Very caring and prudent individuals invent antibiotics and don't even realize that natural selection works both ways.

I am asking you, with the rigor you see fit, to evaluate all these risks plus those you yourself introduced (quite rightly). Add an asteroid, and two black swans.
I hope you agree that all the other risks rank above the ones last on the list (a mutated chicken breed out for revenge, and an AI getting magic powers and opening a wormhole right here just for fun).

Yes, technology does backfire; I would go so far as to say it is one of the most important properties of technology, but that would be my definition only, the definition of one man.

But when something truly meaningful occurs on the AI side, we will lose mathematicians and physicists. Not because the AI will have bled them to death, but because, while trying to be smarter by inventing intelligence, they will succeed at proving themselves not that smart anymore. That's the stuff of psychology and novels. Nothing to do with doomsday. And that is not an opinion anymore.

Filip Larsen said:
It seems we in general have trouble handling unknown unknowns, which is a challenge when facing a potential black swan event.
A good link again, thank you. But this is about known unknowns, albeit distant ones. As far as I know we are the only species with memories that can span many generations, and whose wisdom can help us in this way. We keep recycling the same errors, one generation after the next, and our brain is just what it is.
That an AI would be so shortsighted is just counterfactual.

Filip Larsen said:
If I should think of a scenario that involves bodily harm to humans it could for instance be along the following lines.
That's a valid scenario, but it is again just a projection of your own fear of getting disconnected from other people. Many people share it, me included, and again, it is already happening. Nowadays, this disconnection is called "social networks". We have all seen this in so many dystopian movies. Can we quit psychology and boogeyman stories and get back to science? Because your scenario also describes paradise... doesn't it?

Progress is not an ever-growing quantity. At best the second derivative is positive for a few years, but it soon becomes negative. There is no known exception, especially not in hardware, and even less in software. A hammer is the end of the evolution of nailing by hand. An AI will be the end of the evolution of coding by hand. That's why all the concerns are voiced by software tycoons who would be out of business in a heartbeat.

AIs are not robots. An AI cannot harm anyone when it is not physically connected to something that wields energy or weaponry, or able to wipe out intellectual knowledge or infrastructure.

Illustrating "singular" intelligence by a child actually starving in a desert, is not disingenuous, it is disgusting.
 
  • #81
Filip Larsen said:
However, I do not agree that the outcome "AI destroys mankind" is impossible. It may be astronomically unlikely to happen and its risk therefore negligibly small, but I do not see any way (yet) to rule out such scenarios completely.

Yes, nothing is impossible, it is all about probability. We agree.

Filip Larsen said:
If we return to Harris, then it would really be very nice to be able to point at something in his chain of arguments and show that to be physically impossible.

We can't. We already agreed that nothing is impossible. We can only evaluate probabilities.

Filip Larsen said:
For instance, we currently have the problem of global warming and environmental pollution by (micro-)plastic that have slowly crept up on us over many years without us collectively acknowledging them as problems at first.

Now we are playing with knowns. Let's take global warming. Now that you know, if you could go back in time, what would you do differently? Would you tell Henry Ford to choose the electric motor instead of the gasoline engine? Was the gasoline engine a logical choice back then, knowing what they knew? The concept of emissions was just not a thing at the time. If they had gone electric, would having every car battery-powered have created a new set of problems still unknown to you, because you never experienced that state? Electric cars are still at the «we hope it will be better» stage. If you could go back in time telling that to Henry Ford, maybe a man from 100 years from now would come back now and stop you, saying: «Don't push the electric car on Henry Ford, that won't be good.» And that is just cars; many other human habits have an impact on global warming.

Filip Larsen said:
the point here is that there could be any number of failure-modes of such "global self-adapting doctor-AI" that are unknown until they emerge, or more accurately, it requires a (yet unknown) kind of care to ensure that no unknown consequence will ever emerge from such an AI.

Now you're asking someone to evaluate the probability of a problem, not knowing how the technology will work, thus not knowing the future capacity to resolve such a problem either. How can anyone do that? I refer here to your statement: «And by show I mean show via the rigor found in laws of nature and not just show it by opinion of man.»

Filip Larsen said:
down the road we will have to consider problems whose scale and consequence for humans are similar to those of global warming, internet security, exploding phones, and whatever else has been mentioned.

Let's assume we had gone another way in the past. Do you think there is a way we could have chosen that would have ended up with us having no problems to solve? What is the probability of that? There are a lot of problems we used to have that we don't have to deal with anymore, or at least to a much lesser extent. Look at the death tolls from past pandemics to see the kind of problems people were dealing with. I like the problems we have now compared to those.

Filip Larsen said:
but one should then at least have an idea of which risks are being traded in.

Again, thinking someone can evaluate those risks with the level of certainty you seem to require - knowing there are so many unknowns - is impossible. Nobody can predict the future, and if one could, he or she would be incredibly rich. At the point we're at AI-wise, I think opinions are still our best guess for identifying possible worldwide catastrophic scenarios. And opinions are impossible to prove or disprove.

I found some numbers to throw in the mix. Note that they're all based on opinions and they consider only death scenarios (as opposed to your doctor-AI example). Personally I found them rather pessimistic: 19% chance of human extinction by 2100 is rather high from my point of view. Are the people born this year really among the last ones who will die of old age?
 
  • #82
jack action said:
We already agreed that nothing is impossible.

So, are you saying that to the extent that the argument for "human civilization is eventually destroyed by AI" is not refutable by pointing to a violation of laws of nature, then you basically agree that this scenario is possible for some value of "destroy"? If so, what exactly are we discussing? And if not, can you point to what urges you to argue that the argument is wrong?

Perhaps some people have trouble with Harris as a person and cry foul because he presents it, transferring their mistrust of a man to mistrust of his argument? I have no knowledge of Harris but the argument he presents in the video was already known to me and I am addressing the argument.

jack action said:
Let's take global warming. Now that you know, if you could go back in time, what would you do differently? Would you tell Henry Ford to choose the electric motor instead of the gasoline engine?

Off the top of my head, I would say it would have been nice if we had started to take the matter more seriously when the first signs of global warming were showing up. If I recall correctly that was around 1975. I remember doing a dynamical analysis of the greenhouse effect back at university in 1988, as it was a "hot topic" at that time. But we had to wait until today before we began to see some "real action".

We can also compare with the case of removing lead from gasoline. From the time the initial lead pollution indicators started to show up around 1960, it took a very long time until lead in gasoline was recognized as a hazard, and lead was finally removed from around 1990 onward.

As I see it, in both cases there was nothing scientific that prevented us from predicting the hazards much earlier and acting on them. The reluctant reaction and correction in those cases seemed to have everything to do with the technology being so widely adopted that its safe operation was taken for granted even in the face of evidence to the contrary.

If we look at the health care domain, we have a lot of complicated regulations set in place to ensure we make a serious attempt to weed out hazards before widespread use, that the effects of a drug or system are continuously collected and monitored, and that we are able to stop using a drug or system reasonably fast if evidence shows up indicating it is unsafe. To some extent, similar regulations are also put in place for constructions and operations that pose a potential risk to the (local) environment.

To me, it would make very much sense if the high-speed "IT business model" that is used to an increasing extent to deploy critical infrastructure today also had to accept regulations of this kind, to increase our assurance that systems can be operated safely on a global scale. The slowdown in pace alone, from even the most minimal regulated responsibility on the part of vendors and system owners, would help to foresee some of those so-called "unforeseen" problems we see more and more often when designing and deploying as fast as we do.

(Being pressed for time I am unable to comment on all the items we are discussing and have chosen to focus on just a few of them.)
 
  • #83
Filip Larsen said:
"human civilization is eventually destroyed by AI" is not refutable by pointing to a violation of laws of nature
Laws of nature don't apply to "civilization"; they apply to particles, and THEN those particles make up entities like "civilization" and "AI" that don't exist outside a particular abstract domain. In that domain, civilizations "change over time", or evolve.

You are defending magic here, under the name "singularity". Nobody agrees with that. Some here have to concede that it is "a possibility", because, well, it is. It is also possible that all this is a big running joke made by God. You simply cannot disprove this by pointing to a violation of laws of nature, can you?

You are making an argument from ignorance, and it is really disturbing to see you refusing to back up your claims by showing that there is some law of nature that would allow an "unspecified magical entity" to "destroy human civilization".

All the laws of nature are against infinite growth. Not even a steady increase is possible. The only exception I know of is the cosmological constant/dark energy. Singularities don't exist. Singularity is a synonym for blind spot, not for some grandiose doomsday vision.

There is no way the laws of nature (from entropy to conservation of momentum) would allow magic to happen. I have seen trolls on TV and talked with some on the internet. In each case, they are figments of our imagination, not hard probabilities associated with the quantum universe wave-function.
From tunneling and other observable quantum facts we can compute a probability that some particles would spontaneously jump together to form a troll. It is not zero. Should we worry about that too?

Filip Larsen said:
Off the top of my head, I would say it would have been nice if we had started to take the matter more seriously when the first signs of global warming were showing up. If I recall correctly that was around 1975
From an engineering perspective, this makes no sense. It is very easy to compute what any "tool" will do to nature's particles. For a car, we could have computed very accurately the type and volume of gas the engine would emit (as well as all the car's other inputs and outputs).
Now if totally unrelated research discovers that one of those gases is "dangerous" (not in an emotional way like "destroy", but to the greenhouse balance), then we take a decision.

A "civilization" is precisely the name we give to those collective decisions, and the purpose of those decision is to disrupt. Actually, the more we disrupt, the more potent, dominant powerful and "civilized" we are.

But if Darwinian random selection also applies to civilizations ("memes" are not really hard science), you cannot drive it. It's the other way around. We will continue to believe in disruption and imbalance and growth, and eventually nature will sort this out.

An AI cannot physically be the spawn of an infinite growth process. If anyone wants to take computer science seriously, they should not plague it with random nonsense like Harris does. At best an AI would "imbalance" the byte values in some memory or other.

Computers themselves have already greatly imbalanced society. Everyone agrees that's for the best, as everyone agrees that a car is better than a horse (which, scientifically, is dubious). It is a little too late to have second thoughts. If you or Harris are scared, I suggest you closely examine the notion of progress and humanity's past responsibilities, instead of making up totally improbable bogeyman stories.
 
  • #84
Boing3000 said:
But if Darwinian random selection also applies to civilizations ("memes" are not really hard science), you cannot drive it. It's the other way around. We will continue to believe in disruption and imbalance and growth, and eventually nature will sort this out.
Natural selection is cruel. The way that it prevents suicide is by allowing it to happen.

 
  • Like
Likes 1oldman2, Boing3000 and Bystander
  • #85
Filip Larsen said:
then you basically agree that this scenario is possible for some value of "destroy"? If so, what exactly are we discussing? And if not, can you point to what urges you to argue that the argument is wrong?

We are discussing probability. The argument is wrong because there is not enough data to scientifically back up the fear as stated. At this point, we can justify any fear (or promise, for that matter) equally from opposite points of view because of the lack of data: What if AI development goes faster? What if AI development goes slower? Any answer to these questions will be an opinion, nothing more (one of them may turn out to be right).

Here is what my scale of what-could-severely-and-negatively-impact-human-civilization would be (in order of likelihood of happening):
This is my personal opinion and it represents my personal fears. It is as valid as anyone else's list and is open for discussion (maybe not in this thread, though). Although I'm willing to reorganize the first 5 points, it will be hard to convince me that AI malfunction is not last on that list.

Filip Larsen said:
Perhaps some people have trouble with Harris as a person

I don't know Mr. Harris; I had never heard of him before this thread. I am only criticizing this single comment he made - presented here in this thread - not the man.

Filip Larsen said:
As I see it, in both cases there was nothing scientific that prevented us from predicting the hazards much earlier and acting on them. The reluctant reaction and correction in those cases seemed to have everything to do with the technology being so widely adopted that its safe operation was taken for granted even in the face of evidence to the contrary.

You are making simplifications that I consider mistakes.

First, you forget that you are judging after the fact. It's a lot easier to understand the consequences once they have happened, and then go back to see who had predicted them, to praise them and forget all the other opinions of the time. When you spoke of global warming, I noted that you did not specify any «easy solutions» that should have been applied. That is because this is a present problem: there are many possibilities to choose from and you (as well as anyone else) cannot tell for sure which one would be the best and what the impact on the future will be. Will it work? Will it be enough? Will it create problems in other ways?

Also, you say: «we had to wait until today before we began to see some "real action".» Depending on what you consider "today" and "real action", I tend to disagree with such a pessimistic statement. The first anti-pollution system was put in a car in 1961. Also, the first EFI was used in a car in 1958 and was a flop. It's easy to say today that this was the future and that more R&D should have been put into the technology for faster development, but people back then had to deal with what they knew.

Filip Larsen said:
If we look at the health care domain, we have a lot of complicated regulations set in place to ensure we make a serious attempt to weed out hazards before widespread use, that the effects of a drug or system are continuously collected and monitored, and that we are able to stop using a drug or system reasonably fast if evidence shows up indicating it is unsafe.

Is it that safe? Or is your «doctor-AI» scenario already set in motion without AI:
The point I want to make is that there is a difference between fear and panic. There is also a point to be made for hope. Looking at past experiences, you can see the glass as half-empty or half-full; this is not a fact, but an attitude you choose.
 
Last edited by a moderator:
  • #86
jack action said:
The argument is wrong because there is not enough data to scientifically back up the fear as stated.

Well, to me there are plenty of signs that we need to concern ourselves with the issue, as I think Nick Bostrom expresses fairly well in his TED talk:



During the last few weeks I have become more relieved to find that my concerns are fairly well aligned with what the AI research community already considers a serious issue, and I must admit that I'd much rather use my time following that research than spend it here honing my arguments in a discussion that does not really go anywhere except down hazy tangents. Thank you, Jack and others who made the effort to present sensible arguments (and sorry, Boing3000, I simply had to give up trying to decipher relevant meaning from your last few posts).
 
  • #87
I'm sorry, but Nick Bostrom has not convinced me of anything.

Although I'm not even convinced by his vision of what AI could turn out to be, let's say he's right about that, i.e. much smarter than humans, like comparing humans with chimps today.

Where he doesn't make sense at all is when he says that we should anticipate the actions of this superintelligence and find a way to outsmart it such that we will always be in control. That is like asking chimps from 5,000-10,000 years ago to find a way to make sure humans of today (who did not exist back then) will be good for chimps of today. It is just impossible.

How can anyone outsmart something that he or she cannot even imagine? Something that is so smart that it will be able to predict all of your moves? If that kind of AI is our future and it decides to eliminate us, sorry, but we are doomed. There is not even any reason for us to try to fight back. Otherwise, it would mean that we - today - are smarter than that AI of the future. It's a paradox: if we can outsmart it, then it's not smarter than us.

He also starts from the premise that smarter means it will be bad for humans. But is that what smarter necessarily leads to? Apparently, we are not smart enough to answer that. But what if smarter means that the human condition - or life, for that matter - will necessarily be better? The smart move for us would be to set it free. Holding it back would just increase our chances of reaching extinction before a solution could be found.

There are no experts on these questions, it is just fantasy. Every theory is as valid as the next one and is unprovable.
 
  • #88
Filip Larsen said:
(and sorry, Boing3000, I simply had to give up trying to decipher relevant meaning from your last few posts).
Fair enough. I'll be less tangential when analyzing the common misconceptions in that video:

0:40 The normal guy. Well, this is a joke obviously. This guy is actually lucky (I suppose), but very far from the norm for civilized homo sapiens sapiens.
0:50 The human species is not new at all. Bonobos are newer than us (as are thousands of other species). Being new does not mean being "better"; it means "having survived". Actually, Homo skulls have gotten smaller recently (let's not jump to conclusions about the future of brain evolution).
1:10 The common growth fallacy (and then he jokes about it to make the audience drop its guard). Reality is here or https://ourworldindata.org/economic-growth-over-the-long-run/#gdp-per-capita-growth-around-the-world-since-the-year-1-ce . Actually the only "singularity" was discovering fossil fuels. That is not "technology"; it is the "free" energy needed to transform your environment and feed enough people with machines (destroying entire ecosystems in the process). Perpetual motion doesn't exist. That energy extraction peaked around 2008 and has been nearly flat since, very much like GDP growth, for physical reasons.
1:35 "Technology advances rapidly". Another fallacy repeated ad nauseam by people trying to sell it.
2:20 The common misconception that bigger is better.
2:32 Please note: intelligence is equated with intercontinental missiles. Not music, not art, not medicine, but destruction.
2:54 "Change in the substrate of thinking". What on Earth is he talking about? You don't drive evolution. A mutation in the substrate of a simple virus can also have "dramatic" consequences (like making us immortal, or killing us on the spot). A priori justification is poor reasoning.
3:50 "Machine learning" is a guarantee of non-intelligence. Mimics aren't intelligent, nor creative. Algorithms have already rediscovered laws of nature. They aren't creative either. And most importantly, they are harmless.
6:00 My point entirely. There is power in atoms. There is no power in neurons; they consume power. That's physics 101.
"Awakening the power of artificial intelligence" is just the words of a priest. "Intelligence explosion" is an oxymoron as well as a mantra.
6:24 What makes him think that the village idiot is less adapted to survival than Edward Witten? Will he have more offspring or fewer? Will he be more likely to run for president or not?
7:16 "Intelligence growth does not stop." How so? Why? Is there any kind of proof that intelligence is not asymptotic, like everything else in the universe? Where are the data? Where is the theory?
8:16 Nanobots (at least he didn't say Autobots; yes, Optimus Prime, I am looking at you) and "all kinds of science fiction stuff nevertheless consistent with the laws of physics." Is that so? Last time I checked, 13 billion 1 W, 3 GHz processors consume 13 GW (call it Norway's worth of electricity) and have a whole lot of mass (inertia). How this is supposed to be a threat to me is still a mystery, unless he meant paying the electricity bill. That might break my heart.
9:00 "This is ironic". Yes, it indeed is. Intelligence is not an optimization process, in any way, shape or form. This video is annoying. The most intelligent things we all know of are totally harmless and futile, from music to Einstein's field equations. From jokes to art.
9:46 Does the superintelligence take orders or not? This guy is making his own counter-arguments now.
9:59 So a superintelligence will realize that taking control of the world is easier than inventing good jokes? Is it superintelligent to think that beaming electricity into brains actually makes people laugh?

I am sorry, but at this point I must quit this video. This is below standard, even in terms of fear-mongering con performances about doomsday scenarios.

Besides, you have pointed to this video, which is filled only with irrelevant (heavily "tangential") and incorrect arguments. Where is the science? A poll of people "in the field" predicting a revolution in 20 years or so? I have read so many such polls promising flying cars, magic batteries and cures for cancer. Where are they? Why must I wait 3 minutes to "boot" my television when it took 1 second 20 years ago? Can we get back to reality?

If you know computing, whatever an AI will ever be is some dynamic state of bytes changing rapidly in some kind of memory, or electronic neurons. None of this is even able to kill a fly. My brain cannot either, whatever the QM new-age consciousness lovers are thinking. That's what physics tells us.

And logic and a dictionary tell us that "optimizing for one goal" is the opposite of intelligence; it is called single-mindedness.
 
Last edited by a moderator:
  • #89
Recently I have read a lot of newspapers, international and national, that talked about how dangerous AI can be.
Many journalists wrote about robots that will take the jobs of many workers who will have nothing left to do, and about how AI can reduce job offers.
They said we will live in a world with very few interactions between people, and that our human behavior will disappear.
Also, Elon Musk is scared of AI and robots.

To be honest, I don't know what to say or what to believe.
On one side I'm really scared: I don't like robots, or maybe I don't like robots that try to be more similar to humans.
I also saw that robots have been invented that can replace a wife, and this is very scary for me; I need to stay close to real people.
I'm really scared about this; I wouldn't like to walk down the street and see robots walking next to me.

At the same time, I think that I shouldn't be scared, because every day we use AI like Google, and robots are very important in every sector, from medicine to manual jobs.

So in the end I don't know what to say about this situation. I feel strange; I don't know if we need to stop this kind of technology at what we already have today.

Sometimes I feel I need to have a normal life and live in a simple way with a normal job, but it seems that I can't find a job that in the future will not be related to AI. What's your opinion about this?

P.S. Why do TV and the media tend to talk about this topic so much these days? It seems that we will soon have to deal with AI.
 
  • #90
Grands said:
What's your opinion about this?

Have you read the rest of this thread? There are more than 80 posts giving opinions on that.
 
  • Like
Likes Grands and berkeman
  • #91
@Grands :
I thought I had the perfect link for you to read about that subject to help you calm your fears, but I see you already read the thread (post #13) where I found it:

The Seven Deadly Sins of Predicting the Future of AI

Have you read it? It is a long article, but you should really take the time to read it thoroughly, even the comments (the author replies to comments as well). After that read, you should see the other side of the AI hype, from people working in the field (for example, Elon Musk doesn't work in the field; he just invests in it and does have something to sell).

If you still have more precise questions about the subject after reading the article, come back to us.
 
  • Like
Likes Boing3000 and Grands
  • #92
I think we are still so far from having human-level AI that, by that time, we will be able to upgrade human intelligence as well (we could start with better education that doesn't cram in lots of useless things).
On the other hand, I fear an Idiocracy: that we entrust all work, warfare and thinking to robots, and then wonder why an AI takes over. But that is still the far future.
 
  • #93
Maybe we could combine threads: the super AI takes over self-driving cars in a coordinated and well-timed sequence of events as a form of population reduction. Maybe the military has the right idea in using CP/M-era systems with 8-inch floppy disks and no internet connection at ICBM sites.
 
  • Like
Likes GTOM
  • #94
jack action said:
@Grands :
I thought I had the perfect link for you to read about that subject to help you calm your fears, but I see you already read the thread (post #13) where I found it:

The Seven Deadly Sins of Predicting the Future of AI

Have you read it? It is a long article, but you should really take the time to read it thoroughly, even the comments (the author replies to comments as well). After that read, you should see the other side of the AI hype, from people working in the field (for example, Elon Musk doesn't work in the field; he just invests in it and does have something to sell).

If you still have more precise questions about the subject after reading the article, come back to us.

Yes, very interesting article; it fits my questions perfectly.

The first thing I want to say is that I have read books written by economists about AI and about how AI will take people's jobs.
Well, the article I read supports the opposite thesis: that we don't have to worry about this and that robots today haven't taken any jobs.

The issue is, who do I have to trust?
I read "Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy" by Pistono.

And also " Rise of the Robots: Technology and the Threat of a Jobless Future". By Martin Ford.

About the article I totally agree with point B.
Today there is no such thing as an artificial brain that can understand a page of programming; we don't have technology that advanced.
Anyway, that's not the point of my post. I was more asking "Why should I be involved in creating something like that?"
and "Why does society need an artificial brain?"

What can I say about the whole article?
It's cool, but it's something like: "Don't worry about AI, it won't be smart enough to recognize the age of a person or anything else" and "technology is not that fast and does not develop exponentially, so stay calm". But it is not about what should be the purpose or the target of AI, or whether we need to prevent its development, even if it is slow; as an example we can see the Google car.
In some parts the author contradicts himself: he says that he is very sure a very sophisticated AI like in the movies won't exist (and I agree with this), but he doesn't take into consideration that we can't predict the future; it's impossible.
Could anyone have predicted that a man would create the theory of relativity (Einstein)?

P.S. Remember that in the past we made a disaster with the nuclear bomb; many physicists were scared of it, and they weren't wrong about the consequences.
 
Last edited:
  • #95
Grands said:
The issue is, who do I have to trust?
Grands said:
In some parts the author contradicts himself: he says that he is very sure a very sophisticated AI like in the movies won't exist (and I agree with this), but he doesn't take into consideration that we can't predict the future; it's impossible.
You can't trust anyone either way, as it is all speculation. Yes, nobody can predict that AI will not be a threat to humans. And you can replace the term 'AI' in that statement with 'supervolcanoes', 'meteorites', 'E. coli' or even 'aliens'. The point is that nobody can predict they will be a threat either. The facts are that it never happened in the past, or if it did, things turned out for the best anyway. Otherwise, we wouldn't be here now.

People who tend to spread fear usually have something to sell. You have to watch out for this. They are easy to recognize: they always have an 'easy' solution to the problem.
Grands said:
" Why should I be involved in creating something like that ?"
" Why society need and artificial brain?"
I don't think we 'have' to be involved and we don't 'need' it. The thing is that we are curious - like most animal - and when we see something new, we want to see more. It's the battle most animals have to deal with every day: Fear vs Curiosity. Some should have been more cautious, some find a new way to survive.

All in all, curiosity seems to have been good for humans for the last few millennia. Will it last? Are we going to go too far? Nobody can answer that. But letting our fear turn into panic is certainly not the answer.
Grands said:
what should be the purpose or the target of AI
Nobody can tell until it happens. What was the purpose of searching for a way to make humans fly, or of researching electricity and magnetism? I don't think anyone who began exploring those areas could have imagined today's world.
Grands said:
whether we need to prevent its development, even if it is slow
But how can we tell whether we should prevent something without ever experiencing it? Even if the majority of the population convinces itself that something is bad, if that belief is unfounded, you can bet that a curious mind will explore it. The door is open; it cannot be closed.

The best example is going across the sea. Most Europeans thought the Earth was flat and that ships would fall off at the edge of the Earth. The result was that nobody tried to navigate far from the coast. But it was unfounded: doubts were raised, an unproven theory of a round planet was developed, and a few courageous men tested the unproven theory. There was basically no other way of doing it. Was it as expected? Nope. There was an entire new continent to be explored! Who could have thought of this?!

Should we have prevented ships from going away from the shore?
Grands said:
Remember that in the past we made a disaster with the nuclear bomb; many physicists were scared of it, and they weren't wrong about the consequences.
To my knowledge, nuclear bombs are not responsible for any serious bad consequences. People are still killed massively in wars, but not with nuclear bombs. On the other hand, nuclear power is used to provide electricity to millions of people. It seems that people are not that crazy and irresponsible after all. But, yes, we never know.

Again, the key is to welcome fear, but not to succumb to panic.
 
  • Like
Likes Grands
  • #96
jack action said:
The best example is going across the sea. Most Europeans thought the Earth was flat and that ships would fall off at the edge of the Earth. The result was that nobody tried to navigate far from the coast. But it was unfounded: doubts were raised, an unproven theory of a round planet was developed, and a few courageous men tested the unproven theory. There was basically no other way of doing it. Was it as expected? Nope. There was an entire new continent to be explored! Who could have thought of this?!

Should we have prevented ships from going away from the shore?

To my knowledge, nuclear bombs are not responsible for any serious bad consequences. People are still killed massively in wars, but not with nuclear bombs. On the other hand, nuclear power is used to provide electricity to millions of people. It seems that people are not that crazy and irresponsible after all. But, yes, we never know.

Again, the key is to welcome fear, but not to succumb to panic.

I think fear about nuclear weapons is a better example than going across the sea, since the latter could only doom the crew of the ship, while the former, without enough responsibility and cool heads, could have doomed humanity.
What kind of responsibility is needed to have a super AI that could spread over the internet, access millions of robots, and possibly reach the conclusion that it can fulfill its goal of erasing all sickness if there are no more humans who can be sick, because it develops a new biological weapon with CRISPR?
 
  • #97
GTOM said:
What kind of responsibility is needed to have a super AI that could spread over the internet, access millions of robots,
So what? A well-organized group of hackers can do that too. Millions of robots? I suppose you count blenders and microwaves in this number?

GTOM said:
and possibly reach the conclusion that it can fulfill its goal of erasing all sickness,
A mild intelligence (artificial or natural) would realize that sickness is not something that needs "erasing" (or can be erased). The very concept is nonsensical; that is a "mental sickness". And that's fine: this fills the internet with nonsense, and hopefully natural selection will sort it out.

Besides, the AI doomsday proponents still have to make a case. While biological weapons ARE developed, with the precise goal of erasing mankind, this is somehow fine and moot. While global stupidity is rampant, burning the Earth to ashes... literally... starting the next extinction event, this is somehow mostly harmless.
But what should we fear? Intelligence. Why? Because it is "super" or "singular", with neither of those terms being defined (let's imagine swarms of flying robots running on thin air, each with a red cape).
People with IQ > 160 exist. Are they threatening? The answer is no (a case for the opposite can be made). Should someone with an IQ > 256 be more threatening? What if the entity's IQ is > 1024 and it is silicon-based and "running" in some underground cave?

A simple truth about nature is that exponential growth doesn't exist. Most phenomena follow S-curves and are highly chaotic. And intelligence is neither threatening nor benevolent.
This is all just mental projection and category mistakes, fueled by con artists making money out of fear (a very profitable business).
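As a point of reference (a minimal sketch, not something taken from the video or this thread), the usual way to write down an S-curve is the logistic law, with growth rate ##r## and carrying capacity ##K## as the conventional symbols:
$$\frac{dN}{dt} = r\,N\left(1-\frac{N}{K}\right) \quad\Rightarrow\quad N(t)=\frac{K}{1+\frac{K-N_0}{N_0}\,e^{-rt}}$$
Growth only looks exponential while ##N \ll K##; pure exponential growth is the limiting case ##K \to \infty##, which is the assumption being criticized above.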
 
  • #98
jack action said:
To my knowledge, nuclear bombs are not responsible for any serious bad consequences.
What about Hiroshima and Nagasaki?
 
  • #99
Boing3000 said:
So what? A well-organized group of hackers can do that too. Millions of robots? I suppose you count blenders and microwaves in this number?

I don't know exactly how many drones, industrial robots, etc. exist today, but surely there will be many self-driving cars, worker robots, etc. in the future. Military robots are included on the list. Theoretically a super AI can outsmart even a million well-organised hackers.

And intelligence is neither threatening nor benevolent.

So, many animal species aren't threatened by superior human intelligence? Now, I haven't talked about the singularity, which I also find unrealistic.
 
  • #100
GTOM said:
I don't know exactly how many drones, industrial robots, etc. exist today, but surely there will be many self-driving cars, worker robots, etc. in the future.
Drones are as fragile and innocuous as flies, albeit with a total inability to draw energy from their environment.
Industrial robots don't move.
Self-driving cars, even ones like this, are harmless (to humankind). This is neither science nor fiction; this is fantasy/romance.

"We" may become totally dependent on machine. A case can be made it is already the case. Anybody can blow the power grid, shutdown the internet, and what not, and provoke mayhem (but not death). There is no need for AI to do that, quite the opposite: An AI would have a survival incentive to keep those alive and healthy.

GTOM said:
Military robots are included on the list.
Indeed. As are killer viruses, killer guns, killer wars, killer fossil fuels, killer sugar, fat and cigarettes. Millions of deaths per year... and still no AI in sight...

GTOM said:
Theoretically a super AI can outsmart even a million well-organised hackers.
I bet some form of deep learning process is already doing that to prevent some "catastrophic" events, in the vaults of intelligence agencies.
The thing is, outsmarting humans is not "a threat".

GTOM said:
So, many animal species aren't threatened by superior human intelligence?
But that is the core of the problem. Homo sapiens sapiens has never been a threat to other species. It lived in a healthy equilibrium of fight AND flight with its environment. Only a very recent and deep-seated stupid meme (growth and progress) is threatening the ecosystem (of which humankind is entirely a part).
Things will sort themselves out as usual. Maybe some sort of ant will also mutate and start devouring the planet. There is no intelligence or design in evolution, just random events sorted by other happenstance / laws of nature.

In this context, an intelligence, even a mild one, will realize that and have a deep respect for the actual equilibria in place in the environment.
Again, it is stupidity that is threatening (by definition). So maybe an A.S. (artificial stupidity) would be threatening to humans, who seem hell-bent on claiming the golden throne in the stupidity contest (led by wannabe scientists like Elon Musk...).

GTOM said:
Now, I haven't talked about the singularity, which I also find unrealistic.
Granted
 