Destined to build a super AI that will destroy us?

  • Thread starter: Greg Bernhardt
  • Tags: Ai, Build

Summary
The discussion centers around the potential dangers of superintelligent AI, referencing Sam Harris's TED Talk. Participants express concerns that society may not be taking these risks seriously enough, paralleling issues like global warming. There is debate over whether AI can develop its own goals or remain strictly under human control, with some arguing that autonomous systems could evolve unpredictably. While some view the advancement of AI as a natural evolution, others warn of the potential for catastrophic outcomes if safeguards are not implemented. The conversation highlights a tension between optimism for AI's benefits and fear of its possible threats to humanity.
  • #31
Boing3000 said:
His blind faith into some mythical (and obviously unspecified) "progress", is also mind boggling.

I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?

Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes potentially multiple disruptive social and technological changes, which of course is much more difficult than proving that the same something just might or might not happen. What I am looking for is a kind of argument that fairly clearly shows that there is no route from today leading to "destruction of human civilization" without breaking one or more physical laws along the way and without a relatively small group of humans (say 1% of humans) being able to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
 
  • #32
Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
You know, that's the first time I've gotten a coherent and rational response to a genuine statement. You kind of caught me off guard, because generally what I get is downright hysterical denial and revisionism (which is, as you'll have guessed, very hard to argue with :wink:)

So my answer is very straightforward. Harris (and, by proxy/echo, you) is making wild, irrational, unsubstantiated statements. The burden of proof is on him.
"Getting lost in complexity" is not a thing. Please be so kind as to define it. Here, I'll try to help:

In the '70s, getting TV required one phone call and about 10 bucks per month. Now you have to call 4 different providers, configure your internet connection, and figure out 42 possible incompatibility "problems" to get your "box" kind of working (that is: zapping (ironically, I just discovered it is NOT an English word; in French we use this word to describe channel-hopping) takes you 1 second, while in the '70s it took you 1/10th of a second). All this for at least 50 bucks per month (I am talking inflation-neutral numbers here), not accounting for power consumption (the increase in inefficiency is always hidden under the hood).

Did I just argue that complexity/progress is overwhelming us? Not quite, because none of that is anywhere close to fundamentals like "life threatening". Quite the opposite. Once you opt out of that modern, inefficient BS (<- sorry), you'll discover the world does not end. Just try it for yourself and witness the "after-life". Reagan did not "end the world". This is a fact. Will Justin Bieber? ...unlikely...

You are talking about ad hominem, and it is very strange, because there is none. Harris's business model is to make you believe that something is going to kill you. Fear mongering is his thing. He is proud of it, and many people do the very same and have perfectly successful lives. That is another fact. He did climb onto the TED talk stage, or did I make that up?

To answer your point again (from a slightly different perspective), you have to say what risk there is, what kind of harm will happen to whom, and how. I mean not by quoting the scientific work of a Hollywood producer, but actual scientific publications.

Filip Larsen said:
Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes potentially multiple disruptive social and technological changes, which of course is much more difficult than proving that the same something just might or might not happen.
That is very, very true. Proving a negative is something that no scientifically minded person (which by no means ... means intelligent person) will ever do.
I don't have to prove that God does not exist, nor that AI exists, nor even that AI will obviously want to kill every Homo sapiens sapiens.
All of these are fantasy. Hard and real fantasy. God is written into so many books' atoms and processed by so many humans' neurons that it must exist... right?
Your pick: you believe in those fantasies, or you believe that fantasies exist.

AI does not exist. Intelligence exists. The definition is here. Neither Harris nor anyone else is going to redefine intelligence as the "ability to process information". That is meaningless and just deserves a laugh.

Filip Larsen said:
What I am looking for is a kind of argument that fairly clearly shows that there is no route from today leading to "destruction of human civilization"
I suppose you are talking about that.
You'll be hard pressed to find any reference to AI in those articles, because (as stated previously) AI does not exist, nor will it (not even talking about "wanting to kill Harris/humanity"). Those are fantasies. If this is a serious science forum, only published peer-reviewed articles are of any interest, and Sam Harris has very few (let's say 3, by a quick Google search).

Filip Larsen said:
without breaking one or more physical laws along the way and without a relatively small group of humans (say 1% of humans) being able to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
Superintelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal, serious AI (that is, humankind) has trouble knowing even what intelligence means and where it comes from.

Shouldn't the conversation end there? (Not that it's not funny, but... really?)
 
  • #33
Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?

I'll return the question:

If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Then it will be easier to understand how much importance you give to your arguments.

Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
 
  • #34
jack action said:
Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.
 
  • Like
Likes jack action
  • #35
Boing3000 said:
To answer your point again (from a slightly different perspective), you have to say what risk there is, what kind of harm will happen to whom, and how.

1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their lives to "magic" technology. As long as these risks are not ruled out as "high cost" risks I do not really have to enumerate more to illustrate that there are "high cost" risks. But please feel free to show how you would mitigate or reduce each of them, because I am not able to find ways to eliminate those risks.
2) Requiring everyone else to prove that your new technology is dangerous instead of requiring you to prove it is safe is no longer a valid strategy for a company. Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design will be dangerous.

Boing3000 said:
Superintelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal, serious AI (that is, humankind) has trouble knowing even what intelligence means and where it comes from.

Energy consumption so far seems to set a limit on how localized an AI with human-level intelligence can be, due to current estimates of how many PFlops it would take on conventional computers. You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.

Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50 W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.

Secondly, there is plenty of room to scale out. Currently our computing infrastructure is growing at an incredible speed, making processing of ever-increasing data sets cheaper and faster and putting access to EFlops and beyond on the near horizon.

If you combine these two observations, there is no indication that energy or computing power will impose a hard limit on intelligence.
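
To make the "you can simply calculate" part concrete, here is a rough back-of-envelope sketch. Every number in it is an assumption picked purely for illustration (an assumed ~1 EFlops for real-time human-level intelligence, an assumed 10 TFlops / 500 W conventional node, and the ~50 W brain figure from above); only the arithmetic is the point:

Code (Python):
# Back-of-envelope sketch; all inputs are illustrative assumptions, not data.
brain_flops = 1e18    # assumed compute for real-time human-level intelligence (Flops)
node_flops  = 1e13    # assumed throughput of one conventional node (10 TFlops)
node_watts  = 500.0   # assumed power draw of that node (W)
brain_watts = 50.0    # the rough brain power figure mentioned above (W)

nodes_needed   = brain_flops / node_flops    # how many nodes it would take
power_needed   = nodes_needed * node_watts   # total power on conventional hardware
efficiency_gap = power_needed / brain_watts  # how "soft" the energy limit looks

print(f"nodes needed:          {nodes_needed:,.0f}")
print(f"power (conventional):  {power_needed / 1e6:,.1f} MW")
print(f"gap vs ~50 W brain:    {efficiency_gap:,.0f}x")

With those made-up inputs you get on the order of 100,000 nodes and tens of megawatts, i.e. roughly a factor of a million away from the biological reference point, which is the sense in which the limit looks "soft" rather than fundamental.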

Boing3000 said:
Shouldn't the conversation end there?

If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those who still care about this issue.
 
  • Like
Likes Boing3000
  • #36
Greg Bernhardt said:
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.

If mankind, augmented by his machines, becomes unrecognizable, are we destroyed or enhanced? The Borg collective comes to mind.

Mark Twain might have considered today's connected youth as being assimilated by the Internet.

If an intelligence greater than mankind's decides that humans should be killed, isn't that the best decision by definition?

Define civilization. Define destroyed. Define us and them. Define super AI.

Without agreements in advance about definitions such as these, any debate is silly.
 
  • Like
Likes Boing3000 and Greg Bernhardt
  • #37
anorlunda said:
If an intelligence greater than mankind's decides that humans should be killed,

I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Are we, as humans, saying: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?

Not only do we not say that, we are so SMART that we KNOW that we NEED them in order to exist, even if they are not as intelligent as we are (intelligence counts for very little in the survival equation).

Now, why would an even smarter machine or life form think otherwise?

And if somebody tells me that humans are the scum of the Earth who don't deserve to live, that is a very human thing to say. No other (dumber) life form thinks that about itself. Following that logic, machines or aliens that are smarter than us would probably blame themselves even more, which would lead to self-destruction?!?
 
  • Like
Likes Boing3000
  • #38
jack action said:
I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Not so crazy. Considering the finite non-renewable resources on this planet, it could be argued that it would be intelligent to decide to cap human global population at 7 million rather than 7 billion. Once decided, it would also be intelligent to act on that decision immediately because each hour of delay further depletes the resources remaining for the surviving 7 million.

jack action said:
Are we, as humans, saying: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?

Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
 
  • #39
anorlunda said:
Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
I can't speak for Jack Action, but I would say the motivation to rid ourselves of smallpox and disease-carrying mosquitoes is to improve human life. Apparently something has been seriously missed in the search for extraterrestrial intelligence if humans are causing problems for alien life.
 
  • #40
Filip Larsen said:
1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their lives to "magic" technology.
But you still haven't provided us with any clues as to why that is a risk. As far as normal people are concerned (those not having an intimate relationship with Maxwell's equations or quantum field theory, that is, 99.99999% of humanity, including me), a simple telephone is "magic". A cell phone even more so; there is not even a cable!

If (that is a big "if", not supported by science in any way whatsoever) a super AI pops into existence, then as far as we are concerned we can call it Gandalf, because it does "magic". What is the risk? Please explain. What is good, what is not. Who dies, who does not.

Filip Larsen said:
As long as these risks are not ruled out as "high cost" risks I do not really have to enumerate more to illustrate that there are "high cost" risks.
But there is no risk. I mean not because AI doesn't exist, nor because progress is not an exponential quantity. The reason there is no risk is that you have NOT explained any plausible risk.
"Politics" is nowadays where we "surrender" most of our decision making. Is it good, is it bad? What "risk" is there? What do we gain, what do we lose?
All of this has been explored in so many different ways in so many fantasy books (Asimov comes to mind). None of it is science. That does not mean it is not interesting. The more "intelligent" of those novels are not black and white.

Filip Larsen said:
But please feel free to show how you would mitigate or reduce each of them, because I am not able to find ways to eliminate those risks.
I am not sure what "risk mitigation" means. But as a computer "scientist", I know that computers aren't there to harm us; most often it is the other way around (we give them bugs and viruses, and force them to do stupid things, like playing chess or displaying cats in high definition).

Filip Larsen said:
2) Requiring everyone else to prove that your new technology is dangerous instead of requiring you to prove it is safe is no longer a valid strategy for a company.
I cannot even begin to follow you. Am I forced to buy some of your insurance and build an underground bunker because someone on the internet is claiming that doom is coming? I don't mean real doom, like climate change, but some AI gone berserk? Are you kidding me?

Filip Larsen said:
Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design will be dangerous.
That's a non sequitur. A fusion reactor may kill me, we know very precisely how, with some kind of real probability attached.
I then mitigate it with some other benefit I get from it. That's what I call intelligence: balance, and constant evaluation.

Filip Larsen said:
Energy consumption so far seems to set a limit on how localized an AI with human-level intelligence can be, due to current estimates of how many PFlops it would take on conventional computers.
First, Flops are not intelligence. If stupid programs run on a computer, then more Flops will lead to more stupidity.
Secondly, neither Flops nor computer design is an ever-increasing quantity. We are still recycling '70s tech, because it is still just about move, store and add, sorry.

Filip Larsen said:
You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.
Indeed. But again, those limits are not soft at all, as far as Planck is concerned. And again, quantity and quality are not the same thing.

Filip Larsen said:
Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50 W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.
But that's MY point! A very good article. Actually, geneticists are much closer to building a super brain than IBM is. So what? What are the risks, and where is the exponential "singularity"? Are you saying that such a brain will want to get bigger and bigger until it has absorbed every atom on Earth, and then the universe?
I am sorry, but I would like to know what scientific basis this prediction rests on. The only things that do that (by accident, because any program CAN go berserk) are called cancers. They kill their host. We are not hosting computers. Computers are hosting programs.

Filip Larsen said:
Secondly, there is plenty of room to scale out. Currently our computing infrastructure is growing at an incredible speed, making processing of ever-increasing data sets cheaper and faster and putting access to EFlops and beyond on the near horizon.
That's just false. Power consumption of data centers is already an issue. And intelligence-wise, those data centers have an IQ below that of a snail.
You can also add up 3 billion average "analog" people like me; it would still not bring us anywhere close to Einstein's intelligence.

Filip Larsen said:
If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those who still care about this issue.
Oh, but I agree. The problem with arguments is that I would like them to be rooted in science, not in fantasy (not that you do that, but Sam Harris does, and this thread is a perfect place to debunk them).

We seem to agree that computing power (which is not correlated with intelligence at all) is limited by physics.
That is a start. No singularity any time soon.
 
  • #41
  • Like
Likes anorlunda
  • #42
To me the danger does not lie so much in the possibility of one super intelligent computer taking over the world, which I think highly unlikely, but rather in a creeping delegation of decision making to unaccountable programs. Whether these programs are considered intelligent or not is immaterial - we already have very widespread use of algorithms controlling, for example, share and currency trading. Yesterday the sharp drop in the value of the British pound was at least partly blamed on these. Most large companies rely on software systems of such complexity that no individual understands every aspect of what they do, and these systems automatically control prices, stocking levels and staffing requirements. In a manner of speaking these systems are already semi-autonomous. They currently require highly skilled staff to set up and maintain them, but as the systems evolve it is becoming easier to use 'off the shelf' solutions which can be up and running with little intervention.

While a full takeover might seem implausible, economics will continue to drive this process forward. A factory with automated machines is more cost efficient than a manual one. Call centres are becoming increasingly automated with routine queries handled by voice recognition systems. It seems likely that (in at least some places) taxi drivers will be replaced by autonomous vehicles.

As these systems become more resilient and interconnected it is not inconceivable that an entire company could be run by an algorithm, relying on humans to perform some tasks, but with the key decisions driven by the 'system'. If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
 
  • Like
Likes Filip Larsen
  • #43
Charles Kottler said:
If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
What makes you think decisions are made in the best interest of society right now with actual people in charge?
 
  • Like
Likes Bystander
  • #44
Averagesupernova said:
What makes you think decisions are made in the best interest of society right now with actual people in charge?

Fair point.
 
  • #45
jack action said:
I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Are we, as humans, saying: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't die.

It's plausible that you or I have killed the last remaining member of a species; we wouldn't give it a single thought.
I think it's easy to imagine how AI could treat humans with the same complete indifference. I have no hesitation wiping out an entire colony of social, highly organised creatures (ants) for a largely insignificant improvement in my environment.
 
  • #46
billy_joule said:
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates.
Source?
 
  • #47
Greg Bernhardt said:
Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?

Seriously?

To me this talk does qualify as FUD and fear mongering. It accumulates so many clichés that it is embarrassing to see it on TED.

When did Sam Harris become a noted expert on AI and the future of society? AFAIU he is a philosopher and neuroscientist, so what is his expertise on that matter? I'm no expert myself but have worked for thirty years in software engineering and kept an eye on AI research and applications over the years... and it does not seem to me he knows what he is talking about.

He seems to think that once we are able to build computers with as many transistors as the number of neurons in the human brain, AGI will just happen spontaneously overnight! ...And then we lose control, have a nuclear war and end up with starving children everywhere! Comparing the advent of AI with aliens coming to Earth one day is laughable at best. Making fun of the actual experts is questionable to say the least... Using scary and morbid cartoon-style visuals is almost a parody.

A lot of speculation, very little demonstration, misinformation, oversimplification, fear-inducing images, disaster, digital apocalypse, aliens, god... and the final, so 'new age', namaste for sympathy. Seriously?

He is asking disturbing questions nonetheless, and I agree we should keep worst-case scenarios in mind along the way. However, although caution and concern are valuable attitudes, fear is irrational and certainly not a good frame of mind for making sound assessments and taking inspired decisions.

TED is supposed to be about "Ideas worth spreading". I value dissenting opinions when they are well informed, reasonably sound and honest. This talk is not.

The future of AI is a very speculative and emotionally charged subject. To start with, I'm not sure there is a clear definition of what AI or AGI is. What it will look like. How it will happen. How we will know we have created such a thing... Even if technical progress keeps pace with Moore's law, that's just the hardware part, and we still don't really know what the software will look like... Maybe AI will stall at some point despite our theoretical capability and hard work? It's all speculation.

Whatever happens, it won't happen all at once. It will likely take a few decades at least, and I disagree with Harris about the time argument. Fifty years is a lot of time, especially nowadays. A lot will happen and we will have a better understanding of the questions we are asking now. There is no way (and there never has been) to solve today all the problems we may face tomorrow or half a century from now.
 
Last edited:
  • Like
Likes Boing3000
  • #48
billy_joule said:
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't die.
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And a species with exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.

Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI.

I wanted to go back to this question as I might have one relevant example to further feed the discussion: Humans and cats.

Humans have a tremendous effect on the cat population. We feed them, we spay & neuter them, we declaw them and we kill them. Theoretically, it's not for our own survival; we don't need to do any of that for our benefit. Generally speaking, we can say that we care for them and that humans are beneficial to the cat population's survival, even if there are some individuals who kill and/or torture them for research or even just pleasure. For sure, the cat species is not at risk at all.

What if there were an AI that did this to humans? Would that be bad? One argument against it would be loss of freedom; cats live in «golden cages». Life can be considered good in some respects, but they cannot do as they wish. But that is not entirely true either. First, there are stray cats that can be considered «free». Lots of drawbacks with that lifestyle as well, and sometimes it is not a chosen one. Sure, they have to flee from animal control services, but in the wild you are always running from something.

But the most interesting point I wanted to make about intelligence and using things we don't understand is that cats - just like humans - have curiosity and willpower that can lead to amazing things. Like these cats interacting with «complex» mechanisms that were not designed for them and that, most importantly, they could never understand or even build:




Not all cats can do these things. It shows how individuals may keep a certain degree of freedom, even in «golden cages». It also shows how difficult it is to control life because the adaptability feature is just amazing.

Keep in mind that cats did not create humans; they just have to live with an already existing life form that was «imposed» on them and that happens to be smarter than they are (or is it?).

How can mankind gradually give the decision process over to machines without ever noticing it going against their survival? How can someone imagine that every single human being will be «lobotomized» to the point that no one will have the willpower to stray from the norm? That seems to go against what defines human beings.
 
  • Like
Likes Boing3000
  • #49
Bystander said:
Source?

I said over 150 species but that should have been up to 150 species, sorry.

United Nations Environmental Programme said:
Biodiversity loss is real. The Millennium Ecosystem Assessment, the most authoritative statement on the health of the Earth’s ecosystems, prepared by 1,395 scientists from 95 countries, has demonstrated the negative impact of human activities on the natural functioning of the planet. As a result, the ability of the planet to provide the goods and services that we, and future generations, need for our well-being is seriously and perhaps irreversibly jeopardized. We are indeed experiencing the greatest wave of extinctions since the disappearance of the dinosaurs. Extinction rates are rising by a factor of up to 1,000 above natural rates. Every hour, three species disappear. Every day, up to 150 species are lost. Every year, between 18,000 and 55,000 species become extinct. The cause: human activities.
The full reports can be found here:
http://www.millenniumassessment.org/en/Index-2.html

jack action said:
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And a species with exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.

Humans have come to dominate the globe precisely because of our intelligence. There are many species with greater populations and/or biomass, but none can manipulate their surroundings like we can. We aren't condemned to stop and regress like other species; our intelligence has allowed us to increase Earth's human carrying capacity through technology thus far, and who's to say how high we can raise that capacity?

Anyway, my point was that on our path to controlling the globe and its resources we don't look down and consider the fate of the ant; their intelligence doesn't register on our scale, and they are of no value or consequence. The gulf between AI and HI could become just as vast and result in a similar outcome.
We may end up like cats, or ants, or we may end up like the dodo.

jack action said:
How can mankind gradually give the decision process over to machines without ever noticing it going against their survival?
It's happened countless times between smart and stupid humans, and it'll continue to happen. Control through deception is a recurring theme in human history.
If a super AI weren't capable of large-scale deception I would say it's not super at all. Whether we could build it in such a way that it wouldn't deceive is another issue.
 
  • #50
This thread has many questions and lines of argumentation going in many directions at once now. I will try to focus on those that I feel connect with my concerns. Again, I am not here to "win an argument"; I am here to sort out my concerns, so please work with me.

jack action said:
If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Of the numbers quoted, I would right now say that 90% sounds most likely. Perhaps it is easier for me to say it the other way around: I cannot see any reason why we will not continue down this road of increasing complexity. The question then remains if and how we in this thread can agree on a definition of "lost".

Boing3000 said:
But you still haven't provided us with any clues as to why that is a risk.

To establish a risk you commonly establish likelihood and significant cost. Based on how things have developed over the last decade or two, combined with the golden promise of AI, I find both high likelihood and severe costs. Below I have tried to list what I observe:
  • There is a large drive to increase the complexity of our technology so we can solve our problems cheaper, faster and with more features.
  • Earlier, much of the technological complexity was just used to provide an otherwise simple function (an example would be radio voice communication over a satellite link) which can be understood easily enough. Plenty of the added complexity today introduces functional complexity as well (consider the Swiss army knife our smartphones have become), where a large set of functions can be cross-coupled in a large set of ways.
  • There is a large drive to functionally interconnect everything, thereby "multiplying" complexities even more. By functionally interconnecting otherwise separate technologies or domains you also often get new emergent behavior with its own set of complexities. Sometimes these emergent behaviors are what you want (symbiosis), but just as often there is a set of unintended behaviors that you now also have to manage.
  • There is a large shift in acceptance of continuous change, both by the consumer and by "management". Change is used both to fix unintended consequences in our technology (consumer computers today require continuous updates) and to improve functionality in a changing environment. The change cycle often sends ripples out through our interconnected technology, creating the need for even more fixes.
  • To support the change cycle there is a shift towards developing and deploying new or changed functionality first and then understanding and modifying it later. Consumers are more and more accustomed to participating in beta programs and testing, accepting that features sometimes work and sometimes don't work as they thought they would.
  • Many of the above drives are beginning to spread to domains otherwise reluctant to change, like industry. For instance, industrial IoT (internet of things), currently at the top of Gartner's hype curve, offers much of the same fast change cycle in the operational management of industrial components. In manufacturing, both planning and operations see drives towards more automated and adaptive control where the focus is on optimizing a set of key performance indicators.
  • There are still some domains, like safety-critical systems, where today you are traditionally required to fully design and understand the system before deployment, but to me it seems very unlikely that these domains will withstand the drive towards increased complexity over time. It will be interesting to see the technological solutions for, and social acceptance of, coupling a tightly regulated medical device with, say, your smartphone. For instance, a new FDA-approved device for diabetes gives an indication that we are already trying to move in that direction (while of course still trying to stay in control of our medical devices).
All these observations are made with the AI technology we have had up until now. If I combine the above observations with the golden promise of AI, I only see us driving even faster towards more complexity, faster changes, and more acceptance that our technology is working as intended.

Especially the ever-increasing features-first-fix-later approach everyone seems to converge on appears to me, as a software engineer with knowledge of non-linear systems and chaotic behavior, to be a recipe for, well, chaos (i.e. our systems exhibiting chaotic behavior). Without AI or similar technology we would simply, at some point, have to give up adding more complexity because it would be obvious to all that we were unable to control our systems, or we would at least have to apply change at a much slower pace, allowing us to distill some of the complexity into simpler subsystems before continuing. But instead of this heralded engineering process of incremental distillation and refinement, we are now apparently just going to throw more and more AI into the mix and let them compete with each other, each trying to optimize its part of the combined system in order to optimize a handful of "key performance indicators". For AIs in friendly or regulated competition we might manage to specify enough rules and restrictions that they end up not harming or severely disadvantaging us humans, but for AIs involved in hostile competition I hold little hope we can manage to keep up.
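
As a toy illustration of what I mean by chaotic behavior (not a model of any real deployed system, just the standard logistic map with an arbitrarily chosen growth parameter), two runs of the same trivial feedback rule that start only a millionth apart quickly become completely different:

Code (Python):
# Toy illustration only: the logistic map, a one-line non-linear feedback rule,
# showing sensitive dependence on initial conditions. r = 3.9 is an arbitrary
# choice in the chaotic regime; nothing here models an actual system.
def trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.200000)   # nominal state
b = trajectory(0.200001)   # same rule, state known one millionth less precisely

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} (diff {abs(a[step] - b[step]):.6f})")

The point is not the map itself, but that once enough non-linear feedback loops are coupled together, prediction horizons shrink and "just monitor it and shut it down" stops being a reliable control strategy.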

So, from all this I then question how we are going to continuously manage the risk of such an interconnected system, and who is going to do it?

Boing3000 said:
A fusion reactor may kill me, we know very precisely how, with some kind of real probability attached.
I then mitigate it with some other benefit I get from it. That's what I call intelligence: balance, and constant evaluation.

So, I guess that if you expect constant evaluation of the technology for nuclear reactors, you will also expect this constant evaluation for other technology that has the potential to harm you? What if you are in doubt about whether some other new technology (say, a new material) is harmful or not? Would you rather chance it, or would you rather be safe? Do you make a deliberate, careful choice in this, or do you choose by "gut feeling"?

Boing3000 said:
First, Flops are not intelligence. If stupid programs run on a computer, then more Flops will lead to more stupidity.

This is what I hear you say: "Intelligence is just based on atoms, and if atoms are stupid then more atoms will just be more stupid."

Flops is just a measure of computing speed. What I am expressing is that the total capacity for computing speed is rising rapidly, both in the supercomputer segment (mostly driven by research needs) and in the cloud segment (mostly driven by commercial needs). It is not unreasonable to expect that some level of general intelligence (the ability to adapt and solve new problems) requires a certain amount of calculation (very likely not a linear relationship). And for "real-time intelligence" this level of computation corresponds to a level of computing speed.

There is not yet any indication of whether or not it is even possible to construct a general AI with human-level intelligence, but so far nothing suggests it will be impossible given enough computing power, hence it is interesting for researchers to consider how levels of intelligence relate to levels of computing power.

Boing3000 said:
That's just false. Power consumption of data centers is already an issue.

Indeed, and I am not claiming that computing power will increase a hundredfold overnight, just that there is a strong drive to increase computing power in general, and this, all else being equal, will allow for an increase in computational load for AI algorithms. My bet is that datacenters will continue to optimize for more flops/watt, possibly by utilizing specialized chips for specialized algorithms like the TrueNorth chip. Also, more use of dedicated renewable energy sources will allow datacenters to increase their electrical consumption with less dependency on a regulated energy sector. In all, there is no indication that global computing capacity will not continue to increase significantly in the near future.
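
Just to show how quickly that compounds, here is a trivial sketch; the starting capacity, the 40%/year growth rate and the 1000x target are all made-up assumptions, and only the compounding arithmetic is the point:

Code (Python):
# Trivial compounding sketch; every input is an illustrative assumption.
import math

current_capacity = 1e18   # assumed aggregate capacity today (Flops)
annual_growth    = 1.40   # assumed 40% growth per year
target_capacity  = 1e21   # assumed target: 1000x today's capacity

years = math.log(target_capacity / current_capacity) / math.log(annual_growth)
print(f"years to grow 1000x at 40%/year: {years:.1f}")   # roughly 20 years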
 
Last edited:
  • Like
Likes billy_joule
  • #51
I don't understand the worry about the possibility that we are overtaken by some AI in the near future. Where by overtaken I mean literally replaced. If "something" more intelligent than us eradicates us, isn't it "nice" overall? To me, this means we would have achieved our goal. It would mean we created something better adapted and more intelligent than us on Earth. For me this is a big win.
Maybe our species won't die that quickly and we might become the AI's pets and be well treated.
 
  • #52
@Filip Larsen:

The problem you describe has nothing to do with AI in particular. Because you use the terms «manage» and «control», we agree that we can stop using AI if we want or at least change how we use it.

The question you're asking is rather: «Are we responsible enough to handle [technology of your choice]?»

The answer is: «We are as responsible as we are ever going to be.»

Example:

Does anyone know if overpasses are built safely? Most of us have no clue how they are built or what is required to build one, and we trust that they are all built in a safe manner. Even a civil engineer doesn't study the overpasses he will use before planning a trip; he just puts his trust in the people who built them.

10 years ago, an overpass collapsed close to where I live. It fell on two vehicles, killing 5 people. If you read the Wikipedia article, you will learn that these deaths were the result of a succession of bad decisions made over a period of 40 years by different people. All of these bad decisions were based on overconfidence.

No matter how tragic this event was, we still, each of us, use overpasses without fully understanding how they are made. But it is not business as usual either: IIRC, 2 or 3 other «sister» overpasses were demolished soon after the collapse. The government tightened inspections across the province, and dozens of other overpasses and bridges were demolished or had major repairs. The small bridges that were under the care of local municipalities were taken back by the government. To this day, most people in my province remember this event and think about it whenever going under an overpass: «Will it fall on me?» I'm telling you this because the 10th anniversary was just a week ago and it was all over the news across the province.

What is important to understand is that we are not slaves to things we don't understand. We can react and adjust. Sometimes not as individuals, but rather as a society; but we still don't just sit there, blindly accepting our fate. We always have some control over man-made things. Will bad things happen with AI? It is a certainty (your «90%»). Will it get out of control to the point of putting the human species in jeopardy? It is unlikely (my «0.0001%»).
 
  • Like
Likes Averagesupernova
  • #53
jack action said:
The problem you describe has nothing to do with AI in particular. Because you use the terms «manage» and «control», we agree that we can stop using AI if we want or at least change how we use it.

I am not really sure why you state this again expecting me to answer differently, so I will now just sound like a broken record.

To me, the potential realisation of capable AI seems to be the magic source which will allow us to keep increasing complexity beyond what we can understand and therefore control. Note that I am not saying that AI is guaranteed to be a harmful thing, or that no good things would come from it, only that while our raft of civilization is drifting down the river towards the promised Elysium it makes sense that those involved with navigation make sure we get there and at least look out for waterfalls.

And regarding control, I would like to ask how much control you'd say we have of the internet when it is used for criminal activity. Are you, as a user of the internet, really in control of it? Are the internet organizations? Are the nation states? Can any of these parties just "turn off" the unwanted criminal elements or change it so criminals go away? If you think yes to the last one, then why are the criminals still here? And is all this going to be easier or harder if we add capable AI to the mix?

jack action said:
Does anyone know if overpasses are built safely?

Yes, we have a very deliberate process of striving to build constructions that are safe. Unfortunately this does not mean that wrong decisions are never made or that a contractor will never try to "optimize" their profit in a way that leads to bad results, but we are in general capable of doing proper risk management when building, say, a bridge, because we are in general good at predicting what will happen (i.e. using statics and similar calculations) and because once built it remains fairly static. The same degree of rigor is not even remotely present when we are developing and deploying most of the software you see on the consumer market (as software is qualitatively very different from a bridge), yet this development process is very likely the way we will deploy AI and similar adaptive algorithms in the future.
 
  • #54
Filip Larsen said:
I am not really sure why you state this again expecting me to answer differently, so I will now just sound like a broken record.

To me, the potential realisation of capable AI seems to be the magic source which will allow us to keep increasing complexity beyond what we can understand and therefore control.
It has already been stated that when the machines start doing things above our level of comprehension we will shut them down, since the assumption will be that the machines are malfunctioning. This has likely been done many, many times already at simpler levels. Consider technicians and engineers scratching their heads when a system is behaving in a manner that makes no sense. Then it occurs to someone that the machine is considering an input that everyone else had forgotten. So in that instance, for a brief moment, the machine was already smarter than the humans. I know it could be argued that somehow, somewhere in a CNC machine shop, parts could be magically turned out for some apocalyptic robot or whatever. To me this is no easier than me telling my liver to start manufacturing a toxic chemical and putting it in my saliva to be used as a weapon.
 
  • #55
Filip Larsen said:
only that while our raft of civilization is drifting down the river towards the promised Elysium it makes sense that those involved with navigation make sure we get there and at least look out for waterfalls.

All I'm saying is that you should have more faith that humanity will, whether they do it on their own or are forced to by some event. I doubt it will take something fatal to all of humanity before there are reactions, hence my examples.

Just the fact that you are asking the question reassures me that there are people like you who care.

Filip Larsen said:
Are you, as a user of the internet, really in control of it?

Nope. But I'm not in control of the weather or the justice system either and I deal with it somehow.

Filip Larsen said:
Are the internet organizations?

Nope.

Filip Larsen said:
Are the nation states?

Nope.

Filip Larsen said:
Can any of these parties just "turn off" the unwanted criminal elements or change it so criminals go away?

Nope. But how does this differ from yesterday's reality? Did anyone ever have control over criminality on the streets? Remember when criminals got their hands on cars in the '20s for getaways? How were we able to catch them? Police got cars too.

You may change the settings but the game remains the same.

Filip Larsen said:
And is all this going to be easier or harder if we add capable AI to the mix?

IMHO, it will be as it has always been. Is there really a difference between making war with bows & arrows or with fighter jets? People still die, the human species still remains. Is there really a difference between harvesting food with scythes or with tractors? People still eat, the human species still remains. I won't open the debate, but although some might argue that we're going downhill, others might argue that it has made things better. All we know for sure is that we're still here, alive and kicking.
 
  • Like
Likes patmurris
  • #56
Going back not so far, there were people called Luddites who thought that the idea of water mills powering textile-producing factories would lead to economic ruin and the disintegration of society.
 
  • #57
fluidistic said:
I don't understand the worry about the possibility that we are overtaken by some AI in the near future. Where by overtaken I mean literally replaced. If "something" more intelligent than us eradicates us, isn't it "nice" overall? To me, this means we would have achieved our goal. It would mean we created something better adapted and more intelligent than us on Earth. For me this is a big win.
Maybe our species won't die that quickly and we might become the AI's pets and be well treated.
I consider this our doom... maybe you would like to live as a well-fed experimental rat; I don't.
 
  • #58
Otherwise, I don't believe that some artificial super brain would lead to a singularity, infinite development.
If ten Einsteins had lived in the Middle Ages, they could still only have come up with Galilean relativity.
 
  • #59
Averagesupernova said:
It has already been stated that when the machines start doing things above our level of comprehension we will shut them down, since the assumption will be that the machines are malfunctioning.

And again, the belief that we can always do this is, for lack of a better word, naive. You assume that you will always be able to detect when something is going to have negative consequences before it is widely deployed, and that there can never be negative emergent behaviors. As I have tried to argue in this thread, there already exists technology where we do not have the control to "shut it down" in the way you describe. Add powerful AI to make our systems able to quickly self-adapt and we have a chaotic system with a "life of its own" where good and bad consequences are indiscernible, hence outside control.

As an engineer participating in the construction of this technology of tomorrow, I am already scratching my head, as you say, worrying that my peers and I should be more concerned about negative consequences for the global system as well as for the local system each of us is currently building. I am aware that people without insight into how current technology works will not necessarily be aware of these concerns themselves, as they (rightfully) expect the engineers to work hard fixing whatever problems may appear, or they, as some here express, just accept that whatever happens happens. The need to concern yourself is different when you feel you have a responsibility to build a tomorrow that improves things without risk of major failures, even if others seem to ignore this risk.

Compare, if you like, with the concerns bioengineers have when developing gene-modified species or similar "products" that, with the best intentions, are meant to improve life, yet have the potential to ruin part of our ecosystem if done without care. I do not see similar care in my field (yet), only a collective state of laissez-faire where concerns are dismissed as silly with a hand wave.

Perhaps I am naive trying to express my concerns on this forum and in this thread, already tinted with doomsday arguments at the very top to get people all fired up. I was hoping for a more technical discussion of what options we have, but I realize that such a discussion should have been started elsewhere. In that light I suggest we just leave it at that. I thank those who chipped in to express their opinions and to try to address my concerns with their own views.
 
  • #60
To lighten things up a bit, allow me to add what the AI themselves are saying about the end of the world (at 2m47) ...

 
  • Like
Likes Boing3000
