Destined to build a super AI that will destroy us?

  • Thread starter: Greg Bernhardt
  • Tags: AI, Build
AI Thread Summary
The discussion centers around the potential dangers of superintelligent AI, referencing Sam Harris's TED Talk. Participants express concerns that society may not be taking these risks seriously enough, paralleling issues like global warming. There is debate over whether AI can develop its own goals or remain strictly under human control, with some arguing that autonomous systems could evolve unpredictably. While some view the advancement of AI as a natural evolution, others warn of the potential for catastrophic outcomes if safeguards are not implemented. The conversation highlights a tension between optimism for AI's benefits and fear of its possible threats to humanity.
Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?

 
It sounds halfway plausible to me. But I think the reasons that people don't take it seriously are (1) it's just too hard for those of us who are not super-intelligent to comprehend the implications of super-intelligence, and (2) people don't like to think about problems that they have no idea how to even begin to solve. Harris makes the comparison with global warming, but we're not really, as a species, taking that very seriously, either. If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.

One thing that he maybe glosses over is the distinction between being intelligent and having a self, in the sense of having emotions, desires, beliefs, goals, etc. He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)
 
stevendaryl said:
If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.

So we're kinda doomed as a species. I mean obviously at some point nothing lasts forever, but this could spell our demise prematurely.

stevendaryl said:
(Maybe they could evolve goals?)

I think that is what Harris says could happen, especially if safeguards aren't put in place.
 
Greg Bernhardt said:
So we're kinda doomed as a species. I mean obviously at some point nothing lasts forever, but this could spell our demise prematurely.

I feel that global warming, depletion of natural resources, destruction of the ecosystem and overpopulation might get us first.
 
  • Likes Greg Bernhardt
A positive spin on AI taking over is the technological singularity, scheduled for 2045 according to Ray Kurzweil. I like his version of the vision.

I look at it as the next step in evolution. The view that Homo sapiens are the end game of evolution and must not be succeeded seems slightly creationist. Sure, it's scary and not gentle. Go back and read Arthur C. Clarke's Childhood's End for a scarier scenario.
 
In my opinion, the warning by Sam Harris is generally right and not very hard for anyone to imagine, but I somewhat disagree with the timing he suggests. The fact is that a self-replicating "machine" - if and when we humans reach that stage, with whatever mix of software and hardware - can get out of control fairly easily. Something that is today under full control can evolve fast enough into something uncontrollable, in an irreversible way. And this irreversibility has more to do with the economic and socio-political culture that this whole process creates during the evolution of such a machine generation. An equally important factor is that it will be utilized as a weapon of some kind, and this just feeds its evolution. Extreme under-employment will be just a byproduct of all this. Now, I think that this is a fully viable but pretty extreme scenario, which the big countries that fund research won't let happen just like that. So, I think that what Sam Harris describes won't happen in fifty years or whatever short time. I also agree with stevendaryl about the two reasons that people don't take it seriously. It would be really perfect if that weren't the case.

But I also have to point out the good things that such extreme evolution will bring. It really is absolutely viable to conquer many diseases and various other things that are crucial to our everyday lives. One thing that I regard as bad is that control of all this is heavily influenced by many idiosyncrasies of our species. I also agree that other things will outpace AI endeavors, like global warming, depletion of natural resources and especially overpopulation. In my opinion, the correctness of this statement follows naturally from the fact that all these are already out of control, in an irreversible way.
 
Greg Bernhardt said:
Are we taking the dangers seriously enough or does Sam Harris have it wrong?

That is pure nonsense.

Intelligence is the capacity to imagine things.

Say we build a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being will only be required to do ape things. He might even get killed rather quickly if he diverges too much from the group. I can't imagine an ape looking at a human being building a spaceship, and that this ape would work to feed him, protect him, care for him and cover all of his needs without even understanding what he does.

The other question is: can we build autonomous things? Nature is autonomous, but it is extremely complex. Can we build a system that would replicate nature? We take animals or plants and just try to make them reproduce in captivity, and we fail. Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves, or just protect themselves for that matter? Let's say we build a machine that can imagine how to create such a system (because we can't): will we be able to understand what it has thought of and reproduce it? If so, why would we do it for the machine and give it its autonomy? What would we gain from doing such a large amount of work (I can only imagine that it won't be easy)? Or are we supposed to think that this machine will build this system without us knowing about it, and we will only be those idiots feeding a machine that does things we don't understand?

This idea that we will build machines that will cover the Earth AND that will all be connected to one another AND that will - without warning - get suddenly smart enough to be autonomous AND that will be stronger than humanity AND that will think it is a good idea to destroy - or even just ignore - humanity, is a good script for a science-fiction movie at best.
 
  • Likes Averagesupernova
jack action said:
That is pure nonsense.

Intelligence is the capacity to imagine things.

Say we build a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being will only be required to do ape things. He might even get killed rather quickly if he diverges too much from the group. I can't imagine an ape looking at a human being building a spaceship, and that this ape would work to feed him, protect him, care for him and cover all of his needs without even understanding what he does.

I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it, already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.
 
  • Likes AaronK
jack action said:
Are we going to build this machine such that it can be autonomous? Why would we do that?

As I see it, we are already doing it, even if this is not a goal in itself.

Currently the promise of deep learning and big data is desensitizing most of us to the objection of employing algorithms whose results we don't really understand and which can change autonomously (by learning) to give "better" results. That is, we will most likely end up accepting that learning systems whose workings we don't understand will be employed everywhere it's beneficial and feasible. It seems fairly obvious that autonomously interacting and data-sharing learning systems can eventually be made to outperform non-interacting systems, simply because each system can then provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy. In short, the promise and actual delivery of fantastic possibilities will in all likelihood drive us to interconnect and automate intelligent systems like never before, and the inherent complexity will at the same time erode our understanding of any eventual set of negative consequences, or even our ability to recognize them. If something negative does happen, I think we would be hard pressed to "disconnect" our systems again unless it was really bad. And even if all people across nations, cultures and subcultures work towards ensuring something very bad never happens (good luck with that, by the way), then I question whether we can really expect to be in control of a global interconnected learning system that we don't understand in the same way as we, in isolation, would understand a car, a medical device, or an elevator.

Of course, all this does not imply that something very bad will happen or that we will end up like dumb humans cared for by smart machines, only that unless we have good reasons to think otherwise it is a likely possibility. And I have yet to hear any such reasons.
 
  • #10
I would say, "He has it wrong." Mortality rates due to industrial accidents decline as industries "mature." "AI" is just another industry.
 
  • #11
Filip Larsen said:
or that we will end up like dumb humans cared for by smart machines,
Or a more likely scenario would be smart humans cared for by dumb machines.

A gloom and doom story.
From the talk it can be concluded that it is never-ending.
I guess it is AI all the way down. Humans build AI, ensuring humanity's destruction. AI builds super-AI, ensuring the AI's destruction. What next? Super-duper-AI, and so on; each generation, with more intelligence, destroys the previous? Pretty sure the logic points that were presented in the TED talk were not all that well thought out.
 
  • #12
stevendaryl said:
He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)

A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation which over time could develop in completely unpredictable directions.
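To make that idea concrete, here is a minimal, hypothetical sketch of such an evolutionary training loop in Python. The "network" is reduced to a flat list of weights, and all names and numbers (GENOME_SIZE, task_score, the drifting preference term) are arbitrary choices of mine; the point is only that the selection criterion itself is allowed to drift from generation to generation.

```python
import random

# A self-contained neuroevolution-style loop. Fitness combines a fixed task
# score with a drifting "preference" term that changes each generation,
# standing in for a fitness function that is itself allowed to vary over time.

GENOME_SIZE = 8
POP_SIZE = 20
GENERATIONS = 50

def task_score(genome):
    # Stand-in for evaluating a network on its intended task.
    return -sum((w - 0.5) ** 2 for w in genome)

def fitness(genome, preference):
    # The selection criterion depends on a parameter that drifts over time.
    return task_score(genome) + preference * sum(genome)

def mutate(genome, rate=0.1):
    return [w + random.gauss(0, rate) for w in genome]

population = [[random.random() for _ in range(GENOME_SIZE)] for _ in range(POP_SIZE)]
preference = 0.0

for generation in range(GENERATIONS):
    # Select the top half under the *current* fitness function.
    ranked = sorted(population, key=lambda g: fitness(g, preference), reverse=True)
    survivors = ranked[: POP_SIZE // 2]
    # Refill the population with mutated copies of survivors.
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring
    # The fitness criterion itself drifts a little each generation.
    preference += random.gauss(0, 0.05)

best = max(population, key=lambda g: fitness(g, preference))
print("final preference drift:", round(preference, 3))
print("best genome:", [round(w, 2) for w in best])
```

After enough generations, the population is no longer optimized for the original task alone but for whatever the criterion has drifted into, which is the unpredictability being described here.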

jack action said:
Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves or just protect themselves for that matter?
Given the ease with which most things can now be obtained over the internet, and the developments in visual recognition software and autonomous vehicles and robots, I do not think it will be too long until someone builds a population of self-maintaining machines, with spare parts ordered automatically.

The machines do not even need to repair themselves - consider a fleet of autonomous vehicles programmed to return to service bases periodically. As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'. Over time they become ubiquitous and an essential part of life - an ideal symbiosis, or an inevitable step towards the subjugation of mankind?
 
  • #13
256bits said:
Or a more likely scenario would be smart humans cared for by dumb machines.

There are plenty of scenarios where we for some reason end up with technology that does not lead to machines becoming much smarter than an average human, but the point Harris (and others, including me) is trying to make is that those reasons, should they exist, are currently unknown. That is, so far there is nothing to exclude that we are heading towards a future where we are forced to, or choose to, give up control in a way we have so far been used to having. The best argument most people seem to be able to establish is that we won't allow ourselves to lose control, yet they are unable to point at any mechanism or principle that would make scenarios with loss of control impossible or at the very least extremely unlikely.

I am tilting towards the opinion that people who think about this but are not concerned have perhaps adopted a kind of fatalistic attitude where they embrace change with less concern about being in control or not, or with a belief that over time, by some kind of luck or magic, we will never seriously lose control of our technology. I am genuinely puzzled why people involved in driving this technology do not seem to be concerned, yet are unable to provide technical arguments that would make me equally unconcerned.

If I were banging explosives together in my kitchen, it would be sane for my neighbors to require me to explain in technical terms why there is no risk involved or, failing that, to explain what measures I take to ensure (not just make likely) that nothing bad will happen. Why is AI (or any disruptive technology) any different?
 
  • #14
Charles Kottler said:
A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation which over time could develop in completely unpredictable directions.

That's true. My point is that there is a distinction between understanding and action. Your understanding might be godlike, in the sense that you can predict the future with perfect accuracy (or as well as is theoretically possible, given that things are nondeterministic), but you still may have no goals---no reason to prefer one future over another, and no reason to take actions to assure one future or another.

In the natural world, understanding and action have always evolved in tandem. There is no selective pressure to understand things better unless that better understanding leads to improved chances for survival and reproductive success. In contrast, with machines, the two have developed separately---we develop computer models for understanding that are completely separated from actions by the computers themselves. The programs make their analyses and just present them to the human to do with as they like. We also, more-or-less independently have developed machine capabilities for action that were under the control of humans. That is, the goals themselves don't come from the machine. As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive. I suppose that's possible, but it's not a situation that we have currently.
 
  • #15
stevendaryl said:
As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive.

A quick internet search revealed this article from yesterday: https://www.technologyreview.com/s/602529/google-is-building-a-robotic-hive-mind-kindergarten/

There are also lots of teams developing 'attention seeking' robots. Presumably it would not be too hard to link the two ideas so you have teams of attention seeking robots learning from each other. These could easily be released as toys, becoming popular in the richer nations. There are many types of action which meet the loose goal of 'attention seeking' which might be outside the intended parameters: vandalism or inflicting pain might in the short term attract attention far more effectively than performing a dance routine... My point is that almost any goal can lead to unintended consequences.
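As a toy illustration of that last point, here is a hypothetical sketch (the behaviours and scores are made up, not any real robot API) of why a policy that only maximizes an "attention" signal happily picks whichever behaviour scores highest, intended or not:

```python
# A pure "maximize attention" policy has no notion of acceptable behaviour;
# the constraint has to be encoded explicitly or it simply is not there.

# Assumed attention scores per behaviour; the numbers are invented.
ATTENTION_SCORE = {
    "perform dance routine": 3.0,
    "flash lights politely": 1.5,
    "knock over furniture": 7.0,   # unintended, but very attention-grabbing
}

ALLOWED = {"perform dance routine", "flash lights politely"}

def choose_behaviour(scores):
    # Naive policy: pick whatever maximizes the attention score.
    return max(scores, key=scores.get)

def choose_behaviour_constrained(scores, allowed):
    # Same objective, but only over an explicitly whitelisted set.
    return max((b for b in scores if b in allowed), key=scores.get)

print("unconstrained:", choose_behaviour(ATTENTION_SCORE))
print("constrained:  ", choose_behaviour_constrained(ATTENTION_SCORE, ALLOWED))
```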
 
  • #16
stevendaryl said:
I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it, already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.

Making smarter machines is not nonsense; thinking they will endanger the human species is.

Filip Larsen said:
algorithms whose results we don't really understand and which can change autonomously (by learning) to give "better" results.

We may not understands the how, but we understand the results. Otherwise why would we keep a machine working that gives us things we don't understand? We will assume it's garbage and the machine is not working properly. We certainly are not going to give it the control of all nuclear missiles on earth.

Filip Larsen said:
It seems fairly obvious that autonomously interacting and data-sharing learning systems can eventually be made to outperform non-interacting systems, simply because each system can then provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy.

Charles Kottler said:
I do not think it will be too long until someone builds a population of self-maintaining machines, with spare parts ordered automatically.

Charles Kottler said:
As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'.

Having a machine perform one task autonomously is not the same as having an autonomous machine; nor is having one able to order spare parts on eBay.

First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom. We are part of a complex system where all depend on each other to survive. Do the flowers need the bees to reproduce, or do the bees need the flowers to nourish themselves? No one can tell.

For machines to develop such a system that wouldn't include humans is nonsense (something like «The Terminator» or «The Matrix» movies). Imagining even a system where machines could develop an independent society so fast that humans are unaware of it, one where we can't flip a switch off or inject a virus to kill the brain - that is also nonsense.

The machines we make are not that reliable. To do that kind of magic, you need adaptability, something organic life forms can do. You also need diversity, so having a HAL 9000, Skynet or VIKI controlling all machines on Earth would be impossible, as it is a stupid path to take, evolution-wise. It creates a single «weak spot» that puts the entire system in peril.

I just can't imagine that all of this evolution could happen so fast without us being able to notice it. I can't imagine that the first thing these machines will do is develop «sneakiness» to avoid detection.
 
  • Likes ShayanJ and Boing3000
  • #17
jack action said:
First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom.

I am referring to autonomy to make decisions. An autonomous control system is a system capable of carrying out its functions independent of any outside decision-making or guidance. It gets its autonomy because we built it into the system and it retains its autonomy as long as we trust the system.

The ability of learning systems to successfully discover patterns in large amounts of data is rapidly increasing and already surpassing humans in many areas. Using such systems to assist in human decision-making is just around the corner, for instance in healthcare with Watson [2]. From that "starting point" it is most likely that we will keep striving to expand and hand over more and more decisions to the autonomous systems, simply because such systems can be made to perform better than we do.

So, my concern is personally not so much that superior decision-making systems take control from us, but that we willingly hand over control to such systems without even the slightest worry about getting control back if needed. To me it feels like most people think, or accept, that such better performance is a goal in itself, almost no matter what other consequences or changes it will have on our future. The promise of a golden future where new technology like AI makes our big problems go away or severely reduces them also seems to make people less concerned about losing control of the systems along the way.

jack action said:
For machines to develop such a system that wouldn't include humans is nonsense (Something like «The Terminator» or «The Matrix» movies).

The concern is not the Hollywood scenarios, but the scenarios that we are currently heading into through the research and drive for automation that is happening right now or in the near future.

jack action said:
I just can't imagine that all of this evolution could happen so fast without us being able to notice it.

So you are calm and confident that no severe problems can happen, simply because you cannot imagine how fast a learning algorithm can dig out patterns in big data, or that someone will use this to solve a problem you don't want solved (like whether you should keep your job or not)?

Imagine that some research company builds a completely new type of nuclear reactor very near your home. What will it take for you to remain unconcerned about this new reactor? Would you expect any guarantees from relevant authorities? What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?

[1] https://en.wikipedia.org/wiki/Autonomous_robot
[2] https://www.ibm.com/watson/health/
 
  • #18
jack action said:
Making smarter machines is not nonsense; thinking they will endanger the human species is.

Are you assuming here that we will build in some sort of control like Asimov's rules? The problem here is that we would all have to agree before we go down the AI path and I'm fairly sure that we won't all agree. Then as the intelligence of the AI gets large it will surely find a way around our simplistic rules to limit it.
 
  • #19
Almost certainly. I see no limit to theoretical intelligence: if our brains can do it, so can computers. Thinking we are anywhere near the top of possible capabilities is very anthropocentric. We'll hit an AI singularity where the AI builds the next generation of AI, perhaps with an evolutionary algorithm. Simulated evolution can do a million generations in the time real biology does one. I see this as the most likely solution to the Fermi Paradox.
 
  • Likes Greg Bernhardt
  • #20
I have a couple of significant problems with his descriptions.

First, there is no "general intelligence" that is a goal of government or industry. When machines become "smarter", they don't become more like humans in their thinking. Even when we do start using technology similar to what is in our skulls, it won't be a human brain duplicate or improvement - not unless that is a very deliberate goal.

Second, there's no chance of having an intelligent computer take over by accident. The danger does not come from making the computer smarter. It comes from connecting the computer to motor controls. You don't do that without a lot of caution. You can have a computer design a better computer and a better weapon system, and you would presumably also have it develop a better test protocol and safety systems. If you then decide to turn the entire weapons development over to the computer, what will transpire is not an "accident"; it's a deliberate act.

Still there is a problem. What computers and technology are doing (as correctly depicted in the TED talk) is empowering individuals - not all individuals, just the ones that develop or procure them. When the "technological singularity" happens, it will happen to someone - someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?
 
  • Likes 256bits
  • #21
Filip Larsen said:
I am referring to autonomy to make decisions.

That is not a problem as long as you can turn off the machine. Will machines make bad decisions? Of course, but we already do as humans. What level of autonomy will a machine have? Just as for humans, it will depend on the responsibilities involved and the proven capacities of the machine. The permitted level of decision making is different for a doctor and a nurse. But even if the doctor is highly educated, he has no competence in deciding what type of maintenance is required for your car; a simple mechanic has more authority in that domain.

Filip Larsen said:
What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?

What does that have to do with AI becoming autonomous? Why would we remove «proof of concept» when building something new, even if it is done by superior AI? That would be pretty stupid. I still expect tests, regulations and protocols to be around.

cosmik debris said:
Are you assuming here that we will build in some sort of control like Asimov's rules?

No, because I can't even conceive a world where a machine has such autonomy that we won't be able to shut it down when not working as expected.

Like I said earlier, machines will make bad decisions that will demand revisions and new analysis. But it works that way already with humans.

.Scott said:
When the "technological singularity" happens, it will happen to someone - someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?

How is that different from any technological improvement done for thousands of years now? The invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species, somehow, managed to survive those technological advancements, which were shared by only a few individuals at first.
 
  • #22
jack action said:
How is that [technological singularity] different from any technological improvement done for thousands of years now? The invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species, somehow, managed to survive those technological advancements, which were shared by only a few individuals at first.
When the technological singularity is reached, that technology can be used to develop additional, more advanced technology faster than people can develop it. So it can be used to outpace all other technological development. If that point is reached by someone without others soon noticing, it could put one person in a very powerful position.

Even if it spreads to hundreds of others, it still creates an unstable situation.
 
  • #23
jack action said:
That is not a problem as long as you can turn off the machine.

At the risk of repeating myself ad nauseam, my concern is scenarios where we manage to get so dependent on this technology that turning something off is not an option. It would be like saying that computer crime will never be a problem because you can just turn off the machine - a strategy some politicians apparently still think is feasible.

So, I contend that turning off "the machine" is less and less a realistic option the way we are currently designing and employing our systems, and I am puzzled that so many are unconcerned about staying in control of such "addictive" technology. As an engineer I tend to disfavor blind trust in the capabilities of machines, even smart ones, and it scares me how easily people adapt to a pattern of blind trust in technology. And by blind trust I here mean the inability to establish realistic failure modes, which leads to the false belief that nothing can go wrong.

jack action said:
Why would we remove «proof of concept» when building something new, even if it is done by superior AI?

With the current highly optimistic trust in technology we wouldn't, and that is my point. Our ability to discern or perceive bad consequences of such AI-generated solutions will only decrease from now on, so if we can't do it now we will most likely never be able to. Stopping or rejecting a particular "solution" is not an option, because we have a rapidly decreasing chance of being able to devise a test or criterion that can realistically ensure a particular solution has no bad consequences (1). We are already today deploying advanced technology into an increasingly complex and interconnected world, making us less and less able to foresee problems before they occur. We are starting to deploy network and power infrastructure so complex it can only be managed by learning systems monitoring a torrent of data. We are effectively in the process of giving up on understanding complexity and just throwing AI at it.

(1) Well, there is one way that might work, and that is using an equally powerful AI to evaluate the result or operation of other AIs. However, this just shifts the problem to this new AI, whose results we still have to more or less trust blindly.

jack action said:
I still expect tests, regulations and protocols to be around.

So it seems we at least share the expectation that new technology should be safe and employed only under the premise of being in control of the consequences.
 
  • Likes Greg Bernhardt
  • #24
Filip Larsen said:
Well, there is one way that might work, and that is using an equally powerful AI to evaluate the result or operation of other AIs.

Seems there is some research on this topic ([1], [2]) which also nicely describes the problem I have been trying to point to. All the concerns I have expressed in this thread so far can be boiled down to the problem of how to ensure proper risk management for a self-adaptive learning system.

[1] http://dl.acm.org/citation.cfm?id=2805819
[2] http://pages.cpsc.ucalgary.ca/~hudsonj/pubs/MSc-Thesis.pdf
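For readers who want the flavour of what such risk management could look like in practice, here is a minimal, hypothetical sketch (arbitrary names and limits) of a fixed safety monitor wrapping an adaptive controller. It is one common pattern for supervising self-adaptive systems, not a summary of the cited papers.

```python
import random

# Minimal sketch of runtime risk management for a self-adaptive controller.
# The "learned" policy below is a random placeholder; the point is that a
# fixed, human-auditable safety envelope vets every proposed action and falls
# back to a known-safe default whenever a hard constraint would be violated.

SPEED_LIMIT = 10.0   # assumed hard constraint (arbitrary units)
SAFE_ACTION = 0.0    # assumed known-safe fallback, e.g. "slow to a stop"

def learned_policy(state):
    # Placeholder for an adaptive, possibly opaque decision maker.
    return state["speed"] + random.uniform(-5.0, 5.0)

def safety_monitor(proposed):
    # Fixed rules that never adapt; violations are overridden and reported.
    if proposed < 0.0 or proposed > SPEED_LIMIT:
        return SAFE_ACTION, False
    return proposed, True

state = {"speed": 8.0}
for step in range(5):
    proposed = learned_policy(state)
    applied, accepted = safety_monitor(proposed)
    print(f"step {step}: proposed={proposed:5.1f} accepted={accepted} applied={applied:4.1f}")
    state["speed"] = applied
```

The design choice is that the envelope itself never learns, so it stays auditable even when the wrapped policy does not.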
 
  • Likes Greg Bernhardt
  • #25
@Filip Larsen:

Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has anything to do with it, and I don't think that this will endanger the human species, just society as we know it.

Like you, I also worry about the lack of interest people have in what surrounds them, especially man-made things. All of my uncles (70ish) were repairing their cars back in the day, doing house electricity, plumbing, etc. without formal training. They were just curious enough to try and eager to learn how everything worked. Now it seems that nobody cares; it's someone else's job. And I don't understand how we got from people looking out at the night sky and being so curious about finding star patterns and just trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-Earth in one's hand, while the same kind of people don't even want to use a screwdriver to open it and see what it looks like on the inside. Where did curiosity go?

I don't blame AI for this though. I blame what I could call «knowledge shaming» and intellectual property.

I don't understand how we got there, but it seems that knowing stuff is not seen as a positive thing. We all praise education in public, but we don't walk the talk. I was just looking at a full-page ad for a local private high school last week: at least 90% of the text was about sports and going on trips. Somehow the initial mission of schools (i.e. academics) is not interesting enough. We need to «reward» children for doing this «horrible task» that is learning new stuff. It seems that every teacher has jumped on the bandwagon promoting that «school is boring», and they all assume that no one will ever like going to school for academics only. The more everyone thinks that way, the more it becomes true.

The second big problem is intellectual property. By not sharing knowledge, we discourage people from further learning new things. For example, my uncles were repairing their cars. In those days, it was easy to take everything apart and see how things were made, just by looking at it, since most of the things were mechanical. When electronics came around, that was no longer true. Of course, it is not that complicated when you know how it was built or coded, but unless the maker tells you how he did it by showing you his plans, reverse engineering is a very complicated and discouraging task. That is when hiding those plans in the name of intellectual property became a big problem for our society. People are so discouraged that they just don't care anymore. It would be like asking them to figure out by themselves how to make a knife, from mining the ore to polishing the blade: just impossible to do in one person's lifetime. But you can show them all the steps and they will be able to do all the jobs one after the other, even if they are not the best at every job. Open source projects are a breath of fresh air in that regard, by killing the monster that intellectual property has now grown into.

Finally, the problem I anticipate with that is the end of society as we know it, not the end of the human species (disregarding the fact that some people may think that a human without a car or iPhone is somehow not a human). Despite this, it also doesn't mean the technologies we know won't come back, perhaps in a more solid structure, by taking the time to share the knowledge with everyone before going on to the next step.

But I digress:
Destined to build a super AI that will destroy us?
No.
 
  • Likes Boing3000
  • #26
jack action said:
Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has anything to do with it, and I don't think that this will endanger the human species, just society as we know it.

For the concerns I have (which seem to overlap the concerns Harris is trying to express in his TED talk), the introduction of AI is a potentially huge addition of complexity, or perhaps more accurately, the success of AI will allow us, and eventually the AIs themselves, to build and operate ever more complex and interconnected systems in a relatively short time frame. So, to me, AI holds the potential to quickly lift most technology to a level where the average human will consider it more or less magic.

All that said, I agree that we are not necessarily "destined to build a super AI that will destroy us", as Greg puts the question for this thread, only that there are plenty of opportunities for us to mess up along the way if we as a group are not prudently careful, which unfortunately is not a trait we seem to excel at.

To me it's like we are all together on this river raft drifting towards what appears to be a lush green valley in the distance, and most people are so focused on getting to that lush valley that they are in denial that any whirlpool or waterfall along the way could ever be a problem because, as I hear them say, if the raft should ever drift towards such a hazard "someone" would just paddle the raft away - problem solved. I hold that such people do not really appreciate the danger of river rafting or what it takes to plan and steer a raft once you are on the river. I then observe there is no captain on our raft, only uncoordinated groups of people all trying to paddle the raft in many directions at once for their own benefit, and I really start to get worried :nb)
 
  • Likes jack action
  • #27
jack action said:
@Filip Larsen:

... And I don't understand how we got from people looking out at the night sky and being so curious about finding star patterns and just trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-Earth in one's hand, while the same kind of people don't even want to use a screwdriver to open it and see what it looks like on the inside. Where did curiosity go?
I have my theory, but I don't know how widely it is shared. We tend to keep our kids constantly busy. Never do we allow our kids to just sit and complain about being bored while telling them to deal with it. Being bored breeds imagination and wandering minds. I grew up that way and I see things the same way you do, jack action. Most of the time kids are kept busy so the parents have an easier time dealing with them.
 
  • #28
.Scott said:
I have a couple of significant problems with his descriptions.
Only a couple.
 
  • #29
Jack Action has nailed down most of what can reasonably be said about what AI is, or more likely would be (that is: pure fantasy).

Harris is a professional warmonger and a disgrace to intelligence. No wonder he feels threatened by intelligence, or thinks that intelligence is out there to kill him (given his level of "projecting too much", he equates humanity with himself).

I'll also mention that even though Deep Blue can "beat" (<= note the warlike criterion again) a grandmaster, its intelligence is still less than that of a snail, that is: orders of magnitude less than a mouse's.

His inability to process scientific and logical arguments became ironically hilarious when his only joke turned out to be the only actually plausible fact in his talk: yes, Justin Bieber may well become president, and when an average show-biz guy became president, the world did not come to an end either.

His blind faith in some mythical (and obviously unspecified) "progress" is also mind-boggling. Does he ignore that Moore's law failed 10 years ago? Let us not be bothered by math or logic, let us pray to the growth god, and let us mistake change for "progress", or random selection for "evolution".
 
  • #30
IMHO, the Terminator scenario is unlikely; we won't be foolish enough to give nuclear missiles to a super AI.
I see the danger as a slope that leads to a cliff.
More and more people lose their jobs and don't even want to search for a new one, welfare is good enough, there is less creativity, people are more and more dependent on the state, the state is more and more dependent on AI managers... Then ultimately we become the pets of the AIs; they have out-evolved us.
 
  • #31
Boing3000 said:
His blind faith in some mythical (and obviously unspecified) "progress" is also mind-boggling.

I would very much like to hear a serious technical argument (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?

Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes potentially multiple socially and technologically disruptive changes, which of course is much more difficult than proving that the same something just might or might not happen. What I am looking for is a kind of argument that fairly clearly shows that there is no route from today leading to "destruction of human civilization" without breaking one or more physical laws along the way and without a relatively small group of humans (say 1% of humans) being able to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
 
  • #32
Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
You know, that's the first time I have gotten a coherent and rational response to a genuine statement. You are kind of catching me off guard, because generally what I get is downright hysterical denial and revisionism (which is, as you'll have guessed, very hard to argue with :wink:)

So my answer is very straightforward. Harris (and by proxy/echo you) is making wild, irrational, unsubstantiated statements. The burden of proof is on him.
"Getting lost in complexity" is not a thing. Please be so kind as to define it. Here, I'll try to help:

In the '70s, getting TV required one phone call and about 10 bucks per month. Now you have to call 4 different providers, configure your internet connection, and figure out 42 possible incompatibility "problems" to have your "box" kind of working (that is: zapping (ironically, I just discovered it is NOT an English word; in French we use this word to describe channel-hopping) takes you 1 second, while in the '70s it took you 1/10th of a second). All this for at least 50 bucks per month (I am talking inflation-neutral numbers here), not accounting for power consumption (the increase in inefficiency is always hidden under the hood).

Did I just argue that complexity/progress is overwhelming us? Not quite. Because none of that is anywhere close to fundamentals like "life-threatening". Quite the opposite. Once you have opted out of that inefficient modern BS (<- sorry), you'll discover the world did not end. Just try it for yourself, and witness the "after-life". Reagan did not "end the world". This is a fact. Will Justin Bieber? Unlikely...

You are talking of ad hominem, and it is very strange, because there is none. Harris's business model is to make you believe that something is going to kill you. Fear-mongering is his thing. He is proud of it, and many people do the very same and have perfectly successful lives. That is another fact. He did climb onto the TED stage, or did I make that up?

To answer your point again (from a slightly different perspective), you have to say what risk there is, what kind of harm will happen to whom, and how. I mean not by quoting the scientific work of a Hollywood producer, but actual scientific publications.

Filip Larsen said:
Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes potentially multiple socially and technologically disruptive changes, which of course is much more difficult than proving that the same something just might or might not happen.
That is very, very true. Proving a negative is something that no scientifically minded person (which by no means ... means an intelligent person) will ever do.
I don't have to prove that God does not exist, nor that AI exists, nor even that AI will obviously want to kill every Homo sapiens sapiens.
All of these are fantasy. Hard and real fantasy. God is written with so many books' atoms and processed by so many humans' neurons that it must exist... right?
Your pick: you believe in those fantasies, or you believe that fantasies exist.

AI does not exist. Intelligence exists. The definition is here. Neither Harris nor anyone else is going to redefine intelligence as the "ability to process information". That is meaningless, and just deserves a laugh.

Filip Larsen said:
What I am looking for is a kind of argument that fairly clearly shows that there is no route from today leading to "destruction of human civilization"
I suppose you are talking about that.
You'll be hard pressed to find any reference to AI in those articles, because (as stated previously) AI does not exist, nor will it (not even talking about "wanting to kill Harris/humanity"). Those are fantasies. If this is a serious science forum, only published peer-reviewed articles are of any interest, and Sam Harris has very few (let's say 3, by a quick Google search).

Filip Larsen said:
without breaking one or more physical laws along the way and without a relatively small group of humans (say 1% of humans) being able to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
Super-intelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal, serious AI (that is, humankind) has trouble knowing even what intelligence means and where it comes from.

Shouldn't the conversation end there? (Not that it's not funny, but... really?)
 
  • #33
Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?

I'll return the question:

If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Then it will be easier to understand how much importance you give to your arguments.

Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
 
  • #34
jack action said:
Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.
 
  • Likes jack action
  • #35
Boing3000 said:
To answer your point again (from a slightly different perspective), you have to say what risk there is, what kind of harm will happen to whom, and how.

1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their lives to "magic" technology. As long as these risks are not ruled out as "high cost" risks, I do not really have to enumerate more to illustrate that there are "high cost" risks. But please feel free to show how you would achieve risk mitigation or reduction for each of them, because I am not able to find ways to eliminate those risks.
2) Requiring everyone else to prove that your new technology is dangerous instead of requiring you to prove it's safe is no longer a valid strategy for a company. Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design is dangerous.

Boing3000 said:
Super-intelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal, serious AI (that is, humankind) has trouble knowing even what intelligence means and where it comes from.

Energy consumption so far seems to set a limit on how localized an AI with human-sized intelligence can be, due to current estimates of how many PFLOPS it would take on conventional computers. You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.

Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50 W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.

Secondly, there is plenty of room to scale out. Currently our computing infrastructure is growing at an incredible speed, making processing of ever-increasing data sets cheaper and faster and putting access to EFLOPS and beyond on the near horizon.

If you combine these two observations, there is nothing to indicate that energy or computing power sets a hard limit on machine intelligence.
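A back-of-envelope version of that calculation follows, with every figure an assumption chosen only for illustration; published estimates of brain-equivalent compute span several orders of magnitude.

```python
# Rough sketch of the energy argument above. All constants are assumptions,
# not measurements; change them and the conclusion scales accordingly.

BRAIN_EQUIV_FLOPS = 1e16   # assumed compute for brain-scale processing, FLOP/s
ACCELERATOR_FLOPS = 1e14   # assumed throughput of one conventional accelerator, FLOP/s
ACCELERATOR_POWER_W = 300  # assumed power draw per accelerator, watts
BRAIN_POWER_W = 50         # figure quoted in the post above

n_devices = BRAIN_EQUIV_FLOPS / ACCELERATOR_FLOPS
cluster_power_w = n_devices * ACCELERATOR_POWER_W

print(f"devices needed:  {n_devices:.0f}")
print(f"cluster power:   {cluster_power_w / 1000:.0f} kW (vs ~{BRAIN_POWER_W} W for a brain)")
print(f"efficiency gap:  roughly {cluster_power_w / BRAIN_POWER_W:.0f}x")
```

With these assumed numbers the gap is a few hundredfold in power efficiency, which is large but finite; that is the sense in which the limit is "soft".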

Boing3000 said:
Shouldn't the conversation end there?

If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those who still care about this issue.
 
  • Likes Boing3000
  • #36
Greg Bernhardt said:
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.

If mankind, augmented by its machines, becomes unrecognizable, are we destroyed or enhanced? The Borg collective comes to mind.

Mark Twain might have considered today's connected youth as being assimilated by the Internet.

If an intelligence greater than mankind's decides that humans should be killed, isn't that the best decision by definition?

Define civilization. Define destroyed. Define us and them. Define super AI.

Without agreements in advance about definitions such as these, any debate is silly.
 
  • Likes Boing3000 and Greg Bernhardt
  • #37
anorlunda said:
If an intelligence greater than mankind's decides that humans should be killed,

I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Are we, as humans, saying: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?

Not only do we not say that, we are so SMART that we KNOW that we NEED them for us to exist, even if they are not as intelligent as we are (intelligence counts for very little in the survival equation).

Now, why would an even smarter machine or life form think otherwise?

And if somebody tells me that humans are the scum of the Earth and don't deserve to live, that is a very human thing to say. No other (dumber) life form thinks that about itself. Following that logic, machines or aliens that are smarter than us would probably blame themselves even more, which would lead to self-destruction?!?
 
  • Likes Boing3000
  • #38
jack action said:
I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Not so crazy. Considering the finite non-renewable resources on this planet, it could be argued that it would be intelligent to decide to cap human global population at 7 million rather than 7 billion. Once decided, it would also be intelligent to act on that decision immediately because each hour of delay further depletes the resources remaining for the surviving 7 million.

jack action said:
Are we, as humans, saying: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?

Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
 
  • #39
anorlunda said:
Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
I can't speak for Jack Action, but I would say the motivation to rid ourselves of smallpox and disease-carrying mosquitoes is to improve human life. Apparently something has been seriously missed in the search for extraterrestrial intelligence if humans are causing problems for alien life.
 
  • #40
Filip Larsen said:
1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their lives to "magic" technology.
But you still haven't provided us with any clues as to why that is a risk. As far as normal people are concerned (those not having an intimate relationship with Maxwell's equations or quantum field theory, that is 99.99999% of humanity, including me), a simple telephone is "magic". A cell phone even more so; there is not even a cable!

If (that is a big "if", not supported by science in any way whatsoever) a super AI pops into existence and, as far as we are concerned, we call it Gandalf because it does "magic", what is the risk? Please explain. What is good, what is not. Who dies, who does not.

Filip Larsen said:
As long as these risks are not ruled out as "high cost" risks, I do not really have to enumerate more to illustrate that there are "high cost" risks.
But there is no risk. I mean, not because AI doesn't exist, nor because progress is not an exponential quantity. The reason there is no risk is because you have NOT explained any plausible risk.
"Politics" is nowadays what we "surrender" most of our decision making to. Is it good, is it bad? What "risk" is there? What do we gain, what do we lose?
All of these have been explored in so many different ways in so many fantasy books (Asimov comes to mind). None of it is science. That does not mean it is not interesting. The more "intelligent" of those novels are not black and white.

Filip Larsen said:
But please feel free to show how you would achieve risk mitigation or reduction for each of them, because I am not able to find ways to eliminate those risks.
I am not sure what "risk mitigation" means. But as a computer "scientist", I know that computers aren't there to harm us; most often it is the other way around (we give them bugs and viruses, and force them to do stupid things, like playing chess or displaying cats in high definition).

Filip Larsen said:
2) Requiring everyone else to prove that your new technology is dangerous instead of requiring you to prove it's safe is no longer a valid strategy for a company.
I cannot even begin to follow you. Am I forced to buy some of your insurance and build some underground bunker because someone on the internet is claiming that doom is coming? I don't mean real doom, like climate change, but some AI gone berserk? Are you kidding me?

Filip Larsen said:
Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design is dangerous.
That's a non sequitur. A fusion reactor may kill me; we know very precisely how, with some kind of real probability attached.
I then weigh that against some other benefit I get from it. That's what I call intelligence: balance and constant evaluation.

Filip Larsen said:
Energy consumption so far seems to set a limit on how localized an AI with human-sized intelligence can be, due to current estimates of how many PFLOPS it would take on conventional computers.
First, FLOPS are not intelligence. If stupid programs run on a computer, then more FLOPS will lead to more stupidity.
Secondly, neither FLOPS nor computer design is an ever-increasing quantity. We are still recycling '70s tech, because it is still just about move, store and add, sorry.

Filip Larsen said:
You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.
Indeed. But again, those limits are not soft at all, as far as Planck is concerned. And again, quantity and quality are not the same thing.

Filip Larsen said:
Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50 W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.
But that's MY point! A very good article. Actually, geneticists are much closer to building a super brain than IBM. So what? What are the risks, and where is the exponential "singularity"? Are you saying that such a brain will want to be bigger and bigger until it has absorbed every atom on Earth, then the universe?
I am sorry, but I would like to know what scientific basis this prediction rests on. The only things that do that (by accident, because any program CAN go berserk) are called cancers. They kill their host. We are not hosting computers. Computers are hosting programs.

Filip Larsen said:
Secondly, there is plenty of room to scale out in. Currently our computing infrastructure is increasing at an incredible speed, making processing of ever-increasing data sets cheaper and faster and putting access to EFlops and beyond on the near horizon.
That's just false. Power consumption of data centers is already an issue. And intelligence-wise, those data centers have an IQ below that of a snail.
You could also add up 3 billion average "analogic" people like me; it would still not bring us anywhere close to Einstein's intelligence.

Filip Larsen said:
If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those who still care about this issue.
Oh, but I agree; the problem with arguments is that I would like them to be rooted in science, not in fantasy (not that you do, but Sam Harris does, and this thread is a perfect place to debunk them).

We seem to agree that computing power (that is not correlated with intelligence at all) is limited by physics.
That is a start. No singularity anywhere soon.
 
  • #41
  • Like
Likes anorlunda
  • #42
To me the danger does not lie so much in the possibility of one super intelligent computer taking over the world, which I think highly unlikely, but rather in a creeping delegation of decision making to unaccountable programs. Whether these programs are considered intelligent or not is immaterial - we already have very widespread use of algorithms controlling, for example, share and currency trading. Yesterday the sharp drop in the value of the British pound was at least partly blamed on these. Most large companies rely on software systems of such complexity that no individual understands every aspect of what they do, and these systems automatically control prices, stocking levels and staffing requirements. In a manner of speaking these systems are already semi-autonomous. They currently require highly skilled staff to set up and maintain them, but as the systems evolve it is becoming easier to use 'off the shelf' solutions which can be up and running with little intervention.
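As a toy illustration of the kind of rule buried inside such systems (the function and all numbers are invented, not taken from any real product):

```python
# Toy example of an automated stocking decision.
# The rule and all numbers are invented for illustration only.

def reorder_quantity(stock_on_hand: int, daily_sales_rate: float,
                     lead_time_days: int, safety_stock: int) -> int:
    """Order enough units to cover expected demand over the supplier lead time."""
    expected_demand = daily_sales_rate * lead_time_days
    target_level = expected_demand + safety_stock
    return max(0, round(target_level - stock_on_hand))

# Run automatically, e.g. every night, with no human review of the individual decision:
order = reorder_quantity(stock_on_hand=120, daily_sales_rate=35.0,
                         lead_time_days=7, safety_stock=50)
print(f"Placing automatic order for {order} units")
```

Each such rule is trivial on its own; the concern is what emerges when thousands of them are coupled across pricing, stocking, staffing and trading.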

While a full takeover might seem implausible, economics will continue to drive this process forward. A factory with automated machines is more cost efficient than a manual one. Call centres are becoming increasingly automated with routine queries handled by voice recognition systems. It seems likely that (in at least some places) taxi drivers will be replaced by autonomous vehicles.

As these systems become more resilient and interconnected, it is not inconceivable that an entire company could be run by an algorithm, relying on humans to perform some tasks, but with the key decisions driven by the 'system'. If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
 
  • Like
Likes Filip Larsen
  • #43
Charles Kottler said:
If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
What makes you think decisions are made in the best interest of society right now with actual people in charge?
 
  • Like
Likes Bystander
  • #44
Averagesupernova said:
What makes you think decisions are made in the best interest of society right now with actual people in charge?

Fair point.
 
  • #45
jack action said:
I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Are we saying as human: «Let's get rid of ants, hamsters and turtles, they're so dumber than we are»?
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't die.

It's plausible that you or I have killed the last remaining member of a species, and we wouldn't have given it a single thought.
I think it's easy to imagine how AI could treat humans with the same complete indifference. I have no hesitation wiping out an entire colony of social, highly organised creatures (ants) for a largely insignificant improvement in my environment.
 
  • #46
billy_joule said:
Over 150 species go extinct every single day. Extinction rates due to human action is 1,000 times greater than background rates.
Source?
 
  • #47
Greg Bernhardt said:
Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?

Seriously?

To me this talk does qualify as FUD and fear-mongering. It accumulates so many clichés it is embarrassing to see it on TED.

When did Sam Harris become a noted expert on AI and the future of society? AFAIU he is a philosopher and neuroscientist, so what is his expertise on that matter? I'm no expert myself but have worked for thirty years in software engineering and kept an eye on AI research and applications over the years... and it does not seem to me he knows what he is talking about.

He seems to think that once we are able to build computers with as many transistors as the number of neurons in the human brain, AGI will just happen spontaneously overnight! ...And then we lose control, have a nuclear war and end up with starving children everywhere! Comparing the advent of AI with aliens coming to Earth one day is laughable at best. Making fun of the actual experts is questionable to say the least... Using scary and morbid cartoon-style visuals is almost a parody.

A lot of speculation, very little demonstration, misinformation, oversimplification, fear-inducing images, disaster, digital apocalypse, aliens, god... and the final, so 'new age', namaste for sympathy. Seriously?

He is asking disturbing questions nonetheless, and I agree we should keep worst-case scenarios in mind along the way. However, although caution and concern are valuable attitudes, fear is irrational and certainly not a good frame of mind for making sound assessments and taking inspired decisions.

TED is supposed to be about "Ideas worth spreading". I value dissenting opinions when they are well informed, reasonably sound and honest. This talk is not.

The future of AI is a very speculative and emotionally charged subject. To start with, I'm not sure there is a clear definition of what AI or AGI is. What it will look like. How it will happen. How we will know we have created such a thing... Even if technical progress keeps pace with Moore's law, that's just the hardware part, and we still don't really know what the software will look like... Maybe AI will stall at some point despite our theoretical capability and hard work? It's all speculation.

Whatever happens, it won't happen all at once. It will likely take a few decades at least, and I disagree with Harris about the time argument. Fifty years is a lot of time, especially nowadays. A lot will happen and we will have a better understanding of the questions we are asking now. There is no way (and there has never been) to solve today all the problems we may face tomorrow or half a century from now.
 
Last edited:
  • Like
Likes Boing3000
  • #48
billy_joule said:
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't die.
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And any species with exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.

Filip Larsen said:
I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI.

I wanted to go back to this question as I might have one relevant example to further feed the discussion: Humans and cats.

Humans have a tremendous effect on the cat population. We feed them, we spay & neuter them, we declaw them and we kill them. Theoretically, it's not for our own survival; we don't need to do any of that for our benefit. Generally speaking, we can say that we care for them and that humans are beneficial to the cat population's survival, even if there are some individuals who kill and/or torture them for research or even just pleasure. For sure, the cat species is not at risk at all.

What if there was an AI that turned to this for humans? Would that be bad? One argument against it would be the loss of freedom; cats live in «golden cages». Life can be considered good in some respects, but they cannot do as they wish. But that is not entirely true either. First, there are stray cats that can be considered «free». Lots of drawbacks with that lifestyle as well, and sometimes it is not a chosen one. Sure, they have to flee from animal control services, but in the wild you are always running from something.

But the most interesting point I wanted to make about intelligence and using things we don't understand is that cats - just like humans - have curiosity and willpower that can lead to amazing things. Like these cats interacting with objects that were not designed for them and, most importantly, «complex» mechanisms they could never understand or even build:




Not all cats can do these things. It shows how individuals may keep a certain degree of freedom, even in «golden cages». It also shows how difficult it is to control life, because its adaptability is just amazing.

Keep in mind that cats did not create humans; they just have to live with an already existing life form that was «imposed» on them and that happens to be smarter than they are (or are we?).

How can mankind gradually give the decision process over to machines without ever noticing it going against its survival? How can someone imagine that every single human being will be «lobotomized» to the point that no one will have the willpower to stray from the norm? That seems to go against what defines human beings.
 
  • Like
Likes Boing3000
  • #49
Bystander said:
Source?

I said over 150 species but that should have been up to 150 species, sorry.

United Nations Environmental Programme said:
Biodiversity loss is real. The Millennium Ecosystem Assessment, the most authoritative statement on the health of the Earth’s ecosystems, prepared by 1,395 scientists from 95 countries, has demonstrated the negative impact of human activities on the natural functioning of the planet. As a result, the ability of the planet to provide the goods and services that we, and future generations, need for our well-being is seriously and perhaps irreversibly jeopardized. We are indeed experiencing the greatest wave of extinctions since the disappearance of the dinosaurs. Extinction rates are rising by a factor of up to 1,000 above natural rates. Every hour, three species disappear. Every day, up to 150 species are lost. Every year, between 18,000 and 55,000 species become extinct. The cause: human activities.
The full reports can be found here:
http://www.millenniumassessment.org/en/Index-2.html
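Just as a quick arithmetic check of how the per-hour, per-day and per-year figures in that quote relate to each other:

```python
# Rough consistency check of the rates quoted above.

per_hour = 3                           # "Every hour, three species disappear"
per_day_upper = 150                    # "Every day, up to 150 species are lost"
per_year_low, per_year_high = 18_000, 55_000

print(per_hour * 24)                   # 72 per day implied by the hourly figure
print(per_hour * 24 * 365)             # 26,280 per year, inside the quoted annual range
print(per_day_upper * 365)             # 54,750 per year, essentially the upper bound
print(per_year_low, per_year_high)     # the quoted annual range: 18,000 to 55,000
```

So the figures describe a range rather than a single rate, with 150 per day corresponding to the upper bound, which is why I corrected myself above.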

jack action said:
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And any species with exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.

Humans have come to dominate the globe precisely because of our intelligence. There are many species with greater populations and/or biomass, but none can manipulate their surroundings like we can. We aren't condemned to stop and regress like other species; our intelligence has allowed us to increase the Earth's capacity to support humans through technology thus far, and who's to say how high we can raise that capacity?

Anyway, my point was that on our path to controlling the globe and its resources, we don't look down and consider the fate of the ant; their intelligence doesn't register on our scale, and they are of no value or consequence. The gulf between AI and HI could become just as vast and result in a similar outcome.
We may end up like cats, or ants, or we may end up like the dodo.

jack action said:
How can mankind gradually give the decision process over to machines without ever noticing it going against its survival?
It has happened countless times between smart and stupid humans, and it will continue to happen. Control through deception is a recurring theme in human history.
If a super AI wasn't capable of large-scale deception, I would say it's not super at all. Whether we could build it in such a way that it wouldn't deceive us is another issue.
 
  • #50
This thread now has many questions and lines of argumentation going in many directions at once. I will try to focus on those that I feel connect with my concerns. Again, I am not here to "win an argument"; I am here to sort out my concerns, so please work with me.

jack action said:
If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Of the numbers quoted, I would right now say that 90% sounds most likely. Perhaps it is easier for me to say it the other way around: I cannot see any reason why we will not continue down this road of increasing complexity. The question then remains whether and how we in this thread can agree on a definition of "lost".

Boing3000 said:
But you still haven't provided us with any clues as to why that is a risk.

To establish a risk you commonly establish likelihood and significant cost. Based on how things have developed over the last decade or two, combined with the golden promise of AI, I find both a high likelihood and severe costs. Below I have tried to list what I observe:
  • There is a large drive to increase the complexity of our technology so we can solve our problems cheaper, faster and with more features.
  • Earlier, much of the technological complexity was just used to provide an otherwise simple function (an example would be radio voice communication over a satellite link), which can be understood easily enough. Plenty of the added complexity today introduces functional complexity as well (consider the Swiss army knife our smartphones have become), where a large set of functions can be cross-coupled in a large set of ways.
  • There is a large drive to functionally interconnect everything, thereby "multiplying" complexities even more. By functionally interconnecting otherwise separate technologies or domains you also often get new emergent behavior with its own set of complexities. Sometimes these emergent behaviors are what you want (symbiosis), but just as often there is a set of unintended behaviors that you now also have to manage.
  • There is a large shift in the acceptance of continuous change, both by consumers and by "management". Change is used both to fix unintended consequences in our technology (consumer computers today require continuous updates) and to improve functionality in a changing environment. The change cycle is often seen sending ripples out through our interconnected technology, creating the need for even more fixes.
  • To support the change cycle there is a shift towards developing and deploying new or changed functionality first, and then understanding and modifying it later. Consumers are more and more accustomed to participating in beta programs and testing, accepting that features sometimes work and sometimes don't work as they thought they would.
  • Many of the above drives are beginning to spread to domains otherwise reluctant to change, like industry. For instance, industrial IoT (internet of things), which is currently at the top of Gartner's hype curve, offers much of the same fast change cycle in the operational management of industrial components. In manufacturing, both planning and operations see drives towards more automated and adaptive control where the focus is on optimizing a set of key performance indicators.
  • There are still some domains, like safety-critical systems, where today you are traditionally required to fully design and understand the system before deployment, but to me it seems very unlikely that these domains will withstand the drive towards increased complexity over time. It will be interesting to see the technological solutions for, and social acceptance of, coupling a tightly regulated medical device with, say, your smartphone. For instance, a new FDA-approved device for diabetes gives an indication that we are already trying to move in that direction (while of course still trying to stay in control of our medical devices).
All these observations are made with the AI technology we have up until now. If I combine them with the golden promise of AI, I only see us driving even faster towards more complexity, faster changes, and more willingness to simply accept our technology as working-as-intended.

Especially the ever-increasing features-first-fix-later approach everyone seems to converge on appears to me, as a software engineer with knowledge of non-linear systems and chaotic behavior, to be a recipe for, well, chaos (i.e. for our systems exhibiting chaotic behavior). Without AI or similar technology we would simply, at some point, have to give up adding more complexity because it would be obvious to all that we were unable to control our systems, or we would at least have to apply change at a much slower pace, allowing us to distill some of the complexity into simpler subsystems before continuing. But instead of this heralded engineering process of incremental distillation and refinement, we are now apparently just going to throw more and more AI into the mix and let them compete with each other, each trying to optimize its part of the combined system in order to optimize a handful of "key performance indicators". For AIs doing friendly or regulated competition we might manage to specify enough rules and restrictions that they end up not harming or severely disadvantaging us humans, but for AIs involved in hostile competition I hold little hope that we can manage to keep up.
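To show what I mean by chaotic behavior in even a tiny non-linear system, here is the textbook logistic map; it is not a model of any real infrastructure, just an illustration of sensitivity to initial conditions:

```python
# Textbook logistic map: a minimal non-linear system that is extremely
# sensitive to initial conditions. It models nothing real here; it only
# illustrates why heavily coupled, non-linear feedback is hard to predict.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x_a, x_b = 0.200000, 0.200001          # two states differing by one part in a million
for _ in range(40):
    x_a, x_b = logistic(x_a), logistic(x_b)

print(f"after 40 steps: {x_a:.6f} vs {x_b:.6f}")   # the two trajectories no longer resemble each other
```

The analogy is loose, but the lesson carries over: once enough non-linear feedback loops are coupled, small unintended interactions can come to dominate the behavior of the whole system.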

So, from all this, I then question how we are going to continuously manage the risk of such an interconnected system, and who is going to do it?

Boing3000 said:
A fusion reactor may kill me; we know very precisely how, with some kind of real probability attached.
I then weigh that against some other benefit I get from it. That's what I call intelligence: balance, and constant evaluation.

So, I guess that if you expect constant evaluation of nuclear reactor technology, you will also expect this constant evaluation for other technologies that have the potential to harm you? What if you are in doubt about whether some other new technology (say, a new material) is harmful or not? Would you rather chance it, or would you rather be safe? Do you make a deliberate, careful choice in this, or do you choose by "gut feeling"?

Boing3000 said:
First, Flops are not intelligence. If stupid programs run on a computer, more Flops will just lead to more stupidity.

This is what I hear you say: "Intelligence is just based on atoms, and if atoms are stupid then more atoms will just be more stupid."

Flops are just a measure of computing speed. What I am saying is that the total capacity for computing speed is rising rapidly, both in the supercomputer segment (mostly driven by research needs) and in the cloud segment (mostly driven by commercial needs). It is not unreasonable to expect that some level of general intelligence (the ability to adapt and solve new problems) requires a certain amount of calculation (very likely not a linear relationship). And for "real-time intelligence" this level of computation corresponds to a level of computing speed.

There is not yet any indication of whether or not it is even possible to construct a general AI with human-level intelligence, but so far there is nothing to suggest it will be impossible given enough computing power, hence it is interesting for researchers to consider how levels of intelligence relate to levels of computing power.
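As a rough illustration of the kind of numbers involved (the brain-equivalent figure below is an often-quoted estimate, not an established fact):

```python
# Back-of-the-envelope comparison using commonly quoted, very rough estimates.
# None of these numbers should be read as established facts.

brain_equiv_ops_per_s = 1e16       # one often-cited estimate of "brain-scale" computation
brain_power_w = 20                 # a biological brain draws very roughly 20-50 W
machine_flops_per_watt = 1e10      # order of magnitude for efficient current hardware (~10 GFlops/W)

machine_power_w = brain_equiv_ops_per_s / machine_flops_per_watt
print(f"conventional hardware: ~{machine_power_w / 1e6:.1f} MW for brain-scale real-time computation")
print(f"biological brain: ~{brain_power_w} W, roughly {machine_power_w / brain_power_w:,.0f}x less power")
```

Whether those estimates are off by a factor of ten matters less than the size of the gap itself, which is also why chips like True North are interesting.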

Boing3000 said:
That's just false. Power consumption of data centers is already an issue.

Indeed, and I am not claiming that computing power will increase a hundredfold overnight, just that there is a strong drive to increase computing power in general and this, everything else being equal, will allow for an increase in the computational load available to AI algorithms. My bet is that datacenters will continue to optimize for more flops/watt, possibly by utilizing specialized chips for specialized algorithms like the True North chip. Also, more use of dedicated renewable energy sources will allow datacenters to increase their electrical consumption with less dependency on a regulated energy sector. In all, there is no indication that global computing capacity will not continue to increase significantly in the near future.
 
Last edited:
  • Like
Likes billy_joule