Destined to build a super AI that will destroy us?

  • Thread starter Greg Bernhardt
  • #1
Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?

 
  • #2
It sounds halfway plausible to me. But I think the reasons that people don't take it seriously are (1) it's just too hard for those of us who are not super-intelligent to comprehend the implications of super-intelligence, and (2) people don't like to think about problems that they have no idea how to even begin solving. Harris makes the comparison with global warming, but we're not really, as a species, taking that very seriously, either. If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.

One thing that he maybe glosses over is the distinction between being intelligent and having a self, in the sense of having emotions, desires, beliefs, goals, etc. He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)
 
  • #3
stevendaryl said:
If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.

So we're kinda doomed as a species. I mean obviously at some point nothing lasts forever, but this could spell our demise prematurely.

stevendaryl said:
(Maybe they could evolve goals?)

I think that is what Harris says could happen, especially if safeguards aren't put in.
 
  • #4
Greg Bernhardt said:
So we're kinda doomed as a species. I mean obviously at some point nothing lasts forever, but this could spell our demise prematurely.

I feel that global warming, depletion of natural resources, destruction of the ecosystem and overpopulation might get us first.
 
  • #5
A positive spin on AI taking over is the technological singularity, scheduled for 2045 according to Ray Kurzweil. I like his version of the vision.

I look at it as the next step in evolution. The view that Homo sapiens are the end game of evolution and must not be succeeded seems slightly creationist. Sure, it's scary and not gentle. Go back and read Arthur C. Clarke's Childhood's End for a scarier scenario.
 
  • #6
In my opinion, the warning by Sam Harris is generally right and not very hard for anyone to imagine, but I somewhat disagree with the timing he gives. The fact is that a self-replicating "machine" (if and when we humans reach that stage, with whatever mix of software and hardware) can get out of control pretty easily. Something that is fully under control today can evolve fast enough into something uncontrollable, in an irreversible way. And this irreversibility has more to do with the economic and socio-political culture that this whole process creates during the evolution of such a machine generation. An equally important factor is that it will be utilized as a weapon of some kind, and this just feeds its evolution. Extreme underemployment will be just a byproduct of all this. Now, I think that this is a fully viable but pretty extreme scenario, one that the big countries that fund research won't let happen just like that. So I think that what Sam Harris describes won't happen in fifty years or any similarly short time. I also agree with stevendaryl about the two reasons that people don't take it seriously. It would be really perfect if that weren't the case.

But I also have to point out the good things that such extreme evolution will bring. It really is absolutely viable to conquer many diseases and various other problems that are crucial to our everyday lives. One thing that I regard as bad is that control of all this is heavily influenced by many idiosyncratic traits of our species. I also agree that other things will outpace AI endeavors, like global warming, depletion of natural resources and especially overpopulation. In my opinion, the correctness of this statement follows naturally from the fact that all of these are already out of control, in an irreversible way.
 
  • #7
Greg Bernhardt said:
Are we taking the dangers seriously enough or does Sam Harris have it wrong?

That is pure nonsense.

Intelligence is the capacity to imagine things.

Say we build a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being would only be required to do ape things. He might even get killed rather quickly if he diverges too much from the group. I can't imagine an ape watching a human being build a spaceship, and this ape working to feed him, protect him, care for him and cover all of his needs without even understanding what he does.

The other question is: can we build autonomous things? Nature is autonomous, but it is extremely complex. Can we build a system that would replicate nature? We take animals or plants and just try to make them reproduce in captivity, and we fail. Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves, or just protect themselves for that matter? Let's say we build a machine that can imagine how to create such a system (because we can't): will we be able to understand what it has thought of and reproduce it? If so, why would we do it for the machine and give it its autonomy? What would we gain from doing such a large amount of work (I can only imagine that it won't be easy)? Or are we supposed to think that this machine will build this system without us knowing about it, and we will only be those idiots feeding a machine that does things we don't understand?

This idea that we will build machines that will cover the Earth AND that will all be connected to one another AND that will - without warning - get suddenly smart enough to be autonomous AND that will be stronger than humanity AND that will think it is a good idea to destroy - or even just ignore - humanity, is a good script for a science-fiction movie at best.
 
  • #8
jack action said:
That is pure nonsense.

Intelligence is the capacity to imagine things.

Say we build a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being would only be required to do ape things. He might even get killed rather quickly if he diverges too much from the group. I can't imagine an ape watching a human being build a spaceship, and this ape working to feed him, protect him, care for him and cover all of his needs without even understanding what he does.

I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it, already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.
 
  • #9
jack action said:
Are we going to build this machine such that it can be autonomous? Why would we do that?

As I see it, we are already doing it, even if this is not a goal in itself.

Currently the promise of deep learning and big data is desensitizing most of us to the objection to employing algorithms whose results we don't really understand and whose results can change autonomously (by learning) to give "better" results. That is, we will most likely end up accepting that learning systems whose workings we don't understand will be employed everywhere it is beneficial and feasible. It seems fairly obvious that autonomously interacting, data-sharing learning systems can eventually be made to outperform non-interacting systems, simply because each system can then provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy. In short, the promise and actual delivery of fantastic possibilities will in all likelihood drive us to interconnect and automate intelligent systems like never before, and the inherent complexity will at the same time dwindle our understanding of any eventual set of negative consequences, or even our ability to recognize them. If something negative does happen, I think we would be hard pressed to "disconnect" our systems again unless it was really bad. And even if all people across nations, cultures and subcultures work towards ensuring something very bad never happens (good luck with that, by the way), then I question whether we can really expect to be in control of a global interconnected learning system that we don't understand, in the way we would in isolation understand a car, a medical device, or an elevator.

Of course, all this does not imply that something very bad will happen or that we will end up like dumb humans cared for by smart machines, only that unless we have good reasons to think otherwise it is a likely possibility. And I have yet to hear any such reasons.
 
  • #10
I would say, "He has it wrong." Mortality rates due to industrial accidents decline as industries "mature." "AI" is just another industry.
 
  • #11
Filip Larsen said:
or that we will end up like dumb humans cared for by smart machines,
Or a more likely scenario would be smart humans cared for by dumb machines.

A gloom-and-doom story.
From the talk it can be concluded that it is never-ending.
I guess it is AI all the way down. Humans build AI, ensuring humanity's destruction. AI builds super-AI, ensuring the AI's destruction. What next? Super-duper-AI, and so on; each generation, with more intelligence, destroys the previous one? Pretty sure the logical points presented in the TED talk were not all that well thought out.
 
  • #12
stevendaryl said:
He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)

A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation which over time could develop in completely unpredictable directions.
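As a concrete illustration of that training loop, here is a minimal sketch (a toy, hypothetical example in Python, not any particular framework's API) of a population being selected against a hand-written fitness function; the point of interest is that everything downstream depends on what that function rewards:

```python
import random

# Toy "network": a single weight. Real neuroevolution evolves whole
# weight vectors or network topologies, but the selection loop is the same.
def make_individual():
    return random.uniform(-1.0, 1.0)

def fitness(weight, target=0.7):
    # Hand-written objective: the closer to `target`, the better.
    # The worry raised above: if this function itself were allowed to vary,
    # the direction of selection could drift in unplanned ways.
    return -abs(weight - target)

def evolve(pop_size=20, generations=50, mutation_scale=0.1):
    population = [make_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the better half (truncation selection).
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [w + random.gauss(0.0, mutation_scale) for w in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges near 0.7, i.e. near whatever the fitness rewards
```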

jack action said:
Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves or just protect themselves for that matter?
Given the ease with which most things can now be obtained over the internet, and the developments in visual recognition software, autonomous vehicles and robots, I do not think it will be too long before someone builds a population of self-maintaining machines, with spare parts ordered automatically.

The machines do not even need to repair themselves - consider a fleet of autonomous vehicles programmed to return to service bases periodically. As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'. Over time they become ubiquitous and an essential part of life - an ideal symbiosis, or an inevitable step towards the subjugation of mankind?
 
  • #13
256bits said:
Or a more likely scenario would be smart humans cared for by dumb machines.

There are plenty of scenarios where, for some reason, we end up with technology that does not lead to machines becoming much smarter than an average human, but the point Harris (and others, including me) is trying to make is that those reasons, should they exist, are currently unknown. That is, so far there is nothing to exclude that we are heading towards a future where we are forced to, or choose to, give up the kind of control we have so far been used to having. The best argument most people seem able to establish is that we won't allow ourselves to lose control, yet they are unable to point to any mechanism or principle that would make scenarios with loss of control impossible, or at the very least extremely unlikely.

I am tilting towards the opinion that people who think about this but are not concerned have perhaps adopted a kind of fatalistic attitude, where they embrace change with less concern about being in control or not, or with a belief that over time, by some kind of luck or magic, we will never seriously lose control of our technology. I am genuinely puzzled that people involved in driving this technology do not seem to be concerned, yet are unable to provide technical arguments that would make me equally unconcerned.

If I were banging explosives together in my kitchen, it would be sane for my neighbors to require me to explain in technical terms why there is no risk involved or, failing that, to explain what measures I take to ensure (not just make likely) that nothing bad will happen. Why is AI (or any disruptive technology) any different?
 
  • #14
Charles Kottler said:
A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation which over time could develop in completely unpredictable directions.

That's true. My point is that there is a distinction between understanding and action. Your understanding might be godlike, in the sense that you can predict the future with perfect accuracy (or as well as is theoretically possible, given that things are nondeterministic), but you still may have no goals---no reason to prefer one future over another, and no reason to take actions to assure one future or another.

In the natural world, understanding and action have always evolved in tandem. There is no selective pressure to understand things better unless that better understanding leads to improved chances for survival and reproductive success. In contrast, with machines, the two have developed separately---we develop computer models for understanding that are completely separated from actions by the computers themselves. The programs make their analyses and just present them to the human to do with as they like. We also, more-or-less independently have developed machine capabilities for action that were under the control of humans. That is, the goals themselves don't come from the machine. As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive. I suppose that's possible, but it's not a situation that we have currently.
 
  • #15
stevendaryl said:
As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive.

A quick internet search revealed this article from yesterday: https://www.technologyreview.com/s/602529/google-is-building-a-robotic-hive-mind-kindergarten/

There are also lots of teams developing 'attention seeking' robots. Presumably it would not be too hard to link the two ideas so you have teams of attention seeking robots learning from each other. These could easily be released as toys, becoming popular in the richer nations. There are many types of action which meet the loose goal of 'attention seeking' which might be outside the intended parameters: vandalism or inflicting pain might in the short term attract attention far more effectively than performing a dance routine... My point is that almost any goal can lead to unintended consequences.
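As a toy illustration of that last point (all names and numbers below are invented for the sketch, not taken from any real robot), a greedy agent that only maximizes a naive "attention" score will happily rank an unwanted action above the intended one:

```python
# Hypothetical proxy reward: seconds of human attention attracted per action.
# The values are made up; the point is only that a naive metric can rank
# an undesirable action above the one the designers had in mind.
attention_seconds = {
    "perform dance routine": 30,
    "flash lights politely": 10,
    "knock over a vase": 90,   # unintended, but very attention-grabbing
}

def pick_action(reward_table):
    # Greedy policy: maximize the proxy, with no notion of what was actually wanted.
    return max(reward_table, key=reward_table.get)

print(pick_action(attention_seconds))  # -> "knock over a vase"
```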
 
  • #16
stevendaryl said:
I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it, already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.

Making smarter machines is not nonsense; thinking they will endanger the human species is.

Filip Larsen said:
algorithms whose results we don't really understand and whose results can change autonomously (by learning) to give "better" results.

We may not understand the how, but we understand the results. Otherwise, why would we keep running a machine that gives us things we don't understand? We would assume it's garbage and the machine is not working properly. We certainly are not going to give it control of all the nuclear missiles on Earth.

Filip Larsen said:
It seems fairly obvious that autonomously interacting, data-sharing learning systems can eventually be made to outperform non-interacting systems, simply because each system can then provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy.

Charles Kottler said:
I do not think it will be too long before someone builds a population of self-maintaining machines, with spare parts ordered automatically.

Charles Kottler said:
As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'.

Having a machine perform one task autonomously is not having an autonomous machine; nor is having one able to order spare parts on eBay.

First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom. We are part of a complex system where all depend on each other to survive. Do the flowers need the bees to reproduce, or do the bees need the flowers to nourish themselves? No one can tell.

For machines to develop such a system that wouldn't include humans is nonsense (something like «The Terminator» or «The Matrix» movies). Imagining a system where machines develop an independent society so fast that humans are unaware of it, one where we can't flip a switch off or inject a virus to kill the brain, is also nonsense.

The machines we make are not that reliable. To do that kind of magic, you need adaptability, something organic life forms have. You also need diversity, so having a HAL 9000, Skynet or VIKI controlling all machines on Earth would be impossible, as it is a stupid path to take, evolution-wise. It creates a single «weak spot» that puts the entire system in peril.

I just can't imagine that all of this evolution could happen so fast without us being able to notice it. I can't imagine that the first thing these machines will do is develop «sneakiness» to avoid detection.
 
  • #17
jack action said:
First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom.

I am referring to autonomy to make decisions. An autonomous control system is a system capable of carrying out its functions independent of any outside decision-making or guidance. It gets its autonomy because we built it into the system and it retains its autonomy as long as we trust the system.

The ability of learning systems to successfully discover patterns in large amounts of data is rapidly increasing and already surpassing humans in many areas. Using such systems to assist in human decision-making is just around the corner, for instance in healthcare with Watson [2]. From that "starting point" it is most likely that we will keep striving to expand and hand over more and more decisions to the autonomous systems, simply because such systems can be made to perform better than we do.

So, my concern is personally not so much that superior decision-making systems take control from us, but that we willingly hand over control to such systems without even the slightest worry about getting control back if needed. To me it feels like most people like to think, or accept, that such better performance is a goal in itself, almost no matter what other consequences or changes it will have on our future. The promise of a golden future where new technology like AI makes our big problems go away or severely reduces them also seems to make people less concerned about losing control of the systems along the way.

jack action said:
For machines to develop such a system that wouldn't include humans is nonsense (Something like «The Terminator» or «The Matrix» movies).

The concern is not the Hollywood scenarios, but the scenarios that we are currently heading into through the research and drive for automation that is happening right now or in the near future.

jack action said:
I just can't imagine that all of this evolution could happen so fast without us being able to notice it.

So you are calm and confident that no severe problems can happen, simply because you cannot imagine how fast a learning algorithm can dig out patterns in big data, or that someone will use this to solve a problem you don't want solved (like whether you should keep your job or not)?

Imagine that some research company builds a completely new type of nuclear reactor very near your home. What will it take for you to remain unconcerned about this new reactor? Would you expect any guarantees from relevant authorities? What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?

[1] https://en.wikipedia.org/wiki/Autonomous_robot
[2] https://www.ibm.com/watson/health/
 
  • #18
jack action said:
Making smarter machines is not nonsense; thinking they will endanger the human species is.

Are you assuming here that we will build in some sort of control like Asimov's rules? The problem here is that we would all have to agree before we go down the AI path, and I'm fairly sure that we won't all agree. Then, as the intelligence of the AI grows, it will surely find a way around our simplistic rules to limit it.
 
  • #19
Almost certainly. I see no limit to theoretical intelligence: if our brains can do it, so can computers. Thinking we are anywhere near the top of possible capabilities is very anthropocentric. We'll hit an AI singularity where the AI builds the next generation of AI, perhaps with an evolutionary algorithm. Simulated evolution can do a million generations in the time real biology does one. I see this as the most likely solution to the Fermi Paradox.
 
  • #20
I have a couple of significant problems with his descriptions.

First, "general intelligence" is not a goal of government or industry. When machines become "smarter", they don't become more like humans in their thinking. Even when we do start using technology similar to what is in our skulls, it won't be a duplicate or an improvement of a human brain, not unless that is a very deliberate goal.

Second, there's no chance of having an intelligent computer take over by accident. The danger does not come from making the computer smarter. It comes from connecting the computer to motor controls. You don't do that without a lot of caution. You can have a computer design a better computer and a better weapon system, and you would presumably also have it develop a better test protocol and safety systems. If you then decide to turn the entire weapons development over to the computer, what transpires is not an "accident"; it's a deliberate act.

Still there is a problem. What computers and technology are doing (as correctly depicted in the TED talk) is empowering individuals - not all individuals, just the ones that develop or procure them. When the "technological singularity" happens, it will happen to someone - someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?
 
  • #21
Filip Larsen said:
I am referring to autonomy to make decisions.

That is not a problem as long as you can turn off the machine. Will machines make bad decisions? Of course, but we humans already do. What level of autonomy will a machine have? Just like for humans, it will depend on the responsibilities involved and the proven capacities of the machine. The permitted level of decision-making is different for a doctor and a nurse. But even if the doctor is highly educated, he has no capacity for deciding what type of maintenance is required for your car; a simple mechanic has more authority in that domain.

Filip Larsen said:
What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?

What does that have to do with AI becoming autonomous? Why would we remove «proof of concept» when building something new, even if it is done by superior AI? That would be pretty stupid. I still expect tests, regulations and protocols to be around.

cosmik debris said:
Are you assuming here that we will build in some sort of control like Asimov's rules?

No, because I can't even conceive of a world where a machine has such autonomy that we won't be able to shut it down when it is not working as expected.

Like I said earlier, machines will make bad decisions that will demand revisions and new analysis. But it works that way already with humans.

.Scott said:
When the "technological singularity" happens, it will happen to someone - someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?

How is that different from any technological improvement made over thousands of years? The invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species somehow managed to survive those technological advancements, which were shared by only a few individuals at first.
 
  • #22
jack action said:
How is that [technological singularity] different from any technological improvement made over thousands of years? The invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species somehow managed to survive those technological advancements, which were shared by only a few individuals at first.
When the technological singularity is reached, that technology can be used to develop additional, more advanced technology faster than people can develop it. So it can be used to outpace all other technological development. If that point is reached by someone without others soon noticing, it could put one person in a very powerful position.

Even if it spreads to hundreds of others, it still creates an unstable situation.
 
  • #23
jack action said:
That is not a problem as long as you can turn off the machine.

At the risk of repeating myself ad nauseam, my concern is scenarios where we manage to get so dependent on this technology that turning something off is not an option. It would be like saying that computer crime will never be a problem because you can just turn off the machine; a strategy some politicians apparently still think is feasible.

So I contend that turning off "the machine" is less and less a realistic option, given the way we are currently designing and employing our systems, and I am puzzled that so many are unconcerned about staying in control of such "addictive" technology. As an engineer I tend to disfavor blind trust in the capabilities of machines, even smart ones, and it scares me how easily people adapt to a pattern of blind trust in technology. By blind trust I mean the inability to establish realistic failure modes, which leads to a false belief that nothing can go wrong.

jack action said:
Why would we remove «proof of concept» when building something new, even if it is done by superior AI?

With the current highly optimistic trust in technology we wouldn't, and that is my point. Our ability to discern or perceive bad consequences of such AI-generated solutions will only decrease from now on, so if we can't do it now we will most likely never be able to. Stopping or rejecting a particular "solution" is not an option, because we have a rapidly decreasing chance of being able to devise a test or criterion that can realistically ensure a particular solution has no bad consequences (1). We are already deploying advanced technology into an increasingly complex and interconnected world, making us less and less able to foresee problems before they occur. We are starting to deploy network and power infrastructure so complex it can only be managed by learning systems monitoring a torrent of data. We are effectively in the process of giving up on understanding complexity and just throwing AI at it.

(1) Well, there is one way that might work, and that is using an equally powerful AI to evaluate the results or operation of other AIs. However, this just shifts the problem to this new AI, whose results we still have to more or less trust blindly.

jack action said:
I still expect tests, regulations and protocols to be around.

So it seems we at least share the expectation that new technology should be safe and employed only under the premise of being in control of the consequences.
 
  • #24
Filip Larsen said:
Well, there is one way that might work, and that is using an equally powerful AI to evaluate the results or operation of other AIs.

It seems there is some research on this topic ([1], [2]), which also nicely describes the problem I have been trying to point to. All the concerns I have expressed in this thread so far can be boiled down to the problem of how to ensure proper risk management for a self-adaptive learning system.

[1] http://dl.acm.org/citation.cfm?id=2805819
[2] http://pages.cpsc.ucalgary.ca/~hudsonj/pubs/MSc-Thesis.pdf
 
  • #25
@Filip Larsen:

Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has anything to do with it, and I don't think that this will endanger the human species, just society as we know it.

Like you, I also worry about the lack of interest people have in what surrounds them, especially man-made things. All of my uncles (70ish) were repairing their cars back in the day, doing house electricity, plumbing, etc. without formal training. They were just curious enough to try and eager to learn how everything worked. Now it seems that nobody cares; it's someone else's job. And I don't understand how we went from people looking out at the night sky, so curious about finding star patterns and just trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-earth in someone's hand, and the same kind of people not even wanting to use a screwdriver to open it and see what it looks like on the inside. Where did curiosity go?

I don't blame AI for this though. I blame what I could call «knowledge shaming» and intellectual property.

I don't understand how we got there, but it seems that knowing stuff is not a positive thing. We all praise education in public, but we don't walk the talk. I was just looking at a full-page ad for a local private high school last week: at least 90% of the text was about sports and going on trips. Somehow the initial mission of schools (i.e. academics) is not interesting enough. We need to «reward» children for doing this «horrible task» that is learning new stuff. It seems that every teacher has jumped on the bandwagon of promoting that «school is boring», and they all assume that no one will ever like going to school for academics only. The more everyone thinks that way, the more it becomes true.

The second big problem is intellectual property. By not sharing knowledge, we discourage people from learning new things. For example, my uncles were repairing their cars. In those days, it was easy to take everything apart and see how things were made, just by looking at them, since most things were mechanical. When electronics came around, that was no longer true. Of course, it is not that complicated when you know how it was built or coded, but unless the maker tells you how he did it by showing you his plans, reverse engineering is a very complicated and discouraging task. That is when hiding those plans in the name of intellectual property became a big problem for our society. People are so discouraged that they just don't care anymore. It would be like asking them to figure out by themselves how to make a knife, from mining the ore to polishing the blade: just impossible to do in one person's lifetime. But you can show them all the steps, and they will be able to do all the jobs one after the other, even if they are not the best at every job. Open source projects are a breath of fresh air in that regard, by killing the monster that intellectual property has now grown into.

Finally, the problem I anticipate with that is the end of society as we know it, not the end of the human species (disregarding the fact that some people may think that a human without a car or iPhone is somehow not a human). Despite this, it also doesn't mean the technologies we know won't come back, perhaps on a more solid footing, by taking the time to share the knowledge with everyone before going on to the next step.

But I digress:
Destined to build a super AI that will destroy us?
No.
 
  • #26
jack action said:
Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has anything to do with it, and I don't think that this will endanger the human species, just society as we know it.

For the concerns I have (which seem to overlap the concerns Harris is trying to express in his TED talk), the introduction of AI is a potentially huge addition of complexity, or, perhaps more accurately, the success of AI will allow us, and eventually the AIs themselves, to build and operate ever more complex and interconnected systems in a relatively short time frame. So, to me, AI holds the potential to quickly lift most technology to a level where the average human will consider it more or less magic.

All that said, I agree that we are not necessarily "destined to build a super AI that will destroy us", as Greg puts the question for this thread, only that there are plenty of opportunities for us to mess up along the way if we as a group are not prudently careful, which unfortunately is not a trait we seem to excel at.

To me it's like we are all together on a river raft drifting towards what appears to be a lush green valley in the distance, and most people are so focused on getting to that lush valley that they are in denial that any whirlpools or waterfalls along the way could ever be a problem because, as I hear them say, if the raft should ever drift towards such a hazard, "someone" would just paddle the raft away; problem solved. I hold that such people do not really appreciate the danger of river rafting or what it takes to plan and steer a raft once you are on the river. I then observe there is no captain on our raft, only uncoordinated groups of people all trying to paddle the raft in many directions at once for their own benefit, and I really start to get worried :nb)
 
  • #27
jack action said:
@Filip Larsen:

... And I don't understand how we went from people looking out at the night sky, so curious about finding star patterns and just trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-earth in someone's hand, and the same kind of people not even wanting to use a screwdriver to open it and see what it looks like on the inside. Where did curiosity go?
I have my theory, but I don't know how widely it is shared. We tend to keep our kids constantly busy. Never do we allow our kids to just sit and complain about being bored while telling them to deal with it. Being bored breeds imagination and wandering minds. I grew up that way and I see things the same way you do, jack action. Most of the time kids are kept busy so the parents have an easier time dealing with them.
 
  • #28
.Scott said:
I have a couple of significant problems with his descriptions.
Only a couple.
 
  • #29
Jack Action has nailed down most of what can reasonably be said about what AI is or, more likely, would be (that is: pure fantasy).

Harris is a professional warmonger and a disgrace to intelligence. No wonder he feels threatened by intelligence, or thinks that intelligence is out there to kill him (given his level of "projecting too much", he equates humanity with himself).

I'll also mention that even though Deep Blue can "beat" (note the warlike criterion again) a grandmaster, its intelligence is still less than that of a snail, that is, orders of magnitude less than a mouse's.

His inability to process scientific and logical arguments becomes ironically hilarious when his only joke turns out to be the only actually plausible fact in his talk: yes, Justin Bieber may well become president, and when an average show-biz guy became president, the world did not come to an end either.

His blind faith in some mythical (and obviously unspecified) "progress" is also mind-boggling. Is he unaware that Moore's law failed 10 years ago? Let us not be bothered by math or logic, let us pray to the growth God, and let us mistake change for "progress", or random selection for "evolution".
 
  • #30
IMHO, the Terminator scenario is unlikely; we won't be foolish enough to give nuclear missiles to a super AI.
I see the danger of a slope leading to a cliff.
More and more people lose their jobs and don't even want to search for a new one, welfare is good enough, there is less creativity, people become more and more dependent on the state, the state becomes more and more dependent on AI managers... Then ultimately we become the pets of AIs; they have outevolved us.
 
  • #31
Boing3000 said:
His blind faith in some mythical (and obviously unspecified) "progress" is also mind-boggling.

I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) for why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?

Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes potentially multiple disruptive social and technological changes, which of course is much more difficult than proving that the same something just might or might not happen. What I am looking for is an argument that fairly clearly shows that there is no route from today leading to the "destruction of human civilization" without breaking one or more physical laws along the way, and no way for a relatively small group of humans (say 1% of humans) to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
 
  • #32
Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) for why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
You know, that's the first time I've gotten a coherent and rational response to a genuine statement. You are kind of "catching me off guard", because generally what I get is downright hysterical denial and revisionism (which is, as you'll have guessed, very hard to argue with :wink:)

So my answer is very straightforward. Harris (and by proxy/echo, you) is making wild, irrational, unsubstantiated statements. The burden of proof is on him.
"Getting lost in complexity" is not a thing. Please be so kind as to define it. Here, I'll try to help:

In the '70s, getting TV required one phone call and about 10 bucks per month. Now you have to call 4 different providers, configure your internet connection, and figure out 42 possible incompatibility "problems" to get your "box" kind of working (that is: zapping (ironically, I just discovered it is NOT an English word; in French we use this word to describe channel-hopping) takes you 1 second, while in the '70s it took you 1/10th of a second). All this for at least 50 bucks per month (I am talking inflation-neutral numbers here), not accounting for power consumption (the increase in inefficiency is always hidden under the hood).

Did I just argue that complexity/progress is overwhelming us? Not quite. Because none of that is anywhere close to fundamentals like "life-threatening". Quite the opposite. Once you have opted out of that inefficient modern BS (sorry), you'll discover the world did not end. Just try it for yourself, and witness the "afterlife". Reagan did not "end the world". That is a fact. Will Justin Bieber? Unlikely...

You are talking of ad hominem, and it is very strange, because there is none. Harris's business model is to make you believe that something is going to kill you. Fear-mongering is his thing. He is proud of it, and many people do the very same and have perfectly successful lives. That is another fact. He did climb onto the TED stage, or did I make that up?

To answer your point again (from a slightly different perspective), you have to say what risk there is, what kind of harm will happen to whom, and how. I mean not by quoting the scientific work of a Hollywood producer, but actual scientific publications.

Filip Larsen said:
Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes potentially multiple disruptive social and technological changes, which of course is much more difficult than proving that the same something just might or might not happen.
That is very, very true. Proving a negative is something that no scientifically minded person (which by no means ... means intelligent person) will ever do.
I don't have to prove that God does not exist, nor that AI exists, nor even that AI will obviously want to kill every Homo sapiens sapiens.
All these are fantasy. Hard and real fantasy. God is written in so many books' atoms and processed by so many humans' neurons that it must exist... right?
Your pick: you believe in those fantasies, or you believe that fantasies exist.

AI does not exist. Intelligence exists. The definition is here. Neither Harris nor anyone else is going to redefine intelligence as the "ability to process information". That is meaningless, and just deserves a laugh.

Filip Larsen said:
What I am looking for is an argument that fairly clearly shows that there is no route from today leading to the "destruction of human civilization"
I suppose you are talking about that.
You'll be hard pressed to find any reference to AI in those articles, because (as stated previously) AI does not exist, nor will it (not even talking about "wanting to kill Harris/humanity"). Those are fantasies. If this is a serious science forum, only published, peer-reviewed articles are of any interest, and Sam Harris has very few (let's say 3, by a quick Google search).

Filip Larsen said:
without breaking one or more physical laws along the way, and no way for a relatively small group of humans (say 1% of humans) to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
Super-intelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal, serious AI (that is, humankind) has trouble knowing what intelligence even means and where it comes from.

Shouldn't the conversation end there? (Not that it's not funny, but... really?)
 
  • #33
Filip Larsen said:
I would very much like to hear a serious technical argument (i.e. not an ad hominem argument) for why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?

I'll return the question:

If you had to put a number on it, what would be the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10%, or even 0.0001%?

Then it will be easier to understand how much importance you give to your arguments.

Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
 
  • #34
jack action said:
Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.
 
  • #35
Boing3000 said:
To answer your point again (from a slightly different perspective), you have to say what risk there is, what kind of harm will happen to whom, and how.

1) I have already enumerated several of those risks in this thread, like humanity voluntarily handing over most control of its life to "magic" technology. As long as these risks are not ruled out as "high cost" risks, I do not really have to enumerate more to illustrate that there are "high cost" risks. But please feel free to show how you would mitigate or reduce each of them, because I am not able to find ways to eliminate those risks.
2) Requiring everyone else to prove that your new technology is dangerous, instead of requiring you to prove it is safe, is no longer a valid strategy for a company. Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet claimed that you have to prove that their design is dangerous.

Boing3000 said:
Super-intelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal, serious AI (that is, humankind) has trouble knowing what intelligence even means and where it comes from.

Energy consumption so far seems to set a limit on how localized an AI with human-scale intelligence can be, given the current estimates of how many PFlops it would take on conventional computers. You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit that ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.
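A rough back-of-envelope version of that calculation (every figure below is an assumption chosen only to show the shape of the argument, not a measured value):

```python
# Hypothetical back-of-envelope: power cost of brain-like compute on
# conventional hardware versus biology. All numbers are assumptions.
brain_power_w = 50.0          # brain power figure used later in this post
assumed_brain_flops = 1e16    # assumed compute equivalent of a brain, FLOP/s
assumed_gpu_flops = 1e13      # assumed sustained FLOP/s per accelerator
assumed_gpu_power_w = 300.0   # assumed power draw per accelerator

accelerators = assumed_brain_flops / assumed_gpu_flops
total_power_w = accelerators * assumed_gpu_power_w

print(f"accelerators needed: {accelerators:.0f}")       # ~1000
print(f"total power: {total_power_w / 1000:.0f} kW "
      f"vs about {brain_power_w:.0f} W for a brain")     # ~300 kW vs 50 W
# With these assumptions the gap is several orders of magnitude, which is the
# "soft" ceiling described above; better hardware efficiency and more
# scale-out (the two observations below) are what would raise it.
```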

Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50 W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.

Secondly, there is plenty of room to scale out. Currently our computing infrastructure is growing at an incredible speed, making the processing of ever-increasing data sets cheaper and faster and putting access to EFlops and beyond on the near horizon.

If you combine these two observations, there is no indication that energy or computing power sets a hard limit on intelligence.

Boing3000 said:
Shouldn't the conversation end there?

If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those who still care about this issue.
 
