Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?
stevendaryl said:If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.
stevendaryl said:(Maybe they could evolve goals?)
Greg Bernhardt said:So we're kinda doomed as a species. I mean obviously at some point, nothing lasts forever, but this could spell our demise prematurely.
Greg Bernhardt said:Are we taking the dangers seriously enough or does Sam Harris have it wrong?
jack action said:That is pure nonsense.
Intelligence is the capacity to imagine things.
Say we build a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being would only be required to do ape things. It might even get killed rather quickly if it diverged too much from the group. I can't imagine an ape looking at a human being building a spaceship and working to feed him, protect him, care for him and cover all of his needs without even understanding what he does.
jack action said:Are we going to build this machine such that it can be autonomous? Why would we do that?
Filip Larsen said:or that we will end up like dumb humans cared for by smart machines,
Or a more likely scenario would be smart humans cared for by dumb machines.
stevendaryl said:He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)
jack action said:Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves or just protect themselves for that matter?
Given the ease with which most things can now be obtained over the internet, and the developments in visual recognition software, autonomous vehicles and robots, I do not think it will be too long until someone builds a population of self-maintaining machines, with spare parts ordered automatically.
256bits said:Or a more likely scenario would be smart humans cared for by dumb machines.
Charles Kottler said:A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation which over time could develop in completely unpredictable directions.
stevendaryl said:As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive.
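To make the "fitness function" idea above concrete, here is a minimal sketch in Python (the population size, mutation rate and toy fitness function are all invented for illustration, not taken from any real AI project) of selection acting on a population of candidate "networks":

```python
import random

POP_SIZE = 20
GENOME_LEN = 8
GENERATIONS = 50

def fitness(genome):
    # Toy stand-in for a trained network's score: whatever this function
    # rewards is what the population drifts towards.
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Small random variation: the "allowed variation" mentioned above.
    return [g + random.gauss(0, rate) for g in genome]

# Random initial population of candidate "networks" (here just weight lists).
population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Score everyone, keep the fittest half, refill with mutated copies:
    # selection pressure without any explicit, human-readable goal.
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
print("best fitness after selection:", round(fitness(best), 4))
```

The only "goal" in this loop lives in the fitness function; if that function were itself allowed to vary, as suggested above, the selected behaviour could drift somewhere nobody planned.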
stevendaryl said:I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it, already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.
Filip Larsen said:algorithms we don't really understand the result of and where the result can change autonomously (by learning) to give "better" results.
Filip Larsen said:It seems fairly obvious that autonomously interacting and data-sharing learning systems eventually can be made to outperform non-interacting systems simply because then each system can provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy.
Charles Kottler said:I do not think it will be too long until someone builds a population of self maintaining machines, with spare parts ordered automatically.
Charles Kottler said:As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'.
jack action said:First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom.
jack action said:For machines to develop such a system that wouldn't include humans is nonsense (something like «The Terminator» or «The Matrix» movies).
jack action said:I just can't imagine that all of this evolution could happen so fast without us being able to notice it.
jack action said:Making smarter machines is not nonsense; thinking they will endanger the human species is.
Filip Larsen said:I am referring to autonomy to make decisions.
Filip Larsen said:What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?
cosmik debris said:Are you assuming here that we will build in some sort of control like Asimov's rules?
.Scott said:When the "technological singularity" happens, it will happen to someone - someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?
jack action said:How is that [technological singularity] different from any technological improvement done for thousands of years now? Invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species, somehow, managed to survive those technological advancements shared by only a few individuals at first.
When the technological singularity is reached, that technology can be used to develop additional, more advanced technology faster than people can develop it. So it can be used to outpace all other technological development. If that point is reached by someone without others soon noticing, it could put one person in a very powerful position.
jack action said:That is not a problem as long as you can turn off the machine.
jack action said:Why would we remove «proof of concept» when building something new, even if it is done by superior AI?
jack action said:I still expect tests, regulations and protocols to be around.
Filip Larsen said:Well, there is one way that might work, and that is using an equally powerful AI to evaluate the result or operation of other AI's.
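As a toy illustration of that cross-checking idea (a minimal sketch with invented names and a made-up threshold, not any real safety architecture), one system's output is only acted on if an independently built evaluator agrees:

```python
def proposer(problem):
    # Stand-in for the powerful system producing an answer we cannot inspect directly.
    return problem * 2

def independent_evaluator(problem, answer):
    # Stand-in for a second, independently built system that scores the answer.
    # Returns a confidence in [0, 1].
    return 1.0 if answer == problem * 2 else 0.0

def supervised_decision(problem, threshold=0.9):
    answer = proposer(problem)
    confidence = independent_evaluator(problem, answer)
    # Only act on the answer if the independent check is confident enough;
    # otherwise escalate to a human review queue.
    return answer if confidence >= threshold else "escalate to human review"

print(supervised_decision(21))  # prints 42 because the evaluator agrees
```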
Destined to build a super AI that will destroy us?
No.
jack action said:Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has something to do with it and I don't think that this will lead to endangering the human species, just society as we know it.
jack action said:@Filip Larsen:
I have my theory but I don't know how widely it is shared. We tend to keep our kids constantly busy. Never do we allow our kids to just sit and complain about being bored while telling them to deal with it. Being bored breeds imagination and wandering minds. I grew up that way and I see things the same way you do, jack action. Most of the time kids are kept busy so the parents have an easier time dealing with them.
... And I don't understand how we got from people looking out at the night sky, so curious about finding star patterns and just trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-earth in one's hand, where the same kind of people don't even want to use a screwdriver to open it and see what it looks like on the inside. Where did curiosity go?
.Scott said:I have a couple of significant problems with his descriptions.
Only a couple.
Boing3000 said:His blind faith in some mythical (and obviously unspecified) "progress" is also mind-boggling.
Filip Larsen said:I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
You know, that's the first time I get a coherent and rational response to a genuine statement. You are kind of catching me off guard, because generally what I get is downright hysterical denial and revisionism (which is, you'll have guessed, very hard to argue with).
Filip Larsen said:Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen all the while humanity undergoes multiple potentially disruptive social and technological changes, which of course is much more difficult than proving that same something just might or might not happen.
That is very, very true. Proving a negative is something that no scientifically minded people (which by no means ... means intelligent people) will ever do.
Filip Larsen said:What I am looking for is a kind of argument that fairly clearly shows that there is no route from today leading to "destruction of human civilization"
I suppose you are talking about that.
Filip Larsen said:without breaking one or more physical laws along the way and without a relatively small group of humans (say 1% of humans) being able to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
Super intelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Filip Larsen said:I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
jack action said:Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.
Boing3000 said:To answer your point again (from a slightly different perspective), you have to answer what risk there is, what kind of harm will happen to whom, and how.
Boing3000 said:Super intelligent AI (like every infinite-growth-based fantasy) breaks the first law of thermodynamics.
Normal serious AI (that is, humankind) has trouble knowing what intelligence even means and where it comes from.
Boing3000 said:Shouldn't the conversation end there?
Greg Bernhardt said:Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.
anorlunda said:If an intelligence greater than mankind's decides that humans should be killed,
jack action said:I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?
jack action said:Are we saying as humans: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?
anorlunda said:Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
I can't speak for jack action, but I would say the motivation to rid ourselves of smallpox and disease-carrying mosquitoes is to improve human life. Apparently something has been seriously missed in the search for extraterrestrial intelligence if humans are causing problems for alien life.
Filip Larsen said:1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their life to "magic" technology.
But you still haven't provided us with any clues as to why that is a risk. As far as normal people are concerned (those not having an intimate relationship with Maxwell's equations or quantum field theory, that is 99.99999% of humanity, including me), a simple telephone is "magic". A cell phone even more so; there is not even a cable!
Filip Larsen said:As long as these risks are not ruled out as a "high cost" risk I do not really have to enumerate more to illustrate that there are "high cost" risks.
But there is no risk. I mean not because AI doesn't exist, nor because progress is not an exponential quantity. The reason there is no risk is that you have NOT explained any plausible risk.
Filip Larsen said:But please feel free to show how you would make risk mitigation or reduction of each of them because I am not able to find ways to eliminate those risks.
I am not sure what "risk mitigation" means. But as a computer "scientist", I know that computers aren't there to harm us; most often it is the other way around (we give them bugs and viruses, and force them to do stupid things, like playing chess or displaying cats in high definition).
Filip Larsen said:2) Requiring everyone else to prove that your new technology is dangerous instead of requiring you to prove it's safe is no longer a valid strategy for a company.
I cannot even begin to follow you. Am I forced to buy some of your insurance and build some underground bunker, because someone on the internet is claiming that doom is coming? I don't mean real doom, like climate change, but some AI gone berserk? Are you kidding me?
Filip Larsen said:Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design will be dangerous.
That's a non sequitur. A fusion reactor may kill me, we know very precisely how, with some kind of real probability attached.
Filip Larsen said:Energy consumption so far seems to set a limit on how localized an AI with human-sized intelligence can be, due to the current estimate of how many PFlops it would take on conventional computers.
First, FLOPS are not intelligence. If stupid programs run on a computer, then more FLOPS will lead to more stupidity.
Filip Larsen said:You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.
Indeed. But again, those limits are not soft at all, as far as Planck is concerned. And again, quantity and quality are not the same thing.
Filip Larsen said:Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.
But that's MY point! A very good article. Actually, geneticists are much closer to building a super brain than IBM. So what? What are the risks, and where is the exponential "singularity"? Are you saying that such a brain will want to be bigger and bigger until it has absorbed every atom on Earth, then the universe?
Filip Larsen said:Secondly, there is plenty of room to scale out in. Currently our computing infrastructure is increasing at an incredible speed, making processing of ever increasing data sets cheaper and faster, and putting access to EFlops and beyond on the near horizon.
That's just false. Power consumption of data centers is already an issue. And intelligence-wise, those data centers have an IQ below that of a snail.
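As a back-of-envelope check on the energy argument (a rough sketch only: the 50 W figure is the one quoted above, while the ~20 MW and ~1 EFlops figures for an exascale-class machine, and the brain's "EFlops-equivalent", are assumed ballpark numbers, not measurements):

```python
# Ballpark figures only; none of these are measurements.
brain_power_w = 50.0              # from the post quoted above
supercomputer_power_w = 20e6      # assumed ~20 MW for an exascale-class machine
supercomputer_eflops = 1.0        # assumed ~1 EFlops sustained
brain_equiv_eflops = 1.0          # assume, very roughly, the brain ~ 1 EFlops-equivalent

brain_eff = brain_equiv_eflops / brain_power_w            # EFlops per watt
machine_eff = supercomputer_eflops / supercomputer_power_w

print(f"efficiency gap: ~{brain_eff / machine_eff:,.0f}x")  # ~400,000x with these numbers
```

With those assumptions the brain comes out several hundred thousand times more energy efficient than the machine, which is the sense in which today's power ceiling can be called "soft": it constrains current hardware, not what physics allows.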
Filip Larsen said:If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those who still care about this issue.
Oh, but I agree; the problem with arguments is that I would like them to be rooted in science, not in fantasy (not that you do, but Sam Harris does, and this thread is a perfect place to debunk them).
anorlunda said:Did you forget that we did decide to make the smallpox virus extinct?
The smallpox virus is, by no means, extinct...
Charles Kottler said:If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
What makes you think decisions are made in the best interest of society right now with actual people in charge?
Averagesupernova said:What makes you think decisions are made in the best interest of society right now with actual people in charge?
jack action said:I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»? Are we saying as humans: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't be dying.
billy_joule said:Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates.
Source?
Greg Bernhardt said:Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?
billy_joule said:Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't be dying.
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And a species that has exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.
Filip Larsen said:I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI.
Bystander said:Source?
United Nations Environmental Programme said:Biodiversity loss is real. The Millennium Ecosystem Assessment, the most authoritative statement on the health of the Earth's ecosystems, prepared by 1,395 scientists from 95 countries, has demonstrated the negative impact of human activities on the natural functioning of the planet. As a result, the ability of the planet to provide the goods and services that we, and future generations, need for our well-being is seriously and perhaps irreversibly jeopardized. We are indeed experiencing the greatest wave of extinctions since the disappearance of the dinosaurs. Extinction rates are rising by a factor of up to 1,000 above natural rates. Every hour, three species disappear. Every day, up to 150 species are lost. Every year, between 18,000 and 55,000 species become extinct. The cause: human activities.
The full reports can be found here:
jack action said:The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And a species that has exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.
jack action said:How can mankind gradually give the decision process to machines without ever noticing it going against their survival?
It's happened countless times between smart and stupid humans, and it'll continue to happen. Control through deception is a recurring theme in human history.
jack action said:If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?
Boing3000 said:But you still haven't provided us with any clues as to why that is a risk.
Boing3000 said:A fusion reactor may kill me, we know very precisely how, with some kind of real probability attached.
I then mitigate it with some other benefit I get from it. That's what I call intelligence: balance, and constant evaluation.
Boing3000 said:First, FLOPS are not intelligence. If stupid programs run on a computer, then more FLOPS will lead to more stupidity.
Boing3000 said:That's just false. Power consumption of data centers is already an issue.