Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?
stevendaryl said:If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.
stevendaryl said:(Maybe they could evolve goals?)
Greg Bernhardt said:So we're kinda doomed as a species. I mean obviously at some point, nothing last forever, but this could spell our demise prematurely.
Greg Bernhardt said:Are we taking the dangers seriously enough or does Sam Harris have it wrong?
jack action said:That is pure nonsense.
Intelligence is the capacity to imagine things.
Say we built a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being would only be required to do ape things. He might even get killed rather quickly if he diverged too much from the group. I can't imagine an ape watching a human being build a spaceship and still working to feed him, protect him, care for him and cover all of his needs without even understanding what he does.
jack action said:Are we going to build this machine such that it can be autonomous? Why would we do that?
Filip Larsen said:or that we will end up like dumb humans cared for by smart machines,
Or a more likely scenario would be smart humans cared for by dumb machines.
stevendaryl said:He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)
jack action said:Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves or just protect themselves for that matter?
Given the ease with which most things can now be obtained over the internet, and the developments in visual recognition software, autonomous vehicles and robots, I do not think it will be too long until someone builds a population of self maintaining machines, with spare parts ordered automatically.
256bits said:Or a more likely scenario would be smart humans cared for by dumb machines.
Charles Kottler said:A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation which over time could develop in completely unpredictable directions.
stevendaryl said:As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive.
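The dynamic Charles Kottler and stevendaryl are describing can be sketched with a toy evolutionary loop. This is a minimal illustration under invented assumptions, not any real AI training setup: the "agents" here are bare parameter vectors rather than neural networks, and the drifting `criteria` vector stands in for a fitness function that is itself allowed to vary over generations.

```python
import random

random.seed(0)

# A toy "agent" is just a parameter vector; a real system would use a
# neural network, but the selection dynamics are the same.
def make_agent():
    return [random.uniform(-1, 1) for _ in range(4)]

def mutate(vec, rate=0.1):
    # Add small Gaussian noise to every parameter.
    return [w + random.gauss(0, rate) for w in vec]

def fitness(agent, criteria):
    # Score an agent against the current selection criteria (dot product).
    return sum(a * c for a, c in zip(agent, criteria))

# The selection criteria start out fixed by the designer...
criteria = [1.0, 0.5, -0.5, 0.0]
population = [make_agent() for _ in range(20)]

for generation in range(50):
    population.sort(key=lambda a: fitness(a, criteria), reverse=True)
    survivors = population[:10]                       # keep the fittest half
    population = survivors + [mutate(a) for a in survivors]
    # ...but if the criteria themselves are allowed to drift slightly each
    # generation, the selection pressure can wander in unplanned directions.
    criteria = mutate(criteria, rate=0.01)

best = max(population, key=lambda a: fitness(a, criteria))
print(round(fitness(best, criteria), 3))
```

After 50 generations the agents are highly fit, but against criteria that no longer match what the designer originally specified, which is exactly the "unpredictable direction" worry expressed above.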
stevendaryl said:I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it, already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.
Filip Larsen said:algorithms we don't really understand the result of and where the result can change autonomously (by learning) to give "better" results.
Filip Larsen said:It seems fairly obvious that autonomously interacting and data-sharing learning systems eventually can be made to outperform non-interacting systems simply because then each system can provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy.
Charles Kottler said:I do not think it will be too long until someone builds a population of self maintaining machines, with spare parts ordered automatically.
Charles Kottler said:As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'.
jack action said:First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom.
jack action said:For machines to develop such a system that wouldn't include humans is nonsense (Something like «The Terminator» or «The Matrix» movies).
jack action said:I just can't imagine that all of this evolution could happen so fast without us being able to notice it.
jack action said:Making smarter machines is not nonsense; thinking they will endanger the human species is.
Filip Larsen said:I am referring to autonomy to make decisions.
Filip Larsen said:What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?
cosmik debris said:Are you assuming here that we will build in some sort of control like Asimov's rules?
.Scott said:When the "technological singularity" happens, it will happen to someone - someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?
jack action said:How is that [technological singularity] different from any technological improvement done for thousands of years now? Invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species somehow managed to survive those technological advancements, shared by only a few individuals at first.
When the technological singularity is reached, that technology can be used to develop additional, more advanced technology faster than people can develop it. So it can be used to outpace all other technological development. If that point is reached by someone without others soon noticing, it could put one person in a very powerful position.
jack action said:That is not a problem as long as you can turn off the machine.
jack action said:Why would we remove «proof of concept» when building something new, even if it is done by superior AI?
jack action said:I still expect tests, regulations and protocols to be around.
Filip Larsen said:Well, there is one way that might work, and that is using an equally powerful AI to evaluate the result or operation of other AI's.
Destined to build a super AI that will destroy us?
No.
jack action said:Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has something to do with it, and I don't think that this will endanger the human species, just society as we know it.
jack action said:@Filip Larsen:
I have my theory, but I don't know how widely it is shared. We tend to keep our kids constantly busy. We never allow our kids to just sit and complain about being bored while telling them to deal with it. Being bored breeds imagination and wandering minds. I grew up that way, and I see things the same way you do, jack action. Most of the time kids are kept busy so the parents have an easier time dealing with them.
... And I don't understand how we got from people looking out at the night sky, so curious about finding star patterns and trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-earth in one's hand, where the same kind of people don't even want to use a screwdriver to open it and see what it looks like on the inside. Where did curiosity go?
.Scott said:I have a couple of significant problems with his descriptions.
Only a couple.
Boing3000 said:His blind faith in some mythical (and obviously unspecified) "progress" is also mind-boggling.
Filip Larsen said:I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
You know, that's the first time I've gotten a coherent and rational response to a genuine statement. You are kind of catching me off guard, because generally what I get is downright hysterical denial and revisionism (which is, you'll have guessed, very hard to argue with).
Filip Larsen said:Usually such arguments are very difficult to establish, because you effectively have to prove that something will never happen while humanity undergoes multiple potentially disruptive social and technological changes, which of course is much more difficult than proving that the same something just might or might not happen.
That is very, very true. Proving a negative is something that no scientifically minded person (which by no means ... means an intelligent person) will ever do.
Filip Larsen said:What I am looking for is a kind of argument that fairly clearly shows that there is no route from today leading to "destruction of human civilization"
I suppose you are talking about that.
Filip Larsen said:without breaking one or more physical laws along the way and without a relatively small group of humans (say 1% of humans) being able to "lead us into doom" (whether unintentionally or with malice) without everyone else being sure to notice well in advance.
Super-intelligent AI (like every fantasy based on infinite growth) breaks the first law of thermodynamics.
Filip Larsen said:I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI. Can you point to such serious arguments or perhaps provide some here?
jack action said:Personally - without having any other arguments than those I already presented - I'm more inclined to go towards the 0.0001% end of the scale.
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.
Boing3000 said:To answer your point again (in a slightly different perspective), you have to answer what risk there is, what kind of harm will happen to whom, and how.
Boing3000 said:Super-intelligent AI (like every fantasy based on infinite growth) breaks the first law of thermodynamics.
Normal serious AI (that is, humankind) has trouble knowing even what intelligence means and where it comes from.
Boing3000 said:Shouldn't the conversation end there?