Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?
The forum discussion centers on the implications of superintelligent AI as presented by Sam Harris in his TED Talk. Participants express concerns about the potential dangers of AI evolving beyond human control, with some arguing that the lack of understanding and cooperation among nations could exacerbate these risks. Key points include the distinction between intelligence and self-awareness in AI, the potential for AI to develop unforeseen goals, and the comparison to global issues like climate change. The conversation highlights a mix of skepticism and optimism regarding AI's future impact on humanity.
PREREQUISITES: This discussion is beneficial for AI researchers, ethicists, policymakers, and anyone interested in the future of technology and its societal implications.
stevendaryl said: If a problem involves cooperation among all major countries on Earth, you can pretty much bet that it isn't going to happen.
stevendaryl said: (Maybe they could evolve goals?)
Greg Bernhardt said: So we're kinda doomed as a species. Obviously, at some point, nothing lasts forever, but this could spell our demise prematurely.
Greg Bernhardt said: Are we taking the dangers seriously enough, or does Sam Harris have it wrong?
jack action said: That is pure nonsense.
Intelligence is the capacity to imagine things.
Say we built a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being would only be required to do ape things. He might even get killed rather quickly if he diverged too much from the group. I can't imagine an ape watching a human being build a spaceship while the ape works to feed him, protect him, care for him and cover all of his needs without even understanding what he does.
jack action said: Are we going to build this machine such that it can be autonomous? Why would we do that?
Filip Larsen said: or that we will end up like dumb humans cared for by smart machines,
Or a more likely scenario would be smart humans cared for by dumb machines.
stevendaryl said: He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)
jack action said: Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves, or just protect themselves for that matter?
Given the ease with which most things can now be obtained over the internet, and the developments in visual recognition software, autonomous vehicles and robots, I do not think it will be long before someone builds a population of self-maintaining machines, with spare parts ordered automatically.
256bits said: Or a more likely scenario would be smart humans cared for by dumb machines.
Charles Kottler said: A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation, which over time could develop in completely unpredictable directions.
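A minimal sketch of the population-and-fitness scheme Charles Kottler describes, in Python; the network representation and the fitness function here are toy placeholders, not anything from the thread. The point it illustrates is that selection only optimizes whatever the fitness function rewards, so if that function were itself allowed to vary, the population could drift in exactly the unpredictable directions he suggests.

```python
import random

# Toy "network": just a flat list of weights. Real neuroevolution would
# mutate the weights (and sometimes the topology) of actual networks.
def random_network(size=4):
    return [random.uniform(-1.0, 1.0) for _ in range(size)]

# Mutation: add small Gaussian noise to each weight.
def mutate(net, rate=0.1):
    return [w + random.gauss(0.0, rate) for w in net]

# Hypothetical fitness function: reward networks whose weights sum to 1.
# Selection pressure points wherever this function points.
def fitness(net):
    return -abs(sum(net) - 1.0)

population = [random_network() for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # keep the fittest half
    offspring = [mutate(net) for net in survivors]  # refill with mutated copies
    population = survivors + offspring

print(fitness(max(population, key=fitness)))  # best score approaches 0.0
```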
stevendaryl said: As far as I know, there hasn't been much AI research done about machines developing their own goals (except as means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen: you'd need robot reproduction where only the fittest survive.
stevendaryl said: I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are doing it already. Will the incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.
Filip Larsen said: algorithms we don't really understand the result of, and where the result can change autonomously (by learning) to give "better" results.
Filip Larsen said: It seems fairly obvious that autonomously interacting and data-sharing learning systems can eventually be made to outperform non-interacting systems, simply because each system can then provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy.
Charles Kottler said: I do not think it will be long before someone builds a population of self-maintaining machines, with spare parts ordered automatically.
Charles Kottler said: As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'.
jack action said: First, a single machine cannot be autonomous, just like humans are not. We need plants, insects and the rest of the animal kingdom.
jack action said: For machines to develop such a system that wouldn't include humans is nonsense (something like "The Terminator" or "The Matrix" movies).
jack action said: I just can't imagine that all of this evolution could happen so fast without us being able to notice it.
jack action said: Making smarter machines is not nonsense; thinking they will endanger the human species is.
Filip Larsen said: I am referring to autonomy to make decisions.
Filip Larsen said: What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems, and that you really should relax and look forward to cheap electricity?
cosmik debris said: Are you assuming here that we will build in some sort of control, like Asimov's rules?
.Scott said: When the "technological singularity" happens, it will happen to someone: someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, who would he/she trust?
jack action said: How is that [the technological singularity] different from any technological improvement made over the past few thousand years? The invention of the knife, the wheel, the bow and arrow, gunpowder, medicine, the boat, etc. The human species somehow managed to survive those technological advancements, which were at first shared by only a few individuals.
When the technological singularity is reached, that technology can be used to develop additional, more advanced technology faster than people can develop it, so it can be used to outpace all other technological development. If that point is reached by someone without others soon noticing, it could put one person in a very powerful position.
jack action said: That is not a problem as long as you can turn off the machine.
jack action said: Why would we remove "proof of concept" when building something new, even if it is done by superior AI?
jack action said: I still expect tests, regulations and protocols to be around.
Filip Larsen said: Well, there is one way that might work, and that is using an equally powerful AI to evaluate the result or operation of other AIs.
Destined to build a super AI that will destroy us?
No.
jack action said: Although I agree with your concern about people putting too much trust in machines they don't understand, I don't agree that AI has anything to do with it, and I don't think that this will endanger the human species, just society as we know it.

jack action said: @Filip Larsen:
... And I don't understand how we got from people looking out at the night sky, so curious about finding star patterns and just trying to define what those lights were, to putting an image-box-that-can-connect-to-someone-on-the-other-side-of-the-earth in one's hand, while the same kind of people don't even want to use a screwdriver to open it and see what it looks like on the inside. Where did the curiosity go?
I have my theory, but I don't know how widely it is shared. We tend to keep our kids constantly busy. We never allow our kids to just sit and complain about being bored while telling them to deal with it. Being bored breeds imagination and wandering minds. I grew up that way, and I see things the same way you do, jack action. Most of the time kids are kept busy so that the parents have an easier time dealing with them.
.Scott said: I have a couple of significant problems with his descriptions.
Only a couple.