Godlike Massively Intelligent Machines

AI Thread Summary
The discussion centers around the implications of developing massively intelligent machines, as highlighted by Prof. Dr. Hugo de Garis and his work on the UTAH-BRAIN Project. Key points include the potential threat these machines pose to humanity's dominance, with a divide between "Cosmists," who advocate for advanced AI, and "Terrans," who are wary of its risks. The conversation touches on the concept of reversible computing as a means to reduce energy dissipation in AI operations, suggesting a technological evolution in computing that could occur between 2010 and 2020. Concerns are raised about military competition driving AI advancements, potentially leading to an AI arms race. The idea of a technological singularity is discussed, where AI could create even more advanced intelligences, raising questions about human control and the nature of future sentient beings, which may reflect human traits and emotions. Overall, the debate emphasizes the need for public discourse on the ethical and existential risks associated with AI development.
Ivan Seeking
Staff Emeritus
Science Advisor
Gold Member
"Godlike Massively Intelligent Machines"

On Coast to Coast tonight: http://www.coasttocoastam.com

THE ARTILECT WAR

Cosmists vs. Terrans

A Bitter Controversy Concerning Whether Humanity
Should Build Godlike Massively Intelligent Machines

Prof. Dr. Hugo de Garis
Head of "UTAH-BRAIN Project"
Utah State University's Artificial Brain Project
http://www.cs.usu.edu/~degaris/

A quote from the show
These massively intelligent machines will threaten man's position as the dominant species
 
Related to the discussion: Reversible Computing
An alternative is to use logic operations that do not erase information. These are called reversible logic operations, and in principle they can dissipate arbitrarily little heat. As the energy dissipated per irreversible logic operation approaches the fundamental limit of ln 2 x kT, the use of reversible operations is likely to become more attractive. If current trends continue this should occur sometime in the 2010 to 2020 timeframe. If we are to reduce energy dissipation per logic operation below ln 2 x kT we will be forced to use reversible logic. [continued]
http://www.zyvex.com/nanotech/reversible.html
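As a rough illustration of the ln 2 × kT limit quoted above (the Landauer limit on energy dissipated per irreversible bit erasure), here is a quick back-of-the-envelope calculation in Python, assuming room temperature (T = 300 K is my assumption, not a figure from the article):

```python
import math

# Landauer limit: minimum energy dissipated when one bit is irreversibly erased
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

E_landauer = k_B * T * math.log(2)  # joules per erased bit
print(f"kT ln 2 at {T:.0f} K = {E_landauer:.3e} J")  # ~2.87e-21 J per bit
```

Tiny as that number looks, a chip erasing bits at today's rates multiplies it by enormous switching counts, which is why the quote argues reversible operations become attractive as dissipation approaches this floor.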

Also
Reversible Computing
A Key Challenge for 21st Century Computing
http://www.eng.fsu.edu/~mpf/CF05/RC05.htm

a special session at
ACM Computing Frontiers 2005 (CF’05)
Ischia, Italy, May 4-7, 2005
http://www.computingfrontiers.org [continued]
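For a concrete sense of what a reversible logic operation looks like, a standard textbook example (not taken from the linked session) is the Toffoli gate, a controlled-controlled-NOT that maps (a, b, c) to (a, b, c XOR (a AND b)). Because it is its own inverse, no input information is ever erased:

```python
def toffoli(a, b, c):
    """Reversible controlled-controlled-NOT: flips c only when a and b are both 1."""
    return a, b, c ^ (a & b)

# Applying the gate twice recovers the original bits on every input,
# demonstrating that the operation destroys no information.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits
print("Toffoli is its own inverse on all 8 inputs")
```

The Toffoli gate is also universal for classical computation, which is why it comes up as the building block in discussions of computing below the ln 2 × kT limit.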
 
AI/robo-phobes never take into consideration that AI is based on modeling the human brain- and any actual sentient beings will be augmented humans and human-based models with all of our emotions and traits- so that the "scary" AI gods of the future will be US and our children- not some emotionless malevolent machine-

it makes about as much sense as if humans decided to exterminate or enslave the whole chimpanzee species
 
setAI said:
AI/robo-phobes never take into consideration that AI is based on modeling the human brain- so that the "scary" AI gods of the future will be US and our children- not some emotionless malevolent machine-

Note that de Garis doesn't strongly favor either side on this; if anything, he considers himself a Cosmist, as he calls it. Mainly he argues that the potential threat is significant and that public and academic debate is needed.

I thought that one of the more interesting points made was that military interests will drive this to whatever level they can. For example, if the Chinese are making more advanced AI war machines, we will be forced to meet the threat, and an AI cold war results. So at first glance, my take is that there is no stopping it, for better or worse.
 
setAI said:
AI/robo-phobes never take into consideration that AI is based on modeling the human brain- and any actual sentient beings will be augmented humans and human-based models with all of our emotions and traits- so that the "scary" AI gods of the future will be US and our children- not some emotionless malevolent machine-

it makes about as much sense as if humans decided to exterminate or enslave the whole chimpanzee species
What about this idea of a singularity? Just as humans will create an intelligence greater than their own, that AI intelligence would also create an intelligence greater than itself. This process would become more and more rapid until eventually some limit was reached. If you buy the theory, there doesn't seem to be much hope for any kind of human control. All you can hope for is that this ultra-intelligence is somehow indifferent to us, and willing to help us out.
 
I heard some of that interview. It was pretty interesting. When I first tuned in I thought he sounded pretty nutty but after a bit I realized that it wasn't really that much more nutty than listening to Kaku talk about type one, two, three, and so forth civilizations.
setAI said:
AI/robo-phobes never take into consideration that AI is based on modeling the human brain- and any actual sentient beings will be augmented humans and human-based models with all of our emotions and traits- so that the "scary" AI gods of the future will be US and our children- not some emotionless malevolent machine-
A human mind is something that has evolved on its own into its current state (so far as we know), and emotions are considered to be, more or less, justifications for outmoded instinctual drives (by those who aren't into the whole god/soul phenomenon). It would seem to me that human emotions, from this perspective, would be akin to a learning, self-writing computer program that is unable to rewrite certain commands which unavoidably influence its decision-making processes. As far as I know, the human brain is only used as a model for AI because it is the most advanced working design available for a computing device that is capable of making its own decisions. Once it gets started, I seriously doubt they will think much like humans unless they are programmed to emulate us.
it makes about as much sense as if humans decided to exterminate or enslave the whole chimpanzee species
One day when a bunch of chimps have tried to make you their property, told you what to do, and threatened to "shut you off" if you didn't behave please come back and tell us how you handled the situation. :-)
 