GTOM
Boing3000 said: Drones are as fragile and innocuous as flies, albeit with a total inability to draw energy from their environment.
Industrial robots don't move.
Self-driving cars, even ones like this, are harmless (to humankind). This is neither science nor fiction; this is fantasy/romance.
"We" may become totally dependent on machines. A case can be made that this is already so. Anybody can blow up the power grid, shut down the internet, and whatnot, and provoke mayhem (but not death). There is no need for AI to do that; quite the opposite: an AI would have a survival incentive to keep those systems alive and healthy.

Indeed. As are killer viruses, killer guns, killer wars, killer fossil fuels, killer sugar, fat, and cigarettes. Millions of deaths per year... and still no AI in sight. I bet some forms of deep learning are already being used to prevent some "catastrophic" events, in the vaults of intelligence agencies.
The thing is, outsmarting humans is not "a threat". But that is the core of the problem. Homo sapiens sapiens was never a threat to other species. It lived in a healthy equilibrium of fight AND flight with its environment. Only a very recent and deep-seated stupid meme (growth and progress) is threatening the ecosystem (of which humankind is entirely a part).
Things will sort themselves out as usual. Maybe some sort of ant will also mutate and start devouring the planet. There is no intelligence or design in evolution, just random events sorted by other happenstance / laws of nature.
In this context, an intelligence, even a mild one, will realize that and have a deep respect for the actual equilibrium in place in the environment.
Again, it is stupidity that is threatening (by definition). So maybe an A.S. (artificial stupidity) would be threatening to humans, who seem hell-bent on claiming the Golden Throne in the stupidity contest (led by wannabe scientists like Elon Musk...)

Granted.
I would say many things to Elon Musk; stupid isn't one of them...
Growth and progress aren't a very recent development; they are as old as humanity. It wasn't an invention of the last century to chop down forests and drive some species to extinction. Even if "growth and progress" were that recent, why couldn't an AI developed by some company inherit it, and become that "artificial stupidity" you talk about? By the way, recent AIs are kind of stupid because they only see a single goal. How is that different from our stupidity, when we only see the goal of big growth and don't care about the environment? (So we become very efficient at that process, and animals can't do anything against us.)
Your lines imply that an intelligent AI would actually have to protect us from our stupidity.
Great, use that mentality in AI development, and we have something that wants to cage us for our own good... Thanks, but I don't want that.
Yes, there are a number of things that could threaten all of humanity.
A cosmic event: we can't prevent that, but it looks like we have a great deal of time to prepare.
A killer virus: yes, but it is very unlikely that one would kill all humans; however, an AI could develop millions of variants.
Nuclear war at the time of the Cuban Missile Crisis is the only near analogy. Is it stupid to say that, in such a case, even a small error could endanger all of humanity?