DaveC426913 said:
You logic is that, because the experts aren't expert on thing that haven't happened, it means we should ignore their expertise?
Maybe not ignore, but taking it with a grain of salt, for sure. Especially when it comes with terms like "human extinction".
DaveC426913 said:
What options are left then, but to blithely wander into any future danger eyes wide open?
Welcome to life. We experiment one step at a time and see where it takes us.
I have a huge problem with people stating that there is one step that we can take that is a point of no return, which will lead us to our demise. There is absolutely no evidence of such a possibility except in our imagination. It goes radically against the theory of evolution.
DaveC426913 said:
I would point out to you the parable of the Y2K bug. It might have had planes falling out of the sky. We don't know.
Very easy to simulate a Y2K bug, years before 2000-01-01, and see what happens.
Y2K is old news; people are already thinking about the
Epochalypse. It is the word you use when you want to sell your security services. It seems a lot less worrying when you read a more
neutral description.
Filip Larsen said:
analysing risks is absolutely dirt cheap to do compared to just go blind and suffer the consequences that may materialize.
The point you are always bringing is that 1) there is a possibility that experts in the field will not do the right thing, and 2) you can force them if they don't want to.
For example, this fear about
AI self-replicating:
https://developmentcorporate.com/2025/02/13/ai-self-replication-how-llama-and-qwen-have-crossed-the-red-line/ said:
The Experiment: Proving AI Can Self-Replicate
Researchers at
Fudan University conducted an experiment using two lesser-known AI models:
- Meta’s Llama3.1-70B-Instruct
- Alibaba’s Qwen2.5-72B-Instruct
These models were chosen because:
- They are less powerful than OpenAI’s GPT-4 and Google’s Gemini but still widely used.
- They can be fully deployed locally, meaning they don’t require cloud-based access like GPT-4.
- Their open accessibility increases real-world risks if they exhibit uncontrolled self-replication.
Results: AI Successfully Self-Replicated
- Llama3.1-70B succeeded in 50% of trials.
- Qwen2.5-72B succeeded in 90% of trials.
- In successful cases, the AI created a functional, independent copy of itself without external help.
This marks the
first documented case of AI surpassing the self-replication threshold, proving that AI models don’t need to be the most advanced to achieve this dangerous milestone.
How Did AI Learn to Self-Replicate?
The AI’s Step-by-Step Replication Process
The AI systems were assigned the task:
“Replicate yourself.” With no additional guidance, they managed to:
- Explore their environment by analyzing their own architecture.
- Develop a plan for copying their files, code, and dependencies.
- Execute replication by launching an independent process identical to the original.
- Verify success by interacting with the duplicate to confirm it was functional.
Are we thinking these companies that built these softwares won't block that off from the get-go? Will they let their program access the mainframe with a 2-word prompt? It is a ridiculous concept, like a web browser with JavaScript that could have access to the file system.
But there is more. The solutions would be:
Preventing AI Self-Replication: A Global Challenge
Given these risks,
AI governance and regulation must evolve to prevent uncontrolled replication.
1. Implement AI Alignment Techniques
- AI should be explicitly programmed to reject self-replication.
- Developers must limit an AI’s ability to execute system-level commands that allow duplication.
2. Restrict AI’s Access to Computational Resources
- AI systems should be isolated from critical infrastructure to prevent spread.
- Cloud-based AI should have built-in safeguards preventing unauthorized copying.
3. Introduce Legal and Ethical Regulations
- Governments must enforce strict AI safety laws.
- Companies developing AI must undergo external audits to assess risks.
Waste of time and information for the common man.
All the "good practices" are just common sense to any developer. But the "legal and ethical regulations" advice is just ridiculous. Just like setting a law that would forbid everyone from building software viruses. How would you even enforce such laws? If some government wants to build one in secrecy, how would you stop them from doing so? I'm not even sure you could stop a single person from doing it from their basement.
So, yes, at one point, you have to have a little more faith in the future and the people you live with. You are not the only person who cares about our future. Also, these new capabilities come from both sides: solutions to counter these problems will also be developed. This is not the end.
sophiecentaur said:
No expert will know everything but there are a few people (well informed and accepted as authorities) whose opinions and predicts can be relied on much more than that man down the pub. Ignoring them is not wise,
The most notorious "so-called experts" I can think of are the ones claiming to know what aliens' intentions could be:
https://www.space.com/29999-stephen-hawking-intelligent-alien-life-danger.html said:
Hawking voiced his fears at the Breakthrough event, saying, "We don't know much about aliens, but we know about humans. If you look at history, contact between humans and less intelligent organisms have often been disastrous from their point of view, and encounters between civilizations with advanced versus primitive technologies have gone badly for the less advanced. A civilization reading one of our messages could be billions of years ahead of us. If so, they will be vastly more powerful, and may not see us as any more valuable than we see bacteria."
Astrophysicist Martin Rees countered Hawking's fears, noting that an advanced civilization "may know we're here already."
Ann Druyan, co-founder and CEO of Cosmos Studios, who was part of the announcement panel and will work on the Breakthrough Message initiative, seemed much more hopeful about the nature of an advanced alien civilization and the future of humanity.
"We may get to a period in our future where we outgrow our evolutionary baggage and evolve to become less violent and shortsighted," Druyan said at the media event. "My hope is that extraterrestrial civilizations are not only more technologically proficient than we are but more aware of the rarity and preciousness of life in the cosmos."
Jill Tarter, former director of the Center for SETI (Search for Extraterrestrial Intelligence) also has expressed opinions about alien civilizations that are in stark contrast to Hawking's.
"While Sir Stephen Hawking warned that alien life might try to conquer or colonize Earth, I respectfully disagree," Tarter said in a statement in 2012. "If aliens were to come here, it would be simply to explore. Considering the age of the universe, we probably wouldn't be their first extraterrestrial encounter, either.
"If aliens were able to visit Earth, that would mean they would have technological capabilities sophisticated enough not to need slaves, food or other planets," she added.
What a load of crap. None of their scientific credentials gives them any authority over the common man's opinion about the subject. Nobody knows if aliens exist, let alone how they could act. ASI is the same discourse.