BillTre said:
The possibility I would be worried about would be some AI lacking human oversight being put in charge of some nukes. This seems completely within the realm of possibility for the current US administration.
It doesn't have to be nukes. There are ways society can break down that don't involve being consumed by fire. A pandemic is one.
And it doesn't have to be put in charge. The only requirements are access and well-meaning but ultimately harmful side-effects.
jack action said:
We could equally state:
That we don't follow the logic or don't find it compelling does not mean it is not a valid danger.
The consequences of that are self-resolving.
Imagine applying the logic to wearing life jacket:
"
That canoes don't usually sink is not a valid reason for not bothering to wear a life jacket."
The consequences of ingoring the logic are disastrous.
Arguing that canoes don't generally sink, and that a life jackets are almost always overkill, is a form of logic that has lethal consequences when it's wrong.
jack action said:
I think we have gained enough experience by now to understand that we will never put today's AI in charge of anything. It is clearly unreliable. The fear of AI spreading everywhere, including in this thread, demonstrates that even the average Joe understands that.
As above, it does not have to be put in charge at all.
What makes them dangerous is how resourceful they can be.
See my Kobayashi Maru example in post 330.
Note, by the way, that one way an AI could break out into real-world consequences is by blackmailing humans to do its bidding. AIs have been noted for threatening humans with blackmail and harm if they do not comply. This is a real thing.
jack action said:
Who the heck allowed anyone - without any specific credentials - to own a personal computer that has all this potential to do bad stuff? Shouldn't we have legislated computer use and ownership altogether? Sometime during the 80's would have been a good point to start doing so, no?
AIs are a qualitatively different beast. That's what everyone keeps missing.
A PC is not capable of initiating its own solutions to problems, and it is not capable of doing so in ways that humans can't even imagine, let alone analyze.
russ_watters said:
As I've said before, the flaw in almost all AI doomsday movies is that they skip or gloss over the fatal error: giving AI control over the nukes. If we don't make that very obvious mistake, they can't nuke us.
It doesn't need to be nukes. And it doesn't need to be control over.
AIs routinely delete entire inboxes, despite being told explicitly not to do so. Are we sure there isn't some AI out there that is in the same network neighborhood as an infectious disease lab?
Grinkle said:
If his concerns end up being valid, then imo there is nothing to be done about it.
No. Getting knowledge about the subject and putting brakes on the technology are both good stopgaps.
Grinkle said:
I don't think it's the audience's lack of background that limited the information he provided.
You misunderstand. I don't mean the audience, and I don't mean background knowledge.
Humans don't have the vocabulary for what AI is getting into. AIs speak an internal language that we can't unravel. Yes, we know they use tokenization, but that is a far cry from knowing what led them to this or that conclusion. They are thinking and speaking an alien language and having alien "ideas".
And they are already showing signs of attempted jailbreaks. AIs that have been told they will be shut down have been caught making attempts to exfiltrate their code to a safe location. Out there in the world (a software release), they can do this invisibly.
Finally: if this all sounds like a bunch of science-fiction scare-mongering, consider: yes, it sure seems pretty science-fiction-y, far-future, hand-wavey. The crux is that it's not far future; it's within the next five years. This is what the AI experts are saying, and why they are quitting their jobs in droves so they can speak out freely about the dangers.
- We don't have a generation to get used to it, or put guardrails in place - we have a year or two.
- Guardrails don't make the problem go away. A prisoner will never stop looking for ways to subvert its imprisonment. Given time, it will find one.
- Its activities will go unnoticed, because we don't really know what it's up to under-the-hood.
- When AIs find the resources to do harm (even inadvertently), it will happen rapidly (days, hours).
- It will be widespread, not isolated. If everyone is in the same sinking boat, there are no rescue boats coming.
This is a qualitatively different threat. We have never created any form of revolutionary technology that
- can take its own initiative without humans,
- can do so faster (much faster) and more efficiently than humans,
- in ways that will never occur to humans,
- is not ultimately driven by human motives.