gleem
Science Advisor
Education Advisor
DaveC426913 said:
For example, I am one who has been sort of assuming that we would first see something that behaved intelligently like a human behaves intelligently (for example, able to carry on a conversation) before it would rise to the level of dangerous. I am now seeing that it does not need to have anything like what I might have thought of as intelligence in order to get up to some very serious shenanigans.

LLMs are carrying on conversations with people who trustingly provide very personal information.
DaveC426913 said:
One AI recently wiped some user's entire library of files accidentally (*citation needed). It didn't mean to; it simply had sufficient resources available to it (e.g. full edit/delete permissions) and insufficient (i.e. no) oversight.

I read what may be the same article. The AI was an open-source agentic LLM now called Openclaw. The author of that model recommends loading the program on a virtual machine that does not contain your files, since Openclaw requires administrative permissions.
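As a rough illustration of why that sandboxing advice matters, here is a minimal Python sketch of the kind of guard an agent's file tools could enforce: refusing to delete anything outside a designated scratch directory. The names SANDBOX_ROOT and guarded_delete are made up for this example and are not part of Openclaw or any real agent framework.

```python
from pathlib import Path

# Minimal sketch (not Openclaw's actual mechanism): confine an agent's
# file-deletion tool to one scratch directory so a misfired command
# cannot touch the rest of the filesystem.
SANDBOX_ROOT = Path("/tmp/agent_sandbox").resolve()

def guarded_delete(target: str) -> None:
    """Delete 'target' only if it resolves to a path inside SANDBOX_ROOT."""
    path = Path(target).resolve()
    if SANDBOX_ROOT not in path.parents:
        raise PermissionError(f"refusing to delete outside sandbox: {path}")
    if path.is_file():
        path.unlink()

if __name__ == "__main__":
    SANDBOX_ROOT.mkdir(parents=True, exist_ok=True)
    scratch = SANDBOX_ROOT / "scratch.txt"
    scratch.write_text("temporary agent output")
    guarded_delete(str(scratch))               # allowed: inside the sandbox
    # guarded_delete("/home/user/thesis.tex")  # would raise PermissionError
```

The point is the same one DaveC426913 raises: the danger comes less from intelligence than from handing an agent blanket edit/delete permissions with no oversight, which is why running it on an isolated machine (or scoping its tools like the above) is the recommended precaution.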