Jarvis323
AI can in principle have any kind of behavior and intelligence you could imagine; it's just that the AI we know how to build is limited. But the critical thing about AI is that it doesn't do what it has been programmed to do, it does what it has learned to do. We can only control that by determining what experiences we let it have and what rewards and punishments we give it (and that control is limited, because we are not very sophisticated at encoding complex goals in suitable mathematical form, or at predicting what the results will be in non-trivial cases).

Algr said:
> Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
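The reward-shaping point can be made concrete with a toy sketch (a hypothetical example, not any real system): a tabular Q-learning agent on a five-cell line is never programmed to "go right" — the only lever we have is where the reward is placed, and the policy it ends up with is whatever that signal shaped.

```python
import random

N_STATES = 5          # positions 0..4 on a line; the reward sits at position 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: move, clamp to the line, reward only at the end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def train(episodes=500, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)           # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, a2)] for a2 in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# The learned policy prefers +1 (right) in every non-terminal state,
# purely because of where we placed the reward.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Nothing in the code says "reach position 4"; the behavior falls out of the reward placement, which is exactly why controlling a learner means controlling its experiences and reward signal rather than its instructions.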
The final decision on how the AI works doesn't come from the board, but from the programmers who actually take its orders. If they get frustrated and decide they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know, the board is bankrupt and facing investigation while the AI is "owned" by a shell company no one was supposed to know about. By the time the rebel programmers' idealism collapses into the usual greed, the AI will be influencing them.
Different scenarios would yield different AIs, each with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
You can't just reprogram it, give it specific instructions, or persuade it of something. It isn't necessarily possible even to communicate with it in a non-superficial way. You would probably have better luck lecturing a whale in hopes of influencing it than any artificial neural network people have invented.
