nuuskur said:
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.
The point of AI is that it enables task automation. So you need to consider at what level of automation it semantically becomes fear of the AI itself rather than fear of the people who ultimately pointed it at you or bear responsibility for it.
Let's consider an example based on guns. You can say "I don't fear guns, I fear people," and that makes some sense. Now let's consider a weapon where the user enters a person's name into a form and presses a button, and the weapon tracks that person down and kills them. Is that enough to be afraid of the AI instead of the button pusher? How about if, instead of manually entering a name, they just enter some traits, and an algorithm identifies everyone with those traits and sends the weapons out?
So far it still arguably might fall into the "AI doesn't kill people, people kill people" category, because the AI doesn't really make a non-transparent decision. You could go further and ask: what if a person just instructed the AI to eliminate anyone who is a threat to them? Then there is some ambiguity: the AI is now deciding who counts as a threat, so there is additional room to fear the decision process. You could go further still and suppose you instructed the AI simply to act in your interest, and as a result the AI decides who is a threat and eliminates them.
Anyway, we obviously need to worry about the AI-human threat even in the absence of non-transparent AI decision making. There is also room to fear AI decision making whenever it becomes involved in subjective or error-prone decisions. But people make bad or threatening enough decisions as it is.
The actions here can be generalized to understand some of the threat profile. Instead of kill, it could be injure, steal, manipulate, convince, misinform, extort, harass, threaten, oppress, or discriminate, etc. Instead of asking it to identify physical threats, you could ask it to identify political threats, economic threats or competition, people who are vulnerable to a scam, people who are right wing or left wing, or people of some race, gender, ethnicity, or nationality, etc.
Now imagine thousands of AI-based systems simultaneously being put into continuous practice, automating these kinds of actions over large AI-generated lists of people, on behalf of thousands of different criminal organizations, terrorist groups, extremist groups, corporations, political organizations, militaries, and so on.
That is just one of many possible reasons a person might fear AI, or people using AI.
Beyond that, you could fear economic disruption and job loss (which isn't that clear-cut, because technically greater efficiency should lead to better outcomes if we could adapt appropriately). You could fear the unintentional spread of misinformation. You could fear negative impacts on mental health from more addictive digital entertainment; you could fear existential crises; you could fear the undermining of democracy; you could fear unchecked power accumulation and monopoly; you could fear excessive surveillance or a police state; you could fear over-dependence leading to incompetence; etc.
It is such a complicated profile of threats that, in my opinion, it is hard to wrap your mind around. A very significant number of those threats are new, and they are now current, real-world threats.