Self-aware in what sense? That's essentially the crux of the concern: we can't control each other's behaviour, so if an AI reached that level of autonomy and were inimical to the human way of life, it might decide on some nefarious course of action to kill us off.
We don't know, of course, whether an AI could even reach this dangerous point (the AIs we've built to date are laughably limited in that regard), but it is possible. What 'model' it would adopt in terms of ethics or higher purpose is equally unknown.
Some say AI has the potential to go horribly wrong for us. The question is whether or not we should fear this.