Nevertheless, a corporation has rights, whatever it is called. Corporate personhood is legalese that attributes characteristics of some sort of sentient behavior to an organization. In fact, corporate rights can come into conflict with the rights of the human individual.
If sentient AIs ever do come about, and are granted rights and responsibilities the same as or similar to human rights, then humans in turn owe the same rights and responsibilities toward AIs.
It has to work both ways.
The discussion is far more complicated than a simple anthropocentric view of the universe allows.
Humans will have trouble with that reciprocity.
Is any human, at the time of this writing, willing to be prosecuted for causing harm to, or even the death of, an AI? Will that viewpoint change as AGIs become more commonplace? Is it moral to send sentient AIs into battle knowing that they are on suicide missions? Would it be ethical and moral to own and sell an AGI, with all the implications that slavery entails? Will an AGI be allowed to hold possessions, including land and real estate, accrue wealth, and vote?