The first thing you have to do is differentiate between philosophy and science. If you study this as philosophy, I think that is a mistake. You should have a philosophy that is not based on phenomena; rather, you should think of phenomena as things you might want to study in a scientific way. For example, you might ask the question: are animals conscious? And then you might ask: what is needed for something like an animal to be conscious? And then you could try to read
this article in particular which seems to be the clearest of your links. But see below why that might not be so useful.
But if you were to say: let us suppose there are only phenomena. Well, I am conscious, so consciousness exists. As a conscious being, I have cause-effect power, so consciousness needs cause-effect power, etc. This is a mistake in my opinion, because philosophy should be about common sense, a common-sense understanding of one's place in the world. "I have a mother, I have a father" is a much better starting point for philosophy, IMHO, and not in an abstract way: we live in my city, I go to my school, so do you, etc.
I have serious doubts that this IIT theory can explain something like animal consciousness because it says in that article:
simple systems can be minimally conscious; complicated systems can be unconscious; there can be true “zombies” – unconscious systems that are functionally equivalent to conscious complexes.
This is like the Chinese Room thought experiment, where a computer in a room responds in Chinese (on a display, I presume) just as a person would. From outside the room, can we tell whether it is a computer or a person? If not, has the computer become conscious? It seems IIT would say it is unconscious but functionally equivalent to a conscious entity.
This is also like one of the first chess-playing machines, called "the Turk". It wasn't actually a computer; a small person hid inside and moved the arm. But from the outside, it played as well as a person could. Does that mean this complex of man and machine has consciousness? I think the person is conscious.
But now suppose you look at the brain as a kind of Turk machine and say that consciousness resides in some particular region or assembly. I think this is already wrong. Suppose person A and person B swap brains. A likes to wear a ponytail, B likes to have bangs. Does that mean the new A will like bangs? No. She has a different face, so she will probably think differently about hairstyles; she now has a face that looks better with a ponytail. So you can see how essential the whole organism is to being conscious. It really doesn't make sense to say consciousness is a property of some tiny part of us.
We don't see animals expressing preferences, although dogs do express some preference for different types of food. But usually they just munch it. Perhaps it is a strategy: by getting nicer food they might feel safer. But still, one doesn't see one dog communicating to another, "I like your coat."
So the quote above says that systems can be minimally conscious, conscious, or zombies. But do they express individuality? Do they recognize their place in the world? To be conscious is to be conscious of the world around you and how you fit into it. I don't think IIT recognizes that, and that is why I don't think it is a good theory.