I pretty much agree that all of these are wrong assumptions. Before I address each one, let me lay out my view of "qualia".
Even for those who think people have a monopoly on qualia, it is still a physical property of our universe. Since you (the reader) presumably have qualia, you can ask yourself whether, when you are conscious, you are always conscious of some information - a memory, the sight before you, an idea, etc. Then you can ask yourself a more technical and difficult question: when you are conscious, how much information are you conscious of in a single moment? How many bits' worth? This is a huge discriminator between you and a computer, because there is no place inside a computer that holds more than one or a few bits at a time. Even in those odd cases where several bits are stored in a single state (a phase or a voltage level), each bit is treated as a separate piece of information, independent of the state of any other bit. So there is no place in a common computer where a more elaborate "conscious state" could exist. This is by careful design.
But there are uncommon computers - quantum computers, where the number of bits in a single state is in the dozens. Currently, Physics has only one known mechanism for this kind of single multi-bit state: quantum entanglement. And as hard as it may be to believe that our warm, wet brains can sustain entanglement long enough to process quantum information and trigger macro-level effects, presuming that it is anything but entanglement suggests that something is going on in our skulls that has not been detected by Physics at all.
And from a systems point of view, it's not difficult to imagine Darwinian advantages that entanglement could provide - advantages that process the kind of information we find associated with qualia. In particular, Grover's algorithm allows a system with access to an entangled "oracle" or set of data elements to find the object with the highest score. This can be applied to the generation of a "candidate intention" - something you are thinking of doing or trying to do. Of the many possible intentions, model the projected result of each one and rank each result by the benefit of that outcome. Then apply Grover's algorithm to find the one with the highest rank. The output of Grover's algorithm is a "candidate intention", a potential good idea. Mull it over, make it better, and if it continues to look good, do it.
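To make that concrete, here is a minimal classical simulation of the idea (no real quantum hardware, just a statevector in numpy). The oracle marks every candidate whose projected benefit beats the current best guess, and repeating Grover's amplification with a rising threshold - the Dürr-Høyer maximum-finding variant - homes in on the best-scoring candidate. The `scores` array and function names are mine, purely for illustration; this is a sketch of the algorithm, not of any actual brain mechanism.

```python
import numpy as np

def grover_max(scores):
    """Statevector simulation of Grover-style maximum finding (Durr-Hoyer).

    The oracle phase-flips every basis state whose score beats the current
    threshold; amplitude amplification then makes those states likely to be
    measured.  Raising the threshold after each improvement converges on the
    best-scoring candidate.
    """
    n = len(scores)
    best = np.random.randint(n)                      # random starting guess
    while True:
        marked = np.flatnonzero(scores > scores[best])
        if marked.size == 0:
            return best                              # nothing better exists
        if marked.size >= n / 2:
            sampled = np.random.randint(n)           # a classical draw already works
        else:
            state = np.full(n, 1 / np.sqrt(n))       # uniform superposition
            k = int(np.pi / 4 * np.sqrt(n / marked.size))
            for _ in range(max(k, 1)):
                state[marked] *= -1                  # oracle: phase-flip the "good" states
                state = 2 * state.mean() - state     # diffusion about the mean
            p = state ** 2                           # Born rule (amplitudes are real here)
            sampled = np.random.choice(n, p=p / p.sum())
        if scores[sampled] > scores[best]:           # keep only improvements
            best = sampled

# Rank each candidate intention by the projected benefit of its outcome,
# then ask Grover for the best one.
benefits = np.array([0.2, 0.9, 0.4, 0.7, 0.1, 0.8, 0.3, 0.6])
print(grover_max(benefits))                          # -> 1, with high probability
```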
So here are my responses to @Jarvis323:
1) The kind of information-processing mechanism I described above would need to be built into larger brain frameworks that are specifically adapted to take advantage of it. It is a very tough system to build up by evolution, and such a mechanism needs to be in place early in the design. In my estimation, all mammals use this mechanism - and thus have some experience of qualia. But let's not get carried away. If they are not as social as humans are, they will have a radically different sense of "self" than we do. We depend on an entire functional community to survive and procreate. We communicate our well-being and are interested in the well-being of others. This is all hard-wired into our sense of "self". So, although qualia may be widespread, the human experience is not.
2) We are in agreement again: I certainly expect a cat to experience qualia. But its social rules involve much less interdependence. We can expect it to deal with pain differently, expecting less from most of its fellow cats. Even if cats could make a verbal promise, why would they worry about keeping it? Huge parts of the human experience have no place in the cat's mind.
3) Clearly the result of the machine is more important than its external methods or internal mechanisms. Which mad-scientist doomsday scenario do you prefer: 1000 atomic bombs triggering nuclear winter, or killer robots killing everyone?
4) "The idea that AI has easily understandable or micromanagable behavior, or can always be commanded, or precisely programmed." In this case, I think you are using "AI" to refer to programming methods like neural nets and evolutionary learning. These things are managed by containment. Tesla collects driving environment information from all Tesla's and look for the AI algorithms that have a high likelihood of success at doing small, verifiable tasks - like recognizing a sign or road feature. If the algorithm can do it more reliably than than the licensed and qualified human driver, it can be viewed as safe. The whole point behind using the AI techniques is to avoid having to understand exactly what the algorithm is doing at the bit level - and in that sense, micromanagement would be counter-productive.
To expand on that last point, AI containment can be problematic. What if the AI is keying off a stop-sign feature of little real relevance - like whether the octagon has its faces or its corners pointing straight up, down, and to the sides? Then fifty million new "point up" signs are distributed, and soon AI vehicles are running through intersections. The problem wouldn't be so much that the AI doesn't recognize the new signs, but that, in comparison to humans, it is doing too poorly.
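One way to catch that kind of failure within the containment approach is to re-test the same black box on perturbed inputs before the world changes under it. The toy below (invented feature encoding, invented thresholds) flags a classifier whose accuracy collapses when the incidental "corners point up" feature is flipped, without needing any insight into its internals.

```python
import random

def accuracy(predict, samples):
    """Fraction of (feature, label) pairs the classifier gets right."""
    return sum(predict(x) == y for x, y in samples) / len(samples)

def shift_report(predict, original, perturbed, max_drop=0.02):
    """Flag a model that leans on an incidental feature: its accuracy should
    not collapse when that feature is perturbed (e.g. signs rotated 22.5 deg)."""
    a, b = accuracy(predict, original), accuracy(predict, perturbed)
    print(f"original: {a:.2%}  perturbed: {b:.2%}")
    return (a - b) <= max_drop

# Toy stand-in: each "sign" is (corners_point_up, is_stop_sign).
# A brittle classifier that keys off the orientation bit passes on today's
# signs but fails once the new "point up" stop signs appear.
random.seed(0)
original  = [(True,  True)] * 90 + [(False, False)] * 10
perturbed = [(False, True)] * 90 + [(False, False)] * 10
brittle = lambda x: x            # "it's a stop sign iff the corners point up"
print("safe:", shift_report(brittle, original, perturbed))
```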
So now let's make a machine that stands on the battlefield as a soldier replacement - able to use its own AI-based "judgement" about what is a target. We can test this judgement ahead of time and, once it demonstrates 50% fewer friendly-fire and non-combatant attacks, deploy it. But since we have no insight into precisely how the targeting decisions are being made, we have no good way to determine whether there are fatal flaws in its "judgement".
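To illustrate why that aggregate test can still hide a fatal flaw, here is a toy evaluation with invented numbers: the machine clears the "50% fewer errors" bar overall, yet a stratified view shows one scenario - smoke, say - where its "judgement" is actually worse than the human baseline. Whether the real tests would even include that stratum is exactly the kind of thing we can't know from the outside.

```python
# Hypothetical error counts: (errors, engagements) per test scenario.
human   = {"daylight": (40, 1000), "night": (50, 1000), "smoke": (30, 1000)}
machine = {"daylight": (2, 1000),  "night": (3, 1000),  "smoke": (50, 1000)}

def overall(counts):
    errors = sum(e for e, n in counts.values())
    total  = sum(n for e, n in counts.values())
    return errors / total

# Aggregate gate: "50% fewer friendly-fire / non-combatant errors" -- it passes.
print(f"human {overall(human):.1%}  machine {overall(machine):.1%}  "
      f"deploy: {overall(machine) <= 0.5 * overall(human)}")

# Stratified view: the fatal flaw lives in a single scenario.
for scenario in human:
    he, hn = human[scenario]
    me, mn = machine[scenario]
    print(f"{scenario:9s} human {he/hn:.1%}  machine {me/mn:.1%}")
```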