@bhobba: I see. It's not so easy to label one's interpretation of quantum theory with a simple name, obviously. I guess there are as many interpretations as there are physicists using QT.
I always understood the MSI such that only those observables of a system, prepared in some pure or mixed state by some well-defined preparation procedure, are determined for which the probability, given by Born's rule, to find a certain possible value (which is necessarily an eigenvalue of the operator representing the observable) is 1 (and then necessarily the probability to find all other values is 0). All other observables simply do not have a determined value. Measuring such an undetermined observable gives one of its possible values with a probability given by Born's rule. Measuring it for a single system doesn't tell us much. We can only test the hypothesis that the probabilities are given by Born's rule by preparing an ensemble (in the sense explained in my previous postings) and doing the appropriate statistical analysis. Simply put: an observable takes a certain value if and only if the system is prepared in an appropriate state, where the corresponding probability to find this value is 1.

The KS theorem tells me that it contradicts quantum theory to assume that the values of undetermined observables are merely unknown but in "reality" have certain values. That would interpret quantum-theoretical probabilities as subjective probabilities in the sense of classical statistical physics, and this is incompatible with QT according to KS. As you say, this doesn't pose a problem for the MSI. As I understand the MSI, on the contrary, it is entirely compatible with the KS theorem!
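The distinction between determined and undetermined observables can be made concrete in a minimal numerical sketch (my own illustrative example, assuming a spin-1/2 system; the function name `born_probability` is my own choice):

```python
import numpy as np

def born_probability(state, eigenvector):
    """Born's rule: P = |<eigenvector|state>|^2 for a pure state vector."""
    return abs(np.vdot(eigenvector, state)) ** 2

# Eigenstates of S_z (in units of hbar/2), eigenvalues +1 and -1.
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Preparation in the S_z "up" eigenstate: S_z is determined,
# probability 1 for the value +1 and 0 for the value -1.
p_up, p_down = born_probability(up, up), born_probability(up, down)

# Preparation in an S_x eigenstate: S_z is undetermined; Born's rule
# gives probability 1/2 for each possible value, and only an ensemble
# of equally prepared systems can test these probabilities.
plus_x = (up + down) / np.sqrt(2)
q_up, q_down = born_probability(plus_x, up), born_probability(plus_x, down)
```

In the first preparation the value of S_z is determined in the sense described above (probability 1); in the second it simply has no value before measurement, only the Born probabilities.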
@bohm2: This quote by Fuchs is indeed interesting. It's easily stated that interpretation should come first, and it's the very first problem you run into if you have to teach an introductory quantum-mechanics lecture. I have no solution for this problem: I think one has to start with a heuristic introduction using wave mechanics (but please not with photons, because these are among the most difficult cases of all; better to use massive particles and nonrelativistic QT to start with, but that's another story). This should be short (at most 2-3 hours), and then you come immediately to the representation-independent formulation in terms of the abstract Hilbert space (which is essentially Dirac's "transformation theory", one of the three historically first formulations of QT, besides Heisenberg-Born-Jordan matrix mechanics and de Broglie-Schrödinger wave mechanics). Only when you have established this very abstract way of thinking by means of some examples (the "quantum kinematics", so to say) can you come to a presentation of "interpretation", i.e., you can define what a quantum state really means, which of course depends on your point of view on the interpretation.

I use the MSI. So far I've only given one advanced lecture ("Quantum Mechanics II") on the subject, and there I had no problems (at least if I believe the quite positive evaluation by the students in the end) with using the MSI and the point of view that a quantum state in the real world has the meaning of an equivalence class of preparation procedures, represented by a statistical operator whose only meaning is to provide a probabilistic description of the knowledge about the system, given its preparation. It gives only probabilities for the outcomes of measurements of observables, and observables that are undetermined do not have any certain value (see above). Of course, the real challenge is to teach the introductory lecture, and that I have never had to do yet, so I cannot say how I would present it.
Another question I always pose, which nobody has answered in a satisfactory way for me so far, is what this Bayesian interpretation of probabilities means in practice: if I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of the occurrence of the possible outcomes of the observables under consideration?
You have the same problem with classical random experiments such as throwing dice. Knowing nothing about the die, according to the Shannon-Jaynes principle I assign the distribution of maximal entropy ("the principle of least prejudice"), i.e., an equal probability of 1/6 for each outcome (the occurrence of the numbers 1 to 6 when throwing the die). These are the "prior probabilities", and now I have to check them. How else can I do this than to throw the die many times and count the relative frequencies of the occurrence of the numbers 1-6? Only then can I test the hypothesis about this specific die to a certain statistical accuracy and, if I find significant deviations, update my probability function. I don't see what all this Bayesian mumbling about "different interpretations of probability" beyond the frequentist interpretation amounts to, if I can never check the probabilities other than in the frequentist way!
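The dice procedure just described can be sketched numerically. This is my own illustrative example: the maximum-entropy prior, a Pearson chi-square check of the relative frequencies, and (for the Bayesian side) a standard Dirichlet update; none of these specific choices come from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Maximum-entropy prior for a die about which nothing is known:
# equal probability 1/6 for each face ("principle of least prejudice").
prior = np.full(6, 1 / 6)

# Throw the die many times and count relative frequencies
# (here a fair die is simulated, for illustration).
n_throws = 60_000
throws = rng.integers(1, 7, size=n_throws)
counts = np.bincount(throws, minlength=7)[1:]   # counts for faces 1..6
rel_freq = counts / n_throws

# Frequentist check: Pearson chi-square statistic for the hypothesis p = 1/6.
# For 5 degrees of freedom the 5% critical value is about 11.07; a value
# below that shows no significant deviation from the prior assignment.
expected = n_throws * prior
chi2 = np.sum((counts - expected) ** 2 / expected)

# Bayesian update of the same assignment: with a uniform Dirichlet prior,
# the posterior mean probabilities after observing the counts are
alpha = np.ones(6)
posterior_mean = (alpha + counts) / (alpha.sum() + n_throws)
```

Note that both routes, the chi-square test and the Dirichlet posterior, feed on exactly the same data: the observed counts. This is the point of the question above; whatever the interpretation, the check runs through relative frequencies.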