I think the usual introduction of QM contains two points where it's easy to feel that some understanding is missing.
1) One of the guiding spirits of QM is to focus on what you can measure. And unless you are able to perfectly predict the next measurement, the idea is instead to predict the expectation, i.e. to predict how the measurements are expected to be distributed if you were to repeat the experiment many times.
The obvious problem here is that the abstraction of picturing a statistical ensemble of a particular event really isn't very rigorous from the point of view of reasoning. There are several problems with this. First there is the issue of how you can duplicate one event and know that the conditions are the same. And even if you could do that, repeating an experiment many times isn't enough: to find the exact probability distribution you would need to repeat it an infinite number of times, and how long would such an experiment take? Also, how could you store all the data from an infinite series of experiments? Sometimes this "idealisation" is certainly good enough, but if you use such an abstraction at a very fundamental and general level of the theory, then it's easy to have objections IMHO.
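Just to make the finite-repetition objection concrete, here is a minimal sketch (my own toy illustration, the numbers and names are arbitrary): it simulates N repetitions of a two-outcome measurement with a "true" probability p and shows that the estimated frequency only approaches p roughly like 1/sqrt(N), so any finite ensemble ever only gives an approximation of the distribution.

```python
import random

def estimate_probability(p_true, n_repeats, seed=0):
    """Estimate a two-outcome probability from n_repeats simulated measurements."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_repeats) if rng.random() < p_true)
    return hits / n_repeats

p_true = 0.3  # hypothetical "true" probability of outcome 1
for n in (10, 1000, 100000):
    p_hat = estimate_probability(p_true, n)
    # the statistical error shrinks only like ~1/sqrt(n); it never reaches zero
    print(f"n={n:6d}  estimate={p_hat:.4f}  error={abs(p_hat - p_true):.4f}")
```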
If you consider that the abstraction of the ensemble really must be constructed by the observer, and suppose that the observer in this case is a small particle, then I think it's reasonable to expect that the limited capacity of such an observer severely constrains the construction of this ensemble, both complexity-wise and time-wise (i.e. computation-wise, if you picture the universe a bit like a computer).
If such a suspicion proves to be valid (yet to be seen, of course), then this reasoning will force revisions of the entire probability abstraction used in quantum mechanics. It's not that it will go away, but I rather expect the probability spaces themselves to become more observer-attached (subjective) and also dynamical. At that point I would personally also expect the notion of "randomness" to become relative to the observer, i.e. what appears to be a random sequence to one observer could well seem non-random to another. The missing logic here is to see how this can be consistent and still reproduce all the predictions of QM that we have learned are right.
One could probably imagine that the "complexity" of the random generator determines the degree of randomness, and that an observer less complex (massive??) than the generator would simply be unable, computation-wise, to distinguish the sequence from a truly random one with certainty.
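As a toy illustration of that intuition (again purely my own sketch, with an arbitrary choice of generator and test): a simple linear congruential generator is fully deterministic, yet a "weak" observer who can only afford a crude frequency count cannot tell its output from random coin flips, while a "stronger" observer who knows the recurrence can predict every bit exactly.

```python
# Toy sketch: a deterministic LCG whose output passes a crude frequency test.
# The "weak observer" only counts ones; the "strong observer" knows the recurrence
# and seed, so for it the sequence is not random at all. Parameters are arbitrary.

def lcg_bits(seed, n, a=1103515245, c=12345, m=2**31):
    x = seed
    bits = []
    for _ in range(n):
        x = (a * x + c) % m
        bits.append((x >> 16) & 1)  # take one "middle" bit per step
    return bits

bits = lcg_bits(seed=42, n=10000)

# Weak observer: the fraction of ones looks consistent with a fair coin.
print("fraction of ones:", sum(bits) / len(bits))

# Strong observer: knowing the rule and seed, the next bit is predicted exactly.
print("next bit, predicted with certainty:", lcg_bits(seed=42, n=10001)[-1])
```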
This is the direction in which I personally expect progress to come.
2) The other thing is how the expectations and probabilities are combined and calculated, i.e. quantum logic vs classical logic. This is currently what's usually handled in the "shut up and calculate" way. But I expect that eventually there will be a more solid understanding of this.
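A minimal numerical sketch of what "combined differently" means here (the standard two-path toy example, with amplitudes chosen arbitrarily): classically you add the probabilities of the two alternatives, while quantum mechanically you add the complex amplitudes first and only then square, which produces an interference term that classical probability logic has no room for.

```python
import cmath

# Two indistinguishable paths to the same detector, with complex amplitudes.
# Amplitudes and the relative phase are chosen arbitrarily for illustration.
a1 = 0.6 * cmath.exp(1j * 0.0)
a2 = 0.6 * cmath.exp(1j * 2.0)   # relative phase of 2 radians

p1, p2 = abs(a1)**2, abs(a2)**2

p_classical = p1 + p2           # "either path 1 or path 2": probabilities add
p_quantum   = abs(a1 + a2)**2   # amplitudes add first, then square

print("classical sum of probabilities:", round(p_classical, 4))
print("quantum |a1 + a2|^2           :", round(p_quantum, 4))
print("interference term             :", round(p_quantum - p_classical, 4))
```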
Edit: When you think about this for a while, you may start to think that the two points are connected. I.e., could the revision of the notion of the probability abstraction also require a revision of the "classical logic", and thus yield quantum logic? I think so. Hopefully this will also at some point be better understood.
/Fredrik