daniel444 said:
So does that mean the first postulate is wrong? Because in all quantum mechanics books it is written that after a measurement the wave function collapses into an eigenstate of the operator which corresponds to the observable being measured.
In many but not all books. The collapse postulate is the most confusing part of all the Copenhagen-like interpretation schemes. Fortunately it's unnecessary for applying QT to the real world, and it's almost never realized even approximately: what state the measured system is in after the measurement depends on the measurement you made on it, i.e., on what the interaction of the measured system with the measurement apparatus does to it. E.g., if you detect a photon, usually it's absorbed, and after the measurement there's only the photodetector left but no photon, i.e., it doesn't make any sense to say the measured photon's state is then an eigenstate of the measured observable.
It's also important to remember that states, as far as the single system under consideration is concerned, describe the preparation of the system and not the measurement. The uncertainty relation for position and momentum ##\Delta x \Delta p_x \geq \hbar/2## says you cannot at the same time sharply localize a particle and make its momentum very well defined. If ##\Delta x## is "small", then ##\Delta p_x## is necessarily large and vice versa.
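As a quick illustration (my own numerical sketch, not from any textbook), one can check this for a Gaussian wave packet, for which ##\Delta x \Delta p_x = \hbar/2## is exactly saturated. The grid, the width ##\sigma##, and ##\hbar = 1## are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of Delta x * Delta p_x >= hbar/2 for a Gaussian wave packet.
# Grid, width sigma, and hbar = 1 are illustrative choices.
hbar = 1.0
sigma = 1.0
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

# Normalized Gaussian wave function psi(x) ~ exp(-x^2 / (4 sigma^2))
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty from the probability density |psi(x)|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum-space distribution from the FFT of psi (phases drop out in |phi|^2)
p = 2.0 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
dp = 2.0 * np.pi * hbar / (x.size * dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p = prob_p / (np.sum(prob_p) * dp)      # renormalize: sum(prob_p) * dp = 1
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(f"Delta x           = {delta_x:.4f}")  # -> sigma = 1
print(f"Delta p           = {delta_p:.4f}")  # -> hbar/(2 sigma) = 0.5
print(f"Delta x * Delta p = {delta_x * delta_p:.4f}  (bound: hbar/2 = {hbar / 2})")
```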
It doesn't say anything about how accurately you can measure the one or the other observable, which depends entirely on the measurement setup. It's also a separate question to what extent a measurement disturbs the measured system. That is also not what the usual uncertainty relation for incompatible observables says, i.e.,
$$\Delta A \Delta B \geq \frac{1}{2} |\langle [\hat{A},\hat{B}] \rangle|.$$
This refers to limitations in "preparability" of states concerning the standard deviations/uncertainties of the observables under consideration.
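Here is a minimal numerical sketch (my own illustration, with ##\hbar = 1## and an arbitrarily chosen spin direction) checking this Robertson relation for ##\hat{A} = \hat{S}_x##, ##\hat{B} = \hat{S}_y## of a spin-1/2 system, where ##[\hat{S}_x, \hat{S}_y] = \mathrm{i} \hbar \hat{S}_z##:

```python
import numpy as np

# Check Delta A * Delta B >= |<[A,B]>|/2 for A = S_x, B = S_y (spin 1/2, hbar = 1).
hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)

def expval(op, psi):
    """<psi| op |psi> for a normalized state vector psi."""
    return np.vdot(psi, op @ psi)

def stddev(op, psi):
    """Standard deviation Delta op = sqrt(<op^2> - <op>^2) in state psi."""
    return np.sqrt(expval(op @ op, psi).real - expval(op, psi).real**2)

# Example state: spin pointing along the direction (theta, phi);
# the particular angles are arbitrary choices for illustration.
theta, phi = 0.4, 1.1
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

lhs = stddev(sx, psi) * stddev(sy, psi)
comm = sx @ sy - sy @ sx                  # [S_x, S_y] = i hbar S_z
rhs = 0.5 * abs(expval(comm, psi))

print(f"Delta Sx * Delta Sy = {lhs:.4f}")
print(f"|<[Sx,Sy]>|/2       = {rhs:.4f}")  # Robertson bound; lhs >= rhs
```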
There is of course also a limit on the ability to measure an observable accurately without disturbing the system. E.g., if you want to measure the location of a charged particle accurately you need to, e.g., scatter light of small wavelength off it, and for this you need to let at least one single photon of energy ##\hbar \omega## interact with it. The smaller the wavelength of the photon, the larger its frequency, and thus the more you'll disturb your particle by measuring its position. But that's not described by the uncertainty relation. You just have to calculate how much momentum you transfer by scattering the photon in a certain direction to estimate how much the particle is kicked around by the measurement.
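To put rough numbers on that momentum kick (the wavelengths below are illustrative values I chose, not from any specific experiment): a photon of wavelength ##\lambda## carries momentum ##p = h/\lambda## and energy ##E = h c/\lambda##, so resolving a position to within ##\sim \lambda## costs a momentum transfer of that order:

```python
# Order-of-magnitude estimate: a photon of wavelength lambda carries
# momentum p = h/lambda and energy E = h c/lambda, so localizing a particle
# to ~lambda kicks it by a momentum of order h/lambda.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

for lam_nm in (500.0, 50.0, 5.0, 0.5):       # illustrative wavelengths
    lam = lam_nm * 1e-9
    p_photon = h / lam                       # momentum scale of the kick, kg m/s
    E_photon = h * c / lam                   # photon energy, J
    print(f"lambda = {lam_nm:6.1f} nm :"
          f" kick ~ {p_photon:.2e} kg m/s,"
          f" E = {E_photon / 1.602e-19:10.2f} eV")
```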
Concerning incompatible observables there's usually also a tension between measuring one observable very accurately and simultaneously measuring the other, but that's also interrelated with the preparability. A famous example is Weizsäcker's analysis of the Heisenberg microscope, a gedanken experiment with the famous double-slit setup: you let a single photon run through a double slit and use a lens to measure either its momentum or its position. If you put the photographic plate in the focal plane of the lens, each point on the plate corresponds one-to-one to a measured momentum of the photon. The picture obtained when repeating this with many equally prepared photons running through the double slit is then the double-slit interference pattern, and it's impossible to know through which slit each single photon came. If you put the screen in the image plane instead, you resolve from which slit each photon came, but the spread in momenta is large. You can also make a compromise and put the screen somewhere else, neither in the focal plane nor in the image plane of the lens. Then you get restricted accuracy for both observables, "position" (through which slit the photon came) and momentum.
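For the focal-plane ("momentum") measurement, the recorded intensity is just the far-field two-slit pattern. A small sketch of that pattern (the slit width, separation, and wavelength are illustrative values I chose):

```python
import numpy as np

# Far-field / focal-plane intensity of a double slit:
# I(theta) ~ cos^2(pi d sin(theta)/lambda) * sinc^2(a sin(theta)/lambda),
# i.e., two-slit fringes under a single-slit envelope
# (np.sinc is the normalized sinc, sin(pi x)/(pi x)).
lam = 500e-9    # wavelength, m
a = 2e-6        # slit width, m
d = 10e-6       # slit separation (center to center), m

theta = np.linspace(-0.2, 0.2, 2001)             # observation angle, rad
beta = np.pi * d * np.sin(theta) / lam           # two-slit phase difference / 2
I = np.cos(beta)**2 * np.sinc(a * np.sin(theta) / lam)**2
I = I / I.max()

# Print a few sample points instead of plotting
for th, inten in zip(theta[::400], I[::400]):
    print(f"theta = {th:+.3f} rad  I/I0 = {inten:.3f}")
```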