Quote lifted from a thread in the cosmology forum.

What does it mean to know the exact state of a QM system? QM predicts the probabilities that particles will be in one of multiple states when the particles are observed, and when observed, not all properties of a particle are simultaneously knowable to an exact degree (e.g. position and momentum).

Does knowing the exact state mean I know the probability functions for each particle in a given system, or is it different than that?

If you know the state of the system, then encoded in this state are all the probabilities of measuring the value a of a quantum observable A (the set of all these possible values is the spectrum of the operator associated with A). This is the content of the so-called Born rule (the probabilistic interpretation of QM). But knowing the exact state of the system is a tricky business, since you'd have to know the Hamiltonian and how to solve the time-evolution equation. Only for a time-independent Hamiltonian is the set of possible system states expressible exactly, in terms of a set of solutions of a partial differential equation.
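To make the time-independent case concrete, here is a small numerical sketch (the Hamiltonian and state are my own toy choices, in units with ħ = 1): expand the initial state in the energy eigenbasis, attach the phase factors e^(-iE_n t), and reassemble.

```python
import numpy as np

# Illustrative 2x2 Hermitian Hamiltonian (invented for this sketch).
H = np.array([[1.0, 0.5],
              [0.5, 2.0]], dtype=complex)

energies, eigvecs = np.linalg.eigh(H)   # H = V diag(E) V^dagger

def evolve(psi0, t):
    """Return psi(t) = sum_n c_n exp(-i E_n t) |n>, where c_n = <n|psi0>."""
    c = eigvecs.conj().T @ psi0                      # coefficients in energy basis
    return eigvecs @ (np.exp(-1j * energies * t) * c)

psi0 = np.array([1.0, 0.0], dtype=complex)           # initial state
psi_t = evolve(psi0, t=3.7)

# Unitary evolution preserves the norm (total probability stays 1).
norm = np.vdot(psi_t, psi_t).real
```

The point of the sketch is only that, once the eigenvalue problem for H is solved, the time evolution is completely determined.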

Thanks for the reply. Now I may get into trouble because I don't know how to make my follow up very precise.

If one could know and solve respectively the above, would the solution be a time-varying set of probabilities?

As opposed to Newtonian physics providing a time-varying set of state values each with probability 1, I mean.

Another question: is it in theory possible to discover and solve the Hamiltonian for a time-dependent system? Is this why @kimbyd said 'given enough compute power'?

No, it is a time-varying collection of many sets of probabilities. There are many sets of incompatible probabilities, because one can make many incompatible measurements.

Also, when a measurement is made, unitarity fails.

A simple answer is that in QM the state of a system evolves deterministically - according to the Schrödinger equation. But a measurement produces probabilistic outcomes.

There are no probabilities, as such, in the time evolution of the state of a system.

OK, it says:
"A wave function in quantum physics is a mathematical description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it."
It certainly seems to lead to probabilities.

The results of measurements are probabilistic but the way the wavefunction itself evolves over time is not probabilistic.

A crude analogy is a coin. If you toss a coin you get heads or tails with 50% probability - that's the measurement. But the coin itself and the probabilities of getting heads and tails do not change over time. That represents the state.

In other words, the toss of a coin is probabilistic, but the coin itself is always the same.

Note that this is a rough analogy and not meant to be precisely related to QM.

First we need to define a Positive Operator-Valued Measure (POVM). A POVM is a set of positive operators Ei with ∑ Ei = 1, acting on (for the purposes of QM) an assumed complex vector space.

Elements of POVMs are called effects, and it's easy to see that a positive operator E with Trace(E) <= 1 is an effect (more generally, E is an effect iff 0 <= E <= 1).
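As a numerical sanity check of the definition (the two effects below are toy numbers of my own choosing, not anything from the post), each POVM element must be positive and the elements must sum to the identity:

```python
import numpy as np

# Illustrative two-outcome POVM on a qubit.
E1 = np.array([[0.7, 0.0],
               [0.0, 0.2]], dtype=complex)
E2 = np.eye(2) - E1

for E in (E1, E2):
    # Each element is a positive operator: Hermitian with eigenvalues >= 0.
    assert np.allclose(E, E.conj().T)
    assert np.all(np.linalg.eigvalsh(E) >= -1e-12)

# The elements sum to the identity.
assert np.allclose(E1 + E2, np.eye(2))
```

Note that E1 and E2 here are not projectors; POVM elements need not be, which is what makes the formalism more general than projective measurement.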

Now we can state the single foundational axiom of QM:

An observation/measurement with possible outcomes i = 1, 2, 3, ... is described by a POVM {Ei} such that the probability of outcome i is determined by Ei, and only by Ei; in particular, it does not depend on what POVM it is part of.

Note - nothing said at all about a state.

I will invoke a very beautiful theorem, a modern version of a famous theorem you may have heard of called Gleason's, and in the link prove it.

It says that a positive operator P of unit trace exists such that the probability of Ei occurring in the POVM E1, E2, ... is Trace(Ei P).

This is called Born's Rule. P, by definition, is called the state of the system. It is simply an aid in calculating the probability of Ei. That's it, that's all. It's nothing 'mystical'. It's simply something used as an aid in calculating the probability of a certain outcome, derived from the fundamental axiom that outcomes can be mapped to POVMs.
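A minimal numerical check of the rule Trace(Ei P), with a toy qubit state and POVM of my own choosing:

```python
import numpy as np

# State P: a positive operator of unit trace (here the pure state |+><+|).
P = 0.5 * np.array([[1, 1],
                    [1, 1]], dtype=complex)
assert np.isclose(np.trace(P).real, 1.0)

# Toy two-outcome POVM (illustrative numbers).
E1 = np.array([[0.7, 0.0],
               [0.0, 0.2]], dtype=complex)
E2 = np.eye(2) - E1

# Born's Rule: the probability of outcome i is Trace(Ei P).
p1 = np.trace(E1 @ P).real   # 0.45
p2 = np.trace(E2 @ P).real   # 0.55
assert np.isclose(p1 + p2, 1.0)
```

Nothing mystical indeed: P enters only through the trace formula, exactly as the theorem says.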

I don't think; I know what it represents. You may be confusing the time evolution of the wavefunction between measurements (deterministic, according to the Schrödinger equation) with the probabilistic "collapse" of the wavefunction upon measurement.

In simple terms this means that if you leave a system alone (perhaps after an initial measurement) its wavefunction evolves deterministically until you make a further measurement, at which point it randomly collapses into a new wavefunction, which then evolves deterministically again.
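That alternation (deterministic evolution, random collapse, deterministic evolution again) can be sketched in a toy simulation; the unitary step and everything else below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unitary evolution step (a real rotation is unitary).
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

psi = np.array([1.0, 0.0])            # start in |0>

for _ in range(5):
    psi = U @ psi                     # deterministic evolution between measurements
    probs = np.abs(psi) ** 2          # Born-rule probabilities in the chosen basis
    outcome = rng.choice(2, p=probs)  # measurement: random outcome
    psi = np.zeros(2)                 # "collapse": the new state is the
    psi[outcome] = 1.0                # eigenstate matching the outcome
```

Only the `rng.choice` line is random; every other line is fully deterministic, which is the division of labour described above.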

The fundamental assumption in this thread is that quantum mechanics is a probabilistic theory dealing with measurements. How is "measurement" defined, especially in contrast to unitary time evolution, i.e. when does a system evolve unitarily, and when does it collapse?

I guess all this does not apply to the "modernized" Everett interpretation making use of decoherence; in this context the POVMs must be there as well, because we know that they fit our observations, but they should not be introduced by an axiom; rather, they should be a derived or emergent result or theorem.

You're not really using the terminology correctly here, but reading between the lines you seem to have the right idea, or at least the essence of it. It might be worthwhile to revisit the basic axioms of QM that can be found in a lot of textbooks. I'm going to add the caveat here that this is just a place to start. Think of it like watching an Avengers movie: temporarily suspend your disbelief and just try to enjoy the ride. So your mindset should be "OK, not too sure about these, but let's run with them and see what happens". The axioms I'm going to write ultimately need all sorts of refinements, additions and details, but we have to start somewhere.

1. The state of a physical system (e.g. an electron) is represented by a vector in a complex Hilbert space
2. This state evolves according to the Schrodinger equation
3. Observables are represented by linear Hermitian operators
4. The possible results of a measurement of an observable ##\hat {\mathbf A}## are the eigenvalues of ##\hat {\mathbf A}##
5. If the initial state is ## | \psi \rangle## then the probability of getting the eigenvalue ##a_i## as a result of the measurement of ##\hat {\mathbf A}## is given by ##| \langle a_i | \psi \rangle |^2 ## where ##| a_i \rangle## is the eigenstate of ##\hat {\mathbf A}## associated with the eigenvalue ##a_i##
6. Immediately after the measurement of ##\hat {\mathbf A}## in which the eigenvalue ##a_i## was obtained as a result of the measurement, the new state of the system is given by ##| a_i \rangle##
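Axioms 4 to 6 in action for a toy observable (my own choice of a 2x2 matrix, not from the post):

```python
import numpy as np

# Toy observable A: the Pauli-x matrix (illustrative choice).
A = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Axiom 4: the possible results are the eigenvalues of A.
eigvals, eigvecs = np.linalg.eigh(A)         # eigenvalues -1 and +1

# Axiom 5: for initial state |psi>, P(a_i) = |<a_i|psi>|^2.
psi = np.array([1.0, 0.0], dtype=complex)    # the state |0>
probs = np.abs(eigvecs.conj().T @ psi) ** 2  # [0.5, 0.5]

# Axiom 6: after obtaining a_i, the new state is the eigenstate |a_i>.
post = eigvecs[:, 1]                         # state after the result +1
```

So measuring this observable on |0> gives -1 or +1, each with probability 1/2, and the post-measurement state is the corresponding eigenvector.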

Holy Gotham City, Batman! It's no wonder that students, exposed throughout their education to classical physics, see these axioms and have a very serious "WTF?" moment. Once we've had a chance to rest in a darkened room for several hours in order to calm down, we can try to use this frankly bizarre set of rules. The surprising thing is that they work, and they work very well indeed (by "work" I mean they successfully allow us to calculate experimental predictions).

Now these axioms should be taken alongside a whole bunch of warning flags and alerts - they're just someplace to start. They're not necessarily the best set of axioms we could pick, or the most elegant, and on reflection we can see there are some gaping holes (or at least some major questions). So with the proviso that we may need to swap these out later on for a much more elegant and 'better' set of rules (that are equivalent) let's try to answer your question.

So, typically, in an experiment we might prepare our system in some known state (as best we can). Suppose we want to prepare a bunch of systems in some state ##| a_i \rangle##; then we'd take a collection of systems, make measurements of ##\hat {\mathbf A}##, and select all of those systems for which we got the result ##a_i##. So we 'filter' out the states we want. Now we can experiment on these systems, which we've prepared in a known state.

We might want to know what happens to our systems, prepared in the state ##| a_i \rangle##, if we apply an electric field. So we work out what the Schrodinger equation would be in this situation and solve it to find the new state that ##| a_i \rangle## evolves into when we apply the field. Then we decide what property we're going to measure (energy? angular momentum? etc.) and work out the probabilities of the results we should get.

In the absence of the measurement everything is evolving smoothly and reversibly according to the Schrodinger equation - and this is the 'unitary' bit. It's actually essential to make the probabilities all sum to 1. Notice that, in general, if we measure something like energy we'll get a particular set of possible results (the energy eigenvalues) with associated probabilities, but if we choose to measure angular momentum instead we'll get a different set of possible results (the angular momentum eigenvalues) with a different set of associated probabilities.
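A quick numerical illustration of that point (all matrices below are toy numbers of mine): after unitary evolution the probabilities still sum to 1, and the same evolved state gives two different probability distributions depending on which observable we choose to measure.

```python
import numpy as np

# Hypothetical Hamiltonian and a second, non-commuting observable.
H = np.array([[1.0, 0.3], [0.3, 2.0]])   # "energy"
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # "angular-momentum-like", [H, B] != 0

# Evolve |0> for time t = 2 under H (units with hbar = 1).
E, V = np.linalg.eigh(H)
psi = V @ (np.exp(-1j * E * 2.0) * (V.conj().T @ np.array([1.0, 0.0])))

def measurement_probs(obs, state):
    """Eigenvalues of obs and their Born-rule probabilities in `state`."""
    vals, vecs = np.linalg.eigh(obs)
    return vals, np.abs(vecs.conj().T @ state) ** 2

for obs in (H, B):
    vals, probs = measurement_probs(obs, psi)
    assert np.isclose(probs.sum(), 1.0)   # unitarity: total probability is 1
```

The two calls to `measurement_probs` return different eigenvalue sets with different probabilities, while each set of probabilities individually sums to 1.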

Now there are several issues with this 'beginning' set of axioms that I've presented (and I've written them out from memory, so apologies for any mistakes, which I hope others will correct). Principal amongst them is the slightly 'magical' character of this thing I've blithely termed "measurement". Whilst it is obvious operationally what a measurement is ("Oh look, the intensity reads such and such"), it's not really clear in the theoretical framework above what a measurement is. Whatever it is, it would appear to be different from the nice smooth evolution dictated by the Schrodinger equation. But surely my measurement device is made up of all sorts of bits and pieces (atoms and such) that also obey the QM evolution equation? Therein lies at least one of the thorny issues that QM presents us with.

Another big issue is what is actually meant by the thing I've called a 'state'. You'll notice the seductive language that is difficult to avoid when I've talked about the system being "in a state". This, not too subtly, leads us to suppose that there is some real, objective thing we're talking about that somehow changes when we do this mystical thing called measurement. That's another decidedly vexing issue.

Yet another is this notion that measurements are so clean cut. In a typical experiment, say in a quantum optics lab, we'd end up destroying the thing we're measuring: photons are absorbed by photodetectors, for example. So we really need a different formalism to cope with what happens when our measurements aren't of the nice projective character implied by the axioms above. This is the POVM formalism that Bill mentioned, but in my view you need a fair degree of sophistication to appreciate it, so it's not the best place to start (again, in my view, but I'm sure Bill would disagree here). But even here, the POVM formalism is equivalent to adding an ancillary system and doing these nice ideal measurements (whatever they are within the theory) on this ancillary system.

I'm sorry if by now I've thoroughly confused you. There is a certain sense in which QM is confusing, and it just takes practice, patience, and effort to fool oneself that actually it isn't confusing at all. That might take a few months or even years.