Eigenvalue degeneracy in real physical systems

ErikZorkin
I understand this question is rather marginal, but I still think I might get some help here. I previously asked a question regarding the so-called computable Universe hypothesis, which, roughly speaking, states that a universe such as ours may be (JUST IN PRINCIPLE) simulated on a large enough computer, and that question was resolved quite successfully.
This is to say that everything that has a meaning in terms of observation might in principle be simulated (up to finite precision).

Now, the question. (Forgive my mediocre knowledge.) Let ##A## be a Hermitian operator acting on an ##n##-dimensional Hilbert space ##H##. By the spectral theorem, we can decompose ##A## into the sum ##\sum_{i=1}^m \lambda_i P_i##, where the ##\lambda_i## are the ##m## mutually distinct eigenvalues of ##A## and the ##P_i## are the corresponding orthogonal projections. Then ##H## can be written as the direct sum of the corresponding eigenspaces. Now, if we were to simulate all (observable in the real world) physical systems, we would need to know whether the eigenvalues of all Hermitian operators that correspond to real physical systems are distinguishable. Otherwise, our "supercomputer" would be unable to determine which eigenstate the system falls into after measurement. In particular, this is the case when all the operators are represented by non-degenerate matrices.
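(To make the computational worry concrete, here is a minimal numerical sketch; it is not part of the original post, and the matrix and tolerance are purely illustrative. Deciding whether two computed eigenvalues are exactly equal is not robust, even though the exact spectral projectors exist mathematically.)

```python
import numpy as np

# Hypothetical Hermitian observable whose two lowest eigenvalues differ by a
# tiny gap eps; whether they count as "equal" is purely a matter of tolerance.
eps = 1e-14
A = np.diag([1.0, 1.0 + eps, 3.0])

vals, vecs = np.linalg.eigh(A)   # eigh: eigendecomposition for Hermitian matrices
print(vals)

# Within any realistic tolerance the first two eigenvalues coincide, so a
# numerical test cannot distinguish "degenerate" from "merely very close",
# and hence cannot decide how many distinct projectors P_i there are.
print(np.isclose(vals[0], vals[1]))   # True at the default tolerances
```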

Are there (or have there been observed) real-world physical systems known to have indistinguishable eigenvalues?

My question is motivated by the following work:

Computable Spectral Theorem

Another discussion on the topic (quite old though)
 
bhobba said:
Quantum degeneracy is well known:

Thanks for the answer. By distinguishable I meant whether it is known beforehand which eigenvalues are equal and which are distinct.
 
I'm not really sure what you are asking but typically degeneracies in quantum mechanics can be associated with symmetry or topological characteristics of the system.

Take time reversal, for example. In a system with an odd number of electrons you will have at least a two-fold degeneracy, since ##T^2=-1## for fermions.
 
OK, but what is degeneracy in real physical systems and what's its relation to measurement?
 
ErikZorkin said:
to determine which eigenstate the system falls into after measurement
For a Copenhagen-style experiment, a system described by ##\psi## before the measurement is described by ##P_k\psi## after the measurement, where ##P_k## is the projector onto the eigenspace of the measured eigenvalue ##k##. This is completely determined, and independent of the dimension of the eigenspace.

This probably makes your question moot.
 
Let's define the projector in the previous posting a bit more specifically.

According to some flavors of the Copenhagen interpretation, consider a measurement of an observable ##A## (in fact only a very special kind of measurement, the von Neumann filter measurement, which is almost never performed as an actual measurement but rather, approximately, as a preparation procedure) that yields the result ##a##, an eigenvalue of the self-adjoint operator ##\hat{A}## describing ##A## in the quantum-theoretical formalism. Let the eigenspace of this eigenvalue be spanned by the orthonormal vectors ##|a,\beta \rangle##, and let the system be prepared in a pure state described by a state vector ##|\psi \rangle##. Then after the filter measurement the system is in a pure state described by the state vector
$$|\psi ' \rangle = \sum_{\beta} |a,\beta \rangle \langle a,\beta|\psi \rangle=\hat{P}_a |\psi \rangle.$$
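(A minimal numerical sketch of this filter projection; it is not from the original post, and the observable, the measured eigenvalue and the state are illustrative.)

```python
import numpy as np

# Illustrative Hermitian observable with a doubly degenerate eigenvalue a = 2.
A = np.diag([2.0, 2.0, 5.0])
vals, vecs = np.linalg.eigh(A)

a = 2.0
# P_a = sum over an orthonormal basis |a,beta> of the eigenspace of a.
cols = vecs[:, np.isclose(vals, a)]
P_a = cols @ cols.conj().T

psi = np.array([1.0, 2.0, 2.0]) / 3.0   # normalized illustrative state vector
psi_prime = P_a @ psi                   # |psi'> = P_a |psi>, as in the formula above
print(np.round(psi_prime, 3))           # the component along the eigenvalue-5 state is removed
```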
 

But the ##P_i## depend on the multiplicities of the eigenvalues.
 
Yes, you project not onto a specific eigenvector but just onto the eigenspace. Only if you measure a complete set of compatible observables, in the sense of a von Neumann filter measurement, do you project onto a uniquely determined state. If not, you miss information, and then you make the plausible assumption of estimating the state as the projection of the prepared state onto the eigenspace, with equal weights for all possibilities. In some sense you can understand it as an application of the maximum-entropy principle, i.e., you choose the state "of least prejudice".

Note, however, that the collapse postulate has to be taken with a grain of salt. It's not a necessary assumption within minimally interpreted QT, and what really happens in a measurement process depends on the measurement device and its interaction with the measured system. As I said before, what's described here is a very special and rarely realized von Neumann filter measurement, which you can take as a state-preparation procedure.
 
  • #10
There is an obvious example of completely degenerate energy eigenstates, namely free particles. A free electron is infinitely degenerate, because the electron's momentum can point in a continuum of different directions, all with the same energy. Similarly, since a free electron's energy doesn't depend on its spin, spin-up and spin-down have the same energy.

I think you were thinking along the lines of eigenstates for bound particles, and in that case, it seems that interactions tend to break any "accidental" degeneracies (ones that cannot be deduced from symmetry considerations).
 
  • #11
I think the computational aspect is still not addressed. But perhaps that's because this is not the right place to ask.
 
  • #12
Which computational aspect? A (generalized) basis of Hilbert space is determined by calculating common eigenvectors of a complete set of compatible observables. For a (non-relativistic) electron you can choose common generalized eigenvectors of the three momentum components ##\hat{\vec{p}}## and ##\hat{s}_z##. The eigenvectors are ##|\vec{p},\sigma_z \rangle## with ##\vec{p} \in \mathbb{R}^3## and ##\sigma_z \in \{\hbar/2,-\hbar/2 \}##. These are also eigenvectors of ##\hat{H}## with ##\hat{H}=\hat{\vec{p}}^2/2m##.
 
  • #13
Which eigenvalue is observed determines which projection is "applied" after measurement, right? For eigenvalues for which it is computationally undecidable whether they are equal or distinct, it's impossible to determine which projection applies.
 
  • #14
ErikZorkin said:
Which eigenvalue is observed determines which projection is "applied" after measurement, right?

Not if there is degeneracy. Even if there is no degeneracy, it's very easy to create one by simply making two outcomes the same value. This is often done, for example in theoretical discussions of QM, when creating an 'indicator' operator that is one for some outcome and zero for the rest.

Thanks
Bill
 
  • #15
Detecting degeneracy is undecidable. Also, I am not interested in theoretical constructions, but in practical ones.
 
  • #16
ErikZorkin said:
Detecting degeneracy is undecidable. Also, I am not interested in theoretical constructions, but in practical ones.

There is no difference. Physically it would mean you simply change the readout on your apparatus.

Thanks
Bill
 
  • #17
In order to do physics, we only need to know the eigenvalues to the precision of the measurement apparatus. We don't need to know the multiplicity, since we only need to project onto the space of states that are close enough to the measured eigenvalue. If the numerics gives us many eigenspaces for eigenvalues close enough to the measured value, we would project onto their direct sum. If the numerics gives us fewer, degenerate eigenspaces, we would also project onto their direct sum, but we would need fewer projectors. In both cases, the numerics would provide us with a sufficiently good projector, even though we might not know whether it projects onto degenerate or non-degenerate eigenspaces.
 
  • #18
So you can change it EXACTLY SO that degeneracy appears?
 
  • #19
ErikZorkin said:
So you can change it EXACTLY SO that degeneracy appears?
Change what? (EDIT: Oh, I didn't realize that this was a response to bhobba.)

Here is an example:
Let ##A## be an observable given by a matrix and we have observed the value ##a=5## with a precision of ##\sigma=0.5##. We don't need the projector onto the eigenspace to the eigenvalue ##5##, but rather a projector ##P(4.5,5.5)## onto the space of states which is spanned by eigenstates with eigenvalues ##4.5 \leq a \leq 5.5##. The numerics might give us ##P(4.5,5.5) = P_{5.1}## with multiplicity 4, or ##P(4.5,5.5) = P_{4.9} + P_{5.3}+P_{5.4}## with corresponding multiplicities ##1##, ##1## and ##2##, and this decomposition might be numerically unstable and uncomputable, but since we don't care about the decomposition, only about ##P(4.5,5.5)## itself, this uncomputability issue isn't relevant for us.
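(A minimal numerical sketch of this window projector; it is not from the original post, and the matrix, the measured value and the precision are illustrative. The point is that ##P(4.5,5.5)## is built from the span of all eigenvectors with eigenvalues in the window, without ever deciding how that span splits into individual eigenspaces.)

```python
import numpy as np

# Illustrative Hermitian observable (not a specific physical system).
A = np.array([[5.1, 0.2, 0.0],
              [0.2, 4.9, 0.1],
              [0.0, 0.1, 7.0]])

measured, sigma = 5.0, 0.5            # measured value and stated precision
lo, hi = measured - sigma, measured + sigma

vals, vecs = np.linalg.eigh(A)        # numerical spectrum and orthonormal eigenvectors

# Projector onto the span of all eigenvectors with eigenvalues in [lo, hi].
# Whether this span is one degenerate eigenspace or a direct sum of several
# nearly degenerate ones never enters the construction.
mask = (vals >= lo) & (vals <= hi)
V = vecs[:, mask]
P_window = V @ V.conj().T

print(np.round(vals, 3))
print(np.round(P_window, 3))
```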
 
  • #20
rubi said:
In order to do physics, we only need to know the eigenvalues to the precision of the measurement apparatus. We don't need to know the multiplicity, since we only need to project onto the space of states that are close enough to the measured eigenvalue. If the numerics gives us many eigenspaces for eigenvalues close enough to the measured value, we would project onto their direct sum. If the numerics gives us fewer, degenerate eigenspaces, we would also project onto their direct sum, but we would need fewer projectors. In both cases, the numerics would provide us with a sufficiently good projector, even though we might not know whether it projects onto degenerate or non-degenerate eigenspaces.

This is MUCH closer to what I was asking.
 
  • #21
ErikZorkin said:
So you can change it EXACTLY SO that degeneracy appears?

I think you need to see an axiomatic treatment of QM - see post 137:
https://www.physicsforums.com/threads/the-born-rule-in-many-worlds.763139/page-7

Axiom 1
Associated with each von Neumann measurement we can find a Hermitian operator ##O##, called the observation's observable, such that the possible outcomes of the observation are its eigenvalues ##y_i##.

The values of those outcomes are entirely arbitrary - any operator can be made degenerate or non-degenerate without changing the underlying physics.
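For example, here is a minimal numerical sketch of such a relabeling (not from the original post; the matrix and the chosen outcome are illustrative):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])                 # a non-degenerate observable
vals, vecs = np.linalg.eigh(A)

# "Indicator" observable for the outcome 2.0: eigenvalue 1 on that eigenspace,
# 0 everywhere else.  Same eigenvectors, same physics, but the eigenvalue 0 is
# now two-fold degenerate -- degeneracy created purely by relabeling outcomes.
indicator = sum(float(np.isclose(v, 2.0)) * np.outer(u, u)
                for v, u in zip(vals, vecs.T))
print(indicator)
```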

Thanks
Bill
 
  • #22
ErikZorkin said:
Which eigenvalue is observed determines which projection is "applied" after measurement, right? For eigenvalues for which it is computationally undecidable whether they are equal or distinct, it's impossible to determine which projection applies.
There's nothing that determines which value is observed when measuring an observable. The state of the system determines the probabilities (and only the probabilities) with which you'll find a possible value. In terms of my notation above it's given, according to Born's rule,
$$P(a)=\mathrm{Tr}(\hat{\rho} \hat{P}_a),$$
where ##\hat{\rho}## is the statistical operator, representing the system's state when the measurement is done.
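(A minimal numerical sketch of this Born-rule probability; not from the original post. The state and projector below are illustrative, with the projector onto a two-dimensional, i.e. degenerate, eigenspace.)

```python
import numpy as np

# Pure-state statistical operator rho = |psi><psi| (illustrative state).
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Projector P_a onto a two-dimensional (degenerate) eigenspace spanned by the
# first two basis vectors.
P_a = np.diag([1.0, 1.0, 0.0])

# Born's rule: P(a) = Tr(rho P_a).
prob = float(np.real(np.trace(rho @ P_a)))
print(prob)   # 1.0 here, since psi lies entirely inside the eigenspace
```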
 
  • #23
ErikZorkin said:
Detecting degeneracy is undecidable. Also, I am not interested in theoretical constructions, but in practical ones.
Practically you determine the spectrum by calculating it numerically. This gives you an orthonormal basis and the projection operators. Approximately, of course, but that's the nature of practice.

If your spectrum is too tightly spaced, it is unlikely that you are really performing a Copenhagen measurement; hence you shouldn't simulate it as one. In this case you should look at POVMs instead.
 
  • #24
bhobba said:
The values of those outcomes are entirely arbitrary - any operator can be made degenerate or non-degenerate without changing the underlying physics.

What's the mathematical background of this?
 
  • #25
A. Neumaier said:
If your spectrum is too tightly spaced, it is unlikely that you are really performing a Copenhagen measurement; hence you shouldn't simulate it as one. In this case you should look at POVMs instead.

What's a POVM?
 
  • #27
ErikZorkin said:
What's the mathematical background of this?

Scratching head. It's detailed in the link I gave.

Thanks
Bill
 
  • #28
A. Neumaier said:
Well, the article doesn't seem to be very explanatory. At least in terms of how it relates to the practical nature of measurement.
 
  • #29
ErikZorkin said:
Well, the article doesn't seem to be very explanatory. At least in terms of how it relates to the practical nature of measurement.
POVMs allow one to model finite precision measurements of continuous variables in a style generalizing Born's rule.

On the other hand, practical measurement is something completely different than what is discussed in the Copenhagen interpretation.

What do you really want to understand?
 
  • #30
A. Neumaier said:
POVMs allow one to model finite precision measurements of continuous variables in a style generalizing Born's rule.

That's exactly the aspect I am trying to understand. Where can I read about it? Wiki's article seems to be full of "clarification needed" marks.
 
  • #31
ErikZorkin said:
That's exactly the aspect I am trying to understand. Where can I read about it? Wiki's article seems to be full of "clarification needed" marks.
There is a nice book on the foundations of QM by Asher Peres. Highly recommended.
 
  • #32
Well, I don't have time to read a whole book. I just want to understand what the key feature of POVM is (mathematically) and how it addresses the degeneracy problem in practical measurement.
 
  • #33
ErikZorkin said:
Well, I don't have time to read a whole book. I just want to understand what the key feature of POVM is (mathematically) and how it addresses the degeneracy problem in practical measurement.
Understanding doesn't come for free.
The book is online. You can concentrate on the part you want to understand.
 
  • #34
So, is it merely a way of defining operators corresponding to measurements of a specific range of outcomes rather than discrete values?
 
  • #35
ErikZorkin said:
Well, I don't have time to read a whole book. I just want to understand what the key feature of POVM is (mathematically) and how it addresses the degeneracy problem in practical measurement.

Are you on a tight deadline for acquiring this understanding?

:wink:
 
  • #36
ErikZorkin said:
So, is it merely a way of defining operators corresponding to measurements of a specific range of outcomes rather than discrete values?
No. It accounts for a more general class of measurements. Maybe the discussion here helps.
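(To make this concrete, here is a minimal numerical sketch of a POVM, not from the original post: three positive operators ##E_i## on a qubit that sum to the identity. Outcome probabilities are ##p_i = \mathrm{Tr}(\rho E_i)##; unlike in a von Neumann measurement, the ##E_i## need not be orthogonal projectors, and there can be more outcomes than the dimension of the Hilbert space.)

```python
import numpy as np

def ket(theta):
    """Real qubit state vector at angle theta (illustrative parametrization)."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# The "trine" POVM: three non-orthogonal rank-1 elements summing to the identity.
states = [ket(0.0), ket(2 * np.pi / 3), ket(4 * np.pi / 3)]
E = [(2 / 3) * np.outer(s, s.conj()) for s in states]
print(np.round(sum(E), 10))                 # identity, as a POVM requires

rho = np.outer(ket(0.0), ket(0.0).conj())   # example state
probs = [float(np.real(np.trace(rho @ Ei))) for Ei in E]
print(np.round(probs, 3))                   # [0.667, 0.167, 0.167], summing to 1
```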
 
  • #37
stevendaryl said:
Are you on a tight deadline for acquiring this understanding?

:wink:
Well, better said, I lack time badly! And I am neither a physicist nor am I that much interested in physics (rather, in the mathematical foundations thereof).
 
  • #38
A. Neumaier said:
No. It accounts for a more general class of measurements. Maybe the discussion here helps.
Nice post! Could you give a link to an example where usage of POVMs is demonstrated along with measurement imperfection?
 
  • #39
ErikZorkin said:
nor am I that much interested in physics (rather, in the mathematical foundations thereof).
I don't give advice to superficial thinkers who think that instant understanding is just a few clicks away.

You won't get far in the mathematical foundations of physics without learning some physics and spending a lot of time. Take a slower pace and you'll benefit a lot from it.

The book by Peres is wholly about the foundations of quantum mechanics (only). For foundations of measurement see e.g. https://labs.psych.ucsb.edu/ashby/gregory/klstv2.pdf . And these are only the tips of two huge icebergs...
 
  • #40
Well, thanks for directing me to POVMs
 
  • #41
However, POVMs can't resolve the mathematical computability issue that ErikZorkin brought up, since they can always be seen as PVMs on a larger Hilbert space, so if they could resolve the issue, then the issue with the PVMs would also be resolved, which is apparently impossible. I think the physical resolution is what I have written in posts #17 and #19 and of course it can also be formulated using POVMs.
 
  • #42
rubi said:
and we have observed the value ##a=5## with a precision of ##\sigma=0.5##. We don't need the projector onto the eigenspace to the eigenvalue ##5##, but rather a projector ##P(4.5,5.5)## onto the space of states which is spanned by eigenstates with eigenvalues ##4.5 \leq a \leq 5.5##.
This doesn't solve the problem of principle since the precision 0.5 is uncertain, too, whereas your construction assumes that it and the observed value are both known to infinite precision.

It is well known and experimentally verifiable that projection-valued measures are often far too crude, whereas POVMs (and their "square roots") give a generally good model for this kind of measurement.
 
  • #43
Well, the value ##0.5## is what the experimenter hands me. If they claim that their measurement uncertainty is ##0.5## and this leads to disagreements between the theory and the experiment, then either the theory is false or the experimenter has made systematic errors and his uncertainty isn't really ##0.5##, but rather something else.

I don't doubt that POVMs are better suited for realistic measurement. I just don't think that they resolve the specific problem the OP has brought up.
 
  • #44
That's kind of right. The suggestion on POVMs might be a bit misleading. The connection to PVMs is established by Neumark's theorem, which sets up a one-to-one correspondence.
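(A minimal numerical sketch of a Neumark/Naimark dilation, not from the thread; the "unsharp" qubit POVM below is illustrative. With ##M_i = \sqrt{E_i}##, stacking the ##M_i## gives an isometry ##V## into a larger space, and each POVM element is recovered from a projective measurement there: ##E_i = V^\dagger \Pi_i V##.)

```python
import numpy as np

def psd_sqrt(E):
    """Square root of a positive semidefinite matrix via its eigendecomposition."""
    w, U = np.linalg.eigh(E)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

# Illustrative "unsharp" qubit POVM with two elements summing to the identity.
E = [np.diag([0.7, 0.3]), np.diag([0.3, 0.7])]

# Naimark dilation: V stacks the blocks M_i = sqrt(E_i); it is an isometry,
# and projecting onto block i on the enlarged space reproduces E_i.
M = [psd_sqrt(Ei) for Ei in E]
V = np.vstack(M)                                   # shape (4, 2)
print(np.allclose(V.conj().T @ V, np.eye(2)))      # True: V^dagger V = I

for i, Ei in enumerate(E):
    Pi = np.zeros((4, 4))
    Pi[2 * i: 2 * i + 2, 2 * i: 2 * i + 2] = np.eye(2)   # projector onto block i
    print(np.allclose(V.conj().T @ Pi @ V, Ei))          # True: E_i = V^dagger Pi_i V
```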
 
  • #45
rubi said:
Well, the value ##0.5## is what the experimenter hands me. If they claim that their measurement uncertainty is ##0.5## and this leads to disagreements between the theory and the experiment, then either the theory is false or the experimenter has made systematic errors and his uncertainty isn't really ##0.5##, but rather something else.
The measurement error could also be nonsystematic.

It could be 0.51 or 0.49 - and would lead to a significantly different projector in case the wave function contains a large contribution in the symmetric difference of the two spectral domains.

What an experimenter hands you is always inaccurate, and the uncertainty is usually much more inaccurate than the value itself - because it is much less well determined operationally.

It is ridiculous to think that Nature responds to a quantum measurement according to whatever the experimenter hands you.
 
  • #46
A. Neumaier said:
The measurement error could also be nonsystematic.

It could be 0.51 or 0.49 - and would lead to a significantly different projector in case the wave function contains a large contribution in the symmetric difference of the two spectral domains.

What an experimenter hands you is always inaccurate, and the uncertainty is usually much more inaccurate than the value itself - because it is much less well determined operationally.
If the experimenter did not make any systematic errors and computed a value of ##0.5## for his measurement uncertainty, then the theory better predict the experimental results correctly, given this value for the uncertainty. Otherwise it is false and has to be rejected. The theory just wouldn't be compatible with the experimental results.

It is ridiculous to think that Nature responds to a quantum measurement according to whatever the experimenter hands you.
Well, the experimenter can't hand me any number he likes. He must hand me the value that he computed for his measurement uncertainty. However, I agree that this is ridiculous. I think that the projection postulate is nonsensical and will eventually be abandoned. I'm just answering from the point of view of a Copenhagenist, since this is what the OP (implicitly) asked for.
 
  • #47
rubi said:
If the experimenter did not make any systematic errors and computed a value of 0.5 for his measurement uncertainty, then the theory better predict the experimental results correctly, given this value for the uncertainty.
No. The value for the uncertainty is always itself uncertain, and typically overly conservative. There is a large literature about how to compute and report uncertainties, and the advice is to be conservative in case of doubt.

rubi said:
I agree that this is ridiculous. I think that the projection postulate is nonsensical and will eventually be abandoned. I'm just answering from the point of view of a Copenhagenist, since this is what the OP (implicitly) asked for.
He asked about the realistic situation. The real situation is often described by a POVM - but the experimenter will not know the precise parameters of the POVM, only an approximate description. And in most cases the optimally fitting POVM will not be projection-valued - hence treating it in the Copenhagen way will introduce a systematic error.

But even with the optimal POVM and an optimal assessment of result and uncertainty, the latter will deviate from the true result given by the POVM. This is unavoidable. There is always the error due to the modeling plus the additional error due to the actual reading.
 
  • #48
A. Neumaier said:
No. The value for the uncertainty is always itself uncertain, and typically overly conservative. There is a large literature about how to compute and report uncertainties, and the advice is to be conservative in case of doubt.
Well, as a matter of fact, Copenhagen-style QM does have the projection postulate, and its predictions depend on the uncertainty. I have never seen a Copenhagenist explain what uncertainty must be taken in order to get correct predictions. However, the only number that we actually have is the uncertainty computed by the experimenter. What other number do you propose? Unless we have such a number, Copenhagen-style QM isn't even a physical theory at all, since it doesn't tell us which projector to use in order to make predictions.

There must be a recipe that tells us the right projector to use in the projection postulate. This recipe can be falsified.

He asked about the realistic situation.
I interpreted his question to be about how we can make predictions with the projection postulate if the eigenspace decomposition is actually uncomputable. But maybe I just interpreted him wrongly.
 
  • #49
rubi said:
point of view of a Copenhagenist, since this is what the OP (implicitly) asked for.
Well, not necessarily. That's at least what I am familiar with. And by the way, it's more of a problem with the exact computation of operator spectra, which is impossible, than with interpretations of QM.

A. Neumaier said:
only an approximate description
Do POVMs admit constructive approximation (up to arbitrary precision)?

rubi said:
I interpreted his question to be about how we can make predictions with the projection postulate if the eigenspace decomposition is actually uncomputable. But maybe I just interpreted him wrongly.
This is exactly what I asked. In other words, it's the issue of uncomputability of spectra.
 
  • #50
rubi said:
There must be a recipe that tells us the right projector to use in the projection postulate.
There is no such recipe for a general measurement. The Born rule is well-defined (through a precise specification of the meaning of ''measurement'') only for interpreting the results of collision experiments, i.e., the S-matrix elements. Born originally had it only in the form of a law for predicting the result of collisions (where the measured operator is itself a projection), and it is verifiable in these situations.

Later it was abstracted into the modern form by von Neumann, who introduced an ''ideal'' measurement without a clear meaning - so that only conformance to the rule ''defines'' whether a particular measurement is ''ideal''. Almost none is. Neither photodetection nor electron detection works as claimed by the rule.

For the interpretation of real measurements one uses instead sophisticated models of Lindblad type that predict the dynamics of the state and the probabilities of the outcomes.
 