I Eigenvalue degeneracy in real physical systems

  • #151
ErikZorkin said:
Well, but, at least in case of discrete variables (and in a simulation, they will all be discrete in fact), POVMs

Did you read the link I gave on quantum measurement theory? It explains it all there.

Quantum operators give resolutions of the identity via the spectral theorem. However, they are not the most general observations in QM; POVMs are. POVMs are resolutions of the identity with the disjointness (orthogonality) requirement removed. POVMs can in turn be reduced to resolutions of the identity using the concept of a probe: you have a von Neumann type observation (i.e. a resolution-of-the-identity type) and insert a probe to observe it. You then observe the probe to indirectly observe the first system. It can be shown that this indirect observation of the first system is described by a POVM, not a resolution of the identity. The detail is in the link I gave - please read it.
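For concreteness, here is a minimal sketch of the probe construction (the coupling unitary ##U##, probe state ##|0\rangle## and probe projections ##\Pi_k## are generic placeholders, not notation taken from the linked notes). Measuring the probe with a resolution of the identity after the interaction gives
$$p_k=\mathrm{Tr}\big[(\mathbb{1}\otimes\Pi_k)\,U(\rho\otimes|0\rangle\langle 0|)U^\dagger\big]=\mathrm{Tr}[\rho\,E_k],\qquad E_k=\langle 0|U^\dagger(\mathbb{1}\otimes\Pi_k)U|0\rangle .$$
The ##E_k## are positive and sum to ##\mathbb{1}##, but they need not be orthogonal projections; they form a POVM on the first system.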

Thanks
Bill
 
  • #152
ErikZorkin said:
For instance, I asked:

My question is, do you use the spectral decomposition theorem to derive POVMs, and if so, where?

bhobba: Of course.

The resolution of the identity from the spectral theorem is a POVM - that's the "of course".

I gave a link explaining observations in QM. If you were to study it, it would likely answer your queries.

Thanks
Bill
 
  • #153
ErikZorkin said:
they are hard to construct.
Not really. Take ##N## matrices ##A_k## without a common null vector. Then the (computable) Cholesky factor ##R## of ##\sum A_k^*A_k## is invertible and the (computable) ##P_k=A_kR^{-1}## form matrices with ##\sum P_k^*P_k=1##, which is all you need. One can use least squares to adapt the matrix entries to real measurements if one wants to use this to simulate a real life transformation behavior. Matching reality is called quantum estimation theory.
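For concreteness, a minimal numerical sketch of this normalization (NumPy; the random ##A_k## are placeholders chosen only so the code runs):

```python
import numpy as np

# Sketch of the construction above: given matrices A_k with no common null vector,
# normalize them so that sum_k P_k^* P_k = 1.
rng = np.random.default_rng(0)
N, d = 3, 4                                   # number of operators, Hilbert-space dimension
A = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(N)]

S = sum(a.conj().T @ a for a in A)            # S = sum_k A_k^* A_k (positive definite)
R = np.linalg.cholesky(S).conj().T            # upper-triangular factor with S = R^* R
P = [a @ np.linalg.inv(R) for a in A]         # P_k = A_k R^{-1}

resolution = sum(p.conj().T @ p for p in P)   # should equal the identity
print(np.allclose(resolution, np.eye(d)))     # True up to rounding error
```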
 
  • Like
Likes ErikZorkin
  • #154
bhobba said:
The resolution of the identity from the spectral theorem is a POVM - that's the "of course".

I gave a link explaining observations in QM. If you were to study it, it would likely answer your queries.

Thanks
Bill

I read through that link, but it's of no use for me since it assumes that the density matrix is diagonalizable in the first place. My question is whether you can consistently describe a measurement in an approximate format when you know the spectral decomposition only up to some finite precision. I gave the exact expression for an approximate spectral decomposition above.
 
  • #155
A. Neumaier said:
Not really. Take ##N## matrices ##A_k## without a common null vector. Then the (computable) Cholesky factor ##R## of ##\sum A_k^*A_k## is invertible and the (computable) ##P_k=A_kR^{-1}## form matrices with ##\sum P_k^*P_k=1##, which is all you need. One can use least squares to adapt the matrix entries to real measurements if one wants to use this to simulate a real life transformation behavior. Matching reality is called quantum estimation theory.

That's nice
 
  • #156
ErikZorkin said:
That's nice
You can take the ##A_k## to be approximate projectors. In this case you get something that is close to an ideal Copenhagen measurement.
 
  • Like
Likes ErikZorkin
  • #157
A. Neumaier said:
You can take the ##A_k## to be approximate projectors. In this case you get something that is close to an ideal Copenhagen measurement.

Like in this case?

For any Hermitian operator ##T## and any ##\varepsilon>0##, there exist commuting projections ##P_1,\dots,P_n## with ##P_iP_j=0## for ##i\neq j## and real numbers ##c_1,\dots,c_n## such that ##\|T-\sum_{i=1}^n c_iP_i\|\leq\varepsilon##.
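In finite dimensions this can be read off the spectral theorem directly (a sketch in my own words, not a quoted proof): write ##T=\sum_j\lambda_jQ_j## with spectral projections ##Q_j##, partition the eigenvalues into bins of width at most ##\varepsilon##, and set
$$P_i=\sum_{\lambda_j\in\text{bin }i}Q_j,\qquad c_i=\text{any point of bin }i.$$
The ##P_i## then commute, satisfy ##P_iP_j=0## for ##i\neq j##, and ##\|T-\sum_ic_iP_i\|\leq\varepsilon##, since every eigenvalue is moved by at most ##\varepsilon##.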
 
  • #158
ErikZorkin said:
Like in this case?
I was thinking of computing an approximate spectrum, deciding how to group the approximate eigenvalues, then computing approximate projectors onto the corresponding invariant subspaces, and then applying the construction to these.
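A rough numerical sketch of that pipeline (NumPy; the grouping tolerance tol is a free choice, not prescribed by anything above):

```python
import numpy as np

# Group nearly degenerate eigenvalues of a Hermitian matrix T and build
# approximate projectors onto the grouped invariant subspaces.
def grouped_projectors(T, tol):
    vals, vecs = np.linalg.eigh(T)          # approximate spectrum and eigenvectors (sorted)
    groups, current = [], [0]
    for i in range(1, len(vals)):
        if vals[i] - vals[i - 1] <= tol:     # eigenvalues closer than tol share a group
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    # projector onto the span of the eigenvectors in each group
    return [vecs[:, g] @ vecs[:, g].conj().T for g in groups]
```

The resulting approximate projectors could then serve as the ##A_k## in the Cholesky-based normalization from post #153.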
 
  • Like
Likes ErikZorkin
  • #159
ErikZorkin said:
I read through that link, but it's of no use for me since it assumes that the density matrix is diagonalizable in the first place.

Scratching head. Can't find where it makes that assumption, but maybe I am blind - it's been a while since I went through it.

Thanks
Bill
 
  • #160
bhobba said:
Can't find where it makes that assumption,
A density operator is Hermitian and trace class, hence always self-adjoint, hence diagonalizable. So this is not an assumption but a provable result.
 
  • Like
Likes bhobba and vanhees71
  • #161
A. Neumaier said:
A density operator is Hermitian and trace class, hence always self-adjoint, hence diagonalizable. So this is not an assumption but a provable result.

Kicking self o0)o0)o0)o0)o0)o0)o0)o0)

It follows simply from the spectral theorem, since it's Hermitian and obviously normal.

Thanks
Bill
 
  • #162
bhobba said:
Dirac's elegant formulation is now rigorous since Rigged Hilbert Spaces have been worked out.

Came here to say exactly this! I decided not to read through the whole thread, but to instead search each page for the words "rigged" or "triplet" and came across your post on page 5. Was this post resolved? I don't see how there would be an issue in the extended nuclear space...as Ballentine himself says, "[...] rigged Hilbert space seems to be a more natural mathematical setting for quantum mechanics than is Hilbert space."
 
  • #163
bhobba said:
Kicking self o0)o0)o0)o0)o0)o0)o0)o0)

It follows simply from the spectral theorem, since it's Hermitian and obviously normal.

Thanks
Bill

The question was what you can do when you can't have an exact diagonalization.
 
  • #164
HeavyMetal said:
Came here to say exactly this! I decided not to read through the whole thread, but to instead search each page for the words "rigged" or "triplet" and came across your post on page 5. Was this post resolved? I don't see how there would be an issue in the extended nuclear space...as Ballentine himself says, "[...] rigged Hilbert space seems to be a more natural mathematical setting for quantum mechanics than is Hilbert space."

The OP's background is math, and he was thinking in terms of a highly rigorous approach like you find in pure math. QM can be done that way and I gave a link to a book, but he didn't want to pursue it.

Thanks
Bill
 
  • #165
ErikZorkin said:
The question was what you can do when you can't have an exact diagonalization.

By the definition of a state as a positive operator of unit trace, it must have one. I actually derived it in a link I gave previously:
https://www.physicsforums.com/threads/the-born-rule-in-many-worlds.763139/page-7

It's the modern version of a famous theorem from the mathematician Gleason:
http://www.ams.org/notices/200910/rtx091001236p.pdf

The rock-bottom essence is non-contextuality, and it is a much more general result than the equally famous Kochen–Specker theorem (which is a simple corollary of Gleason):
https://en.wikipedia.org/wiki/Kochen–Specker_theorem

Thanks
Bill
 
  • #166
bhobba said:
The OP's background is math, and he was thinking in terms of a highly rigorous approach like you find in pure math. QM can be done that way and I gave a link to a book, but he didn't want to pursue it.

Thanks
Bill

Not really. Thank you for the book, though; I'll take a look at it later. It's just been a bit off from the original question.
 
  • Like
Likes bhobba
  • #167
bhobba said:
By the definition of a state as a positive operator of unit trace, it must have one.

Only classically. But in terms of computable analysis, it doesn't.
 
  • Like
Likes bhobba
  • #168
ErikZorkin said:
Not really. Thank you for the book, though; I'll take a look at it later. It's just been a bit off from the original question.

:smile::smile::smile::smile::smile::smile::smile:

Thanks
Bill
 
  • #169
ErikZorkin said:
Only classically. But in terms of computable analysis, it doesn't.

Got it.

Thanks
Bill
 
  • #170
bhobba said:
Got it.

Thanks
Bill

But it shouldn't be a problem in practical physics, and an approximate spectral decomposition should be sufficient. That's the main message I'm trying to check with this community. Also, you might be interested in this book (I'm giving this with little hope that it gets viewed, though). It's surprising how much of the physics can be done in a purely computable framework.
 
  • #171
rubi said:
In order to do physics, we only need to know the eigenvalues to the precision of the measurement apparatus. We don't need to know the multiplicity, since we need to project onto the space of states that are close enough to the measured eigenvalue. If the numerics gives us many eigenspaces for eigenvalues close enough to the measured value, we would project onto their direct sum. If the numerics gives us fewer, degenerate eigenspaces, we would also project onto their direct sum, but we would need fewer projectors. In both cases, the numerics would provide us with a sufficiently good projector, even though we might not know whether it projects onto degenerate or non-degenerate eigenspaces.

I know this quote is a bit old, but can someone give me a reference to such a spectral decomposition in approximate (and possibly computable) format? The theorem that I mentioned above does not cover the question of approximating eigenvectors and eigenspaces.
 
  • #172
ErikZorkin said:
I know it's a bit outdated, but can someone give me a reference to such a spectral decomposition in approximate (and possibly computable) format? The theorem that I mentioned above does not cover the question of approximating eigenvectors and -spaces.
One discretizes the time-independent Schroedinger equation, then solves a matrix eigenvalue problem. There are many excellent solvers on the web that give approximations to the spectrum and the eigenvectors. The grouping and approximate projection-building can be done on this level. If there is a continuous spectrum, one also has to do an additional fit to approximately extract the corresponding scattering information, which is a bit more complicated. Details depend very much on the system to be handled and the accuracy required.
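A minimal illustration of the first step (NumPy; a 1D harmonic oscillator on a finite grid, chosen only because its exact levels are known):

```python
import numpy as np

# Discretize the 1D time-independent Schroedinger equation (hbar = m = 1) on a
# uniform grid and solve the resulting matrix eigenvalue problem.
n, L = 400, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

V = 0.5 * x**2                                        # harmonic oscillator potential
# -1/2 d^2/dx^2 via second-order finite differences; H is real symmetric
kinetic = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), -1)
                            - 2.0 * np.diag(np.ones(n))
                            + np.diag(np.ones(n - 1), 1))
H = kinetic + np.diag(V)

energies, states = np.linalg.eigh(H)                  # approximate spectrum and eigenvectors
print(energies[:4])                                   # close to 0.5, 1.5, 2.5, 3.5
```

The grouping and projector construction from post #158 can then be applied to `energies` and `states`.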
 
  • Like
Likes bhobba and ErikZorkin
  • #173
The matrix eigenvalue problem is undecidable: only the roots of ##\det(A-\alpha I)## are computable, but their multiplicities aren't. That is why these solvers suffer from instability when the matrix is degenerate and the cardinality of the spectrum is unknown, as mentioned by Ziegler & Brattka. An "effective" algorithm means one that always outputs a correct answer. We can achieve this by essentially allowing eigenvalues, eigenvectors, eigenspaces, and projections to be computed in an approximate format. Honestly, I thought it would be easy to find a reference, but I haven't found one so far. Numerical methods are something different.
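To make the obstruction concrete (my own toy example, not one from Ziegler & Brattka): for a computable real ##a\ge 0##, the matrix
$$\begin{pmatrix}a&0\\0&0\end{pmatrix}$$
has eigenvalue ##0## with multiplicity ##2## exactly when ##a=0##, and deciding ##a=0## from finite-precision approximations of ##a## is impossible in general. So exact multiplicities (and hence exact eigenprojections) cannot be computed from approximate data, even though the approximate spectrum itself can.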
 
  • #174
ErikZorkin said:
Matrix eigenvalue problem is undecidable
You never did any actual simulation, else you wouldn't care about the abstract notion of computability. Statistical errors in simulation are typically much larger than all other sources of inaccuracies.

Undecidability doesn't matter, as only an approximate solution is needed. Therefore only numerical methods count. Engineers routinely use the available packages to solve high-dimensional eigenvalue problems for the design of cars, bridges, high-rise buildings, ships, etc.
 
  • #175
A. Neumaier said:
You never did any actual simulation
Funnily enough, I do it almost every workday :)

That said, numerical methods often do not meet the specifications. If something doesn't work well, it is simply rerun. So effective algorithms, supported by formalized proofs, are becoming more popular.
 
  • #176
ErikZorkin said:
numerical methods often do not meet the specifications.
This just means that the specifications are overly strict.

Typically, the input of a matrix problem is already inaccurate, hence requiring the output to be accurate to the last bit is meaningless. If ##A## is a Hermitian matrix with a double eigenvalue and you perturb each coefficient by ##O(\epsilon)##, the eigenvalues will typically separate by an amount of ##O(\sqrt{\epsilon})## and the eigenvectors will typically even depend discontinuously on the perturbation. Thus the solution of the exact problem means nothing for the intended unknown problem nearby.
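A small illustration of that last point (NumPy; the ##10^{-12}## perturbations are arbitrary):

```python
import numpy as np

# Two tiny Hermitian perturbations of the identity give completely different eigenvectors.
eps = 1e-12
A1 = np.eye(2) + eps * np.array([[1.0, 0.0], [0.0, -1.0]])   # perturb along sigma_z
A2 = np.eye(2) + eps * np.array([[0.0, 1.0], [1.0, 0.0]])    # perturb along sigma_x

_, v1 = np.linalg.eigh(A1)
_, v2 = np.linalg.eigh(A2)
print(v1)   # columns are the standard basis vectors
print(v2)   # columns proportional to (1, -1) and (1, 1), despite an O(1e-12) change
```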

Since we didn't get closer after 175 posts, I'll stop contributing to this thread.
 
  • #177
Who didn't get closer? I did (as I mentioned some posts ago).
 