
I Eigenvalues, eigenvectors and the expansion theorem

  1. Sep 17, 2016 #1

    dyn


    If I have an arbitrary ket, I know it can always be expressed as a linear combination of the basis kets. I now have an operator A with two eigenvalues, +1 and -1.
    The corresponding eigenvectors are ##|v\rangle_+ = k|b\rangle + m|a\rangle## and ##|v\rangle_- = n|c\rangle##, where ##|a\rangle##, ##|b\rangle## and ##|c\rangle## are linear combinations of the basis vectors.
    The arbitrary ket is expressed as ##|\psi\rangle = a|a\rangle + b|b\rangle + c|c\rangle##, where ##|a|^2## gives the probability of a measurement returning the eigenvalue corresponding to ##|a\rangle##. A question asks for the probability of measuring the eigenvalue +1, and it gives the answer as ##|b|^2 + |a|^2##.
    Finally, to my question: how or why does the expansion theorem apply to this situation, when the eigenvector ##|v\rangle_+## only exists as a combination of ##|a\rangle## and ##|b\rangle##?
    Hoping you can understand my question. Thanks
     
  3. Sep 17, 2016 #2

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    What is the dimension of the vector space?

    We know it is at least two, since there are two distinct eigenvalues, but it will be more than that if either of those eigenvalues has multiplicity greater than 1. If so, then the statement about 'corresponding eigenvectors' is inaccurate. An eigenvalue with multiplicity ##k## corresponds to a ##k##-dimensional eigenspace, for which we need ##k## linearly independent vectors to form a basis.

    Alternatively, if the dimension is two then the two eigenvectors you list must form a basis, since they are orthogonal and hence linearly independent. Hence we will be able to express any vector/ket as a linear combination of those two vectors.
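The two-dimensional case is easy to check numerically. A minimal sketch with NumPy, using a hypothetical 2x2 Hermitian operator with eigenvalues +1 and -1 (not the operator from the original problem):

```python
import numpy as np

# Hypothetical 2x2 Hermitian operator with eigenvalues +1 and -1
# (here simply a diagonal matrix, for illustration only).
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

eigvals, eigvecs = np.linalg.eigh(A)  # columns are orthonormal eigenvectors

# Any ket in C^2 expands in this eigenbasis with coefficients <v_i|psi>.
psi = np.array([0.6, 0.8])
coeffs = eigvecs.conj().T @ psi

# Reconstructing psi from the expansion recovers the original ket.
reconstructed = eigvecs @ coeffs
print(np.allclose(reconstructed, psi))  # True
```

Because the eigenvectors of a Hermitian operator with distinct eigenvalues are orthogonal, the reconstruction succeeds for any choice of `psi`.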
     
  4. Sep 17, 2016 #3

    Nugatory


    Staff: Mentor

    ##|v\rangle_+## is a perfectly good vector in its own right. The fact that it can be written as the sum of two other vectors doesn't make it some sort of second-class vector that only exists as that sum - any vector can be written as the sum of two other vectors.
     
  5. Sep 17, 2016 #4

    dyn


    The Hilbert space is ##\mathbb{C}^3##, so it has 3 orthonormal basis vectors. ##|v\rangle_+## exists in a 2-D subspace of ##\mathbb{C}^3##,
    and ##|v\rangle_-## is in a 1-D subspace of ##\mathbb{C}^3##.
     
  6. Sep 17, 2016 #5

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    Then a second eigenvector in the eigenspace of eigenvalue +1 is needed. Without it, we do not have a specification of an eigenbasis.

    However, it's still not clear exactly what your question is. Can you express it more clearly?
     
  7. Sep 17, 2016 #6

    dyn


    Thank you for your time. It's getting late here. I will try to rephrase my question more clearly tomorrow. I appreciate you trying to understand my question.
     
  8. Sep 18, 2016 #7

    Nugatory


    Staff: Mentor

    That makes sense, but the second part does not necessarily follow. It might be the case, and if it is, then as @andrewkirk says, you're missing an eigenvector - the two that you have are not sufficient to span the three-dimensional Hilbert space.

    However, there is another possibility: the two eigenvectors of A together span a two-dimensional subspace of the Hilbert space. Might that be the case for this problem? It would be consistent with the answer provided for the probability of measuring +1 for A.
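That consistency is easy to verify numerically. A sketch in ##\mathbb{C}^3## where the +1 eigenvalue is doubly degenerate; the basis vectors and state here are hypothetical stand-ins, not the ones from the original problem:

```python
import numpy as np

# Hypothetical orthonormal basis of C^3: |a>, |b> span the +1 eigenspace,
# |c> spans the -1 eigenspace.
ket_a = np.array([1.0, 0.0, 0.0])
ket_b = np.array([0.0, 1.0, 0.0])
ket_c = np.array([0.0, 0.0, 1.0])

# A = (+1) P_plus + (-1) P_minus, with P_plus projecting onto span{|a>,|b>}.
P_plus = np.outer(ket_a, ket_a) + np.outer(ket_b, ket_b)
P_minus = np.outer(ket_c, ket_c)
A = P_plus - P_minus

# An arbitrary normalized state psi = a|a> + b|b> + c|c>.
psi = 0.6 * ket_a + 0.48 * ket_b + 0.64 * ket_c
psi = psi / np.linalg.norm(psi)

# Probability of measuring +1 is <psi|P_plus|psi> = |a|^2 + |b|^2.
p_plus = psi.conj() @ P_plus @ psi
print(np.isclose(p_plus, abs(ket_a @ psi)**2 + abs(ket_b @ psi)**2))  # True
```

Since ##P_+ + P_- = \hat{1}##, the two outcome probabilities always sum to 1 for a normalized state.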
     
  9. Sep 18, 2016 #8

    dyn


    Thanks for your replies. I think I have it sorted now. One of the eigenvalues was doubly degenerate and so was represented by 2 orthonormal basis vectors.

    On this point: if an eigenvalue is doubly degenerate, i.e. has 2 eigenvectors ##|v_1\rangle## and ##|v_2\rangle##, and a measurement returns this eigenvalue, does that mean we have no way of knowing whether an arbitrary ket has collapsed to ##|v_1\rangle##, ##|v_2\rangle##, or some linear combination of those 2 eigenvectors?

    On a separate note, I have seen the notation ##|\psi_1 + \psi_2\rangle = |\psi_1\rangle + |\psi_2\rangle##. Is this standard notation? It doesn't seem right to me, as adding kets is not the same as adding wavefunctions.
     
    Last edited: Sep 18, 2016
  10. Sep 19, 2016 #9

    vanhees71

    Science Advisor
    2016 Award

    The usual collapse assumption is that, if the system has been prepared in the state ##\hat{\rho}## before the measurement, after the measurement, it's in the state
    $$\hat{\rho}'=\frac{1}{Z} \sum_{i,j=1}^2 |v_i \rangle \langle v_i|\hat{\rho}|v_j \rangle \langle v_j|, \quad Z=\sum_{i=1}^2 \langle v_i |\hat{\rho}|v_i \rangle.$$
    Note that this holds true only for ideal filter measurements a la von Neumann, and that the collapse hypothesis is at least questionable, but I don't want to start another long discussion on the issue. Here it's just about the math ;-)!
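Numerically, the double sum above is just ##\hat{\rho}' = \hat{P}\hat{\rho}\hat{P}/Z## with ##\hat{P} = \sum_i |v_i\rangle\langle v_i|##. A small sketch, with a hypothetical eigenspace in ##\mathbb{C}^3##:

```python
import numpy as np

# Projector onto the measured (degenerate) eigenspace, spanned here by
# the first two standard basis vectors of C^3 (a hypothetical choice).
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
P = np.outer(v1, v1) + np.outer(v2, v2)

# Density operator of a pure state, rho = |psi><psi|.
psi = np.array([0.6, 0.0, 0.8])
rho = np.outer(psi, psi.conj())

# Collapse: rho' = P rho P / Z, with Z = Tr(P rho) the outcome probability.
Z = np.trace(P @ rho)
rho_post = P @ rho @ P / Z

print(np.isclose(np.trace(rho_post), 1.0))  # True: rho' is normalized
```

Dividing by ##Z## is exactly what restores unit trace, since ##\mathrm{Tr}(\hat{P}\hat{\rho}\hat{P}) = \mathrm{Tr}(\hat{P}\hat{\rho}) = Z## for a projector.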
     
  11. Sep 20, 2016 #10

    dyn


    Thanks. Does this mathematical statement agree with the statement from my previous post ?

    Also, any thoughts on ##|\psi_1 + \psi_2\rangle = |\psi_1\rangle + |\psi_2\rangle## ?
     
  12. Sep 20, 2016 #11

    Nugatory


    Staff: Mentor

    What you put inside a ket is just a label, so that expression is tautologically true; you could read it as a definition of ##|\psi_1 + \psi_2\rangle##.
     
  13. Sep 21, 2016 #12

    vanhees71

    Science Advisor
    2016 Award

    I think it's the usual notation. Also note that kets are not wave functions. A lot of confusion can be avoided if one keeps the concepts straight from the very beginning. It's analogous to ordinary vectors in classical physics. E.g., a position vector is not a set of three numbers but a directed line segment connecting the origin of your reference frame with the point in question. This we write as ##\boldsymbol{x}##. Now, if you have chosen an arbitrary (for convenience, maybe Cartesian) basis ##\boldsymbol{e}_j##, you can decompose any vector uniquely in terms of its components, ##\boldsymbol{x}=x^j \boldsymbol{e}_j##, and then it may be convenient to introduce a notation where
    $$\vec{x}=\begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix} \in \mathbb{R}^3$$
    is a column vector. Of course ##\boldsymbol{x}## and ##\vec{x}## are not the same but there's a one-to-one mapping from the vector space of "arrows" in Euclidean space to the vector space of triples of real numbers.

    The same holds for the vectors of the Hilbert space. The kets in Dirac's ingenious notation live in an abstract Hilbert space. Then you construct generalized basis vectors ##|\vec{x} \rangle##, which are "eigenvectors" of the position operator. These are not Hilbert-space vectors but distributions, fulfilling the generalized "orthonormality condition"
    $$\langle \vec{x}|\vec{x}' \rangle=\delta^{(3)}(\vec{x}-\vec{x}').$$
    However, you can show that they provide a "decomposition of the unit operator" analogously to proper complete orthonormal sets,
    $$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} |\vec{x} \rangle \langle \vec{x}|=\hat{1}.$$
    Now, given a proper state ket (normalized to 1 for convenience), you can define the wave function, which is nothing other than the formal "component" of the ket with respect to the generalized position eigenbasis,
    $$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
    The scalar product in this representation of the Hilbert space is also easily determined from the Dirac formalism by inserting the decomposition of the unit operator (very many calculations in QT consist, in practice, of a clever choice of such insertions of the unit operator ;-))
    $$\langle \psi_1|\psi_2 \rangle= \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \langle \psi_1|\vec{x} \rangle \langle \vec{x}|\psi_2 \rangle = \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \psi_1^*(\vec{x}) \psi_2(\vec{x}).$$
    This is the realization of the abstract (separable) Hilbert space by the Hilbert space of square-integrable functions ##\mathrm{L}^2(\mathbb{R}^3,\mathbb{C})##. There's a one-to-one connection between the kets in the abstract Hilbert space and its realization as ##\mathrm{L}^2(\mathbb{R}^3,\mathbb{C})##. The Hilbert spaces are equivalent but not the same!
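The scalar-product formula can be checked numerically in a discretized one-dimensional analogue, where the integral becomes a grid sum. A sketch only, using harmonic-oscillator wave functions as test functions (the continuum position "eigenvectors" themselves are of course distributions and have no such finite representation):

```python
import numpy as np

# Discretized 1-D analogue of <psi1|psi2> = ∫ dx psi1*(x) psi2(x).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Harmonic-oscillator ground and first excited states as test wave functions.
psi1 = np.exp(-x**2 / 2) / np.pi**0.25
psi2 = np.sqrt(2.0) * x * psi1

# The grid sum approximates the L^2 inner product: normalization and
# orthogonality emerge to good accuracy.
norm1 = np.sum(psi1.conj() * psi1) * dx
overlap = np.sum(psi1.conj() * psi2) * dx
print(np.isclose(norm1, 1.0), abs(overlap) < 1e-8)  # True True
```

The overlap vanishes here because the integrand is odd and the grid is symmetric about the origin, mirroring the exact orthogonality of the two states.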
     
  14. Sep 21, 2016 #13

    dyn


    The ##\hat{\rho}'## referred to above looks like an operator to me, not a state? If a measurement returns a degenerate eigenvalue, does that mean the state collapses into a superposition of the eigenvectors of that degenerate eigenvalue, a superposition which we will never be able to determine?
     
  15. Sep 21, 2016 #14

    vanhees71

    Science Advisor
    2016 Award

    A quantum state is represented by the statistical operator. For pure states you can equivalently say they are represented by a ray in Hilbert space, but that's rather inconvenient ;-). I prefer to use the statistical operator to refer to states, since it's completely general. Of course, for a pure state, if ##|\psi \rangle## is a representative of the state (ray) before the measurement, then after the measurement you update the state to the new pure state represented by
    $$|\psi' \rangle=\frac{1}{\sqrt{Z}} \sum_i |v_i \rangle \langle v_i|\psi \rangle, \quad Z=\sum_i |\langle v_i|\psi \rangle|^2.$$
    The statistical operators are projectors in this case
    $$\hat{\rho}=|\psi \rangle \langle \psi|, \quad \hat{\rho}'=|\psi' \rangle \langle \psi'|.$$
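A quick numerical sketch showing that the pure-state update and the projector form agree (hypothetical vectors in ##\mathbb{C}^3## again):

```python
import numpy as np

# Projector onto the measured degenerate eigenspace (hypothetical basis).
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
P = np.outer(v1, v1) + np.outer(v2, v2)

psi = np.array([0.6, 0.0, 0.8])

# |psi'> = P|psi> / sqrt(Z), with Z = <psi|P|psi>.
Z = psi.conj() @ P @ psi
psi_post = P @ psi / np.sqrt(Z)

# The density-operator update rho' = P rho P / Z gives the same state.
rho_post = P @ np.outer(psi, psi.conj()) @ P / Z
print(np.allclose(rho_post, np.outer(psi_post, psi_post.conj())))  # True
```

So for pure states the two descriptions (state vector vs. statistical operator) are interchangeable, as stated above.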
     