Proving Linear Independence of Eigenfunctions of a Hermitian Operator

  • Context: MHB
  • Thread starter: ognik
  • Tags: Proof

Discussion Overview

The discussion centers on the proof of the linear independence of eigenfunctions of a Hermitian operator, specifically when the eigenfunctions correspond to distinct eigenvalues. Participants explore various approaches to demonstrate this property, including the use of inner products and properties of Hermitian operators.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant proposes that if $u_1$ and $u_2$ are eigenfunctions of a Hermitian operator with distinct eigenvalues, then they are linearly independent, using a specific algebraic manipulation.
  • Another participant suggests that the initial assumption of linear independence should be treated carefully and recommends using inner products to clarify the proof.
  • Concerns are raised about the clarity and correctness of the algebraic steps taken, particularly regarding the manipulation of eigenvalues and the implications of linearity of the operator.
  • Some participants argue for showing orthogonality of eigenfunctions corresponding to distinct eigenvalues as a pathway to proving linear independence.
  • There is a discussion about the necessity of justifying the linearity of operators and the properties of scalars in the context of the proof.
  • One participant emphasizes the importance of maintaining clarity in mathematical exposition and the need for careful justification of each step in the proof.

Areas of Agreement / Disagreement

Participants express differing views on the validity of the initial proof approach and the clarity of the logic presented. No consensus emerges on the best method: both the direct algebraic manipulation and the inner-product/orthogonality route remain on the table.

Contextual Notes

Participants note that the proof relies on properties of Hermitian operators and the assumptions about eigenvalues and eigenfunctions. There are unresolved questions regarding the implications of linearity and the nature of the scalars involved in the proof.

ognik
Given $ u_1, u_2 $ are eigenfunctions of the same Hermitian operator, with distinct eigenvalues $ \lambda_1, \lambda_2 $, show $ u_1, u_2 $ are linearly independent.

If they are LI, then $ \alpha_1u_1+\alpha_2u_2=0 $ (1)
Now $ Hu_1=\lambda_1u_1, Hu_2=\lambda_2u_2, \therefore H(\alpha_1u_1+\alpha_2u_2)=0,$
$ \therefore \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_2 u_2=0$ (2)
$ (1)*\lambda_1 = \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2=0 $ (3)
$ (2) - (3): \alpha_2 u_2 (\lambda_2 - \lambda_1) = 0 $
Eigenfunctions cannot be 0, and $\lambda_1 \ne \lambda_2, \therefore \alpha_2 = 0 $

We can similarly show that, for the linear combination to equal $0$, $\alpha_1 = 0 $.
I based this on "if a linear combination of vectors $= 0$ and the only solution is all coefficients $= 0$, then the vectors are LI".
I just feel vaguely hesitant about this, as it seems like slightly circular logic?
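
For reference, the criterion being invoked here, stated formally (this is the standard definition of linear independence):

$$u_1, \dots, u_n \text{ are linearly independent} \iff \left( \alpha_1 u_1 + \cdots + \alpha_n u_n = 0 \implies \alpha_1 = \cdots = \alpha_n = 0 \right).$$

Note that the hypothesis is the equation $\alpha_1 u_1 + \cdots + \alpha_n u_n = 0$ itself, not linear independence — which is exactly the point the first reply below addresses.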
 
You should assume your equation (1). That is, you're trying to show that $\alpha_1 u_1+\alpha_2 u_2=0$ forces or implies that $\alpha_1=\alpha_2=0$. So if you assume (1) holds, and show that the alphas must vanish, you're done. I'm not sure the rest of your logic holds, though. In particular, right around (1), it seems to be a bit shaky. I would recommend using inner products, and your knowledge of how Hermitian operators behave.
 
Ackbach said:
I would recommend using inner products, and your knowledge of how Hermitian operators behave.
The only Hermitian property that seemed potentially useful was orthogonality - and the previous exercise had already asked to prove that orthogonal eigenvectors are linearly independent (multiplying my (1) by each of the vectors in turn also proves the coefficients are 0, using $u_1 \cdot u_2 = 0 $ etc.), so it seemed they wanted a different approach ...

Could you be more specific about where you think the logic is shaky, please?
 
Try showing that the eigenvectors corresponding to distinct eigenvalues are orthogonal. Then you can use your previous result.

As for shaky logic, you wrote:

Given $ u_1, u_2 $ are eigenfunctions of the same Hermitian operator, with distinct eigenvalues $ \lambda_1, \lambda_2 $, show $ u_1, u_2 $ are linearly independent.

If they are LI, then $ \alpha_1u_1+\alpha_2u_2=0 $ (1)
Now $ Hu_1=\lambda_1u_1, Hu_2=\lambda_2u_2, \therefore H(\alpha_1u_1+\alpha_2u_2)=0,$
$ \therefore \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_2 u_2=0$ (2)

So far, so good.

$ (1)*\lambda_1 = \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2=0 $ (3)

It's not at all clear what you mean here. The eigenvalue $\lambda_1$ does not equal $\alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2$. Moreover, why the eigenvalue changed subscripts in the second term is unclear as well. What property are you using to change that subscript?

In general, your mindset should be this: every single symbol on one line must survive to the next line, unless you invoke a particular property to change it. In fact, there's got to be some sort of "Conservation of Symbols" law in algebra. I just need to formulate it correctly.

$ (2) - (3): \alpha_2 u_2 (\lambda_2 - \lambda_1) = 0 $
Eigenfunctions cannot be 0, and $\lambda_1 \ne \lambda_2, \therefore \alpha_2 = 0 $

This sort of idea is what you need, but I would go with inner products to show orthogonality of eigenfunctions. Then they must be linearly independent.
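
A sketch of the inner-product route being suggested, assuming the defining Hermitian property $\langle Hu, v \rangle = \langle u, Hv \rangle$ and using the fact that eigenvalues of a Hermitian operator are real:

$$\lambda_1 \langle u_1, u_2 \rangle = \langle Hu_1, u_2 \rangle = \langle u_1, Hu_2 \rangle = \lambda_2 \langle u_1, u_2 \rangle,$$

so $(\lambda_1 - \lambda_2)\langle u_1, u_2 \rangle = 0$. Since $\lambda_1 \ne \lambda_2$, this forces $\langle u_1, u_2 \rangle = 0$, i.e. the eigenfunctions are orthogonal, and the earlier exercise (orthogonal $\implies$ LI) finishes the job.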
 
I think the idea of your proof is OK, the exposition is a bit off.

Suppose $\alpha_1u_1 + \alpha_2u_2 = 0$.

Since $H$ is LINEAR, it follows that $H(\alpha_1u_1 + \alpha_2u_2) = H(0) = 0$.

However, since $u_1,u_2$ are eigenfunctions with eigenvalues $\lambda_1,\lambda_2$ we have:

$H(\alpha_1u_1 + \alpha_2u_2) = \lambda_1\alpha_1u_1 + \lambda_2\alpha_2u_2 = 0$ ($\ast$)

(note we used the linearity of $H$ implicitly, without explanation).

Now $\lambda_1\alpha_1u_1 + \lambda_1\alpha_2u_2 = \lambda_1(\alpha_1u_1 + \alpha_2u_2) = \lambda_1 0 = 0$ ($\ast\ast$).

Subtracting ($\ast$) from ($\ast\ast$) yields:

$(\lambda_1 - \lambda_2)\alpha_2u_2 = 0$.

Since $u_2$ is an eigenfunction, $u_2$ is not the $0$-function, by definition.

Therefore $(\lambda_1 - \lambda_2)\alpha_2 = 0$ (this is a scalar). Since we are in a FIELD:

either $\lambda_1 - \lambda_2 = 0$, or $\alpha_2 = 0$.

But $\lambda_1 - \lambda_2 = 0 \implies \lambda_1 = \lambda_2$.

As these eigenvalues were assumed distinct, it must be that $\alpha_2 = 0$.

Now we have that $\alpha_1u_1 + \alpha_2u_2 = 0$ becomes:

$\alpha_1u_1 = 0$, from which it is immediate that $\alpha_1 = 0$, as well, and LI is established.

***********

It appears to me that this is what you MEANT to express, but taking care with the details is important (for CLARITY's sake).
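
As a quick numerical sanity check of the statement (not part of the proof), one can verify with NumPy that a Hermitian matrix with distinct eigenvalues has linearly independent — in fact orthonormal — eigenvectors; the matrix size and seed below are arbitrary choices:

```python
import numpy as np

# Any matrix of the form (A + A^H)/2 is Hermitian.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# eigh is specialized for Hermitian matrices; its eigenvalues are real.
eigvals, eigvecs = np.linalg.eigh(H)
print("eigenvalues (all real):", eigvals)

# Distinct eigenvalues -> the eigenvector matrix should have full rank,
# which is exactly linear independence of its columns.
print("rank of eigenvector matrix:", np.linalg.matrix_rank(eigvecs))

# Stronger: the eigenvectors are orthonormal, U^H U = I.
print("U^H U == I:", np.allclose(eigvecs.conj().T @ eigvecs, np.eye(4)))
```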
 
Ackbach said:
It's not at all clear what you mean here. The eigenvalue $\lambda_1$ does not equal $\alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2$. Moreover, why the eigenvalue changed subscripts in the second term is unclear as well. What property are you using to change that subscript?
Sorry, I multiplied my equation (1) by $\lambda_1$ precisely to get that change of subscript in the resulting equation (3); then (2) - (3) leads to one coefficient $= 0$ ...
 
Deveno said:
Since $H$ is LINEAR, it follows that $H(\alpha_1u_1 + \alpha_2u_2) = H(0) = 0$.
Aren't all operators linear? Either way, I don't get why H(some linear combo) = 0?

Later you justify $\alpha_2 (\lambda_1 - \lambda_2) $ as scalar(s) because we're in a field - do we need that field justification? The $\lambda$'s are always scalar (real) for a Hermitian operator? And the constants are also real, scalar - my book defines a linear combination of vectors as a real scalar times each vector.
 
ognik said:
Aren't all operators linear? Either way, I don't get why H(some linear combo) = 0?

Later you justify $\alpha_2 (\lambda_1 - \lambda_2) $ as scalar(s) because we're in a field - do we need that field justification? The $\lambda$'s are always scalar (real) for a Hermitian operator? And the constants are also real, scalar - my book defines a linear combination of vectors as a real scalar times each vector.

For a linear operator $L$, we have:

$L(u+v) = L(u) + L(v)$ for any vectors (functions), $u,v$.

In particular, $L(0) = L(0) + L(0)$, so subtracting $L(0)$ from both sides gives $0 = L(0)$.

Since our original linear combination is assumed to be $0$, it follows that $H$ of the linear combination is likewise $0$.

The property that $ab = 0 \implies a = 0$ or $b = 0$ actually holds for a wider class of algebraic structures, the *integral domains*, and every field is an integral domain. There ARE structures for which this is NOT true (in $\mathbb{Z}_6$, for instance, $2 \cdot 3 = 0$ although neither factor is $0$), and it is possible to have "vector-space-like" structures (called MODULES) whose "scalars" form a non-integral-domain.

Vector spaces occur "over" an underlying field of scalars. It is not a "given" that this field is the field of real numbers. While eigenvalues of a Hermitian operator are always real, there is no reason to so restrict linear combinations a priori, and in quantum mechanics, for example, the wave-functions are often complex-valued, with only their MAGNITUDES being real.
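
For completeness, the standard one-line check that eigenvalues of a Hermitian operator are real (taking the inner product linear in its first slot, and $Hu = \lambda u$ with $u \ne 0$):

$$\lambda \langle u, u \rangle = \langle Hu, u \rangle = \langle u, Hu \rangle = \overline{\lambda} \langle u, u \rangle,$$

and since $\langle u, u \rangle \ne 0$, it follows that $\lambda = \overline{\lambda}$, i.e. $\lambda \in \mathbb{R}$.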
 
