# Eigenvalues of the square of an operator

1. Mar 31, 2005

### StatusX

If L^2 |f> = k^2 |f>, where L is a linear operator, |f> is a function, and k is a scalar, does that mean that L|f> = +/- k |f>? How would you prove this?

2. Apr 1, 2005

### matt grime

No. Consider the operator on some 2-dimensional (sub)space given by

$$\left( \begin{array}{cc} 0 &1\\0&1 \end{array} \right)$$

then L^2=0 for all vectors in the space, but not all of them are eigenvectors with eigenvalue 0.

3. Apr 1, 2005

### StatusX

I don't see how L^2 is 0 there. Isn't L^2 just L? But in any case, what are the necessary conditions for it to be true? For example, I am working with hermitian operators. Is it true for them?

4. Apr 1, 2005

### matt grime

Sorry, typo, the bottom right 1 should be a zero.
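
With the corrected matrix, a quick numerical sanity check (a sketch in Python, using plain lists rather than any linear-algebra library) confirms the point: L^2 is the zero operator, yet e2 = (0, 1) is not an eigenvector of L.

```python
# Corrected matrix from the post: L = [[0, 1], [0, 0]] (nilpotent).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

L = [[0, 1], [0, 0]]

L2 = matmul(L, L)
print(L2)             # [[0, 0], [0, 0]] -- L^2 is the zero operator

e2 = [0, 1]
print(matvec(L, e2))  # [1, 0] -- not a scalar multiple of e2
```

So L^2 v = 0 = 0^2 v for every v, yet e2 is not an eigenvector of L with eigenvalue 0.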

5. Apr 1, 2005

### matt grime

As to the general case, if L is hermitian, then it is diagonalizable, so you're into the case of commuting diagonalizable operators.

However, even then the answer is still no.

Let f and g be two eigenvectors with eigenvalues 1 and -1 respectively. Then L^2(f+g)=f+g, yet L(f+g)=f-g, which isn't equal to f+g or -(f+g).
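
This counterexample is easy to check numerically. A sketch in Python, realising the operator as L = diag(1, -1) acting on the standard basis, with f = (1, 0) and g = (0, 1) as in the post:

```python
# L = diag(1, -1): f = (1, 0) has eigenvalue 1, g = (0, 1) has eigenvalue -1.
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

L = [[1, 0], [0, -1]]
s = [1, 1]              # s = f + g

Ls = matvec(L, s)
LLs = matvec(L, Ls)

print(LLs)  # [1, 1]  -- L^2 s = s, so s is an eigenvector of L^2 with eigenvalue 1
print(Ls)   # [1, -1] -- but L s = f - g, not a scalar multiple of s
```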

6. Apr 1, 2005

### StatusX

OK, thanks for your help so far. It's fine if you don't want to continue; I'll post this anyway in case someone else has any ideas. I'm trying to find the necessary conditions for this to be true, and I've come up with the following:

$$\hat L^2 f = \lambda^2 f$$

$$(\hat L - \lambda)(\hat L + \lambda) f = 0$$

and

$$(\hat L + \lambda)(\hat L - \lambda) f = 0$$

Take the first one first. I assumed $(\hat L + \lambda) f$ vanished, but it could just as easily be that it is merely in the kernel of $(\hat L - \lambda)$. But that would mean $\lambda$ is still an eigenvalue, just of a different eigenfunction, $(\hat L + \lambda) f$. Call this g. So either f is an eigenfunction of L with eigenvalue $-\lambda$, or g is one with eigenvalue $\lambda$, or both.

Taking the second one gives the opposite results, that either f is an eigenfunction of L with eigenvalue $\lambda$ or h is with eigenvalue $-\lambda$, or both, with $h=(\hat L - \lambda) f$.

Assume $\hat L f = \lambda f$. Unless $\lambda = 0$, this means f is not an eigenfunction for $-\lambda$, which means $\hat L g = \lambda g$ (indeed $g = 2\lambda f$), while $h = (\hat L - \lambda) f = 0$, which is trivial. The same goes for $-\lambda$. If f isn't an eigenfunction of $\hat L$ at all, then g and h must both be.

So the conclusion is that if f is an eigenfunction of $\hat L^2$, then you can always construct two eigenfunctions for $\hat L$ with eigenvalue $\lambda$ (for (f,g)), $-\lambda$ (for (f,h)), or both (for (g,h)).

Do you know if there is any way to get more specific than this, to determine what decides whether the eigenfunctions will be (f,g), (f,h), or (g,h)? (eg, in your example, it is (g,h))

By the way, if $\lambda$ is 0, nothing new can be said. Then $g = h = \hat L f$, and all we get is that either $\hat L f = 0$ or $\hat L f$ is in the kernel of $\hat L$, i.e. $\hat L^2 f = 0$, which we already knew.
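
The construction above is easy to verify numerically. A sketch in Python, reusing matt grime's diagonal example L = diag(1, -1) with $\lambda = 1$ and the non-eigenvector f = (1, 1), so this is the "(g, h)" case:

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def add(u, v):   return [u[i] + v[i] for i in range(2)]
def scale(c, v): return [c * v[i] for i in range(2)]

L = [[1, 0], [0, -1]]
lam = 1
f = [1, 1]   # satisfies L^2 f = lam^2 f but is not an eigenvector of L

g = add(matvec(L, f), scale(lam, f))    # g = (L + lam) f
h = add(matvec(L, f), scale(-lam, f))   # h = (L - lam) f

print(g, matvec(L, g))   # [2, 0] [2, 0]   -- L g = +lam g
print(h, matvec(L, h))   # [0, -2] [0, 2]  -- L h = -lam h

# Reconstructing f from the pair, as in the later posts:
print(scale(1 / (2 * lam), add(g, scale(-1, h))))   # [1.0, 1.0] -- f = (g - h)/(2 lam)
```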

Last edited: Apr 1, 2005
7. Apr 3, 2005

### Hurkyl

Staff Emeritus
Can't you conclude g always satisfies the equation Lg = λg? (And similarly for h.)

Anyways, have you considered reconstructing f from g and h? That gives you something interesting...

Last edited: Apr 3, 2005
8. Apr 6, 2005

### StatusX

Thanks for the reply. Yes, g and h are always eigenfunctions, but when f is an eigenfunction, one of them will be trivial. As for your hint, I tried it but didn't get anything new:

$$\hat L f = \frac{1}{2}(g+h)$$

$$\hat L^2 f = \frac{\lambda}{2}(g-h) = \lambda^2 f$$

so:

$$f = \frac{1}{2\lambda} (g - h)$$

Is this what you had in mind? Then:

$$\hat L f = \frac{1}{2} (g + h)$$

is a multiple of f iff g or h is 0 (or if they are linearly dependent? I don't know if that is possible). But I already knew this. Is there something else you can get from this result? I'm getting the feeling the only way to know is to check one of f, g, or h to see whether it is an eigenfunction or 0, from which the rest should follow. I'm not sure exactly what I was looking for, maybe some condition on the operator itself that would make this determination less blunt.

Last edited: Apr 6, 2005
9. Apr 7, 2005

### Hurkyl

Staff Emeritus
Maybe I just made a typo, but I'm not seeing it... this is my work:

g := (L + λI) f
h := (L - λI) f

Lg = λg
Lh = -λh

2λLf = L(2λf) = L(g-h) = λ(g+h) = 2Lf

10. Apr 7, 2005

### StatusX

g+h = 2Lf, so λ(g+h) =2λLf, which is what you started with.

11. Apr 7, 2005

### Hurkyl

Staff Emeritus
Ah, now that looks like a mistake! Bleh, I looked at it a zillion times and never saw it.

12. Apr 7, 2005

### Hurkyl

Staff Emeritus
Hrm.

When L is diagonalizable, one can make progress. I make this claim:

If L is a diagonalizable matrix, then $L^2 f = \lambda^2 f \implies Lf = \lambda f \vee Lf = -\lambda f$ can fail if and only if both λ and -λ are eigenvalues of L.

Doesn't seem like a very impressive statement, though.
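
Still, the claim can be illustrated numerically. A sketch in Python (the diagonal matrices below are assumed examples, not from the thread): when the spectrum contains λ but not -λ, the implication holds for the corresponding eigenvector of L^2; when both ±λ are present, mixing the two eigenvectors breaks it.

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def is_multiple(u, v):
    # True iff u is a scalar multiple of v (2-d cross product vanishes)
    return u[0] * v[1] - u[1] * v[0] == 0

# Case 1: spectrum {2, 3} -- contains lam = 2 but not -2.
L1 = [[2, 0], [0, 3]]
f = [1, 0]                              # L1^2 f = 4 f
print(is_multiple(matvec(L1, f), f))    # True: L1 f = 2 f

# Case 2: spectrum {2, -2} -- both lam and -lam present.
L2 = [[2, 0], [0, -2]]
f = [1, 1]                              # L2^2 f = 4 f, since L2^2 = 4 I
print(is_multiple(matvec(L2, f), f))    # False: L2 f = (2, -2)
```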

Last edited: Apr 7, 2005
13. Apr 8, 2005

### Galileo

I haven't checked it at all, but I have a hunch it may be true if the operator is positive definite.
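
The hunch is consistent with Hurkyl's criterion: a symmetric positive-definite L has strictly positive eigenvalues, so λ and -λ can never both be in the spectrum, and the implication should hold. A quick sketch in Python with an assumed example L = [[2, 1], [1, 2]] (eigenvalues 3 and 1):

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

L = [[2, 1], [1, 2]]   # symmetric positive definite; eigenvalues 3 and 1

f = [1, 1]             # eigenvector of L^2 = [[5, 4], [4, 5]] with eigenvalue 9 = 3^2
LLf = matvec(L, matvec(L, f))
print(LLf)             # [9, 9] -- L^2 f = 9 f
print(matvec(L, f))    # [3, 3] -- L f = +3 f: the positive root, as the hunch predicts
```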