Eigenfunctions problem - i have the answer, explanation required

AI Thread Summary
The discussion focuses on clarifying the use of subscripts in eigenfunctions and the implications of the delta function in mathematical expressions. The subscript m is introduced to differentiate between two sets of eigenvalues associated with different eigenfunctions, while the delta function simplifies the summation by effectively eliminating terms when m does not equal n. Orthogonality is defined in terms of inner products, where two functions are orthogonal if their inner product equals zero. The conversation also highlights that the notation is designed for generality, allowing for clear differentiation between eigenfunctions. Overall, the use of m and n helps maintain clarity in the mathematical framework of eigenfunctions and their properties.
Brewer

Homework Statement


(attached image: question.jpg)



Now in a revision lecture given a few weeks ago, the lecturer gave this as the answer.



The Attempt at a Solution


(attached image: work2.jpg)


Now I think generally I'm fine with it (apart from the fact that it doesn't seem very obvious that this is what you should do with the maths!).

BUT
1) Where do the subscript m's come from? Why aren't they still n's, like the ones used for the conjugate of \psi?

2) How come the delta function suddenly gets rid of the sum over m, and itself, whilst simultaneously changing the a_m to a_n?

And what's a good definition of orthogonality? A simple layman's definition!
 
Brewer said:

BUT
1) Where do the subscript m's come from? Why aren't they still n's, like the ones used for the conjugate of \psi?
There are two different sums, one with coefficients a_n and one with a_m. You are multiplying the two sums and combining them into one, so each sum needs its own index. For example, (a_1+ a_2+ a_3)(b_1+ b_2+ b_3)= a_1b_1+ a_1b_2+ a_1b_3+ a_2b_1+ a_2b_2+ a_2b_3+ a_3b_1+ a_3b_2+ a_3b_3, where I have used "b" for the second sum for clarity. Notice that multiplying two sums of 3 terms each gives a 9-term sum as the product. In general, multiplying two sums of N terms gives a sum of N^2 terms.
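To see why the product of two sums needs two independent indices, here is a minimal numerical sketch (my own illustration, not from the lecture, with made-up values):

```python
# Multiplying two finite sums term by term:
# (a_1 + a_2 + a_3)(b_1 + b_2 + b_3) expands into a double sum of 3^2 = 9 terms.
a = [2.0, 3.0, 5.0]
b = [7.0, 11.0, 13.0]

product_of_sums = sum(a) * sum(b)

# Expand as a double sum over two INDEPENDENT indices n and m.
double_sum = sum(a[n] * b[m] for n in range(3) for m in range(3))

print(product_of_sums == double_sum)  # True: the 9 cross terms give the product
```

The point is that reusing a single index n for both factors would only pick out the "diagonal" terms a_nb_n and lose the cross terms.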

2) How come the delta function suddenly gets rid of the sum over m, and itself, whilst simultaneously changing the a_m to a_n?
If \int u^{*}_{m} u_{n} dx = \delta_{mn}, where \delta_{mn} is defined as 1 if m = n and 0 otherwise, then the above sum becomes 1+ 0+ 0+ 0+ 1+ 0+ 0+ 0+ 1 and the 9 terms reduce to three: 1+ 1+ 1 (of course, each "1" is multiplied by some coefficient term).
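The collapsing of the double sum can be checked directly. This sketch (my own, with made-up coefficients) shows the Kronecker delta killing every m ≠ n term, leaving a single sum over n:

```python
# The Kronecker delta removes every term of the double sum with m != n,
# so sum_n sum_m a_n* a_m delta_{mn} collapses to sum_n |a_n|^2.
def delta(m, n):
    return 1 if m == n else 0

a = [0.5, 0.5j, -0.5, 0.5j]  # example expansion coefficients (hypothetical)

double_sum = sum(a[n].conjugate() * a[m] * delta(m, n)
                 for n in range(4) for m in range(4))
single_sum = sum(abs(a[n]) ** 2 for n in range(4))

print(abs(double_sum - single_sum) < 1e-12)  # True: the double sum collapses
```

Dropping the sum over m and renaming a_m to a_n in the surviving terms is exactly this collapse written symbolically.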

And what's a good definition of orthogonality? A simple laymans definition!
The basic definition is that two vectors are orthogonal if and only if they are perpendicular. Of course, if your "vectors" are abstract functions then you have to think about what you mean by "orthogonal". We say that two vectors are orthogonal if and only if some inner product is 0. For functions this is \int f^{*}(x)g(x)dx (the conjugate matters for complex-valued functions), with the integral taken over some interval.

A set of vectors is "orthonormal" if each has length 1 (the "normal" part) and any two different vectors are perpendicular (the "ortho" part). That is the same as saying that the inner product of a vector in the set with itself is 1 and with any other vector in the set is 0: the Kronecker delta \delta_{mn}.
 
So:

a_m is written as such just because it's used for \psi rather than \psi^{*}, just as a way to differentiate between the two sets of eigenvalues that arise from using the conjugate?

If that is the case, why couldn't a^{*}_{n} and a_{n} be used from the very start?

And the delta function disappears because it is set to 1, and therefore a_n = a_m, so one can be written as the other? And as a result there is no need for the sum over n?

Thanks for your help.
 
Brewer said:
So:

a_m is written as such just because it's used for \psi rather than \psi^{*}, just as a way to differentiate between the two sets of eigenvalues that arise from using the conjugate?

If that is the case, why couldn't a^{*}_{n} and a_{n} be used from the very start?

And the delta function disappears because it is set to 1, and therefore a_n = a_m, so one can be written as the other? And as a result there is no need for the sum over n?

Thanks for your help.

The reason for using the m's and n's is really to make the derivation more general. If, for instance, you have two different eigenfunctions so that you have \psi_{m} and \psi_{n}, then taking their inner product would give you:

\int \psi^{*}_{m} \psi_{n} dx

Now, because the set of all the \psi's is orthonormal, this inner product is 0 unless m=n. That is what orthonormality really means (that, and the functions are normalized; i.e. integrating their squared magnitude over all space gives you 1).

When you get to the summation over n and m, all you are doing is substituting in for what \psi^{*}_{m} and \psi_{n} are defined to be. The n and m are just two indices that you sum over. The idea here is that you sum them independently of each other, not all at once. This is why there are two. Then you use the condition that the u's are also orthonormal, pull out the summations, and use the Kronecker delta. Essentially, the Kronecker delta kills all the terms in the summation over m except for one, when m=n. So you just drop the summation over m and change all the m subscripts to n's.
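Putting the whole argument together numerically (again my own sketch, using the orthonormal set u_n(x) = \sqrt{2/\pi}\sin(nx) on [0, \pi] as a stand-in basis and made-up coefficients): expanding \psi in the u_n and integrating |\psi|^2 should give exactly \sum_n |a_n|^2, which is what the Kronecker-delta step delivers.

```python
import numpy as np

# Expand psi = sum_n a_n u_n in the orthonormal set u_n(x) = sqrt(2/pi) sin(nx)
# on [0, pi], then check that int |psi|^2 dx = sum_n |a_n|^2,
# exactly as the double sum + Kronecker delta predicts.
x = np.linspace(0.0, np.pi, 200001)
dx = x[1] - x[0]

def u(n):
    return np.sqrt(2.0 / np.pi) * np.sin(n * x)

a = np.array([0.6, 0.0, 0.8])          # example coefficients; |a|^2 sums to 1
psi = sum(a[n] * u(n + 1) for n in range(3))

norm = np.sum(np.abs(psi) ** 2) * dx   # left-hand side of the derivation
print(abs(norm - np.sum(np.abs(a) ** 2)) < 1e-6)  # True: equals sum |a_n|^2
```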
 