Determining Eigenvalues and Eigenvectors in a Coupled 2-Particle System

In summary, the thread discusses a 2-particle system with angular momentum operators and the Hamiltonian ##H=\mu\vec{B}\cdot(\vec{L}_1+\vec{L}_2)+\gamma\vec{L}_1\cdot\vec{L}_2##. The goal is to determine the eigenvalues and eigenvectors of ##H## for spins ##l_1=l_2=1## using Clebsch-Gordan coefficients. The conversation covers how the operators act on the state vectors and the difficulties in finding the eigenvalues and eigenvectors explicitly.
  • #1
Markus Kahn

Homework Statement


Consider a 2-particle system where the two particles have angular momentum operators ##\vec{L}_1## and ##\vec{L}_2## respectively. The Hamiltonian is given by
$$H = \mu\vec{B}\cdot (\vec{L}_1+\vec{L}_2)+\gamma \vec{L}_1\cdot \vec{L}_2.$$
Determine explicitly the eigenvalues and eigenvectors of ##H## when the spins are ##l_1=l_2=1##.

Homework Equations


All relevant Clebsch-Gordan coefficients are given in the form ##\langle 1,1, m_1,m_2 \vert 1,1, j,m \rangle \equiv \langle m_1,m_2 \vert j,m\rangle##.

The Attempt at a Solution


We know that any state vector in the Hilbert-space is of the form ##\vert 1,1, m_1,m_2 \rangle = \vert1,m_1\rangle \otimes \vert 1,m_2\rangle##. Since the Hilbert space is a tensor product we can rewrite it as a direct sum of irreducible representations according to the Clebsch-Gordan-series and find
$$\vert j,m \rangle \equiv\vert 1,1, j,m \rangle = \sum_{\substack{m_1,m_2\\ m=m_1+m_2}} \langle 1,1,m_1,m_2\vert 1,1, j,m\rangle \vert 1,1, m_1,m_2\rangle\equiv \sum_{\substack{m_1,m_2\\ m=m_1+m_2}} \langle m_1,m_2\vert j,m\rangle \vert m_1,m_2\rangle .$$
Now I tried to figure out how ##H## acts on ##\vert m_1, m_2\rangle##. My first problem is that I don't know how ##\vec{L}_1, \vec{L}_2## act on the state vector. To solve this I would need to assume that ##\vec{B}=B_z \vec{e}_z## and therefore get
$$\vec{B}(\vec{L}_1+\vec{L}_2)\vert m_1,m_2\rangle = B_z (L_1^z+L_2^z)\vert m_1,m_2\rangle = B_z(m_1+m_2)\vert m_1,m_2\rangle .$$
Is this a reasonable assumption or can one solve this without any assumptions for the magnetic field?

Now we still have the second part of the Hamiltonian left. My idea here was: ##L^2 \equiv (\vec{L}_1+\vec{L}_2)^2 = L_1^2+L_2^2+2\vec{L}_1\cdot\vec{L}_2## and therefore ##\vec{L}_1\cdot\vec{L}_2 = \frac{1}{2} (L^2-L_1^2-L_2^2)##. Now I only need to figure out how ##L^2## acts on ##\vert m_1,m_2\rangle##. Sadly, I can't really figure that part out...

And my last problem is: even if I figured out how ##L^2## acts on the state, I have a hard time making the final steps towards the original goal of finding all eigenvalues and eigenvectors explicitly.

EDIT: I think I'm able to answer my first question:
Is this a reasonable assumption or can one solve this without any assumptions for the magnetic field?
Since we can choose the coordinate system in which we work, we can always choose one in which the magnetic field points in the ##z##-direction. The only assumption would then be that the magnetic field is constant...
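Not part of the original post, but the identity ##\vec{L}_1\cdot\vec{L}_2 = \frac{1}{2}(L^2-L_1^2-L_2^2)## is easy to sanity-check numerically with explicit spin-1 matrices. A Python sketch (##\hbar=1##; all names are my own):

```python
import numpy as np

s2 = np.sqrt(2.0)
# Spin-1 operators in the basis (m = +1, 0, -1), units of hbar
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]], dtype=complex)  # raising
Lm = Lp.conj().T                                                   # lowering
Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / 2j
I3 = np.eye(3)

# Embed one-particle operators in the 9-dim product space
one = lambda A: np.kron(A, I3)   # acts on particle 1
two = lambda A: np.kron(I3, A)   # acts on particle 2

L1dotL2 = sum(one(A) @ two(A) for A in (Lx, Ly, Lz))

# Total (L1 + L2)^2, built component by component
Ltot = [one(A) + two(A) for A in (Lx, Ly, Lz)]
L2tot = sum(A @ A for A in Ltot)
L1sq = one(Lx @ Lx + Ly @ Ly + Lz @ Lz)   # = l(l+1) = 2 times identity
L2sq = two(Lx @ Lx + Ly @ Ly + Lz @ Lz)

# The identity L1.L2 = (L^2 - L1^2 - L2^2)/2 holds as a matrix equation
assert np.allclose(L1dotL2, (L2tot - L1sq - L2sq) / 2)
```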
 
  • #2
Before the magnetic field is turned on, the Hamiltonian has only the ##\gamma \vec L_1 \cdot \vec L_2## term which is independent of your choice of coordinate axes. When the magnetic field is turned on, you might as well choose the axis of quantization in the field direction to make life simpler. There is no loss of generality when you do this.
Markus Kahn said:
Now I only need to figure out how ##L^2## acts on |m1,m2⟩
I am not sure what you mean by this.
##L^2## acts on ##\vert 1~1~L~M \rangle## and ##L^2\vert 1~1~L~M \rangle=L(L+1)\vert 1~1~L~M \rangle##. Here ##M=m_1+m_2##. I assume you will write down the matrix in the coupled configuration using the ##\vert 1~1~L~M \rangle## states. (How many are there?)
 
  • #3
kuruman said:
I assume you will write down the matrix in the coupled configuration using the ##\vert 1~1~L~M \rangle## states. (How many are there?)
I don't understand what exactly you mean by "the matrix in the coupled configuration"... How many states? Well, since ##l_1=l_2=1## we know that ##-1\le m_1,m_2\le 1##, so there are three possible values each for ##m_1## and ##m_2##. Since there are nine possible combinations giving ##m=m_1+m_2##, we should have nine states. My idea was, after figuring out how ##L^2## acts on ##\vert m_1,m_2\rangle##, to apply the Hamiltonian to every single possible state ##\vert m_1, m_2\rangle## and see what value and state come out.

kuruman said:
##L^2## acts on ##\vert 1~1~L~M \rangle## and ##L^2\vert 1~1~L~M \rangle=L(L+1)\vert 1~1~L~M \rangle##. Here ##M=m_1+m_2##.
I'd like to see what happens when ##H##, and therefore ##L^2##, acts on ##\vert m_1, m_2\rangle## and not on ##\vert j,m\rangle(= \vert 1,1,J,M\rangle \text{ right?})##. To maybe illustrate what my idea was:
$$\begin{align*}H\vert j,m\rangle &=
\sum_{\substack{m_1,m_2\\ m=m_1+m_2}} \langle m_1,m_2\vert j,m\rangle H\vert m_1,m_2\rangle \\
&= \sum_{\substack{m_1,m_2\\ m=m_1+m_2}} \langle m_1,m_2\vert j,m\rangle \left(\mu B_z (L_1^z+L_2^z) + \frac{\gamma}{2}(L^2-L_1^2-L_2^2)\right)\vert m_1,m_2\rangle\\
&= \sum_{\substack{m_1,m_2\\ m=m_1+m_2}} \langle m_1,m_2\vert j,m\rangle \left(\mu B_z (m_1+m_2) + \frac{\gamma}{2}(L^2-1(1+1)-1(1+1))\right)\vert m_1,m_2\rangle \\
&= \sum_{\substack{m_1,m_2\\ m=m_1+m_2}} \langle m_1,m_2\vert j,m\rangle \left(\mu B_z (m_1+m_2) - \frac{\gamma}{2}4 + \frac{\gamma}{2}L^2\right)\vert m_1,m_2\rangle
\end{align*}$$
If I now knew how ##L^2## acts on ##\vert m_1,m_2\rangle## I could determine the eigenvalue of ##\vert j,m\rangle##.
 
  • #4
Markus Kahn said:
I don't understand what you exactly mean by "the matrix in the coupled configuration"...
The coupled configuration uses basis states ##\vert 1~1~L~M \rangle## where ##M## takes values from ##-L## to ##+L##. ##L## itself takes values ##0##, ##1## and ##2##, the outcomes of adding angular momenta ##1 \oplus 1##. The total number of states is ##n=0\times(0+1)+1\times(1+1)+2\times(2+1)=1+3+5=9##. The Hamiltonian in this representation is also a ##9\times 9## matrix. Another choice is to write the Hamiltonian in the ##\vert 1~m_1\rangle\vert 1~m_2\rangle## representation (##\vert m_1~m_2\rangle## is ambiguous) which is also a ##9\times 9## matrix. The two matrices are related by a unitary transformation whose matrix elements are the Clebsch-Gordan coefficients. Obviously, you get the same eigenvalues when you diagonalize either one.

By writing ##H \vert j~m \rangle##, it seems that you are trying to find the matrix elements in the coupled ##\vert L_1~L_2~L~M \rangle## configuration. To do that, you don't need to go through all that business of summations and Clebsch-Gordan coefficients. All you need to find are ##L_z\vert L_1~L_2~L~M \rangle=?## and ##\gamma \vec L_1 \cdot \vec L_2 \vert L_1~L_2~L~M \rangle=?## The first is easy. For the second, you already know that $$\gamma \vec L_1 \cdot \vec L_2\vert L_1~L_2~L~M \rangle=\frac{1}{2}\gamma(L^2-L_1^2-L_2^2)\vert L_1~L_2~L~M \rangle.$$Furthermore, the fact that we have written the states as ##\vert L_1~L_2~L~M \rangle## signifies that ##L_1##, ##L_2##, ##L## and ##M## are all good quantum numbers. What does that mean and how can you use it to your advantage?
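A quick numerical illustration (not from the thread) of the claim that diagonalizing either representation gives the same eigenvalues: build ##\vec L_1\cdot\vec L_2## in the two-particle basis and compare its spectrum with the coupled-basis values ##\frac{1}{2}(L(L+1)-4)##. A sketch with ##\hbar=1## and ##\gamma=1## (my own choices):

```python
import numpy as np

s2 = np.sqrt(2.0)
# Spin-1 operators, basis (m = +1, 0, -1), hbar = 1
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]], dtype=complex)
Lm = Lp.conj().T
Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / 2j
I3 = np.eye(3)

# L1 . L2 in the 9-dim two-particle basis (gamma = 1)
L1dotL2 = sum(np.kron(A, I3) @ np.kron(I3, A) for A in (Lx, Ly, Lz))

# Coupled-basis prediction: (1/2)(L(L+1) - 4), each L repeated 2L+1 times
predicted = sorted(0.5 * (L * (L + 1) - 4)
                   for L in range(3) for _ in range(2 * L + 1))

assert np.allclose(sorted(np.linalg.eigvalsh(L1dotL2)), predicted)
```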
 
  • #5
First of all, thanks for the response and help!

Before I start, let me just ask some basic questions (just to make sure we talk about the same things). For the following let the ##D_k## be the irreducible representations of ##\operatorname{su}(2)## [https://en.wikipedia.org/wiki/Representation_theory_of_SU(2)], where ##k## can be either an integer or half-integer:
  1. ##\vert 1,1, L,M\rangle =\vert 1,1,j,m\rangle \in D_1\otimes D_1## ?
  2. ##\vert 1,m_1\rangle \vert 1,m_2 \rangle = \vert 1,m_1 \rangle \otimes \vert 1,m_2\rangle##?
  3. I don't know what this, ##1\oplus 1##, notation means.
  4. When you calculate the number of states, to me it seems like you are adding the dimensions of the representations... Is this correct?
  5. If I want to find the Hamiltonian in its matrix form, basic linear algebra suggests applying the operator to the basis of this space, i.e. the bases of ##D_0,D_1## and ##D_2##, and then rewriting the resulting vector in terms of the basis. Is that correct?
If everything from above is correct we have
$$\begin{align*}L_z\vert L_1, L_2, L, M\rangle &= (L_1^z+L_2^z)\vert L_1, L_2, L, M\rangle \\
&= (L_1^z\otimes 1+1\otimes L_2^z)\vert L_1, L_2, L, M\rangle \\
&= (m_1+m_2)\vert L_1,L_2,L,M\rangle \\
&= M\vert L_1,L_2,L,M\rangle
\end{align*}$$
Now for the last part, I'm still confused:
$$\begin{align*}
\gamma \vec L_1 \cdot \vec L_2\vert L_1,L_2,L,M \rangle & =\frac{1}{2}\gamma(L^2-L_1^2-L_2^2)\vert L_1,L_2,L,M \rangle \\
&= \frac{1}{2}\gamma(L(L+1)-L_1(L_1+1)-L_2(L_2+1))\vert L_1,L_2,L,M \rangle
\end{align*}$$
So in our particular case with ##L_1=L_2=1## we find ##\gamma \vec L_1 \cdot \vec L_2\vert 1,1,L,M \rangle = \frac{1}{2}\gamma(L(L+1)-4)\vert 1,1,L,M \rangle##, which combined with the previous result gives
$$H\vert 1,1,L,M\rangle = \left[\mu B_zM +\frac{1}{2}\gamma(L(L+1)-4)\right] \vert 1,1, L,M\rangle.$$
I'm still not sure if I understood you completely, since this result looks quite similar to the one I wrote down in my previous post, the only difference being that here we applied ##L^2## and got the term ##L(L+1)## (for which I btw. don't know the value...). So assuming I did the math correctly up to this point, we have shown that ##\vert 1,1,L,M\rangle## is an eigenvector with the eigenvalue given above.

kuruman said:
Furthermore, the fact that we have written the states as ##\vert L_1~L_2~L~M \rangle## signifies that ##L_1##, ##L_2##, ##L## and ##M## are all good quantum numbers. What does that mean and how can you use it to your advantage?
Honestly, this is the first time I've heard the term good quantum numbers. Google suggests that the eigenvectors associated with these numbers stay eigenvectors of ##H## as the state evolves in time. I'm not sure how this is useful for this exercise...
 
  • #6
To answer your questions
1. I don't know what you are asking here, but it probably doesn't matter.
2. Yes.
3. This is my way of showing addition of angular momenta where ##1+1## can be ##0##, ##1## or ##2##.
4. That is correct.
5. Your final equation$$H\vert 1,1,L,M\rangle = \left[\mu B_zM +\frac{1}{2}\gamma(L(L+1)-4)\right] \vert 1,1, L,M\rangle.$$ is correct. Clearly the matrix representing ##H## is diagonal, $$\langle 1,1,L',M' \vert H\vert 1,1,L,M \rangle = \left[\mu B_zM +\frac{1}{2}\gamma(L(L+1)-4)\right]\delta_{L'L}\delta_{M'M}.$$ All that's left is to generate the ##9## diagonal elements with the appropriate choices of ##L## and ##M##. To do that, you must first order the ##9## basis states in some manner.
 
  • #7
kuruman said:
All that's left is to generate the ##9## diagonal elements with the appropriate choices of ##L## and ##M##. To do that, you must first order the ##9## basis states in some manner.
I don't really understand what you mean by that, but let's just assume the following order:
$$\left(\vert 1,1,0,0\rangle, \vert 1,1,1,-1\rangle,\vert 1,1,1,0\rangle,\vert 1,1,1,1\rangle, \vert 1,1,2,-2\rangle,\vert 1,1,2,-1\rangle, \vert 1,1,2,0\rangle,\vert 1,1,2,1\rangle, \vert 1,1,2,2\rangle \right).$$
So what exactly is the next step?
 
  • #8
Markus Kahn said:
So what exactly is the next step?
Just plug in. Start with $$\langle 1,1,L,M \vert H\vert 1,1,L,M \rangle = \left[\mu B_zM +\frac{1}{2}\gamma(L(L+1)-4)\right]$$State 1 in your ordered states is ##\vert 1,1,0,0 \rangle.## Then with ##L=0## and ##M=0## for this state, $$H_{11}=\langle 1,1,0,0 \vert H\vert 1,1,0,0 \rangle = \left[\mu B_z\times 0 +\frac{1}{2}\gamma(0\times(0+1)-4)\right]=-2\gamma.$$Do you think you can figure out the remaining 8 elements? Of course you do. :smile:
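The remaining eight elements follow from the same formula; a short loop generates all nine at once (the numbers ##\mu B_z = 0.3##, ##\gamma = 1## are illustrative values of my own, not from the thread):

```python
# Diagonal elements E(L, M) = mu*B_z*M + (gamma/2)(L(L+1) - 4)
mu_Bz, gamma = 0.3, 1.0   # illustrative values only

elements = {(L, M): mu_Bz * M + 0.5 * gamma * (L * (L + 1) - 4)
            for L in range(3) for M in range(-L, L + 1)}

assert len(elements) == 9                  # one per coupled basis state
assert elements[(0, 0)] == -2.0 * gamma    # the H_11 computed above
```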
 
  • #9
Thank you very much for the help and time, worked out perfectly!
 
  • #10
kuruman said:
I am not sure what you mean by this. ##L^2## acts on ##\vert 1~1~L~M \rangle## and ##L^2\vert 1~1~L~M \rangle=L(L+1)\vert 1~1~L~M \rangle##.
I'm sorry for opening up the question one more time, but after some thinking I still have problems with this statement...

I just fail to see why this is the case. We have defined ##L^2:=(\vec{L}_1+\vec{L}_2)^2##. Now I understand that ##\vec{L}_k^2 \vert L_k, M_k\rangle = L_k(L_k+1)\vert L_k, M_k\rangle##, where ##k\in\{1,2\}##. If I now look at a state of the form ##\vert L_1,L_2,M_1,M_2\rangle## and apply ##(L_1^2+L_2^2)## onto it I find:
$$\begin{align*}(L_1^2+L_2^2)\vert L_1,L_2,M_1,M_2 \rangle &= (L_1^2\otimes 1+1\otimes L_2^2)(\vert L_1,M_1 \rangle \otimes \vert L_2,M_2 \rangle) \\
&= L_1^2\vert L_1,M_1 \rangle\otimes \vert L_2,M_2 \rangle+\vert L_1,M_1 \rangle\otimes L_2^2 \vert L_2,M_2 \rangle \\
&= \left[L_1(L_1+1)+L_2(L_2+1)\right]\vert L_1,M_1 \rangle\otimes \vert L_2,M_2 \rangle \\
&= \left[L_1(L_1+1)+L_2(L_2+1)\right]\vert L_1, L_2, M_1, M_2 \rangle, \end{align*}$$
so the ##-(L_1^2+L_2^2)## part of ##(L^2-(L_1^2+L_2^2))## is clear to me, but what exactly happens with the ##L^2## part? Why should ##L^2\vert L_1,L_2, L,M\rangle= L(L+1)\vert L_1,L_2,L,M\rangle##?
 
  • #11
Markus Kahn said:
$$(L_1^2+L_2^2)\vert L_1,L_2,M_1,M_2 \rangle = (L_1^2\otimes 1+1\otimes L_2^2)(\vert L_1,M_1 \rangle \otimes \vert L_2,M_2 \rangle)$$
This is correct. Note that you are using ##\vert L_1,M_1 \rangle \otimes \vert L_2,M_2 \rangle## which is an eigenstate of ##L_1##, ##L_2##, ##M_1## and ##M_2##. It is not an eigenstate of ##L## or ##M##. It is sometimes called a "two-particle" state and you can see why.
Markus Kahn said:
Why should ##L^2\vert L_1,L_2, L,M\rangle= L(L+1)\vert L_1,L_2,L,M\rangle##?
Because it is an eigenstate of ##L## with eigenvalue ##L(L+1)##. It is also an eigenstate of ##L_1##, ##L_2## and ##M##. It is not an eigenstate of ##M_1## or ##M_2##. It is sometimes called a "coupled" state and you can also see why.

Both the "two-particle" states and the "coupled" states form complete orthonormal sets that span the 9-dimensional Hilbert space. They are related to each other through the Clebsch-Gordan coefficients. You can write ##H## using either one. The advantage of the coupled states is that ##\gamma \vec L_1 \cdot \vec L_2## is diagonal in the coupled but not in the two-particle representation. It might be a good exercise for you to write ##\gamma \vec L_1 \cdot \vec L_2## in the two-particle representation and diagonalize it. I will get you started.
Step 1: Expand the dot product
##\vec L_1 \cdot \vec L_2=L_{1x}L_{2x}+L_{1y}L_{2y}+L_{1z}L_{2z}.##
Step 2: Recast ##L_{kx}## and ##L_{ky}## (##k\in\{1,2\}##) in terms of the corresponding ladder operators using
##L_{kx}=\frac{1}{2}(L_{k+}+L_{k-})~## and ##L_{ky}=\frac{1}{2i}(L_{k+}-L_{k-}).##
Step 3: Put it all together and calculate the matrix elements. You should get a real symmetric matrix with some non-zero off-diagonal elements.
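Steps 1 to 3 can also be carried out numerically; a minimal sketch (names and the ##\hbar=1## convention are my own, not from the thread):

```python
import numpy as np

s2 = np.sqrt(2.0)
# Spin-1 ladder and z operators, basis (m = +1, 0, -1), hbar = 1
Lp = np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]], dtype=complex)
Lm = Lp.conj().T
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Steps 1+2 combined: L1.L2 = (L1+ L2- + L1- L2+)/2 + L1z L2z
L1dotL2 = (np.kron(Lp, Lm) + np.kron(Lm, Lp)) / 2 + np.kron(Lz, Lz)

# Step 3: the matrix is real and symmetric ...
assert np.allclose(L1dotL2.imag, 0)
assert np.allclose(L1dotL2, L1dotL2.T)
# ... with some non-zero off-diagonal elements
off = L1dotL2 - np.diag(np.diag(L1dotL2))
assert np.abs(off).max() > 0
```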
 
  • #12
I followed the steps you proposed and came to the following:
  1. ##\vec{L}_1\cdot\vec{L}_2= \frac{1}{2}(L_1^+L_2^- +L_1^-L_2^+)+L_1^zL_2^z##.
  2. $$\begin{align*}\langle 1,1, m_1', m_2' \vert H \vert 1,1,m_1,m_2\rangle &= \left[\mu B_z(m_1+m_2)+\gamma m_1m_2 \right]\delta_{m_1}^{m_1'}\delta_{m_2}^{m_2'} \\ &+ \frac{\gamma}{2}\sqrt{2-m_1(m_1+1)}\sqrt{2-m_2(m_2-1)}\delta_{m_1+1}^{m_1'}\delta_{m_2-1}^{m_2'}\\ &+ \frac{\gamma}{2}\sqrt{2-m_1(m_1-1)}\sqrt{2-m_2(m_2+1)}\delta_{m_1-1}^{m_1'}\delta_{m_2+1}^{m_2'}\end{align*}$$
  3. To continue from here on you need a basis of ##D_1\otimes D_1##, which I chose in the following order: $$(\vert 0,0\rangle, \vert -1,0\rangle, \vert 1,0\rangle, \vert -1,-1\rangle, \vert 0,-1\rangle, \vert 1,-1\rangle, \vert -1,1\rangle,\vert 0,1\rangle,\vert 1,1\rangle).$$
  4. Now one can write down the matrix for the Hamiltonian: $$H=\begin{pmatrix}0&0&0&0&0&\gamma&\gamma&0&0\\ 0& -\mu B_z &0&0&\gamma&0&0&0&0 \\ 0&0& \mu B_z &0&0&0&0&\gamma&0 \\ 0&0&0& -2\mu B_z+\gamma &0&0&0&0&0\\ 0&\gamma&0&0&-\mu B_z &0&0&0&0 \\ \gamma&0&0&0&0& -\gamma &0&0&0\\ \gamma&0&0&0&0&0&-\gamma &0&0 \\ 0&0&\gamma&0&0&0&0&\mu B_z&0 \\ 0&0&0&0&0&0&0&0& 2\mu B_z+\gamma \end{pmatrix},$$ which is real and symmetric matrix with some non-zero off-diagonal elements.
Note: I just realized that you said to only write down the matrix for ##\vec{L}_1\cdot\vec{L}_2## and not for the whole Hamiltonian... If I didn't make a mistake, one could just remove all the ##\mu B_z## parts and would then find the matrix for ##\gamma\vec{L}_1\cdot\vec{L}_2##.
 
  • #13
Markus Kahn said:
that you said to only write down the matrix for ##\vec{L}_1\cdot\vec{L}_2## and not for the whole Hamiltonian... If I didn't make a mistake, one could just remove all the ##\mu B_z## parts and would then find the matrix for ##\gamma\vec{L}_1\cdot\vec{L}_2.##
The whole Hamiltonian is OK. Let me look into your matrix in detail, check your off-diagonal elements, and I'll get back to you. Meanwhile, think about how you are going to diagonalize this.
 
  • #14
Your ##9\times 9## Hamiltonian looks good. How about diagonalizing it? Hint: Can you reorder your basis states so that the Hamiltonian is brought into block diagonal form? Then it will be much easier to diagonalize the submatrices along the diagonal.
 
  • #15
kuruman said:
Hint: Can you reorder your basis states so that the Hamiltonian is brought into block diagonal form? Then it will be much easier to diagonalize the submatrices along the diagonal.
I wasn't able to do that (i.e. I didn't know how to reorder the basis to achieve it), but using the brute-force approach I came to the following diagonal form:
$$D=\begin{pmatrix}
\gamma + 2 B_z \mu&&&&&&&&\\
&\gamma -2 B_z \mu&&&&&&&\\
&&-\gamma - B_z \mu&&&&&&\\
&&&-\gamma + B_z \mu&&&&&\\
&&&&-\gamma&&&&\\
&&&&&\gamma + B_z \mu&&&\\
&&&&&&\gamma - B_z \mu&&\\
&&&&&&&-2\gamma&\\
&&&&&&&&\gamma\\ \end{pmatrix},$$
with a set of eigenvectors:
$$\left\{
\begin{pmatrix}-1\\0\\0\\0\\0\\1\\1\\0\\0\end{pmatrix},
\begin{pmatrix}0\\0\\0\\0\\0\\-1\\1\\0\\0\end{pmatrix},
\begin{pmatrix}2\\0\\0\\0\\0\\1\\1\\0\\0\end{pmatrix},
\begin{pmatrix}0\\0\\0\\1\\0\\0\\0\\0\\0\end{pmatrix},
\begin{pmatrix}0\\-1\\0\\0\\1\\0\\0\\0\\0\end{pmatrix},
\begin{pmatrix}0\\1\\0\\0\\1\\0\\0\\0\\0\end{pmatrix},
\begin{pmatrix}0\\0\\-1\\0\\0\\0\\0\\1\\0\end{pmatrix},
\begin{pmatrix}0\\0\\1\\0\\0\\0\\0\\1\\0\end{pmatrix},
\begin{pmatrix}0\\0\\0\\0\\0\\0\\0\\0\\1\end{pmatrix}.
\right\}$$
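The brute-force result is easy to cross-check with a numerical eigensolver (sample values ##\mu B_z = 0.3##, ##\gamma = 1## of my own; not part of the original post):

```python
import numpy as np

mu_Bz, gamma = 0.3, 1.0   # illustrative values only

s2 = np.sqrt(2.0)
# Spin-1 operators, basis (m = +1, 0, -1), hbar = 1
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]], dtype=complex)
Lm = Lp.conj().T
I3 = np.eye(3)

L1dotL2 = (np.kron(Lp, Lm) + np.kron(Lm, Lp)) / 2 + np.kron(Lz, Lz)
H = mu_Bz * (np.kron(Lz, I3) + np.kron(I3, Lz)) + gamma * L1dotL2

# The nine eigenvalues listed in the diagonal form D above
expected = sorted([gamma + 2 * mu_Bz, gamma - 2 * mu_Bz,
                   -gamma - mu_Bz, -gamma + mu_Bz, -gamma,
                   gamma + mu_Bz, gamma - mu_Bz, -2 * gamma, gamma])

assert np.allclose(sorted(np.linalg.eigvalsh(H)), expected)
```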
 
  • #16
Your brute force approach yielded the correct eigenvalues. Re-ordering the matrix is easy. Just put next to each other states that are connected with off-diagonal elements. For example, if you look at the first row (or column), you see that state ##\vert 0,0 \rangle## has off-diagonal elements with states ##\vert 1,-1 \rangle## and ##\vert -1,1 \rangle##. Then the new order would be ##\vert 0,0 \rangle##, ##\vert 1,-1 \rangle##, ##\vert -1,1 \rangle## ... This results in a ##3 \times 3## matrix along the block diagonal $$\begin{pmatrix}0 & \gamma & \gamma \\\gamma & -\gamma & 0 \\\gamma & 0 & -\gamma\end{pmatrix}$$which is much easier to diagonalize. States that are not connected to other states by off-diagonal elements are already diagonal ##1 \times 1## matrices and can be placed together at the end of the reordered sequence. Perhaps you may wish to practice and finish the reordering on your own.
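For the record, this ##3\times 3## block (whose corner element ##\langle 0,0\vert \gamma\vec L_1\cdot\vec L_2\vert 0,0\rangle## is ##0##) can be diagonalized directly; a sketch with an illustrative ##\gamma=1##:

```python
import numpy as np

gamma = 1.0   # illustrative value

# Block coupling |0,0>, |1,-1>, |-1,1> in gamma * L1.L2
block = gamma * np.array([[0.0, 1.0, 1.0],
                          [1.0, -1.0, 0.0],
                          [1.0, 0.0, -1.0]])

vals = np.sort(np.linalg.eigvalsh(block))
# One level at -2*gamma (the singlet) and two at -gamma, +gamma
assert np.allclose(vals, [-2 * gamma, -gamma, gamma])
```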

If you look at your diagonalized matrix, you will note that in zero field you just have the eigenvalues of ##\gamma \vec L_1 \cdot \vec L_2##. There is one at ##-2\gamma## (singlet), three at ##-\gamma## (triplet) and five at ##+\gamma## (quintet). If you do it the other way and use the coupled states in zero field, the energies (as you have already determined) are given by $$E_{L_1,L_2,L,M}=\frac{1}{2}\gamma[L(L+1)-L_1(L_1+1)-L_2(L_2+1)]$$Here, ##L_1=L_2=1##.
For total angular momentum ##L=0##, there is only one value for ##M## and one energy, ##E_{1,1,0,0}=-2\gamma.## That's the singlet.
For total angular momentum ##L=1##, there are three values for ##M## and three energies, ##E_{1,1,1,1}=E_{1,1,1,0}=E_{1,1,1,-1}=-\gamma.## That's the triplet.
For total angular momentum ##L=2##, there are five values for ##M## and five energies, ##E_{1,1,2,2}=E_{1,1,2,1}=E_{1,1,2,0}=E_{1,1,2,-1}=E_{1,1,2,-2}=+\gamma.## That's the quintet.

See how it works? While you have to brute force the ##9 \times 9## matrix in the two-particle configuration to get the eigenvalues, you can just write them down in the coupled configuration.
 
  • #17
kuruman said:
See how it works? While you have to brute force the ##9 \times 9## matrix in the two-particle configuration to get the eigenvalues, you can just write them down in the coupled configuration.
Once again, thank you very much for all the explanations up to this point! I understand now why it doesn't matter which basis we use to determine the eigenvalues, but I'm still struggling to understand why ##L^2 = (\vec L_1+\vec L_2)^2## has the eigenvector ##\vert L_1,L_2, L,M\rangle##. Maybe I'm missing something fundamental here, or I haven't truly understood the example we just calculated, but how can I explicitly prove that ##L^2\vert L_1,L_2,L,M\rangle = L(L+1)\vert L_1,L_2,L,M\rangle##?
 
  • #18
Markus Kahn said:
Once again, thank you very much for all the explanations up to this point! I understand now why it doesn't matter which basis we use to determine the eigenvalues, but I'm still struggling to understand why ##L^2 = (\vec L_1+\vec L_2)^2## has the eigenvector ##\vert L_1,L_2, L,M\rangle##. Maybe I'm missing something fundamental here, or I haven't truly understood the example we just calculated, but how can I explicitly prove that ##L^2\vert L_1,L_2,L,M\rangle = L(L+1)\vert L_1,L_2,L,M\rangle##?
Here is a step-by-step brute force method that should make it clear.
1. You write the coupled ##L^2## operator in terms of the 1-particle operators as follows
$$L^2=L_1^2+L_2^2+2\vec L_1\cdot \vec L_2=L_1^2+L_2^2+L_{1+}L_{2-}+L_{1-}L_{2+}+2L_{1z}L_{2z}$$2. You write the coupled state as a linear combination of the appropriate two-particle states using the Clebsch-Gordan coefficients
$$\vert L,M \rangle=\sum_{\substack{m_1,m_2\\ M=m_1+m_2}}\langle L_1,m_1 , L_2,m_2 \vert L,M \rangle \vert L_1,m_1 \rangle \vert L_2,m_2 \rangle$$3. You operate on the new form of ##\vert L,M \rangle## with the new form of ##L^2##,
$$(L_1^2+L_2^2+L_{1+}L_{2-}+L_{1-}L_{2+}+2L_{1z}L_{2z})\sum_{\substack{m_1,m_2\\ M=m_1+m_2}}\langle L_1,m_1 , L_2,m_2 \vert L,M \rangle \vert L_1,m_1 \rangle \vert L_2,m_2 \rangle$$If you've done everything right and the spirit of Schrödinger's cat has been watching over you, the result should simplify to
$$L(L+1)\sum_{\substack{m_1,m_2\\ M=m_1+m_2}}\langle L_1,m_1 , L_2,m_2 \vert L,M \rangle \vert L_1,m_1 \rangle \vert L_2,m_2 \rangle$$Also consider that instead of the general case, you might first set ##L_1=L_2=1## and then proceed. It should make the proof much simpler and will clarify what you did here; you can always prove the general case later.
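For ##L_1=L_2=1## the three steps can also be pushed through symbolically with sympy's Clebsch-Gordan coefficients (a sketch under my own conventions, not from the thread):

```python
from sympy import Matrix, eye, sqrt, simplify, zeros
from sympy.physics.quantum.cg import CG

ms = [1, 0, -1]                                  # basis order per particle
Lz = Matrix.diag(1, 0, -1)
Lp = Matrix([[0, sqrt(2), 0], [0, 0, sqrt(2)], [0, 0, 0]])
Lm = Lp.T

def kron(A, B):
    """Kronecker product of two 3x3 matrices as a 9x9 sympy Matrix."""
    return Matrix(9, 9, lambda i, j: A[i // 3, j // 3] * B[i % 3, j % 3])

# Step 1: L^2 = L1^2 + L2^2 + L1+ L2- + L1- L2+ + 2 L1z L2z
# (for l1 = l2 = 1, L1^2 + L2^2 = 4 * identity)
L2op = 4 * eye(9) + kron(Lp, Lm) + kron(Lm, Lp) + 2 * kron(Lz, Lz)

# Step 2: expand |1,1,L,M> in the product basis via Clebsch-Gordan
def coupled_state(L, M):
    v = zeros(9, 1)
    for i, m1 in enumerate(ms):
        for j, m2 in enumerate(ms):
            if m1 + m2 == M:
                v[3 * i + j] = CG(1, m1, 1, m2, L, M).doit()
    return v

# Step 3: L^2 |L,M> = L(L+1) |L,M> for every coupled state
for L in range(3):
    for M in range(-L, L + 1):
        v = coupled_state(L, M)
        assert simplify(L2op * v - L * (L + 1) * v) == zeros(9, 1)
```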
 
