Hamiltonian matrix off-diagonal elements?

SUMMARY

The discussion focuses on the construction of Hamiltonian matrices in quantum mechanics, particularly for optical applications involving the hyperfine levels of hydrogen. The Hamiltonian for the interaction of the electron's magnetic moment with an external magnetic field, \(H_B = -\mathbf{\mu}_e \cdot \mathbf{B} = 2\mu_B B S_z/\hbar\), is diagonal in the uncoupled basis and therefore acquires off-diagonal elements when written in the coupled basis. The relationship between the two bases is given by Clebsch-Gordan coefficients, which are used to transform the Hamiltonian matrix. The participants also emphasize that phase and ordering conventions in angular momentum addition must be applied consistently, since they affect the signs of the resulting matrix elements.

PREREQUISITES
  • Understanding of Hamiltonian operators in quantum mechanics
  • Familiarity with Clebsch-Gordan coefficients and angular momentum addition
  • Knowledge of hyperfine interactions in atomic physics
  • Basic principles of matrix representation in quantum mechanics
NEXT STEPS
  • Study the construction of Hamiltonian matrices in quantum mechanics
  • Learn about Clebsch-Gordan coefficients and their applications in quantum state transformations
  • Explore hyperfine interactions and their implications in atomic systems
  • Investigate numerical methods for simulating quantum systems with multiple energy levels
USEFUL FOR

Quantum physicists, computational physicists, and students studying atomic interactions and Hamiltonian matrix methods will benefit from this discussion.

TheDestroyer
I'm trying to understand how Hamiltonian matrices are built for optical applications. In the excerpt below, from the book "Optically polarized atoms: understanding light-atom interaction", what I don't understand is: why are the \mu_B B parts not diagonal? If the Hamiltonian is \vec{\mu} \cdot \vec{B} , why aren't all of its components simply diagonal? How is this matrix built systematically? Can someone please explain?

The following part is from the book:

We now consider the effect of a uniform magnetic field \mathbf{B} = B\hat{z} on the hyperfine levels of the {}^2 S_{1/2} ground state of hydrogen. Initially, we will neglect the effect of the nuclear (proton) magnetic moment. The energy eigenstates for the Hamiltonian describing the hyperfine interaction are also eigenstates of the operators \{F^2, F_z, I^2, S^2\} . Therefore, if we write out a matrix for the hyperfine Hamiltonian H_\text{hf} in the coupled basis \lvert Fm_F\rangle , it is diagonal. However, the Hamiltonian H_B for the interaction of the magnetic moment of the electron with the external magnetic field,

$$H_B = -\mathbf{\mu}_e\cdot\mathbf{B} = 2\mu_B B S_z/\hbar,\tag{4.20}$$

is diagonal in the uncoupled basis \lvert(SI)m_S, m_I\rangle , made up of eigenstates of the operators \{I^2, I_z, S^2, S_z\} . We can write the matrix elements of the Hamiltonian in the coupled basis by relating the uncoupled to the coupled basis. (We could also carry out the analysis in the uncoupled basis, if we so chose.)

The relationship between the coupled \lvert Fm_F\rangle and uncoupled \lvert(SI)m_Sm_I\rangle bases (see the discussion of the Clebsch-Gordan expansions in Chapter 3) is

$$\begin{align}
\lvert 1,1\rangle &= \lvert \bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr)\tfrac{1}{2},\tfrac{1}{2} \rangle ,\tag{4.21a} \\
\lvert 1,0\rangle &= \frac{1}{\sqrt{2}}\biggl(\lvert \bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr) \tfrac{1}{2},-\tfrac{1}{2}\rangle + \lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr) {-\tfrac{1}{2}},\tfrac{1}{2}\rangle\biggr),\tag{4.21b} \\
\lvert 1,-1 \rangle &= \lvert \bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr) {-\tfrac{1}{2}},-\tfrac{1}{2} \rangle,\tag{4.21c} \\
\lvert 0,0\rangle &= \frac{1}{\sqrt{2}}\biggl( \lvert \bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr)\tfrac{1}{2},-\tfrac{1}{2}\rangle - \lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr) {-\tfrac{1}{2}},\tfrac{1}{2}\rangle\biggr),\tag{4.21d}
\end{align}$$

Employing the hyperfine energy shift formula (2.28) and Eq. (4.20), one finds for the matrix of the overall Hamiltonian H_\text{hf} + H_B in the coupled basis

$$H = \begin{pmatrix}
\frac{A}{4} + \mu_B B & 0 & 0 & 0 \\
0 & \frac{A}{4} - \mu_B B & 0 & 0 \\
0 & 0 & \frac{A}{4} & \mu_B B \\
0 & 0 & \mu_B B & -\frac{3A}{4}
\end{pmatrix},\tag{4.22}$$

where we order the states (\lvert 1,1\rangle, \lvert 1,-1\rangle, \lvert 1,0\rangle, \lvert 0,0\rangle) .

And for Eq. (2.28) the other part is

$$\Delta E_F = \frac{1}{2}AK + B\frac{\frac{3}{2}K(K + 1) - 2I(I + 1)J(J + 1)}{2I(2I - 1)2J(2J - 1)},\tag{2.28}$$

where K = F(F + 1) - I(I + 1) - J(J + 1) . Here the constants A and B characterize the strengths of the magnetic-dipole and the electric-quadrupole interaction, respectively. B is zero unless I and J are both greater than 1/2 .
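
For readers who want to check the numbers, here is a minimal numerical sketch (with illustrative values for A and \mu_B B; I = J = 1/2, so the quadrupole term drops out) that reproduces the structure of Eq. (4.22) from Eqs. (2.28), (4.20), and (4.21):

```python
import numpy as np

# Hyperfine shift, Eq. (2.28); the quadrupole (B) term vanishes for I = J = 1/2
def hyperfine_shift(F, I, J, A, B=0.0):
    K = F*(F + 1) - I*(I + 1) - J*(J + 1)
    shift = 0.5 * A * K
    if B != 0.0:
        shift += B * (1.5*K*(K + 1) - 2*I*(I + 1)*J*(J + 1)) \
                 / (2*I*(2*I - 1) * 2*J*(2*J - 1))
    return shift

A     = 1.0   # hyperfine constant (illustrative value)
muB_B = 0.3   # mu_B * B           (illustrative value)

# H_hf is diagonal in the coupled basis, ordered (|1,1>, |1,-1>, |1,0>, |0,0>)
H_hf = np.diag([hyperfine_shift(F, 0.5, 0.5, A) for F in (1, 1, 1, 0)])

# H_B = 2 mu_B B S_z / hbar is diagonal in the uncoupled basis,
# here ordered |m_S, m_I> = (+,+), (-,-), (+,-), (-,+)
H_B_unc = muB_B * np.diag([1.0, -1.0, 1.0, -1.0])

# Rows of U: the coupled states expanded in the uncoupled basis, Eqs. (4.21)
s = 1/np.sqrt(2)
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, s,  s],
              [0, 0, s, -s]])

H = H_hf + U @ H_B_unc @ U.T
print(np.round(H, 6))   # reproduces the structure of Eq. (4.22)
```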
 
I'm not completely familiar with your notation (what F, I, and S are in terms of the physics). However, it appears that the author is assuming that the initial Hamiltonian is the hyperfine hydrogen Hamiltonian. Such a Hamiltonian is diagonal in the basis in which the operators \{F^2, F_z, I^2, S^2\} are diagonal, where \vec{F} = \vec{I} + \vec{S}. This usually occurs when the Hamiltonian contains a term \vec{I} \cdot \vec{S} = (F^2 - I^2 - S^2)/2.

However, when one adds a magnetic field, it introduces a term coupling to the magnetic moment of the system. In principle, this should be two terms, (\vec{\mu}_e + \vec{\mu}_p) \cdot \vec{B}, where the two vectors are the magnetic moments of the electron and proton, respectively. Explicitly, \vec{\mu}_{e,p} = g_{e,p} \mu_{e,p} \vec{S}_{e,p}/\hbar, where S is the angular momentum operator for the particle, g is a dimensionless proportionality constant of order 1 (approximately 2 for the electron and 5.6 for the proton), and \mu = e\hbar/2m with m the particle mass. Since the proton mass is about 2000 times larger, its magneton (and hence its Zeeman term) is correspondingly smaller, so we ignore it and consider only the electron term.
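
As a rough check on that estimate (approximate constants, just for scale), the ratio of the proton Zeeman term to the electron Zeeman term is of order 10^{-3}:

```python
# Ratio of the proton Zeeman term to the electron Zeeman term (approximate values)
g_e, g_p = 2.002, 5.586
m_e, m_p = 9.109e-31, 1.673e-27   # masses in kg

ratio = (g_p * m_e / m_p) / g_e   # = g_p * mu_N / (g_e * mu_B)
print(ratio)                       # ~1.5e-3, so the proton term is negligible here
```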

So our new Hamiltonian contains a term S_z, which cannot be made diagonal in our basis. Specifically, the commutator [F^2,S_z] \neq 0 guarantees that these operators can never both be diagonal in the same basis. However, we know that we can choose a basis of simultaneous eigenstates of \{I^2, I_z, S^2, S_z\}, in which S_z is diagonal. Therefore, one can make a transformation from one basis to the other; the matrix elements of the transformation are the Clebsch-Gordan coefficients. What your book has done is transform the magnetic Hamiltonian into the basis of your original problem, where it is non-diagonal.
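
Here is a short numerical illustration of that point, building S_z and F^2 from spin-1/2 matrices on the four-dimensional product space (a sketch, not code from the book):

```python
import numpy as np

# Spin-1/2 operators (in units of hbar) and the 2x2 identity
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])
one = np.eye(2)

# Electron spin S acts on the first factor, nuclear spin I on the second
Sx, Sy, Sz = (np.kron(o, one) for o in (sx, sy, sz))
Ix, Iy, Iz = (np.kron(one, o) for o in (sx, sy, sz))

# F = S + I, and F^2 = Fx^2 + Fy^2 + Fz^2
F2 = sum((S + I) @ (S + I) for S, I in [(Sx, Ix), (Sy, Iy), (Sz, Iz)])

# [F^2, S_z] != 0, so F^2 and S_z have no common eigenbasis
comm = F2 @ Sz - Sz @ F2
print(np.allclose(comm, 0))   # False
```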
 
Thank you for your reply.

The problem is exactly the way you understood it. Actually, my problem is how to create the Hamiltonian matrix in the \left| {{I^2}{I_z}{S^2}{S_z}} \right\rangle basis. Can you please explain how to construct that Hamiltonian?
 
OK, I'm still a bit unclear about the situation, but I'll take a guess as to what you want. Your question, if I'm understanding it correctly, is how to take an operator which is diagonal in the \left| {{I^2}{I_z}{S^2}{S_z}} \right\rangle basis and write it in the \left| {{F^2}{F_z}{I^2}{S^2}} \right\rangle basis?

This is the well-known problem of addition of angular momentum, and finding the Clebsch-Gordan coefficients. Your book claims to cover this in Chapter 3, and every quantum mechanics text should cover it in detail. The idea is that there exists a linear transformation between the two bases:

$$ \left| F\, m_F\, I\, S \right\rangle = \sum_{m_I, m_S} C^{F m_F}_{I m_I;\, S m_S} \left| I\, S\, m_I\, m_S \right\rangle, \qquad m_I + m_S = m_F $$

(Note that I am now labeling the kets by their eigenvalues rather than by the operators.) There is a standard procedure for obtaining the coefficients C, but you should really read up on it. This is how you obtain the relations (4.21) in your OP.
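
If you would rather get the coefficients from a library than work them out by hand, sympy's CG class reproduces the coefficients appearing in (4.21); a small sketch, assuming sympy is available:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1)/2

# <S m_S; I m_I | F m_F> for S = I = 1/2, reproducing the coefficients in (4.21)
for F, mF in [(1, 1), (1, 0), (1, -1), (0, 0)]:
    for mS in (half, -half):
        mI = mF - mS
        if abs(mI) > half:
            continue
        coeff = CG(half, mS, half, mI, F, mF).doit()
        print(f"F={F}, m_F={mF}:  m_S={mS}, m_I={mI}  ->  {coeff}")
```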

One can then build up the matrix in the new basis. As an example, let me write a matrix element here, where I'll use 4.21 from your OP:

$$ \left\langle 1 0 \frac{1}{2} \frac{1}{2} \right| S_z \left| 0 0 \frac{1}{2} \frac{1}{2} \right\rangle = \frac{1}{2} \left( \left\langle (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right| + \left\langle (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right| \right) S_z \left( \left| (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right\rangle - \left| (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right\rangle \right)$$

Notice that the cross terms on the right-hand side vanish, since S_z is diagonal in the uncoupled basis on the right:

$$ \left\langle 1 0 \frac{1}{2} \frac{1}{2} \right| S_z \left| 0 0 \frac{1}{2} \frac{1}{2} \right\rangle = \frac{1}{2} \left( \frac{\hbar}{2} \left\langle ( \frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} | (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right\rangle - (- \frac{\hbar}{2}) \left\langle ( \frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} | (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right\rangle \right) = \frac{\hbar}{2} $$

Therefore, the matrix of S_z has an off-diagonal element in the \left| {{F^2}{F_z}{I^2}{S^2}} \right\rangle basis. The calculation above allows you to compute the rest of the matrix (4.22) in your OP.
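
The same matrix element can be checked numerically; here is a minimal sketch restricted to the m_F = 0 subspace, with S_z written in units of \hbar:

```python
import numpy as np

# Work in the m_F = 0 subspace; uncoupled basis vectors |m_S, m_I>
up_down = np.array([1.0, 0.0])    # |+1/2, -1/2>
down_up = np.array([0.0, 1.0])    # |-1/2, +1/2>

# S_z restricted to this subspace, in units of hbar
Sz = np.diag([0.5, -0.5])

# Coupled states from Eqs. (4.21b) and (4.21d)
one_zero  = (up_down + down_up) / np.sqrt(2)   # |1,0>
zero_zero = (up_down - down_up) / np.sqrt(2)   # |0,0>

print(one_zero @ Sz @ zero_zero)   # 0.5  -> <1,0|S_z|0,0> = hbar/2
print(one_zero @ Sz @ one_zero)    # 0.0  -> the diagonal element vanishes
```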
 
Thank you so much for your answer. There's something weird happening when I do this calculation that I don't understand.

Now we are adding I and S, and this addition is supposed to be commutative, right? Look at the matrix element that you wrote there:

$$ \left\langle 1 0 \frac{1}{2} \frac{1}{2} \right| S_z \left| 0 0 \frac{1}{2} \frac{1}{2} \right\rangle = \frac{1}{2} \left( \left\langle (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right| + \left\langle (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right| \right) S_z \left( \left| (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right\rangle - \left| (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right\rangle \right) $$

What I don't understand is this: since I = 1/2 and S = 1/2, it seems possible to exchange the two quantum numbers inside each ket, which would mean that the S_z eigenvalue can be read as either +1/2 or -1/2 depending on the mixing you do.

To clarify more, look at this:

$$ \left( \left| (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right\rangle - \left| (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right\rangle \right) $$

In each ket you can (conceptually) choose the S_z eigenvalue to be either +1/2 or -1/2, but the choice has to be made consistently: if you take +1/2 as the S_z eigenvalue in the first ket, then you have to take -1/2 in the second, and vice versa. This seems to be another way of saying that angular momentum addition is commutative when I = S = 1/2. Is my reasoning correct?

The surprise for me is that the result changes depending on this choice: it comes out as +\hbar/2 with one choice and -\hbar/2 with the other.

How do you explain that? Please elaborate.

Why is this important? Because I'm writing a computer program to do this as a warm-up for a much more complicated simulation with 32 levels, and my matrix came out with two different signs for those mixed states because I chose a different ordering in the sum, which confused me.
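
To make the issue concrete, here is a minimal sketch of the two sign choices in the (\lvert 1,0\rangle, \lvert 0,0\rangle) block, with hypothetical values for A and \mu_B B; the off-diagonal element flips sign, but the energy eigenvalues come out the same either way:

```python
import numpy as np

A, muB_B = 1.0, 0.3   # hypothetical values

def mF0_block(sign):
    """The (|1,0>, |0,0>) block of H_hf + H_B; 'sign' encodes which overall
    phase is chosen for the |0,0> state (Eq. 4.21d versus its negative)."""
    return np.array([[A/4,         sign*muB_B],
                     [sign*muB_B, -3*A/4]])

for sign in (+1, -1):
    print(sign, np.linalg.eigvalsh(mF0_block(sign)))   # same eigenvalues either way
```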
 
