I Effective molecular Hamiltonian and Hund cases

  • #51
amoforum said:
Take a second look at the definitions of ##J^\pm## that I wrote a few posts above. ##J^-## really does do the opposite of what normal ##+/-## operators do, in that it couples ##|\Omega + 1>## instead of ##|\Omega - 1>##. So they are using it correctly, and the reason is not at all trivial.

It has to do with operators referenced to molecule-fixed coordinates obeying anomalous commutation relations as compared to space-fixed ones. See Lefebvre-Brion/Field Section 2.3.1.

B&C also talk about this in detail (section 5.5.6), but they do it as part of a giant chapter on angular momentum, so I wouldn't recommend referencing that section just for the sake of getting to the bottom of this particular issue. If you have a lot of time on your hands, or at some point you want to derive all the matrix elements from scratch like B&C do in the later chapters, then I very highly recommend starting from the beginning of Chapter 5.
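The anomalous-commutation point can be checked numerically. This is an illustrative sketch (not from B&C): with explicit ##J = 1## matrices, a set of components obeying ##[A_x, A_y] = -iA_z## has ##A^- = A_x - iA_y## acting as the usual *raising* operator, i.e. coupling ##|\Omega>## to ##|\Omega + 1>##.

```python
import numpy as np

# Standard J = 1 matrices in the |+1>, |0>, |-1> basis, obeying the
# normal (space-fixed) relation [Jx, Jy] = +i Jz.
s = 1.0 / np.sqrt(2.0)
jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(jx, jy), 1j * jz)   # normal relations

# The set (jx, -jy, jz) obeys the ANOMALOUS relations [Ax, Ay] = -i Az,
# as molecule-fixed components do.
ax, ay, az = jx, -jy, jz
assert np.allclose(comm(ax, ay), -1j * az)

# With anomalous relations, A^- = Ax - i*Ay equals the usual raising
# operator, so it couples |Omega> to |Omega + 1>:
aminus = ax - 1j * ay
print(aminus @ np.array([0, 1, 0], dtype=complex))  # ~ [sqrt(2), 0, 0]
```

Acting on ##|\Omega = 0>## (the middle basis vector), ##A^-## produces a vector proportional to ##|\Omega = +1>##, which is exactly the "opposite" behavior described above.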
Ahh I see, and now my derivation from above makes sense and it actually gives the expected results. Thanks a lot!
 
  • #52
BillKet said:
Ahh I see, and now my derivation from above makes sense and it actually gives the expected results. Thanks a lot!
Hello again! I looked a bit at some actual molecules and I noticed that for ##\Pi_{1/2}## states we don't have a spin-rotation coupling, i.e. ##\gamma N\cdot S##. As far as I can tell the effective operator shouldn't be zero in all cases in a Hund's case (a) basis, and the parameter ##\gamma## (7.110 in B&C) doesn't appear to be obviously zero in the ##\Lambda=1## case. I read in Lefebvre-Brion/Field something stating that spin-rotation coupling in a ##\Sigma## state and lambda doubling in a ##\Pi## state actually come together, hence somehow the spin-rotation coupling in the ##\Pi## state is already included in the lambda doubling. But I am not totally sure about it. First of all, the equation I mentioned above doesn't say anything about ##\Lambda##. Is it by convention applied only to certain values of ##\Lambda##? More importantly, in that equation spin-rotation appears at first order in PT, while the lambda doubling comes in at second order. So even at first order I should see the spin-rotation effect in a ##\Pi## state, too. Why is that term ignored in the effective Hamiltonian of a ##\Pi## state? Thank you!
 
  • #53
BillKet said:
Hello again! I looked a bit at some actual molecules and I noticed that for ##\Pi_{1/2}## states we don't have a spin-rotation coupling, i.e. ##\gamma N\cdot S##. As far as I can tell the effective operator shouldn't be zero in all cases in a Hund's case (a) basis, and the parameter ##\gamma## (7.110 in B&C) doesn't appear to be obviously zero in the ##\Lambda=1## case. I read in Lefebvre-Brion/Field something stating that spin-rotation coupling in a ##\Sigma## state and lambda doubling in a ##\Pi## state actually come together, hence somehow the spin-rotation coupling in the ##\Pi## state is already included in the lambda doubling. But I am not totally sure about it. First of all, the equation I mentioned above doesn't say anything about ##\Lambda##. Is it by convention applied only to certain values of ##\Lambda##? More importantly, in that equation spin-rotation appears at first order in PT, while the lambda doubling comes in at second order. So even at first order I should see the spin-rotation effect in a ##\Pi## state, too. Why is that term ignored in the effective Hamiltonian of a ##\Pi## state? Thank you!

I'm glad you caught that bit in Lefebvre-Brion/Field, which is one part of the story. In addition, take a look at Section 3.4.3, where they look at the first-order contributions for Hund's cases (a) and (b). Mentioned in the latter section, and in at least a couple of places in B&C, is that the second-order contribution is almost always the dominant term, except for very light molecules with high rotational energies.

Physically, to first order the spin-rotation interaction is super weak, because you're dependent on the nuclear magneton. But at second order, you'll start coupling in ##L##, which is a full Bohr magneton. So for a ##\Lambda > 0## state, like ##\Pi##, that's going to be more important.

I suspect that's why most literature doesn't dwell on the first order term for Hund's case (a).
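The magneton scaling in that argument is easy to quantify. A quick check of the ratio (using scipy's CODATA values; the numbers here are just the standard physical constants, not anything molecule-specific):

```python
from scipy.constants import physical_constants

# First-order spin-rotation rides on the nuclear magneton; second-order
# coupling through L brings in a full Bohr magneton, ~1836x larger.
mu_B = physical_constants['Bohr magneton'][0]
mu_N = physical_constants['nuclear magneton'][0]
print(mu_B / mu_N)  # ~1836
```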
 
  • #54
amoforum said:
I'm glad you caught that bit in Lefebvre-Brion/Field, which is one part of the story. In addition, take a look at Section 3.4.3, where they look at the first-order contributions for Hund's cases (a) and (b). Mentioned in the latter section, and in at least a couple of places in B&C, is that the second-order contribution is almost always the dominant term, except for very light molecules with high rotational energies.

Physically, to first order the spin-rotation interaction is super weak, because you're dependent on the nuclear magneton. But at second order, you'll start coupling in ##L##, which is a full Bohr magneton. So for a ##\Lambda > 0## state, like ##\Pi##, that's going to be more important.

I suspect that's why most literature doesn't dwell on the first order term for Hund's case (a).
Thanks a lot! That makes sense! I have a quick question about adding an external magnetic field. Assume we care only about the electron spin and orbital angular momentum to first order in PT (at the electronic level, i.e. when building the effective H). The full (not effective) H for these interactions is $$g_L\mu_B B_z\cdot L_z + g_S\mu_B B_z\cdot S_z $$ I will ignore the coefficients from now on and just focus on the operators. Assume we are in a Hund's case (a). For the spin part, we don't have anything that connects different electronic levels, so the effective Hamiltonian for the spin-magnetic field interaction is the same as the full H, right? However I still need to account for these Wigner rotation matrices when calculating rotational matrix elements. For example, if I want to calculate something diagonal in ##\Sigma##, I only need the projection of S on the internuclear axis, but in the equation above ##S_z## is quantized in the lab frame, so the actual operator would be $$g_S\mu_B \mathcal{D}_{00}^{(1)}(\omega) B_z\cdot \Sigma$$ and for the full matrix element I would have to separate the lab and intrinsic parts and I would get something like: $$<J,M,\Omega|g_S\mu_B \mathcal{D}_{00}^{(1)}(\omega)|J,M,\Omega><\Lambda,S,\Sigma|T_{0}^1(S)|\Lambda,S,\Sigma>$$ For the orbital angular momentum part, I have to account for the electronic part, as ##L_z## in the lab is not ##L_z## in the molecule frame, so I would need to keep only the ##T_{q=0}^1(L)## part for the first-order PT, and the matrix element here would be $$<J,M,\Omega|g_L\mu_B \mathcal{D}_{00}^{(1)}(\omega)|J,M,\Omega><\Lambda,S,\Sigma|T_{0}^1(L)|\Lambda,S,\Sigma>$$ Is this right? One more question: in B&C after equation 7.231 they list all the terms in the effective H due to the Zeeman effect, and for the orbital motion they have ##T_{p=0}^1(L)##. Should that p be a q?
If it is a p, as I mentioned above, that won't be diagonal at the electronic level in the molecule intrinsic frame and we can't have that in the effective Hamiltonian. Thank you!
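A hedged check of the diagonal rotational factor in these Zeeman matrix elements: written with two 3j symbols (the rotation-matrix-element structure B&C give as Eq. 5.146), the diagonal ##\mathcal{D}_{00}^{(1)}## element should reduce to the familiar ##M\Omega/J(J+1)##. A small sympy sketch with illustrative quantum numbers:

```python
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_3j

def d00_diag(J, Omega, M):
    # <J, Omega, M| D^1_00(omega) |J, Omega, M> via two 3j symbols
    # (the structure of B&C Eq. 5.146, specialized to k=1, p=q=0)
    phase = (-1)**(M - Omega)
    return simplify(phase * (2*J + 1)
                    * wigner_3j(J, 1, J, -M, 0, M)
                    * wigner_3j(J, 1, J, -Omega, 0, Omega))

J, Omega, M = Rational(3, 2), Rational(1, 2), Rational(3, 2)
print(d00_diag(J, Omega, M), M * Omega / (J * (J + 1)))  # both 1/5
```

Both expressions give 1/5 for this choice, consistent with ##\langle\cos\theta\rangle = M\Omega/J(J+1)##.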
 
  • #55
The line of reasoning looks correct. Some details:

Read B&C section 5.5.5. Eq. 5.143 shows the relation between the space-fixed and molecule-fixed operators. So if B&C ever writes a ##T^1_p##, you know how to turn it into molecule-fixed coordinates.

In general you need to do two things: 1. Rotate into the molecule frame (via Eq. 5.143). 2. Unpack the composite angular momenta to reach the one you wish to address. You want ##S##, but it's part of ##J##, which means you'll get Wigner 6-j symbols.

At this point I suggest you go through Chapter 5 carefully and look at real examples. Eqns. 9.56 and 9.58, for example, get what you want. Notice that in the first line the tensor-operator conversion from space-fixed to molecule-fixed has already been done. After that, three equations were used to get the rest: 1. Eq. 5.123 (Wigner-Eckart theorem) to get the 3j symbol involving ##M_F##, 2. Eq. 5.136 to extract ##S## out of ##J##, and 3. Eq. 5.146, the matrix elements of the rotation matrix (3j symbol with ##\Omega##).

This basis also deals with hyperfine (##F##), but that just comes down to applying Eq. 5.136 to extract ##J## out of ##F##.
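One consequence of the "extract ##S## out of ##J##" machinery is the projection theorem, which can be verified directly with Clebsch-Gordan coefficients. A small sympy sketch (the quantum numbers ##L=1##, ##S=1/2##, ##J=3/2## are illustrative, not tied to a particular molecule):

```python
from sympy import Rational, simplify
from sympy.physics.quantum.cg import CG

# Projection theorem: <J M|S_z|J M> = M*[J(J+1) + S(S+1) - L(L+1)] / (2 J (J+1)),
# checked by expanding |J M> in |L mL>|S mS> with Clebsch-Gordan coefficients.
L, S = 1, Rational(1, 2)
J, M = Rational(3, 2), Rational(1, 2)

Sz = sum((CG(L, M - ms, S, ms, J, M).doit())**2 * ms
         for ms in (Rational(-1, 2), Rational(1, 2)))
formula = M * (J*(J + 1) + S*(S + 1) - L*(L + 1)) / (2*J*(J + 1))
print(simplify(Sz - formula))  # 0
```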
 
  • #56
amoforum said:
The line of reasoning looks correct. Some details:

Read B&C section 5.5.5. Eq. 5.143 shows the relation between the space-fixed and molecule-fixed operators. So if B&C ever writes a ##T^1_p##, you know how to turn it into molecule-fixed coordinates.

In general you need to do two things: 1. Rotate into the molecule frame (via Eq. 5.143). 2. Unpack the composite angular momenta to reach the one you wish to address. You want ##S##, but it's part of ##J##, which means you'll get Wigner 6-j symbols.

At this point I suggest you go through Chapter 5 carefully and look at real examples. Eqns. 9.56 and 9.58, for example, get what you want. Notice that in the first line the tensor-operator conversion from space-fixed to molecule-fixed has already been done. After that, three equations were used to get the rest: 1. Eq. 5.123 (Wigner-Eckart theorem) to get the 3j symbol involving ##M_F##, 2. Eq. 5.136 to extract ##S## out of ##J##, and 3. Eq. 5.146, the matrix elements of the rotation matrix (3j symbol with ##\Omega##).

This basis also deals with hyperfine (##F##), but that just comes down to applying Eq. 5.136 to extract ##J## out of ##F##.
Thanks for this! I went over Chapter 5 and how that works makes more sense now. But my question still remains. He claims that we can have ##T_{p=0}^1(L)## in the effective Hamiltonian explicitly, which is equivalent to having ##T_{q=\pm 1}^1(L)## explicitly in the effective Hamiltonian. However, in a previous section he spends quite some time talking about how having ##R^2## in the effective Hamiltonian is not good, specifically because that implies having ##T_{q=\pm 1}^1(L)##. Shouldn't ##T_{q=\pm 1}^1(L)## be absorbed into effective parameters at second and higher orders of PT, leaving only ##T_{q=0}^1(L)## in the effective Hamiltonian as an operator?
 
  • #57
Haha okay the answer to this might be pretty funny, assuming I understand the question correctly:

Look at Eq. 7.217, which has a ##g_L## in it. Eqns 7.221 and 7.222 show the result of putting the off-diagonal terms into an effective parameter: ##g_l##. (Notice the capitalized vs. not.) In the equations after Eqn. 7.231, you're back to ##g_L##, where he hasn't done the procedure yet.
 
  • #58
amoforum said:
Haha okay the answer to this might be pretty funny, assuming I understand the question correctly:

Look at Eq. 7.217, which has a ##g_L## in it. Eqns 7.221 and 7.222 show the result of putting the off-diagonal terms into an effective parameter: ##g_l##. (Notice the capitalized vs. not.) In the equations after Eqn. 7.231, you're back to ##g_L##, where he hasn't done the procedure yet.
Hmm, that would make sense if that were the full Hamiltonian. But right after 7.231 he claims that he is listing the terms in the effective Hamiltonian. Shouldn't we get rid of the off-diagonal terms at that level? Also, one of the terms he is listing is the "anisotropic correction to the electron spin interaction", which appears only after you construct the effective Hamiltonian; it is not there in the original Hamiltonian (also, that term has ##g_l##). It almost looks like he is mixing terms from the original and effective Hamiltonians.
 
  • #59
Oh sorry, it looks like I did misunderstand (##g_l## is one order higher, starting with ##g_L##).

My understanding is that ##g_L## already is an effective parameter. In Eqn. 7.217 it's introduced as a perturbation to the full Hamiltonian. But he mentions right after that it can deviate from 1.0 after transforming into the effective Hamiltonian. To first order, it has exactly the same form, and because it's so close to 1.0, I believe he just keeps the same notation: ##g_L##.

So in an experiment, when you go to fit the spectrum: ##g_L## will not come out to 1 due to non-adiabatic mixing. Here's a paper where you can see them fitting to a value not equal to 1: (Wilton L. Virgo et al 2005 ApJ 628 567)

Edit: but I see your point. I'm trying to think of a case where it wouldn't just reduce to ##q = 0##. For example maybe ##\Omega## can change, but not ##\Lambda##. Let me think on this.

Edit2: Yes, I believe the above is correct. In going to the effective Hamiltonian, the ##g_LT^1_{p=0}(L)## term operates within a single ##|\Lambda>## state, but you can have mixing between ##\Omega## states.
 
  • #60
amoforum said:
Oh sorry, it looks like I did misunderstand (##g_l## is one order higher, starting with ##g_L##).

My understanding is that ##g_L## already is an effective parameter. In Eqn. 7.217 it's introduced as a perturbation to the full Hamiltonian. But he mentions right after that it can deviate from 1.0 after transforming into the effective Hamiltonian. To first order, it has exactly the same form, and because it's so close to 1.0, I believe he just keeps the same notation: ##g_L##.

So in an experiment, when you go to fit the spectrum: ##g_L## will not come out to 1 due to non-adiabatic mixing. Here's a paper where you can see them fitting to a value not equal to 1: (Wilton L. Virgo et al 2005 ApJ 628 567)

Edit: but I see your point. I'm trying to think of a case where it wouldn't just reduce to ##q = 0##. For example maybe ##\Omega## can change, but not ##\Lambda##. Let me think on this.

Edit2: Yes, I believe the above is correct. In going to the effective Hamiltonian, the ##g_LT^1_{p=0}(L)## term operates within a single ##|\Lambda>## state, but you can have mixing between ##\Omega## states.
So I tried to calculate the effective Hamiltonian associated with the Stark effect under this formalism, but I am not sure if what I am doing is right. Assume the wavefunction can be written as ##|\eta>|i>##, with ##|\eta>## the electronic (intrinsic) part and ##|i>## the vibrational and rotational part. Assuming the electric field is in the z-direction, the Stark interaction is $$E_zd_z = E_z\sum_q \mathcal{D}_{0q}^1(\omega) T_{q}^1(d)$$ where I transformed the dipole moment from the lab to the molecule frame, with $$d = d_{el}(r)+d_{nucl}(R)$$ where ##d_{el}(r) = -er##, with ##r## the location of the electron, and ##d_{nucl}(R) = e(Z_1-Z_2)R##, with ##R## the internuclear distance. I will just write ##\mathcal{D}## instead of ##\mathcal{D}_{0q}^1(\omega)## from now on. Calculating the effective Hamiltonian to first order in PT, as in B&C, I would get $$E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{el}+d_{nucl})|\eta>|j> = $$ $$E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{el})|\eta>|j> + E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{nucl})|\eta>|j> = $$ $$E_z\sum_q<i|\mathcal{D}<\eta|T_q^1(d_{el})|\eta>|j> + E_z\sum_q<i|\mathcal{D}T_q^1(d_{nucl})<\eta|\eta>|j> $$ Using ##<\eta|T_q^1(d_{el})|\eta>=0## due to parity arguments and ##<\eta|\eta>=1## due to orthonormality of the electronic wavefunctions, we get: $$E_z\sum_q<i|\mathcal{D}T_q^1(d_{nucl})|j> $$ Given that ##d_{nucl}=e(Z_1-Z_2)R## and ##R## is defined along the z-axis of the molecule frame, only the ##q=0## component survives, so in the end we get $$e(Z_1-Z_2)E_z<i|\mathcal{D}R|j> = $$ $$e(Z_1-Z_2)E_z<vib_i|R|vib_i><rot_i|\mathcal{D}_{00}^1(\omega)|rot_j> $$ Given that I didn't make any assumption about the rotational basis (it can be Hund's case (a) or (b) without affecting the derivation), I can drop the rotational expectation value and leave that part as an operator, so in the end the first-order PT effective term coming from the Stark effect is $$e(Z_1-Z_2)E_z<vib_i|R|vib_i>\mathcal{D}_{00}^1(\omega) $$ so basically the
effective operator is a Wigner matrix component. Is my derivation right? Thank you!
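As an illustration of where an effective operator proportional to ##\mathcal{D}_{00}^1(\omega) = \cos\theta## leads, here is a toy numerical sketch (not from B&C; the values of ##B## and ##\mu E## are arbitrary illustrative numbers) of the Stark matrix for a rigid rotor in a ##\Sigma## (##\Omega = 0##) state:

```python
import numpy as np

# Toy Stark matrix for a rigid rotor, Omega = 0, M = 0, using the standard
# matrix element <J+1,0,M|cos(theta)|J,0,M> = sqrt(((J+1)^2 - M^2)/((2J+1)(2J+3))).
# B and muE (= mu * E_z) share the same arbitrary energy units.
B, muE, M = 1.0, 0.3, 0
Jmax = 20                                   # basis truncation, converged for low J

Js = np.arange(abs(M), Jmax + 1, dtype=float)
H = np.diag(B * Js * (Js + 1.0))
for i, J in enumerate(Js[:-1]):
    c = np.sqrt(((J + 1.0)**2 - M**2) / ((2*J + 1.0) * (2*J + 3.0)))
    H[i, i + 1] = H[i + 1, i] = -muE * c    # -mu.E coupling, Delta J = +/- 1

E0 = np.linalg.eigvalsh(H)[0]
# Second-order PT predicts E0 ~ -(muE)^2 / (6B) for small muE/B:
print(E0, -(muE**2) / (6 * B))
```

The ground level shifts down quadratically in the field, matching the perturbative estimate to a few parts in ten thousand for these parameters.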
 
  • #61
I'm definitely out of my comfort zone here, so take this with a grain of salt. Your result says that the only Stark shift is due to the permanent dipole moment of the molecule, and I don't buy that. I think what's missing is the off-diagonal couplings between degenerate electronic levels, like a ##\Lambda##-doublet. There should be some polarizability there that scales inversely with the ##\Lambda##-doubling energy splitting, I think? I'm not 100% sure about the scaling, that's just something I think I remember reading in a review paper, but they were talking about ##\Omega##-doublets.

My handiness with Wigner algebra is crud, but the angular part looks right. Wolfram says ##D^{1}_{00}(\psi,\theta,\phi) = \cos \theta##, so it certainly seems reasonable.
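The Wolfram result is easy to reproduce with sympy's Wigner-D implementation:

```python
from sympy import symbols, cos
from sympy.physics.quantum.spin import Rotation

beta = symbols('beta')
# Small Wigner d-matrix element d^1_00(beta); D^1_00 has no alpha/gamma
# dependence, so D^1_00 = d^1_00 = cos(beta).
print(Rotation.d(1, 0, 0, beta).doit())  # cos(beta)
```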
 
  • #62
BillKet, I'm assuming you went through Section 6.11.6 in B&C? In that case, yes, you'll end up keeping the first term in equation 6.333 (for an intra-electronic transition; they mention the first term goes to zero for inter-electronic transitions). And then your matrix element is equation 6.331, where ##\cos\theta## is your Wigner matrix element.

Twigg, I believe the ##\Lambda##-doubling is contained in the rotational part of the matrix element. It shows up once you pick the basis set, which will include the parity eigenstates. Then, when solving for the polarizability, that energy splitting shows up in the denominator, and its size is governed by the interaction that splits the parity eigenstates.
 
  • #63
amoforum said:
BillKet, I'm assuming you went through Section 6.11.6 in B&C? In that case, yes, you'll end up keeping the first term in equation 6.333 (for an intra-electronic transition; they mention the first term goes to zero for inter-electronic transitions). And then your matrix element is equation 6.331, where ##\cos\theta## is your Wigner matrix element.

Twigg, I believe the ##\Lambda##-doubling is contained in the rotational part of the matrix element. It shows up once you pick the basis set, which will include the parity eigenstates. Then, when solving for the polarizability, that energy splitting shows up in the denominator, and its size is governed by the interaction that splits the parity eigenstates.
@Twigg @amoforum thank you for your replies. So I think I did make a mistake for the case of ##\Lambda \neq 0##: there I should first calculate the matrix element in the Hund's case (a) basis and after that combine the Hund's case (a) states into parity eigenstates. I think I did it the other way around. I will look more closely into that. I also took a look over Section 6.11.6, thank you for pointing me towards that. I actually have a quick question about the BO approximation now (unrelated to the EDM calculation). Before equation 6.333 they say: "We now make use of the Born–Oppenheimer approximation which allows us to separate the electronic and vibrational wave functions", and this is the typical statement you see in probably all books on molecular physics. And now I am wondering if I am missing something. Of course the BO approximation allows that separation, but after reading the effective Hamiltonian chapter it seems like that separation is always true, up to any desired order in PT. The BO approximation is basically the zeroth order, and that kind of statement implies that the separation is valid only under that very constraining assumption. Isn't that separation always true once we do these PT changes (isn't this the whole point of the effective Hamiltonian)? Along the same lines, I just wanted to make sure I understand how one goes from BO to Hund cases. In BO, one has a wavefunction of the form ##|\eta\nu J M>=|\eta>|\nu>Y_{J}^{M}(\theta,\phi)##, where ##|\eta>## is the electronic wavefunction (in the intrinsic frame of the molecule), ##|\nu>## is the vibrational wavefunction and ##Y_{J}^{M}(\theta,\phi)## is the spherical harmonic, describing the rotation of the molecule frame with respect to the lab frame.
Then using an identity of the form ##Y_{J}^{M}(\theta,\phi)=\sum_{\Omega=-J}^{J}\sqrt{\frac{2J+1}{8\pi^2}}\mathcal{D}_{M\Omega}^{J*}(\theta,\phi)## (I might be off with that constant) we are able to get the Hund cases, which for case (a), for example, based on this equation would become ##|\eta>|\nu>|J \Omega M>|S\Sigma>##, where ##\mathcal{D}_{M\Omega}^{J*}(\theta,\phi)=|J \Omega M>## and ##|S\Sigma>## was pulled out by hand for completeness. Is this correct? Thank you!
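The normalization constant ##\sqrt{(2J+1)/8\pi^2}## mentioned above can be checked numerically for a simple case: ##J=1##, ##M=\Omega=0##, where ##\mathcal{D}^{1*}_{00}=\cos\beta##.

```python
import numpy as np
from scipy.integrate import quad

# For J = 1, M = Omega = 0 the symmetric-top function is
# sqrt((2J+1)/(8 pi^2)) * cos(beta). Integrate |.|^2 over the Euler angles
# (volume element sin(beta) dbeta dalpha dgamma, alpha and gamma over [0, 2pi]).
J = 1
val, _ = quad(lambda b: np.cos(b)**2 * np.sin(b), 0, np.pi)
norm = (2*np.pi)**2 * val * (2*J + 1) / (8 * np.pi**2)
print(np.isclose(norm, 1.0))  # True
```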
 
  • #64
Thanks for the clarification, @amoforum! And @BillKet, I'd actually be curious to see what you come up with for the stark shift, if you find the time. I tried to spend some time learning this once but my coworkers weren't having it and sent me back to mixing laser dye :doh: No pressure, of course!

As far as the BO approximation, when we did spectroscopy we didn't really keep a detailed effective Hamiltonian, we would just re-measure the lifetimes and rotational constants in other vibrational states if there was a need to do so. I think in molecules where the BO violation is weak, you can take this kind of pragmatic approach. Then again, we only thought about molecules with highly diagonal Franck-Condon factors so we never really ventured above ##\nu = 2## or so.
 
  • #65
@BillKet Yes, you assume the BO approximation first, then handle the non-adiabatic terms with perturbation theory. i.e. those parameters (or "constants") in the effective Hamiltonian.

As for your second question, that looks right, except I want to clarify: a spherical harmonic is actually a subset of the generalized rotation matrix elements (see Eq. 5.52 in B&C). More generally, you'd start with asymmetric top eigenfunctions (eigenfunctions of Eq. 5.58 in B&C), which for a diatomic would then reduce to symmetric top wavefunctions. B&C Section 5.3.4 might be helpful.
 
  • #66
amoforum said:
@BillKet Yes, you assume the BO approximation first, then handle the non-adiabatic terms with perturbation theory. i.e. those parameters (or "constants") in the effective Hamiltonian.

As for your second question, that looks right, except I want to clarify: a spherical harmonic is actually a subset of the generalized rotation matrix elements (see Eq. 5.52 in B&C). More generally, you'd start with asymmetric top eigenfunctions (eigenfunctions of Eq. 5.58 in B&C), which for a diatomic would then reduce to symmetric top wavefunctions. B&C Section 5.3.4 might be helpful.
@amoforum I looked in more detail at the derivation in B&C Section 6.11.6 and I am actually a bit confused. Using spherical tensors, the transition between two electronic levels would be:

$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

The second term is zero because ##<\eta'|\eta> = 0##. But the first term is different from the one in B&C equation 6.331. First of all, differently from before (transitions within a given electronic state), ##d_{el}## has components in the intrinsic frame for ##q=\pm 1##, not only for ##q=0##, so that term is not just ##\cos\theta## anymore. Why do they ignore the other two terms? Also, the expectation value ##<\eta'|T_q^1(d_{el})|\eta>## is a function of ##R## (the electronic wavefunctions depend on ##R##), so we can't just take it out of the vibrational integral like B&C do in 6.332. What am I missing?
 
  • #67
BillKet said:
@amoforum I looked in more detail at the derivation in B&C Section 6.11.6 and I am actually a bit confused. Using spherical tensors, the transition between two electronic levels would be:

$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

The second term is zero because ##<\eta'|\eta> = 0##. But the first term is different from the one in B&C equation 6.331. First of all, differently from before (transitions within a given electronic state), ##d_{el}## has components in the intrinsic frame for ##q=\pm 1##, not only for ##q=0##, so that term is not just ##\cos\theta## anymore. Why do they ignore the other two terms? Also, the expectation value ##<\eta'|T_q^1(d_{el})|\eta>## is a function of ##R## (the electronic wavefunctions depend on ##R##), so we can't just take it out of the vibrational integral like B&C do in 6.332. What am I missing?

As to your first question:

For an intra-electronic transition, you're coupling two electronic states that have exactly the same electron spatial distribution. The electron population is distributed symmetrically about the molecular axis, so there is no permanent dipole moment perpendicular to the molecular axis for the electric field to interact with. There might be one at really short time scales, but then we're not in the Born-Oppenheimer regime anymore.

The same argument, run in reverse, applies to inter-electronic transitions, where you're coupling two electronic states that have different electron spatial distributions. Hence, there's usually a dipole moment to interact with (considering symmetry and all that).

As to your second question:

B&C haven't separated the electronic and vibrational integrals in Eq. 6.332 yet. They first apply the Born-Oppenheimer approximation, then separate them. The R-dependence shows up in Eq. 6.333.
 
  • #68
amoforum said:
As to your first question:

For an intra-electronic transition, you're coupling two electronic states that have exactly the same electron spatial distribution. The electron population is distributed symmetrically about the molecular axis, so there is no permanent dipole moment perpendicular to the molecular axis for the electric field to interact with. There might be one at really short time scales, but then we're not in the Born-Oppenheimer regime anymore.

The same argument, run in reverse, applies to inter-electronic transitions, where you're coupling two electronic states that have different electron spatial distributions. Hence, there's usually a dipole moment to interact with (considering symmetry and all that).

As to your second question:

B&C haven't separated the electronic and vibrational integrals in Eq. 6.332 yet. They first apply the Born-Oppenheimer approximation, then separate them. The R-dependence shows up in Eq. 6.333.
For the first question:

The dipole moment, as an operator, has two components, ##d_{el}(r)## and ##d_{nucl}(R)##. When the transition is within the same electronic state, what we are left with is ##\sum_q <T_q^1(d_{nucl}(R))>##. But ##d_{nucl}(R)## has only a ##q=0## component, so there it is obvious why we drop the ##q=\pm 1## terms. But in the case of transitions between two different electronic states, we are left with ##\sum_q <\eta|T_q^1(d_{el}(r))|\eta'>##. I am not sure why, in this case, ##<\eta|T_{q=\pm 1}^1(d_{el}(r))|\eta'>## would be zero; this is equivalent to ##<\eta|x|\eta'>=0## and ##<\eta|y|\eta'>=0##. Is it because of the cylindrical symmetry?
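The cylindrical-symmetry intuition can be made concrete: the electronic wavefunctions carry a factor ##e^{i\Lambda\phi}## and ##T^1_q(d_{el})## carries ##e^{iq\phi}##, so the azimuthal integral enforces ##\Lambda' = \Lambda + q##. A small sympy sketch of just the ##\phi## integral (a simplified model, not the full matrix element):

```python
from sympy import symbols, integrate, exp, I, pi

phi = symbols('phi', real=True)

def phi_integral(Lam, Lamp, q):
    # Azimuthal part of <Lambda'| T^1_q(d_el) |Lambda>: the integrand carries
    # exp(i (Lambda - Lambda' + q) phi) and vanishes unless Lambda' = Lambda + q.
    return integrate(exp(I * (Lam - Lamp + q) * phi), (phi, 0, 2*pi))

print(phi_integral(0, 0, 0))  # 2*pi : q = 0 survives within a Sigma state
print(phi_integral(0, 1, 0))  # 0    : q = 0 vanishes for Sigma -> Pi
print(phi_integral(0, 1, 1))  # 2*pi : q = +1 survives for Sigma -> Pi
```

So within one electronic state (##\Lambda' = \Lambda##) only ##q = 0## survives, while for ##\Delta\Lambda = \pm 1## transitions the ##q = \pm 1## components are exactly the ones that do not vanish.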

For the second question:

I am a bit confused. Starting from the second integral of 6.332 we have:

$$\int{\int{\psi_\nu\psi_e r \psi_e'\psi_\nu'}}$$

(I dropped some terms, complex conjugates etc. for simplicity). By adding the dependence on different variables we have:

$$\int{\int{\psi_\nu(R)\psi_e(r,R)\, r\, \psi_e'(r,R)\psi_\nu'(R)\, dr\, dR}}$$

which is equal to

$$\int\psi_\nu(R)\left(\int{\psi_e(r,R)\, r\, \psi_e'(r,R)\, dr}\right)\psi_\nu'(R)\, dR$$

If we denote ##f(R)=\int{\psi_e(r,R) r \psi_e'(r,R) dr}## the integral above becomes:

$$\int{\psi_\nu(R)f(R)\psi_\nu'(R)}dR$$

but this is not equal to $$(\int{\psi_\nu(R)\psi_\nu'(R)}dR)f(R),$$ since we can't just take ##f(R)## out of that integral: it depends explicitly on ##R##, and I don't see how the BO approximation would allow us to do that. The BO approximation allowed us to write the wavefunction as the product of the electronic and vibrational wavefunctions, but after that, doing these integrals is just math.
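The point that ##f(R)## cannot simply be pulled out can be illustrated numerically: with harmonic-oscillator vibrational states and a toy linear ##f(R)##, the ##R##-dependence produces a nonzero ##\langle 0|f(R)|1\rangle## even though ##\langle 0|1\rangle = 0##. An illustrative sketch (units ##\hbar=m=\omega=1##; the form of ##f## is made up):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

R = np.linspace(-8.0, 8.0, 4001)   # dimensionless displacement about R_eq = 0
dR = R[1] - R[0]

def ho(v, x):
    # Normalized harmonic-oscillator eigenfunction psi_v(x)
    c = np.zeros(v + 1); c[v] = 1.0
    n = 1.0 / np.sqrt(2.0**v * math.factorial(v) * np.sqrt(np.pi))
    return n * hermval(x, c) * np.exp(-x**2 / 2.0)

f = 1.0 + 0.05 * R                  # toy electronic integral f(R), linear in R
exact = np.sum(ho(0, R) * f * ho(1, R)) * dR           # = 0.05*<0|R|1> = 0.05/sqrt(2)
condon = f[R.size // 2] * np.sum(ho(0, R) * ho(1, R)) * dR   # = f(0)*<0|1> = 0
print(exact, condon)  # ~0.0354 vs ~0.0
```

Freezing ##f## at ##R_{eq}## kills this ##v = 0 \to 1## matrix element entirely, while the exact integral is finite, which is exactly the tension being discussed here.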
 
  • #69
BillKet said:
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

Unfortunately, I can't look at this until later tonight, but I need to revisit your derivation above more carefully. I see now that, the way you have it derived, the rotational part forces the electronic part to only ##q = 0## terms, which has to be wrong because inter-electronic transitions exist, and I guess this is the heart of your question. (Maybe the very first line is wrong.)

If you take a look at Eqn 6.331, the sum over all components is clearly still there for the non-rotational components. Read over Section 6.11.4, and revisit how Eqn 6.330 turns into 6.331, and I suspect the discrepancy will show up. i.e. the rotational part got completely separated.
 
  • #70
I think I can answer the second question for now. Eqn. 6.333 I believe has some sloppy notation. The second integral should maybe have a different symbol for ##R_\alpha## for the electronic part. It's meant to be at a single internuclear distance, usually the equilibrium distance. So you don't integrate over it. Some other texts might call this the "crude" BO approximation, and Eqn. 6.330 would be the usual BO approximation. Then there's also the Condon approximation which assumes there's no dependence on the nuclear coordinates at all.
 
  • #71
amoforum said:
I think I can answer the second question for now. Eqn. 6.333 I believe has some sloppy notation. The second integral should maybe have a different symbol for ##R_\alpha## for the electronic part. It's meant to be at a single internuclear distance, usually the equilibrium distance. So you don't integrate over it. Some other texts might call this the "crude" BO approximation, and Eqn. 6.330 would be the usual BO approximation. Then there's also the Condon approximation which assumes there's no dependence on the nuclear coordinates at all.
Thank you for your reply. I will look at the sections you suggested for question 1. For the second one, I agree that if that ##R_\alpha## is a constant, we can take the electronic integral out of the vibrational integral, but I am not totally sure why we can do this. If we are in the BO approximation, the electronic wavefunction should be a function of ##R##, for ##R## not constant, and that electronic integral would be a function of ##R##, too. But why would we assume it is constant? I understand the idea behind the BO approximation, that the electrons follow the nuclear motion almost instantaneously, but I don't get it here. It is as if the nuclei oscillate so fast that the electrons don't have time to catch up and they just see the average internuclear distance, which is kind of the opposite of the BO approximation. Could you help me understand this assumption that the electronic integral is constant? Thank you!
 
  • #72
BillKet said:
It is as if the nuclei oscillate so fast that the electrons don't have time to catch up and they just see the average inter-nuclear distance.
Can you elaborate on how you arrived at this interpretation? Why does it imply that the electrons can't catch up? The "crude" BO approximation gives you a dipole moment result at a specific ##R##. If you on average only observe a specific ##R_{eq}## (equilibrium distance), then the electronic integral at ##R_{eq}## will be your observed dipole moment.
 
  • #73
BillKet said:
@amoforum I looked into more details at the derivation in B&C in section 6.11.6 and I am actually confused a bit. Using spherical tensors for now, the transition between 2 electronic levels would be:

$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

Okay, here's my stab at the first question:

The derivation is completely written out in Lefebvre-Brion/Field in Section 6.1.2.1, and it looks like yours is consistent.

Now as to why there's only cos##\theta## in B&C's version. I suspect this is all because of Eqn. 6.330 in B&C. Notice that the rotational wavefunctions are spherical harmonics for both the initial and final states. The symmetric top wavefunctions reduce to spherical harmonics if ##\Omega = 0##. (See the text above Eqn 5.52 and reconcile that with Eqn. 5.145). This is a very constraining assumption, because that means both states must be ##\Omega = 0##, like ##^1\Sigma## states. (I guess we can constrain ourselves to ##M = 0## states too?) And if that's the case, then the 3j-symbol has ##\Omega = 0## and ##\Omega' = 0## in its bottom row, meaning ##q## must equal zero for it to not vanish.

So then the only thing I can't reconcile is the sentence after Eqn. 6.331 that says ##\Delta J = 0## is allowed. To me that's only true if you have symmetric top wavefunctions, because then you can have a change in both ##\Omega## and ##J## that adds to zero.

I wouldn't be surprised that this detail was glossed over, considering that the main point they wanted to get across in that section was the electronic-vibrational stuff like Franck-Condon factors and allowed electronic transitions in homonuclears.
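Incidentally, the 3j selection rules invoked above are quick to check numerically. A small sympy sketch (the ##J## values are arbitrary examples, not from B&C):

```python
from sympy.physics.wigner import wigner_3j

# For Omega = Omega' = 0 the bottom row is (0, q, 0), which sums to zero only for q = 0:
assert wigner_3j(1, 1, 2, 0, 1, 0) == 0   # q = 1 violates the m-sum rule -> vanishes
assert wigner_3j(0, 1, 1, 0, 0, 0) != 0   # q = 0, J = 0 -> J' = 1 survives

# With all projections zero, the symbol also vanishes unless J + 1 + J' is even,
# so Delta J = 0 between two Omega = 0 states is not allowed:
assert wigner_3j(1, 1, 1, 0, 0, 0) == 0
```

The last line is the ##\Delta J = 0## point: with spherical-harmonic (##\Omega = 0##) wavefunctions, the symbol with all projections zero vanishes for ##J' = J##.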
 
  • Informative
Likes Twigg
  • #74
amoforum said:
Can you elaborate on how you arrived at this interpretation? Why does it imply that the electrons can't catch up? The "crude" BO approximation gives you a dipole moment result at a specific ##R##. If you on average only observe a specific ##R_{eq}## (equilibrium distance), then the electronic integral at ##R_{eq}## will be your observed dipole moment.
I guess I don't understand what mathematical approximation allows you to assume that the electronic integral, which is a function of ##R##, can be approximated as constant. In the BO approximation you would use the adiabatic approximation, but I am not sure, formally, what allows you to do that here. Intuitively, if you have, say, the function ##\cos^2##, but your response time to this oscillation is too slow, what you see is the average over many periods, which is ##1/2##. Given that the electronic integral sees just the average internuclear distance, I assumed it is something similar i.e. the electrons see just an average of the internuclear distance.
 
  • #75
amoforum said:
Okay, here's my stab at the first question:

The derivation is completely written out in Lefebvre-Brion/Field in Section 6.1.2.1, and it looks like yours is consistent.

Now as to why there's only cos##\theta## in B&C's version. I suspect this is all because of Eqn. 6.330 in B&C. Notice that the rotational wavefunctions are spherical harmonics for both the initial and final states. The symmetric top wavefunctions reduce to spherical harmonics if ##\Omega = 0##. (See the text above Eqn 5.52 and reconcile that with Eqn. 5.145). This is a very constraining assumption, because that means both states must be ##\Omega = 0##, like ##^1\Sigma## states. (I guess we can constrain ourselves to ##M = 0## states too?) And if that's the case, then the 3j-symbol has ##\Omega = 0## and ##\Omega' = 0## in its bottom row, meaning ##q## must equal zero for it to not vanish.

So then the only thing I can't reconcile is the sentence after Eqn. 6.331 that says ##\Delta J = 0## is allowed. To me that's only true if you have symmetric top wavefunctions, because then you can have a change in both ##\Omega## and ##J## that adds to zero.

I wouldn't be surprised that this detail was glossed over, considering that the main point they wanted to get across in that section was the electronic-vibrational stuff like Franck-Condon factors and allowed electronic transitions in homonuclears.
Oh I see, that makes sense. Thanks a lot! I still have a quick question about the electronic integral. In order to have transitions between different electronic states, as you mentioned, terms of the form $$<\eta'|T_{\pm 1}^1(r)|\eta>$$ should not be zero. But this is equivalent to $$<\eta'|x|\eta>$$ not being zero (and same for ##y##). However, the electronic wavefunctions have cylindrical symmetry, so they should be even functions of x and y (here all the coordinates are in the intrinsic molecular (rotating) frame). Wouldn't in this case $$<\eta'|x|\eta>$$ be zero?
 
  • #76
BillKet said:
I guess I don't understand what mathematical approximation allows you to assume that the electronic integral, which is a function of ##R##, can be approximated as constant. In the BO approximation you would use the adiabatic approximation, but I am not sure, formally, what allows you to do that here. Intuitively, if you have, say, the function ##\cos^2##, but your response time to this oscillation is too slow, what you see is the average over many periods, which is ##1/2##. Given that the electronic integral sees just the average internuclear distance, I assumed it is something similar i.e. the electrons see just an average of the internuclear distance.
I'd say it's more of a physical approximation than a mathematical one. For low vibrational states (shorter internuclear distances), the dipole moment function is relatively flat over the range of ##R## that gets sampled. So just picking the equilibrium distance actually approximates it pretty well. At high vibrational states, where you'd sample large internuclear distances, the curve starts to get wobbly on the outskirts and the approximation breaks down. This makes sense, because you'd expect BO breakdown at higher vibrational energies.
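This flatness argument can be made concrete with a toy model (my own construction, not from B&C): expand a hypothetical dipole function ##\mu(R)## about ##R_{eq}## and average it over a harmonic ground-state density. The linear term averages to zero, so the vibrationally averaged dipole stays close to ##\mu(R_{eq})## as long as the curvature term is small:

```python
import numpy as np

# hypothetical dipole expansion about R_eq: mu(x) = mu0 + mu1*x + mu2*x^2, x = R - R_eq
mu0, mu1, mu2 = 1.0, 0.3, 0.1   # arbitrary example numbers
beta = 5.0                      # harmonic ground-state density ~ exp(-beta*x^2)

x = np.linspace(-5.0, 5.0, 200001)
w = np.exp(-beta * x**2)
w /= w.sum()                    # normalized weights on a uniform grid

avg = np.sum(w * (mu0 + mu1 * x + mu2 * x**2))

# <x> = 0 kills the linear term; only the curvature shifts the average:
assert abs(avg - (mu0 + mu2 / (2 * beta))) < 1e-6
```

At higher ##\nu## the sampled range widens and higher-order terms of ##\mu(R)## (and the anharmonicity of the potential) spoil this, matching the breakdown described above.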
 
  • #77
BillKet said:
Oh I see, that makes sense. Thanks a lot! I still have a quick question about the electronic integral. In order to have transitions between different electronic states, as you mentioned, terms of the form $$<\eta'|T_{\pm 1}^1(r)|\eta>$$ should not be zero. But this is equivalent to $$<\eta'|x|\eta>$$ not being zero (and same for ##y##). However, the electronic wavefunctions have cylindrical symmetry, so they should be even functions of x and y (here all the coordinates are in the intrinsic molecular (rotating) frame). Wouldn't in this case $$<\eta'|x|\eta>$$ be zero?

Time to look at some molecular orbitals! Only ##\Sigma## states have cylindrical symmetry, which as you've pointed out, means ##\Sigma## to ##\Sigma## transitions are not allowed, unless you go from ##\Sigma^+## to ##\Sigma^-##, the latter of which is not symmetric along the internuclear axis, but still only ##q = 0## transitions allowed.

Take a look at some ##\Pi## or ##\Delta## orbitals. They are absolutely not cylindrically symmetric, because they look like linear combinations of ##p## and ##d## orbitals.
 
  • #78
amoforum said:
Time to look at some molecular orbitals! Only ##\Sigma## states have cylindrical symmetry, which as you've pointed out, means ##\Sigma## to ##\Sigma## transitions are not allowed, unless you go from ##\Sigma^+## to ##\Sigma^-##, the latter of which is not symmetric along the internuclear axis, but still only ##q = 0## transitions allowed.

Take a look at some ##\Pi## or ##\Delta## orbitals. They are absolutely not cylindrically symmetric, because they look like linear combinations of ##p## and ##d## orbitals.
Thanks for the vibrational explanation! I understand what you mean now.

I should check molecular orbitals indeed, I kinda looked at the rotational part only. But if that is the case, then it makes sense. Thank you for that, too!
 
  • #79
I have a feeling that mathematically the approximation of taking ##\frac{\partial \mu_e}{\partial R} \approx 0## could be obtained from the BO approximation with the adiabatic theorem, taking the dynamical and Berry phases accumulated to be negligibly small, since the nuclei barely move over a transition lifetime. I could just be crazy though. I never put a lot of thought into it before.
 
  • #80
amoforum said:
Time to look at some molecular orbitals! Only ##\Sigma## states have cylindrical symmetry, which as you've pointed out, means ##\Sigma## to ##\Sigma## transitions are not allowed, unless you go from ##\Sigma^+## to ##\Sigma^-##, the latter of which is not symmetric along the internuclear axis, but still only ##q = 0## transitions allowed.

Take a look at some ##\Pi## or ##\Delta## orbitals. They are absolutely not cylindrically symmetric, because they look like linear combinations of ##p## and ##d## orbitals.
I came across this reading, which I found very useful in understanding the actual form of the Hund cases (not sure if this is derived in B&C, too), mainly equations 6.7 and 6.12. I was wondering how this would be expanded to the case of nuclear spin (call it ##I##). Given that in most cases the hyperfine interaction is very weak, we can assume that the basis we build including ##I## would be something similar to the Hund case b) coupling of ##N## and ##S## in 6.7 i.e. we would need to use Clebsch–Gordan coefficients.

So in a Hund case a, the total basis wavefunction after adding the nuclear spin, with ##F## the total angular momentum we would have:

$$|\Sigma, \Lambda, \Omega, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Sigma, \Lambda, \Omega, S, J, M_J>|I, M_I>$$

where ##|\Sigma, \Lambda, \Omega, S, J, M_J>## is a Hund case a function in the absence in nuclear spin. For Hund case b we would have something similar, but with different quantum numbers:

$$|\Lambda, N, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Lambda, N, S, J, M_J>|I, M_I>$$

with ## |\Lambda, N, S, J, M_J>## being a Hund case b basis function in the absence of nuclear spin. Is this right? Thank you!
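That construction is straightforward to sanity-check with sympy's Clebsch-Gordan machinery (a minimal sketch; the values ##J=1/2##, ##I=1/2##, ##F=1##, ##M_F=0## are just example quantum numbers):

```python
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

J, I, F, MF = S(1)/2, S(1)/2, S(1), S(0)

# coefficients <J MJ; I MI | F MF>, with MI fixed to MF - MJ
coeffs = []
MJ = -J
while MJ <= J:
    MI = MF - MJ
    if abs(MI) <= I:
        coeffs.append(CG(J, MJ, I, MI, F, MF).doit())
    MJ += 1

# the coupled state |F MF> built this way is normalized
norm = sum(c**2 for c in coeffs)
assert simplify(norm) == 1
```

The same loop with ##|N M_N \Lambda>|S M_S>## in place of ##|I M_I>## reproduces the Hund case b construction in the second equation.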
 
  • Like
Likes Twigg
  • #81
Yep, that's right. Nuclear angular momentum is just tacked on at the end of the hierarchy (though it need not be the smallest spectral splitting) with another addition of angular momenta.
 
  • Like
Likes BillKet and amoforum
  • #82
BillKet said:
I came across this reading, which I found very useful in understanding the actual form of the Hund cases (not sure if this is derived in B&C, too), mainly equations 6.7 and 6.12. I was wondering how this would be expanded to the case of nuclear spin (call it ##I##). Given that in most cases the hyperfine interaction is very weak, we can assume that the basis we build including ##I## would be something similar to the Hund case b) coupling of ##N## and ##S## in 6.7 i.e. we would need to use Clebsch–Gordan coefficients.

So in a Hund case a, the total basis wavefunction after adding the nuclear spin, with ##F## the total angular momentum we would have:

$$|\Sigma, \Lambda, \Omega, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Sigma, \Lambda, \Omega, S, J, M_J>|I, M_I>$$

where ##|\Sigma, \Lambda, \Omega, S, J, M_J>## is a Hund case a function in the absence in nuclear spin. For Hund case b we would have something similar, but with different quantum numbers:

$$|\Lambda, N, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Lambda, N, S, J, M_J>|I, M_I>$$

with ## |\Lambda, N, S, J, M_J>## being a Hund case b basis function in the absence of nuclear spin. Is this right? Thank you!

There's also a nice discussion in B&C Section 6.7.8 about the different ways ##I## couples in Hund's cases (a) and (b). For example, if ##I## couples to ##J## in Hund's case (b), that's actually called Hund's case (b##_{\beta J}##), which is one of the several ways it can couple.
 
  • Like
Likes BillKet
  • #83
Twigg said:
Thanks for the clarification, @amoforum! And @BillKet, I'd actually be curious to see what you come up with for the stark shift, if you find the time. I tried to spend some time learning this once but my coworkers weren't having it and sent me back to mixing laser dye :doh: No pressure, of course!

As far as the BO approximation, when we did spectroscopy we didn't really keep a detailed effective Hamiltonian, we would just re-measure the lifetimes and rotational constants in other vibrational states if there was a need to do so. I think in molecules where the BO violation is weak, you can take this kind of pragmatic approach. Then again, we only thought about molecules with highly diagonal Franck-Condon factors so we never really ventured above ##\nu = 2## or so.
@Twigg, here is my take at deriving the Stark shift for a Hund case a. In principle it is in the case of 2 very close ##\Lambda##-doubled levels (e.g. in a ##\Delta## state as in the ACME experiment) in a field pointing in the z-direction, ##E_z##. Please let me know if there is something wrong with my derivation.

$$H_{eff} = <n\nu J M \Omega \Sigma \Lambda S|-dE|n\nu J' M' \Omega' \Sigma' \Lambda' S'>=$$
$$<n\nu J M \Omega \Sigma \Lambda S|-E_z\Sigma_q\mathcal{D}_{0q}^1T_q^1(d)|n\nu J' M' \Omega' \Sigma' \Lambda' S'>=$$
$$-E_z\Sigma_q<n\nu J M \Omega \Sigma \Lambda S|\mathcal{D}_{0q}^1T_q^1(d)|n\nu J' M' \Omega' \Sigma' \Lambda' S'>=$$
$$-E_z\Sigma_q<n\nu|T_q^1(d)|n\nu><J M \Omega \Sigma \Lambda S|\mathcal{D}_{0q}^1| J' M' \Omega' \Sigma' \Lambda' S'>$$

For the ##<n\nu|T_q^1(d)|n\nu>##, given that we are in a given electronic state, the difference between ##\Lambda## and ##-\Lambda## can only be 0, 2, 4, 6..., (for a ##\Delta## state it would be 4) so the terms with ##q=\pm 1## will give zero. So we are left with

$$-E_z<n\nu|T_0^1(d)|n\nu><J M \Omega \Sigma \Lambda S|\mathcal{D}_{00}^1| J' M' \Omega' \Sigma' \Lambda' S'>$$

If we use the variable ##D## for ##<n\nu|T_0^1(d)|n\nu>##, which is usually measured experimentally as the intrinsic electric dipole moment of the molecule (I might have missed a complex conjugate in the Wigner matrix, as it is easier to type without it :D) we have:

$$-E_zD<\Sigma S||\Sigma' S'><\Lambda||\Lambda'><J M \Omega |\mathcal{D}_{00}^1| J' M' \Omega' >$$

From here we get that ##S=S'##, ##\Sigma=\Sigma'## and ##\Lambda = \Lambda'##, which also implies that ##\Omega = \Omega'##. By calculating that Wigner matrix expectation value we get:

$$-E_zD(-1)^{M-\Omega}
\begin{pmatrix}
J & 1 & J' \\
-\Omega & 0 & \Omega' \\
\end{pmatrix}
\begin{pmatrix}
J & 1 & J' \\
-M & 0 & M' \\
\end{pmatrix}
$$

This gives us that ##M=M'## and ##\Delta J = 0, \pm 1##. If we are in the ##\Delta J = \pm 1## case, we connect different rotational levels, which are much further away from each other relative to ##\Lambda##-doubling levels, so I assume ##\Delta J = 0##. The expression above becomes:

$$-E_zD(-1)^{J-\Omega}\frac{M\Omega}{J(J+1)}$$
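A quick sympy check of this diagonal matrix element, writing out the ##(2J+1)## normalization that was dropped along the way (a sketch for the example values ##J=M=\Omega=1##):

```python
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_3j

J, M, Om = 1, 1, 1

# diagonal matrix element of D^1*_{00} between symmetric-top states |J M Omega>
val = ((-1)**(M - Om) * (2*J + 1)
       * wigner_3j(J, 1, J, -M, 0, M)
       * wigner_3j(J, 1, J, -Om, 0, Om))

# equals M*Omega/(J(J+1)), i.e. 1/2 for these values
assert simplify(val - Rational(M * Om, J * (J + 1))) == 0
```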

Now, the parity eigenstates are linear combinations of Hund case a states:

$$|\pm> = \frac{|J M S \Sigma \Lambda \Omega>\pm|J M S -\Sigma -\Lambda -\Omega>}{\sqrt{2}}$$

If we build the 2x2 Hamiltonian in the space spanned by ##|\pm>## with the Stark shift included it will then look like this (I will assume the ACME case, with ##J=1## and ##\Omega = 1##):

$$
\begin{pmatrix}
\Delta & -E_zD M\\
-E_zD M & -\Delta \\
\end{pmatrix}
$$

Assuming the 2 levels are very close we have ##\Delta << E_zD## and by diagonalizing the matrix we get for the energies and eigenstates (with a very good approximation): ##E_{\pm} = \pm E_zD M## and ##\frac{|+>\pm|->}{\sqrt{2}}##. Hence the different parities are fully mixed so the system is fully polarized.
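A quick numeric check of this 2x2 picture (numpy sketch; ##\Delta## and the Stark matrix element ##W = E_z D M## are arbitrary numbers chosen with ##\Delta \ll W##):

```python
import numpy as np

Delta = 1e-3   # lambda-doubling half-splitting (arbitrary units)
W = 1.0        # Stark matrix element E_z*D*M, assumed >> Delta

# Hamiltonian in the parity basis {|+>, |->}
H = np.array([[Delta, -W],
              [-W, -Delta]])
evals, evecs = np.linalg.eigh(H)

# eigenvalues are -/+ sqrt(Delta^2 + W^2), ~ -/+ |E_z D M| here
assert np.allclose(np.abs(evals), np.hypot(Delta, W))

# eigenvectors are (|+> +/- |->)/sqrt(2) up to O(Delta/W): parities fully mixed
assert np.allclose(np.abs(evecs), 1/np.sqrt(2), atol=1e-2)
```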
 
  • Love
Likes Twigg
  • #84
Thank you! I really appreciate it! Your derivation helped put a lot of puzzle pieces together for me.

I was able to get the polarizability out of your 2x2 Hamiltonian. It has eigenvalues $$E_{\Lambda,M} = \frac{\Lambda}{|\Lambda|} \sqrt{\Delta^2 + (E_z DM)^2} \approx \frac{\Lambda}{|\Lambda|} (\Delta + \frac{1}{2} \frac{E_z ^2 D^2 M^2}{\Delta} +O((\frac{E_z DM}{\Delta})^4))$$
From this, polarizability is $$\alpha = \frac{\Lambda}{|\Lambda|}\frac{D^2 M^2}{2\Delta}$$, since the polarizability is associated with the energy shift that is quadratic in electric field. This seems to be in full agreement with what that review paper was saying (one of these days, I'll find that paper again).
BillKet said:
I might have missed a complex conjugate in the Wigner matrix, as it is easier to type without it :D
I can never reproduce something I derived using Wigner matrices because of all the little mistakes here and there. They're just cursed. I'd sell my soul for a simpler formalism :oldbiggrin:

By the way, I found a thesis from the HfF+ eEDM group that derives the Stark shift, and it exactly agrees with your expression for no hyperfine coupling (##F=J## and ##I=0##). Nice work!
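For what it's worth, the expansion step can be checked symbolically (sympy sketch; ##x## stands for the product ##E_z D M##):

```python
from sympy import symbols, sqrt, simplify

Delta, x = symbols('Delta x', positive=True)   # x stands for E_z*D*M
E_upper = sqrt(Delta**2 + x**2)                # upper eigenvalue of the 2x2 Stark matrix

# expand for x << Delta: the quadratic-in-field term carries the polarizability
expansion = E_upper.series(x, 0, 4).removeO()
assert simplify(expansion - (Delta + x**2/(2*Delta))) == 0
```

The coefficient of ##x^2##, namely ##1/(2\Delta)##, is exactly the quadratic Stark coefficient quoted above.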
 
  • #85
Twigg said:
Thank you! I really appreciate it! Your derivation helped put a lot of puzzle pieces together for me.

I was able to get the polarizability out of your 2x2 Hamiltonian. It has eigenvalues $$E_{\Lambda,M} = \frac{\Lambda}{|\Lambda|} \sqrt{\Delta^2 + (E_z DM)^2} \approx \frac{\Lambda}{|\Lambda|} (\Delta + \frac{1}{2} \frac{E_z ^2 D^2 M^2}{\Delta} +O((\frac{E_z DM}{\Delta})^4))$$
From this, polarizability is $$\alpha = \frac{\Lambda}{|\Lambda|}\frac{D^2 M^2}{2\Delta}$$, since the polarizability is associated with the energy shift that is quadratic in electric field. This seems to be in full agreement with what that review paper was saying (one of these days, I'll find that paper again).
I can never reproduce something I derived using Wigner matrices because of all the little mistakes here and there. They're just cursed. I'd sell my soul for a simpler formalism :oldbiggrin:

By the way, I found a thesis from the HfF+ eEDM group that derives the Stark shift, and it exactly agrees with your expression for no hyperfine coupling (##F=J## and ##I=0##). Nice work!
I am glad it's right! :D Please send me the link to that paper when you have some time. About the polarization, I am a bit confused. Based on that expression it looks like it can go to infinity, shouldn't it be between 0 and 1 (I assumed that if you bring the 2 levels to degeneracy you would get a polarization of 1)?

Side note, unrelated to EDM calculations: I am trying to derive different expressions in my free time, just to make sure I understood well all the details of the diatomic molecule formalism. It's about this term in the Hamiltonian due to parity violation. For example, equation 1 in this paper (I just chose this one because I read it recently, but it is the same formula in basically all papers about parity violation) gets turned into equation 3 after applying the effective Hamiltonian formalism. I didn't get a chance to look closely into it, but if you have any suggestions about going from 1 to 3, or any paper that derives it (their references don't help much), please let me know. I guess that cross product comes from the Dirac spinors somehow, but it doesn't look obvious to me.
 
  • #86
Here's that thesis. I was looking at equation 6.11 on page 103. Also, I used ##\Lambda## instead of ##\Omega## in my last post, just a careless error.

I don't have APS access right now, so I can't see the Victor Flambaum paper that is cited for that Hamiltonian. Just looking at the form of that Hamiltonian, the derivation might have little to do with the content of Brown and Carrington because it's talking about spin perpendicular to the molecular axis.

If you're reading papers on parity violation, this one on Schiff's theorem is excellent if you can get access. I used to have a copy but lost it. Also, talk about a crazy author list :oldlaugh: What is this, a crossover episode?
 
  • #87
Just noticed I missed your question about polarizability. I'm not sure why it would be limited between 0 and 1. Are you thinking of spin polarization? What I mean here is electrostatic polarizability ##\vec{d}_{induced} = \alpha \vec{E}##. It only appears to go to infinity as ##\Delta \rightarrow 0## because the series expansion I did assumed ##\Delta \gg E_z DM##. The reason for this inequality is that polarizability is usually quoted for ##E_z \rightarrow 0## by convention.
 
  • #89
So I tried to derive the Zeeman effect for a Hund case b, with the nuclear spin included in the wavefunction. The final result seems a bit too simple, though. I will look only at the ##S\cdot B## term and ignore the ##g\mu_B## prefactor. For a Hund case b, the wavefunction with nuclear spin is:

$$|NS\Lambda J I F M_F> = \Sigma_{M_J}\Sigma_{M_I}<JM_JIM_I|FM_F>|NS\Lambda J M_J>|IM_I>$$

And we also have:

$$|NS\Lambda J M_J> = \Sigma_{M_N}\Sigma_{M_S}<NM_NSM_S|JM_J>|NM_N\Lambda>|SM_S>$$

where ##<JM_JIM_I|FM_F>## and ##<NM_NSM_S|JM_J>## are Clebsch-Gordan coefficients. Now, calculating the matrix element we have:

##<NS\Lambda J I F M_F|S\cdot B|N'S'\Lambda' J' I' F' M_F'>##

I will assume that the magnetic field is in the z direction. Also, given that we are in Hund case b, we can look at the spin quantized in the lab frame, so we don't need Wigner rotation matrices, and we get ##S\cdot B = T_{p=0}^1(S)T_{p=0}^1(B) = B_zS_z##, where both ##B_z## and ##S_z## are defined in the lab frame, with ##S_z## being an operator such that ##S_z|SM_S> = M_S|SM_S>##. So we have:

$$<NS\Lambda J I F M_F|B_zS_z|N'S'\Lambda' J' I' F' M_F'>=$$

$$B_z (\Sigma_{M_J}\Sigma_{M_I}<JM_JIM_I|FM_F><NS\Lambda J M_J|<IM_I|)S_z(\Sigma_{M_J'}\Sigma_{M_I'}<J'M_J'I'M_I'|F'M_F'>|N'S'\Lambda' J' M_J'>|I'M_I'>)$$

As ##S_z## doesn't act on the nuclear spin we get:

$$B_z \Sigma_{M_J}\Sigma_{M_I}\Sigma_{M_J'}\Sigma_{M_I'}<JM_JIM_I|FM_F><J'M_J'I'M_I'|F'M_F'><IM_I||I'M_I'><NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'> = $$

$$B_z \Sigma_{M_J}\Sigma_{M_J'}\Sigma_{M_I}<JM_JIM_I|FM_F><J'M_J'IM_I|F'M_F'><NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'> = $$

(basically we got ##I=I'## and ##M_I = M_I'##). For the term ##<NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'>## we get:

$$(\Sigma_{M_N}\Sigma_{M_S}<NM_NSM_S|JM_J><NM_N\Lambda|<SM_S|)S_z(\Sigma_{M_N'}\Sigma_{M_S'}<N'M_N'S'M_S'|J'M_J'>|N'M_N'\Lambda'>|S'M_S'>)$$

As ##S_z## doesn't act on the ##|NM_N\Lambda>## part we have:

$$\Sigma_{M_N}\Sigma_{M_S}\Sigma_{M_N'}\Sigma_{M_S'}<NM_NSM_S|JM_J><N'M_N'S'M_S'|J'M_J'><NM_N\Lambda||N'M_N'\Lambda'><SM_S|S_z|S'M_S'>$$

From which we get ##N=N'##, ##M_N=M_N'## and ##\Lambda=\Lambda'##. So we have:

$$\Sigma_{M_N}\Sigma_{M_S}\Sigma_{M_S'}<NM_NSM_S|JM_J><NM_NS'M_S'|J'M_J'><SM_S|S_z|S'M_S'> = $$

$$\Sigma_{M_N}\Sigma_{M_S}\Sigma_{M_S'}<NM_NSM_S|JM_J><NM_NS'M_S'|J'M_J'>M_S'<SM_S||S'M_S'> = $$

And now we get that ##S=S'## and ##M_S=M_S'## so we have:

$$\Sigma_{M_N}\Sigma_{M_S}<NM_NSM_S|JM_J><NM_NSM_S|J'M_J'>M_S = $$

$$\delta_{JJ'}\delta_{M_JM_J'}M_S$$

So we also have ##J=J'## and ##M_J=M_J'##. Plugging in in the original equation, which was left at:

$$B_z \Sigma_{M_J}\Sigma_{M_J'}\Sigma_{M_I}<JM_JIM_I|FM_F><J'M_J'IM_I|F'M_F'><NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'> = $$

$$B_z \Sigma_{M_J}\Sigma_{M_I}<JM_JIM_I|FM_F><JM_JIM_I|F'M_F'>M_S = $$

$$B_z M_S \delta_{FF'}\delta_{M_FM_F'}$$

So in the end we get ##F=F'## and ##M_F=M_F'##, so basically all quantum numbers need to be equal and the matrix element is ##B_zM_S##. It looks a bit too simple and too intuitive. I've seen it mentioned in B&C and many other readings that Hund case b calculations are more complicated than Hund case a ones. This was indeed quite tedious, but the result looks like what I would expect without doing these calculations (for example, for the EDM calculation before, I wouldn't see the ##\frac{1}{J(J+1)}## scaling as obvious). Also, is there a way to get to this result more easily than what I did, i.e. figure out that ##B_zM_S## should be the answer without doing all the math? Thank you!
 
  • #90
I haven't gone through your derivation yet, but yes, there's a way easier method, which is how B&C derive all their matrix elements.

Look at equation 11.3. Its derivation is literally three steps, by invoking only two equations (5.123 first and 5.136 twice, once for ##F## and once for ##J##). The whole point of using Wigner symbols is to avoid the Clebsch-Gordan coefficient suffering.

By the way, almost every known case is in the B&C later chapters for you to look up. Every once in a while it's not. It happened to me actually, but I was able to derive what I needed using the process above.
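To illustrate that route concretely, here is a sketch (my own construction, using sympy's Wigner functions and the standard Wigner-Eckart plus composite-system reduction formulas, in the spirit of B&C's 5.123 and 5.136) that evaluates ##<N S J M_J|S_z|N S J M_J>## two ways, once by brute-force Clebsch-Gordan expansion and once through 3j/6j symbols, for the example values ##N=1##, ##S=1/2##, ##J=3/2##, ##M_J=1/2##:

```python
from sympy import S, sqrt, simplify
from sympy.physics.quantum.cg import CG
from sympy.physics.wigner import wigner_3j, wigner_6j

N, Sq, J, MJ = 1, S(1)/2, S(3)/2, S(1)/2

# Route 1: expand |(N S) J MJ> in Clebsch-Gordan coefficients and weight by MS
direct = 0
for MN in range(-N, N + 1):
    MS = MJ - MN
    if abs(MS) <= Sq:
        c = CG(N, MN, Sq, MS, J, MJ).doit()
        direct += c**2 * MS

# Route 2: Wigner-Eckart theorem plus a 6j reduction onto <S||T^1(S)||S>
red_S = sqrt(Sq * (Sq + 1) * (2 * Sq + 1))
red_J = (-1)**(N + Sq + J + 1) * (2 * J + 1) * wigner_6j(Sq, J, N, J, Sq, 1) * red_S
route2 = (-1)**(J - MJ) * wigner_3j(J, 1, J, -MJ, 0, MJ) * red_J

# both routes give 1/6 for these quantum numbers
assert simplify(direct - route2) == 0
```

Phase conventions for the reduced matrix element vary between texts, so treat the signs here as an assumption; the point is that a couple of lines of Wigner symbols replace the nested Clebsch-Gordan sums.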
 
  • Like
Likes BillKet
  • #91
amoforum said:
I haven't gone through your derivation yet, but yes, there's a way easier method, which is how B&C derive all their matrix elements.

Look at equation 11.3. Its derivation is literally three steps, by invoking only two equations (5.123 first and 5.136 twice, once for ##F## and once for ##J##). The whole point of using Wigner symbols is to avoid the Clebsch-Gordan coefficient suffering.

By the way, almost every known case is in the B&C later chapters for you to look up. Every once in a while it's not. It happened to me actually, but I was able to derive what I needed using the process above.
Thanks a lot! This really makes things a lot easier!

I have a few questions about electronic and vibrational energy upon isotopic substitution. For now I am interested in the changes in mass, though I understand that changes in the size of the nucleus can also add to the isotope effects.

We obtain the electronic energy (here I am referring mainly to equation 7.183 in B&C) by solving the electrostatic SE with fixed nuclei. Once we obtain these energies, their value doesn't change anymore, regardless of the order of perturbation theory we go to in the effective Hamiltonian. The vibrational and spin-rotational energies will change, but this baseline energy of the electronic state stays the same. When getting this energy, as far as I can tell, all we care about is the distance between the electrons and nuclei, as well as their charges. We also care about the electron mass, but not the nuclear one. This means that the electronic energy shouldn't change upon isotopic substitution, which is reflected in equation 7.199. However, in equation 7.207 we have a dependence on the mass of the nuclei. From the paragraphs before, the main reason for this is the breaking of the BO approximation. However, this breaking of the BO approximation, and hence the mixing of electronic levels, is reflected only in the effective Hamiltonian. As I mentioned above, the electronic energy should always be the same as its zeroth-order value. Where does the mass dependence of the electronic energy ##Y_{00}## in equation 7.207 come from?

For vibrational energy, we have equation 7.184. I assume that the ##G^{(0)}_{\eta\nu}## term has the isotopic dependence given by 7.199. Do the corrections in 7.207 come from the other 2 terms: ##V^{ad}_{\eta\nu}## and ##V^{spin}_{\eta\nu}##? And if so, is this because these terms can also be expanded as in equation 7.180? For example, from ##V^{ad}_{\eta\nu}## we might get a term of the form ##x_{ad}(\nu+1/2)## so overall the first term in the vibrational expansion becomes ##(\omega_{\nu e}+x_{ad})(\nu+1/2)## which doesn't have the nice expansion in 7.199 anymore but the more complicated one in 7.207? Is this right? Also do you have any recommendations for readings that go into a bit more details about this isotopic substitution effects? Thank you!
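For the mass-only part, the leading Dunham scaling in 7.199 (##Y_{kl} \propto \mu^{-(k/2+l)}## with reduced mass ##\mu##) is easy to put into code. A sketch with made-up numbers (`scale_dunham` is my own hypothetical helper, not a standard routine):

```python
def scale_dunham(Y_kl, k, l, mu, mu_prime):
    """Mass-only isotope scaling of a Dunham coefficient: Y_kl ~ mu^-(k/2 + l)."""
    return Y_kl * (mu / mu_prime) ** (k / 2 + l)

# example: omega_e = Y_10 and B_e = Y_01 (arbitrary numbers, in cm^-1)
mu, mu_prime = 1.0, 2.0
omega_prime = scale_dunham(500.0, 1, 0, mu, mu_prime)  # scales as sqrt(mu/mu')
B_prime = scale_dunham(10.0, 0, 1, mu, mu_prime)       # scales as mu/mu'
```

The corrections in 7.207 are then deviations from this simple power law.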
 
  • #92
I'm much less familiar with vibrational corrections. And as you've probably noticed, it's not the main focus of B&C either. A couple places to start would be:

1. Dunham's original paper: http://jupiter.chem.uoa.gr/thanost/papers/papers4/PR_41(1932)721.pdf
It shows the higher order corrections that are typically ignored in all those ##Y_{ij}## coefficients.

2. In that section B&C refer to Watson's paper: https://doi.org/10.1016/0022-2852(80)90152-6
I don't have access to it, but it seems highly relevant to this discussion.
 
  • Like
Likes BillKet
  • #93
amoforum said:
I'm much less familiar with vibrational corrections. And as you've probably noticed, it's not the main focus of B&C either. A couple places to start would be:

1. Dunham's original paper: http://jupiter.chem.uoa.gr/thanost/papers/papers4/PR_41(1932)721.pdf
It shows the higher order corrections that are typically ignored in all those ##Y_{ij}## coefficients.

2. In that section B&C refer to Watson's paper: https://doi.org/10.1016/0022-2852(80)90152-6
I don't have access to it, but it seems highly relevant to this discussion.
Thanks for the references, they helped a lot. I was wondering if you know of any papers that extended this isotope shift analysis to molecules that are not closed shell. For example the isotope dependence of spin-orbit, spin-rotation or lambda doubling parameters. I see in B&C that they mention that this hasn't been done, but the book was written in 2003 and perhaps someone did the calculations meanwhile.
 
  • #94
I looked a bit at some actual molecular systems and I have some questions.

1. In some cases, a given electronic state, say a ##^2\Pi## state, is far from all other electronic states except one, which is very close (sometimes even lying between the 2 spin-orbit components, ##^2\Pi_{1/2}## and ##^2\Pi_{3/2}##), and the rotational energy is very small. Would that be more of a Hund case a or c?

2. I noticed that for some ##^2\Pi## states, some molecules have the electronic energy difference between this state and the other states bigger than the spin-orbit coupling and the rotational energy, which would make them quite confidently a Hund case a. However, the spin-orbit coupling is bigger than the vibrational energy splitting of both ##^2\Pi_{1/2}## and ##^2\Pi_{3/2}##. How would I do the vibrational averaging in this case? Wouldn't the higher order perturbative corrections to the spin-orbit coupling diverge? Would I need to add the SO Hamiltonian to the zeroth order Hamiltonian, together with the electronic energy?

3. In Hund case c, will my zeroth order Hamiltonian (and I mean how it is usually done in the literature) be ##H_{SO}## instead of the electronic one, ##H_e##, or do I include both of them, ##H_e+H_{SO}##? And in this case, if the spin-orbit coupling is hidden in the new effective ##V(R)##, how can I extract the spin-orbit constant? Won't it be mixed with the electronic energy?
 
  • #95
BillKet said:
... One question I have is: is this Hamiltonian (with the centrifugal corrections) correct for any ##J## in a given vibrational level? I have seen in several papers mentioned that this is correct for low values of ##J## and I am not sure why would this not hold for any ##J##. I understand that for higher ##J## the best Hund case might change, but why would the Hamiltonian itself change? ...
Greetings,

I am late to this party, so please forgive me if I have missed some of the discussion given a rather quick read of a complex topic.

I have not seen any explicit comments regarding Rydberg-Rydberg or Rydberg-valence perturbations (interactions). Such interactions certainly influence observed rotationally resolved spectra, often in very subtle and unexpected ways. Lefebvre-Brion and Field is the most comprehensive discussion of such perturbations of which I am aware.

Just another detail to keep you up at night. ES
 
  • #96
I've not heard of these perturbations. Are we talking Rydberg as in electrons that are excited to >>10th electronic state? I knew Rydberg molecules are a thing, but I always assumed that stuff was limited to alkali-alkali dimers.
 
  • #97
Twigg said:
I've not heard of these perturbations. Are we talking Rydberg as in electrons that are excited to >>10th electronic state? I knew Rydberg molecules are a thing, but I always assumed that stuff was limited to alkali-alkali dimers.
Greetings,

If you have an unpaired outer electron, for example as in ##\textup{NO}##, there is an associated set of Rydberg states corresponding to excitations of that unpaired outer electron. The valence states correspond to excitations of an inner, core electron. Thus doublet states ##(S= 1/2)## would have a set of Rydberg states.

The perturbations occur, for example, when two rotational transitions associated with different electronic states are fortuitously nearly degenerate. A Fortrat diagram, ##E= f\left ( J \right )##, will show small discontinuities resulting from mixing of the nearly degenerate rotational states. Figuring out the details can be a challenge! ES
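A toy numerical model makes the picture concrete. All constants below are invented, chosen only so that the two rotational term series cross near ##J \approx 10##: diagonalizing a 2x2 interaction at each ##J## shows the level shift peaking sharply at the near-degeneracy, which is exactly the "small discontinuity" one would see in a Fortrat diagram.

```python
import numpy as np

# Toy model of a local perturbation between two rotational term series.
# All constants are invented, chosen only so the series cross near J ~ 10.
T1, B1 = 0.0, 1.05     # term origin and rotational constant, state 1 (cm^-1)
T2, B2 = 12.0, 0.95    # state 2; the series cross where the energies match
W = 0.4                # off-diagonal interaction matrix element (cm^-1)

shifts = []
for J in range(25):
    x = J * (J + 1)
    E1, E2 = T1 + B1 * x, T2 + B2 * x
    lower = np.linalg.eigvalsh(np.array([[E1, W], [W, E2]]))[0]
    # displacement of the lower level from its unperturbed position
    shifts.append(min(E1, E2) - lower)

J_max = int(np.argmax(shifts))
print(J_max, max(shifts))  # the shift peaks at the near-degenerate J
```

Away from the crossing the shift falls off as ##W^2/\Delta E##, which is why such perturbations are "local" in ##J##.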
 
  • #98
Hello again. So I read more molecular papers meanwhile, including cases where perturbation theory wouldn't work and I want to clarify a few things. I would really appreciate your input @Twigg @amoforum. For simplicity assume we have only 2 electronic states, ##\Sigma## and ##\Pi## and each of them has only 1 vibrational level (this is just to be able to write down full equations). The Hamiltonian (full, not effective) in the electronic space is:

$$
\begin{pmatrix}
a(R) & c(R) \\
c(R) & b(R)
\end{pmatrix}
$$

where, for example ##a(R) = <\Sigma |a(R)|\Sigma >## and it contains stuff like ##V_{\Sigma}(R)##, while the off diagonal contains stuff like ##<\Sigma |L_-|\Pi >##. If we diagonalize this explicitly, we get, say, for the ##\Sigma## state eigenvalue:

$$\frac{1}{2}[a+b+\sqrt{(a-b)^2+4c^2}]$$

Assuming that ##c \ll |a-b|## (and taking ##a>b## so that ##a-b## can be factored out of the square root), we can do a first-order Taylor expansion and we get:

$$\frac{1}{2}[a+b+(a-b)\sqrt{1+\frac{4c^2}{(a-b)^2}}] = $$

$$\frac{1}{2}[a+b+(a-b)(1+\frac{2c^2}{(a-b)^2})] = $$

$$\frac{1}{2}[2a+\frac{2c^2}{a-b}] = $$

$$a+\frac{c^2}{(a-b)} $$
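(This two-level expansion is easy to verify numerically; the numbers below are arbitrary, chosen so that ##c## is much smaller than ##a-b##.)

```python
import numpy as np

# Arbitrary toy numbers with c much smaller than the separation a - b.
a, b, c = 10.0, 2.0, 0.3
H = np.array([[a, c], [c, b]])
exact = np.linalg.eigvalsh(H)[-1]   # upper eigenvalue, correlating with a
pt2 = a + c ** 2 / (a - b)          # second-order expression from the expansion
print(exact, pt2)                   # agree to ~1e-5 for these numbers
```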

Here by ##c^2## I actually mean the product of the two off-diagonal terms, i.e. ##<\Sigma|c(R)|\Pi><\Pi|c(R)|\Sigma>##. This is basically the second-order PT correction presented in B&C. So I have a few questions:

1. Is this effective Hamiltonian in practice a diagonalization + Taylor expansion in the electronic space, or does this happen to be true only in the 2x2 case above?

2. I am a bit confused about how to proceed in a derivation similar to the one above if I account for the vibrational states, too. If I continue from the result above and average over the vibrational states, I get, for the ##\Sigma## state:

$$<0_\Sigma|(a(R)+\frac{c(R)^2}{(a(R)-b(R))})|0_\Sigma> = $$

$$<0_\Sigma|a(R)|0_\Sigma>+<0_\Sigma|\frac{c(R)^2}{(a(R)-b(R))}|0_\Sigma> $$

where ##|0_\Sigma> ## is the vibrational level of the ##\Sigma## state (again I assume just one vibrational level per electronic state). This would be similar to the situation in B&C for the rotational constant in equation 7.87. However, if I include the vibration averaging before diagonalizing I would have this Hamiltonian:

$$
\begin{pmatrix}
<0_\Sigma|a(R)|0_\Sigma> & <0_\Sigma|c(R)|0_\Pi> \\
<0_\Pi|c(R)|0_\Sigma> & <0_\Pi|b(R)|0_\Pi>
\end{pmatrix}
$$

If I do the diagonalization and Taylor expansion as before, I end up with this:

$$<0_\Sigma|a(R)|0_\Sigma>+\frac{<0_\Sigma|c(R)|0_\Pi><0_\Pi|c(R)|0_\Sigma>}{(<0_\Sigma|a(R)|0_\Sigma>-<0_\Pi|b(R)|0_\Pi>)} $$

But this is not the same as above. For the term ##<0_\Sigma|c(R)|0_\Pi><0_\Pi|c(R)|0_\Sigma>##, I can assume that ##|0_\Pi><0_\Pi|## is the identity (with many vibrational states this would be a sum over them, spanning the whole vibrational manifold of the ##\Pi## state), so I get ##<0_\Sigma|c(R)^2|0_\Sigma>##, but for the two expressions to be equal I would need:

$$\frac{<0_\Sigma|c(R)^2|0_\Sigma>}{(<0_\Sigma|a(R)|0_\Sigma>-<0_\Pi|b(R)|0_\Pi>)} =
<0_\Sigma|\frac{c(R)^2}{(a(R)-b(R))}|0_\Sigma>
$$

Which doesn't seem to be true in general (in the first expression the ##\Pi##-state energy enters through its own vibrational average ##<0_\Pi|b(R)|0_\Pi>##, while in the second ##b(R)## is averaged over the ##\Sigma## vibrational wavefunction). Again, just to be clear, by, for example, ##<0_\Sigma|a(R)|0_\Sigma>## I mean ##<0_\Sigma|<\Sigma|a(R)|\Sigma>|0_\Sigma>##, i.e. electronic + vibrational averaging.

What am I doing wrong? Shouldn't the two approaches, i.e. vibrational averaging before or after the diagonalization + Taylor expansion, give exactly the same results?
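For what it's worth, a toy numerical model reproduces the discrepancy in the question. All curve shapes and parameters below are invented purely for illustration, and the same vibrational wavefunction is used for both electronic states to keep the comparison simple: averaging after the diagonalization + expansion does not equal averaging first and then truncating to a single vibrational level per state.

```python
import numpy as np

# Toy model on a grid in the internuclear distance R. All curve shapes and
# parameters below are invented purely for illustration.
R = np.linspace(0.5, 3.5, 4001)
dR = R[1] - R[0]
# common harmonic-oscillator-like ground-state vibrational wavefunction,
# used for both electronic states to keep the comparison simple
psi = np.exp(-((R - 2.0) ** 2) / (2 * 0.2 ** 2))
psi /= np.sqrt(np.sum(psi ** 2) * dR)

a = 5.0 + 2.0 * (R - 2.0) ** 2   # Sigma-state curve a(R)
b = 1.0 + 1.5 * (R - 2.0) ** 2   # Pi-state curve b(R), well separated from a(R)
c = 0.3 * R                      # R-dependent off-diagonal coupling c(R)

avg = lambda f: np.sum(psi * f * psi) * dR  # vibrational expectation value

# (i) diagonalize + expand first, vibrationally average second:
first = avg(a + c ** 2 / (a - b))
# (ii) vibrationally average first, then apply the same second-order formula,
#      truncated to a single vibrational level per electronic state:
second = avg(a) + avg(c) ** 2 / (avg(a) - avg(b))

print(first, second)  # close, but not equal
```

The two numbers agree to a few parts in 10^4 here but are not identical; the residual is exactly the single-level truncation of the ##\Pi## vibrational manifold discussed above.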
 
  • #99
BillKet said:
Thank you for your reply. I will look at the sections you suggested for question 1. For the second one, I agree that if ##R_\alpha## is a constant we can take the electronic integral out of the vibrational integral, but I am not totally sure why we can do this. In the BO approximation the electronic wavefunction is a function of ##R## (for ##R## not constant), so that electronic integral should be a function of ##R##, too. Why would we assume it is constant? I understand the idea behind the BO approximation, that the electrons follow the nuclear motion almost instantaneously, but I don't get it here. It is as if the nuclei oscillate so fast that the electrons don't have time to catch up and just see the average internuclear distance, which is kind of the opposite of the BO approximation. Could you help me understand this assumption that the electronic integral is constant? Thank you!
You should read the original Born-Oppenheimer paper some day.
The point is that the electronic wavefunction varies over distances of ##O(1)## (in atomic units), while the nuclear wavefunctions are localized within a distance of ##O((m_\mathrm{e}/M_\mathrm{nuc})^{1/4})## around the equilibrium distance. So you can expand the electronic matrix elements in a power series in ##R-R_0##. The vibrational matrix elements of ##(R-R_0)^n## then scale as ##O((m_\mathrm{e}/M_\mathrm{nuc})^{n/4})##, so usually all terms beyond ##n=0## are negligible.
I think this expansion of the electronic dipole moment is called Herzberg-Teller coupling.
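A minimal harmonic-oscillator illustration of that scaling argument (units chosen so the electronic energy scale and force constant are of order 1; the masses are arbitrary): the ground-state spread shrinks as ##M^{-1/4}##, so each extra power of ##(R-R_0)## suppresses the vibrational matrix element further for heavy nuclei.

```python
import numpy as np

# Harmonic-oscillator illustration of the Born-Oppenheimer scaling argument.
# Units chosen so the electronic energy scale and force constant are order 1;
# the masses are arbitrary.
hbar, k = 1.0, 1.0
masses = [1.0, 100.0, 10000.0]
spreads = []
for M in masses:
    omega = np.sqrt(k / M)              # vibrational frequency ~ M^(-1/2)
    x0 = np.sqrt(hbar / (M * omega))    # ground-state spread ~ M^(-1/4)
    spreads.append(x0)
    print(M, x0)
# each factor of 100 in the mass shrinks the spread by 100^(1/4) = 10^(1/2),
# so matrix elements of (R - R0)^n fall off quickly with n for heavy nuclei
```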
 
  • #100
DrDu said:
You should read the original Born-Oppenheimer paper some day.
The point is that the electronic wavefunction varies over distances of ##O(1)## (in atomic units), while the nuclear wavefunctions are localized within a distance of ##O((m_\mathrm{e}/M_\mathrm{nuc})^{1/4})## around the equilibrium distance. So you can expand the electronic matrix elements in a power series in ##R-R_0##. The vibrational matrix elements of ##(R-R_0)^n## then scale as ##O((m_\mathrm{e}/M_\mathrm{nuc})^{n/4})##, so usually all terms beyond ##n=0## are negligible.
I think this expansion of the electronic dipole moment is called Herzberg-Teller coupling.
I am not sure how this answers my question. I agree with what you said about the perturbative expansion; it is basically what I used in my derivation via the Taylor series. My question was why the two methods I used (the two different perturbative expansions) don't give the same result. Also, I think Herzberg-Teller coupling doesn't apply to diatomic molecules, no?
 
