Effective molecular Hamiltonian and Hund cases

The discussion focuses on the effective Hamiltonian for diatomic molecules, emphasizing the relationship between theoretical models and experimental data. It explains that the effective Hamiltonian is constructed by solving the Schrödinger equation at fixed internuclear distances and incorporating perturbative terms while maintaining a block-diagonal structure in the electronic levels. The conversation highlights the importance of diagonalizing the Hamiltonian matrix to accurately determine energy levels and the role of off-diagonal terms in fitting experimental data. Additionally, it addresses the challenges of applying the effective Hamiltonian at higher rotational quantum numbers J, where new interactions may need to be considered. Ultimately, the effective Hamiltonian serves as a foundational tool for connecting theoretical predictions with experimental observations in molecular spectroscopy.
  • #31
Twigg said:
@BillKet Nothing to apologize for! You're giving me an excuse to clear out the cobwebs in my memory. Seriously though, this stuff is hard and there's nothing to be ashamed of in asking a lot of questions.

##N^2## isn't really diagonal at the electronic level. More importantly, the electronic state isn't just ##\eta##. You can't have an ##X^1\Pi## state without the ##\Pi## (i.e., ##|\Lambda|=1##). The reason S is in there too is because, as B&C note at the bottom of page 317, the parity of the electronic orbital determines the value of S, same as with atomic orbitals. So the overall 0-th order part of the ket is ##|\eta,\Lambda\rangle##, not just ##|\eta\rangle##, where S is implied but not included. So how do B&C pull out ##N^2## and even ##L_z^2## from an expectation value that includes ##\Lambda##? They just make ##\Lambda## appear redundantly in both the ##|\eta,\Lambda\rangle## ket and the Hund's case (a) ket ##|\Lambda,S,\Sigma,J,\Omega\rangle##. They're essentially saving the ##N^2## and ##L_z^2## for later, when they start using the Hund's case kets, because both sets of kets include ##\Lambda## (and S). How do B&C choose which terms to evaluate in the 0th order kets and which to evaluate in the Hund's case kets? Just by energy scale: ##N-L## has a small energy contribution relative to the energy of the 0-th order stuff, meanwhile ##B(R)## has to be evaluated because it contains the nuclear separation R. The redundancy in the kets with ##\Lambda## is what B&C mean when they say: "L acts both within and between such states".

Just wanted to make a suggestion too. I don't remember if B&C does this in the book, but at some point you may want to consider drawing out what a rotational spectrum would look like for Hund's case (a). It's a really important exercise, especially if you're an experimentalist. If you do it, make sure to include the P, Q, and R branches, and notice for what ##\Omega## and ##\Omega'## they will appear or will not appear. That is the easiest and most reliable way of interrogating a new molecular state. In experiment, it's common to see mystery states with only a known ##\Omega## value because of how do-able this test is.

Edit: Fixed an erroneous statement about orbital parity and the value of S
So for example, assuming we choose a Hund's case (a), the 0th order wavefunction is ##|\eta,\Lambda>|J,\Lambda,\Sigma,\Omega>##. If we want to calculate the first order contribution of the rotational H to the effective H, we can still keep the Hund's case (a) and calculate $$<J',\Lambda,\Sigma',\Omega'|<\eta,\Lambda|B(R)(N^2-L_z^2)|\eta,\Lambda>|J,\Lambda,\Sigma,\Omega>$$ where ##\Lambda## must be the same, as we are in the same electronic state, but ##J##, ##\Sigma## and ##\Omega## can change, as they are related to the rotational wavefunction. Then, given that we can change the order of the electronic and rotational wavefunctions, the term above is equal to $$<\eta,\Lambda|B(R)<J',\Lambda,\Sigma',\Omega'|N^2-L_z^2|J,\Lambda,\Sigma,\Omega>|\eta,\Lambda>$$ which is equal to $$<\eta,\Lambda|B(R)|\eta,\Lambda><J',\Lambda,\Sigma',\Omega'|N^2-L_z^2|J,\Lambda,\Sigma,\Omega>$$ as the term we took out of the electronic wavefunction, ##<J',\Lambda,\Sigma',\Omega'|N^2-L_z^2|J,\Lambda,\Sigma,\Omega>##, is just a number, not an operator at this stage. At this stage we can drop the assumption of the Hund's case (a) and thus turn the rotational effective operator from a matrix element, the way it is now, back into an operator which can be applied in this form to other Hund's cases, so in the end we would get $$<\eta,\Lambda|B(R)|\eta,\Lambda>(N^2-L_z^2)$$ Is this right?
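On Twigg's suggestion above of drawing out a Hund's case (a) rotational spectrum, here is a minimal numerical sketch of where the P, Q, and R branches fall. The energies use the rigid-rotor form ##E(J) = B[J(J+1)-\Omega^2]##, the ##B## values and band origin are made-up numbers (not any real molecule), ##\Lambda##-doubling and centrifugal distortion are ignored, and the rule that the Q branch vanishes for an ##\Omega'=0 \leftarrow \Omega''=0## band is put in by hand rather than derived:

```python
# Sketch of a Hund's case (a) rotational band: line positions for the
# P (dJ = -1), Q (dJ = 0), and R (dJ = +1) branches, J'' -> J'.
# Rigid-rotor energies E(J) = B*[J(J+1) - Omega^2]; B and T0 are
# made-up illustrative numbers, not any real molecule.

def E(B, J, Omega):
    return B * (J * (J + 1) - Omega**2)

def branch_lines(T0, Bp, Omega_p, Bpp, Omega_pp, Jmax=5):
    """Transition frequencies grouped by branch (primes = upper state)."""
    lines = {"P": [], "Q": [], "R": []}
    for Jpp in [Omega_pp + k for k in range(Jmax)]:   # J'' >= |Omega''|
        for name, Jp in (("P", Jpp - 1), ("Q", Jpp), ("R", Jpp + 1)):
            if Jp < Omega_p:                          # upper level must exist
                continue
            if name == "Q" and Omega_p == 0 and Omega_pp == 0:
                continue            # Q branch absent for a 0-0 band
            lines[name].append(T0 + E(Bp, Jp, Omega_p) - E(Bpp, Jpp, Omega_pp))
    return lines

# Omega' = 1 <- Omega'' = 0 band: P, Q, and R branches all appear.
print(branch_lines(T0=1000.0, Bp=0.9, Omega_p=1, Bpp=1.0, Omega_pp=0))
# Omega' = 0 <- Omega'' = 0 band: the Q branch is empty.
print(branch_lines(T0=1000.0, Bp=0.9, Omega_p=0, Bpp=1.0, Omega_pp=0))
```

The presence or absence of the Q branch as a function of ##\Omega'## and ##\Omega''## is exactly the diagnostic Twigg describes for interrogating a new molecular state.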
 
  • #32
That seems right. I want to add some more context to choosing the Hund's case though, because the way you wrote it above, it's as if you only use it just to drop it later. The choice of basis set has implications on ##H^{0}## itself.

I hope this section in Lefebvre-Brion is helpful:

"A choice of basis set implies a partitioning of the Hamiltonian, ##H = H_{el} + H_{SO} + T^N(R) + H_{ROT}##, into two parts: a part, ##H^{(0)}##, which is fully diagonal in the selected basis set, and a residual part, ##H^{(1)}##. The basis sets associated with the various Hund's cases reflect different choices of the parts of ##H## that are included in ##H^{(1)}##. Although in principle the eigenvalues of ##H## are unaffected by the choice of basis, as long as this basis set forms a complete set of functions, one basis set is usually more convenient to use or better suited than the others for a particular problem."

Also:

"The basis function labels are good quantum numbers with respect to ##H^{(0)}##, but not necessarily with respect to ##H^{(1)}##"

Even though any complete basis set works in principle, the relationship between the Hamiltonian and the basis set is a little more intertwined than that: your choice of basis set implicitly sets your choice of zeroth-order Hamiltonian.

So in choosing Hund's case (b), you've already accepted that the case (b) basis functions are eigenstates of ##H_{el}## and the diagonal part of ##H_{ROT}##: ##B(R)(N^2 - N_z^2)##. ##B(R)## is proportional to ##<\nu|R^{-2}|\nu>##, which is the constant in the first-order Hamiltonian (Eq. 7.8.3) in B&C.

Similarly with Hund's case (a): the case (a) basis functions are eigenstates of ##H_{el}## and the diagonal parts of ##H_{ROT}## mentioned a few posts up.

So now expanding ##H_{ROT}##:

##H_{ROT} = B(R)[J^2 + S^2 + L_\perp^2 - L_z^2 - 2J\cdot S - (N^+L^- + N^-L^+)]##. The ##J\cdot S## term is actually diagonal in case (b) because ##2J\cdot S = J^2 + S^2 - N^2##. Since ##\Lambda## is good in both cases (a) and (b), the choice doesn't impact ##H_{el}## so much, as your derivation above shows.
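A quick numerical check that the ##J\cdot S## term is diagonal in the case (b) basis: from ##N = J - S## one gets ##2J\cdot S = J^2 + S^2 - N^2##, so its expectation value depends only on the good quantum numbers ##N##, ##S##, ##J## through ##\langle X^2\rangle = X(X+1)##. This is just bookkeeping with the triangle rule, nothing molecule-specific:

```python
# Check that the J.S term is diagonal in the Hund's case (b) basis:
# from N = J - S it follows that 2 J.S = J^2 + S^2 - N^2, so its
# expectation value depends only on the good quantum numbers N, S, J,
# via <X^2> = X(X+1) (hbar = 1 here).
from fractions import Fraction

def two_J_dot_S(N, S, J):
    """<2 J.S> in the |N, S, J> basis."""
    return J * (J + 1) + S * (S + 1) - N * (N + 1)

S = Fraction(1, 2)   # a doublet state, e.g. a 2-Sigma or 2-Pi
for N in range(4):
    # triangle rule: J runs from |N - S| to N + S
    nJ = int(N + S - abs(N - S)) + 1
    for J in [abs(N - S) + k for k in range(nJ)]:
        print(f"N={N} J={J}: <2 J.S> = {two_J_dot_S(N, S, J)}")
```

Each ##(N, S, J)## combination gives a single number, i.e. the operator never mixes different case (b) basis states.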
 
  • #33
amoforum said:
That seems right. I want to add some more context to choosing the Hund's case though, because the way you wrote it above, it's as if you only use it just to drop it later. The choice of basis set has implications on ##H^{0}## itself.

I hope this section in Lefebvre-Brion is helpful:

"A choice of basis set implies a partitioning of the Hamiltonian, ##H = H_{el} + H_{SO} + T^N(R) + H_{ROT}##, into two parts: a part, ##H^{(0)}##, which is fully diagonal in the selected basis set, and a residual part, ##H^{(1)}##. The basis sets associated with the various Hund's cases reflect different choices of the parts of ##H## that are included in ##H^{(1)}##. Although in principle the eigenvalues of ##H## are unaffected by the choice of basis, as long as this basis set forms a complete set of functions, one basis set is usually more convenient to use or better suited than the others for a particular problem."

Also:

"The basis function labels are good quantum numbers with respect to ##H^{(0)}##, but not necessarily with respect to ##H^{(1)}##"

Even though any complete basis set works in principle, the relationship between the Hamiltonian and the basis set is a little more intertwined than that: your choice of basis set implicitly sets your choice of zeroth-order Hamiltonian.

So in choosing Hund's case (b), you've already accepted that the case (b) basis functions are eigenstates of ##H_{el}## and the diagonal part of ##H_{ROT}##: ##B(R)(N^2 - N_z^2)##. ##B(R)## is proportional to ##<\nu|R^{-2}|\nu>##, which is the constant in the first-order Hamiltonian (Eq. 7.8.3) in B&C.

Similarly with Hund's case (a): the case (a) basis functions are eigenstates of ##H_{el}## and the diagonal parts of ##H_{ROT}## mentioned a few posts up.

So now expanding ##H_{ROT}##:

##H_{ROT} = B(R)[J^2 + S^2 + L_\perp^2 - L_z^2 - 2J\cdot S - (N^+L^- + N^-L^+)]##. The ##J\cdot S## term is actually diagonal in case (b) because ##2J\cdot S = J^2 + S^2 - N^2##. Since ##\Lambda## is good in both cases (a) and (b), the choice doesn't impact ##H_{el}## so much, as your derivation above shows.
So in B&C's derivation, the obtained ##H_{eff}## is valid only for a Hund's case (a)? If I want to use a Hund's case (b), their derivation wouldn't apply and I would have to re-derive everything from scratch? That confuses me a bit. For example, the rotational term and the spin-orbit coupling term have the same form in the Hund's case (b), too (at least from what I saw in different papers I read). Is that just a coincidence, and other terms might not have the same form? Also, in the brief derivation I did above, I never used the fact that I have a Hund's case (a); all I used was that for a complete rotational basis, ##<i|N^2-L_z^2|j>## is just a number, which can be taken outside the electronic expectation value. That statement is true in general (and not only at a perturbation theory level) for any rotational basis, so I am not sure why using a different basis would change the form of that term (or any other). Of course, actually evaluating ##<i|N^2-L_z^2|j>## would be easier in one basis compared to another, but the form of the effective Hamiltonian shouldn't change.

One other thing that confuses me is this statement: "the case (b) basis functions are eigenstates of ##H_{el}##". Hund's case basis functions have nothing to do with ##H_{el}##, do they? The eigenfunctions of ##H_{el}## are the ##|\eta>##'s, and these are the same no matter what Hund's case I choose later, no? Thank you!
 
  • #34
As I mentioned above, Hund's case (b) would work fine because your derivation moves B(R) around, and ##\Lambda## is a good quantum number for (a) and (b).

You didn't use Hund's case (a)? Didn't you write down your electronic wavefunction as ##|\eta, \Lambda>##? In this physical model, you've implied that ##L##'s precession about the internuclear axis is conserved, a wholly electronic phenomenon. The fact that its energy is so much higher than the rotation is the reason you can do that separation in the first place. In Hund's case (c), the spin-orbit coupling term is even larger than the electronic energy. ##\Lambda## isn't even a good quantum number! How about a scenario where the rotation is so high that it significantly alters the electronic motion?

To reiterate Twigg: "...the electronic state isn't just ##\eta##...the overall 0-th order part of the ket is ##|\eta, \Lambda>##, not just ##|\eta>##".

Hund's cases aren't just the rotational part. They're the basis states for the whole effective Hamiltonian. You can do the same procedure with the wrong basis states, it just won't match your data.
 
  • #35
amoforum said:
As I mentioned above, Hund's case (b) would work fine because your derivation moves B(R) around, and ##\Lambda## is a good quantum number for (a) and (b).

You didn't use Hund's case (a)? Didn't you write down your electronic wavefunction as ##|\eta, \Lambda>##? In this physical model, you've implied that ##L##'s precession about the internuclear axis is conserved, a wholly electronic phenomenon. The fact that its energy is so much higher than the rotation is the reason you can do that separation in the first place. In Hund's case (c), the spin-orbit coupling term is even larger than the electronic energy. ##\Lambda## isn't even a good quantum number! How about a scenario where the rotation is so high that it significantly alters the electronic motion?

To reiterate Twigg: "...the electronic state isn't just ##\eta##...the overall 0-th order part of the ket is ##|\eta, \Lambda>##, not just ##|\eta>##".

Hund's cases aren't just the rotational part. They're the basis states for the whole effective Hamiltonian. You can do the same procedure with the wrong basis states, it just won't match your data.
Uh I am still confused. Yes, I assumed a case where the electronic energy is much bigger than the spin-orbit coupling. That allowed me to use ##H_{el}## as ##H_0## and treat ##H_{SO}## as a perturbation. By solving for the eigenfunctions of ##H_{el}##, I would get ##|\eta,\Lambda>##, as ##\Lambda## is a good quantum number for ##H_0 = H_{el}##. Up to this point I haven't made any assumption about the Hund's cases. ##\Lambda## is a good quantum number for the electronic energy regardless of the Hund's case (i.e. if we were able to somehow fix the molecule in place and prevent it from rotating, I would still obtain this eigenfunction). Once I add the rotational part, I can use any Hund's case. For example, if I were to use a Hund's case (c) as the basis for the rotational levels in this electronic manifold, the eigenstates would be ##|\eta,\Lambda>|J,J_a,\Omega>##, and I could proceed just as above, as we still have ##|\eta,\Lambda>|J,J_a,\Omega> = |J,J_a,\Omega>|\eta,\Lambda>##. Of course this would be a really bad choice of basis, but the only difference in practice is that the off-diagonal terms ##<J',J_a',\Omega'|N^2-L_z^2|J,J_a,\Omega>## would be much bigger than in a Hund's case (a) or (b) (so I would have to diagonalize a bigger portion of the ##H_{eff}## in this basis for the same level of accuracy), but again, if I were to drop the basis after this step I would end up with exactly the same effective Hamiltonian I got before. What am I missing? Thank you!
 
  • #36
Ah yeah, I feel like I should've said this for context, that B&C's derivation really only seems valid for cases (a) and (b), as @amoforum said. This is also the same as what @BillKet is saying when they assumed that ##H_{el}## dominates over ##H_{SO}##. That's why B&C can talk about the 0-th order ket being ##|\eta,\Lambda\rangle## because ##\Lambda## is good for cases (a) and (b) only. B&C doesn't discuss case (c) until chapter 10 section 7, where they derive a new expression for ##H_{rot}## in subsection b. My bad!
BillKet said:
##\Lambda## is a good quantum number for the electronic energy regardless of the Hund case
Sorry, this isn't true. All the Hund's cases aside from (a) and (b) have ##H_{el}## as a perturbation, not a 0th order term, so ##\Lambda## is not good for (c) through (e). However, you will still see people assign term symbols to case (c) through (e) states. I couldn't tell you how good or bad those descriptions are, just that they're not ideal. This is why, in that thesis I linked, Paul Hamilton was careful to describe the eEDM state of PbO, which is a case (c) state, as a(1) instead of by its term symbol ##^3 \Sigma_1##. ("a" is just an arbitrary label like "X,A,B,C..." but implies that the state has a higher degeneracy than the X state, and the 1 in parentheses refers to ##\Omega##). I can understand where you're coming from, because B&C almost exclusively talk about (a) and (b) cases.
 
  • #37
Twigg said:
Ah yeah, I feel like I should've said this for context, that B&C's derivation really only seems valid for cases (a) and (b), as @amoforum said. This is also the same as what @BillKet is saying when they assumed that ##H_{el}## dominates over ##H_{SO}##. That's why B&C can talk about the 0-th order ket being ##|\eta,\Lambda\rangle## because ##\Lambda## is good for cases (a) and (b) only. B&C doesn't discuss case (c) until chapter 10 section 7, where they derive a new expression for ##H_{rot}## in subsection b. My bad!
Sorry, this isn't true. All the Hund's cases aside from (a) and (b) have ##H_{el}## as a perturbation, not a 0th order term, so ##\Lambda## is not good for (c) through (e). However, you will still see people assign term symbols to case (c) through (e) states. I couldn't tell you how good or bad those descriptions are, just that they're not ideal. This is why, in that thesis I linked, Paul Hamilton was careful to describe the eEDM state of PbO, which is a case (c) state, as a(1) instead of by its term symbol ##^3 \Sigma_1##. ("a" is just an arbitrary label like "X,A,B,C..." but implies that the state has a higher degeneracy than the X state, and the 1 in parentheses refers to ##\Omega##). I can understand where you're coming from, because B&C almost exclusively talk about (a) and (b) cases.
Thank you for your reply! I see what you mean. However, when I said "##\Lambda## is a good quantum number for the electronic energy regardless of the Hund case", what I meant is that if ##H_{el} \gg H_{SO}##, and hence we choose ##H_0=H_{el}##, then ##\Lambda## is a good quantum number. But whatever Hund's case we use as a basis for the rotational levels won't change the fact that ##\Lambda## is a good quantum number. For example, we can use as the 0th order basis ##|\eta,\Lambda>|Hund \ case \ a>## or ##|\eta,\Lambda>|Hund \ case \ c>##. In both cases ##\Lambda## is a good quantum number at the electronic level, and what basis we choose for the rotational part won't change that; it will just make calculations easier or harder.

What I am trying to say/ask is that whether ##\Lambda## is a good quantum number has nothing to do with the Hund's case we choose. For example, if we want to see whether ##H_{el}## or ##H_{SO}## is bigger, as far as I understand, we look at the energy level spacings from theoretical (e.g. coupled cluster) calculations, and based on the magnitude of the differences between energy levels we can get an idea of what ##H_{0}## we should choose. But that doesn't involve Hund's cases at any point, i.e. these theoretical calculations don't look at the rotation at all: they fix R, calculate the electronic energy for fixed nuclei, and repeat this for several internuclear distances R. So for example if, based on these calculations, the energy of a ##\Pi## state is much bigger than the splitting between ##\Pi_{1/2}## and ##\Pi_{3/2}##, we know that ##H_{el} \gg H_{SO}##, so ##\Lambda## is a good quantum number in that electronic state. Now if we want to look at the rotational spectrum of that electronic level, we usually choose a Hund's case (a) in this case, but given that they are complete bases, Hund's case (c) or Hund's case (e) would work just as well mathematically (they just wouldn't be very easy to diagonalize in). If on the other hand the splitting between ##\Pi_{1/2}## and ##\Pi_{3/2}## were much bigger than the energy of the ##\Pi## state relative to the ground state, ##\Lambda## wouldn't be a good quantum number anymore, but again, that has nothing to do with Hund's cases. And in this case Hund's case (c) would be ideal, but we could use Hund's case (a) or (b), too. Is that right?
 
  • #38
BillKet said:
What I am trying to say/ask is that whether ##\Lambda## is a good quantum number has nothing to do with the Hund's case we choose. For example, if we want to see whether ##H_{el}## or ##H_{SO}## is bigger, as far as I understand, we look at the energy level spacings from theoretical (e.g. coupled cluster) calculations, and based on the magnitude of the differences between energy levels we can get an idea of what ##H_{0}## we should choose. But that doesn't involve Hund's cases at any point

Hund's cases and sets of good quantum numbers are synonymous. Both electronic and rotational.

Why choose one ##H_{0}## over another? You're going to choose the one with the most diagonal terms, right? I.e., you'll choose an ##H_{0}## that requires the lowest-order perturbation theory to reproduce the data. That ##H_{0}## will have the most good quantum numbers, and that set of quantum numbers is a Hund's case.
 
  • #39
I think I see what you mean BillKet. You're distinguishing between the use of Hund's cases as a basis and the use of Hund's cases to (approximately) describe the eigenstates, right? So when you say that the hierarchy ##H_{el} > H_{SO}## has nothing to do with the Hund's cases, what you mean is that it doesn't force you to use a particular basis, yeah? Correct me if I'm understanding your point wrong.

In this case, yes, what you are saying is mathematically true. No one will force you to use any particular basis, but in practice the jargon is different. The jargon dictates that the basis you choose, the Hund's case, and the hierarchy of the Hamiltonian are all synonymous. This is just because molecules are so ornery to deal with we all follow the path of least resistance quite religiously. People will misunderstand what you're saying if you use the phrase "Hund's case" to describe a basis that isn't tailored to the energy scales of a particular molecular state.

Edit: computationally, it may be convenient to use a different hund's case as a basis, like what amoforum was saying early on about computational software. But 90% of the time, the rules I describe above govern how people use these terms (especially with us pea-brained experimentalists!). Not trying to force anything on you, just trying to make it easier on you to communicate and get answers to your questions. :oldsmile:
 
  • #40
Twigg said:
I think I see what you mean BillKet. You're distinguishing between the use of Hund's cases as a basis and the use of Hund's cases to (approximately) describe the eigenstates, right? So when you say that the hierarchy ##H_{el} > H_{SO}## has nothing to do with the Hund's cases, what you mean is that it doesn't force you to use a particular basis, yeah? Correct me if I'm understanding your point wrong.

In this case, yes, what you are saying is mathematically true. No one will force you to use any particular basis, but in practice the jargon is different. The jargon dictates that the basis you choose, the Hund's case, and the hierarchy of the Hamiltonian are all synonymous. This is just because molecules are so ornery to deal with we all follow the path of least resistance quite religiously. People will misunderstand what you're saying if you use the phrase "Hund's case" to describe a basis that isn't tailored to the energy scales of a particular molecular state.

Edit: computationally, it may be convenient to use a different hund's case as a basis, like what amoforum was saying early on about computational software. But 90% of the time, the rules I describe above govern how people use these terms (especially with us pea-brained experimentalists!). Not trying to force anything on you, just trying to make it easier on you to communicate and get answers to your questions. :oldsmile:
Thanks a lot! That's exactly what I meant, and now I see the confusion I created, too. It should be easier to ask my questions from now on hopefully (sorry for all this mess!).

So I actually have a quick question about ##\Lambda##-doubling. If I understand it right, in deriving the terms of the effective H (I will stick to Hund's case (a) from now on, as B&C do), we calculated expectation values of the form ##<\eta\Lambda|O|\eta\Lambda>##, for the same ##\Lambda## on both sides. For example, in second order PT, using the rotational and SO Hamiltonians, the resulting term would be, for ##\Lambda = 1## (I will ignore some summations and some coefficients): $$<\eta\Lambda=1|{H_{rot}}_{+1}^{1}|\eta'\Lambda=0><\eta'\Lambda=0|{H_{SO}}_{-1}^{1}|\eta\Lambda=1>$$ where ##\pm 1## refers to the components of ##L##. For the ##\Lambda##-doubling, I imagined we would now have something of the form: $$<\eta\Lambda=1|{H_{rot}}_{+1}^{1}|\eta'\Lambda=0><\eta'\Lambda=0|{H_{SO}}_{+1}^{1}|\eta\Lambda=-1>$$ such that we connect different components of ##\Lambda## in the same electronic state.

However, I am confused by the final results in B&C. For this term (##H_{rot}+H_{SO}##) they obtain ##p(R)##, which is of the form $$<\eta\Lambda=1|{H_{rot}}_{+1}^{1}|\eta'\Lambda=0><\eta'\Lambda=0|{H_{SO}}_{-1}^{1}|\eta\Lambda=1>$$ just as in the case of the fine structure, i.e. the derivation before the ##\Lambda##-doubling. On the other hand, ##o(R)^{(1)}## does connect ##\Lambda=1## with ##\Lambda=-1##, as I would expect. So I am confused: shouldn't we have something connecting ##\Lambda=1## and ##\Lambda=-1## for all 3 coefficients?

Another thing that I am confused about is the ##\exp(-2iq\phi)## term. They claim its role is to ensure that only matrix elements connecting ##\Lambda=1## with ##\Lambda=-1## are nonzero. But that implies that we need to calculate an electronic expectation value, again. However, I thought that once we got the coefficients p, q and o, we are done with the electronic wavefunction, and all we need to calculate are rotational expectation values.
Why do we still have a term that is explicitly electronic even after calculating the electronic expectation value? Shouldn't it be included in the p, q and o terms? Thank you!
 
  • #41
Again, no need to apologize!

This is getting further out of my comfort zone, so take my posts with a grain of salt. But one thing sticks out to me immediately. If you only had terms that connected ##\Lambda = +1## to ##\Lambda = -1##, then you'd wind up with both parity states being shifted. Since only one parity state is shifted by ##\Lambda##-doubling, there has to be a diagonal matrix element to hold one of the states still. (This is all since the ##\Sigma## state only ##\Lambda##-doubles the ##\Pi## state with the same parity as itself.)

Even though ##e^{-2iq\phi}## contains an electron coordinate, doesn't it look like an eigenstate of ##L_z## to you? It really doesn't require you to re-do the wavefunction contraction because that part of the motion is contained in the angular momentum quantum numbers.
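To make that concrete (a one-line check, using only the fact that the case (a) electronic wavefunction carries a factor ##e^{i\Lambda\phi}##): $$L_z\,e^{i\Lambda\phi} = \hbar\Lambda\,e^{i\Lambda\phi}, \qquad e^{-2iq\phi}\,e^{i\Lambda\phi} = e^{i(\Lambda-2q)\phi}$$ so for ##q=\pm 1## the operator ##e^{-2iq\phi}## simply maps ##\Lambda=\pm 1## onto ##\Lambda=\mp 1##. It only bookkeeps the change in ##\Lambda##; the actual electronic integrals already sit inside p, q and o, so no new electronic expectation value has to be computed.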
 
  • #42
Twigg said:
Again, no need to apologize!

This is getting further out of my comfort zone, so take my posts with a grain of salt. But one thing sticks out to me immediately. If you only had terms that connected ##\Lambda = +1## to ##\Lambda = -1##, then you'd wind up with both parity states being shifted. Since only one parity state is shifted by ##\Lambda##-doubling, there has to be a diagonal matrix element to hold one of the states still. (This is all since the ##\Sigma## state only ##\Lambda##-doubles the ##\Pi## state with the same parity as itself.)

Even though ##e^{-2iq\phi}## contains an electron coordinate, doesn't it look like an eigenstate of ##L_z## to you? It really doesn't require you to re-do the wavefunction contraction because that part of the motion is contained in the angular momentum quantum numbers.
Hmmm, I will try to derive that expression explicitly. However, I am actually confused about how you apply the effective operators in practice. From reading some other papers and the PGOPHER documentation, for example in a ##^2\Pi_{1/2}## electronic state the only operator that matters is ##-\frac{1}{2}(p+2q)(J_+ S_+ + J_-S_-)##, and it can be shown that in this electronic state the splitting due to ##\Lambda##-doubling is ##\Delta E = \pm (p+2q)(J+1/2)##, depending on the sign of the spin-orbit constant, ##A##. I tried to reproduce this, but I am missing something. The eigenstates for a given J are: $$|J\Omega\pm> = \frac{|\Lambda = 1, S = 1/2, \Sigma = -1/2, J, \Omega = 1/2>\pm|\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2>}{\sqrt{2}}$$ (I will ignore the denominator from now on). So if I calculate the expectation value of this operator I would get $$\pm<\Lambda = 1, S = 1/2, \Sigma = -1/2, J, \Omega = 1/2|(J_+ S_+ + J_-S_-)|\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2>$$ $$\pm<\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2|(J_+ S_+ + J_-S_-)|\Lambda = 1, S = 1/2, \Sigma = -1/2, J, \Omega = 1/2>$$ But this seems to be equal to zero. For example, for $$(J_+ S_+ + J_-S_-)|\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2>$$ we have ##S_+|\Sigma=1/2>=0##, as that is already the maximum value of the spin projection, and ##J_-|\Omega = -1/2>=0##, as that is the minimum value ##\Omega## can take in the ##^2\Pi_{1/2}## state. I am obviously missing something but I don't know what.
 
  • #43
Ah sorry, I was just trying to answer this question:
BillKet said:
Shouldn't we have something connecting ##\Lambda = 1## and ##\Lambda = -1## for all 3 coefficients?
The reason I think there end up being terms that are diagonal in ##\Lambda## is what I was saying above, that you need some diagonal terms to pin one state. The expressions are already in B&C so you don't need to re-derive them. Did I misunderstand the original question?

Hmmm, I see what you're saying about ##J_+ S_+ + J_- S_-##. I'll try to reproduce these results on my own as well, but I might be a little slow.
 
  • #44
Twigg said:
Ah sorry, I was just trying to answer this question:

The reason I think there end up being terms that are diagonal in ##\Lambda## is what I was saying above, that you need some diagonal terms to pin one state. The expressions are already in B&C so you don't need to re-derive them. Did I misunderstand the original question?

Hmmm, I see what you're saying about ##J_+ S_+ + J_- S_-##. I'll try to reproduce these results on my own as well, but I might be a little slow.
Oh I meant I will try to derive them myself to see if I understand where each term comes from. I am actually not sure I see where that parity term ##(-1)^s## comes from. As far as I can tell, for the q term for example, the expression should be derived from the terms ignored in 7.84 (##L_-L_-## and ##L_+L_+##), and I see no ##(-1)^s## coming out of there right away. Thank you for taking a look into that derivation!
 
  • #45
BillKet said:
and ##J_-|\Omega = -1/2>=0##, as that is the minimum value ##\Omega## can take in the ##^2\Pi_{1/2}## state. I am obviously missing something but I don't know what.

This is also reaching the boundaries of my comfort zone (apologies in advance), but my understanding is that the selection rules for ##J^\pm## are ##\Delta \Omega = \mp 1##. ##J^-## and ##J^+## are:

##J^-|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega + 1)}|(\Omega + 1)J>## and
##J^+|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega - 1)}|(\Omega - 1) J>##

so ##J_-|\Omega = -1/2>## shouldn't equal zero. It's acting outside the ##^2\Pi_{1/2}## state, as expected for a higher order interaction.
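A tiny sketch of those molecule-frame ladder operators (the "anomalous commutation" convention, where ##J^-## raises ##\Omega## and ##J^+## lowers it; a ket is represented just by its ##\Omega## label at fixed ##J##, with ##\hbar = 1##):

```python
# Molecule-fixed ("anomalous commutation") ladder operators for a
# Hund's case (a) ket, labeled only by Omega at fixed J (hbar = 1):
# J^- *raises* Omega and J^+ *lowers* it, opposite to the lab frame.
import math

def J_minus(J, Omega):
    # J^-|Omega, J> = sqrt(J(J+1) - Omega(Omega+1)) |Omega+1, J>
    c = J * (J + 1) - Omega * (Omega + 1)
    return (math.sqrt(c), Omega + 1) if c > 0 else (0.0, Omega)

def J_plus(J, Omega):
    # J^+|Omega, J> = sqrt(J(J+1) - Omega(Omega-1)) |Omega-1, J>
    c = J * (J + 1) - Omega * (Omega - 1)
    return (math.sqrt(c), Omega - 1) if c > 0 else (0.0, Omega)

# The case in question: J = 1/2, Omega = -1/2 in a 2-Pi-1/2 state.
coeff, Omega_out = J_minus(0.5, -0.5)
print(coeff, Omega_out)   # 1.0, +0.5: not zero, and still |Omega| <= J
```

For ##J=1/2##, ##\Omega=-1/2## the coefficient is ##\sqrt{J(J+1)+1/4} = J+1/2 = 1##, i.e. ##J^-## takes ##\Omega=-1/2## to ##\Omega=+1/2## inside the same ##J## manifold; this is the ##(J+1/2)## factor that shows up in ##\Delta E = \pm(p+2q)(J+1/2)##.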
 
  • #46
amoforum said:
This is also reaching the boundaries of my comfort zone (apologies in advance), but my understanding is that the selection rules for ##J^\pm## are ##\Delta \Omega = \mp 1##. ## J^-## and ##J^+## are:

##J^-|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega + 1)}|(\Omega + 1)J>## and
##J^+|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega - 1)}|(\Omega - 1) J>##

so ##J_-|\Omega = -1/2>## shouldn't equal zero. It's acting outside the ##^2\Pi_{1/2}## state, as expected for a higher order interaction.
Thank you for your reply. I was wondering whether I can actually go outside ##\Omega = -1/2## and make matrix elements with the ##\Omega = -3/2## state, too, but there are at least 2 things that confuse me about it. First of all, if I am in the ground rotational state of ##^2\Pi_{1/2}##, I have ##J=1/2##, so in that state ##\Omega = -3/2## doesn't exist, as ##|\Omega| \le J##, but in practice the ##J=1/2## state still gets split by the ##\Lambda##-doubling. The other thing is that even if I get ##\Omega = -3/2## in the ket, the bra part still has ##\Omega = \pm 1/2##, so in the end won't I get zero for the expectation value?
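For what it's worth, taking the anomalous molecule-frame action of ##J^\pm## quoted in #45 at face value, and remembering that the ##e^{-2iq\phi}## factor in the effective operator changes ##\Lambda## by ##\mp 2## at the same time, the calculation seems to close without ever needing ##\Omega = -3/2## (a sketch, not B&C's full derivation): the ##J_-S_-## term sends ##|\Lambda=-1,\Sigma=1/2,\Omega=-1/2>## to ##|\Lambda=+1,\Sigma=-1/2,\Omega=+1/2>## with coefficient $$\sqrt{S(S+1)-\Sigma(\Sigma-1)}\sqrt{J(J+1)-\Omega(\Omega+1)} = 1\cdot\sqrt{J(J+1)+\tfrac{1}{4}} = J+\tfrac{1}{2}$$ because in the molecule-fixed frame ##J_-## raises ##\Omega## rather than lowering it. The cross matrix element between the two components of ##|J\Omega\pm>## is then ##J+1/2## rather than zero, giving $$E_\pm = \mp\tfrac{1}{2}(p+2q)\left(J+\tfrac{1}{2}\right), \qquad \Delta E = (p+2q)\left(J+\tfrac{1}{2}\right),$$ consistent with the ##\pm(p+2q)(J+1/2)## splitting quoted above.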
 
  • #47
BillKet said:
Thank you for your reply. I was wondering whether I can actually go outside ##\Omega = -1/2## and make matrix elements with the ##\Omega = -3/2## state, too, but there are at least 2 things that confuse me about it. First of all, if I am in the ground rotational state of ##^2\Pi_{1/2}##, I have ##J=1/2##, so in that state ##\Omega = -3/2## doesn't exist, as ##|\Omega| \le J##, but in practice the ##J=1/2## state still gets split by the ##\Lambda##-doubling. The other thing is that even if I get ##\Omega = -3/2## in the ket, the bra part still has ##\Omega = \pm 1/2##, so in the end won't I get zero for the expectation value?

Actually, sorry: my original answer was definitely the result of my rustiness. Let me review first and get back to you.

Edit:

I remember the ##\Lambda##-doubling being tricky for the ground state, so good question!

From Lefebvre-Brion and Field, starting from Eq. 3.5.23:

The splitting in ##^2\Pi_{1/2}## is usually caused by ##\Delta \Omega = 0## interactions with a nearby ##^2\Sigma## state, via the spin-orbit and spin-electronic (##L^+S^-##) operators. Furthermore, the nearby ##^2\Sigma## state doesn't have ##\Lambda##-doubling, so it only has a single state of ##\Omega = -\frac{1}{2}##. And that state will couple to the ##^2\Pi_{1/2}## state's ##\Omega = \frac{1}{2}## via the ##L##-uncoupling operator ##J^-L^+##, which acts via ##\Delta \Omega = \pm 1##.
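As a rough illustration of that mechanism, here is a toy 2x2 model (all numbers hypothetical, not a real molecule): the parity component of the ##^2\Pi_{1/2}## level that couples to the remote ##^2\Sigma## state is pushed by roughly ##V^2/\Delta E##, while its partner stays put, which is the doubling.

```python
import numpy as np

# Hypothetical numbers, not a real molecule.
E_pi, E_sigma = 0.0, 1000.0   # unperturbed energies (arbitrary units)
V = 5.0                       # effective coupling matrix element (assumed)

H = np.array([[E_pi, V], [V, E_sigma]])
shift = np.linalg.eigvalsh(H)[0] - E_pi   # repulsion of the Pi component
print(shift)                              # close to -V**2/(E_sigma - E_pi)
```

The uncoupled parity partner keeps its energy, so ##|{\rm shift}|## is the doubling splitting in this caricature.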

Many times I couldn't understand what B&C were doing, or needed more context, so I turned to other literature describing the same thing. Let me know if you'd like me to send you this book, by the way.
 
Last edited:
  • #48
BillKet said:
As far as I can tell, for the q term for example, the expression should be derived from the terms ignored in 7.84 (##L_-L_-## and ##L_+L_+##), and I see no ##(-1)^s## coming out of there right away.
I think it's just that you pick a convention for q being the splitting of ##E_+ - E_-##, where those are the energies of ##+## and ##-## parity states. So if you interact with a ##^2\Sigma^+## state, it'll shift the energy in a certain direction. If you interact with a ##^2\Sigma^-## state, it'll shift in the same direction, but q is still defined as ##E_+ - E_-##, so now the splitting is -q. (I think!?)

If in reality the interaction really is reversed for ##^2\Sigma^-## states, then I also don't know where that comes from off the top of my head.
 
  • #49
amoforum said:
I think it's just that you pick a convention for q being the splitting of ##E_+ - E_-##, where those are the energies of ##+## and ##-## parity states. So if you interact with a ##^2\Sigma^+## state, it'll shift the energy in a certain direction. If you interact with a ##^2\Sigma^-## state, it'll shift in the same direction, but q is still defined as ##E_+ - E_-##, so now the splitting is -q. (I think!?)

If in reality the interaction really is reversed for ##^2\Sigma^-## states, then I also don't know where that comes from off the top of my head.
Thank you! So I was able to get Lefebvre-Brion and Field through my university, but something in their derivations is weird. For example, in equation 3.5.10 they calculate the following matrix element (I will ignore the irrelevant terms):

$$<\Sigma+1,\Omega+1|J^-S^+|\Sigma,\Omega>$$

The way ##S^+## is applied makes sense, but the ##J^-## confuses me. Naturally I would apply it on the right and I would get ##\Omega-1##, but somehow they get a non-zero matrix element, so it almost looks like they apply the ##J^-## to the left, such that ##\Omega+1## becomes ##\Omega## and thus the matrix element doesn't vanish. But I don't understand how they do it. The fact that it is written on the left doesn't mean you apply it to the left side; if you want to apply it there, it would actually become a ##J^+##, which again would give a zero matrix element. Do you know what they do? It seems this is not a typo, as they do the same thing in other places, too (e.g. 3.5.19).
 
  • #50
BillKet said:
Thank you! So I was able to get Lefebvre-Brion and Field through my university, but something in their derivations is weird. For example, in equation 3.5.10 they calculate the following matrix element (I will ignore the irrelevant terms):

$$<\Sigma+1,\Omega+1|J^-S^+|\Sigma,\Omega>$$

The way ##S^+## is applied makes sense, but the ##J^-## confuses me. Naturally I would apply it on the right and I would get ##\Omega-1##, but somehow they get a non-zero matrix element, so it almost looks like they apply the ##J^-## to the left, such that ##\Omega+1## becomes ##\Omega## and thus the matrix element doesn't vanish. But I don't understand how they do it. The fact that it is written on the left doesn't mean you apply it to the left side; if you want to apply it there, it would actually become a ##J^+##, which again would give a zero matrix element. Do you know what they do? It seems this is not a typo, as they do the same thing in other places, too (e.g. 3.5.19).

Take a second look at the definitions of ##J^\pm## that I wrote a few posts above. ##J^-## really does do the opposite of what normal ##+/-## operators do, in that it couples ##|\Omega + 1>## instead of ##|\Omega - 1>##. So they are using it correctly, and the reason is not at all trivial.

It has to do with operators referenced to molecule-fixed coordinates obeying anomalous commutation relations compared to space-fixed ones. See Lefebvre-Brion/Field Section 2.3.1.

B&C also talk about this in detail (section 5.5.6), but they do it as part of a giant chapter on angular momentum, so I wouldn't recommend referencing that section just for the sake of getting to the bottom of this particular issue. If you have a lot of time on your hands, or at some point you want to derive all the matrix elements from scratch like B&C do in the later chapters, then I very highly recommend starting from the beginning of Chapter 5.
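The anomalous commutation relations are easy to verify numerically. A sketch (Python/NumPy, ##\hbar = 1##) that builds ##J_z## and the anomalous ladder operators as matrices in the ##|J, \Omega>## basis and confirms the reversed sign, ##[J_x, J_y] = -iJ_z##:

```python
import numpy as np

J = 1.5                             # any J works; hbar = 1
Omegas = np.arange(-J, J + 1)       # Omega = -3/2 ... +3/2
dim = len(Omegas)

Jz = np.diag(Omegas)
# Anomalous convention: J^- raises Omega, J^+ lowers it.
Jminus = np.zeros((dim, dim))
Jplus = np.zeros((dim, dim))
for i, Om in enumerate(Omegas):
    if i + 1 < dim:   # J^-|Om> = sqrt(J(J+1) - Om(Om+1)) |Om+1>
        Jminus[i + 1, i] = np.sqrt(J*(J + 1) - Om*(Om + 1))
    if i - 1 >= 0:    # J^+|Om> = sqrt(J(J+1) - Om(Om-1)) |Om-1>
        Jplus[i - 1, i] = np.sqrt(J*(J + 1) - Om*(Om - 1))

Jx = (Jplus + Jminus) / 2
Jy = (Jplus - Jminus) / (2j)
comm = Jx @ Jy - Jy @ Jx
print(np.allclose(comm, -1j * Jz))  # True: [Jx, Jy] = -i Jz (anomalous)
```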
 
  • #51
amoforum said:
Take a second look at the definitions of ##J^\pm## that I wrote a few posts above. ##J^-## really does do the opposite of what normal ##+/-## operators do, in that it couples ##|\Omega + 1>## instead of ##|\Omega - 1>##. So they are using it correctly, and the reason is not at all trivial.

It has to do with operators referenced to molecule-fixed coordinates obeying anomalous commutation relations compared to space-fixed ones. See Lefebvre-Brion/Field Section 2.3.1.

B&C also talk about this in detail (section 5.5.6), but they do it as part of a giant chapter on angular momentum, so I wouldn't recommend referencing that section just for the sake of getting to the bottom of this particular issue. If you have a lot of time on your hands, or at some point you want to derive all the matrix elements from scratch like B&C do in the later chapters, then I very highly recommend starting from the beginning of Chapter 5.
Ahh I see, and now my derivation from above makes sense and it actually gives the expected results. Thanks a lot!
 
  • #52
BillKet said:
Ahh I see, and now my derivation from above makes sense and it actually gives the expected results. Thanks a lot!
Hello again! I looked a bit at some actual molecules and noticed that for ##\Pi_{1/2}## states we don't have a spin-rotation coupling, i.e. ##\gamma N\cdot S##. As far as I can tell, the effective operator shouldn't be zero in all cases in a Hund's case (a) basis, and the parameter ##\gamma## (7.110 in B&C) doesn't obviously vanish in the ##\Lambda=1## case. I read in Lefebvre-Brion/Field something stating that spin-rotation coupling in a ##\Sigma## state and lambda doubling in a ##\Pi## state actually come together, hence somehow the spin-rotation coupling in the ##\Pi## state is already included in the lambda doubling. But I am not totally sure about it. First of all, the equation I mentioned above doesn't say anything about ##\Lambda##. Is it by convention applied only to certain values of ##\Lambda##? More importantly, in that equation spin-rotation appears at first order in PT, while the lambda doubling comes in at second order. So even at first order I should see the spin-rotation effect in a ##\Pi## state, too. Why is that term ignored in the effective Hamiltonian of a ##\Pi## state? Thank you!
 
  • #53
BillKet said:
Hello again! I looked a bit at some actual molecules and noticed that for ##\Pi_{1/2}## states we don't have a spin-rotation coupling, i.e. ##\gamma N\cdot S##. As far as I can tell, the effective operator shouldn't be zero in all cases in a Hund's case (a) basis, and the parameter ##\gamma## (7.110 in B&C) doesn't obviously vanish in the ##\Lambda=1## case. I read in Lefebvre-Brion/Field something stating that spin-rotation coupling in a ##\Sigma## state and lambda doubling in a ##\Pi## state actually come together, hence somehow the spin-rotation coupling in the ##\Pi## state is already included in the lambda doubling. But I am not totally sure about it. First of all, the equation I mentioned above doesn't say anything about ##\Lambda##. Is it by convention applied only to certain values of ##\Lambda##? More importantly, in that equation spin-rotation appears at first order in PT, while the lambda doubling comes in at second order. So even at first order I should see the spin-rotation effect in a ##\Pi## state, too. Why is that term ignored in the effective Hamiltonian of a ##\Pi## state? Thank you!

I'm glad you caught that bit in Lefebvre-Brion/Field, which is one part of the story. In addition, take a look at Section 3.4.3, where they look at the first-order contributions for Hund's cases (a) and (b). Mentioned in the latter section, and in at least a couple of places in B&C, is that the second-order contribution is almost always the dominant term, except for very light molecules with high rotational energies.

Physically, to first order the spin-rotation interaction is super weak, because it depends on the nuclear magneton. But at second order you start coupling in ##L##, which carries a full Bohr magneton. So for a ##\Lambda > 0## state, like ##\Pi##, that second-order term is going to be more important.

I suspect that's why most literature doesn't dwell on the first order term for Hund's case (a).
 
  • #54
amoforum said:
I'm glad you caught that bit in Lefebvre-Brion/Field, which is one part of the story. In addition, take a look at Section 3.4.3, where they look at the first-order contributions for Hund's cases (a) and (b). Mentioned in the latter section, and in at least a couple of places in B&C, is that the second-order contribution is almost always the dominant term, except for very light molecules with high rotational energies.

Physically, to first order the spin-rotation interaction is super weak, because it depends on the nuclear magneton. But at second order you start coupling in ##L##, which carries a full Bohr magneton. So for a ##\Lambda > 0## state, like ##\Pi##, that second-order term is going to be more important.

I suspect that's why most literature doesn't dwell on the first order term for Hund's case (a).
Thanks a lot! That makes sense! I have a quick question about adding an external magnetic field. Assume we care only about the electron spin and orbital angular momentum to first order in PT (at the electronic level, i.e. when building the effective H). The full (not effective) H for these interactions is $$g_L\mu_B B_z\cdot L_z + g_S\mu_B B_z\cdot S_z $$ I will ignore the coefficients from now on and just focus on the operators. Assume we are in a Hund's case (a). For the spin part, we don't have anything that connects different electronic levels, so the effective Hamiltonian for the spin-magnetic-field interaction is the same as the full H, right? However, I still need to account for the Wigner rotation matrices when calculating rotational matrix elements. For example, if I want to calculate something diagonal in ##\Sigma##, I only need the projection of S on the internuclear axis, but in the equation above ##S_z## is quantized in the lab frame, so the actual operator would be $$g_S\mu_B \mathcal{D}_{00}^{(1)}(\omega) B_z\cdot \Sigma$$ and for the full matrix element I would have to separate the lab and intrinsic parts, getting something like: $$<J,M,\Omega|g_S\mu_B \mathcal{D}_{00}^{(1)}(\omega)|J,M,\Omega><\Lambda,S,\Sigma|T_{0}^1(S)|\Lambda,S,\Sigma>$$ For the orbital angular momentum part, I have to account for the electronic part, as ##L_z## in the lab is not ##L_z## in the molecule frame, so I would need to keep only the ##T_{q=0}^1(L)## part at first order in PT, and the matrix element here would be $$<J,M,\Omega|g_L\mu_B \mathcal{D}_{00}^{(1)}(\omega)|J,M,\Omega><\Lambda,S,\Sigma|T_{0}^1(L)|\Lambda,S,\Sigma>$$ Is this right? One more question: in B&C, after equation 7.231, they list all the terms in the effective H due to the Zeeman effect, and for the orbital motion they have ##T_{p=0}^1(L)##. Should that p be a q? If it is a p, as I mentioned above, that won't be diagonal at the electronic level in the molecule intrinsic frame, and we can't have that in the effective Hamiltonian. Thank you!
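As a sanity check on the rotational factor appearing above, here is a sympy sketch that evaluates ##<J,M,\Omega|\mathcal{D}_{00}^{(1)}|J,M,\Omega>## with the standard rotation-matrix matrix-element formula (the one corresponding to B&C Eq. 5.146); it reduces to the familiar ##M\Omega/J(J+1)##:

```python
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_3j

def D00_me(J, Omega, M):
    # Diagonal matrix element <J, Omega, M| D^(1)_{00} |J, Omega, M>
    # via the rotation-matrix formula (cf. B&C Eq. 5.146, k=1, p=q=0).
    return simplify((-1)**(M - Omega) * (2*J + 1)
                    * wigner_3j(J, 1, J, -M, 0, M)
                    * wigner_3j(J, 1, J, -Omega, 0, Omega))

J, Om, M = Rational(1, 2), Rational(1, 2), Rational(1, 2)
print(D00_me(J, Om, M))   # 1/3, i.e. M*Omega/(J*(J+1))
```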
 
  • #55
The line of reasoning looks correct. Some details:

Read B&C section 5.5.5. Eq. 5.143 shows the relation between the space-fixed and molecule-fixed operators. So if B&C ever writes a ##T^1_p##, you know how to turn it into molecule-fixed coordinates.

In general you need to do two things: 1. rotate into the molecule frame (via equation 5.143). 2. Unpack the composite angular momenta to reach the one you wish to address. You want ##S##, but it's part of ##J##, which means you'll get Wigner 6-j symbols.

At this point I suggest you go through Chapter 5 carefully and look at real examples. Eqns 9.56 and 9.58, for example, get what you want. Notice from the first line that the tensor-operator conversion from space-fixed to molecule-fixed was already done. After that, three equations were used to get the rest: 1. Eq. 5.123 (Wigner-Eckart theorem) to get the 3j symbol involving ##M_F##, 2. Eq. 5.136 to extract ##S## out of ##J##, and 3. Eq. 5.146, the matrix elements of the rotation matrix (3j symbol with ##\Omega##).

This basis also deals with hyperfine (##F##), but that just comes down to applying Eq. 5.136 to extract ##J## out of ##F##.
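A minimal sympy sketch of the first step in that chain (the Wigner-Eckart theorem, B&C Eq. 5.123 style), checking that the tensor machinery reproduces the trivial matrix element of ##T^1_0(S)## in the ##|S, \Sigma>## basis:

```python
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_3j

def Sz_me(S, Sigma):
    # Wigner-Eckart theorem for T^1_0(S): with the reduced matrix element
    # <S||T^1(S)||S> = sqrt(S(S+1)(2S+1)), the result must be Sigma.
    red = sqrt(S*(S + 1)*(2*S + 1))
    return simplify((-1)**(S - Sigma)
                    * wigner_3j(S, 1, S, -Sigma, 0, Sigma) * red)

for Sigma in (Rational(1, 2), Rational(-1, 2)):
    print(Sz_me(Rational(1, 2), Sigma))   # 1/2, then -1/2
```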
 
  • #56
amoforum said:
The line of reasoning looks correct. Some details:

Read B&C section 5.5.5. Eq. 5.143 shows the relation between the space-fixed and molecule-fixed operators. So if B&C ever writes a ##T^1_p##, you know how to turn it into molecule-fixed coordinates.

In general you need to do two things: 1. rotate into the molecule frame (via equation 5.143). 2. Unpack the composite angular momenta to reach the one you wish to address. You want ##S##, but it's part of ##J##, which means you'll get Wigner 6-j symbols.

At this point I suggest you go through Chapter 5 carefully and look at real examples. Eqns 9.56 and 9.58, for example, get what you want. Notice from the first line that the tensor-operator conversion from space-fixed to molecule-fixed was already done. After that, three equations were used to get the rest: 1. Eq. 5.123 (Wigner-Eckart theorem) to get the 3j symbol involving ##M_F##, 2. Eq. 5.136 to extract ##S## out of ##J##, and 3. Eq. 5.146, the matrix elements of the rotation matrix (3j symbol with ##\Omega##).

This basis also deals with hyperfine (##F##), but that just comes down to applying Eq. 5.136 to extract ##J## out of ##F##.
Thanks for this! I went over chapter 5 and it makes more sense how that works. But my question still remains. He claims that we can have ##T_{p=0}^1(L)## explicitly in the effective Hamiltonian, which is equivalent to having ##T_{q=\pm 1}^1(L)## explicitly in the effective Hamiltonian. However, in a previous section he spends quite some time talking about how having ##R^2## in the effective Hamiltonian is not good, specifically because it implies having ##T_{q=\pm 1}^1(L)##. Shouldn't ##T_{q=\pm 1}^1(L)## be absorbed into effective parameters at second and higher orders of PT, leaving only ##T_{q=0}^1(L)## as an operator in the effective Hamiltonian?
 
  • #57
Haha okay the answer to this might be pretty funny, assuming I understand the question correctly:

Look at Eq. 7.217, which has a ##g_L## in it. Eqns 7.221 and 7.222 show the result of putting the off-diagonal terms into an effective parameter: ##g_l##. (Notice the capitalized vs. not.) In the equations after Eqn. 7.231, you're back to ##g_L##, where he hasn't done the procedure yet.
 
  • #58
amoforum said:
Haha okay the answer to this might be pretty funny, assuming I understand the question correctly:

Look at Eq. 7.217, which has a ##g_L## in it. Eqns 7.221 and 7.222 show the result of putting the off-diagonal terms into an effective parameter: ##g_l##. (Notice the capitalized vs. not.) In the equations after Eqn. 7.231, you're back to ##g_L##, where he hasn't done the procedure yet.
Hmm, that makes sense if that were the full Hamiltonian. But right after 7.231 he claims he is listing the terms in the effective Hamiltonian. Shouldn't we get rid of the off-diagonal terms at that level? Also, one of the terms he lists is the "anisotropic correction to the electron spin interaction", which appears only after you build the effective Hamiltonian; it is not there in the original Hamiltonian (and that term has ##g_l##). It almost looks like he is mixing terms from the original and effective Hamiltonians.
 
  • #59
Oh sorry, it looks like I did misunderstand (##g_l## is one order higher, starting with ##g_L##).

My understanding is that ##g_L## already is an effective parameter. In Eqn. 7.217 it's introduced as a perturbation to the full Hamiltonian. But he mentions right after that it can deviate from 1.0 after transforming into the effective Hamiltonian. To first order, it has exactly the same form, and because it's so close to 1.0, I believe he just keeps the same notation: ##g_L##.

So in an experiment, when you go to fit the spectrum: ##g_L## will not come out to 1 due to non-adiabatic mixing. Here's a paper where you can see them fitting to a value not equal to 1: (Wilton L. Virgo et al 2005 ApJ 628 567)

Edit: but I see your point. I'm trying to think of a case where it wouldn't just reduce to ##q = 0##. For example maybe ##\Omega## can change, but not ##\Lambda##. Let me think on this.

Edit2: Yes, I believe the above is correct. In going to the effective Hamiltonian, the ##g_LT^1_{p=0}(L)## term operates within a single ##|\Lambda>## state, but you can have mixing between ##\Omega## states.
 
Last edited:
  • #60
amoforum said:
Oh sorry, it looks like I did misunderstand (##g_l## is one order higher, starting with ##g_L##).

My understanding is that ##g_L## already is an effective parameter. In Eqn. 7.217 it's introduced as a perturbation to the full Hamiltonian. But he mentions right after that it can deviate from 1.0 after transforming into the effective Hamiltonian. To first order, it has exactly the same form, and because it's so close to 1.0, I believe he just keeps the same notation: ##g_L##.

So in an experiment, when you go to fit the spectrum: ##g_L## will not come out to 1 due to non-adiabatic mixing. Here's a paper where you can see them fitting to a value not equal to 1: (Wilton L. Virgo et al 2005 ApJ 628 567)

Edit: but I see your point. I'm trying to think of a case where it wouldn't just reduce to ##q = 0##. For example maybe ##\Omega## can change, but not ##\Lambda##. Let me think on this.

Edit2: Yes, I believe the above is correct. In going to the effective Hamiltonian, the ##g_LT^1_{p=0}(L)## term operates within a single ##|\Lambda>## state, but you can have mixing between ##\Omega## states.
So I tried to calculate the effective Hamiltonian associated with the Stark effect under this formalism, but I am not sure if what I am doing is right. Assume the wavefunction can be written as ##|\eta>|i>##, with ##|\eta>## the electronic (intrinsic) part and ##|i>## the vibrational and rotational part. Assuming the electric field is in the z-direction, the Stark interaction is $$E_zd_z = E_z\sum_q \mathcal{D}_{0q}^1(\omega) T_{q}^1(d)$$ where I transformed the dipole moment from the lab to the molecule frame, with $$d = d_{el}(r)+d_{nucl}(R)$$ where ##d_{el}(r) = -er##, with ##r## the location of the electron, and ##d_{nucl}(R) = e(Z_1-Z_2)R##, with ##R## the internuclear distance. I will just write ##\mathcal{D}## instead of ##\mathcal{D}_{0q}^1(\omega)## from now on. Calculating the effective Hamiltonian to first order in PT, as in B&C, I would get $$E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{el}+d_{nucl})|\eta>|j> = $$ $$E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{el})|\eta>|j> + E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{nucl})|\eta>|j> = $$ $$E_z\sum_q<i|\mathcal{D}<\eta|T_q^1(d_{el})|\eta>|j> + E_z\sum_q<i|\mathcal{D}T_q^1(d_{nucl})<\eta|\eta>|j> $$ Using ##<\eta|T_q^1(d_{el})|\eta>=0## due to parity arguments, and ##<\eta|\eta>=1## due to the orthonormality of the electronic wavefunctions, we get: $$E_z\sum_q<i|\mathcal{D}T_q^1(d_{nucl})|j> $$ Given that ##d_{nucl}=e(Z_1-Z_2)R## and ##R## is defined along the z-axis of the molecule frame, only the ##q=0## component survives, so in the end we get $$e(Z_1-Z_2)E_z<i|\mathcal{D}R|j> = $$ $$e(Z_1-Z_2)E_z<vib_i|R|vib_i><rot_i|\mathcal{D}_{00}^1(\omega)|rot_j> $$ Given that I made no assumption about the rotational basis (it can be Hund's case (a) or (b) without affecting the derivation), I can drop the rotational expectation value and leave that part as an operator, so in the end the first-order PT effective term coming from the Stark effect is $$e(Z_1-Z_2)E_z<vib_i|R|vib_i>\mathcal{D}_{00}^1(\omega) $$ so basically the effective operator is a Wigner matrix component. Is my derivation right? Thank you!
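Assuming an effective operator of that form, its consequences are easy to play with numerically. A toy sketch (all numbers hypothetical) of the Stark interplay with a ##\Lambda##-doublet: ##\mathcal{D}_{00}^{1}## is diagonal in ##\Omega## but odd under parity, so it connects the two opposite-parity doublet components with matrix element proportional to ##M\Omega/J(J+1)##, giving a quadratic-to-linear Stark shift:

```python
import numpy as np

# Toy Stark map of a Lambda-doublet (all numbers hypothetical):
# opposite-parity components split by Delta; the effective operator
# above couples them with <D^1_00> = M*Omega/(J*(J+1)).
J, M, Omega = 0.5, 0.5, 0.5
Delta = 10.0    # Lambda-doubling splitting (assumed units)
dE = 100.0      # (vibronic dipole element) x E_z, same units (assumed)

W = dE * M * Omega / (J * (J + 1))      # Stark coupling element
H = np.array([[+Delta/2, W], [W, -Delta/2]])
E = np.linalg.eigvalsh(H)
print(E[1] - E[0])    # 2*sqrt((Delta/2)**2 + W**2): quadratic for
                      # dE << Delta, linear for dE >> Delta
```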
 
