Effective molecular Hamiltonian and Hund cases

In summary, the effective Hamiltonian is built by solving the Schrödinger equation at fixed internuclear distance for the electrostatic potential, then adding perturbative terms that are off-diagonal in the electronic wavefunctions. These perturbative expansions create an effective Hamiltonian for each electronic level, hiding the off-diagonal interactions in effective constants that lift the degeneracy of the rotational levels within a given electronic level. The basis for the rotational levels is usually a Hund's case basis, and when you fit data to the effective Hamiltonian, you use the energy differences between the rotational levels to extract ##B## and ##\gamma##.
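The "hiding in an effective constant" step can be made concrete with a two-level toy model (my own sympy sketch, not a calculation from B&C): diagonalize a 2x2 Hamiltonian with an off-diagonal electronic coupling ##V## and expand in small ##V##; the coupling collapses into a second-order shift of each level.

```python
import sympy as sp

# Toy model: two electronic levels separated by Delta > 0, coupled by
# an off-diagonal matrix element V (e.g. a rotation-electronic or
# spin-orbit coupling).
E0, Delta, V = sp.symbols('E0 Delta V', positive=True)
H = sp.Matrix([[E0, V], [V, E0 + Delta]])

# Expand the exact eigenvalues to O(V**4) in the small coupling V.
eigs = list(H.eigenvals().keys())
expansions = [sp.expand(sp.series(e, V, 0, 4).removeO()) for e in eigs]
print(expansions)
# The roots are E0 - V**2/Delta and E0 + Delta + V**2/Delta: the
# off-diagonal interaction has been folded into a second-order shift,
# the kind of contribution hidden inside the effective constants.
```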
  • #36
Ah yeah, I feel like I should've said this for context, that B&C's derivation really only seems valid for cases (a) and (b), as @amoforum said. This is also the same as what @BillKet is saying when they assumed that ##H_{el}## dominates over ##H_{SO}##. That's why B&C can talk about the 0-th order ket being ##|\eta,\Lambda\rangle## because ##\Lambda## is good for cases (a) and (b) only. B&C doesn't discuss case (c) until chapter 10 section 7, where they derive a new expression for ##H_{rot}## in subsection b. My bad!
BillKet said:
##\Lambda## is a good quantum number for the electronic energy regardless of the Hund case
Sorry, this isn't true. All the Hund's cases aside from (a) and (b) have ##H_{el}## as a perturbation, not a 0th-order term, so ##\Lambda## is not good for (c) through (e). However, you will still see people assign term symbols to case (c) through (e) states. I couldn't tell you how good or bad those descriptions are, just that they're not ideal. This is why in that thesis I linked Paul Hamilton was careful to describe the eEDM state of PbO, which is a case (c) state, as a(1) instead of by its term symbol ##^3 \Sigma_1##. ("a" is just an arbitrary label like "X,A,B,C..." but implies that the state has a higher degeneracy than the X state, and the 1 in parentheses refers to ##\Omega##). I can understand where you're coming from, because B&C almost exclusively talk about (a) and (b) cases.
 
  • #37
Twigg said:
Ah yeah, I feel like I should've said this for context, that B&C's derivation really only seems valid for cases (a) and (b), as @amoforum said. This is also the same as what @BillKet is saying when they assumed that ##H_{el}## dominates over ##H_{SO}##. That's why B&C can talk about the 0-th order ket being ##|\eta,\Lambda\rangle## because ##\Lambda## is good for cases (a) and (b) only. B&C doesn't discuss case (c) until chapter 10 section 7, where they derive a new expression for ##H_{rot}## in subsection b. My bad!
Sorry, this isn't true. All the Hund's cases aside from (a) and (b) have ##H_{el}## as a perturbation, not a 0th-order term, so ##\Lambda## is not good for (c) through (e). However, you will still see people assign term symbols to case (c) through (e) states. I couldn't tell you how good or bad those descriptions are, just that they're not ideal. This is why in that thesis I linked Paul Hamilton was careful to describe the eEDM state of PbO, which is a case (c) state, as a(1) instead of by its term symbol ##^3 \Sigma_1##. ("a" is just an arbitrary label like "X,A,B,C..." but implies that the state has a higher degeneracy than the X state, and the 1 in parentheses refers to ##\Omega##). I can understand where you're coming from, because B&C almost exclusively talk about (a) and (b) cases.
Thank you for your reply! I see what you mean. However, when I said "##\Lambda## is a good quantum number for the electronic energy regardless of the Hund case", what I meant is that if ##H_{el}>>H_{SO}## and hence we choose ##H_0=H_{el}##, then ##\Lambda## is a good quantum number. But whichever Hund's case we use as a basis for the rotational levels, it won't change the fact that ##\Lambda## is a good quantum number. For example we can use as the 0th-order basis ##|\eta,\Lambda>|Hund \ case \ a>## or ##|\eta,\Lambda>|Hund \ case \ c>##. In both cases ##\Lambda## is a good quantum number at the electronic level, and what basis we choose for the rotational part won't change that; it will just make calculations easier or harder.

What I am trying to say/ask is that whether ##\Lambda## is a good quantum number has nothing to do with the Hund's case we choose. For example, if we want to see whether ##H_{el}## or ##H_{SO}## is bigger, as far as I understand, we look at the energy level spacing from theoretical (e.g. Coupled Cluster) calculations, and based on the magnitude of the difference between different energy levels, we can get an idea of what ##H_{0}## we should choose. But that doesn't involve Hund's cases at any point, i.e. these theoretical calculations don't look at the rotation at all; they fix R, calculate the electronic energy for fixed nuclei, and repeat this for several R (internuclear distances). So for example if, based on these calculations, the energy of a ##\Pi## state is much bigger than the splitting between ##\Pi_{1/2}## and ##\Pi_{3/2}##, we know that ##H_{el}>>H_{SO}##, so ##\Lambda## is a good quantum number in that electronic state. Now if we want to look at the rotational spectrum of that electronic level, we usually choose a Hund's case (a) basis in this case, but given that they are complete bases, Hund's case (c) or Hund's case (e) would work just as well mathematically (they just wouldn't be very easy to diagonalize in). If on the other hand the splitting between ##\Pi_{1/2}## and ##\Pi_{3/2}## were much bigger than the energy of the ##\Pi## state relative to the ground state, ##\Lambda## wouldn't be a good quantum number anymore, but again, that has nothing to do with Hund's cases. And in this case Hund's case (c) would be ideal, but we could use Hund's case (a) or (b), too. Is that right?
 
  • #38
BillKet said:
What I am trying to say/ask is that whether ##\Lambda## is a good quantum number has nothing to do with the Hund's case we choose. For example, if we want to see whether ##H_{el}## or ##H_{SO}## is bigger, as far as I understand, we look at the energy level spacing from theoretical (e.g. Coupled Cluster) calculations, and based on the magnitude of the difference between different energy levels, we can get an idea of what ##H_{0}## we should choose. But that doesn't involve Hund's cases at any point

Hund's cases and sets of good quantum numbers are synonymous. Both electronic and rotational.

Why choose one ##H_{0}## over another? You're going to choose the one with the most diagonal terms, right? I.e., you'll choose an ##H_{0}## that will require the lowest-order perturbation theory to reproduce the data. That ##H_{0}## will have the most good quantum numbers, and that set of quantum numbers is a Hund's case.
 
  • #39
I think I see what you mean BillKet. You're distinguishing between the use of Hund's cases as a basis and the use of Hund's cases to (approximately) describe the eigenstates, right? So when you say that the hierarchy ##H_{el} > H_{SO}## has nothing to do with the Hund's cases, what you mean is that it doesn't force you to use a particular basis, yeah? Correct me if I'm understanding your point wrong.

In this case, yes, what you are saying is mathematically true. No one will force you to use any particular basis, but in practice the jargon is different. The jargon dictates that the basis you choose, the Hund's case, and the hierarchy of the Hamiltonian are all synonymous. This is just because molecules are so ornery to deal with we all follow the path of least resistance quite religiously. People will misunderstand what you're saying if you use the phrase "Hund's case" to describe a basis that isn't tailored to the energy scales of a particular molecular state.

Edit: computationally, it may be convenient to use a different Hund's case as a basis, like what amoforum was saying early on about computational software. But 90% of the time, the rules I describe above govern how people use these terms (especially with us pea-brained experimentalists!). Not trying to force anything on you, just trying to make it easier on you to communicate and get answers to your questions. :oldsmile:
 
  • #40
Twigg said:
I think I see what you mean BillKet. You're distinguishing between the use of Hund's cases as a basis and the use of Hund's cases to (approximately) describe the eigenstates, right? So when you say that the hierarchy ##H_{el} > H_{SO}## has nothing to do with the Hund's cases, what you mean is that it doesn't force you to use a particular basis, yeah? Correct me if I'm understanding your point wrong.

In this case, yes, what you are saying is mathematically true. No one will force you to use any particular basis, but in practice the jargon is different. The jargon dictates that the basis you choose, the Hund's case, and the hierarchy of the Hamiltonian are all synonymous. This is just because molecules are so ornery to deal with we all follow the path of least resistance quite religiously. People will misunderstand what you're saying if you use the phrase "Hund's case" to describe a basis that isn't tailored to the energy scales of a particular molecular state.

Edit: computationally, it may be convenient to use a different Hund's case as a basis, like what amoforum was saying early on about computational software. But 90% of the time, the rules I describe above govern how people use these terms (especially with us pea-brained experimentalists!). Not trying to force anything on you, just trying to make it easier on you to communicate and get answers to your questions. :oldsmile:
Thanks a lot! That's exactly what I meant, and now I see the confusion I created, too. It should be easier to ask my questions from now on hopefully (sorry for all this mess!).

So I actually have a quick question about ##\Lambda##-doubling. If I understand it right, in deriving the terms of the effective H (I will stick to Hund's case a from now on, as B&C do), we calculated expectation values of the form ##<\eta\Lambda|O|\eta\Lambda>##, for the same ##\Lambda## on both sides. For example, in second-order PT, using the rotational and SO Hamiltonians, the resulting term would be, for ##\Lambda = 1## (I will ignore some summations and some coefficients): $$<\eta\Lambda=1|{H_{rot}}_{+1}^{1}|\eta'\Lambda=0><\eta'\Lambda=0|{H_{SO}}_{-1}^{1}|\eta\Lambda=1>$$ where ##\pm 1## refers to the components of ##L##. For the ##\Lambda##-doubling, I imagined we would now have something of the form: $$<\eta\Lambda=1|{H_{rot}}_{+1}^{1}|\eta'\Lambda=0><\eta'\Lambda=0|{H_{SO}}_{+1}^{1}|\eta\Lambda=-1>$$ such that we connect different components of ##\Lambda## in the same electronic state. However, I am confused by the final results in B&C. For this term (##H_{rot}+H_{SO}##) they obtain ##p(R)##, which is of the form $$<\eta\Lambda=1|{H_{rot}}_{+1}^{1}|\eta'\Lambda=0><\eta'\Lambda=0|{H_{SO}}_{-1}^{1}|\eta\Lambda=1>$$ just as in the case of the fine structure, i.e. the derivation before the ##\Lambda##-doubling. On the other hand, ##o(R)^{(1)}## connects ##\Lambda=1## with ##\Lambda=-1##, as I would expect. So I am confused. Shouldn't we have something connecting ##\Lambda=1## and ##\Lambda=-1## for all 3 coefficients? Another thing that I am confused about is the ##e^{-2iq\phi}## term. They claim its role is to ensure that only matrix elements connecting ##\Lambda=1## with ##\Lambda=-1## are nonzero. But that implies that we need to calculate an electronic expectation value again. However, I thought that once we got the coefficients p, q and o, we are done with the electronic wavefunction, and all we need to calculate are rotational expectation values.
Why do we still have a term that is explicitly electronic even after calculating the electronic expectation value? Shouldn't it be included in the p, q and o terms? Thank you!
 
  • #41
Again, no need to apologize!

This is getting further out of my comfort zone, so take my posts with a grain of salt. But one thing sticks out to me immediately. If you only had terms that connected ##\Lambda = +1## to ##\Lambda = -1##, then you'd wind up with both parity states being shifted. Since only one parity state is shifted by ##\Lambda##-doubling, there has to be a diagonal matrix element to hold one of the states still. (This is all since the ##\Sigma## state only ##\Lambda##-doubles the ##\Pi## state with the same parity as itself.)

Even though ##e^{-2iq\phi}## contains an electron coordinate, doesn't it look like an eigenstate of ##L_z## to you? It really doesn't require you to re-do the wavefunction contraction because that part of the motion is contained in the angular momentum quantum numbers.
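A quick check of that statement (my own sketch, keeping only the azimuthal factor ##e^{i\Lambda\phi}## of the electronic wavefunction): the ##\phi## integral kills every matrix element of ##e^{-2i\phi}## except the one taking ##\Lambda = +1## to ##\Lambda = -1##, so no new electronic integral is needed.

```python
import sympy as sp

phi = sp.symbols('phi', real=True)

def azimuthal_overlap(Lam_bra, Lam_ket):
    # <Lam_bra| e^{-2 i phi} |Lam_ket>, keeping only the e^{i Lam phi}
    # azimuthal factor of the electronic wavefunction (a sketch)
    integrand = (sp.exp(-sp.I*Lam_bra*phi)
                 * sp.exp(-2*sp.I*phi)
                 * sp.exp(sp.I*Lam_ket*phi))
    return sp.integrate(integrand, (phi, 0, 2*sp.pi)) / (2*sp.pi)

print(azimuthal_overlap(-1, +1))  # 1: Lambda = +1 -> -1 survives
print(azimuthal_overlap(+1, +1))  # 0: diagonal element vanishes
```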
 
  • #42
Twigg said:
Again, no need to apologize!

This is getting further out of my comfort zone, so take my posts with a grain of salt. But one thing sticks out to me immediately. If you only had terms that connected ##\Lambda = +1## to ##\Lambda = -1##, then you'd wind up with both parity states being shifted. Since only one parity state is shifted by ##\Lambda##-doubling, there has to be a diagonal matrix element to hold one of the states still. (This is all since the ##\Sigma## state only ##\Lambda##-doubles the ##\Pi## state with the same parity as itself.)

Even though ##e^{-2iq\phi}## contains an electron coordinate, doesn't it look like an eigenstate of ##L_z## to you? It really doesn't require you to re-do the wavefunction contraction because that part of the motion is contained in the angular momentum quantum numbers.
Hmmm I will try to derive that expression explicitly. However, I am actually confused about how you apply the effective operators in practice. From reading some other papers and the PGOPHER documentation, for example in a ##^2\Pi_{1/2}## electronic state, the only operator that matters is ##-\frac{1}{2}(p+2q)(J_+ S_+ + J_-S_-)##, and it can be shown that in this electronic state the splitting due to lambda doubling is ##\Delta E = \pm (p+2q)(J+1/2)##, depending on the sign of the spin-orbit constant, ##A##. I tried to reproduce this, but I am missing something. The eigenstates for a given ##J## are: $$|J\Omega\pm> = \frac{|\Lambda = 1, S = 1/2, \Sigma = -1/2, J, \Omega = 1/2>\pm|\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2>}{\sqrt{2}}$$ (I will ignore the denominator from now on). So if I calculate the expectation value of this operator I would get $$\pm<\Lambda = 1, S = 1/2, \Sigma = -1/2, J, \Omega = 1/2|(J_+ S_+ + J_-S_-)|\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2>$$ $$\pm<\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2|(J_+ S_+ + J_-S_-)|\Lambda = 1, S = 1/2, \Sigma = -1/2, J, \Omega = 1/2>$$ But this seems to be equal to zero. For example, for $$(J_+ S_+ + J_-S_-)|\Lambda = -1, S = 1/2, \Sigma = 1/2, J, \Omega = -1/2>$$ we have ##S_+|\Sigma=1/2>=0##, as that is the maximum value of the spin projection already, and ##J_-|\Omega = -1/2>=0##, as that is the minimum value ##\Omega## can take in the ##^2\Pi_{1/2}## state. I am obviously missing something but I don't know what.
 
  • #43
Ah sorry, I was just trying to answer this question:
BillKet said:
Shouldn't we have something connecting ##\Lambda = 1## and ##\Lambda = -1## for all 3 coefficients?
The reason I think there end up being terms that are diagonal in ##\Lambda## is what I was saying above, that you need some diagonal terms to pin one state. The expressions are already in B&C so you don't need to re-derive them. Did I misunderstand the original question?

Hmmm, I see what you're saying about ##J_+ S_+ + J_- S_-##. I'll try to reproduce these results on my own as well, but I might be a little slow
 
  • #44
Twigg said:
Ah sorry, I was just trying to answer this question:

The reason I think there end up being terms that are diagonal in ##\Lambda## is what I was saying above, that you need some diagonal terms to pin one state. The expressions are already in B&C so you don't need to re-derive them. Did I misunderstand the original question?

Hmmm, I see what you're saying about ##J_+ S_+ + J_- S_-##. I'll try to reproduce these results on my own as well, but I might be a little slow
Oh I meant I will try to derive them myself to see if I understand where each term comes from. I am actually not sure I see where that parity term ##(-1)^s## comes from. As far as I can tell, for the q term for example, the expression should be derived from the terms ignored in 7.84 (##L_-L_-## and ##L_+L_+##), and I don't see a ##(-1)^s## coming out of there right away. Thank you for taking a look into that derivation!
 
  • #45
BillKet said:
and ##J_-|\Omega = -1/2>=0##, as that is the minimum value ##\Omega## can take in the ##^2\Pi_{1/2}## state. I am obviously missing something but I don't know what.

This is also reaching the boundaries of my comfort zone (apologies in advance), but my understanding is that the selection rules for ##J^\pm## are ##\Delta \Omega = \mp 1##. ##J^-## and ##J^+## are:

##J^-|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega + 1)}|(\Omega + 1)J>## and
##J^+|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega - 1)}|(\Omega - 1) J>##

so ##J_-|\Omega = -1/2>## shouldn't equal zero. It's acting outside the ##^2\Pi_{1/2}## state, as expected for a higher order interaction.
 
  • #46
amoforum said:
This is also reaching the boundaries of my comfort zone (apologies in advance), but my understanding is that the selection rules for ##J^\pm## are ##\Delta \Omega = \mp 1##. ##J^-## and ##J^+## are:

##J^-|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega + 1)}|(\Omega + 1)J>## and
##J^+|\Omega J> = \hbar\sqrt{J(J + 1) - \Omega(\Omega - 1)}|(\Omega - 1) J>##

so ##J_-|\Omega = -1/2>## shouldn't equal zero. It's acting outside the ##^2\Pi_{1/2}## state, as expected for a higher order interaction.
Thank you for your reply. I was wondering whether I can actually go outside ##\Omega = -1/2## and make matrix elements with the ##\Omega = -3/2## state, too, but there are at least 2 things that confuse me about it. First of all, if I am in the ground rotational state of ##^2\Pi_{1/2}##, I have ##J=1/2##, so in that state ##\Omega = -3/2## doesn't exist, as ##|\Omega| \le J##, but in practice the ##J=1/2## state gets split by the lambda doubling. The other thing is that even if I get ##\Omega = -3/2## in the ket, the bra part still has ##\Omega = \pm 1/2##, so in the end won't I get zero for the expectation value?
 
  • #47
BillKet said:
Thank you for your reply. I was wondering whether I can actually go outside ##\Omega = -1/2## and make matrix elements with the ##\Omega = -3/2## state, too, but there are at least 2 things that confuse me about it. First of all, if I am in the ground rotational state of ##^2\Pi_{1/2}##, I have ##J=1/2##, so in that state ##\Omega = -3/2## doesn't exist, as ##|\Omega| \le J##, but in practice the ##J=1/2## state gets split by the lambda doubling. The other thing is that even if I get ##\Omega = -3/2## in the ket, the bra part still has ##\Omega = \pm 1/2##, so in the end won't I get zero for the expectation value?

Actually, sorry, my original answer is definitely the result of my rustiness. Let me review first and get back to you.

Edit:

I remember the ##\Lambda##-doubling being tricky for the ground state, so good question!

From Lefebvre-Brion, starting from Eq. 3.5.23:

The splitting in ##^2\Pi_{1/2}## is usually caused by ##\Delta \Omega = 0## interactions with a nearby ##^2\Sigma## state, via the spin-orbit and spin-electronic (##L^+S^-##). Furthermore, the nearby ##^2\Sigma## state doesn't have ##\Lambda##-doubling, so it only has a single state of ##\Omega = -\frac{1}{2}##. And that state will couple to the ##^2\Pi_{1/2}## state's ##\Omega = \frac{1}{2}## via the ##L##-uncoupling operator: ##J^-L^+##, which acts via ##\Delta \Omega = \pm 1##.
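That mechanism (the ##^2\Sigma## partner pushing on only one parity combination of the ##^2\Pi_{1/2}## pair) can be illustrated with a toy 3-level numeric model. The numbers are entirely made up, and the equal couplings just stand in for the combined spin-orbit / ##L##-uncoupling pathway:

```python
import numpy as np

# Toy basis: [Pi(Omega=+1/2), Pi(Omega=-1/2), Sigma], with the Sigma
# level Delta above the degenerate Pi pair.  It couples to both Pi
# components with the same strength v, so only the symmetric
# (one-parity) combination (Pi_+ + Pi_-)/sqrt(2) talks to it.
v, Delta = 0.1, 10.0
H = np.array([[0.0, 0.0, v],
              [0.0, 0.0, v],
              [v,   v,   Delta]])

E = np.linalg.eigvalsh(H)      # eigenvalues in ascending order
print(E[0], E[1])
# E[1] = 0: the antisymmetric combination is untouched.
# E[0] ~ -2*v**2/Delta: the other parity component is pushed down,
# which is the Lambda-doubling pattern described above.
print(-2*v**2/Delta)
```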

Many times I couldn't understand what B&C were doing, or needed more context, so I turned to other literature describing the same thing. Let me know if you'd like me to send you this book, by the way.
 
  • #48
BillKet said:
As far as I can tell, for the q term for example, the expression should be derived from the terms ignored in 7.84 (##L_-L_-## and ##L_+L_+##), and I don't see a ##(-1)^s## coming out of there right away.
I think it's just that you pick a convention for q being the splitting of ##E_+ - E_-##, where those are the energies of ##+## and ##-## parity states. So if you interact with a ##^2\Sigma^+## state, it'll shift the energy in a certain direction. If you interact with a ##^2\Sigma^-## state, it'll shift in the same direction, but q is still defined as ##E_+ - E_-##, so now the splitting is -q. (I think!?)

If in reality the interaction really is reversed for ##^2\Sigma^-## states, then I also don't know where that comes from off the top of my head.
 
  • #49
amoforum said:
I think it's just that you pick a convention for q being the splitting of ##E_+ - E_-##, where those are the energies of ##+## and ##-## parity states. So if you interact with a ##^2\Sigma^+## state, it'll shift the energy in a certain direction. If you interact with a ##^2\Sigma^-## state, it'll shift in the same direction, but q is still defined as ##E_+ - E_-##, so now the splitting is -q. (I think!?)

If in reality the interaction really is reversed for ##^2\Sigma^-## states, then I also don't know where that comes from off the top of my head.
Thank you! So I was able to get Lefebvre-Brion through my university, but something in their derivations is weird. For example, in equation 3.5.10 they calculate the following matrix element (I will ignore the irrelevant terms):

$$<\Sigma+1,\Omega+1|J^-S^+|\Sigma,\Omega>$$

The way ##S^+## is applied makes sense, but the ##J^-## confuses me. Naturally I would apply it on the right and I would get ##\Omega-1##, but somehow they get a non-zero matrix element, so it almost looks like they apply the ##J^-## to the left, such that ##\Omega+1## becomes ##\Omega## and thus the matrix element doesn't vanish. But I don't understand how they do it. The fact that it is written on the left doesn't mean you apply it to the left side. If you wanted to apply it there, it would actually become a ##J^+##, which again would give a zero matrix element. Do you know what they do? It doesn't seem to be a typo, as they do the same thing in other places, too (e.g. 3.5.19).
 
  • #50
BillKet said:
Thank you! So I was able to get Lefebvre-Brion through my university, but something in their derivations is weird. For example, in equation 3.5.10 they calculate the following matrix element (I will ignore the irrelevant terms):

$$<\Sigma+1,\Omega+1|J^-S^+|\Sigma,\Omega>$$

The way ##S^+## is applied makes sense, but the ##J^-## confuses me. Naturally I would apply it on the right and I would get ##\Omega-1##, but somehow they get a non-zero matrix element, so it almost looks like they apply the ##J^-## to the left, such that ##\Omega+1## becomes ##\Omega## and thus the matrix element doesn't vanish. But I don't understand how they do it. The fact that it is written on the left doesn't mean you apply it to the left side. If you wanted to apply it there, it would actually become a ##J^+##, which again would give a zero matrix element. Do you know what they do? It doesn't seem to be a typo, as they do the same thing in other places, too (e.g. 3.5.19).

Take a second look at the definitions of ##J^\pm## that I wrote a few posts above. ##J^-## really does do the opposite of what normal ##+/-## operators do, in that it couples ##|\Omega + 1>## instead of ##|\Omega - 1>##. So they are using it correctly, and the reason is not at all trivial.

It has to do with operators referenced to molecule-fixed coordinates obeying anomalous commutation relations as compared to space-fixed ones. See Lefebvre-Brion/Field Section 2.3.1.

B&C also talk about this in detail (section 5.5.6), but they do it as part of a giant chapter on angular momentum, so I wouldn't recommend referencing that section just for the sake of getting to the bottom of this particular issue. If you have a lot of time on your hands, or at some point you want to derive all the matrix elements from scratch like B&C do in the later chapters, then I very highly recommend starting from the beginning of Chapter 5.
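With those anomalous definitions, the ##^2\Pi_{1/2}## example from post #42 does go through. Here is a sympy sketch (my own, using the ladder coefficients quoted a few posts up; treat the overall phase conventions as assumptions): ##J^+S^+## takes ##|\Lambda=+1,\Sigma=-1/2,\Omega=+1/2\rangle## to ##|\Lambda=-1,\Sigma=+1/2,\Omega=-1/2\rangle## (##\Delta\Omega=-1##, ##\Delta\Sigma=+1##, hence ##\Delta\Lambda=-2##), and the splitting comes out ##(p+2q)(J+1/2)##.

```python
import sympy as sp

J, p, q = sp.symbols('J p q', positive=True)
half = sp.Rational(1, 2)

# Molecule-frame (anomalous) ladder coefficients: J^+ LOWERS Omega,
# J^- raises it; S^+ raises Sigma as usual.
def Jplus_coeff(Jv, Om):   # <Om - 1| J^+ |Om>
    return sp.sqrt(sp.factor(Jv*(Jv + 1) - Om*(Om - 1)))

def Splus_coeff(S, Sig):   # <Sig + 1| S^+ |Sig>
    return sp.sqrt(S*(S + 1) - Sig*(Sig + 1))

# <Lam=-1, Sig=+1/2, Om=-1/2| J^+ S^+ |Lam=+1, Sig=-1/2, Om=+1/2>;
# J^- S^- supplies the Hermitian-conjugate element.
me = Jplus_coeff(J, half) * Splus_coeff(half, -half)
off_diag = -half*(p + 2*q)*me

# 2x2 Lambda-doubling matrix in the {Lam=+1, Lam=-1} basis
Hld = sp.Matrix([[0, off_diag], [off_diag, 0]])
eigs = list(Hld.eigenvals().keys())
splitting = sp.simplify(sp.Abs(eigs[0] - eigs[1]))
print(splitting)   # equals (p + 2*q)*(J + 1/2)
```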
 
  • #51
amoforum said:
Take a second look at the definitions of ##J^\pm## that I wrote a few posts above. ##J^-## really does do the opposite of what normal ##+/-## operators do, in that it couples ##|\Omega + 1>## instead of ##|\Omega - 1>##. So they are using it correctly, and the reason is not at all trivial.

It has to do with operators referenced to molecule-fixed coordinates obeying anomalous commutation relations as compared to space-fixed ones. See Lefebvre-Brion/Field Section 2.3.1.

B&C also talk about this in detail (section 5.5.6), but they do it as part of a giant chapter on angular momentum, so I wouldn't recommend referencing that section just for the sake of getting to the bottom of this particular issue. If you have a lot of time on your hands, or at some point you want to derive all the matrix elements from scratch like B&C do in the later chapters, then I very highly recommend starting from the beginning of Chapter 5.
Ahh I see, and now my derivation from above makes sense and it actually gives the expected results. Thanks a lot!
 
  • #52
BillKet said:
Ahh I see, and now my derivation from above makes sense and it actually gives the expected results. Thanks a lot!
Hello again! I looked a bit at some actual molecules and I noticed that for ##\Pi_{1/2}## states we don't have a spin-rotation coupling, i.e. ##\gamma NS##. As far as I can tell the effective operator shouldn't be zero in all cases in a Hund case a basis, and the parameter ##\gamma## (7.110 in B&C) doesn't appear obviously to be zero in the ##\Lambda=1## case. I read in Lefebvre-Brion/Field something stating that spin-rotation coupling in a ##\Sigma## state and lambda doubling in the ##\Pi## state actually come together, hence somehow this spin-rotation coupling in the ##\Pi## state is included in the lambda doubling already. But I am not totally sure about it. First of all, the equation I mentioned above doesn't say anything about ##\Lambda##. Is it by convention applied only to certain values of ##\Lambda##? More importantly, in that equation, spin-rotation appears at first order in PT, while the lambda doubling comes in at second order. So even at first order, I should see the spin-rotation effect in a ##\Pi## state, too. Why is that term ignored in the effective Hamiltonian of a ##\Pi## state? Thank you!
 
  • #53
BillKet said:
Hello again! I looked a bit at some actual molecules and I noticed that for ##\Pi_{1/2}## states we don't have a spin-rotation coupling, i.e. ##\gamma NS##. As far as I can tell the effective operator shouldn't be zero in all cases in a Hund case a basis, and the parameter ##\gamma## (7.110 in B&C) doesn't appear obviously to be zero in the ##\Lambda=1## case. I read in Lefebvre-Brion/Field something stating that spin-rotation coupling in a ##\Sigma## state and lambda doubling in the ##\Pi## state actually come together, hence somehow this spin-rotation coupling in the ##\Pi## state is included in the lambda doubling already. But I am not totally sure about it. First of all, the equation I mentioned above doesn't say anything about ##\Lambda##. Is it by convention applied only to certain values of ##\Lambda##? More importantly, in that equation, spin-rotation appears at first order in PT, while the lambda doubling comes in at second order. So even at first order, I should see the spin-rotation effect in a ##\Pi## state, too. Why is that term ignored in the effective Hamiltonian of a ##\Pi## state? Thank you!

I'm glad you caught that bit in Lefebvre-Brion/Field, which is one part of the story. In addition, take a look at Section 3.4.3, where they look at the first-order contributions for Hund's cases (a) and (b). Mentioned in the latter section, and in at least a couple of places in B&C, is that the second-order contribution is almost always the dominating term, except for very light molecules having high rotational energies.

Physically, to first order the spin-rotation interaction is super weak, because you're dependent on the nuclear magneton. But at second order, you'll start coupling in ##L##, which is a full Bohr magneton. So for a ##\Lambda > 0## state, like ##\Pi##, that's going to be more important.

I suspect that's why most literature doesn't dwell on the first order term for Hund's case (a).
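For a sense of scale in that argument (constant values quoted from memory, so treat the digits as approximate):

```python
# First-order spin-rotation rides on the nuclear magneton; the
# second-order route through L brings in a full Bohr magneton.
mu_B = 9.274e-24   # Bohr magneton, J/T (approximate)
mu_N = 5.051e-27   # nuclear magneton, J/T (approximate)

print(mu_B / mu_N)  # ~1.8e3: why the second-order term usually wins
```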
 
  • #54
amoforum said:
I'm glad you caught that bit in Lefebvre-Brion/Field, which is one part of the story. In addition, take a look at Section 3.4.3, where they look at the first-order contributions for Hund's cases (a) and (b). Mentioned in the latter section, and in at least a couple of places in B&C, is that the second-order contribution is almost always the dominating term, except for very light molecules having high rotational energies.

Physically, to first order the spin-rotation interaction is super weak, because you're dependent on the nuclear magneton. But at second order, you'll start coupling in ##L##, which is a full Bohr magneton. So for a ##\Lambda > 0## state, like ##\Pi##, that's going to be more important.

I suspect that's why most literature doesn't dwell on the first order term for Hund's case (a).
Thanks a lot! That makes sense! I have a quick question about adding an external magnetic field. Assume we care only about the electron spin and orbital angular momentum to first order in PT (at the electronic level, i.e. when building the effective H). The full (not effective) H for these interactions is $$g_L\mu_B B_z L_z + g_S\mu_B B_z S_z$$ I will ignore the coefficients from now on and just focus on the operators. Assume we are in a Hund case a. For the spin part, we don't have anything that connects different electronic levels, so the effective Hamiltonian for the spin-magnetic field interaction is the same as the full H, right? However I still need to account for these Wigner rotation matrices when calculating rotational matrix elements. For example, if I want to calculate something diagonal in ##\Sigma##, I only need the projection of S on the internuclear axis, but in the equation above ##S_z## is quantized in the lab frame, so the actual operator would be $$g_S\mu_B \mathcal{D}_{00}^{(1)}(\omega) B_z \Sigma$$ and for the full matrix element I would have to separate the lab and intrinsic parts and I would get something like: $$<J,M,\Omega|g_S\mu_B \mathcal{D}_{00}^{(1)}(\omega)|J,M,\Omega><\Lambda,S,\Sigma|T_{0}^1(S)|\Lambda,S,\Sigma>$$ For the orbital angular momentum part, I have to account for the electronic part, as the ##L_z## in the lab is not ##L_z## in the molecule frame, so I would need to keep only the ##T_{q=0}^1(L)## part for the first-order PT, and the matrix element here would be $$<J,M,\Omega|g_L\mu_B \mathcal{D}_{00}^{(1)}(\omega)|J,M,\Omega><\Lambda,S,\Sigma|T_{0}^1(L)|\Lambda,S,\Sigma>$$ Is this right? One more question: in B&C after equation 7.231 they list all the terms in the effective H due to the Stark effect, and for the orbital motion they have ##T_{p=0}^1(L)##. Should that p be a q?
If it is a p, as I mentioned above, that won't be diagonal at the electronic level in the molecule intrinsic frame, and we can't have that in the effective Hamiltonian. Thank you!
 
  • #55
The line of reasoning looks correct. Some details:

Read B&C section 5.5.5. Eq. 5.143 shows the relation between the space-fixed and molecule-fixed operators. So if B&C ever writes a ##T^1_p##, you know how to turn it into molecule-fixed coordinates.

In general you need to do two things: 1. rotate into the molecule frame (via equation 5.143). 2. Unpack the composite angular momenta to reach the one you wish to address. You want ##S##, but it's part of ##J##, which means you'll get Wigner 6-j symbols.

At this point I suggest you go through Chapter 5 carefully, and look at real examples. Eqns 9.56 and 9.58, for example, get what you want. Notice from the first line, the tensor operator conversion from space-fixed to molecule-fixed was already done. After that, three equations were used to get the rest: 1. Eq. 5.123 (Wigner-Eckart theorem) to get the 3j symbol involving ##M_F##, 2. Eq. 5.136 to extract ##S## out of ##J##, and 3. Eq. 5.146, the matrix elements of the rotation matrix (3j symbol with ##\Omega##).

This basis also deals with hyperfine (##F##), but that just comes down to applying Eq. 5.136 to extract ##J## out of ##F##.
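The "extract ##S## out of ##J##" step can be sanity-checked in its simplest form, not with the full 6-j machinery of Eq. 5.136, but via its simplest consequence: the projection theorem for a diagonal ##S_z## matrix element in a coupled ##|(L,S)J M\rangle## state. A sympy sketch (all function names are mine):

```python
from sympy import Rational, simplify
from sympy.physics.quantum.cg import CG

def Sz_explicit(L, S, J, M):
    # Expand |(L S) J M> over product states |L mL>|S mS>, sum |c|^2 * mS
    total = 0
    for k in range(int(2 * S) + 1):
        mS = S - k
        mL = M - mS
        if abs(mL) <= L:
            c = CG(L, mL, S, mS, J, M).doit()
            total += c**2 * mS
    return simplify(total)

def Sz_projection(L, S, J, M):
    # Projection (Lande) theorem: <S_z> = M * <S.J> / (J(J+1))
    return M * (J*(J + 1) + S*(S + 1) - L*(L + 1)) / (2 * J * (J + 1))

L_, S_, J_, M_ = 1, Rational(1, 2), Rational(3, 2), Rational(1, 2)
assert Sz_explicit(L_, S_, J_, M_) == Sz_projection(L_, S_, J_, M_)  # both 1/6
```

The 6-j symbols in Eq. 5.136 encode exactly this kind of recoupling, just in reduced-matrix-element form.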
 
  • #56
amoforum said:
The line of reasoning looks correct. Some details:

Read B&C section 5.5.5. Eq. 5.143 shows the relation between the space-fixed and molecule-fixed operators. So if B&C ever writes a ##T^1_p##, you know how to turn it into molecule-fixed coordinates.

In general you need to do two things: 1. rotate into the molecule frame (via equation 5.143). 2. Unpack the composite angular momenta to reach the one you wish to address. You want ##S##, but it's part of ##J##, which means you'll get Wigner 6-j symbols.

At this point I suggest you go through Chapter 5 carefully, and look at real examples. Eqns 9.56 and 9.58, for example, get what you want. Notice from the first line, the tensor operator conversion from space-fixed to molecule-fixed was already done. After that, three equations were used to get the rest: 1. Eq. 5.123 (Wigner-Eckart theorem) to get the 3j symbol involving ##M_F##, 2. Eq. 5.136 to extract ##S## out of ##J##, and 3. Eq. 5.146, the matrix elements of the rotation matrix (3j symbol with ##\Omega##).

This basis also deals with hyperfine (##F##), but that just comes down to applying Eq. 5.136 to extract ##J## out of ##F##.
Thanks for this! I went over chapter 5 and it makes more sense how that works. But my question still remains. He claims that we can have ##T_{p=0}^1(L)## explicitly in the effective Hamiltonian, which amounts to having ##T_{q=\pm 1}^1(L)## explicitly in the effective Hamiltonian. However, in a previous section he spends quite some time explaining why having ##R^2## in the effective Hamiltonian is not good, specifically because that implies having ##T_{q=\pm 1}^1(L)##. Shouldn't ##T_{q=\pm 1}^1(L)## be absorbed into effective parameters at second and higher orders of PT, leaving only ##T_{q=0}^1(L)## in the effective Hamiltonian as an operator?
 
  • #57
Haha okay the answer to this might be pretty funny, assuming I understand the question correctly:

Look at Eq. 7.217, which has a ##g_L## in it. Eqns 7.221 and 7.222 show the result of putting the off-diagonal terms into an effective parameter: ##g_l##. (Notice the capitalized vs. not.) In the equations after Eqn. 7.231, you're back to ##g_L##, where he hasn't done the procedure yet.
 
  • #58
amoforum said:
Haha okay the answer to this might be pretty funny, assuming I understand the question correctly:

Look at Eq. 7.217, which has a ##g_L## in it. Eqns 7.221 and 7.222 show the result of putting the off-diagonal terms into an effective parameter: ##g_l##. (Notice the capitalized vs. not.) In the equations after Eqn. 7.231, you're back to ##g_L##, where he hasn't done the procedure yet.
Hmm, that makes sense if that were the full Hamiltonian. But right after 7.231 he claims that he is listing the terms in the effective Hamiltonian. Shouldn't we get rid of the off-diagonal terms at that level? Also, one of the terms he lists is the "anisotropic correction to the electron spin interaction", which appears only after you build the effective Hamiltonian; it is not there in the original Hamiltonian (also, that term has ##g_l##). It almost looks like he is mixing terms from the original and effective Hamiltonians.
 
  • #59
Oh sorry, it looks like I did misunderstand (##g_l## is one order higher, starting with ##g_L##).

My understanding is that ##g_L## already is an effective parameter. In Eqn. 7.217 it's introduced as a perturbation to the full Hamiltonian. But he mentions right after that it can deviate from 1.0 after transforming into the effective Hamiltonian. To first order, it has exactly the same form, and because it's so close to 1.0, I believe he just keeps the same notation: ##g_L##.

So in an experiment, when you go to fit the spectrum: ##g_L## will not come out to 1 due to non-adiabatic mixing. Here's a paper where you can see them fitting to a value not equal to 1: (Wilton L. Virgo et al 2005 ApJ 628 567)

Edit: but I see your point. I'm trying to think of a case where it wouldn't just reduce to ##q = 0##. For example maybe ##\Omega## can change, but not ##\Lambda##. Let me think on this.

Edit2: Yes, I believe the above is correct. In going to the effective Hamiltonian, the ##g_LT^1_{p=0}(L)## term operates within a single ##|\Lambda>## state, but you can have mixing between ##\Omega## states.
 
Last edited:
  • #60
amoforum said:
Oh sorry, it looks like I did misunderstand (##g_l## is one order higher, starting with ##g_L##).

My understanding is that ##g_L## already is an effective parameter. In Eqn. 7.217 it's introduced as a perturbation to the full Hamiltonian. But he mentions right after that it can deviate from 1.0 after transforming into the effective Hamiltonian. To first order, it has exactly the same form, and because it's so close to 1.0, I believe he just keeps the same notation: ##g_L##.

So in an experiment, when you go to fit the spectrum: ##g_L## will not come out to 1 due to non-adiabatic mixing. Here's a paper where you can see them fitting to a value not equal to 1: (Wilton L. Virgo et al 2005 ApJ 628 567)

Edit: but I see your point. I'm trying to think of a case where it wouldn't just reduce to ##q = 0##. For example maybe ##\Omega## can change, but not ##\Lambda##. Let me think on this.

Edit2: Yes, I believe the above is correct. In going to the effective Hamiltonian, the ##g_LT^1_{p=0}(L)## term operates within a single ##|\Lambda>## state, but you can have mixing between ##\Omega## states.
So I tried to calculate what the effective Hamiltonian associated with the Stark effect looks like under this formalism, but I am not sure if what I am doing is right. Assume the wavefunction can be written as ##|\eta>|i>##, with ##|\eta>## the electronic (intrinsic) part and ##|i>## the vibrational and rotational part. Assuming the electric field is in the z-direction, the Stark interaction is $$E_zd_z = E_z\sum_q \mathcal{D}_{0q}^1(\omega) T_{q}^1(d)$$ where I transformed the dipole moment from the lab to the molecule frame, with $$d = d_{el}(r)+d_{nucl}(R)$$ where ##d_{el}(r) = -er##, with ##r## the location of the electron, and ##d_{nucl}(R) = e(Z_1-Z_2)R##, with ##R## the internuclear distance. I will just write ##\mathcal{D}## instead of ##\mathcal{D}_{0q}^1(\omega)## from now on. Calculating the effective Hamiltonian to first order in PT, as in B&C, I would get $$E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{el}+d_{nucl})|\eta>|j> = $$ $$E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{el})|\eta>|j> + E_z\sum_q<i|<\eta|\mathcal{D}T_q^1(d_{nucl})|\eta>|j> = $$ $$E_z\sum_q<i|\mathcal{D}<\eta|T_q^1(d_{el})|\eta>|j> + E_z\sum_q<i|\mathcal{D}T_q^1(d_{nucl})<\eta|\eta>|j> $$ Using ##<\eta|T_q^1(d_{el})|\eta>=0## due to parity arguments and ##<\eta|\eta>=1## due to orthonormality of the electronic wavefunctions we get: $$E_z\sum_q<i|\mathcal{D}T_q^1(d_{nucl})|j> $$ Given that ##d_{nucl}=e(Z_1-Z_2)R## and ##R## is defined along the z axis of the molecule frame, only the ##q=0## component survives, so in the end we get $$e(Z_1-Z_2)E_z<i|\mathcal{D}R|j> = $$ $$e(Z_1-Z_2)E_z<vib_i|R|vib_i><rot_i|\mathcal{D}_{00}^1(\omega)|rot_j> $$ Given that I didn't make any assumption about the rotational basis (it can be Hund case a or b without affecting the derivation), I can drop the rotational expectation value and leave that part as an operator, so in the end the first order PT effective term coming from the Stark effect is $$e(Z_1-Z_2)E_z<vib_i|R|vib_i>\mathcal{D}_{00}^1(\omega) $$ so basically the
effective operator is a Wigner matrix component. Is my derivation right? Thank you!
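For what it's worth, one can check that an effective operator of this form behaves as expected by numerically diagonalizing a toy ##^1\Sigma## rigid-rotor Stark Hamiltonian ##H = BJ(J+1) - dE_z\,\mathcal{D}^1_{00}##. The numbers (##B=1##, ##dE_z=0.1##) are hypothetical, and the matrix-element convention of B&C Eq. 5.146 is assumed:

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

def d00_me(J, Jp, M):
    # <J, Omega=0, M| D^1_{00} |J', Omega=0, M>, B&C Eq. 5.146 convention assumed
    return ((-1)**M * np.sqrt((2*J + 1) * (2*Jp + 1))
            * float(wigner_3j(J, 1, Jp, -M, 0, M))
            * float(wigner_3j(J, 1, Jp, 0, 0, 0)))

B, dE, M, Jmax = 1.0, 0.1, 0, 8              # hypothetical units, dE << B
Js = list(range(abs(M), Jmax + 1))
H = np.array([[B*J*(J + 1)*(J == Jp) - dE*d00_me(J, Jp, M)
               for Jp in Js] for J in Js])
E0 = np.linalg.eigvalsh(H)[0]

# 2nd-order PT: shift of J=0 is -(dE)^2 |<0|D^1_00|1>|^2 / (2B) = -(dE)^2/(6B)
assert E0 < 0 and abs(E0 + dE**2 / (6*B)) < 1e-4
```

Recovering the standard quadratic Stark shift ##-(dE_z)^2/(6B)## for the ground state suggests the ##\mathcal{D}^1_{00}## effective operator at least reproduces the known physics of a ##^1\Sigma## state.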
 
  • #61
I'm definitely out of my comfort zone here, so take this with a grain of salt. Your result says that the only Stark shift is due to the permanent dipole moment of the molecule, and I don't buy that. I think what's missing is the off-diagonal couplings between degenerate electronic levels, like a ##\Lambda##-doublet. There should be some polarizability there that scales inversely with the ##\Lambda##-doubling energy splitting, I think? I'm not 100% sure about the scaling, that's just something I think I remember reading in a review paper, but they were talking about ##\Omega##-doublets.

My handiness with Wigner algebra is crud, but the angular part looks right. Wolfram says ##D^{1}_{00}(\psi,\theta,\phi) = \cos \theta##, so it certainly seems reasonable.
 
  • #62
BillKet, I'm assuming you went through Section 6.11.6 in B&C? In that case, yes, you'll end up keeping the first term in equation 6.333 (for an intra-electronic transition; they mention the first term goes to zero for inter-electronic transitions). And then your matrix element is equation 6.331, where ##\cos\theta## is your Wigner matrix element.

Twigg, I believe the ##\Lambda##-doubling is contained in the rotational part of the matrix element. It shows up once you pick the basis set, which will include the parity eigenstates. And then when solving for the polarizability, that energy splitting will show up in the denominator, the energy of which is governed by the interaction that splits the parity eigenstates.
 
  • #63
amoforum said:
BillKet, I'm assuming you went through Section 6.11.6 in B&C? In that case, yes, you'll end up keeping the first term in equation 6.333 (for an intra-electronic transition; they mention the first term goes to zero for inter-electronic transitions). And then your matrix element is equation 6.331, where ##\cos\theta## is your Wigner matrix element.

Twigg, I believe the ##\Lambda##-doubling is contained in the rotational part of the matrix element. It shows up once you pick the basis set, which will include the parity eigenstates. And then when solving for the polarizability, that energy splitting will show up in the denominator, the energy of which is governed by the interaction that splits the parity eigenstates.
@Twigg @amoforum thank you for your replies. So I think I did make a mistake for the case of ##\Lambda \neq 0##, as there I should first calculate the matrix element for Hund case a and after that combine the Hund case a basis into parity eigenstates. I think I did it the other way around. I will look more closely into that. I also took a look over section 6.11.6, thank you for pointing me towards that. I actually have a quick question about the BO approximation now (unrelated to the EDM calculation). Before equation 6.333 they say: "We now make use of the Born–Oppenheimer approximation which allows us to separate the electronic and vibrational wave functions" and this is the typical statement you see in probably all books on molecular physics. And now I am wondering if I am missing something. Of course BO approximation allows that separation, but after reading the effective hamiltonian chapter it seems like that separation is always true, up to any desired order in PT. BO approximation is basically the zeroth order and that kind of statement implies that the separation is valid only under that very constraining assumption. Isn't that separation always true once we do these PT changes (isn't this the whole point of the effective Hamiltonian)? Along the same lines, I just wanted to make sure I understand how one goes from BO to Hund cases. In BO, one has a wavefunction of the form ##|\eta\nu J M>=|\eta>| \nu>Y_{J}^{M}(\theta,\phi)##, where ##|\eta>## is the electronic wavefunction (in the intrinsic frame of the molecule), ##|\nu>## is the vibrational wavefunction and ##Y_{J}^{M}(\theta,\phi)## is the spherical harmonic, showing the rotation of the molecule frame with respect to the lab frame. 
Then using an identity of the form ##Y_{J}^{M}(\theta,\phi)=\sum_{\Omega=-J}^{J}\sqrt{\frac{2J+1}{8\pi^2}}\mathcal{D}_{M\Omega}^{J*}(\theta,\phi)## (I might be off with that constant) we are able to get the Hund cases, which for case a, for example, based on this equation would become ##|\eta>|\nu>|J \Omega M>|S\Sigma>## where ##\mathcal{D}_{M\Omega}^{J*}(\theta,\phi)=|J \Omega M>## and ##|S\Sigma>## was pulled out by hand for completeness. Is this correct? Thank you!
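The ##Y_J^M \leftrightarrow \mathcal{D}## connection can be spot-checked numerically. Conventions (Euler-angle order, conjugation, Condon-Shortley phase) differ between texts, so the sketch below, which uses sympy's conventions rather than necessarily B&C's, only compares moduli: ##|Y_l^m(\theta,\phi)| = \sqrt{(2l+1)/4\pi}\,|\mathcal{D}^l_{m0}(\phi,\theta,0)|##.

```python
from sympy import Ynm, pi, sqrt, Abs, S
from sympy.physics.quantum.spin import Rotation

l, m = 1, 1
theta, phi = S(7)/10, S(3)/10        # arbitrary test angles

y = Ynm(l, m, theta, phi).expand(func=True).evalf()
# D^l_{m0}(phi, theta, 0) in sympy's convention: Rotation.D(j, m, mp, a, b, g)
d = Rotation.D(l, m, 0, phi, theta, 0).doit().evalf()

# Compare moduli only; the relative phase depends on the D-matrix convention
lhs = Abs(y)
rhs = sqrt(S(2*l + 1) / (4*pi)).evalf() * Abs(d)
assert abs(float(lhs - rhs)) < 1e-10
```

The ##m' = 0## column of ##\mathcal{D}^l## is the point: only when the "intrinsic" projection vanishes does the rotation matrix element collapse to a spherical harmonic, which is why the ##\Omega \neq 0## Hund's case a kets need the full ##\mathcal{D}^{J*}_{M\Omega}##.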
 
  • #64
Thanks for the clarification, @amoforum! And @BillKet, I'd actually be curious to see what you come up with for the stark shift, if you find the time. I tried to spend some time learning this once but my coworkers weren't having it and sent me back to mixing laser dye :doh: No pressure, of course!

As far as the BO approximation, when we did spectroscopy we didn't really keep a detailed effective Hamiltonian, we would just re-measure the lifetimes and rotational constants in other vibrational states if there was a need to do so. I think in molecules where the BO violation is weak, you can take this kind of pragmatic approach. Then again, we only thought about molecules with highly diagonal Franck-Condon factors so we never really ventured above ##\nu = 2## or so.
 
  • #65
@BillKet Yes, you assume the BO approximation first, then handle the non-adiabatic terms with perturbation theory. i.e. those parameters (or "constants") in the effective Hamiltonian.

As for your second question, that looks right, except I want to clarify: a spherical harmonic is actually a subset of the generalized rotation matrix elements (see Eq. 5.52 in B&C). More generally, you'd start with asymmetric top eigenfunctions (eigenfunctions of Eq. 5.58 in B&C), which for a diatomic would then reduce to symmetric top wavefunctions. B&C Section 5.3.4 might be helpful.
 
  • #66
amoforum said:
@BillKet Yes, you assume the BO approximation first, then handle the non-adiabatic terms with perturbation theory. i.e. those parameters (or "constants") in the effective Hamiltonian.

As for your second question, that looks right, except I want to clarify: a spherical harmonic is actually a subset of the generalized rotation matrix elements (see Eq. 5.52 in B&C). More generally, you'd start with asymmetric top eigenfunctions (eigenfunctions of Eq. 5.58 in B&C), which for a diatomic would then reduce to symmetric top wavefunctions. B&C Section 5.3.4 might be helpful.
@amoforum I looked in more detail at the derivation in B&C section 6.11.6 and I am actually a bit confused. Using spherical tensors for now, the transition between two electronic levels would be:

$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

The second term is zero because ##<\eta'|\eta> = 0##. But the first term is different from the one in B&C equation 6.331. First of all, unlike before (transitions within a given electronic state), ##d_{el}## has components in the intrinsic frame for ##q=\pm 1##, not only for ##q=0##, so that term is not just ##\cos\theta## anymore. Why do they ignore the other two terms? Also, the expectation value ##<\eta'|T_q^1(d_{el})|\eta>## is a function of ##R## (the electronic wavefunctions depend on ##R##), so we can't just take it out of the vibrational integral like B&C do in 6.332. What am I missing?
 
  • #67
BillKet said:
@amoforum I looked in more detail at the derivation in B&C section 6.11.6 and I am actually a bit confused. Using spherical tensors for now, the transition between two electronic levels would be:

$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

The second term is zero because ##<\eta'|\eta> = 0##. But the first term is different from the one in B&C equation 6.331. First of all, unlike before (transitions within a given electronic state), ##d_{el}## has components in the intrinsic frame for ##q=\pm 1##, not only for ##q=0##, so that term is not just ##\cos\theta## anymore. Why do they ignore the other two terms? Also, the expectation value ##<\eta'|T_q^1(d_{el})|\eta>## is a function of ##R## (the electronic wavefunctions depend on ##R##), so we can't just take it out of the vibrational integral like B&C do in 6.332. What am I missing?

As to your first question:

For an intra-electronic transition, you're coupling two electronic states that have exactly the same electron spatial distribution. Electron population is distributed symmetrically about the molecular axis, so there is no permanent dipole moment perpendicular to the molecular axis for the electric field to interact with. There might be at really short time scales, but then we're not in the Born-Oppenheimer regime anymore.

The same reasoning applies to inter-electronic transitions, except there you're coupling two electronic states that have different electron spatial distributions. Hence, there's usually a transition dipole moment to interact with (considering symmetry and all that).

As to your second question:

B&C haven't separated the electronic and vibrational integrals in Eq. 6.332 yet. They first apply the Born-Oppenheimer approximation, then separate them. The R-dependence shows up in Eq. 6.333.
 
  • #68
amoforum said:
As to your first question:

For an intra-electronic transition, you're coupling two electronic states that have exactly the same electron spatial distribution. Electron population is distributed symmetrically about the molecular axis, so there is no permanent dipole moment perpendicular to the molecular axis for the electric field to interact with. There might be at really short time scales, but then we're not in the Born-Oppenheimer regime anymore.

The same reasoning applies to inter-electronic transitions, except there you're coupling two electronic states that have different electron spatial distributions. Hence, there's usually a transition dipole moment to interact with (considering symmetry and all that).

As to your second question:

B&C haven't separated the electronic and vibrational integrals in Eq. 6.332 yet. They first apply the Born-Oppenheimer approximation, then separate them. The R-dependence shows up in Eq. 6.333.
For the first question:

The dipole moment, as an operator, has two components, ##d_{el}(r)## and ##d_{nucl}(R)##. When the transition is within the same electronic state, what we are left with is ##\sum_q <T_q^1(d_{nucl}(R))>##. But ##d_{nucl}(R)## has only a ##q=0## component, so there it is obvious why we drop the ##q=\pm 1## terms. But for transitions between two different electronic states, we are left with ##\sum_q <\eta|T_q^1(d_{el}(r))|\eta'>##. I am not sure why in this case, for example, ##<\eta|T_{\pm 1}^1(d_{el}(r))|\eta'>## would be zero; this is equivalent to ##<\eta|x|\eta'>=0## and ##<\eta|y|\eta'>=0##. Is it because of the cylindrical symmetry?
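On the cylindrical-symmetry point: for the diagonal case the ##\phi##-dependence alone kills the ##q=\pm 1## terms. Idealizing the electronic wavefunction as carrying a pure ##e^{i\Lambda\phi}## factor while ##T^1_q## carries ##e^{iq\phi}##, the azimuthal integral gives ##\delta_{\Lambda',\Lambda+q}##. A toy sympy check (the pure-exponential factorization is the simplifying assumption here):

```python
from sympy import symbols, integrate, exp, I, pi, simplify

phi = symbols('phi', real=True)

def azimuthal_overlap(Lam_p, q, Lam):
    # electronic parts ~ e^{i Lambda phi}; T^1_q contributes e^{i q phi}
    return simplify(integrate(exp(-I*Lam_p*phi) * exp(I*q*phi) * exp(I*Lam*phi),
                              (phi, 0, 2*pi)))

assert azimuthal_overlap(0, 0, 0) == 2*pi   # q = 0 within a Sigma state: survives
assert azimuthal_overlap(0, 1, 0) == 0      # q = +1 within a Sigma state: vanishes
assert azimuthal_overlap(1, 1, 0) == 2*pi   # q = +1 connecting Lambda' = Lambda + 1
```

So ##<\eta|T^1_{\pm 1}(d_{el})|\eta>## vanishes within a single ##\Lambda## state, while the ##q=\pm 1## components survive exactly when they connect states with ##\Lambda' = \Lambda \pm 1##.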

For the second question:

I am a bit confused. Starting from the second integral of 6.332 we have:

$$\int\!\!\int \psi_\nu\, \psi_e\, r\, \psi_e'\, \psi_\nu'\, dr\, dR$$

(I dropped some terms, complex conjugates etc. for simplicity). By adding the dependence on different variables we have:

$$\int\!\!\int \psi_\nu(R)\, \psi_e(r,R)\, r\, \psi_e'(r,R)\, \psi_\nu'(R)\, dr\, dR$$

which is equal to

$$\int \psi_\nu(R)\left(\int \psi_e(r,R)\, r\, \psi_e'(r,R)\, dr\right)\psi_\nu'(R)\, dR$$

If we denote ##f(R)=\int{\psi_e(r,R) r \psi_e'(r,R) dr}## the integral above becomes:

$$\int{\psi_\nu(R)f(R)\psi_\nu'(R)}dR$$

but this is not equal to $$\left(\int{\psi_\nu(R)\psi_\nu'(R)\,dR}\right)f(R)$$ We can't just take ##f(R)## out of that integral, as it depends explicitly on ##R##, and I don't see how the BO approximation would allow us to do that. BO allowed us to write the wavefunction as the product of the electronic and vibrational wavefunctions, but after that, doing these integrals is just math.
 
  • #69
BillKet said:
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'||\eta><\nu'|<r'|\mathcal{D}T_q^1(d_{nucl})|r>|\nu> $$

Unfortunately, I can't look at this until later tonight, but I need to revisit your derivation above more carefully, because I see now that, the way you have it derived, the rotational part forces the electronic part to only ##q = 0## terms, which has to be wrong because inter-electronic transitions exist, and I guess this is the heart of your question. (Maybe the very first line is wrong.)

If you take a look at Eqn 6.331, the sum over all components is clearly still there for the non-rotational components. Read over Section 6.11.4, and revisit how Eqn 6.330 turns into 6.331, and I suspect the discrepancy will show up, i.e. the rotational part got completely separated.
 
  • #70
I think I can answer the second question for now. Eqn. 6.333, I believe, has some sloppy notation. The second integral should maybe use a different symbol for ##R_\alpha## in the electronic part. It's meant to be at a single internuclear distance, usually the equilibrium distance, so you don't integrate over it. Some other texts might call this the "crude" BO approximation, whereas Eqn. 6.330 would be the usual BO approximation. Then there's also the Condon approximation, which assumes there's no dependence on the nuclear coordinates at all.
 
