Effective molecular Hamiltonian and Hund cases

  • #1
BillKet
Hello! I am reading about the effective Hamiltonian for a diatomic molecule and I have some questions about relating the parameters of this Hamiltonian to experiment and theory. From what I understand, one starts (usually, although not always) with the electronic energy levels, obtained by solving the Schrödinger equation (or the Dirac equation if we consider relativistic effects) for the electrostatic potential at fixed internuclear distance, ignoring all the other terms in the Hamiltonian. At this point all vibrational, rotational, etc. levels within a given electronic level are degenerate in energy (I will ignore vibrational energy for now and just focus on electronic and rotational structure).

We then add, perturbatively, terms that are off-diagonal in the electronic wavefunctions, but in a way such that the Hamiltonian remains block diagonal in the electronic levels. This perturbative expansion creates an effective Hamiltonian for each electronic level, hiding the off-diagonal interactions in effective constants and lifting the degeneracy of the rotational levels within a given electronic level. We need to choose a basis in which to expand these rotational levels, and that is usually (if not always) a Hund's case basis. After we add the perturbation, we end up with an effective operator for one of the blocks of the full Hamiltonian (i.e. one electronic level) of the form ##\gamma \hat{O}##, where ##\gamma## is the effective constant that is determined from experiment and makes the connection to theory. In matrix form this looks like (let's assume this electronic level has only 2 rotational levels):

$$\gamma \begin{pmatrix}
O_{11} & O_{12} \\
O_{21} & O_{22}
\end{pmatrix}$$

where ##O_{ij} = <i|\hat{O}|j>## and ##|i>##, ##|j>## are the two Hund's case basis states. I think that up to here I understand it well. However, I am not sure how we account for off-diagonal terms in this Hamiltonian. When we do a fit to the data (which in this case would be a measurement of the energy difference between ##|i>## and ##|j>##) in order to extract ##\gamma##, do we just ignore the off-diagonal terms, or do we diagonalize this Hamiltonian (which in practice can have hundreds of rows, depending on how many lines were measured)? Usually when the energy levels are labeled in a diagram, they carry the quantum numbers of the Hund's case chosen, which would imply that we ignore the off-diagonal entries. Are they so small that we can ignore them? Or are they actually zero? They shouldn't be zero, as in an actual Hamiltonian there are terms which break the perfect coupling picture of an ideal Hund's case. Can someone help me understand how we connect Hund's case energy levels to real energy levels? Thank you!
 

Answers and Replies

  • #2
amoforum
Your understanding is correct for how the effective Hamiltonian is built. Once you have the whole matrix, you diagonalize to get all the energy levels. In experiment, your spectroscopy peaks correspond to the differences between those energy levels.

So to determine those tiny off-diagonal terms, in software you build those matrices, diagonalize, and compute the spectrum: the energy differences between all energy levels (allowed transitions weighted by their line strengths), then fit to the data.
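
To make that concrete, here is a minimal sketch in Python of the build-diagonalize-fit loop; the matrix elements, the single "measured" line, and all numerical values below are made up purely for illustration, not taken from any real molecule:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical <i|O|j> matrix elements in the chosen Hund's case basis
# (values made up; real ones come from angular momentum algebra).
O = np.array([[1.0, 0.3],
              [0.3, 2.0]])

def predicted_lines(gamma):
    """Diagonalize H_eff = gamma * O and return the transition energies."""
    E = np.linalg.eigvalsh(gamma * O)   # off-diagonal terms included here
    return np.diff(E)

measured = np.array([1.25])             # one made-up "measured" line position

fit = least_squares(lambda p: predicted_lines(p[0]) - measured, x0=[1.0])
print("fitted gamma:", fit.x[0])
```

The point is that the off-diagonal element ##O_{12}## enters the predicted line positions through the diagonalization, so the fitted ##\gamma## automatically accounts for it; nothing is ignored.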
 
  • #3
BillKet
Thanks a lot for your reply! So, for example, in a ##^2\Sigma## state, assuming there are no nuclear spins, the effective Hamiltonian is $$H_{eff} = B\mathbf{N}^2+\gamma \mathbf{N}\cdot\mathbf{S}$$ where ##B## is the rotational constant and ##\gamma## is the spin-rotation coupling constant (I ignored here the centrifugal corrections to the rotation, of the form ##DN^4+HN^6+...##).

One question I have is: is this Hamiltonian (with the centrifugal corrections) correct for any ##J## in a given vibrational level? I have seen it mentioned in several papers that this is correct for low values of ##J##, and I am not sure why it would not hold for any ##J##. I understand that for higher ##J## the best Hund's case might change, but why would the Hamiltonian itself change?

My other question is: assuming I use Hund's case b, and I diagonalize ##H_{eff}## up to some ##J## using Hund's case b eigenstates, what I would do in practice is measure the transitions between rotational levels in this vibronic state, extract ##B## and ##\gamma## by fitting the peaks predicted by ##H_{eff}##, and then from the values of ##B## and ##\gamma## go back to ab initio calculations of the electronic levels and extract more fundamental parameters (or check whether the calculations are correct). Is my understanding right? Thank you!
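
For what it's worth, in a Hund's case b basis this ##H_{eff}## is diagonal, and each ##N## splits into a spin-rotation doublet with the standard closed-form energies ##F_1(N) = BN(N+1) + \frac{\gamma}{2}N## for ##J = N + 1/2## and ##F_2(N) = BN(N+1) - \frac{\gamma}{2}(N+1)## for ##J = N - 1/2##. A minimal sketch (constants are made-up illustrative numbers):

```python
def sigma2_levels(N, B, gamma):
    """Spin-rotation doublet of a 2-Sigma state in Hund's case (b).

    Returns (F1, F2) for J = N + 1/2 and J = N - 1/2; centrifugal
    corrections are omitted, and F2 only exists for N >= 1.
    """
    rot = B * N * (N + 1)
    F1 = rot + 0.5 * gamma * N
    F2 = rot - 0.5 * gamma * (N + 1)
    return F1, F2

# Illustrative, made-up constants (in cm^-1, say):
for N in range(4):
    print(N, sigma2_levels(N, B=0.5, gamma=0.01))
```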
 
  • #4
amoforum
Let's put it this way:

You start with Hund's case b and fit the low-J part of the spectrum to get the parameters B and gamma. If the fit is good, then your effective Hamiltonian was a good guess. At higher J levels, you'll plug in your fitted parameters B and gamma and see that they no longer match the spectrum! So clearly your effective Hamiltonian is now wrong. Either you need to account for new interactions you haven't added in yet, or your Hund's case no longer applies at all.

One reason things might go bad at high J is because some tiny couplings directly depend on the value of J, causing tiny energy splittings. At high J, those splittings might not be so tiny anymore, so a term accounting for them needs to be added to your Hamiltonian. Same goes for high vibrational levels.

It's really just about building physical intuition. For example, let's say that you suspect that there should be a rotational-electronic coupling of some sort. So you add that term into the effective Hamiltonian and try to fit the data to determine the strength of that coupling. If it fits, then your intuition was correct!
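
As a hedged sketch of that workflow, with purely synthetic data: fit a rigid-rotor model on low-J lines, then watch the residuals grow at high J because a (deliberately included) centrifugal-distortion-like term ##-D[N(N+1)]^2## was left out of the model:

```python
import numpy as np

# Synthetic line positions E(N) = B*N(N+1) - D*[N(N+1)]^2
# (constants made up; D plays the role of a centrifugal correction).
B_true, D_true = 0.5, 1e-5
N = np.arange(0, 30)
x = N * (N + 1)
E = B_true * x - D_true * x**2

low = N <= 5                               # "low-J" subset used for the fit
B_fit = np.polyfit(x[low], E[low], 1)[0]   # rigid-rotor model: E = B*x

residuals = np.abs(E - B_fit * x)
print("worst low-J residual: ", residuals[low].max())
print("worst high-J residual:", residuals[~low].max())  # grows roughly as D*x^2
```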
 
  • #5
BillKet
Now I am a bit confused. From the book I am reading (Rotational Spectroscopy of Diatomic Molecules by Brown and Carrington), it seems like the effective Hamiltonian is not something built using physical intuition or by adding terms by hand. It just follows from a perturbative expansion of the full molecular Hamiltonian (which mixes different electronic levels). So ##H_{eff}## should contain all the terms needed to fit the spectrum up to a given perturbative order. As far as I understand, this effective Hamiltonian gives exactly the same energy levels as the real Hamiltonian, up to the perturbation order used. Is it just that I might have to go to higher orders in perturbation theory for higher J, as some terms that are small at low J might be big there? But in that case, that term should still appear naturally when I do the perturbative expansion, no? I wouldn't have to add it by hand. Thank you!
 
  • #6
amoforum
So imagine you're measuring a new molecule for the first time. You need to fit the spectra, but to what? (Nobody has the exact Hamiltonian.) You first use your physical intuition to add each of those interactions one by one. For example, why would I add a spin-orbit coupling term to a state that I expect has no spin? So you basically keep adding interactions, or going to higher orders in perturbation theory, until your model matches the data. Aside from confirming your physical intuition, the exact values of those coupling strengths are useful to other people, say for predicting systematic shifts due to drifting electric and magnetic fields.
 
  • #7
BillKet
I think I get it. But in terms of what parameters to add to the Hamiltonian: assuming we have a diatomic molecule, if we go up to a fixed order in perturbation theory, all diatomic molecules that can exist have exactly the same terms in the effective Hamiltonian up to that order, for any given electronic level, right? Now, using some physical intuition (or some guidance from theory), we can start discarding some of the terms that are there in the most general case but that we don't need (as you said, dropping spin-orbit coupling when there is no spin, or the electron spin-spin interaction when there is just one electron). So I guess the issue is more what terms to drop from a well-known (overly general) effective Hamiltonian, rather than what to add to a given Hamiltonian, right?
 
  • #8
amoforum
Working with a giant effective Hamiltonian with every interaction possible and whittling down the incorrect terms is definitely possible. However, you're going to have an unpleasant time fitting your data. If you add 50 interactions to your effective Hamiltonian, you'll always fit the data. But it'll be physically meaningless, in the same way that a 10th-order polynomial can fit an elephant!

So it's more effective to start with the smallest, simplest model and correct it by adding interactions as necessary. I understand that it seems counter-intuitive not to start with a Hamiltonian that covers all Hund's cases, but the parameter space is simply too big to have practical value. The good news is that almost all diatomic molecules fall into a known Hund's case. And Brown and Carrington generously derived all the matrix elements for us!
 
  • #9
Twigg
Brown and Carrington is a powerful reference book and people swear by it, but the way it's laid out does not reflect how spectroscopy of diatomics is handled in the lab. I can see where you're coming from if that monster of a book is your starting point.

So I guess the issue is more what terms to drop from a well known (too) general effective Hamiltonian, rather than what to add to a given Hamiltonian, right?

This may sound like semantics, but in practice you usually end up "adding" terms to your "working Hamiltonian". You can think of the effective Hamiltonian in Brown and Carrington as a "catalog" to pick your terms from. And oftentimes even Brown and Carrington's effective Hamiltonian is incomplete, and you end up adding new stuff to it. For example, in another recent thread we discussed electron electric dipole moment (eEDM) measurements in ThO. The eEDM shift (##-\vec{d_e} \cdot \vec{E}##) is not present in that effective Hamiltonian and has to be added on. If that experiment is of interest to you, you might have more luck reading Paul Hamilton's PhD thesis than Brown and Carrington. Chapter 3 is a concise overview of molecular structure, specifically for the ##^1 \Sigma## and ##^3 \Sigma## states of PbO. (Note: in ThO the eEDM-sensitive state is a ##^3 \Delta## state, but in PbO it's a ##^3 \Sigma##.) It's a different molecule, but it's very well written and covers a lot of the same concepts.
 
  • #10
BillKet
Thanks a lot (and sorry for asking so many questions)! One more thing, just to make sure: the effective Hamiltonian for a given electronic level doesn't depend on the Hund's case chosen, right? For example, if I have a ##^2\Sigma## electronic level, for low J values I have the Hamiltonian mentioned above, which has only 2 terms. If I want to find the eigenvalues of this Hamiltonian and connect them to the experimental data, I can expand it in terms of a Hund's case a or Hund's case b basis (I assume here that the electronic energy is much bigger than the rotational one). Most of the time Hund's case b is better in this case, as the off-diagonal terms would be smaller, so first-order perturbation theory could get me pretty close to the right answer. But I could also use Hund's case a as the basis; I would just need to carry the perturbative expansion to higher order. Is that right? Also, if I use software that actually fully diagonalizes the matrix, whether I use Hund's case a or b shouldn't make a big difference, right?
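
On the last point, full diagonalization is indeed basis-independent, because any two complete orthonormal bases are related by a unitary transformation. A tiny numerical check (the Hamiltonian and the unitary here are random stand-ins, not real molecular matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)); H = H + H.T      # arbitrary Hermitian "H_eff"
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal change of basis

E_a = np.linalg.eigvalsh(H)                   # spectrum in "basis a"
E_b = np.linalg.eigvalsh(U.T @ H @ U)         # spectrum in "basis b"
print(np.allclose(E_a, E_b))                  # True: identical eigenvalues
```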
 
  • #11
BillKet
Thanks a lot for your reply (again :) )! So if I want to work in a given electronic state, I can pick the main terms in the effective Hamiltonian from the literature (like Brown and Carrington), as all electronic states of that type will always contain these terms, then add some extra ones that I might need for my experiment (P-odd, P,T-odd, E/B field interactions, etc.), and this is the Hamiltonian whose parameters I need to find (usually the parameters of the main part of the Hamiltonian are known; I need to find the others, such as the eEDM). Once I have the Hamiltonian, I pick a Hund's case that is convenient (but that is just a matter of convenience, as all Hund's cases are complete orthonormal bases, so they would all give the same answer, right?) and diagonalize it, then connect the differences between energy levels from this diagonalization to the measured ones. Is this how it is done in practice?
 
  • #12
Twigg
I've only worked on one diatomic before in any depth, so take my words here with a grain of salt. I would point out that by choosing the terms in your effective Hamiltonian, you are already implying a hierarchy of interactions. This should help you choose which Hund's case best approximates your effective Hamiltonian's eigenstates. In principle it doesn't matter which Hund's case basis you use, but in practice you often end up approximating states as purely one Hund's case, and then seeing how well that approximation applies. If it's Hund's case a, for example, you might be doing an optical pumping experiment and see slow decay through a forbidden channel. Rather than correct the Hamiltonian, we would end up just assigning a lifetime to that channel and planning our experiment accordingly. If your goal is just state preparation, the lifetime is all the information you really need. If your goal is precision spectroscopy, it's a different story, but that's well outside my experience.
 
  • #13
amoforum
If I want to find the eigenvalues of this hamiltonian and connect them to the experimental data, I can expand it in term of Hund case a or Hund case b basis (I assume here that the electronic energy is much bigger than the rotational one). Most of the time Hund case b is better in this case, as the off diagonal terms would be smaller, so a perturbation theory to first order could get me pretty close to the right answer. But I can also use Hund case a as the basis, but I would need to use a perturbative expansion to a higher order. Is that right? Also if I use a software that actually fully diagonalizes a matrix, wether I use Hund case a or b shouldn't make a big difference, right?

Yup! I can't remember if it was in Brown & Carrington or in Lefebvre-Brion, but I believe one of them derives a direct expression relating Hund's case a to b. That's why fitting software like PGOPHER just uses Hund's case a by default. It works out great because cases a and b cover most states people are interested in.

There's a nice litmus test for the Hund's cases, which derives straight from perturbation theory: the first-order interaction energy must be much smaller than the zeroth-order energy difference, otherwise your basis set is unphysical.
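
Written out, the litmus test is just the standard validity condition for the perturbative hierarchy behind a given Hund's case (my phrasing of the criterion above): $$\left|\langle i|H'|j\rangle\right| \ll \left|E_i^{(0)} - E_j^{(0)}\right|,$$ so in the ##^2\Sigma## example the fitted constants should come out with ##\gamma \ll B## if case b is to be a sensible zeroth-order basis.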

Using your example Hamiltonian from the first post: if you fit your data and the result says that gamma is on the order of B, then you picked the wrong basis set. Here's another example: Hund's case c doesn't have lambda or sigma as good quantum numbers. If you used a Hund's case c basis on a Hund's case a state, what interaction are you going to have to invent to explain the giant energy splittings that in reality correspond to individual lambda and sigma states? And what values will the constants come out to? I guarantee they won't pass the litmus test.

Many basis sets can be used to fit the data, but it doesn't mean the results will be useful. You want to pick the basis set with the most good quantum numbers.

Hope that helps. And +1 to everything Twigg wrote.
 
  • #14
BillKet
@Twigg @amoforum thanks a lot, this was really helpful! So in the end we can actually express the molecular eigenstates as products of electronic, vibrational and rotational states, i.e. ##E(r,R)\times V(R) \times R(\theta,\phi)##, where ##E##, ##V## and ##R## are the electronic, vibrational and rotational wavefunctions, and this product is the true wavefunction (up to a given order in perturbation theory). So if I add a small term to the Hamiltonian (one that I am willing to treat perturbatively) that affects only one of the 3 wavefunctions, I can just ignore the rest when doing first-order perturbation theory? For example, if I have a perturbation ##V(r)## (for example a new interaction between electrons and nuclei), I just need the expectation value of V in the electronic wavefunction. Or if I have ##V(R)## (e.g. a new interaction between nuclei), I just need the expectation value for ##E \times V## without involving the rotational part. Is this right?
 
  • #15
amoforum
That sounds right to me, as long as there are no interactions between the degrees of freedom. For example, if you perturb the vibrational spectrum but don't have any rotational-vibrational interaction, I wouldn't expect to see the rotational spectrum change in each vibrational manifold. However, if, say, you had a vibrational-electronic interaction and a rotational-electronic interaction, then you might get a second-order shift in the rotational spectrum.
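
Schematically, this is just second-order perturbation theory with two different couplings, which generates a cross term involving both (notation mine; ##|\eta'\rangle## are intermediate electronic states): $$\Delta E^{(2)} \supset \sum_{\eta' \neq 0} \frac{\langle 0|H_{\text{rot-el}}|\eta'\rangle\langle \eta'|H_{\text{vib-el}}|0\rangle + \text{c.c.}}{E_0 - E_{\eta'}},$$ which is how a vibrational-electronic and a rotational-electronic interaction can conspire to shift the rotational spectrum even though neither acts on the rotational part alone.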
 
  • #16
BillKet
But isn't the whole point of the effective Hamiltonian to remove ALL the interactions between the degrees of freedom up to the desired order? As far as I understand, the book does this first at the electronic level, such that the electronic energy becomes an overall constant shift and the rest goes into the effective terms, and then does the same at the vibrational level, such that all the ##R## dependence disappears and is replaced by ##R_0##. So from what I see, by doing this step the electronic and vibrational energy for a given vibrational level become just an overall shift, which doesn't even matter if you don't look at other vibrational levels (e.g. rotational spectroscopy within that vibrational level). Of course you might have interaction terms between the degrees of freedom at higher order, but if you expand the perturbation theory to that order, these terms become part of the effective constants, which again removes any explicit coupling (up to that order). Am I misunderstanding something? Thank you!
 
  • #17
Twigg
I think what amoforum is trying to say is that you can't separate variables like ##E(r,R) \times V(R) \times R(\theta ,\phi)## if you have interactions between these degrees of freedom. If you have an interaction, then the effective Hamiltonian will have to take it into account one way or another. You may decide that the interaction is weak enough to ignore how much it perturbs the wavefunction, and that's fine if it works for your circumstances. But if you have a very strong electronic-vibrational interaction, it might not be a good idea. There are a lot of unique molecules out there, and it's really left to your judgement what goes into the effective Hamiltonian.

To put this in context, many diatomic AMO projects start with next to no information about the structure! Survey spectroscopy (the job of doing the very first spectroscopy on a new diatomic with little literature on it) usually takes years, if not whole PhDs. The effective Hamiltonian is like a family heirloom that changes a little bit with each generation of students' opinions and new data.
 
  • #18
amoforum
Of course you might have interaction terms between the degrees of freedom at higher order, but if you would expand the perturbation theory to that order, these terms would become part of the effective constants, and again remove any explicit coupling (up to that order).

Actually, this is exactly what I was attempting to get at, and you described it very concisely! For example, Brown and Carrington's sections 7.4.2 and 7.5.1 show exactly this procedure (going to higher order and modifying the effective constants) for rotational-electronic and ro-vibrational interactions, respectively.

Family Heirloom :oldlaugh:
 
  • #19
BillKet
Ah, so, for example, if you don't account for the rotational-electronic interaction, you will see peaks in the spectrum that don't match your effective Hamiltonian. But if you add the ##\Lambda##-doubling term, that coupling is taken into account by adding one more term to the Hamiltonian, and everything can be factorized again (until we reach a level of experimental accuracy at which other effective terms need to be added).
 
  • #21
BillKet
Yup!
I am a bit confused by actually calculating the different terms in this formalism. In the first example, Brown and Carrington look at the rotational term ##B(R)R^2## (I will ignore the ##c## and ##\hbar##), and I have a few questions. From this thread, I understand that, starting with something of the form ##|\eta>|i>##, where ##|\eta>## contains the electronic part and ##|i>## the rotational part, we want, to first order in PT, for an operator O, to write ##<i|<\eta|O|\eta>|i>## as ##<\eta|A|\eta><i|B|i>## for some operators A and B. In this way ##<\eta|A|\eta>## is the effective parameter we find from experiment, and B is an effective operator. So I have a few questions:

1. Before they start the derivations, they assume a Hund's case a basis. I am not sure why we need that for these derivations. What we need is ##<\eta|A|\eta>B##, which has this form regardless of the Hund's case chosen. Which basis we choose for the rotational manifold, i.e. ##<i|B|i>## or ##<j|B|j>## (e.g. Hund's case a or Hund's case b), shouldn't affect the form of that operator. In the rotational case they present, the first-order perturbation of the rotational term should be ##<\eta|B(R)|\eta>(N^2-L_z^2)##, regardless of the Hund's case. Is that right, or am I still missing something?

2. I am not sure how they get to the ##N^2-L_z^2## term, and there are a few things that confuse me:

a) They say that ##N_+L_-+N_-L_+## doesn't contribute to first order. But, for example, ##N_+L_- = (R_++L_+)L_-## contains the term ##L_+L_-##, which, unless some cancellation happens somewhere, should not be zero in general when calculating ##<\eta|L_+L_-|\eta>##. Also, I am actually not sure how to apply ##L_+## or ##L_-## to an electronic wavefunction. Assuming that the projection of the orbital angular momentum on the internuclear axis is well defined, ##L_\pm## would change that projection by 1, but the ladder operators usually have a coefficient that depends on the total angular momentum ##L## (e.g. ##J_+|jm>=\sqrt{(j-m)(j+m+1)}|j(m+1)>##), and ##L## is not defined in this case. Actually, based on their calculation, the coefficient of applying the ladder operator is 1. Is that a convention?

b) They don't say anything about ##L_x^2## and ##L_y^2##, either in the first- or the second-order PT calculation. Why can we ignore them? Don't they contribute to the rotational energy? As above, by writing, for example, ##L_x## in terms of the ladder operators and squaring, we would get something of the form ##L_+L_-##, which should be non-zero in PT. I guess I am really missing something about this term...

c) How does N act on the wavefunction? It looks like they just take it out of the electronic bra-ket, but N contains L, so I am not sure what they are doing.

I am sorry for the long post, but any advice/explanation on how they do the math would be really appreciated.
 
  • #22
amoforum

1. That's correct. For perturbation theory, something has to serve as your zeroth-order starting point. It might as well be convenient.

2. I would highly recommend looking at Section 3.1.2.3 in Lefebvre-Brion/Field (Revised Edition). It explicitly expands all those terms. If you don't have access to that book, remember: ##R^2 = R^2_x + R^2_y##, ##R = J - L - S##, and ##R_z = 0## because the nuclear rotation is perpendicular to the internuclear axis. Expanding all those terms and rearranging, you end up with six terms. Three of them are of the form ##(L^2 - L^2_z)## for J, L and S (note: ##L_x^2 + L_y^2 = L^2 - L^2_z##), and these are diagonal in the Hamiltonian up to first order. Eq. 7.82 in B&C just consolidates these, recognizing that N = J - S. The other three couple other electronic states, have the form ##(J_+L_- + J_-L_+)## for combinations of J, L and S, and need to be treated at second order. B&C treats these starting from Eq. 7.84.
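
Writing the algebra out in the ##N = R + L## form used earlier in the thread (so ##R = N - L##, and ##N_z = L_z## because ##R_z = 0##; this is my own writing-out of the rearrangement above, glossing over the molecule-fixed commutation subtleties): $$R^2 = (N-L)^2 = N^2 + L^2 - 2N_zL_z - (N_+L_- + N_-L_+) = (N^2 - L_z^2) + (L^2 - L_z^2) - (N_+L_- + N_-L_+).$$ The first bracket gives the effective rotational operator ##B(N^2 - L_z^2)##, the second is the electronic expectation value that gets lumped into the electronic origin (see post #24 below), and the ladder-operator term only connects different electronic states, so it enters at second order.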
 
  • #23
Twigg
1. Convenience! Hund's case (a) is the one with the least funky hierarchy and the most good quantum numbers after all.

2. What @amoforum said. It's really poorly explained in Brown and Carrington.

3.
Also I am actually not sure how to apply ##L_+## or ##L_-## to an electronic wavefunction
Step 1: Demand a pay raise from whoever makes you actually work with the wavefunction
Step 2: Construct ladder operators ##L_{+i}## and ##L_{-i}## for each individual electron. $$L_{xi} = -i\hbar \left[ y_i \frac{\partial}{\partial z_i} - z_i \frac{\partial}{\partial y_i} \right]$$ $$L_{yi} = -i\hbar \left[ z_i \frac{\partial}{\partial x_i} - x_i \frac{\partial}{\partial z_i} \right]$$ $$L_{+i} = L_{xi} + iL_{yi}$$ $$L_{-i} = L_{xi} - iL_{yi}$$
Step 3: Add up all the individual electron operators to get the total ##L_+## and ##L_-##: $$L_+ = \sum_i L_{+i}$$ $$L_- = \sum_i L_{-i}$$ Now you have something you can apply to the total electronic wavefunction.

For ##\vec{N}## it's a similar procedure except you have to do the same process for nuclear angular momenta to get ##\vec{R}##.
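
As a toy numerical illustration of what applying ##L_\pm## means (a single particle with ##l = 1##, matrices written in the ##|l, m\rangle## basis; this is generic angular-momentum algebra, nothing molecule-specific):

```python
import numpy as np

l = 1
m = np.arange(l, -l - 1, -1)                 # basis ordering: m = +1, 0, -1

# <l, m+1 | L_+ | l, m> = sqrt(l(l+1) - m(m+1)), with hbar = 1
Lp = np.zeros((3, 3))
for i, mi in enumerate(m):
    if mi < l:
        Lp[i - 1, i] = np.sqrt(l * (l + 1) - mi * (mi + 1))
Lm = Lp.T                                    # L_- is the adjoint of L_+

Lz = np.diag(m.astype(float))
# Sanity check of the algebra: [L_+, L_-] = 2 L_z
print(np.allclose(Lp @ Lm - Lm @ Lp, 2 * Lz))  # True
```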
 
  • #24
Twigg
Depending on your institution's access, you may be able to access this lecture by Robert Field. Section 2.3 covers the same derivation that @amoforum refers to. The key point that's extremely NOT OBVIOUS from Brown and Carrington (:oldmad:) is that ##\langle 0|(L^2 - L_z ^2)|0\rangle## is a parameter that is fitted to data and lumped into the "hidden" part of the spectrum (##\langle 0 |H_0 |0\rangle##) or just outright ignored. Can you see now why I don't recommend this book as an introduction?

In other words, you can write ##L^2 = (L^2 - L_z^2) + L_z^2##, then absorb the ##L^2 - L_z^2## piece into the energy of the electronic level, and you're left with just ##L_z^2##. Real dirty, ain't it?
 
  • #25
BillKet
@Twigg @amoforum Thanks a lot! That helped answer some of my questions! But I still have some (sorry!). Going back to my previous example, assume we are in a ##^2\Sigma## state (say this was obtained by solving the electronic-only SE using some method, e.g. coupled cluster). Ignoring the vibrational part for now, the molecular wavefunction in this electronic state, before adding any perturbation, is ##|^2\Sigma>|JX>##, where ##|JX>## is some basis that we can choose later (we kind of assumed that the electronic energy is the main contribution, so it should be Hund's case a or b, but in general we know that ##J## is the only good quantum number in all Hund's cases, so X depends on which Hund's case we choose later). Now if we look at the diagonal (in the electronic space) part of the rotational operator, we have:

1. ##<^2\Sigma|L^2-L_z^2|^2\Sigma>## - this becomes a fitting parameter that just shifts the electronic energy overall, so it is not important for the rotational spectrum (assuming we do some RF measurement within the same electronic manifold). Thank you for clarifying that for me!

2. ##<^2\Sigma|J^2-J_z^2|^2\Sigma>## - I am not sure how to think about this. Technically, ##J## is not defined at the electronic level; ##J## "exists" only after we add the rotational wavefunction, too, i.e. we can't say that ##J^2|^2\Sigma> = J(J+1)|^2\Sigma>##; the right statement would be ##J^2|^2\Sigma>|JX> = J(J+1)|^2\Sigma>|JX>##. So does this mean that J is not defined at the level of the electronic wavefunction, and we can just move it out of the electronic bra-ket, i.e. ##<^2\Sigma|J^2-J_z^2|^2\Sigma> = <^2\Sigma|^2\Sigma>(J^2-J_z^2) = J^2-J_z^2##, where ##J^2-J_z^2## are operators, not numbers?

3. ##<^2\Sigma|S^2-S_z^2|^2\Sigma>## - we have ##<^2\Sigma|S^2|^2\Sigma> = S(S+1)##, where S is a number here, but I am not sure what ##<^2\Sigma|S_z^2|^2\Sigma>## is. Can we just say ##<^2\Sigma|S_z^2|^2\Sigma> = S_z^2##, with ##S_z## being a number (not an operator), and choose the quantization axis later, once we choose a Hund's case?

So combining the results above (ignoring the ##L^2-L_z^2## term), assuming it is right, we get for the diagonal part ##<^2\Sigma|R^2|^2\Sigma> = S(S+1)-S_z^2+J^2-J_z^2##, where, again, ##S## and ##S_z## are numbers and ##J^2## and ##J_z^2## are operators. Based on Brown and Carrington, ##<^2\Sigma|R^2|^2\Sigma> = N^2## (I ignore the ##L_z## term). So this means that ##N^2 = J^2+S(S+1)-S_z^2-J_z^2##. In principle we can turn ##S## and ##S_z## back into operators, as the expression would be equivalent, so we have ##N^2 = J^2+S^2-S_z^2-J_z^2##, where everything is now an operator. But ##N^2 = (J-S)^2 = J^2+S^2-2J\cdot S##, which implies that ##2J\cdot S = S_z^2 + J_z^2##, which is not true. What am I missing?
 
