Effective molecular Hamiltonian and Hund cases

In summary, the effective Hamiltonian is built by solving the Schrödinger equation at fixed internuclear distance for the electrostatic potential, then perturbatively adding terms that are off-diagonal in the electronic wavefunctions. This perturbative expansion creates an effective Hamiltonian for each electronic level, hiding the off-diagonal interactions in effective constants that lift the degeneracy of the rotational levels within a given electronic level. The basis for the rotational levels is usually a Hund's case basis, and when you fit data to the effective Hamiltonian, you use the energy differences between the rotational levels to extract ##B## and ##\gamma##.
  • #1
BillKet
Hello! I am reading some material about the effective Hamiltonian for a diatomic molecule and I have some questions about relating the parameters of this Hamiltonian to experiment and theory. From what I understand, one starts (usually, although not always) with the electronic energy levels, by solving the Schrödinger equation (or the Dirac equation if we consider relativistic effects) at fixed internuclear distance for the electrostatic potential, ignoring all the other terms in the Hamiltonian. At this point all vibrational, rotational etc. levels in a given electronic level are degenerate in energy (I will ignore vibrational energy for now and just focus on electronic and rotational). We then perturbatively add terms that are off-diagonal in the electronic wavefunctions, but in a way such that the Hamiltonian stays block-diagonal in the electronic levels. This perturbative expansion creates an effective Hamiltonian for each electronic level, hiding the off-diagonal interactions in effective constants that lift the degeneracy of the rotational levels within a given electronic level. We need to choose a basis in which to expand these rotational levels, and that is usually (if not always) a Hund's case basis. After we add the perturbation, we end up with an effective operator, for one of the blocks of the full Hamiltonian (i.e. an electronic level), of the form ##\gamma \hat{O}##, where ##\gamma## is the effective constant which is determined from experiment and makes the connection to the theory. In matrix form this looks like (let's assume that this electronic level has only 2 rotational levels):

$$\gamma \begin{pmatrix}
O_{11} & O_{12} \\
O_{21} & O_{22}
\end{pmatrix}$$

where ##O_{ij} = <i|\hat{O}|j>## and ##|i>## and ##|j>## are the 2 Hund's case basis states. I think that up to here I understand it well. However, I am not sure how we account for the off-diagonal terms in this Hamiltonian. When we do a fit to the data (which in this case would be a measurement of the energy difference between ##|i>## and ##|j>##) in order to extract ##\gamma##, do we just ignore the off-diagonal terms, or do we diagonalize this Hamiltonian (which in practice can have hundreds of rows, depending on how many lines were measured)? Usually when the energy levels are labeled in a diagram, they carry the quantum numbers of the Hund's case chosen, which would imply that we ignore the off-diagonal entries. Are they so small that we can ignore them? Or are they actually zero? They shouldn't be zero, as the actual Hamiltonian contains terms which break the perfect coupling picture of an idealized Hund's case. Can someone help me understand how we connect Hund's case energy levels to real energy levels? Thank you!
 
  • #2
Your understanding is correct for how the effective Hamiltonian is built. Once you have the whole matrix, you diagonalize to get all the energy levels. In experiment, your spectroscopy peaks correspond to the differences between those energy levels.

So to determine those tiny off-diagonal terms, in software you build those matrices, diagonalize, compute the spectrum (energy differences between all energy levels, with allowed transitions weighted by the line strengths), then fit to the data.
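To make that concrete, here is a minimal sketch of that build-diagonalize-fit loop in Python, assuming a toy ##^2\Sigma## state in a Hund's case (b) basis. The constants, the basis truncation, and the use of adjacent-level differences as stand-ins for line positions are all illustrative assumptions, not a real line list or a real fitting package:

```python
import numpy as np
from scipy.optimize import least_squares

S = 0.5  # electron spin of a 2-Sigma state
# Truncated case (b) basis |N, J>: N = 0 has J = 1/2 only, N >= 1 has J = N -/+ 1/2
basis = [(0, S)] + [(N, J) for N in range(1, 6) for J in (N - S, N + S)]

def h_eff(B, gamma):
    """H_eff = B*N^2 + gamma*N.S as a matrix in the case (b) basis.

    Here the matrix happens to be diagonal; the workflow (build, then
    diagonalize) is unchanged if off-diagonal terms are added later.
    """
    H = np.zeros((len(basis), len(basis)))
    for k, (N, J) in enumerate(basis):
        n_dot_s = 0.5 * (J * (J + 1) - N * (N + 1) - S * (S + 1))
        H[k, k] = B * N * (N + 1) + gamma * n_dot_s
    return H

def predicted_lines(params):
    """Diagonalize and return level differences as crude stand-ins for peaks."""
    B, gamma = params
    return np.diff(np.sort(np.linalg.eigvalsh(h_eff(B, gamma))))

# Fake "measurement": lines from B = 0.3, gamma = 0.002 (arbitrary units) + noise
rng = np.random.default_rng(0)
measured = predicted_lines([0.3, 0.002]) + rng.normal(0.0, 1e-5, len(basis) - 1)

fit = least_squares(lambda p: predicted_lines(p) - measured, x0=[0.25, 0.001])
print("fitted B, gamma:", fit.x)  # should come back near 0.3 and 0.002
```

A real fit would predict only the allowed transitions, weighted by their line strengths, rather than all adjacent-level differences, but the loop is the same.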
 
  • #3
amoforum said:
Your understanding is correct for how the effective Hamiltonian is built. Once you have the whole matrix, you diagonalize to get all the energy levels. ...
Thanks a lot for your reply! So, for example, in a ##^2\Sigma## state, assuming there are no nuclear spins, the effective Hamiltonian is $$H_{eff} = BN^2+\gamma N\cdot S$$ where ##B## is the rotational constant and ##\gamma## is the spin-rotation coupling constant (I ignored here the centrifugal corrections to the rotation, of the form ##-DN^4+HN^6+...##). One question I have is: is this Hamiltonian (with the centrifugal corrections) correct for any ##J## in a given vibrational level? I have seen it mentioned in several papers that this is correct for low values of ##J##, and I am not sure why it would not hold for any ##J##. I understand that for higher ##J## the best Hund's case might change, but why would the Hamiltonian itself change? My other question is: assuming I use Hund's case (b), and I diagonalize ##H_{eff}## up to some ##J## using Hund's case (b) eigenstates, what I would do in practice is measure the transitions between rotational levels in this vibronic state, extract ##B## and ##\gamma## by fitting the peaks predicted by ##H_{eff}##, and then from the values of ##B## and ##\gamma## I can go back to ab initio calculations of the electronic levels and extract more fundamental parameters (or check whether the calculations are correct). Is my understanding of this right? Thank you!
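For reference, in the case (b) basis this Hamiltonian is diagonal, and the standard closed-form energies (with ##S=1/2##, so ##J = N \pm 1/2##) make the fit explicit:

$$E(N,J) = B\,N(N+1) + \frac{\gamma}{2}\left[J(J+1) - N(N+1) - S(S+1)\right]$$

so each ##N>0## level is split into a doublet separated by ##\gamma(N+1/2)##: the rotational line positions pin down ##B##, and the doublet splittings pin down ##\gamma##.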
 
  • #4
Let's put it this way:

You start with Hund's case (b) and fit the spectrum at low J values to get the parameters B and gamma. If the fit is good, that means your effective Hamiltonian was a good guess. At higher J, you'll plug in your fitted parameters B and gamma and see that they no longer match the spectrum! So clearly your effective Hamiltonian is now wrong. Either you need to account for new interactions you haven't added in yet, or your Hund's case no longer applies at all.

One reason things might go bad at high J is that some tiny couplings depend directly on the value of J, causing tiny energy splittings. At high J, those splittings might not be so tiny anymore, so a term accounting for them needs to be added to your Hamiltonian. The same goes for high vibrational levels.
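A standard example of such J-dependence is centrifugal distortion: the bond stretches as the molecule rotates faster, which is exactly where the ##DN^4## correction mentioned earlier comes from,

$$E(N) \approx B\,N(N+1) - D\,[N(N+1)]^2, \qquad D \approx \frac{4B^3}{\omega_e^2} \ll B,$$

negligible at low ##N## but growing like ##[N(N+1)]^2## until it has to be included in the fit.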

It's really just about building physical intuition. For example, let's say that you suspect that there should be a rotational-electronic coupling of some sort. So you add that term into the effective Hamiltonian and try to fit the data to determine the strength of that coupling. If it fits, then your intuition was correct!
 
  • #5
amoforum said:
You start with Hund's case (b) and fit the spectrum at low J values to get the parameters B and gamma. If the fit is good, that means your effective Hamiltonian was a good guess. ...
Now I am a bit confused. From the book I am reading (Rotational Spectroscopy of Diatomic Molecules by Brown and Carrington), it seems like the effective Hamiltonian is not something built using some sort of physical intuition or by adding terms by hand. It just follows from a perturbative expansion of the full molecular Hamiltonian (which mixes different electronic levels). So ##H_{eff}## should contain all terms needed to fit the spectrum up to a given perturbative order. As far as I understand, this effective Hamiltonian gives exactly the same energy levels as the real Hamiltonian, up to the perturbative order used. Is it just that I might have to go to higher orders in perturbation theory for higher J, as some terms that are small at low J might be big now? But in that case, that term should still appear naturally when I do the perturbative expansion, no? I wouldn't have to add it by hand. Thank you!
 
  • #6
So imagine you're measuring a new molecule for the first time. You need to fit the spectra, but to what? (Nobody has the exact Hamiltonian.) You first use your physical intuition to add each of those interactions one by one. For example, why would I add a spin-orbit coupling term to a state that I expect has no spin? So you basically keep adding interactions or going to higher orders in perturbation theory until your model matches the data. Aside from physical intuition, the exact values of those coupling strengths are useful to other people, say for predicting systematic shifts due to drifting electric and magnetic fields, for example.
 
  • #7
amoforum said:
So imagine you're measuring a new molecule for the first time. You need to fit the spectra, but to what? (Nobody has the exact Hamiltonian.) ...
I think I get it. But in terms of what parameters to add to the Hamiltonian: assuming we have a diatomic molecule and go up to a fixed order in perturbation theory, all diatomic molecules that can exist have exactly the same terms in the effective Hamiltonian up to that order, for any given electronic level, right? Now using some physical intuition (or some guidance from theory), we can start discarding some of the terms that are there in the most general case but that we don't need (as you said, dropping spin-orbit coupling when there is no spin, or the electron spin-spin interaction when there is just one electron). So I guess the issue is more what terms to drop from a well-known (too) general effective Hamiltonian, rather than what to add to a given Hamiltonian, right?
 
  • #8
Working with a giant effective Hamiltonian containing every possible interaction and whittling down the unnecessary terms is definitely possible. However, you're going to have an unpleasant time fitting your data. If you add 50 interactions to your effective Hamiltonian, you'll always fit the data. But the result will be physically meaningless, in the same way that a 10th-order polynomial can fit an elephant!

So it's more effective to start with the smallest, simplest model and correct it by adding interactions as necessary. I understand that it seems counter-intuitive not to start with a Hamiltonian that covers all Hund's cases, but the parameter space is simply too big to have practical value. The good news is that almost all diatomic molecules fall into a known Hund's case. And Brown and Carrington generously derived all the matrix elements for us!
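One conventional way to keep yourself honest about whether an added term earns its place is a nested-model test on the fit residuals. A small sketch (the function names are mine, and the inputs would come from whatever fitting code you use):

```python
import numpy as np
from scipy import stats

def reduced_chi2(residuals, n_params):
    """Chi-squared per degree of freedom for a fit to len(residuals) lines."""
    r = np.asarray(residuals)
    return np.sum(r**2) / (len(r) - n_params)

def added_term_p_value(chi2_small, p_small, chi2_big, p_big, n_lines):
    """F-test: probability that the chi^2 drop from adding (p_big - p_small)
    constants is just what the extra fitting freedom would give by chance."""
    dof_big = n_lines - p_big
    F = ((chi2_small - chi2_big) / (p_big - p_small)) / (chi2_big / dof_big)
    return stats.f.sf(F, p_big - p_small, dof_big)
```

If that p-value is large, the new constant is doing the elephant's job rather than capturing physics.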
 
  • #9
Brown and Carrington is a powerful reference book and people swear by it, but the way it's laid out does not reflect how spectroscopy of diatomics is handled in the lab. I can see where you're coming from if that monster of a book is your starting point.

BillKet said:
So I guess the issue is more what terms to drop from a well known (too) general effective Hamiltonian, rather than what to add to a given Hamiltonian, right?

This may sound like semantics, but in practice you usually end up "adding" terms to your "working Hamiltonian". You can think of the effective Hamiltonian in Brown and Carrington as a "catalog" to pick your terms from. And oftentimes even Brown and Carrington's effective Hamiltonian is incomplete, and you end up adding new terms to it. For example, in another recent thread we discussed electron electric dipole moment (eEDM) measurements in ThO. The eEDM shift (##-\vec{d_e} \cdot \vec{E}##) is not present in that effective Hamiltonian, and has to be added on. If that experiment is of interest to you, you might have more luck reading Paul Hamilton's PhD thesis than Brown and Carrington. Chapter 3 is a concise overview of molecular structure, namely of the ##^1 \Sigma## and ##^3 \Sigma## states of PbO. (Note: in ThO the eEDM-sensitive state is a ##^3 \Delta## state, but in PbO it's a ##^3 \Sigma##.) It's a different molecule, but the thesis is very well written and covers a lot of the same concepts.
 
  • #10
amoforum said:
Working with a giant effective Hamiltonian containing every possible interaction and whittling down the unnecessary terms is definitely possible. However, you're going to have an unpleasant time fitting your data. ...
Thanks a lot (and sorry for asking so many questions)! One more thing, just to make sure: the effective Hamiltonian for a given electronic level doesn't depend on the Hund's case chosen, right? For example, if I have a ##^2\Sigma## electronic level, for low J values I have the Hamiltonian mentioned above, which has only 2 terms. If I want to find the eigenvalues of this Hamiltonian and connect them to the experimental data, I can expand it in terms of a Hund's case (a) or a Hund's case (b) basis (I assume here that the electronic energy is much bigger than the rotational one). Most of the time case (b) is better here, as the off-diagonal terms would be smaller, so first-order perturbation theory could get me pretty close to the right answer. But I can also use case (a) as the basis; I would just need to carry the perturbative expansion to higher order. Is that right? Also, if I use software that actually fully diagonalizes the matrix, whether I use case (a) or (b) shouldn't make a big difference, right?
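That last point is easy to convince yourself of numerically: once you diagonalize exactly, the spectrum is invariant under any orthonormal change of basis, which is all a Hund's case choice amounts to at that stage. A throwaway check, with a random symmetric matrix standing in for ##H_{eff}## and a random orthogonal matrix standing in for the case (a) ##\leftrightarrow## (b) transformation:

```python
import numpy as np

rng = np.random.default_rng(1)

H = rng.normal(size=(6, 6))
H = (H + H.T) / 2  # stand-in effective Hamiltonian: any real symmetric matrix

U, _ = np.linalg.qr(rng.normal(size=(6, 6)))  # stand-in basis transformation

E_basis_1 = np.linalg.eigvalsh(H)            # "case (a)" representation
E_basis_2 = np.linalg.eigvalsh(U.T @ H @ U)  # same operator, rotated basis
print(np.allclose(E_basis_1, E_basis_2))     # True
```

What the basis choice does change is how diagonal the matrix is before diagonalization, i.e. how good truncated perturbation theory and the quantum-number labels are.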
 
  • #11
Twigg said:
Brown and Carrington is a powerful reference book and people swear by it, but the way it's laid out does not reflect how spectroscopy of diatomics is handled in the lab. ...
Thanks a lot for your reply (again :) )! So if I want to work in a given electronic state, I can pick the main terms in the effective Hamiltonian from the literature (like Brown and Carrington), as all electronic states of that type will always contain these terms, then add some extra ones that I might need for my experiment (P-odd, P,T-odd, E/B field interactions etc.), and this is the Hamiltonian whose parameters I need to find (usually the parameters of the main part of the Hamiltonian are known, and I need to find the others, such as the eEDM). Once I have the Hamiltonian, I pick a Hund's case that is convenient (but that is just a matter of convenience, as all Hund's case bases are complete and orthonormal, so they would all give the same answer, right?) and diagonalize it, then connect the differences between the energy levels from this diagonalization to the measured ones. Is this how it is done in practice?
 
  • #12
I've only worked on one diatomic before in any depth, so take my words here with a grain of salt. I would point out that by choosing the terms in your effective Hamiltonian, you are already implying a hierarchy of interactions. This should help you choose which Hund's case best approximates your effective Hamiltonian's eigenstates. In principle, it doesn't matter which Hund's case basis you use, but in practice you often end up approximating states as purely one Hund's case, and then seeing how well that approximation applies. If it's Hund's case (a), for example, you might be doing an optical pumping experiment and see slow decay through a forbidden channel. Rather than correct the Hamiltonian, we would end up just assigning a lifetime to that channel and planning our experiment accordingly. If your goal is just state preparation, the lifetime is all the information you really need. If your goal is precision spectroscopy, it's a different story, but that's well outside of my experience.
 
  • Like
Likes amoforum
  • #13
BillKet said:
If I want to find the eigenvalues of this Hamiltonian and connect them to the experimental data, I can expand it in terms of a Hund's case (a) or a Hund's case (b) basis... Also, if I use software that actually fully diagonalizes the matrix, whether I use case (a) or (b) shouldn't make a big difference, right?

Yup! I can't remember if it was in Brown & Carrington or in Lefebvre-Brion, but I believe one of them derives a direct expression relating Hund's case (a) to case (b). That's why fitting software like PGOPHER just uses Hund's case (a) by default. It works out great because cases (a) and (b) cover most states people are interested in.

There's a nice litmus test for the Hund's cases, which derives straight from perturbation theory: the first-order interaction energy must be much smaller than the zeroth-order energy difference, otherwise your basis set is unphysical.
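In symbols, this is just the usual validity condition of perturbation theory applied between basis states ##|i\rangle## and ##|j\rangle##:

$$\left|\frac{\langle i|H^{(1)}|j\rangle}{E^{(0)}_i - E^{(0)}_j}\right| \ll 1,$$

where ##H^{(1)}## is whatever the chosen Hund's case relegates to the residual, off-diagonal part of the Hamiltonian.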

Using your example Hamiltonian from the first post: if you fit your data and the result says that ##\gamma## is on the order of ##B##, then you picked the wrong basis set. Here's another example: Hund's case (c) doesn't have ##\Lambda## or ##\Sigma## as good quantum numbers. If you used a Hund's case (c) basis on a Hund's case (a) state, what interaction are you going to have to invent to explain giant energy splittings that in reality correspond to individual ##\Lambda## and ##\Sigma## states? And what values are the constants going to come out to? I guarantee they won't pass the litmus test.

Many basis sets can be used to fit the data, but that doesn't mean the results will be useful. You want to pick the basis set with the most good quantum numbers.

Hope that helps. And +1 to everything Twigg wrote.
 
  • #14
amoforum said:
Yup! I can't remember if it was in Brown & Carrington or in Lefebvre-Brion, but I believe one of them derives a direct expression relating Hund's case (a) to case (b). ...
@Twigg @amoforum thanks a lot, this was really helpful! So in the end we can actually express the molecular eigenstates as products of electronic, vibrational and rotational states, i.e. ##E(r,R)\times V(R) \times R(\theta,\phi)##, where ##E##, ##V## and ##R## are the electronic, vibrational and rotational wavefunctions. And this product is the true wavefunction (up to a given order in perturbation theory). So if I add a small term to the Hamiltonian (that I am willing to treat perturbatively) that affects only one of the 3 wavefunctions, I can just ignore the rest when doing first-order perturbation theory? For example, if I have a perturbation ##V(r)## (say a new interaction between electrons and nuclei), I just need the expectation value of ##V## in the electronic wavefunction. Or if I have ##V(R)## (e.g. a new interaction between the nuclei), I just need the expectation value in ##E \times V##, without involving the rotational part. Is this right?
 
  • #15
That sounds right to me as long as there are no interactions between the degrees of freedom. For example, if you perturb the vibrational spectrum but don't have any rotational-vibrational interaction, I wouldn't expect the rotational spectrum to change in each vibrational manifold. However, if you had, say, a vibrational-electronic interaction and a rotational-electronic interaction, then you might get a second-order shift in the rotational spectrum.
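The mechanism behind that last scenario is the cross term of second-order perturbation theory: with two couplings ##A## and ##B## present, a level ##|0\rangle## picks up mixed contributions of the form

$$\Delta E^{(2)}_0 = \sum_{k\neq 0}\frac{\langle 0|A|k\rangle\langle k|B|0\rangle + \langle 0|B|k\rangle\langle k|A|0\rangle}{E^{(0)}_0 - E^{(0)}_k},$$

so two interactions that each involve the electronic state can conspire to shift the rotational spectrum.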
 
  • #16
amoforum said:
That sounds right to me as long as there are no interactions between the degrees of freedom. ...
But isn't the whole point of the effective Hamiltonian to remove ALL the interactions between the degrees of freedom up to the desired order? As far as I understand, he does it first at the electronic level, such that the electronic energy becomes an overall constant shift and the rest goes into the effective terms, and then he does the same at the vibrational level, such that all the ##R## dependence disappears and is replaced by ##R_0##. So from what I see, by doing this step, the electronic and vibrational energy for a given vibrational level become just an overall shift, which doesn't even matter if you don't look at other vibrational levels (e.g. for rotational spectroscopy within that vibrational level). Of course you might have interaction terms between the degrees of freedom at higher order, but if you expand the perturbation theory to that order, these terms become part of the effective constants, which again removes any explicit coupling (up to that order). Am I misunderstanding something? Thank you!
 
  • #17
I think what amoforum is trying to say is that you can't separate variables like ##E(r,R) \times V(R) \times R(\theta ,\phi)## if you have interactions between these degrees of freedom. If you have an interaction, then the effective Hamiltonian will have to take it into account one way or another. You may decide that the interaction is weak enough to ignore how much it perturbs the wavefunction, and that's fine if it works for your circumstances. But if you have a very strong electronic-vibrational interaction, it might not be a good idea. There are a lot of unique molecules out there, and it's really left to your judgement what goes into the effective Hamiltonian.
To put this in context, many diatomic AMO projects start with next to no information about the structure! Survey spectroscopy (the job of doing the very first spectroscopy on a new diatomic with little literature on it) usually takes years, if not whole PhDs. The effective Hamiltonian is like a family heirloom that changes a little bit with each generation of students' opinions and new data.
 
  • #18
BillKet said:
Of course you might have interaction terms between the degrees of freedom at higher order, but if you expand the perturbation theory to that order, these terms become part of the effective constants, which again removes any explicit coupling (up to that order).

Actually, this is exactly what I was attempting to get at, and you described it very concisely! For example, Brown and Carrington's sections 7.4.2 and 7.5.1 show exactly this procedure (going to higher order and modifying the effective constants) for rotational-electronic and ro-vibrational interactions, respectively.
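A familiar face of this procedure is the vibrational dependence of the rotational constant: the ro-vibrational interaction never appears explicitly in the effective Hamiltonian, it just turns ##B## into an effective constant for each vibrational level,

$$B_v = B_e - \alpha_e\left(v + \tfrac{1}{2}\right) + \ldots,$$

with ##\alpha_e## quietly absorbing the coupling.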

Family Heirloom :oldlaugh:
 
  • #19
amoforum said:
Actually, this is exactly what I was attempting to get at, and you described it very concisely! ...
Ah, so for example, if you don't account for the rotational-electronic interaction, you will see peaks in the spectrum that don't match your effective Hamiltonian. But if you add the ##\Lambda##-doubling term, that coupling will be taken into account by adding one more term to the Hamiltonian, and everything can be factorized again (until we reach a level of experimental accuracy at which yet more effective terms need to be added).
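For a sense of scale: in a ##^1\Pi## state the ##\Lambda##-doubling enters the effective Hamiltonian as a single extra constant ##q##, splitting the otherwise degenerate ##\pm\Lambda## components by

$$\Delta E_{\Lambda} = q\,J(J+1),$$

with ##q## of order ##B^2/\Delta E_{el}## from the second-order rotational-electronic coupling to nearby ##\Sigma## states.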
 
  • #20
Yup!
 
  • #21
amoforum said:
Yup!
I am a bit confused about actually calculating different terms in this formalism. In the first example, Brown and Carrington look at the rotational term ##B(R)R^2## (I will ignore the ##c## and ##\hbar##) and I have a few questions. From this thread, I understand that, starting with something of the form ##|\eta>|i>##, where ##|\eta>## contains the electronic part and ##|i>## the rotational part, we want, to first order in PT, for an operator ##O##, to write ##<i|<\eta|O|\eta>|i>## as ##<\eta|A|\eta><i|B|i>## for some operators ##A## and ##B##. In this way ##<\eta|A|\eta>## is the effective parameter we find from experiment and ##B## is an effective operator. So I have a few questions:

1. Before they start the derivations, they assume a Hund's case (a). I am not sure why we need that for these derivations. What we need is ##<\eta|A|\eta>B##, which has this form regardless of the Hund's case chosen. The basis we choose for the rotational manifold, i.e. ##<i|B|i>## or ##<j|B|j>## (e.g. Hund's case (a) or (b)), shouldn't affect the form of that operator. In the rotational case they present, the first-order perturbation of the rotational term should be ##<\eta|B(R)|\eta>(N^2-L_z^2)##, regardless of the Hund's case. Is that right or am I still missing something?

2. I am not sure how they get to the ##N^2-L_z^2## term, and there are a few things that confuse me:

a) They say that ##N_+L_-+N_-L_+## doesn't contribute at first order. But for example ##N_+L_- = (R_++L_+)L_-## contains the term ##L_+L_-##, which, unless some cancellation happens somewhere, should not be zero in general when calculating ##<\eta|L_+L_-|\eta>##. Also, I am actually not sure how to apply ##L_+## or ##L_-## to an electronic wavefunction. Assuming that the projection of the orbital angular momentum on the internuclear axis is well defined, ##L_\pm## would change it by 1, but the ladder operators usually carry a coefficient that depends on the total angular momentum (e.g. ##J_+|jm>=\sqrt{(j-m)(j+m+1)}|j(m+1)>##), and ##L## itself is not defined in this case. Actually, based on their calculation, the coefficient of applying the ladder operator is 1. Is that a convention?

b) They don't say anything about ##L_x^2## and ##L_y^2##, neither in the first- nor the second-order PT calculation. Why can we ignore them? Don't they contribute to the rotational energy? As above, by writing for example ##L_x## in terms of the ladder operators and squaring, we get something of the form ##L_+L_-##, which should be non-zero in PT. I guess I am really missing something about this term...

c) How does ##N## act on the wavefunction? It looks like they just take it out of the electronic bra-ket, but ##N## contains ##L##, so I am not sure what they are doing.

I am sorry for the long post, but any advice/explanation on how they do the math would be really appreciated.
 
  • #22
BillKet said:
I am a bit confused about actually calculating different terms in this formalism. In the first example, Brown and Carrington look at the rotational term ##B(R)R^2##... ...

1. That's correct. For perturbation theory, something has to serve as your zeroth-order starting point. It might as well be convenient.

2. I would highly recommend looking at Section 3.1.2.3 in Lefebvre-Brion/Field (Revised Edition). It explicitly expands all those terms. If you don't have access to that book, remember: ##R^2 = R^2_x + R^2_y##, ##R = J - L - S##, and ##R_z = 0## because the nuclear rotation is necessarily about an axis perpendicular to the internuclear axis. Expanding all those terms and rearranging, you end up with six terms. Three of them are of the form ##(L^2 - L^2_z)## for ##J##, ##L## and ##S## (note: ##L_x^2 + L_y^2 = L^2 - L^2_z##), and these are diagonal in the Hamiltonian up to first order. Eq. 7.82 in B&C just consolidates them by recognizing that ##N = J - S##. The other three, of the form ##(J^+L^- + J^-L^+)## for combinations of ##J##, ##L## and ##S##, couple other electronic states and need to be treated at second order. B&C then treat these starting from Eq. 7.84.
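For reference, carrying out that expansion (treating the angular momenta as commuting vector operators and using ##A_xB_x + A_yB_y = \tfrac{1}{2}(A^+B^- + A^-B^+)##) gives

$$R^2 = (J^2 - J_z^2) + (L^2 - L_z^2) + (S^2 - S_z^2) + (L^+S^- + L^-S^+) - (J^+L^- + J^-L^+) - (J^+S^- + J^-S^+),$$

with the first three terms diagonal in a case (a) basis and the last three supplying the couplings treated at second order.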
 
  • #23
1. Convenience! Hund's case (a) is the one with the least funky hierarchy and the most good quantum numbers after all.

2. What @amoforum said. It's really poorly explained in Brown and Carrington.

3.
BillKet said:
Also I am actually not sure how to apply ##L_+## or ##L_-## to an electronic wavefunction
Step 1: Demand a pay raise from whoever makes you actually work with the wavefunction
Step 2: Construct ladder operators ##L_{+i}## and ##L_{-i}## for each individual electron. $$L_{xi} = -i\hbar \left[ y_i \frac{\partial}{\partial z_i} - z_i \frac{\partial}{\partial y_i} \right]$$ $$L_{yi} = -i\hbar \left[ z_i \frac{\partial}{\partial x_i} - x_i \frac{\partial}{\partial z_i} \right]$$ $$L_{+i} = L_{xi} + iL_{yi}$$ $$L_{-i} = L_{xi} - iL_{yi}$$
Step 3: Add up all the individual electron operators to get the total ##L_+## and ##L_-##: $$L_+ = \sum_i L_{+i}$$ $$L_- = \sum_i L_{-i}$$ Now you have something you can apply to the total electronic wavefunction.

For ##\vec{N}## it's a similar procedure except you have to do the same process for nuclear angular momenta to get ##\vec{R}##.
 
  • #24
Depending on your institution's access, you may be able to view this lecture by Robert Field. Section 2.3 covers the same derivation that @amoforum refers to. The key point that's extremely NOT OBVIOUS from Brown and Carrington (:oldmad:) is that ##\langle 0|(L^2 - L_z ^2)|0\rangle## is a parameter that is fitted to data and lumped into the "hidden" part of the spectrum (##\langle 0 |H_0 |0\rangle##), or just outright ignored. Can you see now why I don't recommend this book as an introduction?

In other words, you can write ##L^2 = (L^2 - L_z^2) + L_z^2##, then shift the ##L^2 - L_z^2## term over to the energy of the electronic level, and you're left with effectively ##L^2 \to L_z ^2##. Real dirty, ain't it?
 
  • #25
@Twigg @amoforum Thanks a lot! That helped answer some of my questions! But I still have some (sorry!). Going back to my previous example, assume we are in a ##^2\Sigma## state (say this was obtained by solving the electronic-only SE using some method, e.g. coupled cluster). Ignoring the vibrational term for now, the molecular wavefunction in this electronic state, before adding any perturbation, is ##|^2\Sigma>|JX>##, where ##|JX>## is some basis that we can choose later (we kind of assumed that the electronic energy is the main contribution, so it should be Hund's case (a) or (b), but in general we know that ##J## is the only good quantum number in all Hund's cases, so ##X## depends on what Hund's case we choose later). Now if we look at the diagonal (in the electronic space) part of the rotational operator we have:

1. ##<^2\Sigma|L^2-L_z^2|^2\Sigma>## - this becomes a fitting parameter that just shifts the electronic energy overall, so it is not important for the rotational spectrum (assuming we do some RF measurement within the same electronic manifold). Thank you for clarifying that for me!

2. ##<^2\Sigma|J^2-J_z^2|^2\Sigma>## - I am not sure how to think about this. Technically, ##J## is not defined at the electronic level. ##J## "exists" only after we add the rotational wavefunction too, i.e. we can't say that ##J^2|^2\Sigma> = J(J+1)|^2\Sigma>##; the right statement would be ##J^2|^2\Sigma>|JX> = J(J+1)|^2\Sigma>|JX>##. So, does this mean that ##J## is not defined at the electronic wavefunction level and we can just move it out of the expectation value, i.e. ##<^2\Sigma|J^2-J_z^2|^2\Sigma> = <^2\Sigma|^2\Sigma>(J^2-J_z^2)##, where ##J^2-J_z^2## are operators, not numbers?

3. ##<^2\Sigma|S^2-S_z^2|^2\Sigma>## - we have ##<^2\Sigma|S^2|^2\Sigma> = S(S+1)##, where ##S## is a number here, but I am not sure what ##<^2\Sigma|S_z^2|^2\Sigma>## is. Can we just say ##<^2\Sigma|S_z^2|^2\Sigma> = S_z^2##, with ##S_z## being a number (not an operator), and choose the quantization axis later, once we choose a Hund's case?

So combining the results above (ignoring the ##L^2-L_z^2## term), assuming it is right, we get for the diagonal part ##<^2\Sigma|R^2|^2\Sigma>=S(S+1)-S_z^2+J^2-J_z^2##, where, again, ##S## and ##S_z## are numbers and ##J^2## and ##J_z^2## are operators. Based on Brown and Carrington, ##<^2\Sigma|R^2|^2\Sigma>=N^2## (I ignore the ##L_z## term). So this means that ##N^2 = J^2+S(S+1)-S_z^2-J_z^2##. In principle we can turn ##S## and ##S_z## back into operators, as the expression would be equivalent, so we have ##N^2 = J^2+S^2-S_z^2-J_z^2##, where everything is now an operator. But ##N^2 = (J-S)^2=J^2+S^2-2J\cdot S##, which implies that ##2J\cdot S = S_z^2 + J_z^2##, which is not true. What am I missing?
 
  • #26
BillKet said:
@Twigg @amoforum Thanks a lot! That helped answer some of my questions! But I still have some (sorry!). ... But ##N^2 = (J-S)^2=J^2+S^2-2J\cdot S##, which implies that ##2J\cdot S = S_z^2 + J_z^2##, which is not true. What am I missing?

2. ##J## is the total angular momentum of all degrees of freedom (ignoring nuclear spin). ##J^2|^2\Sigma> = J(J+1)|^2\Sigma>## is correct. The rotational energies of ##B(R)(J^2 - J_z^2)## are ##B(R)[J(J+1) - \Omega^2]##.

3. ##S_z## is the projection of ##S## along the internuclear axis, i.e. ##\Sigma##. Regardless of Hund's case, the internuclear (z) axis defines the quantization axis, because there's no field around stronger than the internuclear electric field.

Unfortunately, I can't point out the discrepancy at first glance. Likely some x/y terms got swept under the rug somewhere. But in all these expansions, nothing should change by introducing ##N##. If starting with ##R = J - L - S ## gives a result that makes sense, then so should ## R = N - L ##.
 
  • #27
Do you mean that ##S_z## and ##J_z## are good quantum numbers, quantized along the internuclear axis, at the level of the electronic wavefunction? But they can become bad quantum numbers, or be quantized along a different axis, when we add the perturbative terms (e.g. in Hund's case (b) the spin is quantized along ##J##, not along the internuclear axis)? Actually that confuses me a bit, as in Hund's case (b) the electronic energy is still much stronger than the rotational one, yet the spin is quantized along ##J##.

amoforum said:
Unfortunately, I can't point out the discrepancy at first glance. Likely some x/y terms got swept under the rug somewhere. But in all these expansions, nothing should change by introducing ##N##. If starting with ##R = J - L - S ## gives a result that makes sense, then so should ## R = N - L ##.
The issue is that in the derivation you pointed me to, they start with ##R = J - L - S## and terms like ##J_xS_x## don't appear in the part they call diagonal. But working with ##N## instead, you discard terms of the form ##N_xL_x##, yet by keeping ##N^2## you implicitly keep terms of the form ##J_xS_x## in the diagonal part, and I am not sure how that is possible.
 
  • #28
BillKet said:
Do you mean that ##S_z## and ##J_z## are good quantum numbers, quantized along the internuclear axis, at the level of the electronic wavefunction? ...

Basis sets are defined by their good quantum numbers. How you justify which basis set to use depends on which interactions you have and their relative strengths. You choose a basis set after sorting all that out. Table 6.7 in Brown and Carrington lists the criteria. So to answer your first question: yes, if you add more perturbations or change the relative strengths of the ones you have, your quantum numbers (basis set) might not be good anymore.

In Hund's case (b), ##L## still precesses about the internuclear axis. But with weak spin-orbit coupling, there's nothing keeping ##S## near the internuclear axis (which is what produces a conserved ##S_z##, i.e. ##\Sigma##, in Hund's case (a)), so it couples to the next strongest thing: ##N = R + L##, which then forms ##J##.

I think maybe the discrepancy is that what's diagonal in the Hund's case (a) basis set is not going to be diagonal in Hund's case (b). So if I were working out the rotational Hamiltonian having picked case (a) already, I would say ##H_{rot} = B(R)(J - L - S)^2##, but in case (b) I would start with ##H_{rot} = B(R)(N - L)^2##. And even though ##N = J - S##, that doesn't mean ##N^2## will be diagonal in a case (a) basis set just because it is in case (b).
 
  • #29
amoforum said:
I think maybe the discrepancy is that what's diagonal in the Hund's case (a) basis set is not going to be diagonal in Hund's case (b). ...
Thanks for the clarification! However, I am still confused about the part above. We are still at the step of calculating expectation values at the level of the electronic wavefunction. In the book they state: "The operators B(R) and N act only within each electronic state while the orbital angular momentum L acts both within and between such states." So if I understand this right, ##N^2## is diagonal at the electronic level, regardless of the Hund's case chosen, so when we want to calculate the effective operator coming from ##<\eta|B(R)N^2|\eta>##, this should give ##<\eta|B(R)|\eta>N^2##, which is actually what they show in their derivation. But I am confused, as, for example, ##N^2## contains a term of the form ##J_+S_-##, which is not diagonal at the electronic level.

Also, I agree with your statement about the form of the rotational Hamiltonian at the rotational level, but they do the opposite: they assume Hund's case (a) and use ##B(R)(N-L)^2##. But again, this comes later, relative to what I am confused about. The effective operator for the rotation, to first order in PT, should be ##<\eta|B(R)|\eta>N^2##, regardless of the Hund's case chosen. My main question is: how do they bring ##N^2## outside the electronic expectation value?
 
  • #30
@BillKet Nothing to apologize for! You're giving me an excuse to clear out the cobwebs in my memory. Seriously though, this stuff is hard and there's nothing to be ashamed of from asking a lot of questions.

##N^2## isn't really diagonal at the electronic level. More importantly, the electronic state isn't just ##\eta##. You can't have an ##X^1\Pi## state without the ##\Pi## (i.e., ##|\Lambda|=1##). The reason ##S## is in there too is because, as B&C note at the bottom of page 317, the parity of the electronic orbital determines the value of ##S##, same as with atomic orbitals. So the overall 0th-order part of the ket is ##|\eta,\Lambda\rangle##, not just ##|\eta\rangle##, where ##S## is implied but not included. So how do B&C pull out ##N^2## and even ##L_z^2## from an expectation value that includes ##\Lambda##? They just make ##\Lambda## appear redundantly in both the ##|\eta,\Lambda\rangle## ket and the Hund's case (a) ket ##|\Lambda,S,\Sigma,J,\Omega\rangle##. They're essentially saving the ##N^2## and ##L_z^2## for later, when they start using the Hund's case kets, because both sets of kets include ##\Lambda## (and ##S##). How do B&C choose which terms to evaluate in the 0th-order kets and which to evaluate in the Hund's case kets? Just by energy scale: ##N-L## has a small energy contribution relative to the energy of the 0th-order stuff, meanwhile ##B(R)## has to be evaluated because it contains the nuclear separation ##R##. The redundancy in the kets with ##\Lambda## is what B&C mean when they say: "L acts both within and between such states".

Just wanted to make a suggestion too. I don't remember if B&C do this in the book, but at some point you may want to consider drawing out what a rotational spectrum would look like for Hund's case (a). It's a really important exercise, especially if you're an experimentalist. If you do it, make sure to include the P, Q, and R branches, and notice for which ##\Omega## and ##\Omega'## they appear or do not appear. That is the easiest and most reliable way of interrogating a new molecular state. In experiment, it's common to see mystery states with only a known ##\Omega## value because of how do-able this test is.
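For anyone doing that exercise: with a common ##B## for both states for simplicity, the rigid-rotor branch positions relative to the band origin ##\nu_0## are

$$R(J):\ \nu_0 + 2B(J+1), \qquad Q(J):\ \nu_0, \qquad P(J):\ \nu_0 - 2BJ,$$

and the Q branch is absent for ##\Omega' = 0 \leftarrow \Omega'' = 0## bands, which is part of what makes the branch structure such a clean ##\Omega## diagnostic.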

Edit: Fixed an erroneous statement about orbital parity and the value of S
 
  • #31
Twigg said:
##N^2## isn't really diagonal at the electronic level. More importantly, the electronic state isn't just ##\eta##. ...
So for example, assuming we choose Hund's case (a), the 0th-order wavefunction is ##|\eta,\Lambda>|J,\Lambda,\Sigma,\Omega>##. If we want to calculate the first-order contribution of the rotational Hamiltonian to the effective Hamiltonian, we can still keep the Hund's case (a) basis and calculate $$<J',\Lambda,\Sigma',\Omega'|<\eta,\Lambda|B(R)(N^2-L_z^2)|\eta,\Lambda>|J,\Lambda,\Sigma,\Omega>$$ where ##\Lambda## must be the same, as we are in the same electronic state, but ##J##, ##\Sigma## and ##\Omega## can change, as they are related to the rotational wavefunction. Then, given that we can change the order of the electronic and rotational wavefunctions, the term above is equal to $$<\eta,\Lambda|B(R)<J',\Lambda,\Sigma',\Omega'|N^2-L_z^2|J,\Lambda,\Sigma,\Omega>|\eta,\Lambda>$$ which is equal to $$<\eta,\Lambda|B(R)|\eta,\Lambda><J',\Lambda,\Sigma',\Omega'|N^2-L_z^2|J,\Lambda,\Sigma,\Omega>$$ as the term we took out of the electronic expectation value, ##<J',\Lambda,\Sigma',\Omega'|N^2-L_z^2|J,\Lambda,\Sigma,\Omega>##, is just a number, not an operator, at this stage. At this stage we can drop the assumption of Hund's case (a) and thus turn the rotational effective operator from a matrix element, as it is now, back into an operator, which can be applied in this form to other Hund's cases, so in the end we get $$<\eta,\Lambda|B(R)|\eta,\Lambda>(N^2-L_z^2)$$ Is this right?
 
  • #32
That seems right. I want to add some more context to choosing the Hund's case though, because the way you wrote it above, it's as if you only use it in order to drop it later. The choice of basis set has implications for ##H^{(0)}## itself.

I hope this section in Lefebvre-Brion is helpful:

"A choice of basis set implies a partitioning of the Hamiltonian, ##H = H_{el} + H_{SO} + T^N(R) + H_{ROT}##, into two parts: a part, ##H^{(0)}##, which is fully diagonal in the selected basis set, and a residual part, ##H^{(1)}##. The basis sets associated with the various Hund's cases reflect different choices of the parts of ##H## that are included in ##H^{(1)}##. Although in principle the eigenvalues of ##H## are unaffected by the choice of basis, as long as this basis set forms a complete set of functions, one basis set is usually more convenient to use or better suited than the others for a particular problem."

Also:

"The basis function labels are good quantum numbers with respect to ##H^{(0)}##, but not necessarily with respect to ##H^{(1)}##"

Even though any complete basis set works in principle, the relationship between the Hamiltonian and the basis set is a little more intertwined than that, in that your choice of basis set implicitly sets your choice of zeroth-order Hamiltonian.

So in choosing Hund's case (b), you've already accepted that the case (b) basis functions are eigenstates of ##H_{el}## and of the diagonal part of ##H_{ROT}##: ##B(R)(N^2 - N_z^2)##. The vibrational average of ##B(R)##, proportional to ##<\nu|R^{-2}|\nu>##, is the constant in the first-order Hamiltonian (Eq. 7.83) in B&C.

Similarly with Hund's case (a): the case (a) basis functions are eigenstates of ##H_{el}## and the diagonal parts of ##H_{ROT}## mentioned a few posts up.

So now expanding ##H_{ROT}##:

##H_{ROT} = B(R)[J^2 + S^2 + L_\perp^2 - L_z^2 - 2J\cdot S - (N^+L^- + N^-L^+)]##. The ##J\cdot S## term is actually diagonal in case (b) because ##2J\cdot S = J^2 + S^2 - N^2##. Since ##\Lambda## is good in both cases (a) and (b), the choice doesn't impact ##H_{el}## so much, as your derivation above shows.
 
  • #33
amoforum said:
That seems right. I want to add some more context to choosing the Hund's case though, because the way you wrote it above, it's as if you only use it in order to drop it later. ...
So in B&C's derivation, the obtained ##H_{eff}## is valid only for Hund's case (a)? If I want to use Hund's case (b), their derivation wouldn't apply and I would have to re-derive everything from scratch? That confuses me a bit. For example, the rotational term and the spin-orbit coupling term have the same form in Hund's case (b), too (at least from what I saw in different papers I read). Is that just a coincidence, and other terms might not have the same form?

Also, in the brief derivation I did above, I never used the fact that I have Hund's case (a); all I used was that, for a complete rotational basis, ##<i|N^2-L_z^2|j>## is just a number, which can be taken outside the electronic expectation value. That statement is true in general (and not only at a perturbation-theory level) for any rotational basis, so I am not sure why using a different basis would change the form of that term (or any other). Of course, actually evaluating ##<i|N^2-L_z^2|j>## would be easier in one basis than in another, but the form of the effective Hamiltonian shouldn't change.

One other thing that confuses me is this statement: "the case (b) basis functions are eigenstates of ##H_{el}##". Hund's case basis functions have nothing to do with ##H_{el}##, do they? The eigenfunctions of ##H_{el}## are the ##|\eta>##'s, and these are the same no matter which Hund's case I choose later, no? Thank you!
 
  • #34
As I mentioned above, Hund's case (b) would work fine because your derivation moves B(R) around, and ##\Lambda## is a good quantum number for (a) and (b).

You didn't use Hund's case (a)? Didn't you write down your electronic wavefunction as ##|\eta, \Lambda>##? In this physical model, you've implied that ##L##'s precession about the internuclear axis is conserved, a wholly electronic phenomenon. The fact that the electronic energy is so much higher than the rotational energy is the reason you can do that separation in the first place. In Hund's case (c), the spin-orbit interaction is larger than the separation between electronic states, and ##\Lambda## isn't even a good quantum number! And how about a scenario where the rotation is so fast that it significantly alters the electronic motion?

To reiterate Twigg: "...the electronic state isn't just ##\eta##...the overall 0-th order part of the ket is ##|\eta, \Lambda>##, not just ##|\eta>##".

Hund's cases aren't just the rotational part; they're the basis states for the whole effective Hamiltonian. You can do the same procedure with the wrong basis states, but it just won't match your data.
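To connect this back to fitting: here's a minimal sketch of the "build ##H_{eff}##, diagonalize, fit the constants" loop, using a toy ##^2\Sigma##-like level formula that happens to be diagonal in case (b). All names, numbers, and the use of scipy's least_squares are my own illustration, not B&C's:

```python
import numpy as np
from scipy.optimize import least_squares

S = 0.5  # 2Sigma-like state

# Toy case (b) effective Hamiltonian, already diagonal for this example:
# E(N, J) = B N(N+1) + (gamma/2) [J(J+1) - N(N+1) - S(S+1)].
# In general you would build the full matrix, off-diagonal terms included,
# and diagonalize each block with np.linalg.eigvalsh.
def levels(B, gamma, N_max=5):
    out = {}
    for N in range(N_max + 1):
        Js = (S,) if N == 0 else (N - S, N + S)
        for J in Js:
            out[(N, J)] = B * N * (N + 1) + 0.5 * gamma * (
                J * (J + 1) - N * (N + 1) - S * (S + 1))
    return out

# Synthetic "measured" lines: differences between (upper, lower) levels.
true = levels(B=0.5, gamma=0.01)
lines = [((1, 1.5), (0, 0.5)), ((2, 2.5), (1, 1.5)), ((2, 1.5), (1, 0.5))]
data = np.array([true[u] - true[l] for u, l in lines])

# Fit B and gamma so the predicted differences match the measured ones.
def residuals(p):
    lv = levels(*p)
    return np.array([lv[u] - lv[l] for u, l in lines]) - data

fit = least_squares(residuals, x0=[0.4, 0.0])
print(fit.x)  # recovers approximately [0.5, 0.01]
```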
 
  • #35
amoforum said:
As I mentioned above, Hund's case (b) would work fine because your derivation moves ##B(R)## around, and ##\Lambda## is a good quantum number for (a) and (b). [...] Hund's cases aren't just the rotational part; they're the basis states for the whole effective Hamiltonian.
Uh, I am still confused. Yes, I assumed a case where the electronic energy is much bigger than the spin-orbit coupling. That allowed me to use ##H_{el}## as ##H_0## and treat ##H_{SO}## as a perturbation. By solving for the eigenfunctions of ##H_{el}##, I would get ##|\eta,\Lambda>##, as ##\Lambda## is a good quantum number for ##H_0 = H_{el}##. Up to this point I haven't made any assumption about the Hund's cases: ##\Lambda## is a good quantum number for the electronic energy regardless of the Hund's case (i.e. if we were somehow able to fix the molecule in place and prevent it from rotating, I would still obtain this eigenfunction).

Once I add the rotational part, I can use any Hund's case. For example, if I were to use a Hund's case (c) basis for the rotational levels in this electronic manifold, the basis states would be ##|\eta,\Lambda>|J,J_a,\Omega>##, and I could proceed just as above, as we still have ##|\eta,\Lambda>|J,J_a,\Omega> = |J,J_a,\Omega>|\eta,\Lambda>##. Of course this would be a really bad choice of basis, but the only difference in practice is that the off-diagonal terms ##<J',J_a',\Omega'|N^2-L_z^2|J,J_a,\Omega>## would be much bigger than in Hund's case (a) or (b), so I would have to diagonalize a bigger portion of ##H_{eff}## in this basis for the same level of accuracy. But again, if I were to drop the basis after this step, I would end up with exactly the same effective Hamiltonian I got before. What am I missing? Thank you!
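The basis-independence point, and the price of a poorly adapted basis, is easy to check numerically. A throwaway sketch (random toy matrices, nothing molecular about them): a unitary change of basis leaves the spectrum untouched, but truncating to a small block only works when the off-diagonal residual is small in that basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "true" Hamiltonian written in a well-adapted basis: nearly diagonal.
A = rng.normal(size=(4, 4))
H_good = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.02 * (A + A.T)

# The same operator re-expressed in a poorly adapted basis via a random
# orthogonal transformation U.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))
H_bad = U.T @ H_good @ U

# Full diagonalization: identical spectra, as it must be (same operator).
print(np.linalg.eigvalsh(H_good))
print(np.linalg.eigvalsh(H_bad))

# Truncate each to its first 2x2 block and diagonalize only that:
print(np.linalg.eigvalsh(H_good[:2, :2]))  # close to the two lowest exact values
print(np.linalg.eigvalsh(H_bad[:2, :2]))   # generally far off -- the neglected
                                           # off-diagonal couplings are large here
```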
 

1. What is a molecular Hamiltonian?

A molecular Hamiltonian is a mathematical representation of the total energy of a molecule, taking into account the kinetic energies of, and the interactions among, all the electrons and nuclei in the molecule. It is used in quantum mechanics to describe the behavior of molecules and predict their properties.

2. What makes a molecular Hamiltonian "effective"?

"Effective" here is a term of art: an effective Hamiltonian operates within a single electronic level only. Couplings to other electronic levels are not dropped but are absorbed, via perturbation theory, into a small set of constants multiplying operators that act within that level. Those constants are then determined by fitting to experiment.

3. What are Hund's cases in the molecular Hamiltonian?

Hund's cases are idealized angular momentum coupling schemes for diatomic molecules, labeled (a) through (e). Each case assumes a particular hierarchy among the electrostatic, spin-orbit, and rotational interactions, which determines which quantum numbers are approximately good and which basis set is most convenient for setting up the effective Hamiltonian. They should not be confused with Hund's rules for ordering atomic terms.

4. How is the molecular Hamiltonian used in molecular spectroscopy?

The molecular Hamiltonian is used in molecular spectroscopy to calculate the energy levels and transitions of a molecule. This information can then be used to interpret experimental data, such as absorption or emission spectra, and identify the molecular structure and properties.

5. What are some factors that can affect the accuracy of the molecular Hamiltonian?

The accuracy of the molecular Hamiltonian can be affected by various factors, such as the level of approximation used in the mathematical models, the inclusion of all relevant interactions and factors, and the quality of experimental data used to validate the predictions. Additionally, the complexity of the molecule and the computational resources available can also impact the accuracy of the molecular Hamiltonian.
