Hartree-Fock: Which orbitals have to be chosen?

In summary, the Hartree-Fock method minimizes the energy functional over spin orbitals in a self-consistent iteration. The number of spin orbitals used is set by the initial guess, which determines the dimensionality of the problem. The Fock operator is diagonalized to obtain the lowest-energy solutions, which are used to construct a new Fock matrix. The convergence of this procedure can be improved by additional techniques such as iterative subspace convergence acceleration (DIIS) and level shifts.
  • #1
Derivator
Hi folks,

Hartree-Fock theory tells us that the energy functional [itex]<\Psi|H|\Psi>[/itex], where [itex]\Psi[/itex] is a single Slater determinant, is minimized by spin orbitals [itex]\phi_i[/itex] that fulfill the Hartree-Fock equations [itex]f_i \phi_i = \epsilon_i \phi_i[/itex].

Since the Fock operator [itex]f_i[/itex] depends on the solution, the Hartree-Fock equations are solved by self-consistent iteration. That is, for an N-particle Slater determinant, one starts with N guessed spin orbitals, uses these guessed spin orbitals to calculate the Fock operator, then solves the eigenvalue problem given by this guessed Fock operator, obtains new spin orbitals, calculates a new Fock operator, and so on, until there is no difference between input and output spin orbitals.
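
To fix notation, here is a rough sketch of how I picture this iteration once a finite orthonormal basis is introduced (the two-site model and its parameters are just made-up toy numbers so that the integrals are trivial to write down; the occupation step is exactly the point my question below is about):

[code]
# Sketch of the SCF cycle in a small orthonormal basis (toy two-site model:
# hopping t, on-site repulsion U).  The loop structure is generic; only the
# integrals are specific to the toy model.
import numpy as np

t, U = 1.0, 2.0
n_occ = 1                                  # one doubly occupied orbital

hcore = np.array([[0.0, -t],
                  [-t, 0.0]])              # one-electron part
eri = np.zeros((2, 2, 2, 2))               # two-electron integrals (pq|rs)
eri[0, 0, 0, 0] = eri[1, 1, 1, 1] = U      # on-site repulsion only

def fock(D):
    """Closed-shell Fock matrix F = h + 2 J(D) - K(D) for density D."""
    J = np.einsum('pqrs,rs->pq', eri, D)
    K = np.einsum('prqs,rs->pq', eri, D)
    return hcore + 2.0 * J - K

D = np.zeros_like(hcore)                   # initial guess: empty density
for it in range(50):
    F = fock(D)                            # Fock operator from current orbitals
    eps, C = np.linalg.eigh(F)             # eigenvalues come out in ascending order
    C_occ = C[:, :n_occ]                   # <-- which solutions to keep? (my question)
    D_new = C_occ @ C_occ.T
    if np.linalg.norm(D_new - D) < 1e-10:  # input orbitals = output orbitals
        break
    D = D_new

E = np.sum(D * (hcore + fock(D)))          # closed-shell electronic energy
print(it, E)                               # for this toy model E = -2t + U/2
[/code]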

My question:
Suppose you have guessed N spin orbitals and obtained an initial Fock operator [itex]\tilde f_i[/itex]. You then want to solve the eigenvalue problem [itex]\tilde f_i \phi_i = \epsilon_i \phi_i[/itex] to obtain N new spin orbitals [itex]\phi_i[/itex]. However, solving [itex]\tilde f_i \phi_i = \epsilon_i \phi_i[/itex] will give you an infinite number of spin orbitals. Which of them do you use to construct your new Fock matrix?

best,
derivator
 
  • #2
Derivator said:
Hi folks,

However, solving [itex]\tilde f_i \phi_i = \epsilon_i \phi_i[/itex] will give you an infinite number of spin orbitals. Which of them do you use to construct your new Fock matrix?

Why do you think you will get an infinite number of spin orbitals from solving the eigenvalue equation using the SCF method? You said yourself that you start from an initial guess of N spin orbitals. That fixes the dimensionality of the problem, so at each step in the SCF calculation you will be diagonalizing an N-dimensional matrix. So of course that will end up giving N spin orbitals in the final solution, which are the eigenvectors of the Fock matrix that minimize the total energy in the N-dimensional basis you have chosen.

Does that answer your question, or were you asking something else?
 
  • #3
Hi SpectraCat,

SpectraCat said:
That fixes the dimensionality of the problem, so at each step in the SCF calculation, you will be diagonalizing an N-dimensional matrix.
I don't see how you get a matrix equation without introducing a finite basis set (for example the Roothaan equations). Additionally, only in the case of expanding each spin orbital in N basis functions can you ensure that the matrix won't have more than N eigenvectors. If you use more than N basis functions, the Fock matrix will probably have more than N eigenvectors. But that's not my point.

Without introducing such a basis, [itex]\tilde f_i \phi_i = \epsilon_i \phi_i[/itex] is a differential equation with possibly an infinite number of solutions. (If the Fock operator didn't depend on the solutions, this equation would be pretty similar to a Schrödinger equation; in that case you would have no 'Hamiltonian matrix' either, except when introducing a finite basis.)
 
Last edited:
  • #4
Derivator,
in principle this equation has an infinite number of solutions, but you are not interested in all of them. In all practical cases this equation has a discrete spectrum for the bound states and is bounded from below. What you do is simply (in the simplest case): if you have nClos doubly occupied orbitals, you doubly occupy the nClos eigenvectors with the lowest energies (the "aufbau principle"). From these eigenvectors you can then form a new Fock operator and go on in the SCF cycle. The situation is somewhat more complicated for spin-restricted open-shell cases, if the highest occupied orbital is degenerate, or if you have other constraints (e.g., a requirement that the wave function transforms according to some given irrep of the point group of the molecule).

Depending on your initial guess, this procedure does not always converge, or does not converge to the lowest possible HF solution. Therefore, additional tricks are often played (e.g., iterative subspace convergence acceleration (DIIS), level shifts, or following the aufbau principle only in the first few iterations and then selecting the eigenvectors that maximize the overlap of their linear span with the eigenvectors from the last cycle). Note also that there are other ways of getting HF solutions which do not involve the typical SCF cycle: http://dx.doi.org/10.1063/1.448627 (this article deals with MCSCF, but of course exactly the same thing also works in the single-reference case). Such approaches can converge almost everything you throw at them, and are often used when you cannot get HF to converge by any other means.
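
For what it's worth, the DIIS idea looks roughly like this in code (just a sketch of Pulay's scheme; real implementations add details such as limiting the subspace size and handling a near-singular B matrix):

[code]
# Rough sketch of DIIS (direct inversion in the iterative subspace).
# F_list / err_list hold the Fock matrices and error vectors e = FDS - SDF
# (S = overlap matrix, identity in an orthonormal basis) from previous
# iterations.  The extrapolated Fock matrix is the linear combination whose
# error has minimal norm, with the coefficients constrained to sum to 1.
import numpy as np

def diis_extrapolate(F_list, err_list):
    n = len(F_list)
    B = np.zeros((n + 1, n + 1))
    B[-1, :] = B[:, -1] = -1.0
    B[-1, -1] = 0.0
    for i in range(n):
        for j in range(n):
            B[i, j] = np.sum(err_list[i] * err_list[j])
    rhs = np.zeros(n + 1)
    rhs[-1] = -1.0
    coeffs = np.linalg.solve(B, rhs)[:n]    # last entry is the Lagrange multiplier
    return sum(c * F for c, F in zip(coeffs, F_list))

# Inside the SCF loop one would then do something like:
#   err = F @ D @ S - S @ D @ F
#   F_list.append(F); err_list.append(err)
#   F = diis_extrapolate(F_list, err_list)
#   ... and diagonalize the extrapolated F instead of the raw one.
[/code]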
 
  • #5
Derivator said:
Hi SpectraCat,

I don't see how you get a matrix equation without introducing a finite basis set (for example the Roothaan equations). Additionally, only in the case of expanding each spin orbital in N basis functions can you ensure that the matrix won't have more than N eigenvectors. If you use more than N basis functions, the Fock matrix will probably have more than N eigenvectors. But that's not my point.

Without introducing such a basis, [itex]\tilde f_i \phi_i = \epsilon_i \phi_i[/itex] is a differential equation with possibly an infinite number of solutions. (If the Fock operator didn't depend on the solutions, this equation would be pretty similar to a Schrödinger equation; in that case you would have no 'Hamiltonian matrix' either, except when introducing a finite basis.)

First, the initial guess of N spin orbitals provides the N-dimensional "basis" (I am not talking about a basis-set expansion here) for the problem being solved, irrespective of whether or not a Roothaan-style LCAO formalism is also being used. This sets the dimensionality of the problem. It makes sense to choose only N spin orbitals for an N-electron system, because we know only occupied spin orbitals will contribute to the total energy. Basically, what you are doing here is saying that you are only going to take the N lowest-energy solutions to the Fock operator eigenvalue equation for the given problem. You are right that in principle there are an infinite number of possible solutions, but we only care about the N lowest-energy solutions, as I explained above.

Remember also that in the HF formalism, the initial guess will be a Slater determinant formed from *orthogonal* spin-orbitals, so there are some restrictions on the guess orbitals you can choose.

Second, the Fock-equations can be represented in matrix form (i.e. the Fock matrix). Therefore, the first step in any HF calculation is to form a Fock-matrix from the initial guess spin-orbitals. You then diagonalize this matrix to get a set of new orbitals, which you then use to form a new Fock matrix, which is then diagonalized, and so on, repeating the process until convergence.

Now, the reason Roothaan's contribution was so valuable was that it took the coupled integro-differential Fock equations (which are not easy to solve) and transformed them into a set of coupled linear algebraic equations, which are easy to solve with a computer. However, the general description of an HF calculation as a variational matrix diagonalization is valid whether or not we are also using the Roothaan equations.

[EDIT: I was working on this reply and didn't see cgk's post]
 
  • #6
SpectraCat said:
First, the initial guess of N spin orbitals provides the N-dimensional "basis" (I am not talking about a basis-set expansion here) for the problem being solved, irrespective of whether or not a Roothaan-style LCAO formalism is also being used. This sets the dimensionality of the problem. It makes sense to choose only N spin orbitals for an N-electron system, because we know only occupied spin orbitals will contribute to the total energy.
It doesn't really work like that. If you do a basis expansion (no matter whether LCAO or some other orthogonal basis, as you say), you need more orbitals than the occupied ones in order for SCF to actually do something. Note that the determinant you get depends only on the linear span of the occupied orbitals: if you restricted your solution space to the nOcc initial guess orbitals, you would have exactly zero variational freedom.
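
A quick numerical illustration of the linear-span point (the matrix sizes and the random rotation are arbitrary, purely for demonstration):

[code]
# The density matrix (and hence the Slater determinant and the Fock operator
# built from it) is invariant under any unitary mixing of the occupied
# orbitals among themselves, so a space containing only the nOcc guess
# orbitals leaves SCF nothing to vary.
import numpy as np

n_basis, n_occ = 6, 3
C_occ = np.linalg.qr(np.random.randn(n_basis, n_occ))[0]  # orthonormal guess orbitals
U = np.linalg.qr(np.random.randn(n_occ, n_occ))[0]        # random orthogonal mixing
D_before = C_occ @ C_occ.T
D_after = (C_occ @ U) @ (C_occ @ U).T
print(np.allclose(D_before, D_after))                     # True: nothing changed
[/code]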

The most common way is to solve the SCF problem within the linear span of some "LCAO"[1] basis (or in a symmetrically orthogonalized version of the LCAO basis, with possibly some nearly linearly dependent vectors deleted). But you can also do HF in real space (as was originally done for atomic HF programs), with finite-element methods, or similar approaches. In all those cases you basically solve the SCF equation in the full space that the effective basis allows, and do not restrict the dimensionality to an initial guess.

[1] "LCAO" not really a great term. Apart from ANO (atomic natural orbital) basis sets, in almost all other basis sets only few functions correspond to actual AOs. There are also popular LCAO basis sets (segmented sets) where not even a single basis function is an AO.
 
  • #7
cgk said:
[...] If you have nClos doubly occupied orbitals, you doubly occupy the nClos eigenvectors with the lowest energies ("aufbau principle"). [...]

Just to make sure I got you right, what one does in principle is the following:

1st step: guess N orbitals, build from these the first Fock operator.
2nd step: get new orbitals. Choose from them the N orbitals that correspond to the lowest energies (eigenvalues of the Fock equation). Build from these N lowest orbitals the new Fock matrix.
3rd step: get new orbitals. Choose from them the N orbitals that correspond to the lowest energies. Build from these N lowest orbitals the new Fock matrix.
4th step: ...and so on... until it hopefully converges.

Is that correct? (Especially the sentence about choosing the N lowest-energy orbitals.) [Btw, for other readers: the index i on the foregoing Fock operators is wrong, of course; there should be no index on the Fock operator.]
 
  • #8
cgk said:
It doesn't really work like that. If you do a basis expansion (no matter whether LCAO or some other orthogonal basis, as you say), you need more orbitals than the occupied ones in order for SCF to actually do something. Note that the determinant you get depends only on the linear span of the occupied orbitals: if you restricted your solution space to the nOcc initial guess orbitals, you would have exactly zero variational freedom.

Yes, I realized after I shut my computer down that I used N to denote two separate things in my description ... I did not initially intend to use it for both the dimensionality of the guess and the number of electrons, for exactly the reason you mention. Of course you need some empty orbitals if you want anything to change during the SCF calculation.

The most common way is to solve the SCF problem within the linear span of some "LCAO"[1] basis (or in a symmetrically orthogonalized version of the LCAO basis, with possibly some nearly linearly dependent vectors deleted). But you can also do HF in real space (as was originally done for atomic HF programs), with finite-element methods, or similar approaches. In all those cases you basically solve the SCF equation in the full space that the effective basis allows, and do not restrict the dimensionality to an initial guess.

Yes, I understand it is possible to do HF even with non-orthogonal orbitals in the initial guess (for example for excited state calculations), although I am not familiar with the details of such treatments.

Also, the only dimensionality I was referring to is the dimensionality of the Fock-matrix, which is set by the number of orbitals chosen for the initial Slater determinant. In other words, if you choose N orbitals in your initial guess, then you will get N optimized orbitals out of the SCF calculation at the end. That is the only "dimensionality restriction" I was trying to imply by my description.

[1] "LCAO" not really a great term. Apart from ANO (atomic natural orbital) basis sets, in almost all other basis sets only few functions correspond to actual AOs. There are also popular LCAO basis sets (segmented sets) where not even a single basis function is an AO.

Hmmm ... your point is well-taken, but I always took the "atomic" in LCAO as being descriptive of the form of the orbitals (i.e. approximate solutions to some appropriately parameterized H-atom Hamiltonian), rather than as denoting any particular physical significance.
 
  • #9
Derivator said:
Just to make sure I got you right, what one does in principle is the following:

1st step: guess N orbitals, build from these the first Fock operator.
2nd step: get new orbitals. Choose from them the N orbitals that correspond to the lowest energies (eigenvalues of the Fock equation). Build from these N lowest orbitals the new Fock matrix.

That's not quite right .. when you choose N orbitals at the beginning, what you are essentially doing is setting the dimensionality of the Fock matrix. So after each iteration, you end up with N new orbitals, which are the eigenvectors of the Fock matrix. There is no need to choose anything .. you simply plug those orbitals back into the Fock equation to get a new Fock matrix, and iterate to convergence.

What may have confused you before is that I (unintentionally) implied that you only need to choose the occupied orbitals to form the Fock matrix. That is of course incorrect: since the HF procedure is variational, you need additional degrees of freedom in your problem in order to minimize the energy. These additional degrees of freedom are provided by including empty orbitals in the initial guess. What I was trying to say before is that only the occupied orbitals contribute to the ground-state energy of the problem.
 
  • #10
Umm, I still don't get it. SpectraCat, I fear we are talking past each other.

Let me try to explain what my point of view is:

I don't want to introduce a basis set. My question is from a purely theoretical point of view. (That is, I want to get the 'idea' of solving the Hartree-Fock equations in a self-consistent way in principle, without talking about what you do in practice.)

As far as I understand it, the Fock matrix is given by

[itex]f_{ij} = \int \Psi_i f \Psi_j[/itex] where the [itex]\Psi_i[/itex] are the elements of the chosen basis set. That is, the spin orbital is given by [itex]\phi_i = \sum_j c_{ij} \Psi_j[/itex].

The Fock matrix is NOT given by [itex]f_{ij} = \int \phi_i f \phi_j[/itex] where the [itex]\phi_i[/itex] are spin orbitals.

Now I only choose initial spin orbitals [itex]\phi_i[/itex] by guessing. I don't see how to get a Fock matrix just by choosing the spin orbitals. Hence, at this point in the self-consistent iteration of the Fock equations, there is no matrix eigenvalue problem, just a differential equation, possibly yielding an infinite number of solutions.

What I mean when I say that I want to build a new Fock operator is not that I want to build a Fock matrix. I really just want to build the Fock operator itself (it depends on the solution spin orbitals via the Coulomb operator, http://en.wikipedia.org/wiki/Coulomb_operator, and the exchange operator, http://en.wikipedia.org/wiki/Exchange_operator).
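
For reference, acting on an orbital these operators read [itex]\hat J_j(1)\,\phi_i(1) = \left[\int \frac{|\phi_j(2)|^2}{r_{12}}\, d\mathbf{x}_2\right]\phi_i(1)[/itex] and [itex]\hat K_j(1)\,\phi_i(1) = \left[\int \frac{\phi_j^*(2)\,\phi_i(2)}{r_{12}}\, d\mathbf{x}_2\right]\phi_j(1)[/itex], so the Fock operator is completely determined once the occupied orbitals [itex]\phi_j[/itex] are given.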

(Edit: @SpectraCat, reading your post #5 again (https://www.physicsforums.com/showpost.php?p=3403917&postcount=5), I see we were talking about different Fock matrices.)
 
Last edited by a moderator:
  • #11
Derivator said:
Umm, I still don't get it. SpectraCat, I fear we are talking past each other.

Let me try to explain what my point of view is:

I don't want to introduce a basis set.

I understand .. I have not been assuming the existence of a basis set in any of my answers.

My question is from a purely theoretical point of view. (That is, I want to get the 'idea' of solving the Hartree-Fock equations in a self-consistent way in principle, without talking about what you do in reality, i.e., introducing a basis set.)

As far as I understand it, the Fock matrix is given by

[itex]f_{ij} = \int \Psi_i f \Psi_j[/itex] where the [itex]\Psi_i[/itex] are the elements of the chosen basis set. That is, the spin orbital is given by [itex]\phi_i = \sum_j c_{ij} \Psi_j[/itex].

The Fock matrix is NOT given by [itex]f_{ij} = \int \phi_i f \phi_j[/itex] where the [itex]\phi_i[/itex] are spin orbitals.

Actually, a Fock matrix is defined in both cases. The difference is that in the first case, the matrix represents a set of coupled linear equations, while in the second, more general case, the matrix represents a set of coupled integro-differential equations, which is of course MUCH harder to solve. This was the point of my comment about the importance of Roothaan's contribution in a previous post.

Now I only choose initial spin orbitals [itex]\phi_i[/itex] by guessing. I don't see how to get a Fock matrix just by choosing the spin orbitals. Hence, at this point in the self-consistent iteration of the Fock equations, there is no matrix eigenvalue problem, just N differential equations, each possibly yielding an infinite number of solutions.

Ok .. let's back up and define a few things more carefully then.

Why are you choosing more than one initial guess spin-orbital? What conditions are you placing on your initial guess spin-orbitals? How are you relating your initial-guess spin orbitals to the overall electronic wavefunction you are trying to solve?

Answering those questions will hopefully help you to see why there is always a Fock matrix involved, irrespective of whether or not one is using some sort of basis set expansion to represent the spin-orbitals. If not, then it will at least shine a light on where the confusion lies between the two of us, so we can proceed most productively.
 
Last edited:
  • #12
SpectraCat said:
Actually, a Fock matrix is defined in both cases. The difference is that in the first case, the matrix represents a set of coupled linear equations, while in the second, more general case, the matrix represents a set of coupled integro-differential equations, which is of course MUCH harder to solve. This was the point of my comment about the importance of Roothaan's contribution in a previous post.

Ah, I see. This was the point where I got you wrong. Indeed, I never thought about how to do the initial guess. You asked me to think about the conditions that one has to impose on the initial guess. Well, I think the orbitals of the initial guess have to be orthonormal, and it is reasonable to take them from [itex]L^2[/itex]. Of course, the overall electronic wavefunction is given by the Slater determinant built from the final (after SCF iteration) spin orbitals.

Assume that the functions [itex]\{b_i\}[/itex] formally constitute a mathematical basis of [itex]L^2[/itex]. (I call this a 'mathematical basis' to distinguish it from the LCAO basis.)

Now expand the initial guesses in this basis:
[itex]\phi_i = \sum_\nu c_{\nu, i} b_\nu[/itex] where the [itex]c_{\nu,i}[/itex] are the expansion coefficients, which may be chosen arbitrarily, only obeying the orthonormalization constraint.

After inserting this expansion into the Fock equations, multiplying by [itex]b_\mu[/itex], and integrating, one obtains:

[itex]\sum_\nu f_{\mu \nu} c_{\nu i} = \epsilon_i \sum_\nu S_{\mu \nu} c_{\nu i} [/itex] where [itex]i = 1 ... N[/itex] but [itex]\mu[/itex] or [itex] \nu = 1 ... \infty[/itex] since the dimension of [itex]L^2[/itex] is infinite.

This is equivalent to the matrix equation
[itex]F \vec c_i = \epsilon_i S \vec c_i [/itex], where F and S are matrices.

Now, indeed, you have N such matrix equations (since [itex]i = 1 ... N[/itex]), which will give you N spin orbitals for the next iteration step.
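
(Just as an aside: if one truncates this mathematical basis to a finite number of functions, each iteration reduces to a single generalized matrix diagonalization. A minimal sketch, with F and S standing for the truncated Fock and overlap matrices; the 3x3 numbers are arbitrary placeholders, not a real system:)

[code]
# One SCF step once the basis has been truncated: solve F C = S C eps
# and occupy the lowest solutions.
import numpy as np
from scipy.linalg import eigh

def scf_step(F, S, n_occ):
    eps, C = eigh(F, S)              # generalized eigenproblem, eps in ascending order
    return eps, C[:, :n_occ]         # orbital energies + occupied coefficients

# arbitrary 3x3 placeholder matrices, just to show the call:
S = np.eye(3)
F = np.array([[-1.0,  0.2,  0.0],
              [ 0.2,  0.5,  0.1],
              [ 0.0,  0.1,  1.0]])
eps, C_occ = scf_step(F, S, n_occ=1)
print(eps)
[/code]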

Are these your thoughts?

How would you make your initial guess?
 
  • #13
Derivator said:
Ah, I see. This was the point where I got you wrong.


Indeed, I never thought about how to do the initial guess. You asked me to think about the conditions that one has to impose on the initial guess. Well, I think the orbitals of the initial guess have to be orthonormal, and it is reasonable to take them from [itex]L^2[/itex]. Of course, the overall electronic wavefunction is given by the Slater determinant built from the final (after SCF iteration) spin orbitals.

Exactly.

Assume that the functions [itex]\{b_i\}[/itex] formally constitute a mathematical basis of [itex]L^2[/itex]. (I call this a 'mathematical basis' to distinguish it from the LCAO basis.)
Right .. that was exactly the way I was using the term "basis" in my earlier posts ...

Now expand the initial guesses in this basis:
[itex]\phi_i = \sum_\nu c_{\nu, i} b_\nu[/itex] where the [itex]c_{\nu,i}[/itex] are the expansion coefficients, which may be chosen arbitrarily, only obeying the orthonormalization constraint.

After inserting this expansion into the Fock equations, multiplying by [itex]b_\mu[/itex], and integrating, one obtains:

[itex]\sum_\nu f_{\mu \nu} c_{\nu i} = \epsilon_i \sum_\nu S_{\mu \nu} c_{\nu i} [/itex] where [itex]i = 1 ... N[/itex] but [itex]\mu[/itex] or [itex] \nu = 1 ... \infty[/itex] since the dimension of [itex]L^2[/itex] is infinite.

This is equivalent to the matrix equation
[itex]F \vec c_i = \epsilon_i S \vec c_i [/itex], where F and S are matrices.

Now, indeed, you have N such matrix equations (since [itex]i = 1 ... N[/itex]), which will give you N spin orbitals for the next iteration step.

Are these your thoughts?

How would you make your initial guess?

Yes, you've got it now. That is why I have been talking about Fock matrices from the start .. they are general features of the HF approach, and that is why your initial guess orbitals set the dimensionality of the Fock matrix that you will be using in the SCF iterations.

So, does this also answer your initial question, or are there still points that are unclear?
 
  • #14
I don't quite get it. I don't see a need to introduce any basis if one actually works with the integro-differential equations. Maybe it is best to think about it in terms of starting with an initial guess for the Fock operator instead of the orbitals.

Then we do this:
1) Guess an approximate Fock operator f_1 (first iteration). Diagonalize f_1 (i.e., solve f_1 |phi_i> = eps_i |phi_i>). This will have an infinity of solutions. Of those you take the nOcc ones with the lowest eigenvalues.

2) Use the just-determined occupied eigenvectors to form a new approximate Fock operator f_2. Again solve f_2 |phi_i> = eps_i |phi_i>. This will again give you an infinity of solutions, of which you take the nOcc lowest ones to form the next Fock operator.

3) Use the just-determined occupied eigenvectors to form a new approximate Fock operator f_3. Again solve f_3 |phi_i> = eps_i |phi_i>. This will again give you an infinity of solutions, of which you take the nOcc lowest ones to form the next Fock operator.

[...]
x) repeat until the Fock operator does not change anymore.

That's my understanding of what Derivator is looking for.
 
  • #15
cgk said:
Then we do this:
1) Guess an approximate Fock operator f_1 (first iteration). Diagonalize f_1 (i.e., solve f_1 |phi_i> = eps_i |phi_i>). This will have an infinity of solutions. Of those you take the nOcc ones with the lowest eigenvalues.

[...]

This procedure sounds reasonable; I understand it.

(I also gave some further thoughts in my post #12: https://www.physicsforums.com/showpost.php?p=3405060&postcount=12
Although you only have N equations [itex]F \vec c_i = \epsilon_i S \vec c_i [/itex], each of these equations possibly has an infinite number of eigenvectors, since the matrices have infinite height and width. So in this approach, when choosing a mathematical basis, you also have the problem of choosing N eigenvectors.)
 
Last edited by a moderator:
  • #16
cgk said:
I don't quite get it. I don't see a need to introduce any basis if one actually works with the integro-differential equations. Maybe it is best to think about it in terms of starting with an initial guess for the Fock operator instead of the orbitals.

Then we do this:
1) Guess an approximate Fock operator f_1 (first iteration). Diagonalize f_1 (i.e., solve f_1 |phi_i> = eps_i |phi_i>). This will have an infinity of solutions. Of those you take the nOcc ones with the lowest eigenvalues.

Again, it all comes down to the form of the guess. How are you constructing your initial guess? The form of the Fock operator involves a discrete sum of two-electron integrals. In order to form the Fock operator for a given case, you need to 1) decide how many terms to include in the sum (i.e. how many orbitals are in your basis), and 2) compute all of the two electron integrals (i.e. construct the Fock matrix). I cannot see any way around those two steps, and that is the procedure that I have been describing all along.

So, while it is correct that there are in principle an infinite number of solutions to the Fock equation, the very first step in the process of an HF calculation (i.e. setting up your initial guess) involves selecting a finite subset of those orbitals. Once you have selected that finite subset and formed the Fock matrix, you have restricted the dimensionality of the problem to the number of orbitals you included in your initial guess. Therefore there are not "an infinity of solutions" at the end of each iteration, there are only N solutions, where N is the dimensionality of the initial guess.

Am I missing something here? Is there some other way to handle the choice of initial guess and the construction of the Fock matrix?
 
  • #17
SpectraCat said:
Am I missing something here?

Well, I think you can't fix the dimensionality by choosing an infinite basis, as I did in post #12.

See:
Derivator said:
[...]
(I also gave some further thoughts in my post #12: https://www.physicsforums.com/showpost.php?p=3405060&postcount=12
Although you only have N equations [itex]F \vec c_i = \epsilon_i S \vec c_i [/itex], each of these equations possibly has an infinite number of eigenvectors, since the matrices have infinite height and width. So in this approach, when choosing a mathematical basis, you also have the problem of choosing N eigenvectors.)

I think I was also wrong when I said that you have N Hartree-Fock equations. Actually, you have only one equation, since the Fock operator is the same (within one and the same iteration) for each orbital. Nevertheless, due to the infinite dimensions of the matrices, you will get more than N orbitals (possibly infinitely many; definitely infinitely many once the iteration has converged and the Fock matrix has become Hermitian).
 
Last edited by a moderator:
  • #18
Derivator said:
Well, I think you can't fix the dimensionality by choosing an infinite basis, as I did in post #12.

Ok .. so choosing an infinite basis is equivalent to forming an infinite Fock matrix. So it is something we can only speak of in the abstract. However, in the abstract case of an infinite Fock-matrix, everything I have posted previously still pertains ... it's just not practical. If you want to actually DO a calculation, your initial guess must include a finite number of terms.

Also, to clarify ... I have used the term "dimensionality of the Fock-matrix" in previous examples where I really meant "size of the 2D Fock matrix". The dimensionality of the *basis*, i.e. the choice to include N orbitals in the initial guess, sets the *size* of the Fock-matrix to be NxN. Sorry if that wasn't clear before.
Derivator said:
I think I was also wrong when I said that you have N Hartree-Fock equations. Actually, you have only one equation, since the Fock operator is the same (within one and the same iteration) for each orbital. Nevertheless, due to the infinite dimensions of the matrices, you will get more than N orbitals (possibly infinitely many; definitely infinitely many once the iteration has converged and the Fock matrix has become Hermitian).

The Fock operator is NOT the same for each orbital .. remember that it involves explicit two-electron integrals. I suggest that you write out the Fock matrix for an excited state of helium (configuration 1s^1 2s^1) to see why this is the case. Furthermore, the Fock matrix is ALWAYS Hermitian .. it does not "become" Hermitian. This is of course because the Fock operator is Hermitian.
 
Last edited:
  • #19
SpectraCat said:
[...]
The Fock operator is NOT the same for each orbital .. remember that it involves explicit two-electron integrals. [...]

Hmm, but in the Fock operator you sum over all occupied orbitals:

[itex]\hat F[\{\phi_j\}](1) = \hat H^{\text{core}}(1)+\sum_{j\in\text{occ}}[2\hat J_j(1)-\hat K_j(1)][/itex]

So why should different Fock operators act on orbitals i and j?
 
  • #20
Derivator said:
Hmm, but in the Fock operator you sum over all occupied orbitals:

[itex]\hat F[\{\phi_j\}](1) = \hat H^{\text{core}}(1)+\sum_{j\in\text{occ}}[2\hat J_j(1)-\hat K_j(1)][/itex]

So why should different Fock operators act on orbitals i and j?

Yes, I wrote that too quickly .. I did not mean that the Fock operator changes for each orbital, but rather that the equation you get when you apply it to each orbital is different. So that's why you have N HF equations (for an N-orbital initial guess), instead of just one (which is what you wrote in your post). That is why you need to set up and diagonalize the Fock matrix in the first place.

It's worth noting (if it isn't clear already) that the Fock operator *does* change between the iterations, since the orbital basis is changing at each iteration step.
 
Last edited:
  • #21
Ah, you mean that one applies the Fock operator built from the old orbitals to all N old orbitals, and obtains from this map N new orbitals. But this way you don't get any eigenvalues... (?)

I always understood the Hartree-Fock equation in each iteration as a 'true differential equation', that is, you have to solve it (in theory) like any other differential equation (in a bad case by a guessed, God-given ansatz).
 
Last edited:
  • #22
Derivator said:
Ah, you mean that one applies the Fock operator built from the old orbitals to all N old orbitals, and obtains from this map N new orbitals. But this way you don't get any eigenvalues... (?)

Of course you get eigenvalues ... the Fock operator is a MATRIX, and you get the eigenvalues and eigenvectors by diagonalizing that matrix, as I have been saying from the beginning. You seem to be resisting this terminology, but I don't understand why.

I always understood the Hartree-Fock equation in each iteration as a 'true differential equation', that is, you have to solve it (in theory) like any other differential equation (in a bad case by a guessed, God-given ansatz).

What you are missing is that the Hartree-Fock equation is not one differential equation. It is a set of N coupled differential equations, where N is the number of orbitals you are solving for. This sort of system lends itself to solution by matrix methods. See http://en.wikipedia.org/wiki/Matrix_differential_equation for example.
 
  • #23
SpectraCat said:
What you are missing is that the Hartree-Fock equation is not one differential equation. It is a set of N coupled differential equations, where N is the number of orbitals you are solving for. This sort of system lends itself to solution by matrix methods. See http://en.wikipedia.org/wiki/Matrix_differential_equation for example.

Is that not only true before diagonalizing the Lagrange multiplier matrix? See formula (101) at http://www.ccc.uga.edu/lec_top/hf/node6.html for example. By diagonalizing the Lagrange multiplier matrix, you uncouple the equations and get formula (117) at http://www.ccc.uga.edu/lec_top/hf/node6.html .

Formula (117) looks like [itex]f \phi_i = \epsilon_i \phi_i[/itex], but these are not N equations. This is only one equation, which is fulfilled by all i = 1, ..., N occupied orbitals [itex]\phi_i[/itex]. Or am I overlooking something substantial?
 
Last edited by a moderator:
  • #24
Ok .. I think I need to back up here .. it has been a long time since I studied this stuff in detail, and your questions (which are good ones) have made me realize that my understanding of this has become rather tightly coupled to the Roothaan equation-based solution methods that I use in my computational work. For example, what I have been calling the Fock matrix is not really the Fock matrix as it is conventionally defined, because that Fock matrix is only really defined for the Roothaan equations, where it is the matrix representation of the Fock operator in the basis set used to expand the orbitals.

However, having said that, I think the core of the explanation I am giving you is still correct. To summarize, you are correct that the eigenvalue equation for the Fock operator, [itex]\hat{f_i}|\chi_b>=\epsilon_b|\chi_b>[/itex], in principle has an infinite number of solutions. In fact, that equation can ONLY yield true eigenvalues and eigenvectors in the full infinite basis of spin orbitals. As soon as you choose a subset of orbitals (as we are forced to do in any practical treatment), you are resigned to approximate solutions. However, let me quote from Szabo and Ostlund's Modern Quantum Chemistry, p. 115:
"While [itex]\hat{f_i}|\chi_b>=\epsilon_b|\chi_b>[/itex] is written as a linear eigenvalue equation, it might best be described as a pseudo-eigenvalue equation since the Fock operator has a functional dependence, through the coulomb and exchange operators, on the solutions [itex]\{\chi_b\}[/itex] of the pseudo-eigenvalue equation. Thus the Hartree-Fock equations are really nonlinear equations and will need to be solved by iterative procedures."

The point about the functional dependence on the solutions is what I have been trying to communicate in my posts so far. Even in the fully diagonal representation you referred to in your last post, the Fock equation for a given orbital still has a functional dependence on all the other orbitals in your Slater determinant through the Coulomb and exchange operators. Furthermore, the Fock equation as written above (i.e. a pseudo-eigenvalue equation) ONLY holds for the fully optimized orbital basis .. in other words, it only holds for the basis you would get at the END of a Hartree-Fock calculation, and even then it only holds approximately. So you cannot expect that simple relationship to hold when you start from some arbitrary initial guess of a Slater determinant formed from a set of N orbitals.

So, going back to your initial question, we can answer it in (at least) two ways:

1) If we assume that we are starting from the infinite set of optimized orbitals representing the eigenvectors of the Fock equation, then we simply choose the N lowest-energy orbitals as being occupied (for an N-electron system) and contributing to the energy of the wavefunction.

2) If we assume that we are starting from an initial guess, where we have restricted the dimensionality of the problem by choosing a Slater determinant from a set of N orthonormal orbitals, then in doing so we restrict the dimensionality of our solutions at all future steps in the Hartree-Fock optimization process. (The reason for this is the functional dependence emphasized in the quote from Szabo and Ostlund above.) Thus the question of "which orbitals to choose" never arises again after the initial guess has been chosen.
 
Last edited:
  • #25
SpectraCat, I want to thank you very much for your detailed (and probably quite time consuming) help. Thanks! :-)
 

1. What is the Hartree-Fock method?

The Hartree-Fock method is a computational approach used in quantum mechanics to calculate the electronic structure of atoms and molecules. It is based on the mean-field idea of treating each electron as moving independently in the average field of the other electrons, rather than treating the electron-electron interactions explicitly.

2. How does the Hartree-Fock method work?

The Hartree-Fock method involves solving a set of equations known as the Hartree-Fock equations, which take into account the interactions between electrons and the overall potential energy of the system. These equations are solved iteratively until a self-consistent solution is reached.

3. What is the role of orbitals in the Hartree-Fock method?

In the Hartree-Fock method, orbitals are one-electron wavefunctions whose squared magnitude gives the probability distribution of an electron in the atom or molecule. These orbitals are chosen based on their energy and spatial properties, and they are used to construct the electronic wavefunction of the system.

4. Which orbitals have to be chosen in the Hartree-Fock method?

In the Hartree-Fock method, the choice of orbitals depends on the type of system being studied. Generally, a set of atomic orbitals or Gaussian-type orbitals is used as a basis set to represent the electronic wavefunction. The specific orbitals chosen will affect the accuracy and efficiency of the calculation.

5. What are the limitations of the Hartree-Fock method?

The Hartree-Fock method treats each electron in the mean field of the others and does not take into account the correlation between electrons. This can lead to inaccuracies in predicting properties of molecules with strong electron-electron interactions. Additionally, the method does not fully account for relativistic effects, making it less accurate for heavy elements.
