I'm curious to know why chemists like to use Gaussian basis sets for ab initio (e.g. DFT) calculations. I understand that the molecules of interest to chemists are non-periodic, so a plane-wave basis is not useful, but can't they use some other real-space basis, like a grid? What makes Gaussian orbitals so special? Also, mathematically, do the Gaussian orbitals of different atoms, combined together, form a complete basis set?
DrDu said the main reason: there are lots of integrals. Additionally, contracted Gauss-type orbitals (CGTOs) are very efficient basis sets for representing occupied molecular orbitals; nothing more efficient is known. A few dozen (or fewer) CGTO basis functions per atom are sufficient to get relative energies (say, of two molecular conformations) converged to below 1 kJ/mol, and you can easily put in more basis functions to approach the basis set limit as closely as you wish. Note that 1 kJ/mol is already far more accurate than the intrinsic accuracy of DFT methods. Since you need so few basis functions, you can also use dense linear algebra routines for most of the iterative solution procedure. Of course, people regularly come up with shiny new FEM- or real-space-grid-based HF/DFT programs, only to realize that GTO-based programs are much, much more efficient after all. In principle the GTOs form a complete basis (even GTOs on a *single* atom would do!), but in practice you only really want to represent the space that occupied molecular orbitals are likely to span. They are of course not good for representing continuum states and the like.
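To illustrate the dense-linear-algebra point above: with only a handful of basis functions per atom, the Roothaan-type generalized eigenproblem FC = SCε can be solved directly with a dense LAPACK routine. A minimal sketch, assuming numpy/scipy; the "Fock" and overlap matrices below are random stand-ins, not real integrals.

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric "Fock" matrix and positive-definite "overlap" matrix for a
# hypothetical 4-function basis (random stand-ins, not computed integrals).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = (A + A.T) / 2.0
B = rng.standard_normal((4, 4))
S = B @ B.T + 4.0 * np.eye(4)

# Because the basis is small and non-orthogonal, the generalized eigenproblem
# F C = S C eps is solved with one dense call; no iterative sparse solver.
eps, C = eigh(F, S)

# The eigenvectors come out S-orthonormal: C^T S C = I.
print(np.allclose(C.T @ S @ C, np.eye(4)))   # True
```

The same dense approach carries through the whole SCF iteration when the basis stays this small.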
Hi DrDu, I'm still not clear. In DFT the PDE is non-linear. Now, what is the integration you are talking about? Recasting the PDE in functional form? If so, can we solve it analytically for a functional of that sort? I can understand your argument for computing, say, the Hartree potential from the Poisson-equation reformulation. But for DFT? Hi cgk, thank you very much for the information. I now understand why chemists love GTOs. However, are you sure about completeness? I agree that the Slater orbitals/GTOs of a single atom form a complete basis for the wave function of that electron, but for molecules we introduce GTOs on different atoms, and I'm not sure the argument still holds. Can you refer me to some books/papers where it is proved that Slater orbitals/GTOs form a complete basis for an N-electron wave function?
The integrals are integrals over the Coulomb interaction, the external potential, the kinetic energy operator, and so on. They occur in the second-quantized expression for the Hamiltonian: [tex]H = \sum_{rs} \langle r|t+v|s\rangle\, c^r c_s + \frac{1}{2}\sum_{rstu} \langle rs|1/r_{12}|tu\rangle\, c^r c^s c_u c_t[/tex] where t/v are the kinetic energy/external potential, and the c's are creation/destruction operators (defined with respect to the vacuum). The r, s, t, u here label the orthogonal molecular orbitals, which need to be expressed in terms of some basis set. The Coulomb integrals in particular can be very numerous, so basis sets are required in which they can be evaluated quickly. For finite elements of any kind this is not the case. Iirc there is a proof of the completeness of the GTO basis sets in "Molecular Electronic-Structure Theory" by Helgaker and coworkers. It is also otherwise a very good textbook on this kind of material, if you are interested in the topic.
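A minimal sketch of why these integrals are cheap for Gaussians: by the Gaussian product theorem, the product of two s-type primitives on different centers is again a single Gaussian, so their overlap has a closed form. The snippet checks the 1D closed-form overlap against numerical quadrature; the exponents and centers are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import quad

# Unnormalized 1D s-type Gaussians g(x; a, A) = exp(-a (x - A)^2).
# Gaussian product theorem: g(x;a,A) * g(x;b,B) is ONE Gaussian at the
# exponent-weighted center, so the overlap integral is closed-form:
#   \int g(x;a,A) g(x;b,B) dx = sqrt(pi/(a+b)) * exp(-a*b/(a+b) * (A-B)^2)
a, A = 1.3, 0.0   # illustrative exponent and center
b, B = 0.7, 1.5   # illustrative exponent and center

analytic = np.sqrt(np.pi / (a + b)) * np.exp(-a * b / (a + b) * (A - B) ** 2)
numeric, _ = quad(lambda x: np.exp(-a * (x - A) ** 2) * np.exp(-b * (x - B) ** 2),
                  -np.inf, np.inf)

print(abs(analytic - numeric) < 1e-8)   # True: closed form matches quadrature
```

The 3D overlap is just the product of three such 1D factors, which is exactly why the closed-form evaluation scales so well.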
This is a good question (perhaps better than you realize). You really have to view it as two questions: 1) why GTOs are used in wave function methods, and 2) why GTOs are used in DFT methods.

For 1), DrDu's answer is right: it greatly simplifies the integrals. (See for instance Appendix A of Szabo and Ostlund, where they derive the analytical expressions for all the Hartree-Fock integrals over s-type orbitals.) Showing that they form a complete set is fairly simple. Hand-wavingly, you could just remember that the spherical harmonics (which are used for the angular part) form a complete set, and also recall that the eigenfunctions of the harmonic oscillator involve Gaussians and form a complete set. (Or arrive at the same by solving the 3D QHO.) Of course, in reality a basis set is truncated and does not form a complete set, but there are composite schemes such as G2 and extrapolation methods for estimating the complete-basis-set (CBS) limit. It's not a major issue, though; the larger 'ordinary' basis sets are usually quite enough to bring (relative) basis set errors down within the error of the method. Anyway, your basis is typically not orthonormal, so you have to enforce orthonormality through some orthogonalization procedure (e.g. canonical, symmetric, Gram-Schmidt).

I'm not sure what cgk means by "nothing more efficient is known". I agree nothing more computationally efficient is known, but GTOs are certainly not more efficient 'mathematically', i.e. in terms of the number of functions required for a given accuracy. For instance, the Slater-type functions that Gaussians replaced were more accurate in that respect. (E.g. for an HF calculation on helium, a single STO is the basis set limit!) It's just that the increased number of basis functions with Gaussians was more than compensated for by the faster evaluation of the integrals. From the number-of-functions standpoint, Gaussians are a poor choice: they are smooth at r = 0 (no cusp), and they decay as e^{-αr²} rather than with the exponential e^{-ζr} tail of the true wave function.
So they fail to satisfy the few exact properties we know about the true wave function/density. This means it will always take a relatively large number of Gaussians for a good approximation, especially because of the nuclear cusp (it takes many smooth functions to approximate a derivative discontinuity).

2) Now, as for DFT, the situation is rather different. The function you are approximating (the density) is of course very similar to the true wave function that the basis sets were created to approximate, so in that respect they're a good choice. Since existing ab initio QC programs already had HF/SCF methods implemented, as well as the basis sets, they had much of the code required to do DFT that way once it started getting popular, so to begin with it was just the most convenient choice. But the rationale about integrals does not hold as well anymore. It's just as good for the Kohn-Sham one- and two-electron integrals as it is for Hartree-Fock, but you also need to evaluate the density functional, which means integrals of the form [tex]\int f(\rho_\alpha, \rho_\beta)\, dV[/tex] and [tex]\int f(\rho_\alpha, \rho_\beta, \nabla\rho_\alpha, \nabla\rho_\beta)\, dV[/tex] for LSDA- and GGA-type functionals, respectively, where the functional f often depends on [tex]\rho^{\frac{4}{3}}[/tex] and similar fractional powers. These integrals can't be calculated analytically at all, so all DFT codes (to the best of my knowledge) have to do some amount of numerical integration. At the same time you don't (always) have to perform the exchange integrals used in Hartree-Fock, which allows for different approaches to the Coulomb integral. So the rationale for using GTOs in wave function methods doesn't really hold for DFT. Which isn't to say GTOs don't work - they work very well, better than in wave function methods even, since DFT is more resistant to basis set errors. But the field is more open for trying other approaches.
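To make the numerical-integration point concrete, here is a toy quadrature of the [tex]\rho^{\frac{4}{3}}[/tex] term (Slater exchange) for a hydrogen-like test density. The density and the radial-grid setup are illustrative only, not taken from any particular DFT code.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical test density: hydrogen 1s, rho(r) = exp(-2r)/pi.
rho = lambda r: np.exp(-2.0 * r) / np.pi
Cx = (3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)   # Slater exchange constant

# E_x = -Cx * \int rho^{4/3} dV, evaluated numerically with dV = 4 pi r^2 dr,
# just as a DFT code would evaluate it on a (far more sophisticated) grid.
integral, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho(r) ** (4.0 / 3.0),
                   0.0, np.inf)
E_x = -Cx * integral

# For this particular density the integral happens to have a closed form,
# 27 / (64 * pi^(1/3)), which lets us check the quadrature:
print(abs(integral - 27.0 / (64.0 * np.pi ** (1.0 / 3.0))) < 1e-8)   # True
```

For a general molecular density expanded in Gaussians, no such closed form exists, which is exactly why the grid step is unavoidable for most functionals.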
The immediately obvious idea would be to bring back STOs, which was done in the Amsterdam Density Functional (ADF) program. Another is to go in the opposite direction and move to fully numerical, FEM-type basis sets, which has been done with e.g. DMol. I haven't looked at any recent benchmarks, so I can't comment on the success of these approaches, but at the very least they're competitive. Bearing in mind that molecular DFT is relatively young (becoming practical circa 1990, I'd say) and that a lot more research effort has been spent on GTO basis sets, I wouldn't categorically state that GTOs will remain the best choice for DFT. On the other hand, absent any remarkable new developments, I think they will remain the standard for wave function methods for the foreseeable future.
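The cusp argument above can be made concrete with a small numerical experiment: fit the 1s Slater function e^{-r} with an increasing number of primitive Gaussians and watch how slowly the error shrinks. The even-tempered exponents below are made up for illustration, not an optimized published set.

```python
import numpy as np

# Grid and target: the 1s Slater function e^{-r} on [0, 6].
r = np.linspace(0.0, 6.0, 2000)
sto = np.exp(-r)

def fit_error(n):
    """Max fit error of an n-Gaussian least-squares fit to e^{-r}."""
    # Even-tempered exponents (illustrative spacing, not an optimized set).
    alphas = 0.1 * 3.0 ** np.arange(n)
    basis = np.exp(-np.outer(alphas, r ** 2))       # n x grid primitive values
    coef, *_ = np.linalg.lstsq(basis.T, sto, rcond=None)
    return np.max(np.abs(basis.T @ coef - sto))

for n in (2, 4, 6):
    print(n, fit_error(n))   # error shrinks with n, but only gradually
```

Because every Gaussian has zero slope at r = 0 while e^{-r} has slope -1 there, no finite fit reproduces the cusp exactly; the residual near the origin is what keeps contraction lengths from being tiny.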
STOs are not more efficient than contracted GTOs: certainly not in terms of computational efficiency, but also not in terms of accuracy. A single contracted GTO can approach the HF basis set limit of any atom as closely as you wish, and this works not only for hydrogen and helium, but for every single element.

As for STOs needing fewer functions: I highly doubt that. After all, you don't want to calculate atoms, but molecules, and you need just as many polarization STOs as you would need polarization GTOs in a basis set. And as mentioned, the atomic part of the basis set is easily described in terms of /contracted/ GTOs. While the exact wave function has a cusp at the nuclear positions, this is of minor practical consequence: the cusp itself does not contribute to energies or other properties, due to its zero volume element, and the shape near the nuclei is easily represented by a few high-exponent Gaussians set into a fixed linear combination. The only kind of calculation where the concrete behavior at the nucleus really matters is the calculation of NMR shieldings, and that, of course, comes with many additional problems anyway.

As for the fast Gaussian decay: that is the property which allows one to do screening approximations in large molecules. It is not a bad thing at all.

The DFT and HF molecular orbitals are somewhat different, however, especially in the core region. That is why HF basis sets like cc-pVnZ sometimes show sub-par performance for DFT calculations when compared to actual DFT basis sets like the Turbomole def2 sets. Also, the [tex]\rho^{\frac{4}{3}}[/tex] term can actually be integrated analytically for Gaussians, and there is at least one DFT implementation which does this (see http://dx.doi.org/10.1016/j.cplett.2006.02.100 ). Of course that way you can't go beyond Slater exchange, and the results will be very inaccurate. If you are interested in energies, pure GGA functionals without HF exchange will often not be accurate enough.
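A sketch of the screening point: every product of two s primitives on centers A and B carries the prefactor exp(-ab/(a+b)|A-B|²), so pairs of distant centers can be discarded before any integral is computed. The exponents and the line of centers below are made-up illustrative values.

```python
import numpy as np

# Hypothetical setup: 50 s primitives with unit exponents on a line of centers.
a = b = 1.0
centers = np.arange(0.0, 50.0, 1.0)

# Pair prefactor exp(-a*b/(a+b) * |A-B|^2) from the Gaussian product theorem.
A, B = np.meshgrid(centers, centers)
prefactor = np.exp(-a * b / (a + b) * (A - B) ** 2)

# Pairs surviving a 1e-10 cutoff: only near-neighbor pairs remain, so the
# number of significant pairs grows linearly, not quadratically, with size.
kept = np.count_nonzero(prefactor > 1e-10)
print(kept, centers.size ** 2)   # kept is far below the naive 2500 pairs
```

Real codes use sharper bounds (e.g. Cauchy-Schwarz estimates on the integrals), but the distance-driven decay that makes screening possible is exactly this Gaussian prefactor.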
I don't see your point. Of course a single contracted GTO can approach the basis set limit, since a single contracted GTO can contain any number of primitive Gaussians. But the function that contracted Gaussian approximates is often an STO-like function. Hmm, sounds like the same miscommunication: I said 'number of functions', not 'number of orbitals'. I don't disagree that you'd need the same number of orbitals; I'm just saying you need more primitive Gaussian functions than you do Slater-type functions for a given accuracy. I don't see how that's a controversial position; it says so in all the textbooks I've seen that mention the matter, including the Helgaker book you referred to. And I've personally heard some fairly notable quantum chemists echo essentially the same opinion: that Gaussians aren't really a good choice from the narrow, mathematical, practicality-be-damned perspective. (Okay, I admit it: I probably didn't arrive at the opinion completely independently of having heard theirs. But the rationale, which I gave, is pretty convincing to me at least. If they don't satisfy the known properties of the function they're approximating, why would they converge faster than similar functions that do?) I agree, but you can't argue against a viewpoint from the mathematical perspective on the grounds of what's important in practice.
So we agree that a contracted GTO can approximate AOs to any desired degree? Gauss orbitals are, however, not actually constructed to reproduce STOs. The only exceptions are the STO-3G/STO-6G basis sets, and the elements hydrogen and helium, where the AOs actually are Slater functions. But if you use STO-3G in an actual calculation of, say, reaction energies, you'll easily see that with this basis set it is quite possible to get HF(!!) errors of 500% in the wrong direction. Real basis sets are designed to reproduce atomic AOs. This is most easily seen for the cc-pVnZ basis sets. For the first- and second-row atoms these consist of one contracted function for each occupied shell of the atom (say, for N it would have one contracted 1s, 2s, and 2p function each) and a set of primitive polarization functions which are supposed to describe the distortion of the orbitals by the molecular surroundings. That means: if you actually do an ROHF calculation on the N atom, you will get exactly the same HF number with the full cc-pVTZ basis set as with cc-pVTZ stripped of all its functions except the three contracted ones (try it!). You will also get exactly the same number if you uncontract the basis set (i.e., have lots of primitive basis functions instead of the three contracted ones), because the contraction coefficients are actually determined as AO coefficients of spherically averaged Hartree-Fock orbitals. Of course, other basis sets are constructed in somewhat different ways (e.g., the ANO-RCC sets contain no free primitives, but are based on natural orbitals of atoms, cations/anions, and dimers; the def2 basis sets have exponents optimized in atomic Hartree-Fock and MP2 calculations and rather involved segmented contraction patterns), but in the end all of them need to be able to represent the atomic parts of the orbitals. Maybe that is our misunderstanding: I meant the number of /basis functions/, not the number of primitives.
In correlated calculations it is only the former number which counts, but of course in DFT the number of primitives is also somewhat relevant. You don't need more GTO basis functions than STO basis functions to reach the same accuracy for a molecule, but I agree that for many elements you might need more primitive Gaussians than primitive STOs to form the contracted basis functions.
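The contraction idea discussed above can be sketched in a few lines: one contracted basis function is a fixed linear combination of primitives, and once normalized it counts as a single function no matter how many primitives sit underneath. The exponents and coefficients below are illustrative values, not a published contraction.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative (NOT published) exponents and contraction coefficients.
alphas = np.array([13.0, 2.0, 0.4])
d = np.array([0.15, 0.55, 0.45])
norms = (2.0 * alphas / np.pi) ** 0.75   # 3D normalization of each s primitive

def chi(r):
    # One contracted s function: a FIXED sum of three normalized primitives.
    return np.sum(d * norms * np.exp(-alphas * r ** 2))

# Normalize the contracted function as a whole, then verify <chi|chi> = 1.
norm2, _ = quad(lambda r: 4.0 * np.pi * r**2 * chi(r) ** 2, 0.0, np.inf)
check, _ = quad(lambda r: 4.0 * np.pi * r**2 * (chi(r) / np.sqrt(norm2)) ** 2,
                0.0, np.inf)
print(abs(check - 1.0) < 1e-6)   # True: one normalized basis function
```

In a correlated calculation only this one function enters the orbital space; the three primitives matter only for how cheaply its integrals and grid values can be evaluated.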
Absolutely. And yes, I expressed myself badly: only the original Pople STO-nG basis sets were explicitly fitted to STOs, it's true. Once you go beyond a minimal basis, it doesn't really make sense to do so, either. For a correlated system or a molecule in particular, a single contracted Gaussian basis function can certainly outperform a single STO basis function. But this isn't because Gaussians are such great approximations; it's because of the variational flexibility you get from adding more functions. As the quote usually attributed to von Neumann goes: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." A double-zeta STO basis can't really be compared on equal footing to a double-zeta or split-valence GTO basis. Yes, I know exactly where you were coming from. I was also thinking along the lines that if you do a purely numerical integration (as my first post touched on), your speed is limited by the number of function evaluations, which would be lower if you used STOs, since a single STO takes about the same time to evaluate as a single primitive. After a bit more thought, I'm not sure that rationale holds, though: if you're clever, and memory allows, you don't really need to evaluate a contracted Gaussian more than once for each grid point. Are there even contracted/split-valence STO sets? As far as I know, your only choice is (or at least was, back around 1970) to go with double-zeta and so on. Hence, not only do you have the integral problem, but also scaling problems. (Although I can't say I know much about what's been done to develop STOs since around the time everyone stopped using them.) Seems we got that misunderstanding straightened out, at least.
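The "evaluate a contracted Gaussian only once per grid point" idea above can be sketched like this: tabulate the contracted function on the quadrature grid once, then reuse the cached table for every subsequent integral. Exponents, coefficients, and the crude grid are illustrative choices only.

```python
import numpy as np

# Illustrative contraction (not a published set) and a crude radial grid.
alphas = np.array([13.0, 2.0, 0.4])
d = np.array([0.15, 0.55, 0.45])
grid = np.linspace(0.0, 8.0, 4001)
w = np.full_like(grid, grid[1] - grid[0])     # simple Riemann-sum weights

# Tabulate the contracted function ONCE; all three primitives are summed here
# and never touched again during quadrature.
chi = (d[:, None] * np.exp(-np.outer(alphas, grid ** 2))).sum(axis=0)

# Two different integrals reuse the same cached values:
norm2 = np.sum(w * 4.0 * np.pi * grid**2 * chi**2)           # <chi|chi>
r_exp = np.sum(w * 4.0 * np.pi * grid**3 * chi**2) / norm2   # <r> estimate
print(norm2 > 0 and r_exp > 0)   # True
```

With this caching, the per-gridpoint cost of a contracted GTO is the same as that of a single STO, which is why the function-evaluation argument for STOs loses much of its force.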