
It would take a computer bigger than the universe to solve 11 electron

  1. Mar 20, 2013 #1
    Hi,

    I've seen quite a few textbooks and lectures start by saying something like "to solve a quantum system of N particles would take the best computer in the world, to solve a quantum system of (N+a small number) of particles would take a computer bigger than the universe..", where they give specific numbers. An example of this is Wen's book. My question is which SPECIFIC back of the envelope calculations are they using to get these numbers? It's not something as simple as the Hilbert space for spin growing like 2^N or something so what are the specific assumptions that are made when coming up with these numbers? Anyone know?
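    A minimal sketch of the simplest version of such an estimate (my own assumptions, not any particular textbook's): store one complex double (16 bytes) per amplitude of an N-spin state vector, so the memory cost is 16 * 2^N bytes.

```python
# Back-of-envelope sketch (illustrative assumptions, not from any
# specific book): one complex double = 16 bytes per amplitude, and a
# system of N spin-1/2 particles has a 2**N-dimensional Hilbert space.

def state_vector_bytes(n_spins: int) -> int:
    """Memory needed to hold a full 2**N-dimensional state vector."""
    return 16 * 2**n_spins

for n in (10, 30, 50, 300):
    print(f"N = {n:3d} spins -> {state_vector_bytes(n):.3e} bytes")
# N = 30 is ~17 GB (fine on a workstation); N = 50 is ~18 PB;
# by N = 300 the byte count dwarfs the atom count of the universe.
```

    The exponent base and the bytes-per-amplitude change the small numbers but not the conclusion, which is why different books quote slightly different crossover points.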
     
  3. Mar 20, 2013 #2
    Probably in reference to what are called P vs NP problems...
     
  4. Mar 20, 2013 #3
    I'm not asking why quantum systems are not computable on classical computers; I'm asking what set of assumptions goes into a statement like "To compute a system of blah electrons would require 2^blah bits", or what have you. What is the Hilbert space, what are the assumptions? What is the EXACT calculation that produced those EXACT numbers?
     
  5. Mar 20, 2013 #4
    By way of an example here is one of the opening paragraphs from Wen's book:

    "However, in practice, the required computing power is immense. In the 1980s, a workstation with 32 Mbyte RAM could solve a system of eleven interacting electrons. After twenty years the computing power has increased by 100-fold, which allows us to solve a system with merely two more electrons. The computing power required to solve a typical system of 10^23 interacting electrons is beyond the imagination of the human brain. A classical computer made by all of the atoms in our universe would not be powerful enough to handle the problem. Such an impossible computer could only solve the Schrodinger equation for merely about 100"

    Where is he getting the numbers he's quoting? (he doesn't say)
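    Wen doesn't state his assumptions, so the following is a guess at a reconstruction: exact diagonalization of n electrons in K spin-orbitals needs one complex double (16 bytes) per Slater determinant, i.e. 16 * C(K, n) bytes. The orbital counts (24 and 31) are fitted by me to make the memory figures come out, not taken from the book.

```python
from math import comb

# Hypothetical reconstruction of Wen's numbers (all assumptions mine):
# a full-CI basis of n electrons in K spin-orbitals has C(K, n) Slater
# determinants, at 16 bytes (one complex double) per amplitude.

def amplitudes_that_fit(ram_bytes: int) -> int:
    return ram_bytes // 16

ram_1980s = 32 * 2**20                 # the 32 Mbyte workstation
print(amplitudes_that_fit(ram_1980s))  # ~2.1 million amplitudes
print(comb(24, 11))                    # ~2.5 million determinants: 11
                                       # electrons in ~24 spin-orbitals

ram_2000s = 100 * ram_1980s            # the "100-fold" increase
print(amplitudes_that_fit(ram_2000s))  # ~2.1e8 amplitudes
print(comb(31, 13))                    # ~2.1e8 determinants: only two
                                       # more electrons fit
```

    The point survives any reasonable choice of basis: because C(K, n) grows combinatorially, a 100-fold resource increase buys only a couple of extra electrons.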
     
  6. Mar 21, 2013 #5

    atyy

    Science Advisor

    Brian Swingle fills in some details in his essay. Basically, the size of the Hilbert space gets very large. He also points out that very good approximations can be obtained if you know which part of Hilbert space to hunt in for your particular application.

    http://fqxi.org/community/forum/topic/1559

    "More generally, the ground state of the Hamiltonian H_N(g1, ..., gk) depends on these k parameters, but the state is a 2^N component vector and hence all 2^N components are determined by just a few numbers! Even if N is merely 100 and k = 2 we are talking about a two dimensional subset of a 2^100 dimensional space! ...

    "So what, after all, are all those complex numbers really telling us? Do we need them to predict the results of physical measurements? If so, we’re in trouble. Ignoring causality and the lack of materials, even if we filled up our entire Hubble volume, the whole visible universe, with our best classical storage device, we could only store the quantum state of a few hundred spins using this huge classical memory. Suddenly the illusory nature of Hilbert space is brought into focus."
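    Swingle's "few hundred spins" figure can be checked with a toy calculation (the numbers here are mine, not his): take roughly 10^80 atoms in the observable universe and, very generously, assume each atom stores one amplitude; then the largest spin system whose full state vector fits is the largest N with 2^N <= 10^80.

```python
# Toy check of the "few hundred spins" claim (illustrative assumptions:
# ~10**80 atoms in the observable universe, one amplitude per atom).

ATOMS = 10**80

n = 0
while 2**(n + 1) <= ATOMS:
    n += 1
print(n)  # largest N whose 2**N amplitudes fit: a few hundred spins
```

    Even granting a million amplitudes per atom only adds ~20 spins, since each extra spin doubles the storage.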
     
    Last edited: Mar 21, 2013
  7. Mar 21, 2013 #6

    cgk

    Science Advisor

    OP, these statements come from arguments which assume that you intend to represent the wave function in an exponential fashion, with N^M numbers, where N is the number of one-particle states and M is the number of electrons. If you then go on and formulate a particularly bad representation of the one-particle space (e.g., a uniform real-space grid with 100 points along each of the 3 dimensions for a molecule), you get those estimates.
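    The "particularly bad representation" above is easy to make concrete (the grid size is the illustrative 100-points-per-axis figure from the post): each electron lives on 100^3 = 10^6 grid points, and an M-electron wave function naively needs (10^6)^M complex amplitudes.

```python
# Sketch of the naive-grid estimate described above (illustrative
# numbers): a 100-point grid along each of 3 axes gives each electron
# 100**3 = 10**6 basis states; M electrons need (10**6)**M amplitudes.

grid_points_per_electron = 100**3

def naive_amplitudes(m_electrons: int) -> int:
    return grid_points_per_electron**m_electrons

print(naive_amplitudes(2))   # 10**12: already hopeless to store exactly
print(naive_amplitudes(11))  # 10**66: far beyond any conceivable memory
```

    This is exactly why nobody actually computes this way: compact one-particle bases and controlled approximations, not uniform grids, are what make molecular calculations routine.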

    What such estimates really show is the ignorance of the people providing them. For example, it is really not hard to find out that the electronic structure of molecules is routinely calculated by tens of thousands of theoretical chemists every day. You could do it on the computers your local supermarket sells. For friendly small molecules (say, <40 electrons) one can effectively get results converged to within a few meV. These techniques of course require some information about the physical systems involved, *but this information is generally available* and can be checked with controlled approximations.

    In short: If you read such a statement in a book, you can safely assume that you do not want to learn many-body theory from that book.
     
  8. Mar 21, 2013 #7
    @cgk, those approximation methods require quite a number of assumptions though, in particular they do not work well for strongly-correlated systems, which are important in contemporary condensed matter.
    This report provides some examples of the computing power required for classical simulation:
    http://sc05.supercomputing.org/schedule/pdf/pap188.pdf
     
  9. Mar 21, 2013 #8

    f95toli

    Science Advisor
    Gold Member

    A good example would be high-temperature superconductivity. We have a very good understanding of what goes on in metallic superconductors, including binary systems such as MgB2, and we can calculate all their properties more or less from first principles; but no computer in the world is powerful enough to fully simulate, e.g., a reasonably large YBCO lattice, which is what you would need in order to, say, calculate Tc without any assumptions.

    Note that this hasn't stopped people from trying; there are a bunch of papers out there where people have tried various "short-cuts", but with limited success. Perhaps if we extended the lattice 10 times in each direction (x, y and z) it would work; but that would require computers 10^3 = 1000 times faster than what we have now...
     
  10. Mar 22, 2013 #9

    cgk

    Science Advisor

    I recognize that there are systems for which those approximation methods will not work well, or at all. But just saying that it is impossible to calculate wave functions for any meaningful systems, as the kinds of statements cited by the OP often imply, is still utterly wrong (and it is particularly ridiculous when such statements are used to introduce density functional theory).

    Even just saying "for strongly correlated systems it cannot be done" is not enough: for example, a vast array of strongly correlated molecular systems are accessible to MCSCF+MRCI (multiconfiguration self-consistent field plus multireference configuration interaction). Also, when the strong correlation extends only along one direction, there are methods like DMRG (density matrix renormalization group) which can in principle (and sometimes in practice) handle them. There are even infinite strongly correlated lattice models (like the 1D Hubbard model) for which some ground state wave functions can be calculated analytically, with pen and paper, using variations of the Bethe ansatz.
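    The reason methods like DMRG escape the exponential estimates can be seen in a parameter count (toy numbers of my choosing): a matrix product state for N sites of local dimension d and bond dimension D needs roughly N * d * D^2 parameters, versus d^N for the full state vector.

```python
# Illustrative parameter count behind DMRG-type methods (toy numbers):
# a matrix product state stores ~ N * d * D**2 parameters, where D is
# the bond dimension, versus d**N for the exact state vector.

def mps_params(n_sites: int, d: int, bond_dim: int) -> int:
    return n_sites * d * bond_dim**2

def full_params(n_sites: int, d: int) -> int:
    return d**n_sites

n, d, D = 100, 2, 64
print(mps_params(n, d, D))  # 819200 numbers: trivial to store
print(full_params(n, d))    # 2**100 numbers: impossible to store
```

    The catch, consistent with the discussion above, is that a modest D only suffices when entanglement is limited (e.g. gapped 1D systems), which is precisely the "knowing which part of Hilbert space to hunt in" that atyy mentioned.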

    I would be okay with those statements if they came with some restrictions in scope. For example, by saying that in the very worst case, for a random Hamiltonian with N-body terms and on which we have no further information, the wave function most likely cannot be represented without exponential cost (just like the Hamiltonian itself!). But that is often not the case; the authors make it sound like it is impossible to calculate wave functions of *any* interesting systems. And this is not true.
     