
I Sign problem - QCD

  1. Nov 25, 2017 #1
    In ordinary mechanics, adding 1 particle to a system of 1000000 doesn't change much. I know about the sign problem in QCD: at low temperature the amount of calculation required diverges.

    My question is: when we add yet another quark to a system of 1000000 quarks, the amount of calculation increases dramatically. But is it also possible that the solution for 1000001 quarks differs dramatically from the solution for 1000000 quarks, and that a system with 1000001 quarks has some genuinely new properties?

    Intuitively, 1 quark should not change much when we already have a million. However, looking at atomic nuclei we see that most of them are really unique. So up to approximately 239 * 3 = 700-800 quarks, every addition changes the system dramatically. How far does it go?
  3. Nov 25, 2017 #2
    Where do you find 1000000 quarks?
  4. Nov 25, 2017 #3


    Staff: Mentor

    "Really unique" in what way?
  5. Nov 25, 2017 #4


    Staff: Mentor

    What are you referring to here?
  6. Nov 25, 2017 #5
    Normally such systems are not stable.
    But they can be stable in a gravitational well (a neutron star).
    You can think of other exotic conditions too, like a collapsing closed universe, for example.
  7. Nov 25, 2017 #6
  8. Nov 25, 2017 #7
    There are many. For example (almost randomly chosen):
    The very unusual feature of 180mTa is that the ground state of this isotope is less stable than the isomer.

    So there is an interesting property, observed in systems of no fewer than 540 quarks.
  9. Nov 25, 2017 #8


    Staff: Mentor

    There are many ways in which nuclei differ, but I don't see why that justifies the term "really unique". It's just energy levels in systems with many fermions packed into a small volume.

    Which is shared by at least one other nucleus (242mAm), so it's not unique.

    "Interesting property" is a subjective term. I could just as well say that deuterium (hydrogen-2) has the "interesting property" of having a neutron in it, by contrast with hydrogen-1, which does not.

    Again, the properties of various isotopes are pretty well understood in terms of energy levels in systems with many fermions packed into a small volume. The fermions in question are protons and neutrons, not quarks, at least in all of the nuclear energy level models I've seen, but that's to be expected since protons and neutrons are such tightly bound states of three quarks that they can be treated as single particles for purposes of analyzing nuclei. You only need to look at the underlying quark structure when analyzing very high energy experiments, such as the deep inelastic scattering experiments done in the late 1960s. Which just emphasizes again that there are no "really unique" properties here, you just stack more and more protons and neutrons together and look at the energy levels.
  10. Nov 25, 2017 #9
    I'll try to clarify.
    Let's say we have a sequence of systems with 1, 2, ..., N particles.
    We select some scalar property P whose value can be normalized by the number of "particles". We observe the behavior of P/N as N grows. Normally we expect P/N to converge to some constant, or to increase/decrease very slowly with N. In both cases, as N -> inf, P(N+1)/(N+1) - P(N)/N -> 0.

    An "interesting" behavior is when this is not true.
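    As a toy numerical check of this criterion (a sketch only: the property P(N) = a*N + b*sqrt(N), a hypothetical bulk term plus a surface-like correction, is invented purely for illustration), Delta indeed goes to zero as N grows:

```python
# Hypothetical extensive property with a sub-leading correction:
# P(N) = a*N + b*sqrt(N).  The bulk term a*N drops out of Delta,
# and the correction term drives Delta -> 0 as N grows.
import math

def P(N, a=2.0, b=5.0):
    return a * N + b * math.sqrt(N)

def delta(N):
    return P(N + 1) / (N + 1) - P(N) / N

for N in (10, 100, 1000, 10000):
    print(N, delta(N))
```

    Here Delta shrinks monotonically and never flips sign, which is the "trivial" behavior described above.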
  11. Nov 25, 2017 #10


    Staff: Mentor

    This seems to be a very limiting condition. What such property would distinguish the 180mTa nucleus from the 180Ta ground state nucleus? They both have the same number N of particles.

    Huh? I can think of at least one obvious counterexample: mass.
  12. Nov 25, 2017 #11
    The sign problem is not about systems with a fixed number of quarks. It's about systems with a chemical potential, which is defined in the typical grand canonical ensemble. Just like defining the canonical ensemble means letting your system exchange energy with a reservoir, defining the grand canonical ensemble requires allowing your system to exchange particles with a large particle reservoir.

    In any case, the sign problem arises because the fermion determinant becomes complex at nonzero chemical potential, so the typical strategy of integrating out the quarks fails.
  13. Nov 28, 2017 #12
    What's wrong with mass?
    Think of a neutron star as a huge system of neutrons. Both the invariant mass and the gravitating mass, divided by the number of neutrons, will be about the same for a wide range of neutron stars, and these values won't change if you add another spoonful of matter.
  14. Nov 28, 2017 #13


    Staff: Mentor

    Yes, they will. If P is mass, P/N for a neutron star will decrease slowly as you add more neutrons, because the star will become more tightly bound. The rate of decrease does not slow down as the number of neutrons goes up: at some point you reach the maximum mass limit for neutron stars and the star collapses to a black hole, but there is no convergence in P/N before that happens. So P/N for a neutron star, if P is mass, does not do either of the things you claimed: it doesn't converge to a constant and it doesn't settle into an ever-slower change.
  15. Dec 1, 2017 #14
    You're right.
    But let me make another attempt to address this issue.

    I return to

    Delta = P(N+1)/(N+1) - P(N)/N

    By "interesting", or "non-trivial", behavior I mean that Delta flips sign many times (or even infinitely many times) as N -> inf. This addresses your counterexample, in which P/N decreases monotonically. The new definition of non-triviality resembles the non-trivial zeros of the Riemann zeta function.

    But we want to avoid "trivial" sign flips. These could be caused by the simple fact that systems with odd and even numbers of fermions behave differently, so it makes sense to compare N and N+2. For quarks, though, it makes sense to add them 2 or 3 at a time so the system stays color-neutral ("white"). Hence, it makes sense to use the number 6 (2+2+2 or 3+3).

    So let's redefine Delta = P(N+6)/(N+6) - P(N)/N
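    A rough numerical sketch of this redefined Delta (an illustration only: it uses the semi-empirical Bethe-Weizsacker mass formula with common textbook coefficients, so it works with whole nucleons rather than individual quarks, and P is taken to be the binding energy of the most bound isobar) counts how often Delta changes sign as nucleons are added two at a time, i.e. six quarks at a time:

```python
# Sketch: P(N) = binding energy of the most bound isobar with A
# nucleons (N = 3A quarks), from the semi-empirical mass formula.
# Coefficients (in MeV) are standard textbook fit values.

def binding_energy(A, Z):
    a_v, a_s, a_c, a_sym, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z                      # neutron number
    B = (a_v * A - a_s * A ** (2 / 3)
         - a_c * Z * (Z - 1) / A ** (1 / 3)
         - a_sym * (A - 2 * Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:
        B += a_p / A ** 0.5        # even-even pairing bonus
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_p / A ** 0.5        # odd-odd pairing penalty
    return B

def best_B_per_nucleon(A):
    # most bound isobar: maximize the binding energy over Z
    return max(binding_energy(A, Z) for Z in range(1, A)) / A

flips, prev = 0, None
for A in range(10, 240, 2):        # step of 2 nucleons = 6 quarks
    d = best_B_per_nucleon(A + 2) - best_B_per_nucleon(A)
    if prev is not None and d * prev < 0:
        flips += 1
    prev = d
print("sign flips of Delta:", flips)
```

    For these coefficients Delta changes sign near the maximum of the binding-energy-per-nucleon curve; whether the real answer keeps flipping for arbitrarily large N is exactly the question at issue.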
  16. Dec 1, 2017 #15


    Staff: Mentor

    Ok, now show me the math that says this happens for the example you describe in your OP.
  17. Dec 1, 2017 #16
    As I said, the sign problem in this context refers to Monte Carlo simulations of systems at a finite chemical potential (not necessarily QCD, though QCD is the usual example). Even a simple Bose gas at a finite chemical potential has a sign problem because the action is complex. To be very clear:
    This statement shows a completely wrong idea of how lattice QCD actually works. It has nothing to do with the sign problem.
    Last edited: Dec 1, 2017
  18. Dec 2, 2017 #17
    Sorry, this was my impression after reading the wiki article: https://en.wikipedia.org/wiki/Numerical_sign_problem
    It claims that in QCD the complexity of the calculation grows as exp(f V/T), where f is a free-energy density.
  19. Dec 2, 2017 #18
    No, that equation is not about QCD specifically, and it does not even pertain to the sign problem itself: it's a measure of the effectiveness of reweighting, which is the "obvious" strategy for dealing with a sign problem.

    The way people actually do Monte Carlo calculations is the following: imagine you have a bunch of particles whose spins can be up or down. I'll assume that particles adjacent to one another can interact, and they can lower the energy of the system if their spins are aligned. This is the Ising model, which is a useful model of a ferromagnet. Now, if you want to study the Ising model via Monte Carlo, what you do is to generate a bunch of assignments of "up" or "down" for each spin, but you do it in a clever way: you generate these assignments (which from now on I'll call "configurations") in such a way that they're Boltzmann distributed. This means that, if you set the temperature of the system to be T, a configuration with energy E will appear with probability proportional to exp(-E/T). Now if you do a uniform sampling on these configurations, you can calculate, say, the magnetization, and it'll give you on average the same value as a "real" physical system that is described by the Ising model. Sneaky, huh?
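    A minimal sketch of the procedure just described, for the 2D Ising model (lattice size, temperature, and sweep counts are arbitrary illustration values; the update is the standard Metropolis accept/reject, which produces Boltzmann-distributed configurations):

```python
# Minimal Metropolis Monte Carlo for the 2D Ising model: generate
# spin configurations that are Boltzmann distributed at temperature T
# and measure the magnetization per spin.
import math, random

L, T = 16, 1.5                      # lattice side, temperature (J = kB = 1)
random.seed(1)
spin = [[1] * L for _ in range(L)]  # ordered ("cold") start

def sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # energy cost of flipping spin (i, j), periodic boundaries
        nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2 * spin[i][j] * nn
        # Metropolis accept/reject with Boltzmann factor exp(-dE/T)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]

for _ in range(200):                # thermalize
    sweep()
mags = []
for _ in range(200):                # measure
    sweep()
    mags.append(abs(sum(map(sum, spin))) / (L * L))
print("mean |magnetization| per spin:", sum(mags) / len(mags))
```

    Below the critical temperature (about 2.27 in these units) the sampled configurations magnetize; running the same code above it gives a mean magnetization near zero.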

    The sign problem is the name for what happens when the "Boltzmann factors" corresponding to some physical system become negative or complex. I don't know how to generate configurations with a negative probability, so the entire program crumbles. One way to rescue it is to take the absolute value of the complex Boltzmann factor and generate configurations weighted according to that, and then take the sign (or phase) and move it into the observable you're trying to measure. I don't know how to generate complex-weighted configurations, but I certainly know how to measure a complex observable, so this procedure, in principle, fixes the problem. This is "reweighting".

    However, by doing this we have discarded important information about the system. The configurations which are "typical" for the system with absolute value weights are not necessarily the same configurations which are "typical" for the original system. The problem was "fixed", technically, in the sense that I can write an algorithm that works, but it's also a useless algorithm. It is useless because I'm simulating a different physical system, with different properties, and hoping I'll learn something about the original one. You might ask how bad this is. Well, that's what the equation you've seen says. It's saying that the errors introduced by the reweighting are proportional to exp(Δf V/T), where Δf is the difference in the free energy densities between the original system and the reweighted one.

    See here for more details.
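    A toy illustration of reweighting (not QCD: the complex weight w(x) = exp(-x**2/2 + i*lam*x) and the value of lam are invented for the demo): sample according to |w|, which here is just a plain Gaussian, and fold the leftover phase into the observable. The shrinking average phase plays the role of the overlap factor described above.

```python
# Toy reweighting: the "path integral" is a 1D integral with complex
# weight w(x) = exp(-x**2/2 + 1j*lam*x).  Sample |w| (a Gaussian),
# then move the phase exp(1j*lam*x) into the observable.
# Exact answer for <x^2> under the full complex weight: 1 - lam**2.
import cmath, random

random.seed(2)
lam = 0.5
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]   # |w| sampling
phases = [cmath.exp(1j * lam * x) for x in xs]
num = sum(x * x * p for x, p in zip(xs, phases)) / len(xs)
den = sum(phases) / len(phases)                         # average phase
print("reweighted <x^2>:", (num / den).real)            # exact: 0.75
print("average phase   :", abs(den))                    # exp(-lam**2/2)
```

    When the average phase approaches zero (a severe sign problem), dividing by it blows up the statistical error; the exponential shrinking of that average phase with system size is what the exp(Δf V/T) cost estimate quantifies.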

    So, in short, this equation is a measure of how bad the naive strategy is at dealing with the sign problem. There's nothing quite so specific about putting in N quarks, etc. I mean, the simulations of QCD which have the sign problem don't even have quarks as you'd think of them, because they've been integrated out.
    Last edited: Dec 2, 2017
  20. Dec 2, 2017 #19
    Thank you for the detailed explanation. But is it correct to say that the whole problem is NP-hard, and hence not only the "obvious" approach but all other approaches suffer from NP-hardness? So it is either brute force or simplified models, with the danger of being inadequate in some regimes?

    This is why I posted this question in the first place. For example, we human beings are low-temperature QED/QCD systems, and our properties are absolutely not obvious from the QM formulas.
  21. Dec 2, 2017 #20

    QCD was never proved to be NP-hard. What is known to be NP-hard are various types of spin glasses. In order for some problem X to be NP-hard it must be possible to use it as a black box to solve any problem in NP, in polynomial time. Typically you prove that by encoding another problem, known to be NP-hard, as some instance of X. In the case of a spin glass the mapping is straightforward because you can choose individual couplings to be ferromagnetic or antiferromagnetic. In the case of QCD there is no such freedom because the only available parameters are the temperature, chemical potential, and possibly some external field. This makes it very unlikely that the typical sign problem of QCD at a constant nonzero density is NP-hard. There simply aren't enough parameters to encode a problem with.

    That's right, but there are ways of getting field theories to form interesting structures without necessarily having to go through NP-hardness (e.g. https://arxiv.org/abs/cond-mat/0209570). Besides, as I mentioned, lattice QCD at nonzero chemical potential should be thought of as a way of modeling, say, a small piece of a neutron star, or a small piece of a large glob of quark gluon plasma, or something like that. The system is allowed to exchange particles with a reservoir. It's not adequate for thinking of a system of 30 quarks or something like that.