Roberto Percacci commented on emergence at Woit's blog, in reply to this by "H.M." http://www.math.columbia.edu/~woit/wordpress/?p=3123&cpage=1#comment-61076 H.M.: "...You have to understand the difference between emergence and reductionism. The prevalent idea in high energy physics is that everything (up to a limit) can be split apart and reduced into more and more fundamental particles (or strings). For example the proton is built out of three quarks, which again might be built out of more fundamental things. The idea of emergence is VERY different. If a particle is not fundamental it is not necessarily built out of smaller particles; it can be a COLLECTIVE degree of freedom. There are many examples of collective phenomena in physics (especially pioneered in Condensed Matter Physics), and these "particles" cannot be described in a reductionist way. Quantum solids and liquids are microscopically non-relativistic and built out of many particles interacting via EM. But they can have low energy degrees of freedom which are much more symmetric, i.e. have Lorentz invariance and emergent non-Abelian gauge bosons. Thus the answer to your question is that what emergent models can offer is to describe emergent phenomena! IF gauge symmetries and gravity are not fundamental, but emergent collective degrees of freedom, they cannot be described by, say, strings. Whether the content of the standard model is fundamental or emergent is not known (though most people tend to prefer non-emergence)." Percacci: "Quoting H M: > what emergent models can offer is to describe emergent phenomena! Yes, that would be useful, but much of the literature does not address the issue of constructing workable emergent models for gravity. It seems to me that those that go furthest in this direction are people that do not usually talk much about emergence. 
For instance, you could argue that causal dynamical triangulations is currently the most successful way of actually calculating emergent properties of gravity." http://www.math.columbia.edu/~woit/wordpress/?p=3123&cpage=1#comment-61101 You could also argue that LQG is that, currently. IOW, that the emergent properties of gravity are collective degrees of freedom arising from the evolution of nodes (as described in the spin-network picture).
... one of the most important lessons of duality is that there is in general no fundamental distinction between a particle being elementary or a collective excitation. The canonical example is soliton versus elementary field---like the well-known duality in 2d between a kink soliton and a local fermion: their S-matrix is the same, and there is no physical distinction despite the naive appearance. Things are quite a bit more complicated than naive notions of compositeness and fundamentality!
I see that there are a few very overlapping discussions going on now, and think this could be interesting to discuss more deeply. It relates both to the logic of string theory (as per the other thread) and to general considerations of model building. Unfortunately I've got a trip coming up soon; I hope that I get around to commenting, and that all of those that contribute to this discussion can hang on for a while, even if the pace is lower. I think the QUESTIONS that have arisen here and in the other thread are really good, and I think it's interesting to analyze the various suggestions! I think there are insights to be gained by that. Atm I'm not able to produce any response... but I hope the discussions will stay alive for a little while. /Fredrik
This is what I think compelled Percacci to post. This is where the problem lies: 2d vs. 4d. Weinberg showed in 1979 that the 2d EH action is asymptotically safe, but not the 4d one. Percacci accumulated evidence for the 4d case and got Weinberg interested in the topic again after 30 years. Given that Weinberg is also interested in CDT for that same reason, he couldn't resist some indirect propaganda.
It looks in your post as if you were quoting me! And as if you were responding to what I said. That is a bit confusing, Surprised. Please be careful to show whom you are quoting. My aim was to show what Percacci said. The rest is to provide context. I highlighted a sentence of Percacci. However you did not respond to this. You seem to be correcting something that "H.M." said, whoever that is. If you want to respond to my post, I would appreciate hearing your reaction to Percacci. (You may wish to admonish and correct him, which would be both interesting and delightful to hear.)
I think you are making the story much too complicated, bringing in Weinberg and Asymptotic Safety and 2d and 4d etc etc. Percacci has something very simple to say. There are people like Verlinde who buzz about "emergence", but they don't propose ideas of what are the underlying degrees of freedom from which the appearance of gravity might emerge. Instead, the people who are proposing and studying basic DoF to underlie gravity and geometry (like CDT and LQG researchers) are not the ones talking about "emergence". It is a simple point he is making.
Let's get back to focus on what Percacci actually said. It is simple, important, and it is the topic of this thread:

==quote Percacci==
...much of the literature does not address the issue of constructing workable emergent models for gravity. It seems to me that those that go furthest in this direction are people that do not usually talk much about emergence. For instance, you could argue that causal dynamical triangulations is currently the most successful way of actually calculating emergent properties of gravity.
==endquote==
http://www.math.columbia.edu/~woit/wordpress/?p=3123&cpage=1#comment-61101

Loll uses self-assembling swarms or flocks of small identical 4-simplices as her DoF, and she gets large-scale de Sitter spacetime to emerge as an average. The calculation is a Monte Carlo computer sim. The 4-simplices behave according to simple local rules---analogous to individual birds in a flying flock, fish in a school, or sheep in a herd. Each individual interacts locally with its neighbors according to simple rules, and an overall dynamic shape of the collective appears out of this. The simple local rules reflect some Regge considerations. The computer is constantly randomizing the internal arrangement according to these rules. It may iterate the assemblage a million times before printout. It is very "thermodynamical" or "ergodic" in this way---like shuffling a deck of cards a million times before you examine it.

The method of CDT is not manifoldy, in the sense that the dimensionality at small scale is not even determined. A smooth manifold which is macro 4d must also be micro 4d. Loll has gone beyond the differential manifold and has found a 21st-century way to do unsmooth geometry. However, she requires a computer doing Monte Carlo sims in order to calculate. Nevertheless she is doing what Percacci says, when she calculates that the small-scale dimensionality goes down from 4 to near 2. 
And when she calculates that the average large-scale shape of the flock is de Sitter. She and Ambjorn and Jurkiewicz and friends are calculating emergent properties of gravity: calculating emergent properties of geometry from non-manifold micro degrees of freedom, i.e. from active self-assembling building blocks. That is just one example of the new thing which Percacci is pointing at. LQG models can also be seen in this light. In Loop cosmology one also runs small computer models of the universe, and the cosmology people are getting into doing this with spinfoam models (which will make them more like Loll's CDT).
================
I see Percacci just posted a second comment, amplifying what he said about CDT calculating emergent properties of gravity (actually of geometry, but it's the same thing): http://www.math.columbia.edu/~woit/wordpress/?p=3123&cpage=1#comment-61243
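To make the "small-scale dimensionality goes down from 4 to near 2" measurement concrete: in CDT the spectral dimension is extracted from the return probability of a random walker diffusing on the triangulation, P(t) ~ t^(-d_s/2). Below is a minimal toy sketch of that diffusion-based estimator on an ordinary 2d periodic lattice (NOT a CDT ensemble; the graph, function names, and fit window are all my own illustrative choices). On a 2-torus the estimate should come out close to the exact value d_s = 2.

```python
import math
import random

def spectral_dimension(neighbors, start, t_max, walkers, seed=1):
    """Estimate the spectral dimension d_s from the return probability
    P(t) ~ t^(-d_s/2) of a random walker diffusing on a graph.
    `neighbors` maps each node to the list of adjacent nodes."""
    rng = random.Random(seed)
    returns = [0] * (t_max + 1)
    for _ in range(walkers):
        pos = start
        for t in range(1, t_max + 1):
            pos = rng.choice(neighbors[pos])
            if pos == start:
                returns[t] += 1
    # Least-squares fit of log P(t) vs log t; the slope is -d_s / 2.
    # The square lattice is bipartite, so only even t can return.
    pts = [(math.log(t), math.log(returns[t] / walkers))
           for t in range(2, t_max + 1, 2) if returns[t] > 0]
    mean_x = sum(x for x, _ in pts) / len(pts)
    mean_y = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in pts) /
             sum((x - mean_x) ** 2 for x, _ in pts))
    return -2.0 * slope

def torus_neighbors(L):
    """Adjacency of a periodic L x L square lattice (a 2-torus)."""
    return {(x, y): [((x + 1) % L, y), ((x - 1) % L, y),
                     (x, (y + 1) % L), (x, (y - 1) % L)]
            for x in range(L) for y in range(L)}

# A flat 2d geometry should give d_s close to 2.
d_s = spectral_dimension(torus_neighbors(20), (0, 0), t_max=60, walkers=50000)
```

In actual CDT simulations the same diffusion process is run on the Monte Carlo ensemble of triangulations, and the fitted d_s drifts from ~4 at large diffusion times to ~2 at small ones.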
I do find Percacci's latest paper bizarre---there's nothing wrong with what he says about emergence, it's just more like "what's the point?" Basically, a UV-renormalizable theory can be effective or emergent; it is simply a theory that hides the fact that it will fail. A non-renormalizable theory that works in some regime is one that advertises its eventual failure. This is textbook stuff, e.g. in Kerson Huang's or Zee's QFT texts.
I don't think your finding his latest paper pointless shows anything wrong with his comment at Woit's blog about CDT. His point there was a simple one: the contrast between emergence buzz and research (like CDT) which calculates some actual emergent properties of geometry without bothering to call attention to it. His latest paper, for all we know, may be unrelated to that quite reasonable point made in general open discussion. But let's look at his paper.

http://arxiv.org/abs/1008.3621
Asymptotic Safety, Emergence and Minimal Length
R. Percacci, G. P. Vacca
20 pages, 2 figures
(Submitted on 21 Aug 2010)
"There seems to be a common prejudice that asymptotic safety is either incompatible with, or at best unrelated to, the other topics in the title. This is not the case. In fact, we show that 1) the existence of a fixed point with suitable properties is a promising way of deriving emergent properties of gravity, and 2) there is a precise sense in which asymptotic safety implies a minimal length. In so doing we also discuss possible signatures of asymptotic safety in scattering experiments."
=============

Well, I had a look. Actually it was the second time---I read part of it a few days back. I have to say I thought (perhaps as you did) that the case was a bit strained---when he argued that if you took a trajectory slightly off the critical hypersurface you would be looking at properties that were "emergent at low energy." It's midnight here. I should try this again in the morning. The way I'm thinking now, the paper is not one of Percacci's best. Maybe it is more than half by Vacca, his coauthor. It does not seem like the usual RP style or quality. But don't trust my inexpert judgement, especially when I'm falling asleep.
While it's true that "emergence" has become a buzzword, there are different kinds of emergence, and I personally think the fact that there are no fundamental degrees of freedom is a good trait. I see two ways of emergence:

(1) From a fixed set of complexions, or a fixed "fundamental" state space, emergence of a particular structure is in a sense reorganisation; order "emerges from chaos" by self-organisation of a given but fixed set of degrees of freedom/microstructures (I use these as synonyms, since it just depends on whether we consider continuum models or discrete ones; their purpose is the same).

(2) If we add on top of the first idea that the representation and description of this process can only "live" on the inside, i.e. be seen from the point of view of an inside observer, then we also get the peculiar thing that the state spaces and the VISIBLE degrees of freedom can never be fixed or fundamental. They are evolving.

So in (2) we have emergence without fundamental microstates or degrees of freedom, by evolution relative to some existing microstructure (which one may, in a generalized sense, call a background), but even this background is still evolving. The FULL evolution can never be predicted by an inside observer and thus can never be falsified. So I think this isn't what we should seek; we should instead try to find the best possible inside description, and try to separate the decidable from the undecidable. I don't think Verlinde goes far enough, but his idea that space emerges not from some fundamental degrees of freedom, but by expanding on an existing piece of space, fits the principal reasoning I agree with. The point would then be that at some level or horizon each observer simply reaches a point where this evolution is undecidable and unpredictable, and that's it. 
If you think that this is really how nature works (note the built-in locality in this view), then we must not let our desire for eternal truths and timeless governing laws blind us and prevent progress. As I see it, in this picture, combined with a generalized entropic idea, even the ACTION form, governing the expected action in a given microstructure system (at an instant of time), has emerged from a much simpler action. In this sense I don't think CDT goes far enough either. The Einstein-Hilbert action should be emergent too (from first-principle entropic flows), along with the emergence of space, and I think THIS could be possible to understand. I haven't read the paper, but I fully share the general ambition and the view that entropic reasoning can be applied to all interactions. This is exactly what is implicit in what I've labelled "rational action". The rational action is pretty much a random action, but guided by the existing state space. If you pick a "random perturbation", one will generally get a probability distribution that looks something like the exponential of the negative information divergence. This unifies the action and entropy notions into a generalized probability framework, and if we combine this with memory transformations (as different compressions), non-trivial actions will appear; in particular, classical logic will break in expressions such as "A and B" or "A or B" when A and B refer to different spaces, which are related by mappings but are still unified at the upper level by transforming back to the original set, and thus we get a real number (a probability) associated with them. Moreover, in the discrete picture this may be countable, so that one can start from zero complexity and classify interactions. (Here string theory is "similar", except that it starts from some unclear idea of a "string" in a background---which is also part of the reason for the landscape; otherwise it has similar traits.) /Fredrik
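The "exponential of the negative information divergence" above has a standard concrete instance (Sanov's theorem in large-deviation theory): the probability of observing empirical frequencies q in n draws from a distribution p falls off roughly as exp(-n D_KL(q||p)). A small sketch of that weight (my own illustrative names and numbers, not from Fredrik's framework):

```python
import math

def kl_divergence(q, p):
    """D_KL(q || p) for discrete distributions given as probability lists."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def large_deviation_weight(q, p, n):
    """Sanov-type estimate: probability of seeing empirical frequencies q
    in n samples from p decays like exp(-n * D_KL(q || p))."""
    return math.exp(-n * kl_divergence(q, p))

p = [0.5, 0.5]          # expected ("prior") distribution
q_near = [0.55, 0.45]   # small deviation from p
q_far = [0.9, 0.1]      # large deviation from p
w_near = large_deviation_weight(q_near, p, n=100)
w_far = large_deviation_weight(q_far, p, n=100)
```

Small deviations from the expected distribution keep almost all of the weight, while large deviations are exponentially suppressed; that suppression factor is the sense in which an entropic "action" can emerge from pure counting.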
This quote was meant as a general warning, since by now we know that many old-fashioned ideas of compositeness and notions of something being "more fundamental" have to be revised. As said, we know many examples for which these notions are blurred---think e.g. about the AdS/CFT correspondence. These issues are in general more intricate than what happens in solid state physics; nor do they have anything specifically to do with 2 dimensions.
Yes, it's quite unrelated. CDT is interesting because it is ill defined, but has a really interesting result. It's ill defined in the sense that they don't know what the true theory is that is producing the results. Most intuitively, it could be Asymptotically Safe gravity, i.e. the diffeomorphism invariance of the action is preserved at arbitrarily high energies, and is not emergent. However, it's interesting that they haven't ruled out CDT as coming from Shaposhnikov and collaborators' scale-invariant gravity or from Horava-Lifshitz; in fact I thought the lower dimensional CDT results were more consistent with Horava-Lifshitz (though having just glanced at a later paper, apparently maybe not). In both of those, I think, the diffeomorphism symmetry is not present at high energies, but emerges at low energies.

Now they say in http://arxiv.org/abs/1002.3298: "Interestingly, also the renormalization group approach was able to reproduce the same finding, after the spectral dimension had first been measured in simulations of CDT quantum gravity, a result taken at the time as possible corroboration of the equivalence between the CDT and RG approaches", and the footnote says "Inspired by the seemingly universal value of the UV spectral dimension, more general arguments about the underlying UV nature of space-time have been put forward." and points us to Carlip's http://arxiv.org/abs/0909.3329. It's even funnier to read Carlip's paper, where he assembles all the evidence for effective lower dimensionality at high energies, including: "High temperature strings. Yet another piece of evidence comes from the high temperature behavior of string theory. In 1988, Atick and Witten showed that at temperatures far above the Hagedorn temperature, string theory has a very peculiar thermodynamic behavior: the free energy in a volume V varies with temperature as F/VT ∼ T. For a field theory in d dimensions, in contrast, F/VT ∼ T^(d−1). 
Thus, although string theory lives in 10 or 26 dimensions, at high temperatures it behaves in some ways as if spacetime were two-dimensional." And of course, string theory is the theory which gives us the strongest evidence for emergent gravity.
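The scaling Carlip quotes can be unpacked with standard blackbody-type power counting (my own paraphrase, not taken from either paper):

```latex
% Massless field theory in d spacetime dimensions: the free energy density
% scales with the only available scale, the temperature (k_B = \hbar = c = 1):
\frac{F}{V} \;\sim\; -\,c_d\, T^{\,d}
\qquad\Longrightarrow\qquad
\frac{F}{VT} \;\sim\; T^{\,d-1}.
% For d = 4 (ordinary radiation) this gives F/VT ~ T^3.
% The Atick--Witten high-temperature string result F/VT ~ T
% matches the field-theory formula only for d = 2,
% i.e. an effectively two-dimensional spacetime.
```

So the free-energy growth rate directly diagnoses the effective number of dimensions, which is why a linear F/VT counts as evidence for dimensional reduction.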
It doesn't matter what was meant, Surprised. In post #2 of this thread, you quoted something I did not say, and would not be likely to say, and put my name on it. All I am asking is that in future you quote accurately. Please do not attribute to me something that someone else (in this case you can see his name was "H.M.") said.
There have been claims that "gravity cannot be a Wilsonian quantum field theory". One such statement was made by Verlinde in a talk he gave at Perimeter (not the one that can be found online, another talk he gave the day after, where he tried to explain the connection to string theory). His point was that in a genuine Wilsonian theory you go towards a CFT (a fixed point) in the infrared, whereas in the case of gravity the fixed point is supposed to be in the ultraviolet (I am not trying to reproduce faithfully his reasoning, but this point was at its core). Our paper was motivated in part as a response to such claims. You are right to point out that much of it is standard, but still it seems to be unclear to many people. One reason may be that to apply the standard Wilsonian picture you have to reach the fixed point from above, and this means that you start from such high energies that the Planck scale can be viewed as an infrared limit. This often causes misunderstandings, so we thought it could be useful to spell it out in detail. That said, I don't know how to interpret your statement that a renormalizable theory hides its eventual failure. Can you elaborate? In my understanding, if you are on the UV critical surface (i.e. on a renormalizable trajectory) there is no failure; if you are on any other trajectory the theory is generally an effective field theory and you can guess where it will fail by measuring (at low energy) its distance from the UV critical surface. I think it is important to stress this latter point because it highlights the usefulness, at least in principle, of calculating quantum effects in the theory of the metric, even if the metric should eventually prove to be emergent. This contrasts with the statement that is sometimes made, that if gravity is emergent, it makes no sense to treat the metric as a quantum field. In any case, thanks to everybody for all your comments.
Wow, thanks for stopping by! As you know, a number of us here, including me, find the general direction of your work fascinating! I was talking about your latest paper http://arxiv.org/abs/1008.3621. There I understood that the theory of gravity could have a UV fixed point and be asymptotically safe, but gravity could still be emergent, in the sense that the real world may not lie on a trajectory that runs into the UV fixed point. However, even if that were the case, asymptotic safety of the theory would still have observable results, because the fixed point affects all nearby trajectories (universality). What I meant by a renormalizable theory hiding its own failure, was that a renormalizable theory is mathematically sound at all energies, and so will not fail mathematically. Its failure will be experimental only. As an analogy from classical mechanics, Newtonian mechanics is mathematically sound, so there is no mathematical need for it to be an effective theory. However, it is experimentally only an effective theory for low speeds, and is in some sense "emergent" from special relativity. So Newtonian mechanics mathematically hides its eventual failure. In contrast, a non-renormalizable theory fails mathematically and experimentally, so from mathematics alone, we know it can only be an effective theory. So Newtonian mechanics would be a "renormalizable" theory that is only effective - same as Asymptotically Safe gravity with a trajectory near but off the critical surface.
Minor interruption (I too hope R.P. has more to say!). There is a subtle difference. Assuming AS is right and we know the form, then the trajectory depends on the input of a finite number (like 3 or 4?) of constants. In the "near but off" case one finds that the theory is blowing up, but one does not have to change the form of the theory---as one might if it were merely effective, and as one in fact does with Newton. As I understand it, in the "near but off" AsymSafe case if there is trouble all one has to do is improve the values of a finite number of constants, bringing one closer to the critical surface. I believe I'd rather be in that situation than be provided with a theory that is only effective. The difference may not be immediately apparent, but I think it is a real one.
Welcome Percacci! I really enjoy your works, so I have a question about your most recent paper, FIG. 2, about the scalar annihilation cross section at tree level, AS vs. non-AS: http://arxiv.org/abs/1008.3621 That graph looks identical to the graph comparing the energy of quantum oscillators vs. classical oscillators in a black body, that is, Rayleigh-Jeans vs. quantum oscillators. What does this comparison mean to you? http://hyperphysics.phy-astr.gsu.edu/hbase/mod6.html For me, it means that the classical minimum scale is the most energetic scale, not the smallest scale, which might go to infinitely small.
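For readers who don't have the blackbody curves in mind: the shape MTd2 is describing is the ratio of the Planck spectral energy density to the classical Rayleigh-Jeans one, which is 1 at low frequency and exponentially suppressed at high frequency. A quick numerical check (standard textbook formulas; the temperature and frequency values are arbitrary illustrative choices):

```python
import math

H = 6.626e-34   # Planck constant, J s
K = 1.381e-23   # Boltzmann constant, J / K
C = 2.998e8     # speed of light, m / s

def planck_density(nu, T):
    """Planck spectral energy density u(nu, T) in J s / m^3."""
    x = H * nu / (K * T)
    return (8 * math.pi * H * nu**3 / C**3) / math.expm1(x)

def rayleigh_jeans_density(nu, T):
    """Classical equipartition result; diverges at high frequency."""
    return 8 * math.pi * nu**2 * K * T / C**3

T = 5000.0  # K
# At low frequency the two curves agree; at high frequency the classical
# curve keeps growing while the Planck curve is exponentially cut off.
ratio_low = planck_density(1e12, T) / rayleigh_jeans_density(1e12, T)
ratio_high = planck_density(1e15, T) / rayleigh_jeans_density(1e15, T)
```

That cutoff of the classical divergence by a quantum of scale is the visual analogy being drawn with the AS vs. non-AS cross-section curves.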
I see. You were implicitly assuming that every theory must eventually fail. I certainly sympathize with the idea that there will always be new frontiers to explore, but that's still a rather ideological assumption. Steven Weinberg, for example, does not seem to be afraid of the idea of a "final theory". It seems to me that you are taking for granted that there will be "revolutions" or "paradigm shifts" in the future of physics that will force us to abandon quantum field theory. Then even seemingly consistent quantum field theories will have to be "unexpectedly" replaced by something else. The reason I had trouble understanding your remark is that I was assuming the continued validity of our current quantum field theory framework. Thanks for the clarification.
Perhaps I can help. We don't want to distract or introduce new material into the discussion. The best thing to do is probably just listen and not make implied demands on the author's time. Your question in post #17 is OK for another time and place, but here there is just one issue, one essential question. I want to quietly listen and read over what Atyy and R.P. said and understand what that question is. MTd2, read post #18 again and think about Weinberg's speech in early July 2009 at CERN, opening an important HEP conference. He said politely and supportively to the string theorists that he didn't want to discourage anyone about their research but string might not be needed. It might not be how the world is---how it is might instead be just "good old quantum field theory." He drew a stairway graph of the fortunes of QFT over many decades, from before 1950 to the present, and had the immodesty to continue it with a dotted line---going up another stage. Another flight of stairs with a "?". Part of the question that the Atyy/RP exchange raises in my mind is do we take Weinberg's July CERN talk seriously. As usual with good questions, there is no right answer. This is exactly what RP just said. One can choose to, or not. It is (as RP indicated) "ideological". This illuminates for us how Atyy may be thinking and also how RP may be thinking. In case anyone wants a link to the CERN video: http://cdsweb.cern.ch/record/1188567/ (historical overview of periodic rise of QFT goes to minute 58, then 12 minutes of future projection.) I will cut this comment short because I hope their conversation will continue and I don't want to distract.