
Asymptotic safety and black holes (new Mattingly paper)

  1. Jun 8, 2010 #1

    marcus

    User Avatar
    Science Advisor
    Gold Member
    Dearly Missed

Dave Mattingly has been a frequent collaborator with Ted Jacobson (his thesis advisor). You know of Jacobson if, for example, you followed the recent discussion of "gravity as entropic force" by Erik Verlinde and others; in 1995 Jacobson derived the Einstein field equation from thermodynamics. Or if you followed last year's Perimeter conference on Horava gravity (where Jacobson led the concluding discussion). Or if you watched the Santa Barbara KITP workshop on quantum spacetime singularities (to a large extent about black holes). Jacobson has QG cred, and some of that rubs off. Mattingly is comparatively young, but it's probably worthwhile seeing what he has to say about Asymptotic Safety. So here's this recent paper.

    http://arxiv.org/abs/1006.0718
    Asymptotic Safety, Asymptotic Darkness, and the hoop conjecture in the extreme UV
    Sayandeb Basu, David Mattingly
    9 pages
    (Submitted on 3 Jun 2010)
    "Assuming the hoop conjecture in classical general relativity and quantum mechanics, any observer who attempts to perform an experiment in an arbitrarily small region will be stymied by the formation of a black hole within the spatial domain of the experiment. This behavior is often invoked in arguments for a fundamental minimum length. Extending a proof of the hoop conjecture for spherical symmetry to include higher curvature terms we investigate this minimum length argument when the gravitational couplings run with energy in the manner predicted by asymptotically safe gravity. We show that argument for the mandatory formation of a black hole within the domain of an experiment fails. Neither is there a proof that a black hole doesn't form. Instead, whether or not an observer can perform measurements in arbitrarily small regions depends on the specific numerical values of the couplings near the UV fixed point. We further argue that when an experiment is localized on a scale much smaller than the Planck length, at least one enshrouding horizon must form outside the domain of the experiment. This implies that while an observer may still be able to perform local experiments, communicating any information out to infinity is prevented by a large horizon surrounding it, and thus compatibility with general relativity can still be restored in the infrared limit."

One thing is that he knows the recent AS literature, for example citing not just an old review paper by Martin Reuter but also important new work like the 2010 paper of Benedetti, Machado, Saueressig. He knows how Newton's constant runs in recent AS treatments.
The dimensionful coupling, Newton's constant, goes to zero as the energy scale increases. It turns out to be crucial how the dimensionless version g = G_N(p) p^2 behaves in the UV limit. Benedetti et al. found the asymptotic value to be around g* = 2.
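To make the running concrete, here is a small numerical sketch (my own illustration, not taken from the paper: the interpolating form of G(p) below is a common toy parametrization, and only the fixed-point value g* = 2 is quoted from Benedetti et al.) showing the dimensionful G(p) going to zero while the dimensionless g = G(p) p^2 approaches the fixed point:

```python
# Toy illustration of a running Newton constant near a UV fixed point.
# The interpolating form G(p) = G0 / (1 + G0*p^2/gstar) is a common toy
# parametrization, NOT the exact flow of the paper; gstar = 2 is the
# fixed-point value quoted from Benedetti, Machado, Saueressig.

G0 = 1.0      # low-energy (IR) Newton constant, Planck units
gstar = 2.0   # dimensionless fixed-point coupling g*

def G(p):
    """Dimensionful Newton constant at momentum scale p."""
    return G0 / (1.0 + G0 * p**2 / gstar)

def g(p):
    """Dimensionless coupling g = G(p) * p^2."""
    return G(p) * p**2

for p in (0.1, 1.0, 10.0, 1000.0):
    print(f"p = {p:8.1f}   G(p) = {G(p):.6f}   g(p) = {g(p):.6f}")
# As p grows, G(p) -> 0 while g(p) -> gstar = 2.
```

Any interpolation with the right IR and UV limits would show the same qualitative behavior; the point is only that "G goes to zero" and "g goes to a finite g*" are the same statement seen in dimensionful and dimensionless variables.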
     
    Last edited: Jun 8, 2010
  3. Jun 8, 2010 #2

    marcus

    User Avatar
    Science Advisor
    Gold Member
    Dearly Missed

    We had a fairly long discussion about the "Asymptotic Darkness" conjecture in connection with gravity Asymptotic Safety. As I recall, it was in a 2009 thread about something Steven Weinberg said.

The key point to remember is that in AS the Newton constant G_N goes to zero in the UV, so the conditions for a black hole to form change. And the length scale you think of as the "Planck length" changes, because the Planck length depends on G.
You can define the Planck length to be equal to its value in the low-energy, large-scale limit. But then you have to be careful not to draw seemingly obvious but unjustified conclusions involving the "Planck length" in the UV regime.
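As an illustration of this point (my own sketch, not from the paper): if one defines a scale-dependent Planck length l_Pl(p) = sqrt(hbar G(p)/c^3) using a toy interpolation for the running G — the form of G(p) and the value g* = 2 below are assumptions for illustration — it shrinks well below the familiar low-energy Planck length in the UV:

```python
import math

# Toy scale-dependent "Planck length" when Newton's constant runs.
# Planck units (hbar = c = G0 = 1); the interpolating form of G(p)
# and the fixed-point value gstar = 2 are illustrative assumptions.

G0, gstar = 1.0, 2.0

def G(p):
    """Running Newton constant: G -> G0 in the IR, G -> gstar/p^2 in the UV."""
    return G0 / (1.0 + G0 * p**2 / gstar)

def planck_length(p):
    """Scale-dependent Planck length l_Pl(p) = sqrt(G(p)) in these units."""
    return math.sqrt(G(p))

# The familiar Planck length is the IR value l_Pl(0) = 1; at p = 100
# (deep UV on this toy flow) it has shrunk by a factor of ~70.
print(planck_length(0.0), planck_length(100.0))
```

So a statement like "the experiment is smaller than the Planck length" is ambiguous in the UV regime: it matters whether one means the fixed IR value or the running one.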
     
  4. Jun 8, 2010 #3

    Haelfix

    User Avatar
    Science Advisor

    "The key point to remember is that in AS the Newton constant GN goes to zero in the UV, so the conditions for a black hole to form change"

No, they don't. Physics at strong coupling might change under AS or other scenarios, but black hole formation occurs classically; it's infrared physics. That's the point of all these papers and the hoop conjecture. It doesn't matter what gravity may or may not do in the deep UV, because it's screened by purely classical effects.

For an analogy, instead of considering, say, Planck-scale particles colliding (or highly accelerated proton-proton collisions in a massive accelerator), think simply of the Earth and the Sun. Give them both enough kinetic energy, and their collision produces a closed trapped surface in the shock-wave picture. It doesn't matter that their center-of-mass energy is enormous; the scattering experiment cannot possibly probe any quantum substructure, because a horizon forms. No moon-like boulders or electrons or stringy vibration modes come flying out of that collision, because they're trapped.
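The classical logic here can be sketched in a toy numerical check (my own illustration; Planck units with c = 1 and a fixed classical G, and the region size is an assumed parameter): once the center-of-mass energy confined in a region exceeds what fits outside its own Schwarzschild radius, a horizon is mandated, regardless of any substructure.

```python
# Toy hoop-conjecture check (Planck units, c = 1, fixed classical G).
# Illustrative only: real trapped-surface formation in a collision is
# the shock-wave analysis alluded to above, not this one-liner.

G = 1.0

def schwarzschild_radius(E):
    """R_s = 2 G E for total center-of-mass energy E."""
    return 2.0 * G * E

def horizon_mandated(E, region_size):
    """Rough hoop criterion: energy E confined within a region smaller
    than its own Schwarzschild radius must form a horizon."""
    return region_size < schwarzschild_radius(E)

# With G fixed, pumping more energy into an experiment of fixed size
# eventually guarantees a horizon:
for E in (0.1, 1.0, 100.0):
    print(E, horizon_mandated(E, region_size=1.0))
```

The paper's claimed loophole lives precisely in the `schwarzschild_radius` line: if G runs to zero fast enough in the UV, that radius need not outgrow the region, so whether the horizon is "mandated" depends on the couplings near the fixed point.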
     
    Last edited: Jun 8, 2010
  5. Jun 8, 2010 #4

    marcus

    User Avatar
    Science Advisor
    Gold Member
    Dearly Missed

    Mattingly et al: "We show that argument for the mandatory formation of a black hole within the domain of an experiment fails."
     
  6. Jun 8, 2010 #5

    Haelfix

    User Avatar
    Science Advisor

Yeah, I mean this particular paper is a bit vague in the sense that they look for a loophole by modifying the hoop conjecture in the deep UV via higher-order QG corrections (nonperturbative ones). OK (maybe)! But the point is really that it doesn't matter much. The more energy you pump into a scattering experiment, the more surely some type of screen will ultimately form around your laboratory, whether you like it or not, regardless of the details of quantum gravity (otherwise we wouldn't have any black holes in the universe at all). This screen is, under any definition I know of in GR (and horizon formation is subtle business in GR), going to have something to say about the proper length as measured by observers at infinity (where the S-matrix elements live). This is fuzzy to begin with because of the uncertainty principle (we are well under the Compton wavelength of most particles that we can think of), much less the conjectured generalized uncertainty principles (where the measuring apparatus itself has to be so massive that it itself collapses).

    I like the explanations in this paper:

    arXiv:1005.3497
     
  7. Jun 8, 2010 #6

    Haelfix

    User Avatar
    Science Advisor

    Let me see if I can put it another way for interested laymen.

Take a two-particle scattering experiment. Make it the Sun (with some amount of net charge for the sake of argument) and its antiparticle (an anti-Sun).

Now turn gravity off but keep electromagnetism on, and collide them in an experiment (measuring the outgoing particles off at infinity somewhere). Most of the time you will measure high-energy annihilation photons. Other times you'll see bits and pieces of the Sun and anti-Sun, as the system will have ripped itself apart. More rarely you'll see elementary constituents, including particle-physics particles (electrons and so forth) with incredibly high momenta. In fact, the more times you do the experiment and the more energy you give the system, the further you'll end up probing not just the dynamics of electromagnetism but also QED, to better and better resolution and to arbitrarily short distance scales.

Now if you turn gravity on in this experiment, you immediately find a problem. You cease to measure shorter and shorter distance scales and instead end up creating bigger and bigger black holes (which eventually radiate away as black bodies). You end up not learning anything at all about quantum gravity, other than what you already know from classical and semiclassical physics.

I can't for the life of me see any way around this.
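This is the standard minimum-length heuristic, which can be made concrete in a toy calculation (my own sketch; Planck units with G = c = hbar = 1, and the numerical factors are only order-of-magnitude): the quantum resolution of a probe improves as 1/E, but its gravitational size grows as 2GE, so the best achievable resolution is bounded below near the Planck length, where the two limits cross.

```python
# Minimum-length heuristic behind the "bigger black holes" argument.
# Planck units (G = c = hbar = 1); the factor 2 in the horizon term is
# the Schwarzschild value, everything else is order-of-magnitude.

G = 1.0

def effective_resolution(E):
    """Best length scale a probe of energy E can resolve."""
    compton = 1.0 / E      # quantum limit: shorter wavelength needs more energy
    horizon = 2.0 * G * E  # gravitational limit: more energy, bigger hole
    return max(compton, horizon)

# Scanning energies over many decades, the resolution never improves
# past the crossover at E = 1/sqrt(2G), where Delta_x = sqrt(2G) ~ l_Planck.
best = min(effective_resolution(2.0**k) for k in range(-10, 11))
print(best)
```

Asymptotic safety attacks the `horizon` line: if G(E) falls off like g*/E^2 in the UV, the gravitational term stops growing with E, which is why the paper finds that the mandatory-black-hole conclusion no longer follows automatically.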
     
    Last edited: Jun 8, 2010
  8. Jun 9, 2010 #7
Independently of asymptotic safety: even assuming the hoop conjecture holds, it appears to me that the question of whether it is in principle possible to probe trans-Planckian scales depends on the solution to the information loss paradox. After all, the "IR" black hole states resulting from high-energy scattering are identical only if the information is lost completely.
     
  9. Jun 9, 2010 #8

    Fra

    User Avatar

I agree with this. Even though a microscopic black hole forms, in some sense we still do not know what a microscopic black hole "looks like". The conjecture that the radiation from a small microscopic black hole contains no information, just because that's what we expect from large black holes, is IMO unjustified.

I think it's actually quite natural to expect (although the details are unclear) that a black hole with a very small complexity relative to the observer simply cannot support such randomness. I find the no-information conjecture counterintuitive and unnatural there, but the same objection can't be made to a large black hole. There the opposite is true: the observer is not complex enough to possibly decode the "encryption" in the radiation.

So I think that even if a small black hole did form, we could still learn what a small black hole "looks like", because it may not look like a large black hole just "scaled down" ad absurdum.

    /Fredrik
     
  10. Jun 9, 2010 #9

    MTd2

    User Avatar
    Gold Member

Let me see if I can put it this way; tell me if I have it right. The asymptotically safe fixed point looks like a critical point "backwards": stable and attractive instead of unstable and saddle-like. But in any case it marks where different regions of the G_N × Λ phase space connect. So perhaps, approaching from different regions, you'd see a black hole forming or not, if one considers the horizon as a boundary between different states, like liquid/gas or inside/outside?
     
  11. Jun 9, 2010 #10

    Haelfix

    User Avatar
    Science Advisor

Absolutely, it's all very much related and nontrivial! Keep in mind that if information is lost, I don't know how to make sense of such a scattering experiment even in principle. The evolution is nonunitary (probabilities don't add up to one), so my one good observable no longer makes much sense.

In fact, one of the problems with Mattingly's paper is exactly what he means by a laboratory (a measurement) and an experiment in the first place. They seem to want to find a loophole and do away with microscopic black hole formation, yet somehow have an "enshrouding layer" around the laboratory which can't communicate anything to the outside world. Well, that's problematic, because then there is no obvious definition of a quantum scattering experiment. The asymptotics are bad, and so forth.
     
  12. Jun 10, 2010 #11

    Fra

    User Avatar

This is an interesting, deep question that I think a fundamental understanding has to face sooner or later: what does it mean for the constructed measure that rates possibilities not to be normalized to certainty?

This is exactly one of the questions I think reconstructing the inference formalism should answer. Apparently it arises when, as you use an inference process to infer and represent the actual state and probabilities in an experiment, the time scale of the inference process is comparable to the evolution of the state space itself (which is not timeless and static). The process of inferring statistically significant conclusions from interaction history simply takes too long relative to the evolution of the environment.

This point is missed when inferences are thought of as god-level mathematical truths and deductions as processes that somehow take place in a world not subject to physical constraints.

Even though I think that the Hawking radiation of small black holes does convey information, that doesn't mean I think it's "right" to treat information preservation as an unquestionable realist-type fact.

    /Fredrik
     
  13. Jun 10, 2010 #12
I think this implies negative entropy, in the sense that the number of available states is less than one. This also implies that the number of (asymptotic?) degrees of freedom is not well defined, or that they are negative degrees of freedom, like ghosts. I would think that this can make sense only if you have neglected some degrees of freedom, possibly non-locally correlated ones.

For example, if you consider spacetime at distances l < l_p, states may seem to have negative entropy. But since information and entropy are essentially non-local quantities, you really need to flow to the IR to resolve the state. This makes sense with a holistic type of view where things (probabilities) become well defined only when you consider the whole.

My view would be that ultra-UV physics may be ill defined in that sense. However, ultimately one still has to resolve the UV physics against the IR before information can be fully recovered.
     
  14. Jun 11, 2010 #13

    Fra

    User Avatar

Yes, it's as if the state space is smaller, OR larger, than was thought. But why is this so?

One can then think of this as "OK, then we were wrong": the real state space was really such-and-such all along. For example, if the state space expands, one could choose the largest possible state space and say this was the timeless state space all along.

But I'd claim that is not the right way to think about it; it makes no sense, because first of all there is the complexity constraint, which means that no real observer can ever encode every possible future state space, because the possibilities are actually increasing. If you ignore this, you run into ridiculous "mathematical landscapes" of possibilities that strip you of all predictability. Not to mention that such overly complex models would be uncomputable and thus lack predictive power. And I don't just mean in practice, I mean in principle, since the complexity of the computer would need to exceed the size of the universe; it's not a practical issue, it's an issue of principle.

Instead I acknowledge that it is not possible to make statements with certainty about what the state space ultimately is. There is no such thing as the best truth; we only have the "best inference". In that sense, one can have a correct and rational inference and still later have to revise the opinion (some would then be tempted to say we were "wrong").

This is analogous to the question of "false information" in the Copenhagen interpretation that was discussed in some old thread. From the point of view of physical actions, objective truths about the correctness of information are irrelevant, since a system will respond equally rationally to wrong information. And that happens all the time, and is, as I see it, part of interactions.

About the UV and IR limits of what's inferable, I think there is a natural such limit for each observer. And I agree that in equilibrium, each observer must in the infinitesimal sense always "expect" unitarity. But I think there is no unitary measure of definite/global evolutions, except in special equilibrium cases.

Similarly, there is no objective notion of entropy. There is only relative entropy, and relative entropy is more closely related to transition probabilities and action than to absolute probabilities. Note that any transition would minimize the relative entropy, or information divergence. This principle of minimum action/information divergence makes more sense, and is deeper, than the max-ent principle.

    /Fredrik
     
    Last edited: Jun 11, 2010