Hossenfelder let daylight into the entropic force room

  1. Mar 3, 2010 #1

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    It made a considerable difference to read
    http://prime-spot.de/Physics/notes6.pdf

    which gives Hossenfelder's explanation of Verlinde's paper on Newtonian gravity being an entropic force.

    It took away a vague sense of mystery. The experience was like opening the window to full daylight. The notes seem to have been posted today 3 March, or else I just didn't see them until today.

    She summarized her comments on her blog, in a few words without the math.
    http://backreaction.blogspot.com/2010/03/gravity-is-entropy-is-gravity-is.html

    ===sample excerpts from blog===
    Here is a short summary: With a suitable definition of quantities, describing gravity by a Newtonian potential or describing it as an entropic force in terms of an "entropy," "temperature" and "holographic screens" is equivalent. One can do it back and forth. The direction Verlinde has shown in his paper is the more difficult and more surprising one. That it works both ways relies on the particularly nice properties that harmonic functions have...

    Some assumptions made in the paper are actually not necessary. For example,...

    The biggest problem is that Verlinde's argument to show ...
    ... It does not seem entirely impossible to actually do this derivation, but there are some gaps in his argument.

    In any case, let us consider for a moment these gaps can be filled in. Then the interesting aspect clearly is not the equivalence. The interesting aspect is to consider the thermodynamical description of gravity would continue to hold where we cannot use classical gravity, that it might provide a bridge to a statistical mechanics description of a possibly underlying more fundamental theory...
    ==endquote==

    The notes do not give intuition about how entropic forces work. Verlinde already did a good job with the intuition, both in his paper and in his two blog posts at the Amsterdam faculty website. More intuition was not what we needed. What was needed was a critical, hard-headed mathematical derivation making sure it all worked. To the extent possible this is what H. provides---some gaps are pointed out which presumably are minor and can be filled in.
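
    For readers who just want the bare bones, the core of Verlinde's Newtonian argument can be sketched in a line (this is only the standard chain of identifications from his paper, not anything from Hossenfelder's notes): assign a particle of mass m a Compton wavelength from a holographic screen the entropy change ΔS = 2π k_B (mc/ℏ) Δx, and the screen the Unruh temperature k_B T = ℏa/(2πc); the entropic-force postulate F Δx = T ΔS then gives back F = ma.

    [tex]F\,\Delta x = T\,\Delta S,\qquad \Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x,\qquad k_B T = \frac{\hbar a}{2\pi c}\;\;\Longrightarrow\;\; F = ma[/tex]

    What Hossenfelder's notes check carefully is that the analogous bookkeeping with the Newtonian potential really can be run in both directions.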
     
  3. Mar 3, 2010 #2

    MTd2

    Gold Member

    Are these your words or Verlinde's?

    These are the words of an http://en.wikipedia.org/wiki/Epiphany_(feeling). Reading Verlinde's blog, it seems he really went through a religious conversion by seeing an apparition of God. He almost becomes dismissive of string theory.
     
    Last edited by a moderator: May 4, 2017
  4. Mar 3, 2010 #3

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    :rofl:

    If you read Hossenfelder's notes you will understand. It is completely unreligious daylight.
    In her paper she is cool, sober, critical, and clear. She subtracts whatever sense of epiphany or mystification you might have thought was originally in the Verlinde paper.

    Do you understand what I mean by "take away"? Her paper took away the mystery. It demystified.
     
    Last edited: Mar 3, 2010
  5. Mar 3, 2010 #4

    MTd2

    Gold Member

    By religion I meant String Theory as a cult. The mystical sensation is the realization of what an individual considers a deep truth. It doesn't mean something irrational is being proposed. To tell you the truth, I just read E. Verlinde's blog today, for the first time, after thinking about his ideas for some time.

    I was scared; the guy seemed to report an epiphany. He looks ready to change his whole research career to the pursuit of this idea. And I mean, he didn't report it as "oh, that is a good idea", but simply as the truth.

    But at least where you quote Sabine, I cannot agree with her. I still do not see that as the fundamental point. The point that I think is closer to what he has in mind is more like this: the universe is permeated by lots of mini computing chips, and gravity is the consequence of overloading those chips. That is, the chips have to get closer to process more information. But when chips get closer, they bend the space, and so information that would have been processed far away ends up being processed closer to other clusters of chips. And when it gets closer, that information entangles with other information in the cluster of chips, requiring even more chips to come to the cluster. So, all this mess creates gravity.
     
  6. Mar 4, 2010 #5
    The holographic principle suggests the universe is built of pure information. Space is a secondary effect.
    What do you think about this:
    A particle with a rest mass oscillates. The information about its oscillation is distributed with a probability inversely proportional to the distance. We observe an interference pattern if this information interferes with the information of another particle.
    We may turn it around and count just the points of the interference and call that a distance.

    I claim the distance and all of space are made of the interference pattern of the non-local information. We have an information background, and its basis is a Compton wavelength (mainly that of the protons).

    Therefore I would fill the gap between gravity and thermodynamics with an information background.
    The interaction between each piece of non-local information creates a time dilation (= Planck time) and space curvature via a linear length contraction (= Planck length).

    Important equations:
    A. Time dilation:
    [tex]\frac{T_p}{T(x)} \cdot \frac{T_p}{T(y)} = -a\,\frac{F_g}{F_e}[/tex]
    [tex]\frac{l_p}{L_{(1)}} \cdot \frac{l_p}{L_{(2)}} = -a\,\frac{F_g}{F_e}[/tex]
    B. Holographic principle:
    [tex]\frac{M}{m}\cdot\frac{2\pi R}{L_{(1)}} = \frac{R c^2}{2 G m}\cdot\frac{2\pi R}{h/(mc)} = \frac{\pi R^2}{hG/c^3} = \frac{\pi R^2}{l_p^2} = \frac{A}{4 l_p^2}[/tex]
    where L(1), L(2) are Compton wavelengths and l_p is the Planck length.

    more in my simple website:
    http://www.cramerti.home.pl/
     
    Last edited by a moderator: May 4, 2017
  7. Mar 4, 2010 #6
    Hi Marcus,

    I'm so flattered by your words I made the effort digging out my PF password :blushing: I always feel guilty I'm not contributing more over here, but it's like I don't seem to ever find enough time, sorry about that.

    Just some words: No, my notes were not online before, you found them as soon as they were available. I had only circulated them to some friends previously. Since it looks as if people find them useful, I just uploaded them to the arxiv (gr-qc), where they should appear Friday. It's exactly the same version as the one that's mentioned on my blog though.

    All the math I used is actually to be found in Jackson's book on electrodynamics, which was very useful. I guess calling it a demystification is quite appropriate. I initially had a hard time making sense out of Verlinde's paper. Not that I couldn't follow the maths, just that it didn't seem to make much sense. It fell into place when I looked at the variation of the field energy, which is what you find in my notes.
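
    Very schematically, the Newtonian part of the argument turns on little more than Gauss's law evaluated on equipotential surfaces. Take a closed equipotential screen around a mass M, assign it the Unruh-like temperature and one bit per Planck area as in Verlinde's paper (this is only the standard screen bookkeeping, not the derivation in my notes), and demand equipartition:

    [tex]k_B T = \frac{\hbar}{2\pi c}\,\partial_n \phi,\qquad dN = \frac{c^3}{G\hbar}\,dA,\qquad \tfrac{1}{2}\oint k_B T\,dN = \frac{c^2}{4\pi G}\oint \partial_n\phi\,dA = M c^2[/tex]

    Because the potential is harmonic outside the sources, the surface integral is the same on every screen enclosing M, and the identification can be run in either direction.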

    Another clarification is that I'm not at all sure the gaps in the GR case are minor and can be filled in. But it also doesn't seem impossible to me.

    MTd2,

    I'm not sure you actually disagree with me. I don't know what to make of your sentence on chips etc, but I haven't actually commented on what's fundamental and what isn't. I've simply pointed out what it is you really need to arrive at the conclusion Verlinde has put forward in his paper. It might very well be that the information theory approach to space-time eventually turns out to be more fundamental (it seems to be presently fashionable), but the thing is that to arrive at Verlinde's conclusion you don't need it (and consequently Verlinde's derivation doesn't show there really is anything like bits on screens etc). That's not to say it's wrong, please note the difference. I believe I wrote somewhere in my notes that indeed the appeal of Verlinde's approach is that it might provide the bridge to a more fundamental theory (via stat mech to the thermodynamics then). Best,

    Sabine
     
  8. Mar 4, 2010 #7
    I quite agree with the title of this thread. Sabine Hossenfelder has done a great job of
    demystifying Verlinde’s paper, which seems to have baffled several people, including me. Thanks to Sabine for writing it and to Marcus for pointing to it so promptly. Although Sabine showed that an entropic force description and a Newtonian source-and-field description of gravity are technically equivalent, there is one aspect of gravity that does seem to me intrinsically entropic. Whether or not discussing it gets you anywhere, I can’t say.

    Before I go on, I admit to not thoroughly understanding entropy (or information for that matter), either in defining a thermodynamic state function like the Helmholtz free energy or as the quantification of possible quantum states. I agree with a remark of Penrose's here: I think of entropy as a remarkably deep notion that accurately "quantifies ignorance", which surprisingly turns out to provide a new way of describing gravity, as well as defining the second law of thermodynamics and the arrow of time!

    Here goes: consider a spherically symmetric Newtonian situation of an isolated point test mass and its accompanying gravitational field. Forget all other forces of nature. In this simple system the entropy is zero, since Newton's law tells us the field everywhere, given the position of the mass and its magnitude. In the mass there are no internal degrees of freedom. Describing this system fully without bringing in the concept of entropy seems both practical and simple. All information is available.

    Next, think of the test mass as macroscopic but still spherically symmetric --- say Earth in a cosmological context. Here the concept of entropy can creep in: the information you can measure on its surface with a gravimeter is incomplete for the system as a whole, because the Earth is made up of shells (crust, mantle and core). The mass is radially distributed and one does the measuring outside the mass --- by Gauss' law the radial distribution is now a "degree of freedom" of this macroscopic but spherically symmetric system. It doesn't affect g, the measured acceleration due to the Earth's gravity, outside the mass.

    The entropy (thinking of it as quantified ignorance) that can be measured on an outside spherical surface or "screen” isn’t zero. And if the radius of the outside surface on which g is measured gets bigger, so does the "on screen" entropy --- there is more room to distribute the inside spherical shells --- all the way out to a spherical event horizon in an accelerating model universe. Here the entropy would attain a maximum value as the information that could be measured reaches a limit.

    This is the practical but muddling aspect of gravity that seems to me to be intrinsically entropic. Again, I can’t say if such a similarity with entropy on holographic screens gets you anywhere, but perhaps it’s worth pointing out the obvious about Gauss' law and fields in this context.
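
    To make the invisibility of the radial layering concrete, here is a little toy script (entirely illustrative; the two-layer profile and the numbers are made up) checking that two different radial density distributions with the same total mass produce the same g outside the body:

[code]
# Toy check of the shell-theorem / Gauss's-law point above: two different
# radial density profiles with the same total mass give identical
# gravitational acceleration outside the body.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
R = 6.371e6            # outer radius (roughly Earth's), m
M = 5.972e24           # total mass, kg

def g_outside(r_obs, density, n_shells=20000):
    """Acceleration at r_obs > R from thin spherical shells of the given density(r)."""
    r = np.linspace(0.0, R, n_shells + 1)
    r_mid = 0.5 * (r[1:] + r[:-1])
    dm = density(r_mid) * 4.0 * np.pi * r_mid**2 * np.diff(r)
    return G * dm.sum() / r_obs**2      # every enclosed shell acts like a point mass

rho_uniform = lambda r: np.full_like(r, M / (4.0 / 3.0 * np.pi * R**3))

def rho_core_mantle(r):
    # hypothetical two-layer profile: dense core, lighter mantle, same total mass M
    rho_core = 2.0 * M / (4.0 / 3.0 * np.pi * R**3)
    m_core = rho_core * 4.0 / 3.0 * np.pi * (0.5 * R)**3
    rho_mantle = (M - m_core) / (4.0 / 3.0 * np.pi * (R**3 - (0.5 * R)**3))
    return np.where(r < 0.5 * R, rho_core, rho_mantle)

r_obs = 1.5 * R
print(g_outside(r_obs, rho_uniform))      # ~4.36 m/s^2
print(g_outside(r_obs, rho_core_mantle))  # same value: the exterior field is blind to the layering
[/code]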
     
  9. Mar 4, 2010 #8

    MTd2

    Gold Member

    Hi Bee,

    Ok, so you did not comment on a fundamental point, just his paper. I really do not disagree with you, then. So, I actually agree with you. If you don't need a holographic screen, the example he used is not matched by his ideas on his blog.

    As for the "chips", I guess something close to them is the operators living on the vertex of spin foams. The higher the entropy on a node, the more massive that is.
     
  10. Mar 4, 2010 #9

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    I'm happy with the thread, and don't have anything substantive to add right now. Thanks to everybody for their comments!

    Anybody who hasn't done so already should check out the 9 or 10 abstracts of papers to be delivered at the Stockholm conference in July on
    Experimental Search for Quantum Gravity

    http://th.physik.uni-frankfurt.de/~hossi/ESQG10/abstracts.html

    It's unusual for a conference to have abstracts posted so early.
    It's a tough research area.
    I hope Julien Grain, a young observational-test-for-QG researcher, is recruited to give a talk, or that guy Aurelien (I forget his surname). But since I don't know either of them I have no particularly good reason to hope that.
    It's vital for people to get working on empirical QG, in spite of or because of its being extremely challenging. So we should watch this kind of tentative initiative and offer moral support if/where possible.
     
    Last edited: Mar 4, 2010
  11. Mar 4, 2010 #10

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    I think one of the points made in the notes was that what is primarily interesting is not the equivalence so much as the possibility that the domains of applicability don't exactly coincide.

    So the thermodynamic approach might apply and tell us things where the conventional gravity approach fails. Like at "singularities"---the failure points of classical theory.

    Might we get a thermodynamics of whatever is going on at singularities?

    Here is a naive and vague for-instance of what that could look like. First of all recall this paper:
    http://arxiv.org/abs/0905.4916

    Basically it identifies the pit of a black hole with an SU(2) intertwiner. There is a Hilbert space of possible intertwiners. An intertwiner can be imagined as a node in a spin network. In the case of a stellar-mass black hole, it is a node with a huge number of legs coming out of it.

    Is there a thermodynamics of intertwiners? Do they self-transform, spontaneously split up and recombine? Is there an ergodic theory about intertwiners? Well I suppose there is a remote possibility of some such shenanigans going on at the pit of a black hole. Or something, any way, that the thermodynamical approach could help us get a grip on.

    Black holes in full quantum gravity
    Kirill Krasnov, Carlo Rovelli
    (Submitted on 29 May 2009)
    "Quantum black holes have been studied extensively in quantum gravity and string theory, using various semiclassical or background dependent approaches. We explore the possibility of studying black holes in the full non-perturbative quantum theory, without recurring to semiclassical considerations, and in the context of loop quantum gravity. We propose a definition of a quantum black hole as the collection of the quantum degrees of freedom that do not influence observables at infinity. From this definition, it follows that for an observer at infinity a black hole is described by an SU(2) intertwining operator. The dimension of the Hilbert space of such intertwiners grows exponentially with the horizon area. These considerations shed some light on the physical nature of the microstates contributing to the black hole entropy. In particular, it can be seen that the microstates being counted for the entropy have the interpretation of describing different horizon shapes. The space of black hole microstates described here is related to the one arrived at recently by Engle, Noui and Perez, and sometime ago by Smolin, but obtained here directly within the full quantum theory."

    Then there is the connection with Chern-Simons theory. It isn't mentioned in the abstract here, but if you look into the actual Krasnov-Rovelli paper you see that CS plays a major role: e.g. the "space of black hole microstates" description that Engle, Noui and Perez recently arrived at.
     
  12. Mar 5, 2010 #11

    Fra


    I also like this concluding remark of Sabine's paper:

    "Clearly, the equivalence of two formulations for the gravitational interaction is not the interesting aspect. As long as both descriptions are the same claiming “the redshift must be seen as a consequence of the entropy gradient and not the other way around” [1] is merely words. The interesting aspect of the reformulation would be to make use of it in regimes where the equivalence might no longer hold. The thermodynamical description might provide a bridge to a statistical mechanics description of a possibly underlying theory."

    But a similar objection can be raised towards arguments like "gravity is a consequence of the geometry of spacetime". It's just a given relationship cast in different frameworks: geometry, statistics, probability.

    I think the subjective difference is that different people have different sources of intuition, and in the end real predictions are what matters, not the mathematical framework they are cast in. Any black box producing the right numbers is as good as any other. The good boxes will live, the bad boxes will not.

    But I happen to have a strong personal preference for the information-theoretic castings, so I think there are other reasons beyond equivalences to consider the idea of information being discrete. Information and information processing are also closely related to statistical inference, reasoning upon incomplete information, and science itself, so I find it more natural than many other pictures.

    I think the remaining challenge, which Verlinde also acknowledges, is to understand the holographic principle itself, maybe from even more information-theoretic first principles. If this can be done, Verlinde's link may be a piece in the puzzle, and may provide a preferred direction of reasoning.

    /Fredrik
     
  13. Mar 5, 2010 #12

    Fra


    IMHO, one of the more important things about entropy (as a measure of missing information) that is often ignored or lost in discussions is that it does not make sense to think of it as an objective measure; it can only be constructed by an inside construction - unless, of course, you subscribe to a form of realism. Which many do. But I find that highly objectionable.

    In the regular construction, a set of microstates and an equiprobability hypothesis are part of the background baggage. Only if everyone is in agreement about these microstates and their equiprobability properties is the measure objective. But the whole point, if we borrow some thinking from Rovelli, is that the only physical way to make such an assessment is for the parties to communicate/interact. Now, an a priori assumption of agreement here is a premature assumption of equilibrium.

    Rovelli then somehow tried to suggest that there is an objective description of the rules for this interaction, and it's called quantum mechanics. But similarly, even that assessment can only be made by an inside construction.

    These are the problems I think a modern intrinsic information theory needs to solve. It makes no sense to think that old-style entropic reasoning (where entropy is an objective information measure) is going to solve anything.

    I think we must note that most entropy measures contain implicit background information about the microstructure. This has the function of a "background structure". I don't think this is acceptable.

    I.e., we also need a background-independent formulation of information theory! This is by far a generalisation of the meaning of BI as in "background spacetime metric", which is really only a special case.

    IMHO, Verlinde is nowhere near that solution in his paper. But it seems the quest for it is possibly implicit where he acknowledges that the big question is the origin of the holographic principle, formulated independently of geometry, just in terms of communication channels.

    /Fredrik
     
  14. Mar 5, 2010 #13
    Fra: I'm not clear on what you mean by "inside". Inside what? Inside a holographic screen (whatever that is)? Or, in the ultra-simple classical case I talked about, inside a surface on which g is measured? Or just generally inside some closed system?

    And how do you construct entropy by a construction? By a "construction" do you mean a procedure of counting available quantum states, or of dividing a small bit of energy added to a closed system by the temperature of that system? Or some other defined procedure?

    I can't comment on information theory, which is hard for me to understand, except to say that entropy looks to me like a kind of complement of information (in the same sense that binary one is the complement of binary zero). No need for a background here, is there?
     
  15. Mar 5, 2010 #14

    Demystifier

    Science Advisor

    That's not fair; she did my (see my name) job. And she did it very well. :cry:
     
  16. Mar 5, 2010 #15

    MTd2

    Gold Member

    That's not true. She took away the mystery of the original paper, but his idea, as described in his blog, is far, very far from being clarified.
     
  17. Mar 5, 2010 #16

    Demystifier

    Science Advisor

    I agree that Verlinde's original deep idea is still not clear. Yet, the Hossi analysis suggests that such a deep idea may not be needed at all.

    Another (much older) demystification paper on mysterious and deep relation between gravity (black holes), thermodynamics and entropy that I like is
    http://xxx.lanl.gov/abs/gr-qc/9712016
     
  18. Mar 5, 2010 #17

    MTd2

    Gold Member

    I love Matt Visser! :biggrin: He is one of the guys sober enough not to believe in black holes, other than as a very useful approximation for astronomical studies. Perhaps that was one of the papers that led him to think that way.

    BTW, yes, she points out things that are not needed, but I don't think the Newtonian, or even the GR, case is representative of that. I sense that what is coming to him is holography in the sense defined by Bousso.
     
  19. Mar 5, 2010 #18

    MTd2

    Gold Member

    This blog has in-depth coverage of Verlinde's paper and follow-ups:

    http://blog.tryggth.com/

    It has a very interesting blog post that points to a paper by Ted Jacobson, from 2003, discussing the relation between the thermodynamic picture of GR and the Bousso holographic bound:

    http://blog.tryggth.com/?p=99
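
    For orientation, the schematic core of Jacobson's thermodynamic route to GR (as it is usually summarized, not the specifics of that 2003 paper) is to impose the Clausius relation on every local Rindler horizon, with entropy proportional to area and the Unruh temperature; the Einstein equations then follow as an equation of state:

    [tex]\delta Q = T\,\delta S,\qquad S = \frac{k_B A}{4 l_p^2},\qquad k_B T = \frac{\hbar \kappa}{2\pi c}\;\;\Longrightarrow\;\; G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}[/tex]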
     
    Last edited by a moderator: May 4, 2017
  20. Mar 5, 2010 #19

    Fra



    Others can disagree, but by inside construction I mean that the observer (or whatever label we should have for a system that is interacting with its environment and encoding information about it) must be able to construct the entropy measure given only the microstructure that is distinguishable from the point of view of this observer.

    In particular:

    - the observer cannot make use of imaginary ensembles or fictive infinite histories. Only finite, coded histories that the observer can distinguish as part of its own microstructure are allowed.

    - this, for example, generally means that the continuum doesn't qualify as an inside construction. It's only an inside construction in the large-complexity limit. The way I see it, the difference between "in principle" and "for all practical purposes" is quite large here. The problem is that the continuum is uncountable. So from the information point of view, extreme care must be taken before one starts to talk about an entire continuum of distinguishable states.

    In principle, a continuum model could be just a redundant smoothed form of a discrete model, but then the tracking of limiting procedures is absolutely essential. This is something that is typically abused IMO.

    That really disturbs me and is one reason why I'm seeking a reconstruction.

    I don't pretend to have any answers yet, but that's no excuse for me to avoid the real questions. So far the intrinsic information measures I have under consideration are a kind of relative measure on distinguishable changes of the observer's own state space (which of course "mirrors" the environment). This measure then influences the observer's action. I'm not sure we need to call it entropy, but my toy measures so far are related to, but not the same as, information divergence (often called relative entropy). This also comes with an effective arrow of time. The arrow of time is not distinguishable at equilibrium. But there is no global notion of equilibrium. As in complex systems, effective local equilibria may be stable because there is no communication between them.

    This is why the concept of entropy measures is deeply mixed up with action measures, and both are relative.

    The simplest way I can describe the construction I envision is that each observer has a set of microstructures with distinguishable microstates. Each of them represents coded information about expectations of the future. Each microstructure has two numbers: the size of the event index and the depth of the history.

    Relations between the microstructures in the set represent evolved data compression algorithms. The Fourier transform is one example.

    This all implies that each such system has an "expected evolution" that is consistent with its own action. All this coded information is of course updated as the system interacts with the unknown environment, but the beauty of this approach is that we have a natural inertia concept, because each piece of feedback is also given a weight that is compared with the prior opinion. So a high-complexity system possesses a high inertia.

    This further allows one to make predictions when two systems of different complexity interact; in particular, one system can conquer complexity from the other system by increasingly better predictions of hidden degrees of freedom in the environment, so eventually we also have here a possible key to understanding the generation of mass.

    Temperature and areas of event horizons have analogies too. The area of the event horizon is to me simply the number of distinguishable events in the event space. All we have is cardinality, with no reference to "dimensions". Temperature can be interpreted loosely as the average number of supporting coded events in the history, per microstate. A microstate that has no support will collapse (this is how I solve the zero-probability thing). That however doesn't mean it can't happen; it just means it's not expected to happen, and the difference lies in the ACTION of the system.

    A great source of loose analogies is again economic systems and gaming - expectations. I find that area of inspiration far better than any mechanical or geometric/manifold analogies.


    I'm not sure if it's clear, but what I advocate is not applying old-style stat mech or Shannon-style information theory. I am also saying that I think we probably need to improve the choice of mathematical information theory to apply. We need an intrinsic formulation of it.

    If you don't see the "background" in regular statistics and information theory, then I think my points will remain unclear. In entropic reasoning, one usually has the problem of choosing a prior. But it's worse than that, because the space of possible priors itself implicitly contains information. Now, if you do not have that information - what do you do?
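
    As a toy illustration of that prior-dependence (just standard relative entropy, with made-up numbers; nothing specific to my own construction): the amount of "missing information" you assign to the very same observed frequencies depends entirely on the prior, i.e. on the assumed microstructure.

[code]
# Toy illustration: the "surprise" (relative entropy) assigned to the same
# observed frequencies depends on the assumed prior, i.e. on the background
# microstructure the observer brings along.
import numpy as np

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p||q) in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

observed = [0.70, 0.20, 0.10]            # frequencies from some finite history of events

prior_uniform = [1/3, 1/3, 1/3]          # equiprobability hypothesis
prior_skewed  = [0.60, 0.30, 0.10]       # a different assumed microstructure

print(relative_entropy(observed, prior_uniform))  # ~0.43 bits
print(relative_entropy(observed, prior_skewed))   # ~0.04 bits: same data, different measure
[/code]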

    The real inside formulation is still missing IMO. I think we need that to see the real beauty of the entropy arguments in physics.

    /Fredrik
     
  21. Mar 5, 2010 #20

    Demystifier

    Science Advisor

    You mean this?
    http://xxx.lanl.gov/abs/0902.0346
     