
Insights Renormalisation Made Easy - Comments

  1. May 9, 2015 #1

    bhobba

    Science Advisor
    Gold Member

  3. May 9, 2015 #2
  4. May 10, 2015 #3

    Drakkith


    Staff: Mentor

    Wow, that's all renormalization is? Just saying, "Let's not use our theory at absurdly high energies"?
     
  5. May 10, 2015 #4

    bhobba

    Science Advisor
    Gold Member

    Hi Guys

    Thanks so much for your likes - it's appreciated.

    One thing I wanted to mention, but left out to keep the Insight short, is why it took so long to sort out when the solution is actually quite straightforward.

    To see the issue: we defined λr in a certain way, making it a function of λ and the cutoff. To second order, inverting that function gives λ = λr + aλr^2 for some coefficient a. Then

    λ' = λ + (f(K) + C log(Λ^2))λ^2 = λr + aλr^2 + (f(K) + C log(Λ^2))λr^2 to second order.

    Now since λr = λ'(U), we have λr = λr + aλr^2 + (f(U) + C log(Λ^2))λr^2 to second order, so

    a = -(f(U) + C log(Λ^2)).

    Since λ = λr + aλr^2, this means that, to second order at least, λ secretly depends on the cutoff, and it blows up as the cutoff goes to infinity. Normally when perturbation theory is used you want the quantity you perturb in to be a lot less than one; if it isn't, the higher orders do not get progressively smaller and the expansion will likely fail - you have made a lousy choice of perturbation variable. At the energies we normally access this corresponds to a cutoff that's not large, and you get a λ that's small, so you think it's OK to perturb in it. But, as we have seen, it secretly depends on the cutoff - in fact it blows up with the cutoff. That makes it a downright lousy choice of expansion parameter. Not understanding this is what caused all these problems and why it took so long to sort out.
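    To see the drift numerically, here is a minimal sketch of that second-order relation in Python. The values of C, f and U below are made up purely for illustration (they are not taken from any real theory); the only point is the log(Λ^2) behaviour.

    ```python
    import numpy as np

    # Second-order toy of the relation above: lambda = lambda_r + a*lambda_r**2 with
    # a = -(f(U) + C*log(Lambda**2)).  C, f and U are made-up placeholders - the only
    # point is how the bare coupling drifts with log(Lambda^2) while lambda_r is fixed.
    C = 1.0 / (16 * np.pi**2)           # hypothetical loop coefficient
    f = lambda K: -C * np.log(K**2)     # hypothetical finite part of the loop
    U = 1.0                             # reference momentum where lambda_r is measured
    lam_r = 0.1                         # the small coupling we actually measure at U

    for Lam in (1e1, 1e3, 1e6, 1e12):
        a = -(f(U) + C * np.log(Lam**2))
        lam_bare = lam_r + a * lam_r**2
        print(f"cutoff = {Lam:8.0e}   a = {a:+.3f}   bare coupling = {lam_bare:+.5f}")
    # lambda_r stays 0.1 by construction; the bare coupling keeps drifting as the
    # cutoff is pushed up, which is the 'secret' cutoff dependence described above.
    ```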

    Thanks
    Bill
     
  6. May 10, 2015 #5

    bhobba

    Science Advisor
    Gold Member

    Yes.

    Hopefully the post I did above helps explain why it remained a mystery for so long.

    Stuff we use in our equations, such as the electron mass and charge, secretly depends on the cut-off. We did experiments at low energy and obtained small values which, when you look at the equations, in effect correspond to a low cut-off - otherwise you would get large values. This fooled people for yonks.

    The way it works, in QED for example, is by means of what's called counter-terms:
    http://isites.harvard.edu/fs/docs/icb.topic1146665.files/III-5-RenormalizedPerturbationTheory.pdf

    You start out with a Lagrangian expressed in terms of the electron mass and charge. But we know those values actually depend on the cut-off. We want our equations written in terms of what we actually measure, which pretty much means measured at low energies - the renormalised mass and charge. We rewrite our Lagrangian in terms of those, and you end up with equations in those quantities plus what are called counter-terms, which are cut-off dependent. You do your calculations and of course they still blow up. But now you have these counter-terms that can be adjusted to cancel the divergence. Basically this is subtracting equation (2) from (1) in my paper: the cut-off dependent quantities cancel and you get finite answers.
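    A minimal symbolic sketch of that subtraction, with placeholder symbols standing in for the quantities in the Insight (this is bookkeeping only, not an actual QED computation):

    ```python
    import sympy as sp

    # Placeholder symbols standing in for the quantities in the Insight: lam is the
    # coupling, f_K and f_U the finite parts at momenta K and U, Lam the cutoff.
    lam, f_K, f_U = sp.symbols('lambda f_K f_U')
    C, Lam = sp.symbols('C Lambda', positive=True)

    amp_K = lam + (f_K + C * sp.log(Lam**2)) * lam**2   # "equation (1)", at momentum K
    amp_U = lam + (f_U + C * sp.log(Lam**2)) * lam**2   # "equation (2)", at the reference U

    # The subtraction removes every trace of the cutoff:
    print(sp.simplify(amp_K - amp_U))                    # -> lambda**2*(f_K - f_U)
    ```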

    Tricky - but neat.

    Thanks
    Bill
     
    Last edited: May 10, 2015
  7. May 10, 2015 #6
    Can you explain more why the improper integral over momentum blows up?
     
  8. May 10, 2015 #7

    bhobba

    Science Advisor
    Gold Member

    You have to read the reference I gave - it's simply the result of calculating the equation.

    Very briefly, if you have a look at the resulting equation you can simplify it to M(K) = iλ + iλ^2 f(K) + 1/2 λ^2 ∫d^4k 1/k^4, where k is the 4-momentum and the integral runs over all of it (it's done by breaking the integral into two parts - one for very large k and one for k below that). That's a tricky integral to do, but after a lot of mucking about it reduces to i π^2 ∫ dk^2/k^2 in the radial variable. Cutting it off at Λ gives i π^2 log(Λ^2), which diverges as Λ → ∞.
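    For anyone who wants to see the logarithmic growth without slogging through the four-dimensional integral, here is a small numerical check of the radial form after the angular integration - assuming the Euclidean (Wick-rotated) version, an arbitrary infrared reference scale μ = 1, and with the overall factor of i dropped.

    ```python
    import numpy as np

    # Radial form of the divergent piece after the angular integration (the unit
    # 3-sphere has "area" 2*pi^2, so d^4k/k^4 -> 2*pi^2 dk/k in Euclidean space).
    # mu = 1 is an arbitrary infrared reference scale; the point is only the
    # log(Lambda^2) growth of the cut-off integral.
    mu = 1.0
    for Lam in (1e2, 1e4, 1e8):
        k = np.logspace(np.log10(mu), np.log10(Lam), 200_001)
        integrand = 2 * np.pi**2 / k
        numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))  # trapezoid rule
        exact = np.pi**2 * np.log(Lam**2 / mu**2)
        print(f"Lambda = {Lam:6.0e}:  numeric = {numeric:9.3f}   pi^2*log(Lambda^2/mu^2) = {exact:9.3f}")
    ```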

    But as far as understanding renormalisation is concerned, it's not germane. You can slog through the detail, but it won't illuminate anything.

    Thanks
    Bill
     
    Last edited: May 10, 2015
  9. May 11, 2015 #8

    stevendaryl

    Staff Emeritus
    Science Advisor

    There are a few things that are a little mysterious about it, still. So if you assume that you understand the low-momentum behavior of a system, but that its high-momentum behavior is unknown, it makes sense not to integrate over all momenta. But why is imposing a cut-off the right way to take into account the unknown high-energy behavior?

    The second thing that's a little mysterious is the relationship between renormalization and the use of "dressed" propagators. It's been a while since I studied this stuff (a LONG while), but as I remember it, it goes something like this:

    You start describing some process (such as pair production, or whatever) using Feynman diagrams drawn using "bare" masses and coupling constants. You get loops that would produce infinities if you integrated over all momenta. Then you renormalize, expressing things in terms of the renormalized (measured) masses and coupling constants. This somehow corresponds to a similar set of Feynman diagrams, except that

    1. The propagator lines are interpreted as "dressed" or renormalized propagators
    2. You leave out the loops that have no external legs (the ones that would give infinite answers when integrating over all momenta)
    The first part is sort of by definition: The renormalization program is all about rewriting amplitudes in terms of observed masses and coupling constants. But the second is a little mysterious. In general, if we have two power-series:

    [itex]A = \sum_n A_n \lambda_0^n[/itex]
    [itex]\lambda = \sum_n L_n \lambda_0^n[/itex]

    we can rewrite [itex]A[/itex] in terms of [itex]\lambda[/itex] instead of [itex]\lambda_0[/itex] to get something like:

    [itex]A = \sum_n B_n \lambda^n[/itex]

    For a general power series, you wouldn't expect the series in terms of [itex]B_n[/itex] to be anything like the series in terms of [itex]A_n[/itex]. But for QFT, it seems that they are basically the same, except that the [itex]B_n[/itex] series skips over the divergent terms.

    I'm guessing that the fact that the renormalized Feynman diagrams look so much like the unrenormalized ones is a special feature of propagators, rather than power series in general.
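    The purely algebraic part of that re-expansion - eliminating [itex]\lambda_0[/itex] between the two series - can be checked with a small sympy sketch. The coefficients below are abstract symbols, nothing QFT-specific, and the inversion is only carried to second order, so only the first two terms of the result are meaningful.

    ```python
    import sympy as sp

    # Toy re-expansion: A and lambda are both power series in the bare coupling
    # lambda0; eliminate lambda0 to get A as a series in lambda.  A1, A2, L1, L2
    # are abstract coefficients.
    lam0, lam = sp.symbols('lambda0 lambda')
    A1, A2, L1, L2 = sp.symbols('A1 A2 L1 L2')

    A_series   = A1 * lam0 + A2 * lam0**2        # A      = sum_n A_n lambda0^n
    lam_series = L1 * lam0 + L2 * lam0**2        # lambda = sum_n L_n lambda0^n

    # Invert lambda(lambda0) to second order: lambda0 = lambda/L1 - (L2/L1**3)*lambda**2 + ...
    lam0_of_lam = lam / L1 - (L2 / L1**3) * lam**2

    # Check the inversion: substituting back gives lambda plus third/fourth-order remainders.
    print(sp.expand(lam_series.subs(lam0, lam0_of_lam)))

    A_in_lam = sp.expand(A_series.subs(lam0, lam0_of_lam))
    print("B1 =", sp.simplify(A_in_lam.coeff(lam, 1)))   # -> A1/L1
    print("B2 =", sp.simplify(A_in_lam.coeff(lam, 2)))   # -> A2/L1**2 - A1*L2/L1**3
    ```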
     
  10. May 11, 2015 #9

    bhobba

    Science Advisor
    Gold Member

    I think the answer lies in the condensed matter physics these ideas sprang from. I don't know the details, but the claim I have read is that pretty much any theory will look like that in the low-energy limit.

    Certainly it's the case here - to second order we have:

    λ' = λr + λr^2 f(K) - λr^2 f(U) = λr + λr^2 f(K) + λr^2 C log(Λ^2) for some cut-off Λ.

    It's the same form as the un-renormalised equation. This is the self-similarity you hear about, which is said to be what low-energy equations must look like.

    Wilson's view where the cut-off is taken seriously looks at it differently.

    If you look at the bare Lagrangian, it's simply taken as an equation valid at some cut-off - we just don't know the cut-off. You write it in terms of some renormalised parameters by means of counter-terms where the cut-off is explicit, i.e. the counter-terms are cut-off dependent. You then shuffle these counter-terms around to try to get rid of the cut-off - similar to what I did.

    I am not that conversant with the detailed calculations of this method, but I think it's along the lines of the following, based on the earlier example.

    I will be looking at the second-order equations. First let λr be some function of λ, so λ = λr + aλr^2 for some a, i.e. λ = (1 + aλr)λr, where aλr is the first-order counter-term in that approach (we are working only to second order).

    You substitute it into your equation to get the renormalised equation with the counter term:

    λ' = λr + (f(K) + C log(Λ^2) + a) λr^2.

    Now we want the cut-off term to go away, so we define a = a' - C log(Λ^2) and get

    λ' = λr + (f(K) + a') λr^2.

    We then apply the definition of λr to determine a': namely λr = λ'(U), so a' = -f(U).

    My reading of the modern way of looking at renormalisation using counter-terms is that it's along the lines of the above.
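    The same second-order bookkeeping can be checked symbolically. A minimal sympy sketch, with symbols mirroring the quantities above (nothing here is a real field-theory calculation):

    ```python
    import sympy as sp

    # Symbols mirror the post: lam_r is the renormalised coupling, a the counter-term
    # coefficient, f_K and f_U the finite parts at momenta K and U, Lam the cutoff.
    lam_r, a, a_prime = sp.symbols('lambda_r a a_prime')
    f_K, f_U = sp.symbols('f_K f_U')
    C, Lam = sp.symbols('C Lambda', positive=True)

    lam_bare = (1 + a * lam_r) * lam_r                     # lambda = (1 + a*lambda_r)*lambda_r
    lam_prime = lam_bare + (f_K + C * sp.log(Lam**2)) * lam_bare**2

    # Work to second order in lambda_r: drop the lambda_r^3 and lambda_r^4 terms.
    second_order = sp.expand(lam_prime).subs({lam_r**3: 0, lam_r**4: 0})
    print(sp.collect(second_order, lam_r))
    # -> lambda_r + (a + f_K + C*log(Lambda**2))*lambda_r**2

    # Choose a = a' - C*log(Lambda**2): the explicit cutoff dependence cancels.
    no_cutoff = sp.expand(second_order.subs(a, a_prime - C * sp.log(Lam**2)))
    print(sp.collect(no_cutoff, lam_r))
    # -> lambda_r + (a_prime + f_K)*lambda_r**2

    # Impose lambda_r = lambda'(U), i.e. the lambda_r**2 coefficient vanishes at K = U:
    print(sp.solve(sp.Eq(no_cutoff.coeff(lam_r, 2).subs(f_K, f_U), 0), a_prime))   # -> [-f_U]
    ```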

    Thanks
    Bill
     
    Last edited: May 11, 2015
  11. May 11, 2015 #10

    atyy

    Science Advisor

    The idea is that below a certain energy, there are "emergent" degrees of freedom that are enough for describing the very low energy behaviour we are interested in. For example, even though the standard model has quarks, for condensed matter physics, we just need electrons, protons and neutrons. The cut-off represents the energy where we will be obliged to consider new degrees of freedom like supersymmetry or strings. If we knew the true degrees of freedom and the Hamiltonian at high energy, we could integrate over the high energies and by an appropriate change of "coordinates" obtain the emergent degrees of freedom and Hamiltonian at the cut-off. However, in practice we do not know the high energy details, so we make a guess about the low energy degrees of freedom and the low energy symmetries (here low means much lower than the high energy, but still much higher than the energy at which we do experiments). So we put in all possible terms into the Hamiltonian with the low energy degrees of freedom that are consistent with any known symmetry. In other words, we must do the integral over the unknown high energy degrees of freedom (as required in the path integral picture), and we attempt to do it by guessing the low energy degrees of freedom and symmetry.

    It turns out that even if we use a simple Hamiltonian that is lacking many possible terms, if we have a cut-off that is low enough that we know there are not yet new degrees of freedom, but still much higher than the energy scale we are interested in, then we can show that the low energy effective Hamiltonian will contain all possible terms consistent with the symmetry - these turn out to be the counterterms.

    The usual explanation of the counterterms is physically senseless. It is better to think of them as automatically generated by a high cutoff, and a flow to low energy. However, it is incredibly inefficient to start calculations by writing down all possible terms. The counterterm technology is a magically efficient way to get the right answer (like multiplication tables :oldtongue:).

    Naturally, our guess about the low energy degrees of freedom may be wrong, and our theory will be falsified by experiment. However, a feature of this way of thinking is that a non-renormalizable theory like QED or gravity, by requiring a cut-off, shows the scale at which new physics must appear. In other words, although experiment can show us new physics way below the cut-off, the theory itself indicates new physics in the absence of experimental disproof.
     
    Last edited: May 11, 2015
  12. May 11, 2015 #11

    wabbit

    Gold Member

    Isn't there another way to look at it ? I am not familiar with renormalization but one argument I recall seeing went (very) roughly along the following lines :

    Assume the integral is actually finite, because the "true" integrand is not 1/k^2 but some unknown function phi(k) such that phi(k)~1/k^2 for k not too large (or small, wherever it diverges), and phi(k) is summable (i.e. assume we don't really know the high energy behavior, but whatever it is must give a finite answer, perhaps because spacetime is quantized or whatever reason).

    Then the calculation still holds, there are no infinities but the unknown parts cancel off at low energy. The result is the same but the mysterious infinities have been replaced by unknown finite quantities.

    My recollection of that argument is very hazy so I am unsure about it, but does this work (or rather, something similar, presumably after some renormalization of the argument :wink: )?
     
  13. May 11, 2015 #12

    bhobba

    Science Advisor
    Gold Member

    Yes, that would work, but simply assuming the integral has a cut-off is the usual way.

    There are also other cut-off schemes depending on the regularisation method used, but I didn't want to get into that.

    Thanks
    Bill
     
  14. May 12, 2015 #13

    wabbit

    Gold Member

    Understood, thanks. I liked your presentation - renormalization is pretty intimidating for a layman and you make it seem more approachable : )

    The appeal of the kind of argument I mentioned is that it gives an intuitive explanation for why the infinities are innocuous, by suggesting they secretly stand for "large unknown quantities" - even if the actual presentation uses infinity to avoid unnecessary complications that might make it harder to read.

    To me it's a bit like epsilon delta arguments - they are useless as explanations, but it's good to know that they could be used to make things rigorous.
     
  15. May 12, 2015 #14
    I got a useful lead on my confusion from the first part of the second reference Bhobba gave. From that I have a cartoon under construction (as in, it's a pile of mud and sticks) that the problem has to do with probing (the integral of all the Feynman thingamajigs) inside the Planck scale, where the energy domain is one that "creates" particles rather than observing them. If we are trying to count a set that our counting is creating, we will have a bit of a feedback loop.

    I can imagine this is wildly flawed.

    The idea of using a "cutoff" on the "observable" domain seems on the one hand just practical - to get at some useful answers. That paper on the MERA Ansatz is one I've tried to understand n times now. It invokes what I have learned about "re-normalization" from Sornette, just in terms of how it looks.

    The part I am puzzling about today is whether that threshold-fixing process is only invented, or arguably natural. In Sornette's book "Critical Events in Complex Financial Systems" the idea of critical points was primary (obviously). But in hindsight their naturalness, as introduced, rested as much on everyday intuition about the system he was using as an example (investor optimism) as on a clearly demonstrated fundamental mechanism.

    Had a bit of an epiphany diving back into "Evolutionary Dynamics" by Nowak this morning. In sec 7.1 "One Basic Model and One Third", he shows how critical points form as a pure function of N (the size of a finite population) under conditions of weak selection. According to the model he describes, the main thing required for real critical points of population "fixing" (where one of two species a and b disappears) would be expansion of the number of a and b, even at the same ratio. The other requirements are: some non-flat payoff matrix, and therefore fitness functions, for a and b; that a and b are the best response to each other (strategically stable, or evenly matched for payoff); and that "selection intensity" is weak (only some encounters induce selection). Pretty elegant and weird. At least I think that's what he said.

    I need to see if I can find a paper by him, maybe on that chapter. And I need to revisit that MERA paper to see if I missed a similar natural, rather than introduced, re-normalization thresholding process they were proposing.

    [Edit] The non-sequitur to Evolutionary Dynamics goes like: "if space-time is discrete, are there mechanisms that could explain problematic observations, such as probabilistic irreversibility, and the fact that reality doesn't blow up, even though integrals over QM momenta suggest it should/could/would if we didn't somewhat arbitrarily re-normalize those integrals"

    [Edit] There are a number of papers by M.A. Nowak on arxiv. I'll have to look to see if there is one on that particular model in his book.

    http://arxiv.org/find/q-bio/1/au:+Nowak_M/0/1/0/all/0/1

    [Edit] There are also a number of papers by D. Sornette on Arxiv. This one really grabbed me.

    http://arxiv.org/abs/1408.1529
    Self-organization in complex systems as decision making
    V.I. Yukalov, D. Sornette
    (Submitted on 7 Aug 2014)
    The idea is advanced that self-organization in complex systems can be treated as decision making (as it is performed by humans) and, vice versa, decision making is nothing but a kind of self-organization in the decision maker nervous systems. A mathematical formulation is suggested based on the definition of probabilities of system states, whose particular cases characterize the probabilities of structures, patterns, scenarios, or prospects. In this general framework, it is shown that the mathematical structures of self-organization and of decision making are identical. This makes it clear how self-organization can be seen as an endogenous decision making process and, reciprocally, decision making occurs via an endogenous self-organization. The approach is illustrated by phase transitions in large statistical systems, crossovers in small statistical systems, evolutions and revolutions in social and biological systems, structural self-organization in dynamical systems, and by the probabilistic formulation of classical and behavioral decision theories. In all these cases, self-organization is described as the process of evaluating the probabilities of macroscopic states or prospects in the search for a state with the largest probability. The general way of deriving the probability measure for classical systems is the principle of minimal information, that is, the conditional entropy maximization under given constraints. Behavioral biases of decision makers can be characterized in the same way as analogous to quantum fluctuations in natural systems.

    Bhobba's statement is "We decide the cut-off, which had to be there to make the calculation work, and we based it on observation". I think there is actually a serious loop of truth to that way of describing it. The question of how... that decision got made, the whole chain of "fixing"... to me... is more than a little spooky, and vertiginous.

    :wideeyed:
     
    Last edited: May 12, 2015
  16. May 12, 2015 #15

    atyy

    Science Advisor

    The MERA network is a little special, and it was first introduced to describe systems that (in some sense) don't require a cutoff. But it is totally within the Wilsonian framework.

    Again, the idea is simple. We don't know the true high energy degrees of freedom - strings or whatever. But at intermediate energies those degrees of freedom are not needed, and we only need things like electrons and protons. These can (skipping a detail) basically be described as solids, where everything is on a lattice. At high energies we know the lattice will break down, but it is OK at intermediate energies, and we only want to use the lattice at low energies. Renormalization is simply the process of extracting the low energy theory from the lattice.

    There we had the lattice and ran the renormalization downwards on the energy scale, making an average lattice that was coarser and coarser.

    But could we run the renormalization upwards to high energies, making our lattice spacing finer and finer? In general, our theory will break down because we didn't put in strings or whatever is needed for consistency at high energies. But in special cases we can run the renormalization backwards. These special theories make sense even at the highest energies, and the MERA is most suited to dealing with them. Although the standard model of particle physics cannot be renormalized upwards, a part of it - QCD - can be (at least in the non-rigorous physics sense). This feature of QCD is called "asymptotic freedom".
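    A toy sketch of the downward (coarse-graining) direction described above, using nothing but repeated pair-averaging on a random 1D configuration - a crude stand-in for a block-spin step, not any particular model:

    ```python
    import numpy as np

    # Start from a fine 1D lattice of field values and repeatedly replace each pair
    # of neighbouring sites by its average.  This free, uncorrelated toy is only
    # meant to show the coarse-graining direction, not any real physical system.
    rng = np.random.default_rng(0)
    lattice = rng.normal(size=1024)            # "fine" lattice configuration

    while lattice.size > 4:
        lattice = 0.5 * (lattice[0::2] + lattice[1::2])    # block two sites into one
        print(f"{lattice.size:5d} sites,  fluctuation (variance) = {lattice.var():.4f}")
    # Each step halves the number of sites; for this toy the block variable's
    # variance drops by roughly 1/2 per step - the configuration looks smoother
    # and smoother as we move to coarser (lower-energy) descriptions.
    ```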
     
  17. May 12, 2015 #16
    So the Wilsonian "solids" lattice (I now have 100 times more context for lattice gauge theory than I did 5 minutes ago) could/would extend up the energy scale, if it turned out that space-time was discrete, since that's how it looks at things in the first place.

    I have a book on QCD... that I have not started. :frown:
     
  18. May 12, 2015 #17

    atyy

    Science Advisor

    No, if the solid model can be extended up the energy scale, that would mean making the lattice spacing finer and finer, corresponding to a continuum.

    There is an interesting way in which the Wilsonian view of renormalization may "fail" and yet simultaneously succeed beyond his wildest dreams. In the MERA picture, in line with other ideas about renormalization and holography in string theory, the renormalization does not bring you up and down the energy scale. Instead, the "energy scale" along which you move is a spatial dimension - it is a way in which space and gravity can be emergent.
     
  19. May 12, 2015 #18
    I was picturing it having a limit, that stopped short of continuum.

    I wouldn't mind having a reference I could dig into that explains more what the energy scale as spatial dimension means. That's not connecting to anything I understand at the moment and it sounds like it could.
     
  20. May 12, 2015 #19

    atyy

    Science Advisor

    OK, let's take the renormalization as running down the energy scale (don't worry about up or down here, where the direction is a bit contradictory with what I said above).

    If we go down the energy scale, we average over the lattice. So we go down a bit and get a coarser lattice. Then we go down again, and we get another even coarser lattice. And we do this again and again, getting coarser and coarser lattices. We "stack" the lattices, with the finest lattice at the "top" and the coarser lattices "below". For simplicity, let's start with a 1-dimensional lattice. Then the stack of lattices will be 2-dimensional (by the common-sense idea of a stack). In ordinary renormalization the dimension created by the stack is "energy", but maybe it can also mean "space". In itself that is not special, since we have just renamed the stacking dimension from "energy" to "space". The special thing is that string theory conjectures that under certain conditions, this space has gravity that obeys the equations of Einstein's general relativity.

    You can see the stack of 1D lattices in Fig. 1 and the emergent 2D space in Fig. 2 of http://arxiv.org/abs/0905.1317, where the fine lattice is on top, and the successively coarser lattices are below.
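    A toy sketch of that stacking picture: keep every coarse-grained layer instead of throwing it away, so the layer index becomes the extra direction (again just pair-averaging, nothing model-specific):

    ```python
    import numpy as np

    # Keep every coarse-grained layer: the layer index then plays the role of the
    # extra direction, so a stack of 1D lattices forms a 2D structure, as in the
    # figures of arXiv:0905.1317.  Toy sketch only.
    rng = np.random.default_rng(1)
    layer = rng.normal(size=64)        # finest 1D lattice, the "top" of the stack

    stack = [layer]
    while layer.size > 1:
        layer = 0.5 * (layer[0::2] + layer[1::2])   # same crude blocking as before
        stack.append(layer)

    for depth, lat in enumerate(stack):
        print(f"depth {depth}: {lat.size} sites")
    # depth runs from 0 (finest) to 6 (coarsest) here: one discrete extra dimension
    # generated purely by the coarse-graining.
    ```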
     
    Last edited: May 12, 2015
  21. May 12, 2015 #20
    Very interesting. I look forward to trying to read that one. The discrete scale invariance part is clear. Love the stack of lattices. Very much what I have been picturing for re-normalization and (way over-simplistically I'm sure) the MERA thing.

    About the last sentence, on how that system of lattices obeys the equations of GR: do you mean the mathematical properties of GR fall out of it (emerge from, or are consistent as a property of, such a system of lattices)? That's what I am assuming it means, rather than it just being "another place where GR goes like GR goes". In other words, the idea is to see whether one can derive the way GR goes from the fundamental properties of such a system of lattices applied to the dimensions of space, time, energy, mass?

    Very interesting. I have to say, I have always felt like LQG and the MERA were somehow so similar. This has helped clarify the difference between them. Or at least provide contrast to what was previously a pure fog of confusion.

    [Edit] Sorry to keep asking questions. It's a reflex. You've given me a lot to think about. Much appreciated.
     
    Last edited: May 12, 2015