
What is an effective field theory?

  1. May 22, 2013 #1
    What is an effective field theory?

    Yes, there is a lot of information on the Internet, but it is at a complicated level: people talk about cut-offs, top-down constructions, and series expansions, without justifying how the coefficients of the terms ignored in the expansion are bounded.
    Isn't there a simple (but rigorous) explanation of this? An explanation here or a link, either is fine with me. The Wikipedia page doesn't explain anything; it only lists examples of the interactions it is used for, and the "introductions" that are written up are at a level far above graduate level.

    Thanks
     
  3. May 22, 2013 #2
    I actually think Wikipedia sums it up rather well in its opening sentences.

    I'm not going to be very rigorous here, but I'll try to describe the philosophy, as I understand it.

    The fact that there is a chosen length scale is the key. Take a condensed matter system, for example. We want to deal with phenomena that stretch throughout the crystal (superconductivity between only two atoms isn't very useful, and quite hard to observe experimentally, for example). But clearly our crystal is made up of atoms, which are made up of electrons and a nucleus, which is made up of protons and neutrons, which are made up of quarks, which are possibly made up out of something else, etc.

    In principle we have vastly different length scales to deal with, but in practice we impose a minimum length we are willing to accept. In condensed matter systems, this is often silently assumed to be the average distance between atoms in the crystal. The de Broglie wavelength relates a short length (scale) to a large momentum (cutoff). An analogue in particle physics is that you don't need to bother about heavy particles when working at energies well below their mass.
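    To spell out the de Broglie relation behind that statement (a standard fact, in units where [itex]\hbar=1[/itex]):
    [tex]\lambda = \frac{2\pi}{p},[/tex]
    so declaring that we won't resolve lengths below the lattice spacing [itex]a[/itex] is the same as imposing a momentum cutoff [itex]\Lambda \sim \pi/a[/itex] (the edge of the first Brillouin zone).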

    It would be reasonable to ask how this could work at all. The idea is simply that there is a tool by which we can change the length scale of our theory by "integrating out high-energy degrees of freedom", basically absorbing them into the background instead of keeping them explicit in the Hamiltonian/Lagrangian/whatever. This tool is the renormalization group (RG) technique. You basically apply it to your system (possibly the theory of everything!), and out pops a slightly modified theory. There may be new terms, or all terms may keep the same structure albeit with different numerical prefactors (or coupling constants, if you like that language). This procedure gives rise to differential equations for these prefactors, which essentially tell you which interactions are most important at each length scale, and that some terms vanish altogether. Hence, we can have thermodynamics without worrying about QCD-type interactions!
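    As a schematic example of such a flow equation (not tied to any particular system, just the generic linearized form near a fixed point): a coupling [itex]g[/itex] with scaling dimension [itex]y[/itex] obeys
    [tex]\frac{dg}{d\ell} = y\,g \quad\Rightarrow\quad g(\ell) = g(0)\,e^{y\ell},[/tex]
    where [itex]\ell[/itex] is the logarithm of the rescaling factor. Couplings with [itex]y>0[/itex] grow under the flow (relevant), while those with [itex]y<0[/itex] shrink (irrelevant); that is the precise sense in which some terms "vanish altogether" at long length scales.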

    Now, RG is basically a tool for generating new effective field theories from some starting point. The starting point itself can be assumed to be an effective field theory, as we probably don't have the full theory of the universe. (Somewhat unsatisfactorily, it is not at all clear from this picture that we could ever identify said TOE even if we should arrive at it.) To return to condensed matter systems (which I'm most familiar with), we would use RG to extract the really low-energy behaviour (the zero-temperature limit, basically). This way one can explain the Kondo effect, for example.

    I'm not exactly sure how this is used in high-energy physics, but clearly one would want to find a new theory at higher energy. This theory should, by the use of RG, give rise to effective field theories that look like, e.g., the standard model of particle physics and other currently known physics, even though the theory itself may look rather different.


    I'm not sure reading words like these or http://www.people.fas.harvard.edu/~hgeorgi/review.pdf (which seems nice, based on skimming the first chapter) is enough to fully understand the idea though. Personally, I found the uses of RG in statistical mechanics quite revealing. They also have the added advantage of not being that quantum, so the philosophy isn't obscured by the mathematical formalism. I really enjoyed Goldenfeld's book "Lectures on phase transitions and the renormalization group", and also found the calculations done in Kardar's "Statistical physics of fields" helpful.
     
  4. May 23, 2013 #3
     
    Last edited: May 23, 2013
  5. May 26, 2013 #4
    I think it's rather deep and not trivial at all, personally. Though I guess the conceptual picture is often described more clearly than the detailed calculations are, at least when it comes to this. This is, at least partly, because one basically has to include some scheme of renormalization, as the two concepts are so intertwined.

    As for this text, it was clearly a mistake to recommend it without reading it first. Looking at it again, I find it quite hard to follow myself! You see, I wanted to provide some reference for the use of non-Wilsonian RG in high-energy physics. Maybe one of the references in https://www.physicsforums.com/showthread.php?t=587206 will suit you better.

    (I don't quite understand how he uses ε, probably because I'm not at all used to that renormalization scheme. E seems to be the energy scale, as you say. k seems to be related to some sort of scaling dimension. When renormalizing, the term with the highest scaling dimension grows the quickest, and so on. In addition, there is the weird feature that the number of physical dimensions actually does matter to the physics!)



    I would still recommend you to start at (what I think is) the simplest and most physical point: the Wilsonian RG of the Gaussian model. I'll give a small introduction, but the details can be found in most books on RG or stat mech of fields.

    The model itself comes from the Landau theory of phase transitions, ignoring the interaction terms (such as the quartic term mentioned below). Here the field, or order parameter, m can be interpreted as a magnetization, and h as a magnetic field. The free energy functional can be written
    [tex]\beta H = \int d^d r \left[ \frac{t}{2} m^2(r) + \frac{K}{2}|\nabla m|^2 - hm(r)\right][/tex]
    or, in momentum modes,
    [tex]\beta H = \frac{1}{(2\pi)^d} \int d^d q \left[\frac{t + q^2 K}{2} |m(q)|^2\right] - hm(0)[/tex]
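    (Here I'm assuming the Fourier convention [itex]m(\mathbf{r}) = \int \frac{d^d q}{(2\pi)^d}\, e^{i\mathbf{q}\cdot\mathbf{r}}\, m(\mathbf{q})[/itex], which reproduces the [itex]1/(2\pi)^d[/itex] prefactor above and makes [itex]hm(0)[/itex] the coupling of the field to the uniform, [itex]\mathbf{q}=0[/itex], mode.)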
    It is easy to split momentum space into slow and fast modes, [itex]0<|\mathbf{q}|<\Lambda /b[/itex] and [itex] \Lambda /b<|\mathbf{q}|<\Lambda [/itex], respectively. [itex]\Lambda[/itex] is the cutoff, and [itex]b[/itex] is just a number that we will change throughout the renormalization, but we will think of it as slightly larger than 1. (I do think the parameter [itex]\epsilon[/itex] is somehow an analogue of [itex]b[/itex].)
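    Written out, that slow/fast split is simply (subscripts denote the slow and fast parts; notation varies between books):
    [tex]m(\mathbf{q}) = m_<(\mathbf{q}) + m_>(\mathbf{q}),[/tex]
    with [itex]m_<[/itex] supported on [itex]0<|\mathbf{q}|<\Lambda/b[/itex] and [itex]m_>[/itex] on [itex]\Lambda/b<|\mathbf{q}|<\Lambda[/itex].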

    In the Gaussian model, the slow and fast fields don't mix (i.e. no cross terms, compare with the case of a quartic term), so the fast fields just give a constant contribution to the free energy, and hence the partition function. The slow modes remain and give
    [tex]\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda/b} d^d q \left[\frac{t + q^2 K}{2} |m(q)|^2 \right] - hm(0)[/tex]
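    In terms of the partition function, the absence of cross terms means the integral factorizes, schematically:
    [tex]Z = \int \mathcal{D}m_< \,\mathcal{D}m_>\, e^{-\beta H[m_<]-\beta H[m_>]} = Z_> \int \mathcal{D}m_<\, e^{-\beta H[m_<]},[/tex]
    and the constant [itex]Z_>[/itex] from the fast modes has no effect on anything we compute with the slow modes.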
    Thus we have done the first step, the coarse graining, and ended up with an effective Hamiltonian having the same structure as before. The second step is the rescaling, where we trick ourselves into thinking that nothing much has changed. We introduce the new coordinate [itex]\mathbf{q}'=b\mathbf{q}[/itex], such that the momentum again runs up to the original cutoff [itex]\Lambda[/itex]. Then the theory looks like
    [tex]\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda} d^d q' b^{-d} \left[\frac{t + q'^2 b^{-2} K}{2} |m(q')|^2 \right] - hm(0)[/tex]

    In the third step, the field is renormalized as [itex]m'(\mathbf{q}')=m(\mathbf{q}')/z[/itex]. This gives
    [tex]\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda} d^d q' b^{-d}z^2 \left[\frac{t + q'^2 b^{-2} K}{2} |m'(q')|^2 \right] - zhm'(0)[/tex]
    In other words, we have a free energy which looks like the original one, but with the modified (renormalized) parameters
    [tex] t' = z^2 b^{-d} t, \quad h'=zh, \quad K'=z^2 b^{-d-2} K[/tex]
    At this point, you can see that [itex]d[/itex], the number of dimensions, seems to matter a bit (no matter what [itex]z[/itex] is). Now, this Landau-Ginzburg model aims to describe phase transitions, near which fluctuations are scale invariant, so we require [itex]K'=K[/itex] (other choices are possible). This means [itex]z^2 b^{-d-2}=1[/itex], i.e. [itex]z=b^{1+d/2}[/itex], and the renormalized parameters now scale as
    [tex]t'=b^2 t, \quad h'=b^{1+d/2} h[/tex]
    So, depending on how far we renormalize, and the number of dimensions, the magnetic field term might become dominant and give rise to a magnetic ordering.
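    As a quick numerical illustration of those recursion relations (a minimal sketch; the values of [itex]b[/itex], [itex]d[/itex] and the starting couplings are arbitrary choices of mine):
    [code]
    # Iterate the Gaussian-model recursion relations from above:
    #   t' = b^2 t,   h' = b^(1 + d/2) h
    # Both couplings grow under repeated coarse graining, i.e. both
    # are relevant at the Gaussian fixed point t = h = 0.
    b = 1.1            # rescaling factor, slightly larger than 1
    d = 3              # number of spatial dimensions
    t, h = 1e-3, 1e-3  # small initial couplings (arbitrary values)

    for step in range(1, 51):
        t *= b**2          # t' = b^2 t
        h *= b**(1 + d/2)  # h' = b^(1+d/2) h
        if step % 10 == 0:
            print(f"step {step:2d}: t = {t:.3e}, h = {h:.3e}")
    [/code]
    Both couplings blow up, so the Gaussian fixed point is unstable in both directions, and the growth rate of [itex]h[/itex] depends on [itex]d[/itex]; that is one concrete way the number of dimensions enters the physics.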

    The whole point of this exercise, though, was to show you an example of how a starting Hamiltonian and a given renormalization scheme give rise to an effective theory. In this case the result had the same structure as the original (i.e. the model is renormalizable), but of course it need not in general. However, the effective field theory is somewhat hidden behind the RG calculation, and I suspect that will be the case with most renormalization schemes out there.
     