What is an effective field theory?

In summary, an effective field theory is an approximation to an underlying physical theory, such as a quantum field theory or a statistical-mechanics model, that includes the degrees of freedom appropriate for describing phenomena at a chosen length or energy scale while ignoring substructure and degrees of freedom at shorter scales. New effective field theories can be generated from a given starting point through the renormalization group technique, which is used throughout condensed matter and high-energy physics. While the concept can be difficult to grasp at first, the discussion and examples below help clarify its use and application.
  • #1
StarsRuler
What is an effective field theory??

Yes, there is a lot of information on the Internet, but it is at a complicated level: the sources talk about cut-offs, top-down constructions, and series expansions without justifying why the coefficients of the ignored terms in the expansion can be neglected.
Isn't there a simple (but rigorous) explanation of this question? An explanation here or a link, either is fine with me. The Wikipedia page doesn't explain anything; it only names examples of the interactions it is used with, and the "introductions" written down there are at a level far above graduate level.

Thanks
 
  • #2
I actually think Wikipedia does sum it up rather well in their opening sentences:
In physics, an effective field theory is a type of approximation to (or effective theory for) an underlying physical theory, such as a quantum field theory or a statistical mechanics model. An effective field theory includes the appropriate degrees of freedom to describe physical phenomena occurring at a chosen length scale or energy scale, while ignoring substructure and degrees of freedom at shorter distances (or, equivalently, at higher energies).


I'm not going to be very rigorous here, but I'll try to describe the philosophy, as I understand it.

The fact that there is a chosen length scale is the key. Take a condensed matter system, for example. We want to deal with phenomena that stretch throughout the crystal (superconductivity between only two atoms isn't very useful, for example, and is quite hard to observe experimentally). But clearly our crystal is made up of atoms, which are made up of electrons and a nucleus, which is made up of protons and neutrons, which are made up of quarks, which are possibly made up of something else, etc.

In principle we have vastly different length scales to deal with, but in practice we impose a minimum length we are willing to accept. In condensed matter systems, this is often silently assumed to be the average distance between atoms in the crystal. The de Broglie relation connects a short length (the scale) with a large momentum (the cutoff). An analogue in particle physics is that you don't need to bother about heavy particles when working at energies well below their mass.
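To put a rough number on that (an illustrative estimate, not tied to any particular material): with a lattice spacing [itex]a[/itex], the natural momentum cutoff is
[tex]\Lambda \sim \frac{2\pi}{a} \approx 2\times 10^{10}\ \mathrm{m}^{-1} \quad \text{for } a \approx 3\ \mathrm{Å},[/tex]
and modes with [itex]|\mathbf{q}|>\Lambda[/itex] would probe physics inside a single unit cell, which the effective description simply drops.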

It would be reasonable to ask how this could work at all. The idea is simply that there is a tool by which we can change the length scale of our theory, by "integrating out high-energy degrees of freedom", basically adding them to the background instead of keeping them explicit in the Hamiltonian/Lagrangian/whatever. This tool is the renormalization group (RG) technique. You basically apply it to your system (possibly the theory of everything!), and out pops a slightly modified theory. There may be new terms, or all terms may have the same structure albeit with different numerical prefactors (or coupling constants, if you like that language). This procedure gives rise to differential equations for these prefactors, which essentially tell you which interactions dominate at each length scale, and which terms vanish altogether. Hence, we can have thermodynamics without worrying about QCD-type interactions!
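Schematically (this is the generic form of such a flow equation, not a derivation), the flow of a dimensionless coupling [itex]g[/itex] under a small change of scale looks like
[tex]\frac{dg}{d\ln b} = y\, g + O(g^2), \qquad y = d - \Delta,[/tex]
where [itex]\Delta[/itex] is the scaling dimension of the corresponding term. Couplings with [itex]y>0[/itex] grow under coarse graining (relevant), while those with [itex]y<0[/itex] shrink (irrelevant); the latter are the terms that effectively vanish at long distances.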

Now, RG is basically a tool for generating new effective field theories from some starting point. The starting point itself can be assumed to be an effective field theory, as we probably don't have the full theory of the universe. (Somewhat unsatisfyingly, it is not at all clear from this picture that we could ever identify said TOE even if we were to arrive at it.) To return to condensed matter systems (which I'm most familiar with), we would use RG to extract the really low-energy behaviour (the zero-temperature limit, basically). This way one can explain the Kondo effect, for example.

I'm not exactly sure how this is used in high-energy physics, but clearly one would want to find a new theory at higher energy. This theory should, through RG, give rise to effective field theories that look like, e.g., the Standard Model of particle physics and other currently known physics, but the new theory itself may look rather different.
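A textbook example of this matching (added here for concreteness): integrating out the heavy W boson turns the electroweak theory into Fermi's four-fermion theory of beta decay, with the effective coupling fixed by
[tex]\frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2},[/tex]
so at energies well below [itex]M_W[/itex] the W never needs to appear explicitly in the effective Lagrangian.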


I'm not sure reading words like these or http://www.people.fas.harvard.edu/~hgeorgi/review.pdf (which seems nice, based on skimming the first chapter) is enough to fully understand the idea though. Personally, I found the uses of RG in statistical mechanics quite revealing. They also have the added advantage of not being that quantum, so the philosophy isn't obscured by the mathematical formalism. I really enjoyed Goldenfeld's book "Lectures on phase transitions and the renormalization group", and also found the calculations done in Kardar's "Statistical physics of fields" helpful.
 
  • #3
Hypersphere said:
I actually think Wikipedia does sum it up rather well in their opening sentences: […]

Thanks, but sorry, I don't understand. The justification for dropping the high energies is trivial, but how the Lagrangian is expanded is where I get lost, in all the lectures I have read (thanks for the link, but I had already read it too). He introduces a parameter E, but also a parameter [tex]\epsilon[/tex]. I suppose that E is the energy scale up to the energies in our experiments, but what is [tex]\epsilon[/tex]? And this mysterious k, and then the dimensions of everything (fields, Lagrangians) as a function of it. As for renormalization, I don't know anything about it.

Thanks anyway
 
  • #4
StarsRuler said:
Thanks, but sorry, I don't understand. The justification for dropping the high energies is trivial, but how the Lagrangian is expanded is where I get lost, in all the lectures I have read (thanks for the link, but I had already read it too).

I think it's rather deep and not trivial at all, personally. Though I guess the conceptual picture is often described more clearly than the detailed calculations are, at least when it comes to this. This is, at least partly, because one basically has to include some scheme of renormalization, as the two concepts are so intertwined.

StarsRuler said:
He introduces a parameter E, but also a parameter [tex]\epsilon[/tex]. I suppose that E is the energy scale up to the energies in our experiments, but what is [tex]\epsilon[/tex]? And this mysterious k, and then the dimensions of everything (fields, Lagrangians) as a function of it. As for renormalization, I don't know anything about it.

Thanks anyway
As for this text, it was clearly a mistake to recommend it without reading it first. Looking at it again, I find it quite hard to follow myself! You see, I wanted to provide some reference for the use of non-Wilsonian RG in high-energy. Maybe one of the references in https://www.physicsforums.com/showthread.php?t=587206 will suit you better.

(I don't quite understand how he uses ε, probably because I'm not at all used to that renormalization scheme. E seems to be the energy scale, as you say. k is related to some sort of scaling dimension. When renormalizing, the terms whose couplings have the largest scaling exponents grow the quickest, and so on. In addition, there is the weird feature that the number of physical dimensions actually does matter to the physics!)
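(To sketch the power counting behind that statement, in generic Wilsonian language rather than Georgi's scheme: after rescaling lengths by [itex]b[/itex], the coupling [itex]g[/itex] of a term with scaling dimension [itex]\Delta[/itex] becomes
[tex]g' = b^{\,d-\Delta}\, g,[/tex]
so the terms with the largest exponent [itex]d-\Delta[/itex] grow the fastest under repeated renormalization, and those with [itex]\Delta > d[/itex] die out. The explicit appearance of [itex]d[/itex] here is exactly that weird dimension dependence.)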



I would still recommend you to start at (what I think is) the simplest and most physical point: the Wilsonian RG of the Gaussian model. I'll give a small introduction, but the details can be found in most books on RG or stat mech of fields.

The model itself comes from the Landau theory of phase transitions, keeping only the quadratic terms (the quartic interaction is ignored). Here the field or order parameter m can be interpreted as a magnetization, and h as a magnetic field. The free energy functional can be written
[tex]\beta H = \int d^d r \left[ \frac{t}{2} m^2(r) + \frac{K}{2}|\nabla m|^2 - hm(r)\right][/tex]
or, in momentum modes,
[tex]\beta H = \frac{1}{(2\pi)^d} \int d^d q \left[\frac{t + q^2 K}{2} |m(q)|^2\right] - hm(0)[/tex]
It is easy to split momentum space into slow and fast modes, [itex]0<|\mathbf{q}|<\Lambda /b[/itex] and [itex] \Lambda /b<|\mathbf{q}|<\Lambda [/itex], respectively. [itex]\Lambda[/itex] is the cutoff, and [itex]b[/itex] is just a number that we will change throughout the renormalization, but we will think of it as slightly larger than 1. (I do think the parameter [itex]\epsilon[/itex] is somehow an analogue of [itex]b[/itex].)

In the Gaussian model, the slow and fast fields don't mix (i.e. no cross terms, compare with the case of a quartic term), so the fast fields just give a constant contribution to the free energy, and hence the partition function. The slow modes remain and give
[tex]\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda/b} d^d q \left[\frac{t + q^2 K}{2} |m(q)|^2 \right] - hm(0)[/tex]
Thus we have done the first step, the coarse graining, and ended up with an effective Hamiltonian having the same structure as before. The second step is the rescaling, where we trick ourselves into thinking nothing has changed. We introduce the new coordinate [itex]\mathbf{q}'=b\mathbf{q}[/itex], so that the momenta again run up to the original cutoff [itex]\Lambda[/itex]. Then the theory looks like
[tex]\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda} d^d q' b^{-d} \left[\frac{t + q'^2 b^{-2} K}{2} |m(q')|^2 \right] - hm(0)[/tex]

In the third step, the field is renormalized as [itex]m'(\mathbf{q}')=m(\mathbf{q}')/z[/itex]. This gives
[tex]\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda} d^d q' b^{-d}z^2 \left[\frac{t + q'^2 b^{-2} K}{2} |m'(q')|^2 \right] - zhm'(0)[/tex]
In other words, we have a free energy which looks like the original one, but with the modified (renormalized) parameters
[tex] t' = z^2 b^{-d} t, \quad h'=zh, \quad K'=z^2 b^{-d-2} K[/tex]
At this point, you can see that [itex]d[/itex], the number of dimensions, seems to matter a bit (no matter what [itex]z[/itex] is). Now, this Landau-Ginzburg model aims to describe phase transitions, near which fluctuations are scale invariant, so we require [itex]K'=K[/itex] (other choices are possible), which implies [itex]z=b^{1+d/2}[/itex], and the renormalized parameters now scale as
[tex]t'=b^2 t, \quad h'=b^{1+d/2} h[/tex]
So, depending on how far we renormalize, and the number of dimensions, the magnetic field term might become dominant and give rise to a magnetic ordering.
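As a quick numerical check (just plugging into the scaling relations above): in [itex]d=3[/itex] with [itex]b=2[/itex],
[tex]t' = 2^2\, t = 4t, \qquad h' = 2^{5/2}\, h \approx 5.7\, h,[/tex]
so both [itex]t[/itex] and [itex]h[/itex] grow under renormalization, i.e. both are relevant perturbations of the Gaussian fixed point.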

The whole point of this exercise, though, was to show you an example of how a starting Hamiltonian and a given renormalization scheme give rise to an effective theory. In this case it had the same structure (i.e. it's renormalizable), but of course it need not in general. However, the effective field theory is somewhat hidden behind the RG calculation, and I suspect that will be the case with most renormalization schemes out there.
 

What is an effective field theory?

An effective field theory (EFT) is a theoretical framework used in physics to describe and understand physical phenomena at a given energy scale. It extends traditional field theories, such as quantum field theory, by parametrizing the effects of higher-energy interactions without needing to specify their underlying microscopic dynamics.

How does an effective field theory differ from traditional field theories?

Effective field theories differ from traditional field theories in that they do not attempt to describe all interactions at all energy scales. Instead, they focus on the interactions relevant to a particular energy range and ignore the effects of higher energy interactions. This allows for simpler and more accurate calculations at lower energy scales.
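Schematically (the precise operator content depends on the theory in question), the effective Lagrangian is organized as an expansion in inverse powers of the cutoff scale [itex]\Lambda[/itex],
[tex]\mathcal{L}_\mathrm{eff} = \mathcal{L}_{0} + \sum_i \frac{c_i}{\Lambda^{\Delta_i - d}}\, \mathcal{O}_i,[/tex]
where the [itex]\mathcal{O}_i[/itex] are higher-dimension operators. At energies [itex]E \ll \Lambda[/itex] their contributions are suppressed by powers of [itex]E/\Lambda[/itex], which is why only a finite number of terms is needed for any given accuracy.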

What are the benefits of using an effective field theory?

Effective field theories offer several benefits, including the ability to make accurate predictions at lower energy scales without needing to know the underlying microscopic dynamics. They also provide a systematic framework for incorporating higher energy interactions and allow for the calculation of physical quantities that would otherwise be too complex to calculate.

What are the limitations of effective field theories?

One limitation of effective field theories is that they are only applicable within a certain energy range. As the energy scale increases, the effects of higher energy interactions become more significant and the EFT breaks down. Additionally, effective field theories require the use of approximations and assumptions, which may introduce errors in the calculations.
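Fermi's theory of beta decay is the standard illustration of such a breakdown: its scattering amplitudes grow with energy, and with [itex]G_F \approx 1.17\times 10^{-5}\ \mathrm{GeV}^{-2}[/itex] unitarity fails near
[tex]E \sim G_F^{-1/2} \approx 300\ \mathrm{GeV},[/tex]
signalling that new degrees of freedom (the W and Z bosons, as it turned out) had to appear at or below that scale.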

How are effective field theories used in practice?

Effective field theories are used in a variety of fields, including particle physics, condensed matter physics, and cosmology. They are often used to study the behavior of systems at different energy scales, such as the interactions between particles in a collider experiment or the evolution of the universe. They are also used to make predictions and guide experiments, as well as to develop new theoretical frameworks and models.
