Renormalisation and proliferation

  • Thread starter Bobhawke
In summary, running the renormalisation group towards low energy loses information about the microscopic theory, so running it the other way means guessing the simplest high-energy theory consistent with the measured low-energy couplings.
  • #1
Bobhawke
If you have a spin system where integrating out spins leads to new interactions, i.e. proliferation, what happens if you go in the other direction and add in more spins? Do you still get proliferation?
 
  • #2
The procedure due to Kadanoff, where you integrate out some portion of a spin lattice to leave a resized lattice, is called decimation. As you've noticed, for many lattices you get the same lattice back but not the same couplings, i.e. you start with nearest-neighbour and end up with next-to-nearest-neighbour in addition. So I guess the question is: if you have a next-to-nearest-neighbour interaction, can you insert extra spins to get back to only nearest-neighbour? My guess is that yes you can, but it will not be the uniform nearest-neighbour coupling that you're hoping for; only for a certain relationship between nn and nnn will you get back to a uniform nn.
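
For concreteness, here is a minimal numerical sketch (just a toy of my own) of the one case where decimation happens to close exactly: the 1D Ising chain, where summing out every second spin gives back a pure nearest-neighbour coupling [tex] K' = \tfrac{1}{2}\ln\cosh 2K [/tex]. In 2D the same bookkeeping is what generates the extra nnn and multi-spin terms:

[code]
import math

def decimate_1d_ising(K):
    """Sum out the middle spin of a three-site chain s1-s2-s3 and read off the new coupling K'."""
    # Z(s1, s3) = sum_{s2 = +-1} exp(K s1 s2 + K s2 s3) = 2 cosh(K (s1 + s3))
    z_aligned = 2 * math.cosh(2 * K)  # s1 = s3
    z_opposed = 2.0                   # s1 = -s3
    # Match to A exp(K' s1 s3): the ratio gives exp(2 K')
    return 0.5 * math.log(z_aligned / z_opposed)

K = 0.8
print(decimate_1d_ising(K))              # brute-force K'
print(0.5 * math.log(math.cosh(2 * K)))  # analytic K' = (1/2) ln cosh(2K); same number
[/code]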

Fundamentally, renormalisation (in the Wilsonian sense) loses information. You trade precise information about the small-scale behaviour for a numerical adjustment of the coupling constants. Going the other way, you have to make choices; for a given set of measured couplings at large scale, you would have to simply guess the right form for the small scale. Usually, we choose the simplest set of couplings possible subject to symmetry considerations (and they are only considerations; I think one of the biggest paradigm shifts in high energy physics in recent decades is the realisation that the vacuum is a greatly broken symmetry state). In the example above, it's as if you decided that the nnn coupling is distasteful, so you postulate a finer lattice on which a simpler coupling would reproduce it and go off looking for such a coupling. If it genuinely existed, you would find a helpful relationship between nn and nnn on the original lattice, and you would only need a single constant on the finer lattice. It's also possible that you fail, and then you would have to consider whether having a textured nn coupling (i.e. not translation invariant) is okay.

Incidentally, it may be said that the difference between high energy physics and condensed matter is which way round you run renormalisation. In HEP you're trying to figure out the simpler/more symmetric fundamental theory given the low energy behaviour, and in condensed matter you're trying to figure out what the effective (i.e. simple) theory is at low energy, given the high energy "exact" theory (as far as physics at human scale is concerned, we already have the theory of everything in the non-relativistic Schrödinger equation for electrons and nuclei + Coulomb interaction + occasional spin-orbit coupling).
 
  • #3
genneth said:
Incidentally, it may be said that the difference between high energy physics and condensed matter is which way round you run renormalisation. In HEP you're trying to figure out the simpler/more symmetric fundamental theory given the low energy behaviour, and in condensed matter you're trying to figure out what the effective (i.e. simple) theory is at low energy, given the high energy "exact" theory (as far as physics at human scale is concerned, we already have the theory of everything in the non-relativistic Schrödinger equation for electrons and nuclei + Coulomb interaction + occasional spin-orbit coupling).

I don't want to hijack the thread, but as an effective field theorist, I'm not happy with this statement! This is what "string theory" might do, but if you're doing phenomenology, you are always interested in the "low energy physics". So I would claim that CM and HEP are the same. Even if you are a GUT-physics person, or a SUSY person (like me!), you would write down your theory at the UV scale and then run down to the electroweak scale to extract predictions.

Sorry, I'll shut up now, but I just wanted to say that. :wink:
 
  • #4
I was actually thinking a bit about how condensed matter and particle physics seem to be related by different renormalisation directions. I am studying lattice QCD at the moment, and what got me thinking about this is the different use of the term "irrelevant operator" in condensed matter and high energy physics. On the lattice, an irrelevant operator is one that becomes less important as the lattice spacing becomes smaller, i.e. as you "zoom in" on the system, but in condensed matter the irrelevant operators (or fields, or whatever) are the ones that become less important as you increase the scale, i.e. "zoom out".

But then I started thinking about proliferation. If you integrate out spins and as a result have a bunch of irrelevant fields which play no part in the large-scale physics, then if you reverse that process, i.e. add in more spins, do you regenerate those irrelevant fields? And if you do, isn't that very much the same as proliferation, i.e. as you change the scale you need to include ever more fields in order to reproduce the same physics? But I don't think this can be right: it suggests that a theory might be renormalisable in one direction but not in the other. I don't have an explanation at the moment for why this is wrong, though.

Another thing: when you integrate out spins you get a relation between the new coupling K' and the old coupling K. If you invert that relationship, i.e. solve for K in terms of K', is this the relation you would get if you added in more spins as I suggested at the start?
 
  • #5
My understanding is that you choose what you consider the relevant degree of freedom, then start with all terms consistent with the symmetries.

Running it downwards tells you what the theory looks like at low energies. Running it upwards tells you whether the relevant degree of freedom has a chance of being fundamental (having a continuum limit). Even if it has a chance of being fundamental, it may not be in real life, since the "true" theory with more and different degrees of freedom may look like a theory with only the relevant degree of freedom at low energy.
 
  • #6
I think what atyy said gets to the heart of the matter. A fundamental assumption when running towards lower energy is that of universality, i.e. different microscopic systems display the same effective behaviour. This really is an assumption, though a rigorous use of renormalisation will show you when that assumption is not self-consistent.

If you have found a complete set of coupling constants, e.g. 1D Ising model with just nearest neighbour coupling, then yes, you can run the recursive relation "backwards" to find a high-energy theory that produces the low energy one. But notice that it's really choosing one possibility out of a whole manifold; I don't know if there are any general physical considerations that would select that one as somehow preferred.
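
For that 1D Ising example, "running the recursion backwards" can be made completely explicit (a small sketch only, with the caveat that the inverse exists just for [itex] K' \geq 0 [/itex], which already shows how constrained the backwards direction is):

[code]
import math

def coarsen(K):
    """One decimation step for the 1D Ising chain: K -> K' on a lattice twice as coarse."""
    return 0.5 * math.log(math.cosh(2 * K))

def refine(K_prime):
    """The inverse step: the coupling on a lattice twice as fine that reproduces K'."""
    return 0.5 * math.acosh(math.exp(2 * K_prime))  # only defined for K' >= 0

K = 0.7
K_coarse = coarsen(K)
print(K_coarse, refine(K_coarse))  # the round trip returns the original K = 0.7
[/code]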

Incidentally, one should be careful about the relevance of terms. We are mostly considering perturbative expansions about a Gaussian fixed point (because those are the only theories we can deal with). It can be shown that the scaling behaviour of terms obeys their "engineering" dimensions at such points. However, not all fixed points are Gaussian, and at those, terms scale differently, sometimes with their scaling powers shifted by amounts of order unity. A fairly clear example is the Wilson-Fisher point in [itex]\phi^4[/itex] theory in 3D. The technical way to treat it is to do a perturbation in dimension, i.e. [itex]\epsilon = 4-d[/itex], but the eventual point at [itex]\epsilon=1[/itex] is clearly outside the radius of convergence.
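
To make the engineering-dimension counting concrete, a small sketch at the Gaussian fixed point, where a scalar field has mass dimension [itex](d-2)/2[/itex] and so the [itex]\phi^n[/itex] coupling has dimension [itex]d - n(d-2)/2[/itex]; for [itex]\phi^4[/itex] this is [itex]4-d[/itex], i.e. exactly the [itex]\epsilon[/itex] of the epsilon expansion:

[code]
def phi_n_coupling_dimension(n, d):
    """Engineering (mass) dimension of the phi^n coupling at the Gaussian fixed point in d dimensions."""
    field_dim = (d - 2) / 2       # [phi] = (d - 2)/2
    return d - n * field_dim      # [g_n] = d - n (d - 2)/2

for d in (3, 4, 6):
    dim = phi_n_coupling_dimension(4, d)
    kind = "relevant" if dim > 0 else "marginal" if dim == 0 else "irrelevant"
    print(f"d = {d}: [lambda] = {dim}, {kind} at the Gaussian fixed point")
[/code]

At a non-Gaussian fixed point these naive exponents pick up anomalous-dimension corrections, which is the point about Wilson-Fisher above.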

blechman: of course what I said is a vast generalisation; even within condensed matter there are people working upwards in energy scale. But as a whole, the aim of HEP is to "burrow down" and of CM is to "build up". To me, it's fascinating that the same kinds of structure keep coming up again and again at all sorts of different scales; I wish HEP and CM would talk to each other more.
 
  • #7
Actually Bobhawke, the lattice and CM examples you gave for "irrelevant" are the same thing. Saying "irrelevant refers to things vanishing as the lattice spacing goes to zero" is the same as saying that the effect goes away as you zoom OUT (yes, OUT, since that means that a/L goes to zero, where L is the length scale of your probe). Setting a to zero is EQUIVALENT to setting L to infinity, so this is an INFRARED statement, not a UV one.

Remember: whenever you have a scale in your problem, the physical quantities are the RATIOS of these scales (these are the dimensionless quantities that it makes sense to send to zero).

So "irrelevant" operators ALWAYS mean "irrelevant in the IR" no matter who you ask.

Hope that helps.

genneth: of course I was being obnoxious with that post. No hard feelings either way, I hope! :wink:
 
  • #8
Hey blechman,

One more thing I don't understand is why in lattice we can only calculate dimensionless quantities, and why it does not make sense to send a dimensionful quantity to 0.

Also, thanks for the informative replies everyone
 
  • #9
I thought about your last post a bit more. Please tell me if this is correct:

It seems to me there are actually 3 different scales. There is the fundamental scale that is intrinsic to the theory - in lattice this is the lattice spacing a. There is no such thing as something smaller than this scale within the theory.
Then there is the scale that one probes the theory to, L. This is like the zoom factor in a camera.
Finally there is the scale of the coupling, [tex] \lambda [/tex]

Now, to say that an operator is irrelevant as you zoom out means that the length that we probe to, L, is large compared to the coupling, [tex] \lambda [/tex], or basically that L is going to infinity quicker than [tex] \lambda [/tex] is.
In lattice QCD, for irrelevant operators the "overall" coupling is a multiplied by [tex] \lambda [/tex]. To say that this operator is irrelevant means that a is going to 0 quicker than [tex] \lambda [/tex] is going to infinity (that is, if it is increasing or going to infinity at all). In this case, as long as the probe length L is not comparable to the fundamental length a, we will not see any effects from such an operator.
Further, just as you said, taking a to 0 is exactly the same as taking L to infinity.

Does this sound right?
 
  • #10
Bobhawke said:
Hey blechman,

One more thing I don't understand is why in lattice we can only calculate dimensionless quantities, and why it does not make sense to send a dimensionful quantity to 0.

Also, thanks for the informative replies everyone

Is the distance from the Earth to the sun "large"? Well, if you're a proton, it's HUGE. If you're the Virgo Cluster, then it is insignificantly tiny!

Moral: one cannot talk about "size" of a dimensionful quantity. One can only talk about size of a dimensionful quantity relative to some fixed size. That is: a RATIO of scales! Therefore, only dimensionless quantities have an unambiguous "size".

This is not special to lattice; it's a fact of physics.
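
The same point in numbers (very rough order-of-magnitude values, just for illustration):

[code]
AU            = 1.5e11   # Earth-Sun distance in metres (roughly)
proton_radius = 0.9e-15  # proton charge radius in metres (roughly)
virgo_scale   = 7e22     # Virgo Cluster scale in metres (a few Mpc, very roughly)

print(AU / proton_radius)  # ~2e26: "huge" relative to a proton
print(AU / virgo_scale)    # ~2e-12: "tiny" relative to the Virgo Cluster
[/code]
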
Bobhawke said:
I thought about your last post a bit more. Please tell me if this is correct:

...

Does this sound right?

I'm confused by this post. In particular, I'm confused by your introduction of "\\lambda". What is that scale? Do you mean your renormalization scale? Or perhaps the Landau Pole of the coupling ([itex]\Lambda_{\rm QCD}[/itex])?

In any event, I'm not sure what \\lambda has to do with relevant/irrelevant. Let me rephrase my understanding of RG:

You have two scales: the UV cutoff, which is the largest energy (smallest wavelength) available to you; and the energy of your probe. If you send in a mode of wavelength L, you are interested in what the physics at the scale L is. The RG gives you a way to derive that physics from the theory that was defined at the cutoff.

When we talk about IRRELEVANT operators, we mean that as the scale L gets larger (energy decreases), the effects of these operators vanish. RELEVANT operators are the opposite. This naming makes sense, since generally we are interested in the limit where L/a gets very large (remember my post above).

Let me emphasize: another way to think of RG is that the cutoff is a PARAMETER of your theory - it is not allowed to change! Talking about "a going to zero" is (philosophically) wrong: what you REALLY mean is that your PROBE L has larger and larger wavelengths relative to a. It is the PROBE's energy that you are allowed to adjust, not the cutoff itself. That's set by G-d, or whatever. Of course, once again, in practice (see above) one only means L/a gets large, so you can think of that practically by letting a vanish. But to be completely honest with your RG analysis, this is wrong.

None of this requires the introduction of \\lambda. I'm not sure why you include it.
 
  • #11
Yeah sorry my last post was confused.

I was wrong in calling [tex] \lambda [/tex] a scale. It is a coupling that changes with the scale. I think I need to make the distinction between a relevant/irrelevant operator and an important/unimportant operator too. It's clear what irrelevant is; it is just as you said, that as we zoom out the coupling gets smaller. An unimportant operator is one whose coupling is small enough that its effects are negligible at the particular scale you're talking about. So an operator could be irrelevant, but if we were at a small enough scale it would still be important.

What I meant before is that the irrelevant operators that you can add to lattice actions are both irrelevant and unimportant.
I believe the confusion in my last post came from conflating irrelevant and unimportant.
 
  • #12
Also, I don't know what's wrong with my LaTeX coding, but obviously I mean the letter lambda where it says \\lambda.
 
  • #13
Thinking about this a bit more:

Actually, I don't think the lattice spacing, a, is a fundamental parameter. It is really the probe size.

The Standard Model we know is a low energy effective theory of some more fundamental theory, right? And maybe this new theory would be a low energy approximation to an even more fundamental theory, etc. None of these have a cutoff that is a fundamental parameter - they have a cutoff which tells you when the theory stops making sense. But at some point one would (maybe) get to a theory which is really fundamental, meaning that it would have a cutoff which is indeed a fundamental parameter.

Now we say we formulate the Standard Model in a "continuum". But we know there must be some fundamental cutoff. When we say we formulate the SM in a "continuum", what we really mean is that the physics the SM validly describes is at a scale so big compared to the fundamental cutoff that it is very continuum-like - the effects from the fundamental cutoff are negligible.

The lattice spacing in lattice QCD isn't a fundamental parameter - you don't actually put the lattice spacing into a calculation. You just adjust the coupling constant until its value corresponds to the one you would get at the lattice spacing that you want. It is just the probe size.
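
As a rough sketch of that statement (one-loop asymptotic scaling only, ignoring the two-loop prefactor and the fact that real simulations set the scale by matching a measured physical quantity), it is the bare coupling g that fixes a, here in units of the lattice Lambda parameter:

[code]
import math

def lattice_spacing_in_lambda_units(g, n_f=0):
    """One-loop asymptotic scaling estimate: a * Lambda_lat ~ exp(-1 / (2 b0 g^2))."""
    b0 = (11 - 2 * n_f / 3) / (16 * math.pi ** 2)
    return math.exp(-1.0 / (2 * b0 * g ** 2))

for g in (1.0, 0.9, 0.8):
    # decreasing the bare coupling makes the lattice finer (asymptotic freedom)
    print(g, lattice_spacing_in_lambda_units(g))
[/code]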

So we can talk about taking a to 0 consistently. More than that, the operators that are irrelevant as a goes to 0, i.e. as you zoom in, are not the same as the operators that are irrelevant as a goes to infinity, i.e. as you zoom out. So I think there is a difference between irrelevant operators on the lattice and irrelevant operators in CM.

That is unless I've made a mistake in my reasoning.
 
  • #15
Bobhawke: If I shine a flashlight on a rock, the "probe size" is the wavelength of the visible light, not the distance between the atoms that make the rock! Furthermore, since the lattice size is so much smaller than the wavelength of visible light, the rock won't shatter when I shine a flashlight on it! So physics is indeed "different" at visible light scales, rather than inter-atomic scales.

The atomic spacing is a "fundamental parameter" - alright, let's not get lost in the semantics, the "fundamental parameters" are really the electron mass and charge... well, maybe it's the string length and tension... well, maybe it's...

AHAHAH! Stop with the "fundamentalism!" :wink:

Seriously, when I say "fundamental parameter" I mean what you said above: it's the scale at which our theory stops working. Perhaps you can calculate this scale in terms of other scales, making it not "fundamental" but the beauty of EFT is that you don't HAVE to know what the UV theory is!

The point is, once again, from an EFT point of view the fundamental parameter is "G-d given" - it is what it is. It's adjustable in the sense that you can tune your "fundamental parameters" such as electron mass, charge, etc, to GET the proper value of the cutoff, but this is, once again, just semantics. The point is: once you fix your "fundamental parameters" the cutoff is fixed.

What is not fixed is the probe scale - I could use a flashlight, an X-ray machine or a radio antenna to probe my sample. The end results will (numerically) be different based on which probe I chose. That is what RG (and in fact, EFT in general) tell you.

genneth's reference is an excellent one. Check it out.
 
  • #16
I apologise if I am beating a dead horse here, and I am of course grateful for all replies. But...

Let's continue with the rock-flashlight analogy. Say the theory I have is one where atoms are the smallest things I know about. Then the cutoff is the size of the atom. Now we might wonder what predictions our theory makes when you probe it at a scale, let's call it s, which is bigger than the cutoff of the theory (because, as you say, the theory stops making sense at scales equal to or smaller than the cutoff). We could do this by shining a flashlight on the rock with wavelength s. Then we could make a computer simulation of our theory of atoms and adjust all the couplings to their value at the desired scale. In this way we could get some predictions from our theory and see if they match up with our rock-flashlight experiment.

But now let's think about QCD - it too has some cutoff where it stops working. But say we are interested in what its predictions are at a certain scale, let's call it (suggestively :P) a. Then we could make a computer simulation with all the couplings of the theory adjusted to their value at the scale a. And we get some predictions out. And we could compare those predictions with results from a particle accelerator doing experiments at the scale a.

a is exactly analogous here to s in the flashlight-rock example. It is just the length that we experimentally probe the theory to, and the length we put into computers to get the predictions of the theory at that length.

Further, it seems pretty clear that we can change a - people do calculations on different sized lattices all the time! And further, a is not the scale at which QCD stops working!

Again apologies if I am being silly, but I still think the lattice spacing is the probe length.
 
  • #17
Bobhawke said:
But now let's think about QCD - it too has some cutoff where it stops working. But say we are interested in what its predictions are at a certain scale, let's call it (suggestively :P) a. Then we could make a computer simulation with all the couplings of the theory adjusted to their value at the scale a. And we get some predictions out. And we could compare those predictions with results from a particle accelerator doing experiments at the scale a.

a is exactly analogous here to s in the flashlight-rock example. It is just the length that we experimentally probe the theory to, and the length we put into computers to get the predictions of the theory at that length.

Further, it seems pretty clear that we can change a - people do calculations on different sized lattices all the time! And further, a is not the scale at which QCD stops working!

QCD is asymptotically free. That means that it is truly UV renormalizable. It does not have a cutoff in the EFT sense!

This is probably what's causing the confusion. And perhaps I did not properly clarify it earlier. In any lattice theory, you impose your lattice spacing as the UV scale (that is, you have integrated out all physics at distance scales smaller than the lattice spacing). You then do your calculations at that scale on the computer. You are correct about this. From that point of view, calling a the "probe scale" is correct.

However, depending on what you're doing, you're not done! When you want to actually use this calculation, you then typically run the result from the scale a to a larger length scale using perturbation theory RG analysis. And this is what I was thinking about.
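
A sketch of that perturbative step (one-loop running only, with a made-up starting value, just to show the structure; real lattice matching uses improved couplings and schemes):

[code]
import math

def run_alpha_s(alpha0, mu0, mu, n_f=3):
    """One-loop QCD running of alpha_s from the scale mu0 to the scale mu (same units)."""
    b0 = 11 - 2 * n_f / 3
    return alpha0 / (1 + alpha0 * b0 / (2 * math.pi) * math.log(mu / mu0))

# e.g. take a coupling defined at the inverse lattice spacing 1/a ~ 2 GeV down to 1 GeV
print(run_alpha_s(alpha0=0.30, mu0=2.0, mu=1.0))
[/code]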

So I guess the right thing to say is that there are two calculations going on with lattice gauge theory: the nonperturbative lattice calculation, and the perturbative dressing that comes next.

That being said: although for lattice-QCD I retract my former statements, I stand by my rock example. That is: if I want to know what happens when I send x-rays bouncing off a rock, I would be interested in the physics at the x-ray wavelength, not the lattice size. But you are right that this is not what lattice-QCD people are usually doing.

I stand corrected.
 
  • #18
I don't think anyone would question the power of the rock-flashlight paradigm :P. After all, isn't all of physics really just a rock-flashlight experiment?

But yeah, I understand now what you were saying. Thanks for the discussion blechman, and also thanks for that paper genneth, it's very good.
 

What is renormalisation?

Renormalisation is a process in theoretical physics where infinities in a physical theory are removed or absorbed into certain physical parameters. This ensures that the predicted values of physical quantities are finite and well-defined.

Why is renormalisation important in quantum field theory?

Renormalisation is important in quantum field theory because it allows us to make meaningful predictions and calculations about physical quantities. Without renormalisation, the infinities in the theory would render it mathematically inconsistent and unusable.

What is proliferation in the context of renormalisation?

In this context, proliferation describes how a renormalisation group transformation generates new interaction terms that were not present in the original theory, for example next-to-nearest-neighbour couplings appearing when nearest-neighbour spins are integrated out. Keeping track of these generated terms is necessary for the coarse-grained theory to reproduce the same physics.

How does renormalisation relate to the concept of scale invariance?

Renormalisation and scale invariance are closely related. Under a renormalisation group transformation the couplings of a theory generally change ("run") with the scale; scale invariance corresponds to fixed points of this flow, where the couplings stop changing. Studying the flow near such fixed points is what allows predictions to be made at different scales without recalculating the entire theory.

What are the practical applications of renormalisation and proliferation?

Renormalisation and proliferation have many practical applications in theoretical physics, particularly in the field of quantum field theory. They are used to make accurate predictions about physical quantities, such as particle masses and interaction strengths, and have been crucial in the development of theories such as the Standard Model of particle physics.
