Grassmann Algebra: Derivative of $\theta_j \theta_k \theta_l$

In summary, the conversation discusses Grassmann numbers and their derivatives, as well as properties of Grassmann integration. It also touches on complex integration and the derivation of the formula for [itex]d\theta \, d\bar{\theta}[/itex].
  • #36
fzero said:
The degree corresponds to the expression [tex]\Lambda^D[/tex] in the primitive cutoff scheme (where D=0 corresponds to [tex]\log \Lambda[/tex]). The polynomial in couplings and momenta must be of scaling dimension D to compensate.

The expression of couplings and momenta must have scaling or mass dimension D. The actual number of n-pt couplings would be determined from [tex]V_n[/tex], while the dependence on momenta is determined by the scaling dimension.
I'm afraid I still don't really understand why the examples of the polynomials I posted in my previous post weren't working out.

fzero said:
The renormalized parameters are the physical parameters. If you were to measure the fine structure constant from the strength of the electromagnetic interaction, you would find a different value at different energy scales. If you were to scatter electrons at a few MeV, you'd find a value close to 1/137, while if you scattered electrons at a center of mass energy around 80 GeV, you'd find 1/128.

So I'm confused as to how the renormalised parameters arise in renormalisation. Is it the addition of the counter terms that changes the parameters? And if so, how does this work?


Finally, if renormalisation produces the renormalised, physical parameters that we were after all along, why do we then go and define bare parameters! What's the point in that?

Thanks.
 
  • #37
latentcorpse said:
I'm afraid I still don't really understand why the examples of the polynomials I posted in my previous post weren't working out.

Go back and read the renormalization theorem. It says that the polynomial has dimension D. Somehow you misread this as "degree D," which is incorrect.

So I'm confused as to how the renormalised parameters arise in renormalisation. Is it the addition of the counter terms that changes the parameters? And if so, how does this work?

The addition of the counterterms to the original Lagrangian gives the bare parameters. The renormalized quantities are what we get when we remove the divergence from the bare parameters.

Finally, if renormalisation produces the renormalised, physical parameters that we were after all along, why do we then go and define bare parameters! What's the point in that?

It's natural to define the bare parameters because there's a counterterm for every original term in the Lagrangian.
 
  • #38
fzero said:
Go back and read the renormalization theorem. It says that the polynomial has dimension D. Somehow you misread this as "degree D," which is incorrect.
Ok I see. That's fine then.

fzero said:
The addition of the counterterms to the original Lagrangian gives the bare parameters. The renormalized quantities are what we get when we remove the divergence from the bare parameters.

It's natural to define the bare parameters because there's a counterterm for every original term in the Lagrangian.

So I have the result for [itex]\phi^4[/itex] theory with 1 loop that the addition of counterterms gives [itex]m_B^2=\frac{m^2+B}{Z}=m^2 ( 1 + \frac{\lambda}{16 \pi^2 \epsilon})[/itex]

So all I need to do to get the physical, renormalised mass is just rearrange the above for m?

That would give [itex]m^2=\frac{m_B^2}{1+\frac{\lambda}{16 \pi^2 \epsilon}}[/itex]
However, I don't see how this is physical as it still has dependence on the regulator and in fact as [itex]\epsilon \rightarrow 0[/itex] we see [itex]m^2 \rightarrow 0[/itex]

Surely this can't be right, can it?

Thanks.
 
  • #39
latentcorpse said:
So I have the result for [itex]\phi^4[/itex] theory with 1 loop that the addition of counterterms gives [itex]m_B^2=\frac{m^2+B}{Z}=m^2 ( 1 + \frac{\lambda}{16 \pi^2 \epsilon})[/itex]

So all I need to do to get the physical, renormalised mass is just rearrange the above for m?

That would give [itex]m^2=\frac{m_B^2}{1+\frac{\lambda}{16 \pi^2 \epsilon}}[/itex]
However, I don't see how this is physical as it still has dependence on the regulator and in fact as [itex]\epsilon \rightarrow 0[/itex] we see [itex]m^2 \rightarrow 0[/itex]

Surely this can't be right, can it?

Thanks.

The bare mass itself depends on the regulator and is divergent, so that expression isn't very helpful. The right way to derive the renormalized parameters is to use the bare parameters to write down the renormalization group equations and then solve those for the scale dependent renormalized parameters.
 
  • #40
fzero said:
The bare mass itself depends on the regulator and is divergent, so that expression isn't very helpful. The right way to derive the renormalized parameters is to use the bare parameters to write down the renormalization group equations and then solve those for the scale dependent renormalized parameters.

We discussed how to solve the Callan-Symanzik equation (which I think is the RG equation) by introducing a running coupling. This can be used to find the coupling as a function of the energy scale, which tells us the regions in which perturbation theory is applicable. Is this what you are talking about?

How would one solve it for the mass though?
 
  • #41
latentcorpse said:
We discussed how to solve the Callan-Symanzik equation (which I think is the RG equation) by introducing a running coupling. This can be used to find the coupling as a function of the energy scale, which tells us the regions in which perturbation theory is applicable. Is this what you are talking about?

How would one solve it for the mass though?

For [tex]\lambda \phi^4[/tex] you have a pair of equations to one-loop order which are

[tex] \mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2},~~\mu \frac{dm^2}{d\mu} = \frac{m^2\lambda}{16\pi^2}.[/tex]

They are simple to integrate.
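For example, separating variables and writing [tex]\lambda_0 = \lambda(\mu_0)[/tex], [tex]m_0^2 = m^2(\mu_0)[/tex] for the values at some reference scale [tex]\mu_0[/tex] (my notation here, it may differ from your notes), the coupling equation gives

[tex]\int_{\lambda_0}^{\lambda}\frac{d\lambda'}{\lambda'^2} = \frac{3}{16\pi^2}\int_{\mu_0}^{\mu}\frac{d\mu'}{\mu'}~~\Rightarrow~~\lambda(\mu) = \frac{\lambda_0}{1 - \frac{3\lambda_0}{16\pi^2}\ln\frac{\mu}{\mu_0}},[/tex]

and feeding that into the mass equation,

[tex]\frac{d\ln m^2}{d\ln\mu} = \frac{\lambda(\mu)}{16\pi^2}~~\Rightarrow~~m^2(\mu) = m_0^2\left(1 - \frac{3\lambda_0}{16\pi^2}\ln\frac{\mu}{\mu_0}\right)^{-1/3} = m_0^2\left(\frac{\lambda(\mu)}{\lambda_0}\right)^{1/3}.[/tex]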
 
Last edited:
  • #42
fzero said:
For [tex]\lambda \phi^4[/tex] you have a pair of equations to one-loop order which are

[tex] \mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2},~~\mu \frac{dm^2}{d\mu} = \frac{m^2\lambda}{16\pi^2}.[/tex]

They are simple to integrate.

I have [itex]\beta_\lambda=\frac{3 \lambda^2}{16 \pi^2}[/itex]
where [itex]\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon[/itex] with [itex]\hat{\beta}_\lambda=\mu \frac{\partial \lambda}{\partial \mu}[/itex]
So it appears that we can only get the equation you had, and integrate to get an [itex]\epsilon[/itex]-independent result, if [itex]\epsilon=0[/itex], i.e. if [itex]d=4[/itex]. Surely this procedure should also work in other dimensions though?

My m equation is slightly different; I have

[itex]\frac{\mu}{m^2} \frac{d m^2}{d \mu} = \frac{\lambda}{16 \pi^2}[/itex]

Either way, I can see how this will work out.

So we get the renormalised parameters by solving for the coefficients of the Callan-Symanzik eqn?

Is it ok that m is going to turn out to be a function of lambda?

Also, I have a calculation in my notes I am unsure of. He is showing that naive quantisation of the pure SU(N) Yang-Mills Lagrangian (in the free theory) encounters a problem because zero modes appear as a result of the action being gauge invariant.
Anyway, I am struggling to prove the gauge invariance of the action.
He writes that under an infinitesimal gauge transformation [itex]A_\mu^a \rightarrow A_\mu^a + \partial_\mu \lambda^a[/itex],
we find [itex]S_0=\int d^dx \mathcal{L}_{\text{YM}} = -\int d^dx \frac{1}{2} A_\mu^a \Delta^{\mu \nu} A_\nu^a[/itex] changes by
[itex]\delta S_0 = \frac{1}{2} \int d^dx \left( \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a \right)[/itex]
I understand this so far (it's just substitution).
He then claims that this can be integrated by parts to give
[itex]-\int d^d x A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a=0[/itex]

I don't follow either of these last two equalities. Why does integration by parts give that term, and why does it then vanish?

Thanks again!
 
Last edited:
  • #43
latentcorpse said:
I have [itex]\beta_\lambda=\frac{3 \lambda^2}{16 \pi^2}[/itex]
where [itex]\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon[/itex] with [itex]\hat{\beta}_\lambda=\mu \frac{\partial \lambda}{\partial \mu}[/itex]
So it appears that we can only get the equation you had, and integrate to get an [itex]\epsilon[/itex]-independent result, if [itex]\epsilon=0[/itex], i.e. if [itex]d=4[/itex]. Surely this procedure should also work in other dimensions though?

My m equation is slightly different; I have

[itex]\frac{\mu}{m^2} \frac{d m^2}{d \mu} = \frac{\lambda}{16 \pi^2}[/itex]

Either way, I can see how this will work out.

So we get the renormalised parameters by solving for the coefficients of the Callan-Symanzik eqn?

Is it ok that m is going to turn out to be a function of lambda?

The solution of the RG equations is a curve in the [tex](m,\lambda)[/tex] plane parameterized by [tex]\mu[/tex]. A form like [tex]m^2 = f(\lambda,\mu)[/tex] is reasonable.

Also, I have a calculation in my notes I am unsure of. He is showing that naive quantisation of the pure SU(N) Yang-Mills Lagrangian (in the free theory) encounters a problem because zero modes appear as a result of the action being gauge invariant.
Anyway, I am struggling to prove the gauge invariance of the action.
He writes that under an infinitesimal gauge transformation [itex]A_\mu^a \rightarrow A_\mu^a + \partial_\mu \lambda^a[/itex],
we find [itex]S_0=\int d^dx \mathcal{L}_{\text{YM}} = -\int d^dx \frac{1}{2} A_\mu^a \Delta^{\mu \nu} A_\nu^a[/itex] changes by
[itex]\delta S_0 = \frac{1}{2} \int d^dx \left( \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a \right)[/itex]
I understand this so far (it's just substitution).
He then claims that this can be integrated by parts to give
[itex]-\int d^d x A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a=0[/itex]

I don't follow either of these last two equalities. Why does integration by parts give that term, and why does it then vanish?

Thanks again!

You have to integrate by parts with the [tex]\partial_\mu[/tex] in the first term of [tex]\delta S_0[/tex]. This gives a surface term and a term of the form quoted. [tex]\Delta^{\mu \nu} \partial_\nu \lambda^a=0[/tex] is simple algebra using the expression for [tex](\Delta A)_\nu[/tex] given on page 69 of those notes.
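If the kinetic operator has the standard form [tex]\Delta^{\mu\nu} = \eta^{\mu\nu}\partial^2 - \partial^\mu\partial^\nu[/tex] (I'm assuming this matches page 69 up to sign and normalization conventions), the algebra is just

[tex]\Delta^{\mu\nu}\partial_\nu\lambda^a = \left(\eta^{\mu\nu}\partial^2 - \partial^\mu\partial^\nu\right)\partial_\nu\lambda^a = \partial^2\partial^\mu\lambda^a - \partial^\mu\partial^2\lambda^a = 0,[/tex]

since partial derivatives commute.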
 
  • #44
fzero said:
The solution of the RG equations is a curve in the [tex](m,\lambda)[/tex] plane parameterized by [tex]\mu[/tex]. A form like [tex]m^2 = f(\lambda,\mu)[/tex] is reasonable.
But our [itex] \lambda[/itex] solution was just in terms of [itex]\lambda[/itex] and [itex]\mu[/itex]. Shouldn't it also have some [itex]m[/itex] dependence if it is in the [itex](m,\lambda)[/itex] plane? Or is that contained within the [itex]\mu[/itex] dependence?

And what about the issue of [itex]\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon[/itex] that I raised in my last post, which seems to suggest this will only work if [itex]\epsilon=0[/itex], i.e. [itex]d=4[/itex]?

fzero said:
You have to integrate by parts with the [tex]\partial_\mu[/tex] in the first term of [tex]\delta S_0[/tex]. This gives a surface term and a term of the form quoted. [tex]\Delta^{\mu \nu} \partial_\nu \lambda^a=0[/tex] is simple algebra using the expression for [tex](\Delta A)_\nu[/tex] given on page 69 of those notes.
Right, I still don't get that. Integrating the first term by parts as you suggest, I find
[itex]\frac{1}{2}\lambda^a \Delta^{\mu \nu} A_\nu^a - \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A^a_\nu[/itex]
So why can I neglect the surface term? Does [itex]\lambda[/itex] vanish at infinity? If so, why?
And how does that second term rearrange into what I want? The derivative is now on the A rather than on the lambda, as we want in the final answer.
 
Last edited:
  • #45
latentcorpse said:
But our [itex] \lambda[/itex] solution was just in terms of [itex]\lambda[/itex] and [itex]\mu[/itex]. Shouldn't it also have some [itex]m[/itex] dependence if it is in the [itex](m,\lambda)[/itex] plane? Or is that contained within the [itex]\mu[/itex] dependence?

It's not necessary, I was just explaining why it would be natural if it did.

And what about the issue of [itex]\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon[/itex] that I raised in my last post, which seems to suggest this will only work if [itex]\epsilon=0[/itex], i.e. [itex]d=4[/itex]?

All of the dependence on the regulator is contained in the bare parameters. Since the bare parameters are independent of the RG scale, the RG equations don't depend on the regulator.

Right, I still don't get that. Integrating the first term by parts as you suggest, I find
[itex]\frac{1}{2}\lambda^a \Delta^{\mu \nu} A_\nu^a - \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A^a_\nu[/itex]
So why can I neglect the surface term? Does [itex]\lambda[/itex] vanish at infinity? If so, why?
And how does that second term rearrange into what I want? The derivative is now on the A rather than on the lambda, as we want in the final answer.

That first term should be a surface integral. That integral should vanish by suitable boundary conditions on either [tex]\lambda[/tex] or [tex]A[/tex], as usual. As for the other term, I misspoke a bit. However, you should still be able to show that [tex]\Delta^{\mu\nu}\partial_\mu A_\nu = 0[/tex].
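With the same assumed form [tex]\Delta^{\mu\nu} = \eta^{\mu\nu}\partial^2 - \partial^\mu\partial^\nu[/tex] as before, this again follows from the derivatives commuting:

[tex]\Delta^{\mu\nu}\partial_\mu A^a_\nu = \left(\eta^{\mu\nu}\partial^2 - \partial^\mu\partial^\nu\right)\partial_\mu A^a_\nu = \partial^2\partial^\nu A^a_\nu - \partial^2\partial^\nu A^a_\nu = 0.[/tex]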
 
  • #46
fzero said:
All of the dependence on the regulator is contained in the bare parameters. Since the bare parameters are independent of the RG scale, the RG equations don't depend on the regulator.
I don't get this - which of beta and beta hat is the bare one? I guess explicitly beta would be bare, as it has obvious epsilon dependence. However, I don't see how this solves the problem?

fzero said:
That first term should be a surface integral. That integral should vanish by suitable boundary conditions on either [tex]\lambda[/tex] or [tex]A[/tex], as usual. As for the other term, I misspoke a bit. However, you should still be able to show that [tex]\Delta^{\mu\nu}\partial_\mu A_\nu = 0[/tex].
Ok I need to get it to the form [itex]-\int d^dx A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a[/itex] though.
So I integrate the 1st term by parts, giving two terms (one of which goes away since it's a surface term), but the remaining term won't combine with the 2nd term in the original integral. I tried integrating the 2nd term by parts, but it didn't give me anything that I could combine with the remaining term.
 
  • #47
latentcorpse said:
I don't get this - which of beta and beta hat is the bare one? I guess explicitly beta would be bare, as it has obvious epsilon dependence. However, I don't see how this solves the problem?

I was making a more general statement whose consequence is that the correct beta functions to use are the ones that are not dependent on the regulator.

Ok I need to get it to the form [itex]-\int d^dx A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a[/itex] though.

You don't need to do so in order to show the result. If you want to do that, you'll need to integrate by parts (2x) with the derivatives in [tex]\Delta^{\mu \nu}[/tex].

So I integrate the 1st term by parts, giving two terms (one of which goes away since it's a surface term), but the remaining term won't combine with the 2nd term in the original integral. I tried integrating the 2nd term by parts, but it didn't give me anything that I could combine with the remaining term.

The non surface terms you end up with both vanish. I don't know why you'd want to integrate either by parts.
 
  • #48
fzero said:
I was making a more general statement whose consequence is that the correct beta functions to use are the ones that are not dependent on the regulator.
Ok. So we agree that we can get to [itex]\beta_\lambda = \frac{3 \lambda^2}{16 \pi^2} + \mathcal{O}(\lambda^3)[/itex]
But [itex]\beta(\lambda)=\hat{\beta}(\lambda) + \lambda \epsilon = \mu \frac{d \lambda}{d \mu} +\lambda \epsilon[/itex] by definition.
Now how do we get rid of the [itex]\lambda \epsilon[/itex] term so that we can integrate up and solve for the renormalised coupling?

fzero said:
You don't need to do so in order to show the result. If you want to do that, you'll need to integrate by parts (2x) with the derivatives in [tex]\Delta^{\mu \nu}[/tex].

The non surface terms you end up with both vanish. I don't know why you'd want to integrate either by parts.

Ok. In the original expression I find that the 2nd term [itex]A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a[/itex] vanishes when we expand out [itex]\Delta^{\mu \nu}[/itex] (which I think is right).

So this means that [itex]\delta S_0 = - \frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a[/itex]
I'm now confused as to what to do to show this is 0. Should I expand out the [itex]\Delta^{\mu \nu}[/itex] as well?

Finally, we say that a theory is renormalisable if it is rendered finite by adding a finite number of counterterms to the initial Lagrangian, and if these counterterms take the same form as terms present in the original Lagrangian.
This is correct, yes?
However, when we say "it is rendered finite", what is "it"? It doesn't really make sense to say "the theory is rendered finite" when it's really something like the divergences in the amputated Green's functions associated with loop diagrams, or something like that...
Can you clear up what "it" (i.e. the thing that gets rendered finite) actually is?
Cheers.
 
Last edited:
  • #49
latentcorpse said:
Ok. So we agree that we can get to [itex]\beta_\lambda = \frac{3 \lambda^2}{16 \pi^2} + \mathcal{O}(\lambda^3)[/itex]
But [itex]\beta(\lambda)=\hat{\beta}(\lambda) + \lambda \epsilon = \mu \frac{d \lambda}{d \mu} +\lambda \epsilon[/itex] by definition.
Now how do we get rid of the [itex]\lambda \epsilon[/itex] term so that we can integrate up and solve for the renormalised coupling?

You can just take [tex]\epsilon\rightarrow 0[/tex].

Ok. In the original expression I find that the 2nd term [itex]A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a[/itex] vanishes when we expand out [itex]\Delta^{\mu \nu}[/itex] (which I think is right).

So this means that [itex]\delta S_0 = - \frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a[/itex]
I'm now confused as to what to do to show this is 0. Should I expand out the [itex]\Delta^{\mu \nu}[/itex] as well?

We already dealt with this term in posts 44 and 45.

Finally, we say that a theory is renormalisable if it is rendered finite by adding a finite number of counterterms to the initial Lagrangian, and if these counterterms take the same form as terms present in the original Lagrangian.
This is correct, yes?
However, when we say "it is rendered finite", what is "it"? It doesn't really make sense to say "the theory is rendered finite" when it's really something like the divergences in the amputated Green's functions associated with loop diagrams, or something like that...
Can you clear up what "it" (i.e. the thing that gets rendered finite) actually is?
Cheers.

The Green functions (connected, amputated or otherwise) are the physical observables. These are what we make finite initially. From them and the renormalization group, we can also show that the physical parameters are finite as well.
 
  • #50
fzero said:
You can just take [tex]\epsilon\rightarrow 0[/tex].
We already dealt with this term in posts 44 and 45.
The Green functions (connected, amputated or otherwise) are the physical observables. These are what we make finite initially. From them and the renormalization group, we can also show that the physical parameters are finite as well.

Got it. Thanks.

Although, one thing is bothering me: when we do the integration by parts [itex]-\frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a[/itex]
haven't we lost an index on the first term there? So the indices aren't balanced in this equation?

Could you take a look at the thread I made on solving the Callan-Symanzik equation please?

Thanks very much.
 
Last edited:
  • #51
latentcorpse said:
Got it. Thanks.

Although, one thing is bothering me: when we do the integration by parts [itex]-\frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a[/itex]
haven't we lost an index on the first term there? So the indices aren't balanced in this equation?

As I mentioned back in post #45, that term should be a surface integral. Look up Stokes' theorem.
 
  • #52
fzero said:
As I mentioned back in post #45, that term should be a surface integral. Look up Stokes' theorem.

[itex]\int_\Omega ( \vec{\nabla} \times \vec{F} ) \cdot d\vec{a} = \int_{\partial \Omega} \vec{F} \cdot d \vec{s}[/itex]

But we don't have a curl in our expression, do we?
 
  • #53
latentcorpse said:
[itex]\int_\Omega ( \vec{\nabla} \times \vec{F} ) \cdot d\vec{a} = \int_{\partial \Omega} \vec{F} \cdot d \vec{s}[/itex]

But we don't have a curl in our expression, do we?

A special case of Stokes' theorem is the divergence theorem, so check that one out.
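Concretely, the form you want here, for a d-dimensional region [tex]\Omega[/tex] with boundary [tex]\partial\Omega[/tex] and outward unit normal [tex]n_\mu[/tex], is

[tex]\int_\Omega d^dx\, \partial_\mu V^\mu = \int_{\partial\Omega} dS\, n_\mu V^\mu,[/tex]

applied here with [tex]V^\mu = \lambda^a \Delta^{\mu\nu} A^a_\nu[/tex].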
 
  • #54
fzero said:
A special case of Stokes' theorem is the divergence theorem, so check that one out.

[itex]
-\frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \int_{S^{d-1}} n_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a
[/itex]

where [itex]n^\mu[/itex] are the components of the vector normal to the surface of the [itex]S^{d-1}[/itex].
Is that correct?

Then do we just say that this term vanishes because either [itex]A[/itex] or [itex]\lambda[/itex] vanishes at infinity? Which one is it though? Am I right in saying it would need to be [itex]\lambda[/itex], since we don't have an A term, only derivatives of A?
Even if it is the [itex]\lambda[/itex] term that vanishes, I don't understand physically why it should.

Thanks.
 
  • #55
latentcorpse said:
[itex]
-\frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \int_{S^{d-1}} n_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a
[/itex]

where [itex]n^\mu[/itex] are the components of the vector normal to the surface of the [itex]S^{d-1}[/itex].
Is that correct?

Then do we just say that this term vanishes because either [itex]A[/itex] or [itex]\lambda[/itex] vanishes at infinity? Which one is it though? Am I right in saying it would need to be [itex]\lambda[/itex], since we don't have an A term, only derivatives of A?
Even if it is the [itex]\lambda[/itex] term that vanishes, I don't understand physically why it should.

Thanks.

We just require that the variation of the fields vanishes at the boundary. We're only considering infinitesimal variations, so we can do this.
 
