Grassmann Algebra: Derivative of $\theta_j \theta_k \theta_l$

  • Thread starter: latentcorpse
  • Tags: Algebra, Grassmann
SUMMARY

The discussion focuses on the differentiation of products of Grassmann numbers, specifically the expression $\frac{\partial}{\partial \theta_i} (\theta_j \theta_k \theta_l)$. Participants clarify that the differentiation follows the rule $\frac{\partial}{\partial \theta_i} (\theta_j \theta_k) = \delta_{ij} \theta_k - \theta_j \delta_{ik}$, emphasizing the importance of the minus sign due to the anticommutative nature of Grassmann variables. Additionally, they explore Grassmann integration, concluding that $\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}$, where $\epsilon$ is the antisymmetric tensor, and discuss the implications of this in various integrals.
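As a worked illustration of the title question (using the left-derivative convention implicit in the rule above), the three-factor case is

$$\frac{\partial}{\partial \theta_i} (\theta_j \theta_k \theta_l) = \delta_{ij}\, \theta_k \theta_l - \delta_{ik}\, \theta_j \theta_l + \delta_{il}\, \theta_j \theta_k,$$

where each sign flip records the derivative anticommuting past one more Grassmann factor.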

PREREQUISITES
  • Understanding of Grassmann algebra and its properties
  • Familiarity with differentiation rules for Grassmann variables
  • Knowledge of antisymmetric tensors and their applications
  • Basic concepts of Grassmann integration
NEXT STEPS
  • Study the differentiation of products of Grassmann numbers in detail
  • Learn about the properties and applications of the antisymmetric tensor $\epsilon_{i_1 \dots i_n}$
  • Explore advanced topics in Grassmann integration techniques
  • Investigate the implications of Grassmann variables in quantum field theory
USEFUL FOR

The discussion is beneficial for physicists, mathematicians, and students studying quantum field theory, particularly those interested in the applications of Grassmann numbers and their integration techniques.

  • #31
latentcorpse said:
I found the calculation in full here:
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2208v1.pdf
See p14 & p15.
I follow up to (2.6): why does he get an $\Omega_4$?
Surely $\int d^4k = \int_{S^3} d\Omega_3 \int k^3\, dk$, no?

Yes, he's wrong.

And then in (2.7), I understand the change of variables he used to get the expression on the left of this line but how does that integrate to give the RHS?

The integral can be done by substitution. I get a slightly different answer than he does.

And then how does (2.8) come about? Surely we have 5 factors of 2 on the bottom, i.e. 32, since we had a $(2\pi)^4$ and also a $1/2$ from (2.7)? He also appears to have only $\pi^2$ rather than $\pi^4$ on the bottom?

I think he has a factor of 2 wrong in the integral above, so it's pointless to try to follow his algebra.

So, given an arbitrary loop integral to compute, is it best to start off by trying a momentum-space cutoff (since it is easier) and, if that fails, then try dim reg? Or should we just always try dim reg straight away, since we know that it works?

It's probably best to use DR, since it's usually not obvious from just one or two diagrams whether the primitive cutoff is going to break something important.

Yep, so I worked out those divergent parts already using the dimensional regularisation prescription, I think. And I understand (or at least am prepared to accept) that we can add new terms to the Lagrangian in order to cancel off these divergences. However, I do not understand why the counterterms in the Lagrangian give the contributions that they do to the amputated $n$-point function,
i.e. why is $\hat{\tau}^{(0)}(p,-p)_{\text{c.t.}}=-Ap^2-B$?
And why do they then give those diagrams drawn underneath?

The counterterms are used to compute new vertices. So you just have to compute Fourier transforms to get the tree level diagrams:

$$A(\partial \phi)^2 \rightarrow - A p^2,$$

$$- B \phi^2 \rightarrow - B,$$

etc. On the RHS we've gotten rid of the factors of $\hat{\phi}(p)$ as usual when writing Feynman rules.
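As a sketch of where the first rule comes from, assuming the Fourier convention $\phi(x) = \int \frac{d^dp}{(2\pi)^d}\, e^{ip\cdot x}\, \hat{\phi}(p)$ (the overall sign of the vertex depends on how your notes turn a Lagrangian term into a Feynman rule):

$$A \int d^dx\, \partial_\mu\phi\, \partial^\mu\phi = A \int \frac{d^dp}{(2\pi)^d}\, (ip_\mu)(-ip^\mu)\, \hat{\phi}(p)\hat{\phi}(-p) = A \int \frac{d^dp}{(2\pi)^d}\, p^2\, \hat{\phi}(p)\hat{\phi}(-p),$$

so each derivative becomes a factor of $\pm ip_\mu$, and stripping the $\hat{\phi}$'s leaves the quoted $p^2$ dependence.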

Ok, so given my convention, we get that the RHS is equal to $\frac{-1}{(p^2+m^2)^2} \hat{F}(p,-p)$ and the LHS is equal to $F(p,-p)$. So what do we substitute for $F(p,-p)$, and why?

$F(p,-p)$ is the connected 2pt function, which should just be the propagator.
 
  • #32
fzero said:
The integral can be done by substitution. I get a slightly different answer than he does.
What would your substitution be? I tried $u=k^2+p^2x(1-x)+m^2$ as well as $u=(k^2+p^2x(1-x)+m^2)^2$, but couldn't get either to work.

fzero said:
The counterterms are used to compute new vertices. So you just have to compute Fourier transforms to get the tree level diagrams:

$$A(\partial \phi)^2 \rightarrow - A p^2,$$

$$- B \phi^2 \rightarrow - B,$$

etc. On the RHS we've gotten rid of the factors of $\hat{\phi}(p)$ as usual when writing Feynman rules.
Ok. So you're saying that we take the Fourier transform to get the momentum-space Feynman rules? How do you know only to Fourier transform the $A$ and $B$ contributions for $\hat{\tau}_2^{(0)}$, though?

And even if I went through and did all the Fourier transforms, how do we know what the corresponding vertices look like?

Lastly, are the vertices he has drawn on that page corresponding to $B,C,D,E$ or $\hat{\tau}_1^{(0)},\hat{\tau}_2^{(0)},\hat{\tau}_3^{(0)},\hat{\tau}_4^{(0)}$?
fzero said:
$F(p,-p)$ is the connected 2pt function, which should just be the propagator.
Ok, but surely the full 2-point function should have external leg contributions as well, no?

My notes also claim that the renormalised parameters $m$, $\lambda$, $\phi$ depend on the RG (renormalisation group) scale $\mu$ but are independent of the cutoff $\epsilon$, whereas the bare parameters are dependent on the cutoff and independent of the RG scale.
I have two questions about this statement:
(i) I thought the whole point of renormalisation was to get physical parameters, i.e. ones that don't change when you change the scale? But clearly these will change, and we have to further manipulate them (i.e. go to the bare parameters) to get fixed ones. What do the bare parameters represent?
(ii) When he talks about the cutoff $\epsilon$, even though this has come from dimensional regularisation, does this correspond exactly to the high-momentum cutoff $\Lambda$ from UV cutoff regularisation, since surely taking the limits $\Lambda \rightarrow \infty$ and $\epsilon \rightarrow 0$ have the same effect?
Thanks.
 
Last edited:
  • #33
latentcorpse said:
What would your substitution be? I tried $u=k^2+p^2x(1-x)+m^2$ as well as $u=(k^2+p^2x(1-x)+m^2)^2$, but couldn't get either to work.

$u=k^2+p^2x(1-x)+m^2$ works. You might want to try again.
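For reference, a sketch of how the substitution goes, assuming the radial integral at this stage has the form $\int k^3\, dk/(k^2+\Delta)^2$ with $\Delta = p^2x(1-x)+m^2$. With $u = k^2 + \Delta$, so that $du = 2k\, dk$ and $k^2 = u - \Delta$,

$$\int \frac{k^3\, dk}{(k^2+\Delta)^2} = \frac{1}{2}\int \frac{u-\Delta}{u^2}\, du = \frac{1}{2}\left( \log u + \frac{\Delta}{u} \right) + C,$$

and evaluating between $k=0$ and a cutoff $k=\Lambda$ gives $\frac{1}{2}\left[ \log\frac{\Lambda^2+\Delta}{\Delta} + \frac{\Delta}{\Lambda^2+\Delta} - 1 \right] \approx \frac{1}{2}\log\frac{\Lambda^2}{\Delta} - \frac{1}{2}$ at large $\Lambda$.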

Ok. So you're saying that we take the Fourier transform to get the momentum-space Feynman rules? How do you know only to Fourier transform the $A$ and $B$ contributions for $\hat{\tau}_2^{(0)}$, though?

The leading divergence from the counterterms is at tree-level (since we choose the coefficients to be divergent). There are no contributions to the tree level 2pt function from the other counterterms.

And even if I went through and did all the Fourier transforms, how do we know what the corresponding vertices look like?

The number of external legs corresponds to the number of fields in the term from the Lagrangian. The factor is the coupling constant up to sign or other conventions. The rules are given on page 21 of your notes.

Lastly, are the vertices he has drawn on that page corresponding to $B,C,D,E$ or $\hat{\tau}_1^{(0)},\hat{\tau}_2^{(0)},\hat{\tau}_3^{(0)},\hat{\tau}_4^{(0)}$?

$\hat{\tau}_2^{(0)}$ is the leading divergence in the 2pt function. It comes from one-loop graphs. $(\hat{\tau}_2^{(0)})_{\text{c.t.}}$ comes from the tree-level counterterms, since we're choosing the coefficients of the counterterms themselves to be divergent.

Ok, but surely the full 2-point function should have external leg contributions as well, no?

The 2pt function is what you compute from $\langle T \phi(x)\phi(y)\rangle$. At tree level there aren't external leg contributions.

My notes also claim that the renormalised parameters $m$, $\lambda$, $\phi$ depend on the RG (renormalisation group) scale $\mu$ but are independent of the cutoff $\epsilon$, whereas the bare parameters are dependent on the cutoff and independent of the RG scale.
I have two questions about this statement:
(i) I thought the whole point of renormalisation was to get physical parameters, i.e. ones that don't change when you change the scale? But clearly these will change, and we have to further manipulate them (i.e. go to the bare parameters) to get fixed ones. What do the bare parameters represent?

The point of renormalization is to get finite physical parameters. As a consequence they depend on scale. The bare parameters aren't directly measurable because they are divergent.

(ii) When he talks about the cutoff $\epsilon$, even though this has come from dimensional regularisation, does this correspond exactly to the high-momentum cutoff $\Lambda$ from UV cutoff regularisation, since surely taking the limits $\Lambda \rightarrow \infty$ and $\epsilon \rightarrow 0$ have the same effect?

The divergent parts should agree with the substitution $1/\epsilon \sim \log \Lambda$. I don't think that there's any deeper connection between the two methods.
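For example, for the standard one-loop bubble in Euclidean signature (with the common convention $d = 4-2\epsilon$; conventions for $\epsilon$ differ by factors of 2 between texts),

$$\int \frac{d^dk}{(2\pi)^d}\, \frac{1}{(k^2+\Delta)^2} = \frac{\Gamma(2-d/2)}{(4\pi)^{d/2}}\, \Delta^{d/2-2} = \frac{1}{16\pi^2}\left( \frac{1}{\epsilon} - \log \Delta + \text{const} \right) + \mathcal{O}(\epsilon),$$

while the same integral with a hard cutoff at $k=\Lambda$ gives $\frac{1}{16\pi^2}\left( \log\frac{\Lambda^2}{\Delta} + \text{const} \right)$, so the $1/\epsilon$ pole tracks $\log \Lambda^2$.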
 
  • #34
fzero said:
u=k^2+p^2x(1-x)+m^2 works. You might want to try again.
OK, I'll try it.

My notes keep chopping and changing between $G$'s and $F$'s for Green's functions. I thought $F$ was a Green's function in momentum space and $G$ was a Green's function in position space, but I just saw a $G(p)$? Is this notation standard? If so, can you enlighten me as to what they are meant to be?

Also, is there a difference between Green's functions and correlation functions? Up until now we appear to use the two interchangeably.
fzero said:
The 2pt function is what you compute from $\langle T \phi(x)\phi(y)\rangle$. At tree level there aren't external leg contributions.

How does this look:

We have the equation $$F^{(n)}(p_1, \dots , p_n) = i (2 \pi)^d\, \delta^{d}\!\left(\sum_{i=1}^n p_i\right) \prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1, \dots , p_n).$$

Now we're trying to solve for $\hat{F}_2(p,-p)$.

So the RHS of our above equation (neglecting the delta function piece) is
$$\frac{-1}{(p^2+m^2)^2} \hat{F}_2(p,-p).$$

Now the LHS is the full two-point function, which should just be an internal line contribution, so we should have $F_2(p,-p) = \frac{1}{p^2+m^2}$.

Rearranging this gives $\hat{F}_2(p,-p)=-(p^2+m^2)=-p^2-m^2$.
Furthermore, the renormalisation theorem tells us that if all the subdivergences are removed and we find the superficial degree of divergence, $D$, to be positive or equal to 0, then our diagram is divergent and the divergent part is given by a polynomial in the couplings and the external momenta of degree $D$.

However, in my notes, we work out that

$\hat{F}_1^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} g m^2$ is the divergent piece for a theory with $\phi^4$ (coupling $\lambda$) and $\phi^3$ (coupling $g$) interactions in 4 dimensions, for 1 loop with 1 external line (that has been amputated).

However, naive power counting gives the superficial degree of divergence as $D=4-E-V_3$, where $E$ is the number of external legs and $V_3$ is the number of valence-3 vertices. So for 1 external line above we find $D=3-V_3=2$, and so we should have a degree-2 polynomial in couplings and momenta.

However, this divergent piece only has one factor of $g$, not 2.
What's going on?
I thought that it might be that the mass counts, but it isn't a coupling OR a momentum, is it?

This is a problem throughout, as I have $\hat{F}_3^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} 3 g \lambda$.
This should have $D=4-3-1=0$, but quite clearly this polynomial is quadratic in couplings, leading to the same problem!

And lastly, is the following true: we want to use renormalisation to get finite physical parameters. During renormalisation, we create renormalised parameters (that depend on the RG (renormalisation group) scale but are independent of the regulator), and then when we add the counterterms to the original Lagrangian we create the bare Lagrangian (the bare quantities depend on the regulator but not the RG scale). Neither of these are the physical quantities, though, since physical quantities should be independent of the RG scale and the regulator, shouldn't they?
(i) I'm not sure my understanding of how renormalised and bare quantities are produced is correct. Re-reading what I wrote above, it sounds like I've said they both arise for the same reason. This can't be true, as they are different things!
(ii) How do we get the physical parameters out at the end of all this?

Cheers.
 
Last edited:
  • #35
latentcorpse said:
OK, I'll try it.

My notes keep chopping and changing between $G$'s and $F$'s for Green's functions. I thought $F$ was a Green's function in momentum space and $G$ was a Green's function in position space, but I just saw a $G(p)$? Is this notation standard? If so, can you enlighten me as to what they are meant to be?

Also, is there a difference between Green's functions and correlation functions? Up until now we appear to use the two interchangeably.

Use of $F$ and $G$ is not standard between authors, though usually people use $G$. Momentum space vs. position space should be clear from context. Usually people don't bother to put a tilde over the momentum-space functions because of this.

All Green functions are correlation functions of some type, but can differ in terms of connectedness. For example,

$$G(1,2,\ldots, N) = (-i)^N \left. \frac{\delta}{\delta J_1} \cdots \frac{\delta}{\delta J_N} W[J]\right|_{J=0}$$

are the Green functions computed from the generating functional. The RHS is clearly a correlation function of the fields. However, the Green functions computed from $Z[J] = -i \log W[J]$ are the connected Green functions.
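Concretely, for the two-point function the relation between the two is, schematically (up to convention-dependent factors of $i$),

$$G(1,2) = G_c(1,2) + G_c(1)\, G_c(2),$$

i.e. the full Green function is the connected one plus disconnected products of lower-point connected pieces; for higher $N$ the disconnected terms run over all partitions of the points.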


Furthermore, the renormalisation theorem tells us that if all the subdivergences are removed and we find the superficial degree of divergence, $D$, to be positive or equal to 0, then our diagram is divergent and the divergent part is given by a polynomial in the couplings and the external momenta of degree $D$.

The degree corresponds to the expression $\Lambda^D$ in the primitive cutoff scheme (where $D=0$ corresponds to $\log \Lambda$). The polynomial in couplings and momenta must be of scaling dimension $D$ to compensate.

However, in my notes, we work out that

$\hat{F}_1^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} g m^2$ is the divergent piece for a theory with $\phi^4$ (coupling $\lambda$) and $\phi^3$ (coupling $g$) interactions in 4 dimensions, for 1 loop with 1 external line (that has been amputated).

However, naive power counting gives the superficial degree of divergence as $D=4-E-V_3$, where $E$ is the number of external legs and $V_3$ is the number of valence-3 vertices. So for 1 external line above we find $D=3-V_3=2$, and so we should have a degree-2 polynomial in couplings and momenta.

However, this divergent piece only has one factor of $g$, not 2.
What's going on?
I thought that it might be that the mass counts, but it isn't a coupling OR a momentum, is it?

This is a problem throughout, as I have $\hat{F}_3^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} 3 g \lambda$.
This should have $D=4-3-1=0$, but quite clearly this polynomial is quadratic in couplings, leading to the same problem!

The expression of couplings and momenta must have scaling or mass dimension $D$. The actual number of $n$-pt couplings would be determined from $V_n$, while the dependence on momenta is determined by the scaling dimension.
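To make the counting concrete for your two examples (assuming the graphs are the obvious one-loop ones; in $d=4$ the couplings have mass dimensions $[g]=1$, $[\lambda]=0$, and $[m^2]=2$): the tadpole behind $\hat{F}_1^{(1)}$ has $V_3=1$, so exactly one factor of $g$ appears, and the dimension-2 piece required by $D=2$ is supplied by $m^2$, giving $g m^2/\epsilon$. The graph behind $\hat{F}_3^{(1)}$ has one $\phi^3$ and one $\phi^4$ vertex, hence one $g$ and one $\lambda$, and $D=0$ means a pure $1/\epsilon$ (or log) with no momentum polynomial at all, giving $g\lambda/\epsilon$. The theorem counts the dimension of the momentum/mass dependence, not the number of couplings.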

And lastly, is the following true: we want to use renormalisation to get finite physical parameters. During renormalisation, we create renormalised parameters (that depend on the RG (renormalisation group) scale but are independent of the regulator), and then when we add the counterterms to the original Lagrangian we create the bare Lagrangian (the bare quantities depend on the regulator but not the RG scale). Neither of these are the physical quantities, though, since physical quantities should be independent of the RG scale and the regulator, shouldn't they?
(i) I'm not sure my understanding of how renormalised and bare quantities are produced is correct. Re-reading what I wrote above, it sounds like I've said they both arise for the same reason. This can't be true, as they are different things!
(ii) How do we get the physical parameters out at the end of all this?

Cheers.

The renormalized parameters are the physical parameters. If you were to measure the fine structure constant from the strength of the electromagnetic interaction, you would find a different value at different energy scales. If you were to scatter electrons at a few MeV, you'd find a value close to 1/137, while if you scattered electrons at a center of mass energy around 80 GeV, you'd find 1/128.
 
  • #36
fzero said:
The degree corresponds to the expression $\Lambda^D$ in the primitive cutoff scheme (where $D=0$ corresponds to $\log \Lambda$). The polynomial in couplings and momenta must be of scaling dimension $D$ to compensate.

The expression of couplings and momenta must have scaling or mass dimension $D$. The actual number of $n$-pt couplings would be determined from $V_n$, while the dependence on momenta is determined by the scaling dimension.
I'm afraid I still don't really understand why the examples of the polynomials I posted in my previous post weren't working out.

fzero said:
The renormalized parameters are the physical parameters. If you were to measure the fine structure constant from the strength of the electromagnetic interaction, you would find a different value at different energy scales. If you were to scatter electrons at a few MeV, you'd find a value close to 1/137, while if you scattered electrons at a center of mass energy around 80 GeV, you'd find 1/128.

So I'm confused as to how the renormalised parameters arise in renormalisation. Is it the addition of the counterterms that changes the parameters? And if so, how does this work?


Finally, if renormalisation produces the renormalised, physical parameters that we were after all along, why do we then go and define bare parameters! What's the point in that?

Thanks.
 
  • #37
latentcorpse said:
I'm afraid I still don't really understand why the examples of the polynomials I posted in my previous post weren't working out.

Go back and read the renormalization theorem. It says that the polynomial has dimension $D$. Somehow you misread this as "degree $D$," which is incorrect.

So I'm confused as to how the renormalised parameters arise in renormalisation. Is it the addition of the counterterms that changes the parameters? And if so, how does this work?

The addition of the counterterms to the original Lagrangian gives the bare parameters. The renormalized quantities are what we get when we remove the divergence from the bare parameters.

Finally, if renormalisation produces the renormalised, physical parameters that we were after all along, why do we then go and define bare parameters! What's the point in that?

It's natural to define the bare parameters because there's a counterterm for every original term in the Lagrangian.
 
  • #38
fzero said:
Go back and read the renormalization theorem. It says that the polynomial has dimension D. Somehow you misread this as "degree D," which is incorrect.
Ok I see. That's fine then.

fzero said:
The addition of the counterterms to the original Lagrangian gives the bare parameters. The renormalized quantities are what we get when we remove the divergence from the bare parameters.

It's natural to define the bare parameters because there's a counterterm for every original term in the Lagrangian.

So I have the result for $\phi^4$ theory with 1 loop that the addition of counterterms gives $m_B^2=\frac{m^2+B}{Z}=m^2 \left( 1 + \frac{\lambda}{16 \pi^2 \epsilon}\right)$.

So all I need to do to get the physical, renormalised mass is just rearrange the above for $m$?

That would give $m^2=\frac{m_B^2}{1+\frac{\lambda}{16 \pi^2 \epsilon}}$.
However, I don't see how this is physical, as it still has dependence on the regulator, and in fact as $\epsilon \rightarrow 0$ we see $m^2 \rightarrow 0$.

Surely this can't be right, can it?

Thanks.
 
  • #39
latentcorpse said:
So I have the result for $\phi^4$ theory with 1 loop that the addition of counterterms gives $m_B^2=\frac{m^2+B}{Z}=m^2 \left( 1 + \frac{\lambda}{16 \pi^2 \epsilon}\right)$.

So all I need to do to get the physical, renormalised mass is just rearrange the above for $m$?

That would give $m^2=\frac{m_B^2}{1+\frac{\lambda}{16 \pi^2 \epsilon}}$.
However, I don't see how this is physical, as it still has dependence on the regulator, and in fact as $\epsilon \rightarrow 0$ we see $m^2 \rightarrow 0$.

Surely this can't be right, can it?

Thanks.

The bare mass itself depends on the regulator and is divergent, so that expression isn't very helpful. The right way to derive the renormalized parameters is to use the bare parameters to write down the renormalization group equations and then solve those for the scale dependent renormalized parameters.
 
  • #40
fzero said:
The bare mass itself depends on the regulator and is divergent, so that expression isn't very helpful. The right way to derive the renormalized parameters is to use the bare parameters to write down the renormalization group equations and then solve those for the scale dependent renormalized parameters.

We discussed how to solve the Callan-Symanzik equation (which I think is the RG equation) by introducing a running coupling. This can be used to find the coupling as a function of energy scale, which tells us the regions in which perturbation theory is applicable. Is this what you are talking about?

How would one solve it for the mass, though?
 
  • #41
latentcorpse said:
We discussed how to solve the Callan-Symanzik equation (which I think is the RG equation) by introducing a running coupling. This can be used to find the coupling as a function of energy scale, which tells us the regions in which perturbation theory is applicable. Is this what you are talking about?

How would one solve it for the mass, though?

For $\lambda \phi^4$ you have a pair of equations to one-loop order which are

$$\mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2}, \qquad \mu \frac{dm^2}{d\mu} = \frac{m^2\lambda}{16\pi^2}.$$

They are simple to integrate.
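A sketch of the integration (writing everything in terms of initial values $\lambda_0$, $m_0^2$ at a reference scale $\mu_0$): the first equation is separable,

$$\int_{\lambda_0}^{\lambda} \frac{d\lambda'}{\lambda'^2} = \frac{3}{16\pi^2} \int_{\mu_0}^{\mu} \frac{d\mu'}{\mu'} \quad\Rightarrow\quad \lambda(\mu) = \frac{\lambda_0}{1 - \frac{3\lambda_0}{16\pi^2}\log(\mu/\mu_0)},$$

and dividing the second equation by the first gives $\frac{dm^2}{d\lambda} = \frac{m^2}{3\lambda}$, so

$$m^2(\mu) = m_0^2 \left( \frac{\lambda(\mu)}{\lambda_0} \right)^{1/3}.$$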
 
Last edited:
  • #42
fzero said:
For $\lambda \phi^4$ you have a pair of equations to one-loop order which are

$$\mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2}, \qquad \mu \frac{dm^2}{d\mu} = \frac{m^2\lambda}{16\pi^2}.$$

They are simple to integrate.

I have $\beta_\lambda=\frac{3 \lambda^2}{16 \pi^2}$,
where $\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon$ with $\hat{\beta}_\lambda=\mu \frac{\partial \lambda}{\partial \mu}$.
So it appears that we can only get the equation you had, and integrate to get an $\epsilon$-independent result, if $\epsilon=0$, i.e. if $d=4$. Surely this procedure should also work in other dimensions, though?

My $m$ equation is slightly different; I have

$$\frac{\mu}{m^2} \frac{d m^2}{d \mu} = \frac{\lambda}{16 \pi^2}.$$

Either way I can see how this will work out.

So we get the renormalised parameters by solving for the coefficients of the Callan-Symanzik equation?

Is it ok that $m$ is going to turn out to be a function of $\lambda$?

Also, I have a calculation in my notes I am unsure of. He is showing that naive quantisation of the pure SU(N) Yang-Mills Lagrangian (in the free theory) encounters a problem, because zero modes appear as a result of the action being gauge invariant.
Anyway, I am struggling to prove the gauge invariance of the action.
He writes that under an infinitesimal gauge transformation $A_\mu^a \rightarrow A_\mu^a + \partial_\mu \lambda^a$,
we find that $S_0=\int d^dx\, \mathcal{L}_{\text{YM}} = -\int d^dx\, \frac{1}{2} A_\mu^a \Delta^{\mu \nu} A_\nu^a$ changes by
$$\delta S_0 = -\frac{1}{2} \int d^dx \left( \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a \right).$$
I understand this so far (it's just substitution).
He then claims that this can be integrated by parts to give
$$-\int d^d x\, A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a=0.$$

I don't follow either of these last two equalities. Why does integration by parts give that term, and why does it then vanish?

Thanks again!
 
Last edited:
  • #43
latentcorpse said:
I have $\beta_\lambda=\frac{3 \lambda^2}{16 \pi^2}$,
where $\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon$ with $\hat{\beta}_\lambda=\mu \frac{\partial \lambda}{\partial \mu}$.
So it appears that we can only get the equation you had, and integrate to get an $\epsilon$-independent result, if $\epsilon=0$, i.e. if $d=4$. Surely this procedure should also work in other dimensions, though?

My $m$ equation is slightly different; I have

$$\frac{\mu}{m^2} \frac{d m^2}{d \mu} = \frac{\lambda}{16 \pi^2}.$$

Either way I can see how this will work out.

So we get the renormalised parameters by solving for the coefficients of the Callan-Symanzik equation?

Is it ok that $m$ is going to turn out to be a function of $\lambda$?

The solution of the RG equations is a curve in the $(m,\lambda)$ plane parameterized by $\mu$. A form like $m^2 = f(\lambda,\mu)$ is reasonable.
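If it helps to see that curve, here is a minimal numerical sketch (Python with SciPy; the initial values are illustrative choices, not anything from your notes):

import numpy as np
from scipy.integrate import solve_ivp

# One-loop RG flow for lambda phi^4 in d = 4, with t = log(mu/mu0):
#   d(lambda)/dt = 3 lambda^2 / (16 pi^2)
#   d(m^2)/dt    = m^2 lambda / (16 pi^2)
def rg_flow(t, y):
    lam, m2 = y
    return [3.0 * lam**2 / (16.0 * np.pi**2),
            m2 * lam / (16.0 * np.pi**2)]

lam0, m2_0 = 0.5, 1.0  # illustrative initial conditions at mu = mu0
sol = solve_ivp(rg_flow, (0.0, 10.0), [lam0, m2_0], rtol=1e-10, atol=1e-12)

# The trajectory (lam(t), m2(t)) traces a curve in the (lambda, m^2) plane
# parameterized by the RG scale; compare it with the closed-form one-loop
# relation m^2 proportional to lambda^(1/3):
lam, m2 = sol.y
print(np.allclose(m2, m2_0 * (lam / lam0) ** (1.0 / 3.0)))  # expect True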

Also, I have a calculation in my notes I am unsure of. He is showing that naive quantisation of the pure SU(N) Yang-Mills Lagrangian (in the free theory) encounters a problem, because zero modes appear as a result of the action being gauge invariant.
Anyway, I am struggling to prove the gauge invariance of the action.
He writes that under an infinitesimal gauge transformation $A_\mu^a \rightarrow A_\mu^a + \partial_\mu \lambda^a$,
we find that $S_0=\int d^dx\, \mathcal{L}_{\text{YM}} = -\int d^dx\, \frac{1}{2} A_\mu^a \Delta^{\mu \nu} A_\nu^a$ changes by
$$\delta S_0 = -\frac{1}{2} \int d^dx \left( \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a \right).$$
I understand this so far (it's just substitution).
He then claims that this can be integrated by parts to give
$$-\int d^d x\, A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a=0.$$

I don't follow either of these last two equalities. Why does integration by parts give that term, and why does it then vanish?

Thanks again!

You have to integrate by parts with the $\partial_\mu$ in the first term of $\delta S_0$. This gives a surface term and a term of the form quoted. $\Delta^{\mu \nu} \partial_\nu \lambda^a=0$ is simple algebra using the expression for $(\Delta A)_\nu$ given on page 69 of those notes.
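For example, if the kinetic operator has the usual transverse form $\Delta^{\mu \nu} = -\partial^2 \eta^{\mu \nu} + \partial^\mu \partial^\nu$ (check the overall sign and metric conventions against page 69 of the notes), the algebra is just commuting partial derivatives:

$$\Delta^{\mu \nu} \partial_\nu \lambda^a = -\partial^2 \partial^\mu \lambda^a + \partial^\mu \partial^2 \lambda^a = 0.$$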
 
  • #44
fzero said:
The solution of the RG equations is a curve in the (m,\lambda) plane parameterized by \mu. A form like m^2 = f(\lambda,\mu) is reasonable.
But our $\lambda$ solution was just in terms of $\lambda$ and $\mu$. Shouldn't it also have some $m$ dependence if it is in the $(m,\lambda)$ plane? Or is that contained within the $\mu$ dependence?

And what about the issue of $\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon$ that I raised in my last post, which seems to suggest this will only work if $\epsilon=0$, i.e. $d=4$?

fzero said:
You have to integrate by parts with the $\partial_\mu$ in the first term of $\delta S_0$. This gives a surface term and a term of the form quoted. $\Delta^{\mu \nu} \partial_\nu \lambda^a=0$ is simple algebra using the expression for $(\Delta A)_\nu$ given on page 69 of those notes.
Right, I still don't get that. Integrating the first term by parts as you suggest, I find
$$\frac{1}{2}\lambda^a \Delta^{\mu \nu} A_\nu^a - \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A^a_\nu.$$
So why can I neglect the surface term? Does $\lambda$ vanish at infinity? If so, why?
And how does that second term rearrange to what I want? The derivative is now on the $A$ rather than on the $\lambda$, like we want in the final answer?
 
Last edited:
  • #45
latentcorpse said:
But our $\lambda$ solution was just in terms of $\lambda$ and $\mu$. Shouldn't it also have some $m$ dependence if it is in the $(m,\lambda)$ plane? Or is that contained within the $\mu$ dependence?

It's not necessary; I was just explaining why it would be natural if it did.

And what about the issue of $\beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon$ that I raised in my last post, which seems to suggest this will only work if $\epsilon=0$, i.e. $d=4$?

All of the dependence on the regulator is contained in the bare parameters. Since the bare parameters are independent of the RG scale, the RG equations don't depend on the regulator.

Right, I still don't get that. Integrating the first term by parts as you suggest, I find
$$\frac{1}{2}\lambda^a \Delta^{\mu \nu} A_\nu^a - \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A^a_\nu.$$
So why can I neglect the surface term? Does $\lambda$ vanish at infinity? If so, why?
And how does that second term rearrange to what I want? The derivative is now on the $A$ rather than on the $\lambda$, like we want in the final answer?

That first term should be a surface integral. That integral should vanish by suitable boundary conditions on either $\lambda$ or $A$, as usual. As for the other term, I misspoke a bit. However, you should still be able to show that $\Delta^{\mu\nu}\partial_\mu A_\nu = 0$.
 
  • #46
fzero said:
All of the dependence on the regulator is contained in the bare parameters. Since the bare parameters are independent of the RG scale, the RG equations don't depend on the regulator.
I don't get this: which of $\beta$ and $\hat{\beta}$ is bare? I guess explicitly $\beta$ would be bare, as it has obvious $\epsilon$ dependence. However, I don't see how this solves the problem.

fzero said:
That first term should be a surface integral. That integral should vanish by suitable boundary conditions on either $\lambda$ or $A$, as usual. As for the other term, I misspoke a bit. However, you should still be able to show that $\Delta^{\mu\nu}\partial_\mu A_\nu = 0$.
Ok, I need to get it to the form $-\int d^dx\, A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a$, though.
So I integrate the 1st term by parts, giving two terms (one of which goes away, as it's a surface term); the remaining term won't combine with the 2nd term in the original integral, though. I tried integrating the 2nd term by parts, but it didn't give me anything that I could combine with the remaining term.
 
  • #47
latentcorpse said:
I don't get this: which of $\beta$ and $\hat{\beta}$ is bare? I guess explicitly $\beta$ would be bare, as it has obvious $\epsilon$ dependence. However, I don't see how this solves the problem.

I was making a more general statement whose consequence is that the correct beta functions to use are the ones that are not dependent on the regulator.

Ok, I need to get it to the form $-\int d^dx\, A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a$, though.

You don't need to do so in order to show the result. If you want to do that, you'll need to integrate by parts (twice) with the derivatives in $\Delta^{\mu \nu}$.

So I integrate the 1st term by parts, giving two terms (one of which goes away, as it's a surface term); the remaining term won't combine with the 2nd term in the original integral, though. I tried integrating the 2nd term by parts, but it didn't give me anything that I could combine with the remaining term.

The non-surface terms you end up with both vanish. I don't know why you'd want to integrate either by parts.
 
  • #48
fzero said:
I was making a more general statement whose consequence is that the correct beta functions to use are the ones that are not dependent on the regulator.
Ok. So we agree that we can get to $\beta_\lambda = \frac{3 \lambda^2}{16 \pi^2} + \mathcal{O}(\lambda^3)$.
But $\beta(\lambda)=\hat{\beta}(\lambda) + \lambda \epsilon = \mu \frac{d \lambda}{d \mu} +\lambda \epsilon$ by definition.
Now how do we get rid of the $\lambda \epsilon$ term, so that we can integrate up and solve for the renormalised coupling?

fzero said:
You don't need to do so in order to show the result. If you want to do that, you'll need to integrate by parts (2x) with the derivatives in \Delta^{\mu \nu}.

The non surface terms you end up with both vanish. I don't know why you'd want to integrate either by parts.

Ok. In the original expression, I find that the 2nd term $A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a$ vanishes when we expand out $\Delta^{\mu \nu}$ (which I think is right).

So this means that $\delta S_0 = - \frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a$.
I'm now confused as to what to do to show this is 0. Should I expand out the $\Delta^{\mu \nu}$ as well?

Finally, we say that a theory is renormalisable if it is rendered finite by adding a finite number of counterterms to the initial Lagrangian, and if these counterterms take the same form as terms present in the original Lagrangian.
This is correct, yes?
However, when we say "it is rendered finite", what is "it"? It doesn't really make sense to say "the theory is rendered finite" when it's really something like the divergences in the amputated Green's functions associated with loop diagrams, or something like that...
Can you clear up what "it" (i.e. the thing that gets rendered finite) actually is?
Cheers.
 
Last edited:
  • #49
latentcorpse said:
Ok. So we agree that we can get to $\beta_\lambda = \frac{3 \lambda^2}{16 \pi^2} + \mathcal{O}(\lambda^3)$.
But $\beta(\lambda)=\hat{\beta}(\lambda) + \lambda \epsilon = \mu \frac{d \lambda}{d \mu} +\lambda \epsilon$ by definition.
Now how do we get rid of the $\lambda \epsilon$ term, so that we can integrate up and solve for the renormalised coupling?

You can just take $\epsilon\rightarrow 0$.

Ok. In the original expression, I find that the 2nd term $A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a$ vanishes when we expand out $\Delta^{\mu \nu}$ (which I think is right).

So this means that $\delta S_0 = - \frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a$.
I'm now confused as to what to do to show this is 0. Should I expand out the $\Delta^{\mu \nu}$ as well?

We already dealt with this term in posts 44 and 45.

Finally, we say that a theory is renormalisable if it is rendered finite by adding a finite number of counterterms to the initial Lagrangian, and if these counterterms take the same form as terms present in the original Lagrangian.
This is correct, yes?
However, when we say "it is rendered finite", what is "it"? It doesn't really make sense to say "the theory is rendered finite" when it's really something like the divergences in the amputated Green's functions associated with loop diagrams, or something like that...
Can you clear up what "it" (i.e. the thing that gets rendered finite) actually is?
Cheers.

The Green functions (connected, amputated, or otherwise) are the physical observables. These are what we make finite initially. From them and the RG, we can also show that the physical parameters are finite as well.
 
  • #50
fzero said:
You can just take $\epsilon\rightarrow 0$.
We already dealt with this term in posts 44 and 45.
The Green functions (connected, amputated, or otherwise) are the physical observables. These are what we make finite initially. From them and the RG, we can also show that the physical parameters are finite as well.

Got it. Thanks.

Although, one thing is bothering me: when we do the integration by parts
$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a,$$
haven't we lost an index on the first term there? So the indices aren't balanced in this equation?

Could you take a look at the thread I made on solving the Callan-Symanzik equation, please?

Thanks very much.
 
Last edited:
  • #51
latentcorpse said:
Got it. Thanks.

Although, one thing is bothering me: when we do the integration by parts
$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a,$$
haven't we lost an index on the first term there? So the indices aren't balanced in this equation?

As I mentioned back in post #45, that term should be a surface integral. Look up Stokes' theorem.
 
  • #52
fzero said:
As I mentioned back in post #45, that term should be a surface integral. Look up Stokes' theorem.

$$\int_\Omega ( \vec{\nabla} \times \vec{F} ) \cdot d\vec{a} = \int_{\partial \Omega} \vec{F} \cdot d \vec{s}$$

But we don't have a curl in our expression, do we?
 
  • #53
latentcorpse said:
$$\int_\Omega ( \vec{\nabla} \times \vec{F} ) \cdot d\vec{a} = \int_{\partial \Omega} \vec{F} \cdot d \vec{s}$$

But we don't have a curl in our expression, do we?

A special case of Stokes' theorem is the divergence theorem, so check that one out.
 
  • #54
fzero said:
A special case of Stokes' theorem is the divergence theorem, so check that one out.

$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \int_{S^{d-1}} n_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a,$$

where $n^\mu$ are the components of the vector normal to the surface $S^{d-1}$.
Is that correct?

Then do we just say that this term vanishes because either $A$ or $\lambda$ vanishes at infinity? Which one is it, though? Am I right in saying it would need to be $\lambda$, since we don't have an $A$ term, only derivatives of $A$?
Even if it is the $\lambda$ term that vanishes, I don't understand physically why it should.

Thanks.
 
  • #55
latentcorpse said:
$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \int_{S^{d-1}} n_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a,$$

where $n^\mu$ are the components of the vector normal to the surface $S^{d-1}$.
Is that correct?

Then do we just say that this term vanishes because either $A$ or $\lambda$ vanishes at infinity? Which one is it, though? Am I right in saying it would need to be $\lambda$, since we don't have an $A$ term, only derivatives of $A$?
Even if it is the $\lambda$ term that vanishes, I don't understand physically why it should.

Thanks.

We just require that the variation of the fields vanishes at the boundary. We're only considering infinitesimal variations, so we can do this.
 
