# Would a theory with ONLY logarithmic divergences be renormalizable?

1. Aug 15, 2009

### zetafunction

the idea is: let us suppose we have a well-behaved theory in which only logarithmic divergences of the form

$$\int_{0}^{\infty} \frac{dx}{x+a}=I(a)$$

occur, for several values of the parameter $a$. Would this theory be renormalizable? I think QED works because only logarithmic divergences appear. In fact the integral $I(a)$ above can be regularized in the sense of Hadamard (or by differentiating with respect to $a$) in the form

$$I(a)= -\log(a)+c_a$$ where $c_a$ is a free parameter to be fixed by experiment. The Hadamard finite-part prescription gives

$$\int_{0}^{\infty} \frac{dx}{x+a}=I(a)= \int_{-\infty}^{\infty}dx \frac{H(x-a)}{x}$$

so in the sense of distribution theory the integral $I(a)$ exists and is equal to $-\log(a)$ (up to a constant).
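The structure of the regularized integral can be seen numerically. A minimal sketch (my own illustration, not from the thread): with a hard cutoff $\Lambda$, $\int_0^\Lambda dx/(x+a) = \log(\Lambda+a)-\log(a)$, so the divergent piece is independent of $a$ and drops out of differences, which converge to $\log(a_2/a_1)$:

```python
import math

def I_cutoff(a, lam):
    # \int_0^lam dx/(x+a) in closed form: log(lam + a) - log(a).
    # The lam-dependent (divergent) piece is independent of a.
    return math.log(lam + a) - math.log(a)

# Differences of I at two values of a stay finite as the cutoff grows,
# approaching log(a2/a1) -- the cutoff dependence cancels.
for lam in (1e2, 1e4, 1e6):
    diff = I_cutoff(1.0, lam) - I_cutoff(3.0, lam)
    print(f"lam={lam:.0e}  I(1)-I(3)={diff:.6f}  log(3)={math.log(3.0):.6f}")
```

This is exactly why a single subtraction constant ($c_a$ above) suffices: all the cutoff sensitivity sits in one $a$-independent term.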

2. Aug 15, 2009

### genneth

Yes, but not because of the nature of the divergence. Renormalisability refers to whether it's possible to absorb the divergences into some redefinitions of constants. If this can be done with a finite number of constants (which would then be determined by experiment), then it's called renormalisable.

The log divergence just means that we can be happy about the accuracy of the theory, since it means that the difference between the bare value and the measured value is fairly small. (Remember that usually the bare value is not the value at infinite energy, but the value at some large scale, like the Planck scale.) This gives confidence in the convergence rate of perturbation expansions.
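To put a rough number on "fairly small" (my own back-of-the-envelope illustration, with assumed values for the Planck mass and electron mass): even with a cutoff at the Planck scale, a logarithmic divergence only grows to about 50, so a typical one-loop correction of order $(\alpha/\pi)\log(\Lambda/m)$ stays perturbatively modest:

```python
import math

# Assumed illustrative values (GeV); not taken from the thread above.
m_planck_gev = 1.22e19
m_electron_gev = 0.000511
alpha = 1 / 137.036          # fine-structure constant

big_log = math.log(m_planck_gev / m_electron_gev)
correction = (alpha / math.pi) * big_log
print(f"log(M_Pl/m_e) ~ {big_log:.1f}")
print(f"(alpha/pi) * log ~ {correction:.3f}")
```

A power-law divergence, by contrast, would produce a correction of order $(\Lambda/m)^n$, which is astronomically large at the same cutoff; that contrast is what makes purely logarithmic divergences so benign.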
