## When do we need renormalization?

L.R. wrote:

> Arnold Neumaier wrote:
>
>>No. We want to describe a situation which is a limit of more complex and
>>often less symmetric situations. This limit is the only problematic thing,
>>and sometimes generates infinities if done in an improper way. Just as
>>when trying to compute
>> $s_N = \sum_{k=0}^{N} (-1)^k/(k+1)^s = u_N - v_N$
>>by summing the even and odd contributions $u_N$ and $v_N$ separately.
>>The limit $N \to \infty$ is well-defined for $s>0$, but can be obtained
>>only for $s>1$ by going to the limit in $u_N$ and $v_N$ separately.

>
> Yes, if $s \le 1$ you can rearrange the summation to get any result you
> want. Do you mean that this is similar to how renormalization can give
> you any result you want, or something else?

If one used weird renormalization schemes, yes. But there is a large
class of natural renormalization schemes that give identical limits (at
least people think so). This class is defined by summing the small energy
terms before the large ones, and letting the cutoff go to infinity at the
very end.

This corresponds to summing the above sum for fixed N in essentially its
natural order (that is, small k first, though one does not need to be very
precise about how to interpret this), but taking the limit $N \to \infty$
only at the end.
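This can be checked numerically. Here is a small Python sketch (my own, not from the original posts) with $s = 1/2$: the combined partial sums $s_N$ settle down, while the separately summed even and odd parts $u_N$ and $v_N$ both diverge.

```python
# Numerical illustration of the point above: for the alternating series
# sum_k (-1)^k/(k+1)^s with s = 1/2, the partial sums s_N converge, but
# the even and odd parts u_N and v_N each grow without bound, so "u - v"
# is meaningless unless both are kept coupled at the same cutoff N.

def partial_sums(s, N):
    """Return (s_N, u_N, v_N) for the truncated alternating series."""
    u = sum(1.0 / (k + 1) ** s for k in range(0, N + 1, 2))  # even k (plus signs)
    v = sum(1.0 / (k + 1) ** s for k in range(1, N + 1, 2))  # odd k (minus signs)
    return u - v, u, v

for N in (10, 1000, 100001):
    sN, uN, vN = partial_sums(0.5, N)
    print(N, round(sN, 4), round(uN, 1), round(vN, 1))
# s_N settles near a finite value (about 0.605, the eta function at 1/2),
# while u_N and v_N grow like sqrt(N).
```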

>>The data (the Hamiltonian in QM, the action in QFT) depends
>>on some parameter vector v of dimension d, say, without direct
>>physical meaning. For example, v may consist of bare mass,
>>bare charge, and bare coupling constant.
>>
>>Without the renormalization conditions we get a family of solutions
>>parameterized by v, from which we can compute measurable quantities
>>combined into a vector $q=q_N(v)$ of some dimension $e>d$,
>>where N is the parameter in which we want to take the limit.
>>(N might be an energy cutoff at energies beyond observability, and q the
>>observed particle spectrum.)

>
> Here, N is usually introduced when we get a bare theory with an infinite
> result; to regularize it, we introduce N. If we get a perfectly finite
> result with the bare theory, how do we introduce N?

If the bare theory gave finite results, there would be no need to
introduce N. But the bare theory makes sense _only_ with finite N.

> I don't think that
> "every theory should come with an energy cutoff" is a good reason; I
> suspect that it breaks Lorentz invariance. It may be good as a tool, but
> should not be thought of as a fundamental theory.

The cutoff is necessary in any 4D theory with local interaction only,
and it breaks Lorentz invariance. The latter is restored again in the limit.

>>When the limit (*) does not exist, the situation is more complicated.
>>Since there is no limiting q, one has to work at finite N. Proceeding
>>as before, one solves d of the equations in $q=q_N(v)$ for v, getting
>>$v=v_N(z)$, but since the limit (*) does not exist, there will also be no
>>limit
>> $v(z) = \lim_{N \to \infty} v_N(z)$
>>which would enable the use of (**). Instead, $v_N(z)$ diverges.
>>(Loosely speaking, we get infinite bare masses and bare coupling
>>constants. But this limit will never be used, hence there are no
>>problems. It is just the loose way of speaking that creates the
>>impression of weirdness.)
>>
>>However, at finite N, we can still define a renormalized
>>parameterization
>> $q = q_{N,ren}(z)$, with $q_{N,ren}(z) = q_N(v_N(z))$.
>>For a renormalizable theory, the limit
>> $q_{ren}(z) = \lim_{N \to \infty} q_{N,ren}(z)$
>>exists although neither $q_N$ nor $v_N$ converge.
>>This limit replaces the naive bare recipe (*)-(**) which is
>>ill-defined.

>
> This sounds like the old infinity minus infinity trick,

Of course. It is similar to techniques to evaluate limits which
give naively inf-inf, by using some transformation that cancels the
infinities analytically. Example:
$\lim_{n \to \infty} \left(\sqrt{n^2+n}-\sqrt{n^2+1}\right)$
$= \lim_{n \to \infty} \frac{(n^2+n)-(n^2+1)}{\sqrt{n^2+n}+\sqrt{n^2+1}}$
$= \lim_{n \to \infty} \frac{n-1}{\sqrt{n^2+n}+\sqrt{n^2+1}} = \frac{1}{2}.$
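As a quick numerical sanity check of this inf - inf limit (my own sketch in Python), both the naive subtraction and the analytically cancelled form approach 1/2:

```python
# Check that sqrt(n^2+n) - sqrt(n^2+1) tends to 1/2 even though both
# terms individually diverge as n grows.
import math

for n in (10, 1000, 10**6):
    naive = math.sqrt(n**2 + n) - math.sqrt(n**2 + 1)
    # the analytically cancelled form from the post, which stays
    # numerically stable even when n is huge:
    stable = (n - 1) / (math.sqrt(n**2 + n) + math.sqrt(n**2 + 1))
    print(n, naive, stable)
# Both columns approach 0.5; for much larger n the naive subtraction
# would also start losing floating-point precision, which the rewritten
# form avoids.
```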

> Is this well
> defined in mathematics? I mean, is $q_{ren}(z)$ well defined in the
> limit process, since the limits of $q_N$ and $v_N$ do not exist?

Everything I said is mathematically well-defined on the level of
Feynman diagrams (i.e., in a perturbative view of QFT).
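The structure described above, with bare parameters that diverge while the renormalized observables converge, can be made concrete with a deliberately trivial toy model (my own illustration; the functions are invented, not derived from any field theory):

```python
# Toy renormalization: one bare parameter v, a cutoff N, and two
# observables q = (q1, q2) = q_N(v).  At fixed v the observables have no
# limit as N -> inf, but imposing the renormalization condition q1 = z
# (a measured value) and eliminating v gives a renormalized q2 with a
# perfectly finite limit, even though v_N(z) runs off to -infinity.
import math

def q_N(v, N):
    q1 = v + math.log(N)                    # diverges at fixed v
    q2 = (v + math.log(N)) ** 2 + 1.0 / N   # also diverges at fixed v
    return q1, q2

def v_N(z, N):
    # Solve q1_N(v) = z for v: the "bare parameter" v_N(z) = z - log(N).
    return z - math.log(N)                  # -> -infinity as N -> inf

def q2_ren(z, N):
    # Renormalized observable q_{N,ren}(z) = q_N(v_N(z)).
    return q_N(v_N(z, N), N)[1]

z = 2.0
for N in (10, 10**4, 10**8):
    print(N, v_N(z, N), q2_ren(z, N))
# v_N(z) diverges, yet q2_ren(z) -> z**2 = 4.0: the limit exists for the
# renormalized parameterization although neither q_N nor v_N converges.
```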

>>>Another thing I have not figured out is: I guess we do not need
>>>renormalization in QM because the parameters we used are good,

>>
>>we don't need it precisely when the potential is regular enough.

>
> don't we need some finite renormalization, since maybe the bare mass or
> the coupling constant does not have a physical meaning? How can we show
> that these coefficients can be directly measured?

They are indirectly measured, and one has
parameter identification = finite renormalization,
except that in the latter case one has the traditional but misleading
idea that the bare coefficients mean something 'bare'.

>>The infinities are caused by the nature of the interactions.

>
> this is the point I don't understand; there are different arguments in
> this group about the infinities in QFT. 1) Infinities pop up because we
> can only use the interaction picture to calculate physical results,
> but the covariant interaction picture is not well defined; that's why we
> get some infinities, and that's why we can introduce a cutoff to get a
> finite result at the cost of covariance.

That the interaction picture does not exist is just a facet of the
renormalization problem. It has the same cause as the infinities,
namely the local interactions.

> 2) The singular interaction is
> because the field operator is not an operator but an operator-valued
> distribution; the product of distributions is not well-defined, so they
> need regularization. We get infinities when we naively attempt to give
> meaning to it. I don't think this source of the infinities is the nature
> of the interactions >:)

Of course it is. One can make out of the singular field operators many
good operators - but these are irrelevant because they cannot model
local interactions. The latter are so singular that the singularity of
the operator-valued distributions cannot be smeared out.

>>If they are too singular then the limits needed for a finite
>>renormalization simply do not exist anymore.

> A quick question: when can we say that an interaction is so singular
> that it cannot exist?

The relevant interactions in QFT are local interactions, by direct
contact. These are too singular for the limit to exist. With nonlocal
interactions of sufficient range the limit exists but Lorentz invariance
is lost. It seems that one can have local+Lorentz only using full
renormalization.

Arnold Neumaier



> Last, let me note that nobody is afraid of subtracting two infinities
> when the mechanics of the process is understood:
>
> Let A==(x,0), B==(y,0), C==(x,f(x)), D==(y,f(y)) and consider the
> triangles CAB and ABD. Both the angle at A in CAB and the angle at B in
> ABD have infinite trigonometric tangents in the limit $x \to y$. But the
> difference $\tan A - \tan B$ is finite. Moreover, the value of this
> difference does not change if we use A==(x,K) and B==(y,K) for any other
> real number K instead of zero.

I'm not following you here. The angles CAB and ABD are always 90 degrees,
aren't they?



On Mon, 20 Dec 2004 16:03:38 +0000, Eugene Stefanovich wrote:

> Igor Khavkine wrote:
>> In any case, I don't think you are contradicting anything that I wrote
>> in my previous post. Moreover, as I tried to explain, you do not need to
>> think about all this stuff to get an idea why renormalization is useful
>> and applicable. One just has to realize that values of physically
>> measured parameters need not correspond directly to the parameters
>> entering the bare theory, and that renormalization provides a link
>> between them.

> In my approach, there are no "bare" particles and "dressed" particles.
> There are just real particles. The parameters entering the theory are the
> same which are measured in experiments. The particles do not interact
> with their own E&M fields. There is no self-interaction. In my approach,
> there are no artificial "momentum cutoffs". All integrals are honestly
> evaluated to infinity. Nevertheless, there are no ultraviolet infinities.

I made no mention of "bare" or "dressed" particles. I do not even speak of
quantum electrodynamics. What I claim may be applied to any reasonable
theory, classical or quantum. My claim is that it is most unlikely that
for an arbitrary theory, all its parameters are directly measurable. If
you wish to contradict me, try for example to come up with an experiment
that will measure the size of a water molecule by observing water's
macroscopic hydrodynamic properties.

> Renormalization in QED is a clear sign that this theory is sick. A
> correctly formulated theory should do well without renormalization.

You have yet to provide a defensible reason for this claim. On the other
hand, I see renormalization as a necessary (if sometimes implicit)
ingredient of any physical theory, except perhaps some very special ones.
That is what I tried to explain in my previous posts.

> I am sorry for being repetitive. Probably I am the only one whose sense
> of scientific beauty is offended by the twisted "logic" of traditional
> renormalized QFT.

You are treading on dangerous ground here. "Twisted" and "beautiful" are
very subjective adjectives. They bear little weight in a scientific
discussion. The same theory may admit many possible descriptions, without
one contradicting another. I am not in a position to say that what you
have done is without merit. However, you must accept that there are other
descriptions of the same theory that are equally correct as yours, if it
is indeed so, no matter how their philosophy does or does not appeal to
you. Philosophy is not a substitute for mathematics. Judging their
individual beauty is up to individuals, but all should be able to agree on
their correctness.

Igor



On Mon, 20 Dec 2004 16:05:09 +0000, Arnold Neumaier wrote:

> Igor Khavkine wrote:
>> On Fri, 17 Dec 2004 13:52:49 +0000, Arnold Neumaier wrote:
>
>>> Hmm. The bare model without cutoff and finite parameters doesn't
>>> describe anything, since it _cannot_ be made to give finite
>>> predictions. It is essential for your argument that one thinks of the
>>> bare model as being a model with a finite cutoff.
>>
>> In fact the model which I used throughout my post (kinetic theory of
>> fluids) does have an explicit finite cutoff. The smallest length scale
>> where the theory is defined is intermolecular spacing. But, for the sake
>> of argument, let's suppose that I'm talking about a theory where
>> starting with a finite bare theory with no cutoff does give finite
>> renormalized predictions. What I want to do is see how the bare
>> parameters of this theory get related to the observed ones by
>> introducing and changing a cutoff. What I want to motivate is doing the
>> same procedure backward, starting with a theory with an explicit cutoff,
>> whether or not the corresponding bare theory with no cutoff exists.
>
> There will still be no convergence of the dressed to the bare parameters.

[...]

>>> While the discrepancy to the bare theory is always infinite, the
>>> discrepancy to the renormalized theory at zero length (no
>>> frequencies/momenta ignored) is tiny in the range of interest. Ideally
>>> one would just want to have this limiting theory, which is 'the'
>>> renormalized theory, while the others are approximate effective
>>> theories only. But it is of course enough to have an effective theory
>>> that predicts the observable part correctly and with a limited amount
>>> of work. This does not need precise information about the short
>>> (= high energy) scales.
>>
>> I'm not sure I understand what you are saying here. What is for you the
>> difference between "the bare theory" and "the renormalized theory at
>> zero length"?
>
> The bare theory does not exist at infinite cutoff, while the renormalized
> theory exists in the limit where the cutoff is removed.

Hmm. I fear there is still a gap in either my understanding or our
respective terminologies. Perhaps a diagram will help. Here is a
logarithmic scale of lengths (say in meters):

          1e-9             1e-5 1e-(5-eps)        1e-2
|...======|================|===|==================|============...
A         B                R   Re                 M

Here A is the zero length scale (all degrees of freedom included).
B is the length scale at which I define my kinetic fluid theory (aka the
bare theory). R and Re are the same theory but with all degrees of freedom
defined on scales between 1e-9 and 1e-5 or 1e-(5-eps) eliminated (aka the
renormalized theory). Finally, M denotes the length scale at which
measurements are done. The intermediate scale 1e-5 for R can be chosen
arbitrarily between B and M. Renormalization tells us what happens to the
parameters describing the theory as this scale moves, for example from R
to Re. In this case, it is possible to move R to the right in a well
defined way, but moving it to the left is not uniquely defined. In a
renormalizable theory, it should be possible to move R to the left or to
the right in a well defined way. Also, if R is moved all the way to M, the
parameters describing the renormalized theory should have a direct
connection with the parameters measured at M.

I choose to talk about the kinetic fluid theory because I find it easiest
to think about conceptually. On the left we have the Boltzmann equation,
while on the right we have the Navier-Stokes equations. Renormalization is
straightforwardly implemented by coarse graining. If you find another
theory more conceptually appealing, please feel free to use it as an
example.

Now, suppose I have some measurements at M. I construct a theory at length
scale R=M whose parameters are directly determined by the measurements.
Then, if I've constructed a renormalizable theory, I should be able to
move R to the left and have the parameters of the theory evolve according
to the RG flow with their values at M as the initial conditions. In
principle, I could move R all the way to scale B and have finite
parameters for what I called my bare theory. If there were no scale B to
speak of, I could move R all the way to scale A and all the parameters
could still be finite. Of course, it is also possible that the parameters
diverge at point A or even at some finite scale. This behavior is
determined by the RG flow equations and the initial conditions, but
neither type of behavior is excluded a priori.

The case that we usually deal with in QFT is as follows (energy or
momentum scale):

oo                        Lambda                  E
|...=======================|======================|============...
A                          R                      M

M is the scale at which we do scattering experiments. R is some finite but
large cutoff $\Lambda$ at which we can do perturbative calculations. And A
represents no cutoff (arbitrarily large momenta are taken into account).
Usually we take what is measured at E and use these values to fix the
initial conditions for the parameters at some $\Lambda$. Then the
renormalized theory at R can be taken to any point on the left through the
RG equations with the measured parameters as initial conditions. Since
there is no intermediate scale at which to stop the renormalization
procedure, usually R is taken to the limit point A. The theory we obtain
at point A is then called the bare theory. However, the bare coupling
constants at A usually end up being divergent.

In principle it should be possible to do this procedure backward, just as
in the kinetic fluid theory. We start with a bare theory defined at point
A with some fixed finite parameters. Then we can solve the RG flow
equations for R moving from left to right, using the values of the
parameters at A as the asymptotic initial conditions. In principle no
divergence of parameters need take place (depending on the flow
equations). Unfortunately, in the case of usual QFT, once we get to M, the
parameters do diverge. And that's what causes the confusion about infinite
predictions.

If you were to draw a similar diagram, where would you place the labels
for "bare theory", "renormalized theory", "effective theory", and so on?
If you talk about renormalization, which way are you moving the scale R,
left or right? When you talk about "bare parameters" or "renormalized
parameters" and how they change under RG flow, where do you fix your
initial conditions?

> Yes. I simply wanted to emphasize that you always compare two
> _renormalized_ theories, at slightly different energy scale E. Any of
> these renormalized theories has parameters very far away from the bare
> theory with fixed bare parameters g and a huge cutoff $\Lambda$, no
> matter what the value of E.
>
> But it is close to a bare theory with suitable bare parameters
> $g(\Lambda)$ and huge $\Lambda$, with $g(\Lambda)$ floating as a
> function of $\Lambda$.
>
> Note that in your effective picture, it is not allowed to identify the
> cutoff $\Lambda$ with the energy scale E. The latter defines the
> renormalization conditions and can have _any_ finite value, while the
> former has to go to infinity. After renormalization, the low energy
> behavior (energies < E) is independent of E.

I'm still confused by your terminology. Where would the energy E and scale
$\Lambda$ be relative to each other on a scale diagram such as above?
Where would g and $g(\Lambda)$ be defined?

Thanks.

Igor
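The RG-flow picture in this exchange can be sketched with a one-coupling caricature (my own sketch, not any specific QFT; the beta function and all numbers are invented). With $\beta(g) = b g^2$, the flow $dg/d\ln\Lambda = b g^2$ integrates to $1/g(\Lambda) = 1/g(E) - b \ln(\Lambda/E)$; fixing g at the measurement scale E and moving the cutoff toward higher energies makes g grow and finally diverge at a finite scale (a "Landau pole"), the kind of divergence of bare parameters discussed above.

```python
# One-loop-style RG flow for a single coupling g with beta(g) = b*g^2.
# g_at(Lambda) gives the coupling at cutoff Lambda, given the measured
# value gE at the scale E, by the exact solution of the flow equation.
import math

def g_at(Lambda, E, gE, b=0.1):
    denom = 1.0 / gE - b * math.log(Lambda / E)
    if denom <= 0:
        return float('inf')          # past the Landau pole
    return 1.0 / denom

E, gE = 1.0, 0.5                     # measured coupling at scale E
for Lam in (1.0, 10.0, 10.0**8, 10.0**9):
    print(Lam, g_at(Lam, E, gE))
# g stays finite for a while but blows up near Lambda ~ exp(1/(b*gE))*E,
# i.e. around e^20 ~ 5e8 in these arbitrary units: moving the cutoff
# "to the left" drives the bare coupling to infinity at a finite scale.
```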



Arnold Neumaier wrote:

> I wouldn't say renormalization "cures" a wrong theory. It creates a
> good theory by a limiting procedure, just as we define good functions
> like $e^x$ by the limiting procedure $\lim_{n \to \infty} (1+x/n)^n$.

You are, of course, right with your statement, but one should stress that
even for perfectly finite theories, one needs to renormalize perturbation
theory. For instance, take an effective theory with a cutoff function at
the vertices, which is perfectly finite for all Feynman diagrams (I guess
one can construct such a thing). Then you calculate, e.g., the self-energy
for one of the particles contained in this theory. It's, as I said, a
perfectly finite expression. Nevertheless you have to renormalize the wave
function and the mass of the particle in order to stay in agreement with
experiment.

The reason is simply that you start from a fictitious particle which is
non-interacting (in QED, an electron without the electromagnetic field
around it). This fictitious particle has a well-defined finite mass, which
is nevertheless not observable, since an electron comes with its em. field
around it, and this contains energy and thus also carries mass. Thus, when
you calculate the electron self-energy you correct for this (order by
order in the number of Feynman-diagram loops or coupling constants). To
make physical sense out of it, you had better subtract at the very
beginning all unphysical contributions to the mass. This not only renders
the QED calculation finite (due to the BPHZ theorem) but also sets the
physical, observable mass to the observed value of about 511 keV.

> A good theory is one for which the needed limits exist. If they don't,
> it is a definitely wrong theory. This is the rule, and there is no
> other one.

You don't need to regularize the theory to renormalize, although it is
much more convenient to work with a regularized theory as an intermediate
step to sort the infinities out (e.g., for many cases dimensional
regularization is especially convenient, in particular for gauge
theories). But as I said, that's not necessary; it's just a mathematical
tool to calculate the perturbation theory expressions more conveniently.

--
Hendrik van Hees                   Cyclotron Institute
Phone: +1 979/845-1411             Texas A&M University
Fax:   +1 979/845-1899             Cyclotron Institute, MS-3366
http://theory.gsi.de/~vanhees/     College Station, TX 77843-3366
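The point above, that even a completely finite self-energy forces a finite renormalization of the mass, can be caricatured in a few lines (my own toy; the functional form of the "correction" is invented, not the QED calculation described in the post):

```python
# Finite renormalization as parameter identification: even when the
# self-energy shift is perfectly finite, the bare mass is not what is
# measured.  One fixes the bare mass so that the corrected (physical)
# mass equals the observed value.

def physical_mass(m_bare, g):
    # toy "self-energy" correction, finite by construction
    return m_bare + g**2 * 0.25

def fit_bare_mass(m_observed, g):
    # invert the relation: this inversion IS the finite renormalization
    return m_observed - g**2 * 0.25

m_obs, g = 0.511, 0.3               # observed mass (MeV-ish), toy coupling
m0 = fit_bare_mass(m_obs, g)
print(m0, physical_mass(m0, g))     # the second number reproduces m_obs
```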



Eugene Stefanovich wrote:

> Igor Khavkine wrote:
>
>> First, note that by definition a free electron cannot interact with the
>> electromagnetic field. Hence, no absorption nor emission.
>
> Then what about the trilinear terms in the Hamiltonian $a^+c^+a$ and
> $a^+ac$? Aren't they describing exactly these processes: absorption and
> emission of photons by a free electron.

No; if anything it would be absorption and emission of bare photons by a
bare electron. But the bare objects are completely fictitious.

> Let us try to describe the time evolution using the QED Hamiltonian
>
>   $H = H_0 + a^+c^+a + a^+ac + ...$
>
> We form the time evolution operator
>
>   $U(t) = \exp(iHt) = \exp(it(H_0 + a^+c^+a + a^+ac + ...))$
>   $= 1 + it(H_0 + a^+c^+a + a^+ac + ...)$
>   $- t^2/2 (H_0 + a^+c^+a + a^+ac + ...)^2 + ...$
>
> Now apply this operator to the initial state consisting of one electron:
>
>   $U(t) a^+|0> = a^+|0> + a^+c^+|0> + ...$

$a^+|0>$ is _not_ a physical state, but the state of a bare electron,
which does _not_ belong to the Hilbert space in which the physical
(= renormalized) fields act as distribution-valued operators. Hence your
argument is irrelevant.

You should have a look at 2D QFT, where these things are well understood
in terms of actual constructions of the appropriate Hilbert space and
Hamiltonians. The bare objects do not belong to this physical Hilbert
space and hence have no physical meaning.

Arnold Neumaier



Eugene Stefanovich wrote:

> Arnold Neumaier wrote:
>
>>> One such self-consistency requirement is that one of the lowest
>>> eigenstates of H should describe the free electron with mass m.
>>> However, this is not true. The Hamiltonian H has no such eigenstate,
>>
>> Prove what you assert before making such wild claims!
>>
>> In fact, on the perturbative level, H with a cutoff has such an
>> eigenstate depending on the bare parameters in H and the cutoff, and
>> the eigenstate converges as the cutoff goes to infinity when the bare
>> parameters vary with the cutoff as prescribed by renormalization theory.
>
> In the above I discussed pre-1949 QFT, i.e., without cutoff-dependent
> parameters and renormalization.

But people had used cutoffs from very early on!

>>> That's how QFT looked like before 1949. In 1949 Feynman, Tomonaga,
>>> and Schwinger basically said that the Hamiltonian H(m,e,...) is wrong,
>>> and we'll get better results if we use a Hamiltonian of the same
>>> functional form but with infinite masses and charges H(M,E,...).
>>
>> This is a misrepresentation of what they did. Their work does not
>> work with infinite masses, but with finite parameters (for historical
>> reasons called bare masses, etc.) that tend to infinity while the
>> quantities of interest tend to well-defined limits.
>
> That's what I said. To keep things simple, I omitted the intermediate
> phases of the theory while the cutoff is still finite. I am looking at
> the theory in its final form (with the infinite cutoff) and see that
> the parameters in the Hamiltonian (masses and charges) are infinite,

But this _is_ the misrepresentation. Feynman, Tomonaga, and Schwinger
never take any limit in the unrenormalized theory. Thus they never see
infinite parameters.

> I have no complaints about how this theory calculates the S-matrix
> (I understand that infinities in the Hamiltonian get cancelled).
> However, there is no way you can get the time evolution at finite times.

There are a number of approximate ways of getting it. See NRQED, which is
more powerful than your version of QED (which is also approximate only),
and similarity renormalization, although the latter is usually applied to
the front form and not the instant form. But as Dirac observed in his
paper on forms of relativistic dynamics, this gives a good dynamics, too.

Arnold Neumaier



Eugene Stefanovich wrote:

> Arnold Neumaier wrote:
>
>> Eugene Stefanovich wrote:
>>
>> No. It is of exactly the same kind as in a relativistic QFT - namely a
>> contact interaction. And it needs renormalization for exactly the
>> same reasons. Indeed, you can regard quantum mechanics as a
>> 1-dimensional field theory. Write down a local 1D field theory and its
>> Hamiltonian, and see what happens....
>
> I still think that the fundamental difference between QM and QFT is that
> in QFT interaction modifies the properties of particles.

But why, then, is this not a problem in 1D and 2D quantum field theories?
There, there is no fundamental difference between QM and QFT since we
understand everything rigorously. But your arguments would apply there,
too, since the interactions look formally just the same. In the 4D case,
only the analytical difficulties are larger and incompletely settled.

So far there is no theorem that would forbid renormalized QFT in 4D, and
it is very likely that at least non-abelian Yang-Mills theories exist in
4D; there is a million dollar prize offered for proving it. But these
theories all have the terms that you call 'unphysical'.

But it is unlikely that you'll understand me unless you seriously study
some of the work on 2D quantum field theories, such as the construction in
Chapter 6 of Glimm and Jaffe's quantum physics book (and later chapters
for the proofs). Your arguments reflect a conspicuous lack of knowledge in
this area.

Arnold Neumaier



Igor Khavkine wrote:
> Hmm. I fear there is still a gap in either my understanding or our
> respective terminologies. Perhaps a diagram will help. Here is a
> logarithmic scale of lengths (say in meters)
>
>        1e-9             1e-5 1e-(5-eps)        1e-2
> |...======|================|===|==================|============...
>     A     B                R   Re                 M
>
> Here A is the zero length scale (all degrees of freedom included).
> B is the length scale at which I define my kinetic fluid theory (aka
> the bare theory). R and Re are the same theory but with all degrees of
> freedom defined on scales between 1e-9 and 1e-5 or 1e-(5-eps)
> eliminated (aka the renormalized theory). Finally, M denotes the
> length scale at which measurements are done. The intermediate scale
> 1e-5 or R can be chosen arbitrarily between B and M. Renormalization
> tells us what happens to the parameters describing the theory as this
> scale moves, as for example from R to Re. In this case, it is possible
> to move R to the right in a well defined way, but moving it to the
> left is not uniquely defined. In a renormalizable theory, it should be
> possible to move R to the left or to the right in a well defined way.
> Also, if R is moved all the way to M, the parameters describing the
> renormalized theory should have a direct connection with the
> parameters measured at M.

This is all fine - it is about the renormalized theory at different
length scales, or (using E = inverse length in units such that
hbar = c = 1) at different energy scales.

> Now, suppose I have some measurements at M. I construct a theory at
> length scale R=M whose parameters are directly determined by the
> measurements. Then, if I've constructed a renormalizable theory, I
> should be able to move R to the left and have the parameters of the
> theory evolve according to the RG flow with their values at M as the
> initial conditions. In principle, I could move R all the way to scale
> B and have finite parameters for what I called my bare theory.
Here is the problem. The analogy to QFT breaks down the way you use the
terms. In QFT, the bare theory as you discuss it does not exist because
of the infinities, and your theories with cutoff are not renormalized
in the QFT sense - they are just bare theories with a cutoff. The
correct analogy is slightly different; see below.

> The case that we usually deal with in QFT is as follows (energy or
> momentum scale):
>
>   oo                     Lambda                 E
> |...===================|=====================|============...
>     A                  R                     M
>
> M is the scale at which we do scattering experiments. R is some finite
> but large cutoff Lambda at which we can do perturbative calculations.

In QFT, there are two different scales, one on the bare level and one
on the renormalized level, and the meaning of the renormalization group
is slightly different from that in statistical mechanics. This caused
our misunderstanding.

On the statistical mechanics level, there is the cutoff beyond which
one cannot (or does not want to) observe anything. This effective
cutoff is a parameter Lambda in an effective theory defined by coarse
graining. The effective theory depends on Lambda: for different values
of Lambda you get a _different_ effective theory, though their low
energy predictions are essentially the same. This is expressed by
renormalization group equations that relate the parameters
g(Lambda,mu) in the different effective theories such that some key low
energy observables mu keep the same values. The number of such key
observables (i.e., the dimension of mu) equals the number of parameters
in the effective theory (i.e., the dimension of g); most other
observables are different at different cutoffs (though only slightly if
they are observable at low energy), because of the coarse graining done
when lowering the cutoff scale Lambda.

In QFT, the above is mimicked on the _bare_ level.
The cutoff is a large energy Lambda beyond which the bare interaction
is modified to be able to get a meaningful limit; this corresponds to
coarse-graining. The resulting bare theory with cutoff Lambda is a
well-defined effective theory and behaves precisely as described above.

To define the renormalized theory, one needs, in addition to the
cutoff, renormalization conditions defining the bare parameters in
terms of renormalized parameters q. These conditions depend on a
renormalization scale E figuring in the equations defining the
renormalization conditions. Thus the bare parameters are functions
g(Lambda,q,E) of the cutoff Lambda, the renormalized parameters q, and
the renormalization scale E.

The renormalization group equations in the statistical mechanics
analogy would describe how g(Lambda,q,E) changes as the cutoff Lambda
is altered. However, in QFT, this is of no physical interest. Indeed,
Lambda is completely eliminated from considerations: the renormalized
theory is obtained at fixed E by letting the cutoff Lambda go to
infinity. This has the effect that the bare parameters become
meaningless, since the limit lim_{Lambda to inf} g(Lambda,q,E) does not
exist. At this stage it becomes obvious that all bare objects are
unphysical.

Some expressions of the theory, however, survive the limit and describe
observable physics. They can therefore be expressed as functions of q
and E only, whose detailed form comes from the standard theory.
However, there is a little twist since the scale E can be chosen
arbitrarily, hence cannot be measurable. In terms of a fixed set of
physical parameters mu (measurable under well-defined experimental
conditions), we can predict mu by some function of q and E,
mu = mu(q,E).
Solving for q, we can express q in terms of mu and E, q = q_ren(mu,E).
But the exact renormalized result of a physical prediction P(q,E) must
be completely independent of E, uniquely determined by the physical
parameters mu. Thus

  d/dE P(q_ren(mu,E),E) = 0,

which are the renormalization group equations of interest in QFT. In
contrast to the statistical mechanics situation, however, the sliding
scale is the renormalization scale E and _not_ the cutoff Lambda (which
at this stage is already infinite).

The picture is somewhat confused by the fact that we cannot compute
this renormalized theory at any E, since it is exceedingly complicated.
Thus we need to consider approximations. These approximations are no
longer independent of E. It turns out that the approximation errors are
small only when the energy scale of the experiment for which a
prediction is made is close to the renormalization scale E. Thus one
needs to evaluate the theory at the scale of interest. However,
perturbation theory is valid only near a fixed point E^* of the
renormalization group equations. Therefore, one determines approximate
formulas for the quantities q_ren(mu,E) with E close to the appropriate
fixed point E^*, and then uses (also approximate) renormalization group
equations to transform the result to the scale of interest.

> R is taken to the limit point A. The theory we obtain at point A is
> then called the bare theory. However, the bare coupling constants at A
> usually end up being divergent.

Which means that the so-called bare theory is pure nonsense, and is
better avoided completely.

> If you were to draw a similar diagram, where would you place the
> labels for "bare theory", "renormalized theory", "effective theory",
> and so on?

There are two diagrams: one for the bare theory (which is what you
describe), where A does not make sense, and where Lambda figures. And
one for the renormalized theory, where nothing bare can be placed, and
where E figures.
The first diagram describes a continuum of different theories, the
second one a _single_ renormalized theory parameterized in a different
way.

> If you talk about renormalization, which way are you moving the scale
> R, left or right? When you talk about "bare parameters" or
> "renormalized parameters" and how they change under RG flow, where do
> you fix your initial conditions?

The renormalization group is a flow, which has no associated initial
conditions. You can start anywhere and move in both directions.

> I'm still confused by your terminology. Where would the energy E and
> scale Lambda be relative to each other on a scale diagram such as
> above?

They sit on _different_ scales and are completely unrelated. The right
diagram to draw is the one for the renormalized theory, where it should
look like

                 E^*                    E
=================|======================|============...
                 R                      M

M is the scale at which we do scattering experiments. R is the fixed
point near which we can do perturbative calculations.

> Where would g and g(Lambda) be defined?

Only in the bare theory with cutoff. There g is a free parameter, and
one can reparameterize it as a function of Lambda by the usual
renormalization prescription. But then it becomes a function of Lambda
_and_ E. In the renormalized theory there is only a q(E), of which
conventionally one component is g(E).

Arnold Neumaier
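The roles of the cutoff Lambda and the subtraction at a renormalization scale E in the exchange above can be illustrated numerically with a toy "self-energy" integral (a hypothetical example, not tied to any particular QFT): the bare quantity diverges logarithmically as Lambda -> inf, but a renormalized quantity, defined by subtracting the value at a reference scale E, has a finite, cutoff-independent limit.

```python
import math

def bare(Lam, m):
    """Toy 'bare self-energy': integral of k/(k^2 + m^2) from 0 to Lam.
    Closed form: 0.5*log(1 + Lam^2/m^2). Diverges as Lam -> inf."""
    return 0.5 * math.log(1.0 + Lam**2 / m**2)

def renormalized(Lam, m, E):
    """Subtract the same integral evaluated at the renormalization
    scale E. The cutoff dependence cancels as Lam -> inf, leaving the
    finite limit log(E/m)."""
    return bare(Lam, m) - bare(Lam, E)

m, E = 1.0, 10.0
for Lam in (1e2, 1e4, 1e6):
    print(Lam, bare(Lam, m), renormalized(Lam, m, E))
# The bare value grows without bound with the cutoff, while the
# renormalized value approaches the finite limit log(E/m).
```

This mirrors the structure of the argument: the bare parameter g(Lambda,q,E) has no limit, but suitably subtracted combinations survive the limit Lambda -> inf and depend only on the physical input and the (arbitrary) scale E.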



Hendrik van Hees wrote:
> Arnold Neumaier wrote:
>
>>I wouldn't say, renormalization "cures" a wrong theory. It creates a
>>good theory by a limiting procedure, just as we define good functions
>>like e^x by the limiting procedure lim_{n to inf} (1+x/n)^n.
>
> You are, of course, right with your statement, but one should stress
> that even for perfectly finite theories, one needs to renormalize
> perturbation theory.

I never disputed this, and mentioned it even a few times in this
discussion. Of course, in the above statement, the context is local
QFT, so that it was understood that without renormalization, the
"theory" is nonsense.

> For instance take an effective theory with a cutoff function at the
> vertices, which is perfectly finite for all Feynman diagrams (I guess
> one can construct such a thing).
>
> Then you calculate, e.g., the self-energy for one of the particles
> contained in this theory. It's, as I said, a perfectly finite
> expression. Nevertheless you have to renormalize the wave function and
> the mass of the particle, in order to stay in agreement with
> experiment.
>
> The reason is simply that you start from a fictitious particle, which
> is non-interacting (in QED an electron without electromagnetic field
> around it). This fictitious particle has a well-defined finite mass,
> which is nevertheless not observable, since an electron comes with its
> em. field around it, and this contains energy and thus carries also a
> mass.

My way of presenting this (which, I think, is much less confusing than
the conventional way) is to regard the parameters in the action simply
as parameters, rather than as masses, charges, etc. of fictitious
objects. Then it is a triviality to note that measurable quantities
[aka dressed masses, charges, etc.] are functions of the parameters of
the theory [aka bare masses, charges, etc.]. Calling this finite
renormalization just makes a conundrum out of a simple observation.
Nowhere except in quantum field theories is it customary to assign the
defining parameters of the theory a fictitious physical meaning! I
think this is part of the reason why QFT is difficult for beginners. It
took me many years to find out how simple everything is when described
in more telling words...

>>A good theory is one for which the needed limits exist. If they don't,
>>it is a definitely wrong theory. This is the rule, and there is no
>>other one.
>
> You don't need to regularize the theory to renormalize, although it is
> much more convenient to work with a regularized theory as an
> intermediate step to sort the infinities out (e.g. for many cases dim.
> regularization is especially convenient, in particular for gauge
> theories), but as I said, that's not necessary, it's just a
> mathematical tool to calculate the perturbation theory expressions
> more conveniently.

I don't understand. How do you renormalize Phi^4 theory, say, without
regularization? Do you refer to the Epstein-Glaser procedure? In all
schemes I know of, one has to take a limit in order to get at the
renormalized version (unless the theory is already finite without any
renormalization), and without that limit one has a regularized theory.

Arnold Neumaier
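Neumaier's analogy of defining a good theory by a limiting procedure, just as e^x = lim_{n to inf} (1+x/n)^n, is easy to check numerically; a minimal sketch:

```python
import math

def exp_approx(x, n):
    """Finite-n approximant (1 + x/n)^n to e^x; the role of the
    'cutoff' is played here by n."""
    return (1.0 + x / n) ** n

x = 2.0
for n in (10, 1000, 100000):
    print(n, exp_approx(x, n), math.exp(x))
# The approximants converge to e^2 as n grows: any fixed-n approximant
# is 'wrong', but the limit defines the function exactly -- the analogy
# to taking the regularization away at the end of the calculation.
```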



Arnold Neumaier wrote:
> Eugene Stefanovich wrote:
>
>>In my approach, there are no artificial "momentum cutoffs".
>
> This is not true. You need it
> 1. to argue that your theory has the cluster separation property, and
> 2. to show that it gives the same S-matrix as standard QED.
> If you start directly with your renormalized Hamiltonian, and do not
> use its construction in terms of a cutoff, you could neither argue
> that your QED gives the correct Lamb shift, nor would you be able to
> show cluster separation. This amply demonstrates that the covariant
> approach is mathematically superior.

1. In my approach the cluster separability property of QED is
explicitly preserved. This is achieved by choosing the coefficient
functions of interaction operators smooth and bounded (condition (III)
in subsection 12.1.3). There is also a great deal of discussion of
cluster separability in my paper "Quantum field theory without
infinities", Ann. Phys. (NY) 292 (2001), 139.

2. I also proved that the S-matrix is exactly the same as in
renormalized QED, so Lamb shifts are reproduced.

> Note that as long as one sticks to perturbation theory (and you do),
> there is nothing mathematically dubious about renormalization.
> Salmhofer's book about renormalization is completely rigorous, though
> it treats bosons only, to limit the visual complexity. But the case of
> fermions is technically simpler, since the smeared creation and
> annihilation operators are bounded, and his method applies with little
> change to QED, giving full rigor to perturbative renormalization of
> QED.
>
> The only dubious thing is the textbook talk about infinite bare
> masses, which is a historical remnant similar to the Dirac sea, and
> neither needed nor didactically useful.

I allow myself to repeat again my position on renormalization. The
usual renormalization in QED requires three actions:
1. add renormalization counterterms to the Hamiltonian;
2. specify an integration cutoff L in all loop integrals;
3.
analyze the behavior of the theory as L -> inf.

While L is finite, the counterterms remain finite (though large) and
the S-matrix reproduces scattering properties well at momenta below L.
When L is set to infinity, the scattering properties are reproduced for
all momenta, but the Hamiltonian becomes infinite. This is OK for most
applications, since (almost) nobody cares about time evolution (where
the Hamiltonian is needed). The S-matrix is accurate, and that's all we
need. In this sense I don't see any problem with renormalization.

The problem occurs when we want to use the Hamiltonian with infinite
parameters (bare masses and charges) to evaluate time evolution. Even
with finite L (when the bare parameters are not yet infinite, and the
Hamiltonian is formally finite), this is not possible, due to the
unrealistic creation of particles out of the vacuum (e.g., a^+b^+c^+
terms) and around single particles (a^+c^+a terms) encoded in the QED
Hamiltonian. I do not accept surrogate ways to describe the time
evolution like the "closed path" formalism you referred to earlier. In
quantum mechanics the only correct way to describe time evolution is
with the time evolution operator exp(iHt). The Hamiltonian must be
finite.

My proposal to cure the renormalization approach is to add a fourth
action to the above list:
4. a unitary transformation of the Hamiltonian which
   - removes unphysical terms like a^+b^+c^+ and a^+c^+a;
   - preserves cluster separability (= smoothness of interactions in
     momentum space);
   - makes the coefficient functions of interaction terms decay rapidly
     in momentum space, so that all loop integrals are convergent;
   - preserves the S-matrix.
All these conditions can be satisfied (in each order of perturbation
theory), and in the limit L -> inf we obtain a finite "dressed
particle" Hamiltonian (assuming that the perturbation series
converges), which can be used in calculations just as usual
Hamiltonians in non-relativistic quantum mechanics, i.e., without
regularization and renormalization.

>>Renormalization in QED is a clear sign that this theory
>>is sick.
>
> Perturbatively renormalized QED is very healthy and predictive.

For the S-matrix, I agree.

> More than your theory, which relies on the equivalence to standard QED
> calculations for its comparison with high precision measurements. You
> can calculate far less than they, since your instant form techniques
> are far more complicated than standard covariant renormalization
> techniques.
>
> What is sick about QED is only the nonperturbative aspect. But this
> sickness is not cured by your approach, which replaces it by the more
> severe sickness of a Hamiltonian that is a formal power series whose
> convergence behavior is questionable.

You are right, I haven't touched the non-perturbative aspect of QED at
all. I am not claiming any success in solving this problem. I haven't
discussed the convergence properties of the perturbation expansion for
the S-matrix or Hamiltonian.

> Thus it is not even clear whether this Hamiltonian exists as a
> well-defined operator. So you only have a scheme to generate
> approximate Hamiltonians, which lack Lorentz invariance and gauge
> invariance.

This is not true. In the limit L -> inf, the dressed Hamiltonian is
exact (it exactly reproduces the S-matrix of QED in each order).
Moreover, augmented with a similarly processed boost operator, this
Hamiltonian satisfies Poincare invariance exactly (all commutators of
the Poincare Lie algebra hold). Of course, these properties are exact
only in the complete perturbation series. By inevitably truncating the
series we introduce errors.
But, at least, we have a well-defined algorithm for making the errors
smaller and smaller, if needed. I do not care about gauge invariance at
all. I don't think it is important for physics.

> But as to such approximate Hamiltonians, people have constructed and
> used them within NRQED for a long time with success. See, e.g.,
> http://www.arxiv.org/abs/hep-ph/9209266,
> http://www.arxiv.org/abs/hep-ph/9805424,
> http://www.arxiv.org/abs/hep-ph/9707481.

Thanks for the references. I can also cite some older good papers on
NRQED. They can be found on the KEK preprint site under numbers
85-4-383 and 90-9-168.

If I understand the NRQED approach correctly, they do the following (in
addition to steps 1-2-3 above):
4a. do not go to the limit L -> inf, but stop at finite values L ~ mc,
    where m is the electron mass or other characteristic parameter;
5a. add more terms to the Hamiltonian which approximately compensate
    for the error in low momentum processes introduced by the finite
    cutoff.

NRQED is a fine theory for studying low-energy scattering or bound
states. However, it is not difficult to recognize that my approach
(step 4) is completely different. In contrast to NRQED, this approach
is exact and preserves Poincare invariance.

> Unless you can surpass their accuracy, you are behind the state of the
> art. There are many ways to renormalize QED, and yours is not as
> special as you seem to think. In particular, it is not 'the' solution
> to the problems of QED, as you advertise it!

I would agree with you 100% if you were speaking about applications of
QED to scattering phenomena and energies of bound states. I am not
trying to beat existing approaches here. However, I think I added a
couple of new ideas which allow QED to be extended to studies of
radiative corrections to wavefunctions and time evolution. The
experiments concerning time evolution of wave packets formed from
excited atomic states are still in their infancy.
However, with increased precision, experimentalists will be able to
detect the effects of radiative corrections on such time evolution.
Then my approach will come in handy.

Eugene Stefanovich



Igor Khavkine wrote:
> On Mon, 20 Dec 2004 16:03:38 +0000, Eugene Stefanovich wrote:
>
>>Igor Khavkine wrote:
>>
>>>In any case, I don't think you are contradicting anything that I
>>>wrote in my previous post. Moreover, as I tried to explain, you do
>>>not need to think about all this stuff to get an idea why
>>>renormalization is useful and applicable. One just has to realize
>>>that values of physically measured parameters need not correspond
>>>directly to the parameters entering the bare theory, and that
>>>renormalization provides a link between them.
>>
>>In my approach, there are no "bare" particles and "dressed" particles.
>>There are just real particles. The parameters entering the theory are
>>the same which are measured in experiments. The particles do not
>>interact with their own E&M fields. There is no self-interaction. In
>>my approach, there are no artificial "momentum cutoffs". All integrals
>>are honestly evaluated to infinity. Nevertheless, there are no
>>ultraviolet infinities.
>
> I made no mention of "bare" or "dressed" particles. I do not even
> speak of quantum electrodynamics. What I claim may be applied to any
> reasonable theory, classical or quantum. My claim is that it is most
> unlikely that for an arbitrary theory, all its parameters are directly
> measurable. If you wish to contradict me, try for example to come up
> with an experiment that will measure the size of a water molecule by
> observing water's macroscopic hydrodynamic properties.

There is a slight difference. In fluid dynamics we don't even know if
water molecules exist. Water is described by macroscopic properties,
like density and viscosity. Molecular properties appear neither in
theory nor in experiment. On the other hand, in high energy physics, we
do care about electrons. They are directly observed, and their
properties appear in equations.
The inconsistency appears in the fact that the electron properties
(mass and charge) used in the input of the theory disagree with the
same properties obtained by calculations. This looks like a serious
logical flaw to me.

>>Renormalization in QED is a clear sign that this theory is sick. A
>>correctly formulated theory should do well without renormalization.
>
> You have yet to provide a defensible reason for this claim. On the
> other hand, I see renormalization as a necessary (if sometimes
> implicit) ingredient of any physical theory, except perhaps some very
> special ones. That is what I tried to explain in my previous posts.

There is no renormalization in ordinary quantum mechanics. Particle
properties are not affected by interactions. So, QM is logically
consistent. I do not see any compelling reason why this fine formalism
should be turned on its head (that's what QFT does) simply because we
allowed interactions which change the number of particles.

>>I am sorry for being repetitive. Probably I am the only one whose
>>sense of scientific beauty is offended by the twisted "logic" of
>>traditional renormalized QFT.
>
> You are treading on dangerous ground here. "Twisted" and "beautiful"
> are very subjective adjectives. They bear little weight in a
> scientific discussion. The same theory may admit many possible
> descriptions, without one contradicting another. I am not in a
> position to say that what you have done is without merit. However, you
> must accept that there are other descriptions of the same theory that
> are equally correct as yours, if it is indeed so, no matter how their
> philosophy does or does not appeal to you. Philosophy is not a
> substitute for mathematics. Judging their individual beauty is up to
> individuals, but all should be able to agree on their correctness.

Agreed.

Eugene.



Hendrik van Hees wrote:
> Arnold Neumaier wrote:
>
>>I wouldn't say, renormalization "cures" a wrong theory. It creates a
>>good theory by a limiting procedure, just as we define good functions
>>like e^x by the limiting procedure lim_{n to inf} (1+x/n)^n.
>
> You are, of course, right with your statement, but one should stress
> that even for perfectly finite theories, one needs to renormalize
> perturbation theory.
>
> For instance take an effective theory with a cutoff function at the
> vertices, which is perfectly finite for all Feynman diagrams (I guess
> one can construct such a thing).
>
> Then you calculate, e.g., the self-energy for one of the particles
> contained in this theory. It's, as I said, a perfectly finite
> expression. Nevertheless you have to renormalize the wave function and
> the mass of the particle, in order to stay in agreement with
> experiment.
>
> The reason is simply that you start from a fictitious particle, which
> is non-interacting (in QED an electron without electromagnetic field
> around it). This fictitious particle has a well-defined finite mass,
> which is nevertheless not observable, since an electron comes with its
> em. field around it, and this contains energy and thus carries also a
> mass.
>
> Thus, when you calculate the electron self-energy you correct for this
> (order by order in the number of Feynman-diagram loops or coupling
> constants). To make physical sense out of it, you better subtract at
> the very beginning all unphysical contributions to the mass. This not
> only renders the QED calculation finite (due to the BPHZ theorem) but
> also sets the physical, observable mass to the observed value of about
> 511 keV.
>
>>A good theory is one for which the needed limits exist. If they don't,
>>it is a definitely wrong theory. This is the rule, and there is no
>>other one.
> You don't need to regularize the theory to renormalize, although it is
> much more convenient to work with a regularized theory as an
> intermediate step to sort the infinities out (e.g. for many cases dim.
> regularization is especially convenient, in particular for gauge
> theories), but as I said, that's not necessary, it's just a
> mathematical tool to calculate the perturbation theory expressions
> more conveniently.

I agree with you. Though I want to add that there is a way to design a
theory of charged particles in which the electron does not interact
with its own electromagnetic field, there is no self-interaction, and
regularization and renormalization are not needed at all. The electron
properties in the input are the same as at the output. The key is to
define the Hamiltonian correctly (without "bad" trilinear terms like
a^+c^+a). I don't know why, since the end of the 1920s, everybody has
been stuck with the same sick Hamiltonian

  H = H_0 + a^+c^+a + a^+ac + ...

In experiments we care only about the S-matrix, and there is great
freedom in choosing a Hamiltonian which does not affect the calculated
S-matrix. Exploiting this freedom can solve a lot of problems.

Eugene Stefanovich
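The "freedom in choosing a Hamiltonian" invoked above rests on unitary equivalence: H and U H U^+ have identical spectra and identical spectral predictions, even though their matrix elements look completely different. A minimal numerical sketch with a random Hermitian matrix (the matrix and the transformation are hypothetical, purely to illustrate the invariance):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Hermitian "Hamiltonian" on a 4-dimensional space.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# A unitary U, obtained from the QR decomposition of a random
# complex matrix (Q is always unitary).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

H2 = Q @ H @ Q.conj().T  # unitarily transformed Hamiltonian

# The two Hamiltonians differ entry by entry in general...
print(np.allclose(H, H2))
# ...but their eigenvalues, and hence all spectral predictions, agree.
print(np.linalg.eigvalsh(H))
print(np.linalg.eigvalsh(H2))
```

Of course, the nontrivial physical claim in the post is stronger than this toy: that a unitary transformation can be chosen, order by order, which also removes specific interaction terms while preserving the S-matrix.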



Igor Khavkine wrote:
> On Mon, 20 Dec 2004 16:03:38 +0000, Eugene Stefanovich wrote:
>
>>Igor Khavkine wrote:
>>
>>>In any case, I don't think you are contradicting anything that I
>>>wrote in my previous post. Moreover, as I tried to explain, you do
>>>not need to think about all this stuff to get an idea why
>>>renormalization is useful and applicable. One just has to realize
>>>that values of physically measured parameters need not correspond
>>>directly to the parameters entering the bare theory, and that
>>>renormalization provides a link between them.
>>
>>In my approach, there are no "bare" particles and "dressed" particles.
>>There are just real particles. The parameters entering the theory are
>>the same which are measured in experiments. The particles do not
>>interact with their own E&M fields. There is no self-interaction. In
>>my approach, there are no artificial "momentum cutoffs". All integrals
>>are honestly evaluated to infinity. Nevertheless, there are no
>>ultraviolet infinities.
>
> I made no mention of "bare" or "dressed" particles. I do not even
> speak of quantum electrodynamics. What I claim may be applied to any
> reasonable theory, classical or quantum. My claim is that it is most
> unlikely that for an arbitrary theory, all its parameters are directly
> measurable. If you wish to contradict me, try for example to come up
> with an experiment that will measure the size of a water molecule by
> observing water's macroscopic hydrodynamic properties.

Your analogy would be correct if we assume that there is some
underlying fundamental theory behind QED, like string theory, or
something like that at the Planck scale. Then we can regard QED as a
sort of low-energy large-scale limit of something more general. I think
these views are prevailing at the moment.
My work on RQD, however, shows that all renormalization problems can be
solved without such an assumption, and a consistent theory of charged
particles and photons can be constructed on its own, without any
reference to something "more fundamental".

Eugene.



Eugene Stefanovich wrote:
> Igor Khavkine wrote:
> > On Sun, 19 Dec 2004 12:51:20 +0000, Eugene Stefanovich wrote:
> >>Nobody has ever seen an electron-positron pair spontaneously created
> >>out of vacuum and then annihilated. Nobody has ever seen a photon
> >>emitted and re-absorbed by a free propagating electron. The theory
> >>becomes much simpler if these bogus processes are eliminated. That's
> >>the idea of the dressed particle approach.
> >
> > First, note that by definition a free electron cannot interact with
> > the electromagnetic field. Hence, no absorption nor emission.
>
> Then what about the trilinear terms in the Hamiltonian, a^+c^+a and
> a^+ac? Aren't they describing exactly these processes: absorption and
> emission of photons by a free electron? I understand that these terms
> are not present in the S-matrix, because they do not conserve energy.
> But these terms cannot be ignored when you consider time evolution at
> finite times.

Ah, but then we are talking about an *interacting* electron. Without
the interaction terms in the Hamiltonian, its eigenstates are assigned
the meaning of states with a certain number of particles or photons.
These states form a basis for the Hilbert space of the theory. Once the
interaction is added, one keeps the Hilbert space, but has a different
Hamiltonian. The interacting Hamiltonian has its own set of
eigenstates, which is *different* from the eigenstates of the free
Hamiltonian. Same Hilbert space, two different sets of basis vectors.

However, the interacting eigenstates are also classifiable according to
representations of the Poincare group. So to us they will appear to
have definite mass, momentum, and spin. In other words, these
interacting states will be what we'll perceive as particles, and it is
their properties that will be measured in experiments.
The connection to the free states is made by assuming that they are
continuously connected with the interacting states when the coupling
constant is varied continuously.

> > Second, your conclusion that trilinear interactions are unphysical
> > stems from the evidence that the physical processes literally
> > corresponding to the following Feynman diagrams have never been
> > observed:
> >
> >          __{__}                 ~~~            straight lines are
> >         /      \               ~   ~           electrons
> >  ~~~~~~~<      >~~~~~~~   -----+-----+-----    wavy lines are
> >         \__{__}/                               photons
> >
> > However, this conclusion is only valid if there indeed *are*
> > physical processes literally corresponding to individual Feynman
> > diagrams. I would caution you from deriving physical consequences
> > from concepts whose reality is at best tenuous.
> >
> > The question of physical reality of individual Feynman diagrams
> > comes up frequently here on s.p.r. Please see one of my old posts
> >
> > http://groups.google.ca/groups?selm=...60%40lycos.com
> >
> > and Arnold Neumaier's Theoretical Physics FAQ
> >
> > http://www.mat.univie.ac.at/~neum/physics-faq.txt
> >
> > for an analysis of how "real" they actually are.

> Thanks for these references. I think I can add something to your
> discussions. With the trilinear interaction in QED, the above diagrams
> *must* correspond to real processes. Let us try to describe the time
> evolution using the QED Hamiltonian
>
>   H = H_0 + a^+c^+a + a^+ac + ...
>
> We form the time evolution operator
>
>   U(t) = exp(iHt) = exp(it(H_0 + a^+c^+a + a^+ac + ...))
>        = 1 + it(H_0 + a^+c^+a + a^+ac + ...)
>          - t^2/2 (H_0 + a^+c^+a + a^+ac + ...)^2 + ...
>
> Now apply this operator to the initial state consisting
> of one electron
>
>   U(t) a^+|0> = a^+|0> + a^+c^+|0> + ...
> If you consider a^+, c^+, etc. as operators corresponding to
> really observable particles (you definitely make this assumption
> when you calculate the S-matrix), then you should admit that
> during its time evolution one free electron should be surrounded by
> photons, electron-positron pairs and other garbage. You also
> must admit that these "satellite" particles should be observable
> in experiment (otherwise you should explain why these particles
> are observable in some situations and not observable in others).
> As far as I know, no such observations have been made.

I will admit no such thing. The operators a^+ and c^+ are not creation operators for the electron and photon states that we observe. We observe only interacting electrons and photons, never free ones.

As I understand it, your argument for the unphysical nature of trilinear interactions goes as follows:

1. Take a *free* electron state and evolve it for a finite amount of time
   with the QED Hamiltonian.
2. Observe that this state is not stationary (it changes by more than
   just a phase under time evolution).
3. Conclude that the QED Hamiltonian is unphysical, because you would
   expect an *observed electron* to be described by a stationary state.

Quite obviously, point 3 does not follow. From this argument you can conclude nothing about the physicality of the Hamiltonian. All you can conclude is that *free* electron states are not eigenstates of the interacting Hamiltonian. Your expectations are based on observations, but these observations tell us about the properties of *interacting* electrons. These would in fact be eigenstates of the QED Hamiltonian.

The reason one talks about free states when dealing with the S-matrix is the continuous link between the free and interacting states, given by continuous variation of the coupling constant. Usually the identification is done as follows.
Make the coupling constant time dependent: it has the correct value for a sufficiently long time interval for all the collisions to occur, but it tends to zero adiabatically in the infinite past and future.

Given a free state that is evolved for a sufficiently long time by the interacting Hamiltonian, it becomes a superposition of interacting eigenstates. One eigenstate will make the largest contribution, and the rest can be considered high frequency noise. The corresponding free and interacting states are identified. The same thing happens when a free state is evolved with an adiabatically increasing time-dependent coupling constant. The reverse procedure uses an adiabatically decreasing coupling constant: it takes an interacting state and evolves it into mostly one particular free state plus high frequency noise.

Thus, given a free state in the infinite past, it is adiabatically connected to an interacting state by slowly increasing the coupling constant to the right value. The coupling constant stays constant for a while and the scattering event takes place. The interacting state is then adiabatically converted into a free state in the infinite future by slowly decreasing the coupling constant to zero. In this way, free states in the infinite past (the "in" states) are mapped to free states in the infinite future (the "out" states). This mapping is the S-matrix.

This trick is used only because we don't know how to write down the interacting states directly, nor how to calculate the matrix elements of the evolution operator using them. However, quite obviously, this trick no longer works when you want to consider finite time evolution. It is true that if you start with a free state, it will change under time evolution and there will be observable effects. But you'll never be able to prepare such a state for an experiment. The best you can do is isolate one particle from all other matter.
But this state will be an approximation of a one-particle eigenstate of the *interacting* Hamiltonian, not of the free one.

> Moreover, according to QED, such bogus particles must be
> spontaneously created from vacuum (and rapidly annihilated, of course).
> Take a particle detector,
> put it in an isolated vacuum chamber. You can wait forever,
> but you'll never see any signal there.

What you are saying is basically "given an eigenstate of the interacting Hamiltonian, it will be invariant under time evolution". But this is a tautology and doesn't allow you to draw any conclusions.

> That's why I say that trilinear interactions are not physical.

I hope you can see now why other people disagree with you.

Igor
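[Editorial note: the adiabatic switching described in the post above is the textbook Gell-Mann-Low construction. As a sketch in formulas, with V denoting the interaction term and epsilon the switching rate (notation chosen here, not taken from the thread):]

```latex
H_\varepsilon(t) = H_0 + e^{-\varepsilon |t|}\, V ,
\qquad
S = \lim_{\varepsilon \to 0^+} U_\varepsilon(+\infty, -\infty),
```

where U_epsilon is the evolution operator generated by H_epsilon(t): the coupling is at full strength around t = 0, long enough for the collision to take place, and switches off adiabatically in the far past and far future.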



Eugene Stefanovich wrote:

> I think these views are prevailing at the moment. My work on RQD,
> however, shows that all renormalization problems can be solved without
> such an assumption, and a consistent theory of charged particles and
> photons can be constructed on its own without any reference to
> something "more fundamental".

You are claiming far too much. You haven't shown your theory to be consistent. You have only constructed a sequence of approximate Hamiltonians with neither Lorentz invariance nor gauge invariance. Consistency would require showing that a limiting Hamiltonian exists which gives rise to a theory having both these properties.

Arnold Neumaier



Arnold Neumaier wrote:

> Eugene Stefanovich wrote:
>
>> Igor Khavkine wrote:
>>
>>> First, note that by definition a free electron cannot interact with the
>>> electromagnetic field. Hence, no absorption nor emission.
>>
>> Then what about the trilinear terms in the Hamiltonian, a^+c^+a and
>> a^+ac? Aren't they describing exactly these processes: absorption and
>> emission of photons by a free electron?
>
> No; if anything, it would be absorption and emission of bare photons by
> a bare electron. But the bare objects are completely fictitious.
>
>> Let us try to describe the time evolution using the QED Hamiltonian
>>
>>   H = H_0 + a^+c^+a + a^+ac + ...
>>
>> We form the time evolution operator
>>
>>   U(t) = exp(iHt) = exp(it(H_0 + a^+c^+a + a^+ac + ...))
>>        = 1 + it(H_0 + a^+c^+a + a^+ac + ...)
>>          - t^2/2 (H_0 + a^+c^+a + a^+ac + ...)^2 + ...
>>
>> Now apply this operator to the initial state consisting
>> of one electron:
>>
>>   U(t) a^+|0> = a^+|0> + a^+c^+|0> + ...
>
> a^+|0> is _not_ a physical state, but the state of a bare electron,
> which does _not_ belong to the Hilbert space in which the physical
> (= renormalized) fields act as distribution-valued operators.
> Hence your argument is irrelevant.
>
> You should have a look at 2D QFT, where these things are well understood
> in terms of actual constructions of the appropriate Hilbert space
> and Hamiltonians. The bare objects do not belong to this physical
> Hilbert space and hence have no physical meaning.
>
> Arnold Neumaier

This is my point. These objects are fictitious, but the QED Hamiltonian is written in terms of these objects. So the Hamiltonian does not make sense. This can be fixed by rewriting the Hamiltonian in terms of real "dressed" particles. That's what RQD does.

Eugene Stefanovich