Renormalizable quantum field theories

  • #51
Bob_for_short said:
The Hamiltonian formulation of QED looks non-Lorentz-invariant, but in fact it is invariant; this has been proven many times. I use the Hamiltonian formulation in the gauge-invariant (Dirac's) variables (known also as the Coulomb gauge). I build the relativistic Hamiltonian based on the physics of quantum mechanical charge smearing outlined in the first part of the article. The resulting Hamiltonian (see also "Reformulation instead of Renormalizations", formula (60)) is relativistic but free from self-action. This new formulation is free from non-physical entities and describes the right physics, analogous to the description of atomic scattering. Preliminary non-relativistic estimations show that it is right. I have not presented the detailed relativistic calculations, but it is clear that they only bring some numerical corrections to the right physics already obtained in the non-relativistic approximation.

Bob.

I think you misunderstand what "relativistic and Lorentz invariant" means for the divergence problems.

I have not yet read your paper "Reformulation instead of Renormalizations",
but as far as I have read your first paper, there is nothing in it about solving the divergence problems
while keeping Lorentz invariance.

The divergence is caused by the infinite loops (the action of an infinite number of photons, particles and antiparticles) and by divergent 4-momentum integrals (which keep Lorentz invariance).
It is not caused only by 1/r, as you say.

I do not like the idea of "bare mass" or "bare charge".
I think the idea of QFT has reached its limit.
 
  • #52
DarMM said:
This stuff is strange and perhaps I've been explaining badly. Think of what L_{1} is meant to be a function of: it is a function of the interacting fields \psi and A_{\mu}, which are themselves functions of e. So L_{1} does contain terms of higher order in e through its dependence on the interacting fields.

Does this make sense?

Not yet. I see that our disagreement about QFT is even deeper than I thought. I always thought that quantum fields present in Weinberg's L_0 + L_1 + L_2 are *free* quantum fields. So, L_1 has only 1st order contributions. Apparently, you disagree with that.
 
  • #53
ytuab said:
I think you misunderstand what "relativistic and Lorentz invariant" means for the divergence problems.

I have not yet read your paper "Reformulation instead of Renormalizations",
but as far as I have read your first paper, there is nothing in it about solving the divergence problems
while keeping Lorentz invariance.

As I said previously, the relativistic theory of interacting particles or fields can be cast in Hamiltonian form, so it is a multi-particle quantum mechanics. Such a form is covariant; this has been proven.

ytuab said:
The divergence is caused by the infinite loops (the action of an infinite number of photons, particles and antiparticles) and by divergent 4-momentum integrals (which keep Lorentz invariance).

Do you understand what you are writing? The divergences are caused by divergences. It is a tautology. There is no physical mechanism behind such statements.

ytuab said:
It is not caused only by 1/r, as you say.

Yes, it is. I give an example in "Atom...". If your potential in the integral is, roughly speaking,

1/(r+a) (i.e., "cut-off" or finite at r=0),

but you try to use a perturbation theory like that:

1/(r+a) = 1/r - a/r^2 + a^2/r^3 - ...,

your integral will diverge at small r.
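
To make this concrete, here is a minimal one-dimensional sketch (radial variable only, with a playing the role of the smearing size):

\int_0^R dr/(r+a) = \ln(1 + R/a), which is finite,

whereas integrating the expansion term by term gives \int_0^R dr/r - a\int_0^R dr/r^2 + ..., and every term diverges at the lower limit r = 0. The expansion in a/r is only valid for r >> a, so pushing it down to r = 0 manufactures the divergence.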

As I said previously, I not only use a better initial approximation for interacting fields, but also remove the self-action. So my Hamiltonian is different. It is well defined physically and mathematically, contrary to the standard QED Hamiltonian.

Bob_for_short.
 
  • #54
Bob_for_short said:
As I said previously, the relativistic theory of interacting particles or fields can be cast in Hamiltonian form, so it is a multi-particle quantum mechanics. Such a form is covariant; this has been proven.
Bob_for_short.

I'm sorry to displease you.
But I'm still convinced that your paper is non-relativistic and does not keep Lorentz invariance.

Because if you solved the divergence problems (infinite bare charge and mass, ...) under the condition of Lorentz invariance,
your paper would immediately be accepted by a top journal such as "Nature" or "Science".
So?

The relativistic particle is a point particle.
If you use a "(natural) cut-off", part of the integral becomes discontinuous and upper and lower limits of momentum appear. So this state does not keep Lorentz invariance.

I don't believe in point particles, so I don't believe in QFT (or QM).
 
  • #55
ytuab said:
I'm sorry to displease you.
But I'm still convinced that your paper is non-relativistic and does not keep Lorentz invariance.

It is a very superficial impression. I have had no objections from experienced researchers.

Because if you solved the divergence problems (infinite bare charge and mass, ...) under the condition of Lorentz invariance, your paper would immediately be accepted by a top journal such as "Nature" or "Science".
So?

Not immediately. They all require the complete relativistic calculations, not only a formulation.

The relativistic particle is a point particle.

As I showed in "Atom...", the point-like particle corresponds to the inclusive rather than the "elastic" picture of scattering in QM.

If you use a "(natural) cut-off", part of the integral becomes discontinuous and upper and lower limits of momentum appear. So this state does not keep Lorentz invariance.

Yes, look at the atomic form factors: they contain characteristic "cloud" sizes a_0 or (m_e/M_A)a_0, for example. There is nothing wrong with it. On the contrary, it is natural, unlike the artificial cut-offs in the standard QFT.
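
As a standard textbook illustration (not a formula from my papers): for elastic scattering off a hydrogen-like 1s charge cloud the form factor is

F(q) = \int d^3r |\psi_{1s}(r)|^2 e^{iq\cdot r} = 1/(1 + q^2 a_0^2/4)^2,

so the size a_0 enters only through the combination q a_0 and suppresses large momentum transfers, without any cut-off inserted by hand.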

I don't believe in point particles, so I don't believe in QFT (or QM).

And I am trying to build a trustworthy and working theory.

Bob_for_short.
 
  • #56
meopemuk said:
Not yet. I see that our disagreement about QFT is even deeper than I thought. I always thought that quantum fields present in Weinberg's L_0 + L_1 + L_2 are *free* quantum fields. So, L_1 has only 1st order contributions. Apparently, you disagree with that.
No, not really, and to be honest we don't really have deep disagreements; I've just been vague at times for brevity. Let me try to explain in full. Some of the issues come from me talking in general rather than sticking to Weinberg. I wasn't clear about the non-Fock nature of the problem.
If you'll allow me, I will take the case of \phi^{4} in three dimensions since it is somewhat easier to deal with.

The first thing I should say is that the cancellations that I'm talking about probably can't be understood best as perturbative cancellations, but rather as direct operator cancellations or operator identities.

Anyway, let's take the Hamiltonian of \phi^{4}_{3}. The interacting part is \int{\lambda\phi^{4}}; this is the analogue of L_{1} in Weinberg. Immediately you can prove this isn't a well-defined operator on Fock space. In my previous post I tried to demonstrate how badly behaved the operator is by showing that it can't act twice on a vector. Let me state the real problem: it's not self-adjoint. This is true even for the L_{1} term in QED. An even bigger problem is that when added to the free part of the Hamiltonian it causes the total Hamiltonian to be unbounded below, meaning there is no positivity of energy.
Now I know it seems strange, but it is a proven fact that these L_{1} terms are just as divergent or badly behaved as the counterterms. Even if you can't "see it" and I accept that it may be difficult, it is a fact that they are highly divergent.

In \int{\lambda\phi^{4}} physicists usually get around this with mass renormalization. We add a term to the Hamiltonian, \delta m^{2}\int{\phi^{2}}. Now \delta m^{2} contains terms up to order \lambda^{2}. I'm claiming that this results in a well-defined Hamiltonian. However, you rightly ask: how can this be possible if \int{\lambda\phi^{4}} is only first order in \lambda and \delta m^{2} goes up to second order?
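
Schematically (a sketch only, with Wick ordering, the vacuum-energy counterterm and spatial cutoffs suppressed), the claim is about

H = H_{0} + \int{\lambda\phi^{4}} + \delta m^{2}(\lambda)\int{\phi^{2}}, with \delta m^{2}(\lambda) = c_{1}\lambda + c_{2}\lambda^{2},

where the coefficients c_{1}, c_{2} are cutoff-divergent; the assertion is that the sum defines a self-adjoint, semibounded operator even though the individual pieces do not.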

The truth is, it can't in Fock Space, which is the crux of Haag's theorem. If you move to the correct representation of the canonical commutation relations, or in physicist's speak "the interacting Hilbert space", then the cancellations are possible. See the paper by Glimm for details.

So yes, since Weinberg remains in Fock space, these cancellations cannot occur. However, we know that interacting theories can't live in Fock space.

If one wants to stick to Fock space then you'll be presented with an odd situation: you'll have order-by-order cancellations for the S-matrix, but a poorly defined Hamiltonian. Not just because of the counterterms, but also because of L_{1}.

I also just want to mention that renormalization basically turns out to be Wick ordering in a non-Fock space.

Is that better?
 
  • #57
Bob_for_short said:
Yes, look at the atomic form factors: they contain characteristic "cloud" sizes a_0 or (m_e/M_A)a_0, for example. There is nothing wrong with it. On the contrary, it is natural, unlike the artificial cut-offs in the standard QFT.

I think you probably forgot that "the integral part" of the QED Hamiltonian should be Lorentz invariant. This part must be continuous and have no upper or lower limit of momentum.
In your paper, this is not commented on anywhere. It is strange, I think.

If only part of the Hamiltonian is Lorentz invariant, that is insufficient.

And I think that neither a natural nor an artificial cut-off keeps Lorentz invariance.
 
  • #58
ytuab said:
I think you probably forgot that "the integral part" of the QED Hamiltonian should be Lorentz invariant. This part must be continuous and have no upper or lower limit of momentum. In your paper, this is not commented on anywhere. It is strange, I think. If only part of the Hamiltonian is Lorentz invariant, that is insufficient.

There are no limits in the Fourier integral itself. It is the form factor that "cuts off" certain parts of the integration.

I forgot nothing. You just do not believe what I wrote. There are well-known things that go without saying. The proof that I wrote about is valid for all Hamiltonian terms, including the four-fermion Coulomb term.

And I think that neither a natural nor an artificial cut-off keeps Lorentz invariance.

As I showed in "Atom...", the elastic cross section can be measured. It contains the positive charge cloud size. Very roughly, the elastic cross section involves a dimensionless ratio of this size to the impact parameter.

The same is valid for inelastic cross sections.

But if you add up all cross sections, these dependencies smooth out and you obtain the Rutherford cross section, as if the target charge were point-like. That is what is observed in inclusive experiments. The inclusive picture is illusory, not real. That is why starting by assigning 1/r to a charge leads to bad mathematical expressions.
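
A minimal sketch of the completeness argument (my shorthand here, not the notation of the papers): for scattering off a single smeared charge with internal states |n>, the elastic cross section carries the factor |<0|e^{iq\cdot r}|0>|^2 = |F(q)|^2, while summing over all final internal states gives, by completeness,

\sum_n |<n|e^{iq\cdot r}|0>|^2 = <0|e^{-iq\cdot r} e^{iq\cdot r}|0> = 1,

so the inclusive cross section reduces to the point-like (Rutherford) one even though each individual channel knows about the cloud size.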

Bob.
 
  • #59
DarMM said:
the crux of Haag's theorem... However we know that interacting theories can't live in Fock space.

I think I know what Haag's theorem is, and in my (perhaps ill-informed) opinion this theorem does not present a significant obstacle for developing QFT in the Fock space. This theorem basically says that "interacting field" cannot have a manifestly covariant Lorentz transformation law. Some people say that this violates the relativistic invariance and, therefore, is unacceptable. However, I would like to disagree.

A quantum theory is relativistically invariant if its ten basic generators (total energy, total momentum, total angular momentum, and boost) satisfy the Poincare commutation relations. These commutation relations have been verified for interacting QFT. For example, in the case of QED the detailed proof is given in Appendix B of

S. Weinberg, "Photons and Gravitons in S-matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass", Phys. Rev. 135 (1964), B1049.
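
For completeness, the condition is that the ten generators (H, \vec{P}, \vec{J}, \vec{K}) close the Poincare algebra; in units \hbar = c = 1 (and up to sign conventions):

[J^a, J^b] = i\epsilon^{abc}J^c, [J^a, K^b] = i\epsilon^{abc}K^c, [K^a, K^b] = -i\epsilon^{abc}J^c,
[J^a, P^b] = i\epsilon^{abc}P^c, [P^a, P^b] = [P^a, H] = [J^a, H] = 0,
[K^a, H] = iP^a, [K^a, P^b] = i\delta^{ab}H.

The nontrivial point for an interacting theory in the instant form is that both H and \vec{K} contain interaction terms, and the last two commutators constrain those terms.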

So, relativistic non-invariance is out of the question. I think that the absence of a manifestly covariant transformation law for the "interacting field" is not a big problem. Actually, one can perform QFT calculations without even mentioning the "interacting field" at all. It is quite sufficient to have a Hamiltonian and obtain the S-operator from it by the usual rules of quantum mechanics.

My claim remains that the Hamiltonian L_0 + L_1 is well-defined. However S-matrix divergences appear when products like L_1 * L_1 are calculated. In the renormalization theory these divergences get canceled by the addition of (divergent) counterterms L_2 in the Hamiltonian. So, the full Hamiltonian

H = L_0 + L_1 + L_2

is cutoff-dependent and divergent in the limit of removed cutoff. This divergence is not a big deal in regular QFT, where we are interested only in the S-matrix. However, if one day we decide to study the time evolution of states and observables in QFT, we may hit a difficult problem due to the absence of a well-defined Hamiltonian. Fortunately, this day seems to be quite far away, because experimental information about the time evolution of colliding particles is virtually non-existent.

I think that our disagreement reflects two different philosophies about dealing with interacting QFT. In your approach (which is widely accepted), you seek solution by leaving the Fock space. In my approach (which is less known) I stay in the Fock space and try to change the original Hamiltonian by "dressing". It may well happen that both philosophies are correct (or that both are wrong).
 
  • #60
meopemuk said:
I think I know what Haag's theorem is, and in my ... opinion this theorem does not present a significant obstacle for developing QFT in the Fock space.

I agree with you here. In fact, there may be different QFTs with different interaction Hamiltonians. In my Novel QED I stay within Fock spaces without problem.

I think that our disagreement reflects two different philosophies about dealing with interacting QFT. In your approach (which is widely accepted), you seek solution by leaving the Fock space. In my approach (which is less known) I stay in the Fock space and try to change the original Hamiltonian by "dressing". It may well happen that both philosophies are correct (or that both are wrong).

Both of you perform perturbative renormalizations of the standard QED (i.e., with self-action), whatever it is called. Eugene's approach keeps the fundamental constants intact and discards the perturbative corrections to them. This is a typical renormalization prescription. Of course, it is also a perturbative dressing. What I propose is a non-perturbative dressing and a physical interaction without the wrong self-action and without the wrong renormalizations.

Bob_for_short.
 
  • #61
meopemuk said:
I think I know what Haag's theorem is, and in my (perhaps ill-informed) opinion this theorem does not present a significant obstacle for developing QFT in the Fock space. This theorem basically says that "interacting field" cannot have a manifestly covariant Lorentz transformation law. Some people say that this violates the relativistic invariance and, therefore, is unacceptable. However, I would like to disagree.
The theorem says that any translationally invariant theory, even a non-manifestly Lorentz-invariant one, which satisfies the Wightman axioms cannot live in Fock space. So you can avoid it only if you drop one of the Wightman axioms, because dropping translation invariance would be a bit much. Maybe your dressing approach drops one of the Wightman axioms?

So, relativistic non-invariance is out of the question. I think that the absence of a manifestly covariant transformation law for the "interacting field" is not a big problem. Actually, one can perform QFT calculations without even mentioning the "interacting field" at all. It is quite sufficient to have a Hamiltonian and obtain the S-operator from it by the usual rules of quantum mechanics.
However, it's been proven that the QED S-operator is not Hilbert-Schmidt on Fock space and hence is not well defined nonperturbatively. This may not be a problem, though, if you only want things to work perturbatively. Maybe you disagree that there should be a nonperturbative QED; it's not necessarily a bad position.

My claim remains that the Hamiltonian L_0 + L_1 is well-defined.
It's not, though; I mean it has been proven that it's not well-defined as an operator on Fock space, even in two dimensions. This is really the only thing about your position that I don't understand. It has been proven not to be self-adjoint or semibounded. How can you claim it is well-defined if there are proofs that it is not? This is a genuine question: maybe you mean something specific by "well-defined" which doesn't require the Hamiltonian to be self-adjoint or semibounded, or are you contesting the proofs?
I think that our disagreement reflects two different philosophies about dealing with interacting QFT. In your approach (which is widely accepted), you seek solution by leaving the Fock space. In my approach (which is less known) I stay in the Fock space and try to change the original Hamiltonian by "dressing". It may well happen that both philosophies are correct (or that both are wrong).
Maybe this is what you mean, that after this "dressing" L_{0} + L_{1} is well-defined as an operator on Fock space. All I'm saying is that L_{0} + L_{1} as it is defined in Weinberg is not well-defined, which is a fact with a rigorous mathematical proof behind it.
 
  • #62
DarMM said:
The theorem says that any translationally invariant theory even non-manifestly Lorentz invariant ones, which satisfy the Wightman axioms cannot live in Fock space. So you can avoid it only if you drop one of the Wightman axioms, because dropping translation invariance would be a bit much. Maybe your dressing approach drops one of the Wightman axioms?

There are many different formulations of Haag's theorem. I suspect that we have different things in mind. There is a nice paper, which discusses exactly the relationship between Haag's theorem and dressing (I hope our moderators won't be mad at me for mentioning this reprint)

M.I. Shirokov, "Dressing" and Haag's theorem, http://www.arxiv.org/abs/math-ph/0703021

DarMM said:
All I'm saying is that L_{0} + L_{1} as it is defined in Weinberg is not well-defined, which is a fact with a rigorous mathematical proof behind it.

Could you give me exact reference where this has been proved? I would like to take a look.
 
  • #63
ytuab said:
I think you misunderstand what "relativistic and Lorentz invariant" means for the divergence problems.

It is not caused only by 1/r, as you say.

I do not like the idea of "bare mass" or "bare charge".
I think the idea of QFT has reached its limit.

I am coming from a different direction but am interested in the same thing - the cut-off.

I would like to say (for a paper I am writing) that below a certain cut-off distance
the universe has no answer because it runs out of 'precision',
because it requires too much data to exactly define such a fine-grained system.
That's why (another reason) we must 'cut off' (I want to write this in the paper if possible)
and also why it is legitimate to do so.

So if this length is about a Planck length, then there would be no interaction differences
found between, let's say, .003456 and .003457 Planck lengths, because it is below the cut-off;
such a small difference will not be definable in terms of interactions -
it is 'not recognised' or able to trigger an event - it's below a detectable precision limit.

Why? In this view data converts algorithmically to length and cannot be infinitely precise;
it would require too many bits, i.e., the universe does not have infinite data and hence
infinite precision at its disposal.

I am very interested in collaborating with anyone who can help me toward a more formal
exposition.
 
  • #64
p764rds said:
I am coming from a different direction but am interested in the same thing - the cut-off.

I would like to say (for a paper I am writing) that below a certain cut-off distance
the universe has no answer because it runs out of 'precision',
because it requires too much data to exactly define such a fine-grained system.
That's why (another reason) we must 'cut off' (I want to write this in the paper if possible)
and also why it is legitimate to do so.

So if this length is about a Planck length, then there would be no interaction differences
found between, let's say, .003456 and .003457 Planck lengths, because it is below the cut-off;
such a small difference will not be definable in terms of interactions -
it is 'not recognised' or able to trigger an event - it's below a detectable precision limit.

Why? In this view data converts algorithmically to length and cannot be infinitely precise;
it would require too many bits, i.e., the universe does not have infinite data and hence
infinite precision at its disposal.

I am very interested in collaborating with anyone who can help me toward a more formal
exposition.

Your idea is in fact the very popular idea of coarse graining, and it is very well developed.
W. Heisenberg advanced the idea of a fundamental length many years ago precisely to have a fundamental cut-off. In the statistical physics of phase transitions a similar idea was employed by Kenneth Wilson in his renormalization-group approach. It was then borrowed by QFT physicists to say that QFT, and QED in particular, are perhaps so-called effective field theories.

Unfortunately this is not the case: the standard QED results, after renormalizations, are finite and do not contain any fundamental length or cut-off at all. That means there should be a short-cut to obtain the same finite results directly, without infinite bare parameters and infinite counterterms to subtract them. And, of course, without appealing to a fundamental-length idea.

I promote such a short-cut. It encounters huge resistance because people just do not believe in its existence. In fact, though, nobody has found a single mathematical or physical error in my articles. It is a problem of prejudice, which is the most difficult one at the moment. The conceptual and mathematical difficulties have already been resolved.

Bob.
 
  • #65
p764rds said:
I am coming from a different direction but am interested in the same thing - the cut-off.

I would like to say (for a paper I am writing) that below a certain cut-off distance
the universe has no answer because it runs out of 'precision',
because it requires too much data to exactly define such a fine-grained system.
That's why (another reason) we must 'cut off' (I want to write this in the paper if possible)
and also why it is legitimate to do so.

So if this length is about a Planck length, then there would be no interaction differences
found between, let's say, .003456 and .003457 Planck lengths, because it is below the cut-off;
such a small difference will not be definable in terms of interactions -
it is 'not recognised' or able to trigger an event - it's below a detectable precision limit.

Why? In this view data converts algorithmically to length and cannot be infinitely precise;
it would require too many bits, i.e., the universe does not have infinite data and hence
infinite precision at its disposal.

I am very interested in collaborating with anyone who can help me toward a more formal
exposition.

It would be a great thing to solve the divergence problems while keeping Lorentz invariance.
Infinite bare mass, infinite bare charge and the divergence problems are inevitable in QFT.

I think the idea of QFT needs to be changed fundamentally.

Bob_for_short said:
There are no limits in the Fourier integral itself. It is the form factor that "cuts off" certain parts of the integration.

I forgot nothing. You just do not believe what I wrote. There are well-known things that go without saying. The proof that I wrote about is valid for all Hamiltonian terms, including the four-fermion Coulomb term.
Bob.

In your paper (page 15), Equation (23) is not Lorentz invariant. Do you notice that?
You say Eq. (23) is a "relativistic Hamiltonian".

In the second term of Eq. (23), the integral over d^3R must be over d^4R (an integral over space and time).
And R_1 and R_2 must not have upper and lower limits in space and time.
The first term also must be an integral over d^4P.
So nothing in your Eq. (23) is Lorentz invariant. Do you confirm that?

In your paper you say "the problem of IR and UV divergences is removed in QED".
But if Eq. (23) is not Lorentz invariant, this conclusion is not proper.
 
  • #66
ytuab said:
It would be a great thing to solve the divergence problems while keeping Lorentz invariance. Infinite bare mass, infinite bare charge and the divergence problems are inevitable in QFT. I think the idea of QFT needs to be changed fundamentally.

Before this text you quote the post of p764rds, not mine. The problems you mention are inevitable only in the QFTs with a self-action term.
In your paper (page 15), Equation (23) is not Lorentz invariant. Do you notice that?
You say Eq. (23) is a "relativistic Hamiltonian".

Have you ever seen a standard QED Hamiltonian in the Coulomb gauge? It is of the same structure but contains in addition a self-action term. My Hamiltonian does not contain it.
In the second term of Eq. (23), the integral over d^3R must be over d^4R (an integral over space and time). And R_1 and R_2 must not have upper and lower limits in space and time.
The first term also must be an integral over d^4P. So nothing in your Eq. (23) is Lorentz invariant. Do you confirm that?

You are simply unfamiliar with the Hamiltonians of QED in the Coulomb gauge. The integrals are correct: d^3R_1 d^3R_2. Read S. Weinberg or any other textbook on this particular subject to make sure I am right.
In your paper you say "the problem of IR and UV divergences is removed in QED".
But if Eq. (23) is not Lorentz invariant, this conclusion is not proper.

And if it is invariant, this conclusion is correct.

Read also "Reformulation instead of renormalizations" for another motivation to construct formula (60).

Bob.
 
  • #67
meopemuk said:
There are many different formulations of Haag's theorem. I suspect that we have different things in mind. There is a nice paper, which discusses exactly the relationship between Haag's theorem and dressing (I hope our moderators won't be mad at me for mentioning this reprint)

M.I. Shirokov, "Dressing" and Haag's theorem, http://www.arxiv.org/abs/math-ph/0703021
Actually we're talking about the same thing. If you look at the reference it states Haag's theorem requires only translational and rotational invariance, not Lorentz invariance, which is why it affects some Galilean/non-relativistic field theories. However the reference also explains how the dressed approach gets around this. As I suspected you drop one of the Wightman axioms, namely that the interacting field operators transform covariantly. This allows you to remain in Fock space. Thanks for the references.

Could you give me exact reference where this has been proved? I would like to take a look.
To get an idea of the issues involved in the d = 2 case, take a look at:
Carey, Hurst and O'Brien,
"Fermion currents in 1+1 dimensions",
J. Math. Phys. 24, p. 2212.


For general problems related to only integrating fields over space see:
A.S. Wightman and L. Gårding,
"Fields as operator-valued distributions in relativistic quantum field theory",
Ark. Fys. 28 (1965), p. 129.
 
  • #68
Bob_for_short said:
Have you ever seen a standard QED Hamiltonian in the Coulomb gauge? It is of the same structure but contains in addition a self-action term. My Hamiltonian does not contain it.

You are simply unfamiliar with the Hamiltonians of QED in the Coulomb gauge. The integrals are correct: d^3R_1 d^3R_2. Read S. Weinberg or any other textbook on this particular subject to make sure I am right.

And if it is invariant, this conclusion is correct.
Bob.

Do you mean a charge that is almost at rest (k^2 << m^2), or something like that?
I think what you describe is probably an approximation.

For example, in the calculation of the Lamb shift this approximation is used (d^3k d^3x integrals instead of d^4k d^4x integrals).

But due to this approximation, Lorentz invariance is not kept.

And the Coulomb gauge does not keep Lorentz invariance (the Lorentz gauge does).
And the Coulomb gauge violates causality.

see http://en.wikipedia.org/wiki/Gauge_fixing
 
  • #69
DarMM said:
As I suspected you drop one of the Wightman axioms, namely that the interacting field operators transform covariantly. This allows you to remain in Fock space.

That's exactly right. I mentioned the non-covariance in an earlier post. I don't see a good reason for the "interacting field" to be covariant. It might sound counter-intuitive, but the full interacting theory is still relativistically invariant (in the sense described in Weinberg's vol. 1).

Thank you for the references.
 
  • #70
Re Haag's thm, the unitary "dressing" approach, etc...

(I know should probably stay quiet, but I'll offer my
$0.02 worth. BTW, some related stuff was discussed a
while back in this thread:
https://www.physicsforums.com/showthread.php?t=177865
which also explained some of the differences between
orthodox QFT and Meopemuk's approach.)

Anyway...

The widely-known formulations of Haag's thm tend to be based
on having an irreducible set of operators parameterized by
Minkowski spacetime coordinates. Covariance under a Lorentz
boost is then formulated with reference to these spacetime
coords.

The point of Shirokov's paper:

M.I. Shirokov, "Dressing" and Haag's theorem,
Available as: http://www.arxiv.org/abs/math-ph/0703021

is that such a view of "spacetime covariance" under Lorentz
boosts is untenable in an interacting QFT. (But the
incompatibilities between relativistic interactions and
naive Lorentz transformation of spacetime trajectories have
already been known for a long time in other guises.)

Another perspective on Haag's thm was given in Barton's
little book:

G. Barton, "Introduction to Advanced Field Theory",
Interscience 1963,

(It might be possible to access a copy via
http://depositfiles.com/en/files/4816818 , or at
http://www.ebookee.com.cn/Introduction-to-advanced-field-theory_166416.html
but I haven't actually tried these out.)

Barton explains and emphasizes the role of unitarily
inequivalent representations of the CCRs, (which Weinberg
doesn't even mention), and concludes his analysis of Haag's
thm by saying (p157) "...the correspondence between
vector space in which the auxiliary (in) and (out) fields
are defined, and that in which the [interacting field(s)
are] defined, is necessarily mediated by an improper
[unitary] transformation.
" Here, "improper" means a
transformation between inequivalent representations, i.e.,
between disjoint Fock spaces.

(For any readers unfamiliar with unitarily inequivalent
representations, the Bogoliubov transformations of condensed
matter theory are a simple example.)

So, previously in this thread where "the Fock space" has
been mentioned, one must understand that there is not one
Fock space mathematically, but rather an uncountably
infinite number of disjoint Fock-like spaces. The unitary
dressing transformations form part of a technique to find
which one is physically correct.

A related approach of Shebeko+Shirokov, complementary to
Meopemuk's, can be found in

Shebeko, Shirokov,
"Unitary Transformations in QFT and Bound States"
Available as: nucl-th/0102037

My take on both approaches is this:

Starting from a Fock space corresponding to the free theory,
and an initial assumption about the form of the interaction,
one investigates the Hamiltonian and S-matrix, finds they're
ill-behaved in terms of high energy and infinite numbers of
particles, then performs an (improper) unitary
transformation at a particular order of perturbation, then
performs something similar to the usual mass and charge
renormalization (since even improper unitary transformations
alone seem unable to cure this kind of divergence), then
(at the next perturbation order) performs another improper
unitary transformation, and so on. All of this is aimed at
finding an S-matrix, a Hamiltonian, and a space in which
both are physically sensible (stable vacuum and 1-particle
states, finite operators, etc, etc).

HTH.
 
  • #71
strangerep said:
So, previously in this thread where "the Fock space" has
been mentioned, one must understand that there is not one
Fock space mathematically, but rather an uncountably
infinite number of disjoint Fock-like spaces. The unitary
dressing transformations form part of a technique to find
which one is physically correct.
I should also add that unlike general reps of the CCR, these Fock spaces have been completely categorised. That is, there has been shown to be a certain number of basic families and all spaces within these families can be indexed by one continuous parameter. All other Fock spaces are then direct sums or tensor products of these basic Fock spaces.
For instance "scalar" Fock spaces are one family and they're indexed by mass.

So I should tidy up my language a bit. When I say a non-Fock space I mean a representation of the canonical commutation relations (CCR) which is not any of these uncountably infinite Fock spaces.
For instance \phi^{4}_{3} in orthodox QFT lives in a non-Fock space. There is no number operator defined over all of this space, and there is no state on which all the annihilation operators give zero. Hence the vacuum has no particular relation to the creation and annihilation operators, and there are states which cannot be understood as being composed of particles.
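
For concreteness (standard definitions, nothing specific to \phi^{4}_{3}): in a Fock representation there is a total number operator N = \int d^{d}k\, a^{\dagger}(k)a(k) (d space dimensions) and a distinguished vector \Omega with a(k)\Omega = 0 for all k, and every state is a limit of finite-particle vectors built by applying creation operators to \Omega. In what I am calling a non-Fock representation, no such pair (N, \Omega) exists.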
 
  • #72
ytuab said:
The Coulomb gauge does not keep Lorentz invariance (the Lorentz gauge does).
And the Coulomb gauge violates causality.

Apparently you understand Lorentz invariance as an explicit (manifest) one. But the same theory can be transformed, by a change of variables, into an implicitly Lorentz-invariant form. That is what happens while fixing the gauge if the gauge is not the Lorentz one. The results of the theory remain relativistic whatever gauge is used.

Explicit Lorentz invariance was extremely useful for correctly discarding the perturbative corrections to the masses and charges in the renormalization prescription.

I do not obtain such corrections to the fundamental constants, so I can work with an implicitly Lorentz-invariant theory.

The Coulomb gauge "violates" causality to the same extent as the other gauges. Read about the Feynman propagator: it is different from zero in the space-like region.

In fact, this "violation" comes from too narrow and shallow (essentially classical) an understanding of causality. In QM there are no point-like particles but waves existing in the whole volume. The QM interaction term takes into account their mutual influence. You can understand it as a wave interaction due to the nonlinearity of the wave equation.

Bob_for_short.
 
  • #73
Bob_for_short said:
I do not obtain such corrections to the fundamental constants, so I can work with an implicitly Lorentz-invariant theory.

The Coulomb gauge "violates" causality to the same extent as the other gauges. Read about the Feynman propagator: it is different from zero in the space-like region.

In fact, this "violation" comes from too narrow and shallow (essentially classical) an understanding of causality. In QM there are no point-like particles but waves existing in the whole volume. The QM interaction term takes into account their mutual influence. You can understand it as a wave interaction due to the nonlinearity of the wave equation.
Bob_for_short.

This "violation" of Coulomb gauge is due to narrow and shallow ?
(Do you know the meaning of causality ?)

The causality violation of the Coulomb gauge is due to the independent scalar poteintial.
When the charge changes, scalar potential in all space will change at the same time.
Please read the upper site as I showed .
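
To spell out what I mean (standard Maxwell theory, Heaviside-Lorentz units): in the Coulomb gauge \nabla\cdot\vec{A} = 0 the scalar potential obeys the Poisson equation \nabla^2 A^0 = -\rho, so

A^0(x,t) = \int d^3x' \rho(x',t) / (4\pi|x-x'|),

i.e. it responds to the charge density everywhere at the same instant t.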

"The Coulomb gauge "violates" the causality to the same extent as the other gauges" is not correct.
Please read the part of "Coulomb gauge , Lorentz gauge" in your textbook.

In your paper, you use the distance r1-r2 (ex Eq(23)).
The absolute value r1-r2 or r will change, when the direction and velocity of Lorentz boosts changes. So this is not Lorentz invariant.
(Do you know the meaning of Lorentz boosts?)

If you have QFT textbook by Peskin, please read page 253.
"In the nonrelativistic limit it make sense to compute the potential V(r) ( q^2 << m^2)"
You use the potential energy ( ex E= q/r). This form is not basically Lorentz invariant.

(If you do not have Peskin textbook, please check your textbook .)
 
  • #74
ytuab said:
The causality violation of the gauge is due to the independent scalar potential.

The Coulomb gauge is not my invention. It is widely used in relativistic QFT formulations. Apart from boosts, one has to perform a specific gauge transformation to return to the Coulomb gauge in a new reference frame, so this formulation is Lorentz and gauge invariant. Read Schwinger's and Dirac's papers on this subject, for example. I wish you good luck in fighting against this gauge.

Bob.
 
  • #75
Bob_for_short said:
The Coulomb gauge is not my invention. It is widely used in relativistic QFT formulations. Apart from boosts, one has to perform a specific gauge transformation to return to the Coulomb gauge in a new reference frame, so this formulation is Lorentz and gauge invariant. Read Schwinger's and Dirac's papers on this subject, for example. I wish you good luck in fighting against this gauge.
Bob.

The Coulomb gauge is NOT widely used in RELATIVISTIC QFT formulations.
The Lorentz gauge is used.

Please read the sections on the "Coulomb gauge" and the "Lorentz gauge" in the textbook.

"Apart from boosts"? What do you mean?
Please read the section on Lorentz boosts in your textbook.
 
  • #76
ytuab said:
The Coulomb gauge is NOT widely used in RELATIVISTIC QFT formulations. The Lorentz gauge is used.

On the gauges used in practice, see S. Weinberg's textbook, Volume 1.

ytuab said:
"Apart from boosts "? what do you mean? Please read the part of "the Lorentz boosts" in your textbook.

If one makes only a boost, the Coulomb gauge Hamiltonian changes. If you make, in addition, a specific gauge transformation, the transformed Hamiltonian recovers the Coulomb gauge form in the new reference frame. I repeat this to explain how the Lorentz invariance of the Coulomb gauge Hamiltonian can be preserved.

The Coulomb gauge is used in the fundamental quantization, especially in Dirac's variables (the gauge-invariant formulation). It is as valid as the others. Your attacks on it are groundless.

The last thing: please do not tell me what I should read.

Bob.
 
  • #77
Bob_for_short said:
If one makes only a boost, the Coulomb gauge Hamiltonian changes. If you make, in addition, a specific gauge transformation, the transformed Hamiltonian recovers the Coulomb gauge form in the new reference frame. I repeat this to explain how the Lorentz invariance of the Coulomb gauge Hamiltonian can be preserved.

The Coulomb gauge is used in the fundamental quantization, especially in Dirac's variables (the gauge-invariant formulation). It is as valid as the others. Your attacks on it are groundless.

The last thing: please do not tell me what I should read.

Bob.

Do you understand what you are saying?

A relativistic Lagrangian (Hamiltonian) must NOT be changed by a boost. Do you know what this means?

What do you mean by "in addition, a specific gauge transformation"?
(Do you know the meaning of a gauge transformation?)
A Lagrangian (Hamiltonian) must NOT be changed by a gauge transformation.

The gauge transformation has NO relevance here (or in your paper).

Please do not say ridiculous things, Bob.
 
  • #78
ytuab said:
What do you mean by "in addition, a specific gauge transformation"?
(Do you know the meaning of a gauge transformation?)
A Lagrangian (Hamiltonian) must NOT be changed by a gauge transformation.
The gauge transformation has NO relevance here (or in your paper).
Please do not say ridiculous things, Bob.

You apparently take me for a novice. I will explain it to you for the third time: if you apply only a boost to the Coulomb gauge Hamiltonian, it changes - it is no longer in the Coulomb gauge form. New terms appear. One can restore the Coulomb gauge form in the new reference frame by applying an additional gauge transformation. Is that clear? (See K. Johnson, Ann. Phys. 10 (1960), p. 536.)
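
Schematically (a sketch of the standard argument, conventions and factors aside): under a boost the potential goes to A'_{\mu}(x) = \Lambda_{\mu}{}^{\nu} A_{\nu}(\Lambda^{-1}x), which in general no longer satisfies \nabla\cdot\vec{A}' = 0; one then adds a gauge transformation A''_{\mu} = A'_{\mu} + \partial_{\mu}\chi with \chi chosen so that \nabla^{2}\chi = -\nabla\cdot\vec{A}', which restores the Coulomb condition \nabla\cdot\vec{A}'' = 0 in the new frame.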

I suggest that you no longer comment on my works, nor on the Coulomb gauge and Lorentz invariance. You yourself look ridiculous.

Bob.
 
  • #79
Bob_for_short said:
You apparently take me for a novice. I will explain it to you for the third time: if you apply only a boost to the Coulomb gauge Hamiltonian, it changes - it is no longer in the Coulomb gauge form. New terms appear. One can restore the Coulomb gauge form in the new reference frame by applying an additional gauge transformation. Is that clear? (See K. Johnson, Ann. Phys. 10 (1960), p. 536.)

I suggest that you no longer comment on my works, nor on the Coulomb gauge and Lorentz invariance. You yourself look ridiculous.

Bob.


What you say is not relevant here.

If the Lagrangian (Hamiltonian) is changed by a boost, you must apply such a "specific gauge transformation" with each boost.
Rather than doing such an almost impossible thing, we usually choose the "Lorentz gauge".
It is much easier.

And everyone except you knows that the (relativistic) QED Lagrangian (Hamiltonian) must NOT be
changed under boosts and gauge transformations.
 
  • #80
I ask you again not to comment on my works anymore, nor on what I know and what I do not know.
 
  • #81
I just want to chime in that in the real physics community, renormalization has not been at all confusing or problematic since the mid 1970s. Building on some physical insights of Leo Kadanoff in the 1960s, Kenneth Wilson introduced the modern framework for field theoretic renormalization in 1973. This is when renormalization ceased to be a poorly understood necessity, and instead became a hugely valuable tool for understanding field theories. Wilson received the Nobel prize for this work in 1982.

An important part of the modern understanding is that propagators and path integrals in a quantum field theory are mathematically identical to correlators and partition functions in a statistical field theory. In SFT it is common to define field theories on a discrete lattice, while QFT prefers continuum field theories. Wilson's framework explains the 'infinities that plague QFT' by examining the continuum limit in detail: well-behaved continuum quantum field theories exist only at the scale-invariant critical points of statistical field theories (scale invariance makes it possible to take the limit of zero lattice spacing).
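
As a minimal illustration (a generic lattice scalar theory, not a formula taken from any particular reference): on a hypercubic lattice with spacing a one writes the Euclidean action

S[\phi] = \sum_{x} a^{d} [ \frac{1}{2}\sum_{\mu} ( \frac{\phi(x+a\hat{\mu}) - \phi(x)}{a} )^{2} + \frac{m_{0}^{2}}{2}\phi(x)^{2} + \frac{\lambda_{0}}{4!}\phi(x)^{4} ],

and a continuum quantum field theory is obtained only by tuning the bare couplings (m_{0}^{2}, \lambda_{0}) toward a critical point as a \to 0, so that the correlation length in lattice units diverges. The 'infinities' of the continuum formulation are then just the a-dependence of the bare couplings along this trajectory.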

Of course, there are lots of textbooks out there that do not discuss lattice field theories, and most laymen who study QFT don't care about condensed matter theory and statistical mechanics because these aren't as apparently 'sexy' as particle or string physics. The problem is that studying continuum field theory without knowing about lattice field theory is like doing calculus without a proper definition of limits: you are going to run into infinities that don't make sense.
 
  • #82
Thank you, buddy, for your popular explanation, but I know that.

Let me also make a popular explanation of my view point.

An old married European couple takes a car trip across the United States. They visit different sites in the country and enter a big city. Soon they get lost. They stop their car and ask a pedestrian: "Excuse us, Sir, we got lost here. Could you tell us where we are, please?" The pedestrian answers: "You are in a car".

Needless to say, such an answer is useless.

The same useless statements are:

The infinities (divergences) are due to the ill-defined product of distributions (x-space).
The infinities (divergences) are due to divergent integrals in momentum space (p-space).

These statements are correct but are, to a great extent, tautological.

The worst one is the following:

The infinities (divergences) are due to some unknown physics at short distances.

The latter is the most misleading.

My opinion, based on my solid experience, is the following:

Mathematically, the infinite corrections are due to too bad (too distant) an initial approximation (free and "point-like" particles).

Physically, it means that physicists did not guess (or pick) a correct physical picture for the initial states.

I showed that the corrections to the fundamental constants appear due to the wrong self-action ansatz. It leads to kinetic perturbative terms that add some kinetic constants to the initial ones. I underline that the self-action ansatz was introduced in order to preserve the energy-momentum conservation law. It has never worked properly, only with difficulties. In fact, it always had to be undone with the help of exact (in CED) or perturbative (in QED) renormalizations.

The energy-momentum conservation law can be preserved in another way: by considering a compound system with a center of inertia and relative degrees of freedom. This approach is based on potential rather than kinetic "perturbative" (or better, interaction) terms, and it does not lead to mathematical and conceptual difficulties. At the same time it naturally describes all the physical phenomena. Now, knowing all that, why should we neglect this physically and mathematically correct approach, blame the masses and charges for our poor understanding of nature, and whisper about probable unknown phenomena at short distances?

The quantum mechanical charge smearing, which always exists in nature, should be taken into account exactly rather than perturbatively. That is the right solution of these problems. Then no divergences appear, no corrections to the fundamental constants arise, and no renormalization is necessary. It is a short-cut, if you like, to the final finite results.

I underline: the charge smearing size is always much larger than any possible lattice or Planck distances, or other artificial "space-time grains". The charge form factors serve as natural regulators (cut-off mechanisms). There is no problem at short distances at all! What can be simpler?

Study my works carefully. I simplified everything to reveal as explicitly as possible the point where the mistake with the energy-momentum conservation law in particle-field interaction is made.

Bob.
 
  • #83
ExactlySolved said:
Building on some physical insights of Leo Kadanoff in the 1960s, Kenneth Wilson introduced the modern framework for field theoretic renormalization in 1973.

Hi ExactlySolved,

I admit that I don't know Wilson's framework well enough. I would appreciate it if you could clarify how this framework is used in the particular example of renormalized QED. In particular, I would like to know what it tells us about the cutoff dependence of the QED Hamiltonian. Can we obtain a finite Hamiltonian in the limit of infinite cutoff?

From what I've read, my understanding is that QED is considered to be an "effective field theory", which means that it makes sense only at limited momenta/energies (or large enough distances). For small distances or large momenta, QED must be replaced by some (yet unknown) theory, which takes into account "space-time granularity" or some other (yet unknown) small distance effects. Basically, this means that we are not allowed to take the infinite cutoff limit in QED. The Hamiltonian defined at the allowed finite cutoff remains finite, and this is the Hamiltonian, which should be used if one wants to study the time evolution of states and observables. Is this description correct?

Thanks.
Eugene.
 
  • #84
Well, what a surprise - I was the original guy who created this
thread - and am shocked by the huge reaction.

I have read the comments (some over my head) and will attempt a
'board room' overview (from my wisdom through great age perspective!):

1) There is a need for ultraviolet and infrared cut-offs in these theories.
- interestingly, many views support this for seemingly different reasons.

2) There is a view (Bob's) that a 'quantum smearing of charge' provides a cut-off naturally,
- but this is not widely accepted yet.

3) There are other views where a cut-off is imposed artificially, through
renormalization and other devices.

4) There are worries about Lorentz invariance, particles versus free fields,
perturbations and infinities that are preventing a fully self-consistent theory,
and these issues can be overly complex.

5) It appears that the QFT approach itself is posing questions. Is there another approach?

6) The 'correct' approach depends on the perspective we view things from (low or high r, high or low energy, etc.).
Experimental results confirm these approaches.

It appears to me at this stage that we need different approaches for different situations and that
there is no overall fit-all solution - at least not yet.


a) We are discussing the cut-off, but its ramifications are too large to keep the discussion bounded.
b) The discussions open up interesting avenues toward underlying truths.
c) Complexity is an issue preventing clear solutions (or is that just me?)
 