Interacting theory lives in a different Hilbert space

Summary
The discussion centers on the validity of the statement that interacting theories exist in a different Hilbert space than non-interacting theories. One participant argues that the same Hilbert space should apply to both types of systems since the logical propositions about their states remain unchanged. However, others highlight that the properties of operators, such as acceleration, differ significantly between interacting and non-interacting cases, suggesting that these theories cannot be unitarily equivalent. The conversation also touches on the implications of infinite degrees of freedom in quantum field theory (QFT), which complicate the mathematical structure compared to ordinary quantum mechanics. Ultimately, the consensus leans towards recognizing that the fundamental differences in degrees of freedom and operator behavior necessitate distinct Hilbert spaces for interacting and non-interacting theories.
  • #61
DrFaustus said:
...Please, show me how to do that. Maybe we could share a Nobel prize.
I am quite sure you can expand functions in a Taylor series. Take a calculus textbook, choose a function, and carry out such a calculation. You will then see that the QED precision is not the best one can obtain at fourth order.

About Scharf's book: does the author calculate something like the Compton cross section in the first non-vanishing order? Or a Rutherford-like (Mott or similar) cross section? I am speaking of QED cross sections in the first non-vanishing order, without divergent loops. I want to know whether he calculates any elastic cross sections at the beginning.
 
  • #62
DrFaustus said:
See the book by Scharf, "Finite QED". As the title suggests, there is not a single infinity.
I only have the 1st edition (1989), and it's been years since I read it, but I got the impression
that the Epstein-Glaser-Scharf approach works by choosing smoothing functions successively
at each order of perturbation. This seemed a bit ad hoc to me. Or am I missing something?

Bob_for_short said:
About Scharf's book: does the author calculate something like the Compton cross section in the first non-vanishing order? Or a Rutherford-like (Mott or similar) cross section? I am speaking of QED cross sections in the first non-vanishing order, without divergent loops. I want to know whether he calculates any elastic cross sections at the beginning.
The 1st edition treats Compton and Moeller scattering up to at least the first loop order,
where vacuum polarization, self-energy, etc., corrections arise.

Looking on Amazon, the 2nd edition is considerably expanded (to almost twice the size).
It has a modified title: "Finite Quantum Electrodynamics: The Causal Approach".

Product Description (from Amazon):
In this textbook for graduate students in physics the author carefully analyses the role of causality in Q.E.D. This new approach avoids ultraviolet divergences, so that the detailed calculations of scattering processes and proofs can be carried out in a mathematically rigorous manner. Significant themes such as renormalizability, gauge invariance, unitarity, renormalization group, interacting fields and axial anomalies are discussed. The extension of the methods to non-abelian gauge theories is briefly described. The book differs considerably from its first edition: Chap. 3 on Causal Perturbation Theory was completely rewritten and Chap. 4 on Properties of the S-Matrix and Chap. 5 on Other Electromagnetic Couplings are new.
 
  • #63
DrFaustus said:
And all I've been trying to say is that as long as QFT is formulated in terms of operator-valued distributions (which quantum fields are), some sort of renormalization is unavoidable.

I can agree with this statement. But I don't see a good reason why physical interactions should always be constructed as products of quantum fields. Perhaps we can remove this artificial restriction and obtain a good theory (we should not call it a *quantum field* theory then) in which there is no need for renormalization.

Examples of such a theory are not difficult to construct. See

O. W. Greenberg and S. S. Schweber, "Clothed particle operators in simple models of quantum field theory", Nuovo Cim., 8 (1958), 378.

For example, we can choose the interaction operator in the normally-ordered form

V = a^{\dag}a^{\dag}aa + a^{\dag}a^{\dag}aaa + a^{\dag}a^{\dag}a^{\dag}aa + \ldots \qquad (1)

The characteristic feature of this interaction is that the usual QFT terms aaa, a^{\dag}aa, a^{\dag}a^{\dag}a, and a^{\dag}a^{\dag}a^{\dag} are absent. These terms act non-trivially on the vacuum and 1-particle states; they are considered "bad" and forbidden. The "good" terms present in our interaction (1) have at least two annihilation operators and at least two creation operators.

One can easily see that there is no need for mass renormalization with interaction (1): the free vacuum and free 1-particle states are eigenstates of the full interacting Hamiltonian with unchanged (free) eigenvalues. One can show that charge renormalization can be avoided too, and one can also verify that all loop integrals are convergent.
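
This eigenstate property can be checked directly on a small truncated Fock space. Below is a minimal numerical sketch (my own, not from Greenberg-Schweber or from this thread; the single-mode setting, the truncation, and the couplings 0.5 and 0.1 are arbitrary choices made only for illustration). It verifies that a normally-ordered interaction whose every term contains at least two annihilation and at least two creation operators annihilates the vacuum and a 1-particle state, while a "bad" term such as a^{\dag}a^{\dag}a^{\dag} does not.

Code:
import numpy as np

N = 8                                        # Fock-space truncation: occupations 0..N-1
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator

# Two representative "good" terms of interaction (1); the couplings are arbitrary.
# Matrix products act right-to-left, so the annihilators act first (normal ordering).
V = 0.5 * (ad @ ad @ a @ a) + 0.1 * (ad @ ad @ ad @ a @ a)

vac = np.zeros(N); vac[0] = 1.0              # free vacuum |0>
one = np.zeros(N); one[1] = 1.0              # free 1-particle state |1>

print(np.allclose(V @ vac, 0))               # True: V|0> = 0
print(np.allclose(V @ one, 0))               # True: V|1> = 0

# A "bad" term (three creations, no annihilations) acts non-trivially on the vacuum:
bad = ad @ ad @ ad
print(np.allclose(bad @ vac, 0))             # False: |0> is mapped to a 3-particle state

So H = H_0 + V leaves the free vacuum and free 1-particle states as eigenstates with their free eigenvalues, which is exactly the statement that no mass renormalization is needed.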

The next question is how to make sure that the S-matrix computed with interaction (1) agrees with experiment (e.g., on scattering of charged particles and photons). There are basically two ways to do that:

(1) We can simply take scattering amplitudes from high-order QED calculations and/or experiment and fit coefficient functions in (1) to these data.

(2) We can apply the so-called "unitary dressing transformation" to the renormalized Hamiltonian of QED to bring it to the desired form (1).

Either way guarantees that the S-matrix calculated with (1) is exactly the same as the S-matrix computed in QED or measured in experiments. The benefits of using interaction (1) are: (i) there is no need for renormalization, (ii) both free and interacting theories live in the same Fock space.

Eugene.
 
  • #64
strangerep said:
I only have the 1st edition (1989), and it's been years since I read it, but I got the impression
that the Epstein-Glaser-Scharf approach works by choosing smoothing functions successively
at each order of perturbation. This seemed a bit ad hoc to me. Or am I missing something?
The basic point of the Epstein-Glaser approach is to take some expression from perturbation theory, which is not a well-defined distribution since it contains products of other distributions, and try to turn it into a well-defined one. Let me call this object from perturbation theory S.
What you do is restrict S to a smaller domain of test functions on which it is well defined, and then try to extend it to all test functions to produce a well-defined distribution. This requires making a small modification to S. There are many such possible modifications, but only one is consistent with causality and locality. You make this modification and you get S'. You now have a well-defined, local, causal object.
However, this is still ad hoc, since you had to actually change S by hand. You can then show, though, that these modifications need not be done by hand but can be implemented by the Feynman rules themselves, provided the coefficients of the Lagrangian contain extra distributional terms. These terms agree with the counterterms of standard field theory.

This demonstrates that renormalizing using causality, locality and distribution theory is the same as renormalizing by using the condition that a few physical numbers be finite.
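
To make the extension step concrete, here is a minimal one-dimensional analogue (my own illustration, not taken from the thread or from Scharf's book). The distribution \theta(x)/x is perfectly well defined on test functions vanishing at x = 0, but \int_0^\infty f(x)/x \, dx diverges for a generic test function f. A family of extensions, labelled by an arbitrary scale \mu > 0, is

\langle t_\mu, f \rangle = \int_0^\infty {f(x) - f(0)\,\theta(\mu^{-1} - x) \over x}\, dx ,

and any two such extensions differ only by a local term,

t_{\mu'} - t_\mu = \ln(\mu'/\mu)\,\delta(x) .

This delta-function ambiguity, which causality and locality constrain but do not remove, is the one-dimensional shadow of the extra distributional terms in the Lagrangian coefficients mentioned above.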
 
  • #65
I would like you, the participants, to express your views on the following:

In Scharf's "Finite QED" and in all other books there are calculations of elastic processes such as Compton scattering and Rutherford, Mott, or Moeller (i.e., charge-on-charge) scattering, and comparisons of these results with experimental data. I am speaking of the first non-vanishing order of perturbation theory, without loops. At first sight these results look good. But later, much later, we discover that the probability of any elastic process is identically equal to zero. It is the inclusive cross sections and probabilities that are different from zero, and it is the inclusive quantities that correspond to the experimental situations.

So, in the first non-vanishing order the standard QED predicts events that never happen, and it does not predict the phenomenon that always happens (soft radiation). Don't you consider this "feature" of the theory a complete failure of the physical description? Isn't that too bad a start for the perturbation theory?

Please answer these questions explicitly. I am waiting for your opinions.

For your information, in my pet theory the probabilities and cross sections of elastic processes, calculated in the first non-vanishing order, are exactly zero, as they should be. That is more correct, isn't it? And the inclusive cross sections agree with the Compton or Rutherford formulas to high accuracy, again as they should. That is also more correct, don't you think? I obtain the correct physics in the first non-vanishing order, not in higher orders at the price of forcibly discarding self-action contributions, painfully treating the infrared divergences, and inventing a renormalization ideology with its bare notions to cover up this practice. I mention this just to show you the difference in the quality of the physical description.
 
  • #66
meopemuk said:
(1) We can simply take scattering amplitudes from high-order QED calculations and/or experiment and fit coefficient functions in (1) to these data.

(2) We can apply the so-called "unitary dressing transformation" to the renormalized Hamiltonian of QED to bring it to the desired form (1).

If we rely on existing high-order QED calculations to construct a "new" theory,
then this new theory is not predictive in its own right. How could we perform the next higher
order calculations if standard QED hasn't already done it?

If we rely heavily on experimental data to construct a "new" theory, then this new theory
is more phenomenological than standard QED. I don't see how it can be predictive in its
own right. How could it predict what the (future) experimental data would be when higher
accuracy becomes technically possible in the apparatus?

Either way guarantees that the S-matrix calculated with (1) is exactly the same as the S-matrix computed in QED or measured in experiments. The benefits of using interaction (1) are: (i) there is no need for renormalization, (ii) both free and interacting theories live in the same Fock space.
(i) Since the new theory is based on standard QED calculations, it implicitly relies on
the renormalization performed therein.

(ii) Since the new theory is not a non-perturbative theory, we cannot say anything
mathematically rigorous about such limits.
 
  • #67
And what about my questions? They are nearly yes-or-no questions. Do you have your own opinion?
 
  • #68
Bob_for_short said:
And what about my questions? They are nearly yes-or-no questions. Do you have your own opinion?

Bob, your questions are not "nearly yes or no", but require closer study of your paper.
Part of the problem is one of communication: I know that English is not your first
language, but you must understand that this makes it difficult at times for me to
understand properly what you really mean in your papers.

Also, badgering someone who is clearly willing to make the effort to study
your papers is not the way to win friends.
 
  • #69
No, please answer the questions of post #65 concerning the standard QED without studying my papers. Forget about my papers for the moment.
You can answer privately, if you prefer.
 
  • #70
Hi strangerep,

Yes, I fully agree with your criticism. In its present form, the "dressed particle" approach is not developed to the point where it can derive the Hamiltonian from "first principles". The best we can do is to guess the Hamiltonian by relying on experiment or on high-order QED calculations. In this sense the theory is not predictive. However, it has the benefit of being consistent both physically and mathematically. On the other hand, renormalized QFT is highly predictive, while being inconsistent. I think that both approaches have the right to exist. We'll see which one reaches the goal (of being both consistent and predictive) first.

In spite of what I've said above, the "dressed particle" theory CAN make at least one important prediction. "Dressed" Hamiltonians have the characteristic property that interactions propagate instantaneously. In the last 16 years quite a few experiments have shown signs of superluminal behavior. I am most impressed by the work done by A. Ranfagni et al. in Florence. Most theorists tend to dismiss these observations as inconsequential curiosities, but I believe they will be forced to change their attitudes soon. This will be the best argument in favor of the "dressed particle" theory.

Note that despite textbook claims, the traditional QFT cannot say anything about the speed of propagation of interactions. The "commutators of fields outside the light cone" have nothing to do with the time interval between the cause and the effect. In order to evaluate the speed of interactions, one must have a theory possessing a well-defined Hamiltonian and unitary time evolution. As we discussed earlier, the traditional QFT does not have these pieces. It can only calculate the S-matrix and energies of bound states, which do not reveal any time-dependent information.

Eugene.
 
  • #71
meopemuk said:
On the other hand, the renormalized QFT is highly predictive, while being inconsistent.
How is it inconsistent?

meopemuk said:
Note that despite textbook claims, the traditional QFT cannot say anything about the speed of propagation of interactions. The "commutators of fields outside the light cone" have nothing to do with the time interval between the cause and the effect. In order to evaluate the speed of interactions, one must have a theory possessing a well-defined Hamiltonian and unitary time evolution. As we discussed earlier, the traditional QFT does not have these pieces. It can only calculate the S-matrix and energies of bound states, which do not reveal any time-dependent information.
The investigation of the propagation of effects in QFT was performed by Segal and Guenin in the 1960s for specific models. Also, Haag's algebraic approach allows it to be treated in general, where you can show that effects do propagate at the speed of light.
 
  • #72
DarMM said:
How is it inconsistent?

It is indisputable that renormalized QFT (such as QED) can calculate the S-matrix (i.e., the result of time evolution from the infinitely remote past to the infinitely remote future) very accurately. I think we also agreed that this theory does not have a well-defined finite Hamiltonian. Without a Hamiltonian it is impossible to calculate the finite-time evolution. That is the major inconsistency I am talking about.
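
As a reminder of the standard scattering-theory relation (textbook material quoted here up to sign and convention choices, not specific to QED), the S-operator is the double limit

S = \lim_{t_+ \to +\infty,\; t_- \to -\infty} e^{iH_0 t_+}\, e^{-iH(t_+ - t_-)}\, e^{-iH_0 t_-} ,

so an accurate S-matrix only probes this particular combination of limits, whereas the finite-time evolution e^{-iHt} itself requires a well-defined H.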



DarMM said:
The investigation of the propagation of effects in QFT was performed by Segal and Guenin in the 1960s for specific models. Also, Haag's algebraic approach allows it to be treated in general, where you can show that effects do propagate at the speed of light.

I would appreciate exact references, though I am rather sceptical, for the same reason: the lack of a well-defined finite Hamiltonian in renormalized QFT.

Perhaps you are talking about 2D models? I admit, I know almost nothing about them.
 
  • #73
Bob_for_short -> What do you mean by "later on we discover that the probability of an elastic process is zero"? What do you mean when you say that QED predicts an event that never happens?

No one ever claimed perturbation theory is perfect and the ultimate answer to all QFT problems. It has its limits, but renormalization is not one of them.

What has the Taylor power series expansion got to do with the perturbative expansion of, say, QED?

Also, it is very interesting that you ask people to answer your questions while you have not answered my question: what kind of mathematical objects are your quantum fields?

strangerep & DarMM -> As I understand it, the renormalization procedure à la Epstein & Glaser (and hence the "infinite subtraction" one) is no more ad hoc than, say, solving ax^2 + bx + c = 0. Let me elaborate on this a bit. I'm trying to obtain an answer from my (perturbation) theory, and to get that answer I must extend my product of distributions to coinciding points, where it is in general ill defined. Of course, I'll have to satisfy some conditions (causality and locality), but in principle this is "just" another mathematical problem one has to solve, much like trying to get an answer to some physical question that involves the solution of the above (simple) equation. So in this sense it does not appear more ad hoc than any other problem in physics. Comments?

meopemuk -> The vanishing of the field commutator outside the light cone is precisely a statement about the finite speed of propagation of signals. I don't know the details for interacting theories, but if you consider the simple free massless scalar quantum field you can compute the commutator, which is just the difference of the advanced and the retarded propagators, and study its support properties. What you find is that it is supported only ON the light cone, i.e. that "light travels at the speed of light". (I know a scalar field does not describe photons correctly, but the idea is the same: a massless particle travels at the speed of light, which is finite.) For the massive case, the commutator is not supported only on the light cone, but it has causal support, i.e. on and inside the light cone. Outside the light cone the commutator vanishes identically. And this is just a mathematical fact, a consequence of the hyperbolic character of the underlying field equations.
(You can find a discussion of causality for the free massive scalar field in Section 2.4 of Peskin & Schroeder. It's not really in the "rigorous QFT" spirit, but it is meaningful nevertheless.)
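
For concreteness (a standard free-field result, quoted here up to sign and normalization conventions, and of course only for the non-interacting theory): for the free massless scalar field in 3+1 dimensions the commutator is

[\phi(x), \phi(y)] = i\Delta(x-y), \qquad \Delta(x) = -{1\over 2\pi}\,\mathrm{sgn}(x^0)\,\delta(x^2),

which is supported exactly on the light cone (x-y)^2 = 0. In the massive case an extra Bessel-function piece appears inside the cone, but the commutator still vanishes identically at spacelike separation.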
 
  • #74
DrFaustus said:
Bob_for_short -> What do you mean by "later on we discover that the probability of an elastic process is zero"? What do you mean when you say that QED predicts an event that never happens?
I wrote clearly what I mean. First we calculate some elastic processes. On page 500 (I exaggerate, of course), when the IR divergence is treated, it is stated that the elastic cross section is identically equal to zero. So, instead of predicting zero for elastic processes, the standard QED predicts some finite value (in the first non-vanishing order). That is too bad a start. If the exact value were 0.5 and the initial approximation gave 0.45, it would be OK. But the exact value is 0 and the initial calculation gives 1 (for the probability of the elastic process). That is too far from reality. Take any Taylor expansion around x0 = 0 and evaluate it at a very large x, say x = 10. You will make the same blunder. Why not take a Taylor expansion around a closer point, say x0 = 9.9? Then f(9.9) ≈ f(10) and the remaining corrections will be small.
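Here is a toy numerical version of this expansion-point argument (my own sketch, not from the thread; the choice f(x) = exp(-x) and the fourth order are arbitrary):

Code:
import math

def taylor_exp_neg(x, x0, order):
    """Taylor polynomial of f(x) = exp(-x) of the given order, expanded about x0."""
    return sum(math.exp(-x0) * (-(x - x0)) ** k / math.factorial(k)
               for k in range(order + 1))

x = 10.0
print(math.exp(-x))                  # exact value, about 4.54e-05
print(taylor_exp_neg(x, 0.0, 4))     # 4th order about x0 = 0: about 291, absurdly wrong
print(taylor_exp_neg(x, 9.9, 4))     # 4th order about x0 = 9.9: about 4.54e-05, accurate

A fourth-order expansion about a point far from where the answer is needed can be arbitrarily bad, while the same order about a nearby point is already accurate; that is the analogy being drawn with choosing a better zeroth approximation for the perturbation series.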
DrFaustus said:
What has the Taylor power series expansion got to do with the perturbative expansion of, say, QED?
The magnetic moment is a series in powers of alpha/(2 pi). I am speaking of its numerical precision.
DrFaustus said:
...what kind of mathematical objects are your quantum fields?
My quantum fields? Oh, they are terrible distributions. What saves me is that they come with natural regularization factors (charge form factors).
 
  • #75
Well, I thought I would check in on this thread after not doing so for a while.

Bob_for_short said:
So, in the first non-vanishing order the standard QED predicts events that never happen, and it does not predict the phenomenon that always happens (soft radiation). Don't you consider this "feature" of the theory a complete failure of the physical description? Isn't that too bad a start for the perturbation theory?

Not at all, it's exactly what should happen. The cross section for processes that include up to an energy \Delta in observed soft photons takes the form

{d\sigma\over d\Omega}\biggr|_{\rm soft} = {d\sigma\over d\Omega}\biggr|_{\rm elastic}\left({c_1\Delta\over E}\right)^{c_2\alpha}

where E is the CM energy, \Delta is the maximum energy of the soft photons, \alpha is the fine-structure constant, and c_{1,2} are numerical constants. This does indeed go to zero as \Delta goes to zero. (d\sigma/d\Omega)_{\rm elastic} starts with the tree-level term, which is O(\alpha^2). So, expanding in powers of \alpha, one finds first the tree-level term.
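
The tension between the exact formula and its perturbative expansion can be made explicit (my own spelling-out of the formula above, with the same constants c_1, c_2):

\left({c_1\Delta\over E}\right)^{c_2\alpha} = e^{\,c_2\alpha \ln(c_1\Delta/E)} = 1 + c_2\alpha \ln{c_1\Delta\over E} + O(\alpha^2) ,

so order by order in \alpha the elastic (tree-level) contribution appears with unit coefficient, while the large negative logarithm that drives the exact cross section to zero as \Delta \to 0 only enters through the higher-order terms; this is why the vanishing of the strictly elastic cross section is invisible at tree level.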
 
  • #76
Avodyne said:
The cross section for processes that include up to an energy \Delta in observed soft photons takes the form

{d\sigma\over d\Omega}\biggr|_{\rm soft} = {d\sigma\over d\Omega}\biggr|_{\rm elastic}\left({c_1\Delta\over E}\right)^{c_2\alpha}

where E is the CM energy, \Delta is the maximum energy of the soft photons, \alpha is the fine-structure constant, and c_{1,2} are numerical constants. This does indeed go to zero as \Delta goes to zero. (d\sigma/d\Omega)_{\rm elastic} starts with the tree-level term, which is O(\alpha^2). So, expanding in powers of \alpha, one finds first the tree-level term.
What you wrote is called the inclusive cross section, and I am speaking of the elastic one. The elastic one (the exact, experimental one) equals zero rather than being > 0. The QED tree-level value is > 0, which is wrong. You have to work hard to cope with the IR divergence to finally obtain a physically reasonable result, instead of obtaining it automatically, as you would if your theory "caught" the physics right.

Let me put it another way: whatever momentum q is transferred to the electron, no soft radiation appears at tree level. Is that physical?
 
  • #77
DrFaustus said:
The vanishing of the field commutator outside the light cone is precisely a statement about the finite speed of propagation of signals.

I agree that free particles move at the speed of light or slower. But this fact has no bearing on the question of the speed of propagation of interactions or (what is basically the same) signals. In order to say something about the speed of interactions/signals you must have a well-defined interacting theory. In the simplest case you should consider two particles at a distance R and solve a time-dependent problem in which you perturb the position of particle 1 at time t = 0 and determine the time at which this perturbation reaches particle 2. I don't think this kind of problem can be analyzed within renormalized QFT, due to the lack of a well-defined finite Hamiltonian. And I don't think that the "vanishing of field commutators" is relevant to the solution.
 
  • #78
This thread has turned into a discussion of Bob For Short's theory. I remind everyone that personal theories can be discussed only in the IR section.

I'm locking this thread. If the OP feels that his question hasn't been adequately addressed before this thread was derailed, he should start another one.
 
