Haag's Theorem & Wightman Axioms: Solving Problems in QFT?

ftr
QFT seems to be a bit sick, with the cluster decomposition assumption etc. So here come Haag's theorem and the Wightman axioms to the rescue, or do they? What do these cures actually say that differs from generic QFT? Do they solve any practical problems? If not, why the fuss, millennium prize and all?
 
Only relativistic QFT in 4 space-time dimensions is sick because interacting fields are not well-defined in any logically coherent sense.

Theoretical physicists predicting particle properties work around this by using either finite lattice approximations, the quantum field analogue of solving partial differential equations with finite difference methods (which breaks all spacetime symmetries), or low-order formal renormalized perturbation theory, which stands to a logically precise specification roughly as Euler's treatment of functions through formal power series did a few hundred years ago. These methods seem to work and give sensible physical predictions, but they have no proper logical foundation.
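To make the finite-difference analogy concrete, here is a minimal sketch (my illustration, not from the thread) of a classical 1+1D Klein-Gordon field on a periodic spatial lattice, stepped with a leapfrog scheme; the chosen grid and time step single out a frame and thereby break the continuum spacetime symmetries:

```python
import numpy as np

# Minimal sketch: classical 1+1D Klein-Gordon field phi_tt = phi_xx - m^2 phi,
# discretized on a periodic spatial lattice and evolved by leapfrog steps.
# The lattice spacing and time step pick a preferred frame, breaking the
# Poincare symmetry of the continuum equation.

N, a, dt, m = 128, 0.1, 0.05, 1.0          # sites, spacing, time step, mass
x = a * np.arange(N)
phi = np.exp(-((x - 0.5 * N * a) ** 2))    # initial Gaussian bump
pi = np.zeros(N)                           # conjugate momentum (phi_dot)

def laplacian(f):
    # second-order central difference with periodic boundary conditions
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / a**2

for _ in range(1000):                      # leapfrog (kick-drift) updates
    pi += dt * (laplacian(phi) - m**2 * phi)
    phi += dt * pi

print(phi[:5])                             # a few field values after evolution
```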

The Wightman axioms spell out how the vacuum sector of such a theory should look according to our present best guess, but (unlike in lower dimensions, where there are many constructions) no interacting relativistic QFT in 4 space-time dimensions has been constructed in the more than 50 years of existence of the Wightman axioms. Neither are there nonexistence theorems. The simplest tractable case is widely held to be 4D Yang-Mills theory, which is why there is a millennium prize on solving it. See https://www.physicsoverflow.org/217...osons-infinite-and-discrete?show=21846#a21846 for more comments on the latter.

Cluster decomposition is an important principle without which physics would be impossible. It basically says that particles far away from all other particles behave independently of them. This is the basic reason why we can look at (properly prepared) small objects independently of other objects. For its physical basis see, on the informal level, Weinberg's QFT book, Vol. I, Chapter 5. On the rigorous level, it guarantees the uniqueness of the vacuum state; see any book on algebraic quantum field theory.
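In terms of vacuum expectation values, cluster decomposition is commonly stated (a standard formulation, paraphrased here) as factorization at large spacelike separations: for spacelike ##a##,
$$\langle\Omega|\phi(x_1)\cdots\phi(x_m)\,\phi(y_1+\lambda a)\cdots\phi(y_n+\lambda a)|\Omega\rangle\;\xrightarrow{\;\lambda\to\infty\;}\;\langle\Omega|\phi(x_1)\cdots\phi(x_m)|\Omega\rangle\,\langle\Omega|\phi(y_1)\cdots\phi(y_n)|\Omega\rangle.$$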

Haag's theorem is hardly related to that; it only says that the CCR representation of the free field cannot extend to interacting fields, so the interacting Hilbert space must look quite different from a Fock space. More precisely, while they have to be isomorphic as Hilbert spaces (all infinite-dimensional separable Hilbert spaces are isomorphic), the additional structure needed to set up quantum fields cannot be respected by such an isomorphism.
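For reference, the usual textbook form of the theorem (paraphrased from Streater-Wightman; the precise hypotheses matter) is: if ##\phi## is a Poincare-covariant local field that at one fixed time ##t_0## is unitarily equivalent to a free field of mass ##m##,
$$\phi(t_0,\mathbf{x}) = V\,\phi_{\text{free}}(t_0,\mathbf{x})\,V^{-1},\qquad \pi(t_0,\mathbf{x}) = V\,\pi_{\text{free}}(t_0,\mathbf{x})\,V^{-1},$$
with both theories possessing unique invariant vacua, then ##\phi## is itself a free field of mass ##m##. Hence no interacting field can satisfy all the hypotheses.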
 
Recently, after a semester-long discussion on the foundations of QFT, a colleague pointed me to this very nice review on Haag's theorem and related works:

https://link.springer.com/article/10.1007/s10670-005-5814-y
http://philsci-archive.pitt.edu/2673/1/earmanfraserfinalrevd.pdf

Don't worry! Although it says it's about philosophy, it's in fact physics.

Whenever a serious teacher of QFT comes to explain asymptotic states of interacting particles, the S-matrix, and the LSZ reduction formalism, s/he gets doubts and discusses them with colleagues, of course with not too much success; it is indeed an unsolved problem to find a mathematically rigorous formulation of 4D relativistic QFT with interacting particles.

FAPP QFT, as used by HEP phenomenologists, works with great success (even too much so, since the Standard Model stands up against all experimental efforts to disprove it; although recently LHCb came up with the next attack, but only at a bit over 2% confidence level yet ;-)). The way out is a practical approach: just put the system in a box with periodic boundary conditions, use perturbation theory, and renormalize. Then you get results with up to 12 significant digits of accuracy. As Arnold said, the other way out is the lattice approach and high computing power (at least in QCD, where you get, e.g., the hadron mass spectrum out pretty nicely too, and that's clearly beyond perturbation theory).

That's the approach Weinberg chooses in his book: simply ignore Haag et al. That is a pity, however, since I think one should be aware of it!
 
vanhees71 said:
put the system in a box with periodic boundary conditions
This works for kinetic or hydrodynamic questions. But the resulting compactification destroys scattering theory, since under these conditions the spectrum becomes wholly discrete.
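Concretely: in a box of side ##L## with periodic boundary conditions the momenta are quantized,
$$\mathbf{p}=\frac{2\pi}{L}\,\mathbf{n},\quad \mathbf{n}\in\mathbb{Z}^3,\qquad E_{\mathbf{n}}=\sqrt{m^2+\Big(\frac{2\pi}{L}\Big)^2\mathbf{n}^2},$$
so the one-particle energies form a discrete set, and the continuous spectrum on which time-dependent scattering theory rests is absent.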

vanhees71 said:
Then you get results with up to 12 significant digits of accuracy.
This ultra-high accuracy is not obtained in this way but through NRQED: one uses a nonrelativistic ##1/c^2## expansion to get rid of the problems with the relativistic theory and then works essentially with nonrelativistic quantum mechanics with relativistic corrections.
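A standard example of such an expansion (schematic, and not specific to the ##g-2## discussion below) is the Foldy-Wouthuysen reduction of the Dirac Hamiltonian for a charge ##e## in an external electromagnetic field,
$$H = mc^2 + \frac{(\mathbf{p}-e\mathbf{A}/c)^2}{2m} + e\Phi - \frac{e\hbar}{2mc}\,\boldsymbol{\sigma}\cdot\mathbf{B} - \frac{\mathbf{p}^4}{8m^3c^2} + \text{(spin-orbit and Darwin terms)} + \dots,$$
i.e., nonrelativistic quantum mechanics plus a systematic tower of ##1/c^2## corrections.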
 
Thanks both for your replies.

vanhees71 said:
mathematically rigorous formulation of 4D relativistic QFT with interacting particles

Since I am not a professional theorist, my opinion probably doesn't add too much. But I feel that "mathematically rigorous" should mean physically rigorous. I am well aware of the successes, but I think they are mainly due to the correctness of QM itself. The QFT formulation is heroic, to be sure, but I have read the story of its development: a very messy process of experimentation (with its own problems, bumps, etc.) and theoreticians trying to score.

But let me ask this: I think the main issue with QFT is removing infinities and the renormalization of the mass and charge. I think these are PHYSICAL problems and not "mathematical" ones; do you agree?
 
A. Neumaier said:
This ultra-high accuracy is not obtained in this way but through NRQED

ftr said:
but I think they are mainly due to the correctness of QM itself.

We wrote this within minutes. Although I am no expert, I have deduced that.
 
A. Neumaier said:
This ultra-high accuracy is not obtained in this way but through NRQED: one uses a nonrelativistic ##1/c^2## expansion to get rid of the problems with the relativistic theory and then works essentially with nonrelativistic quantum mechanics with relativistic corrections.
Where do you need the non-relativistic approximation to evaluate the anomalous magnetic moment of the electron?
 
ftr said:
But let me ask this: I think the main issue with QFT is removing infinities and the renormalization of the mass and charge. I think these are PHYSICAL problems and not "mathematical" ones; do you agree?
Renormalization has nothing to do with infinite radiative corrections per se. It comes to their rescue, but even if everything were perfectly finite, you'd need to renormalize to adjust the free parameters of the theory (wave-function normalization, masses, coupling constants) to observations.
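A minimal illustration of the point (schematic, not from the thread): even when the self-energy ##\Sigma## is finite, the bare mass ##m_0## is not observable; the physical mass is the pole of the propagator,
$$m_{\text{phys}}^2 = m_0^2 + \Sigma(m_{\text{phys}}^2;\,g_0),$$
and one must invert such relations to express predictions through the measured ##m_{\text{phys}}## and charge (and fix the field normalization ##Z##). That inversion is renormalization, with or without infinities.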
 
Practical QFT is, e.g., the Standard Model of particle physics. Most physicists nowadays understand it to be a low-energy theory, i.e., to make good predictions at the energies we use in experiments it is not necessary for the theory to exist at all energies. This is the Wilsonian viewpoint, and it is a great advance for understanding why renormalization is OK and has nothing to do with removing infinities etc. The main problem with this practical way of thinking is that it needs a non-perturbatively defined quantum theory, e.g., lattice gauge theory; but while lattice QED is thought to be OK, chiral fermions on the lattice are still problematic.

The Millennium Prize has to do with 4D relativistic QFTs that exist at all energies (and not just at low energies).
 
  • #10
ftr said:
they are mainly due to the correctness of QM itself.
This doesn't explain the high accuracy achieved! QM without QED does not have such high predictive accuracy.

vanhees71 said:
Where do you need the non-relativistic approximation to evaluate the anomalous magnetic moment of the electron?
To first loop order you don't need it, but then you don't get very high accuracy. For the highest accuracy you cannot work with the standard Feynman approach; it is too daunting a computational task.
atyy said:
4D relativistic QFTs that exist at all energies (and not just at low energies).
Because of Poincare invariance, any relativistic QFT (in flat spacetime) will exist at all energies (and not just at low energies).
 
  • #11
A. Neumaier said:
To first loop order you don't need it, but then you don't get very high accuracy. For the highest accuracy you cannot work with the standard Feynman approach; it is too daunting a computational task.
I thought Kinoshita has calculated (g-2)/2 to order ##\alpha_{\text{em}}^5##. Are there non-relativistic approximations involved? I don't see anything like this mentioned, e.g., here:

https://arxiv.org/abs/1205.5368

A. Neumaier said:
Because of Poincare invariance, any relativistic QFT (in flat spacetime) will exist at all energies (and not just at low energies).

But there is trouble with some QFTs (including QED) at high energies, at least perturbatively (the Landau pole). The modern consensus is that the Standard Model is a low-energy approximation of some other theory, of which we have no clue (not even whether it exists at all).
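For reference, the one-loop running of the QED coupling (a textbook formula, for a single charged lepton) makes the perturbative trouble explicit:
$$\alpha(Q^2)=\frac{\alpha(m_e^2)}{1-\dfrac{\alpha(m_e^2)}{3\pi}\ln\dfrac{Q^2}{m_e^2}},$$
which blows up at the Landau scale ##Q^2 = m_e^2\,e^{3\pi/\alpha}##, astronomically far beyond any accessible energy.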
 
  • #12
A. Neumaier said:
This doesn't explain the high accuracy achieved! QM without QED does not have such high predictive accuracy.

I know you need QED for precision; I was just thinking that QT is real and QFT is just a technique. Maybe one day they'll be united :smile:
 
  • #13
ftr said:
I was just thinking that QT is real and QFT is just a technique

What are you basing this on?
 
  • #14
PeterDonis said:
What are you basing this on?

It just looks to me that QFT takes QM, massages it, and turns it into a calculational tool.
 
  • #15
ftr said:
It just looks to me that QFT takes QM, massages it, and turns it into a calculational tool.

You're leaving out at least three key physical phenomena that QFT can predict and ordinary non-relativistic QM can't:

(1) The existence of processes where particles are created or destroyed;

(2) The existence of antiparticles;

(3) The connection between spin and statistics.
 
  • #16
ftr said:
I know you need QED for precision; I was just thinking that QT is real and QFT is just a technique. Maybe one day they'll be united :smile:

QFT is a type of QT.

Condensed matter uses non-relativistic QFT. Non-relativistic QFT can be derived from the non-relativistic Schroedinger equation for many identical particles.
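Schematically (the standard second-quantization construction, stated from memory): for identical nonrelativistic particles with pair potential ##V##, the ##N##-body Schroedinger problem is reproduced by the field Hamiltonian
$$H=\int d^3x\;\psi^\dagger(\mathbf{x})\Big(-\frac{\hbar^2\nabla^2}{2m}\Big)\psi(\mathbf{x})+\frac{1}{2}\int d^3x\,d^3y\;\psi^\dagger(\mathbf{x})\psi^\dagger(\mathbf{y})\,V(\mathbf{x}-\mathbf{y})\,\psi(\mathbf{y})\psi(\mathbf{x}),$$
with ##[\psi(\mathbf{x}),\psi^\dagger(\mathbf{y})]_\pm=\delta^3(\mathbf{x}-\mathbf{y})##, restricted to the ##N##-particle sector.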
 
  • #17
vanhees71 said:
Are there non-relativistic approximations involved?
No. The paper you cited calculates some contributions up to 10th order and even includes hadronic contributions (phenomenologically incorporated into QED), since these already affect the 10th decimal. Indeed, I checked in the references containing the actual calculation details that no expansion in inverse powers of c is made. Thus the results are covariant.

I had mixed up the computation of ##g-2## with the computation of the hyperfine structure (Lamb shift), where NRQED is essential to get high precision.
 
  • #18
  • #19
Well, if you need a cutoff to define the theory, then you admit that it's an effective theory valid at most up to the cutoff scale. The Epstein-Glaser approach avoids UV divergences because it is more careful in dealing with products of distribution-valued operators, and thus it solves a mathematical problem, but not the physical problem of the Landau pole. As far as I know, QED is an example of a QFT more likely not to exist in a strict sense than, e.g., QCD, which is asymptotically free; but for no realistic (i.e., (1+3)-dimensional) theory of interacting quantum fields has there been a proof that it exists in a strict sense. On the other hand, FAPP this doesn't matter much, because we have only a limited energy available anyway, and it's quite likely that our contemporary Standard Model will fail at high enough energies somewhere. Today nobody knows where that scale might be. Maybe there's really a desert up to the Planck scale, where one can definitely expect something to happen concerning quantum effects of gravity. Then HEP with accelerators is doomed, funding-wise :-(.
 
  • #20
ftr said:
But let me ask this: I think the main issue with QFT is removing infinities and the renormalization of the mass and charge. I think these are PHYSICAL problems and not "mathematical" ones; do you agree?
No. A lattice approximation with periodic boundary conditions defines a conceptually and mathematically valid and unproblematic theory. It does not have infinities and does not need renormalization (it is, instead, possibly part of the process of renormalization, namely a regularized theory).

Such a lattice theory can be interpreted as an approximation of the field theory, and it gives all the observable (physical) results with some accuracy. So the lattice theory is mathematically fine, and it is (at least if the computing power is sufficient, the lattice fine enough, and the volume big enough) also physically fine. In principle, a lattice theory could even be a candidate for a more fundamental theory, given that it has neither physical nor mathematical problems; QFT would then simply be the large-distance approximation of such a fundamental lattice theory.
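As a toy illustration of how finite everything is on a lattice (my sketch, not Denis's construction): a free scalar field on ##N## periodic sites is just ##N## coupled oscillators, with finite mode frequencies, zero-point energy, and correlators (in lattice units):

```python
import numpy as np

# Toy model: free scalar field on N lattice sites with periodic boundary
# conditions = N coupled harmonic oscillators. All quantities below are
# manifestly finite (no infinities, nothing to subtract).

N, a, m = 64, 1.0, 0.5                     # sites, lattice spacing, mass
k = np.arange(N)                           # lattice momentum labels
omega = np.sqrt(m**2 + (4.0 / a**2) * np.sin(np.pi * k / N) ** 2)

E0 = 0.5 * omega.sum()                     # finite zero-point energy
# finite equal-time two-point function <0| phi(x) phi(0) |0>
corr = np.array([np.mean(np.cos(2 * np.pi * k * x / N) / (2 * omega))
                 for x in range(N)])

print(f"ground-state energy:   {E0:.4f}")
print(f"<phi(0) phi(0)>      = {corr[0]:.4f}")
print(f"<phi(N/2) phi(0)>    = {corr[N // 2]:.6f}")
```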

What is not fine is the metaphysics. The lattice theory breaks Lorentz covariance, and some people consider Lorentz covariance obligatory, for metaphysical reasons: relativistic symmetry being a fundamental insight or so.
 
  • #21
A. Neumaier said:
This ultra-high accuracy is not obtained in this way but through NRQED: one uses a nonrelativistic ##1/c^2## expansion to get rid of the problems with the relativistic theory and then works essentially with nonrelativistic quantum mechanics with relativistic corrections.

The 12 decimal places of accuracy being talked about concern a very specific quantity, the magnetic moment of the electron, right?
 
  • #22
atyy said:
The main problem with this practical way of thinking is that it needs a non-perturbatively defined quantum theory, e.g., lattice gauge theory; but while lattice QED is thought to be OK, chiral fermions on the lattice are still problematic.
I think this is not a serious problem. For vector gauge fields we have Wilson lattice gauge fields, which realize exact gauge symmetry. Exact gauge symmetry is what we need to describe massless gauge bosons, thus QCD or QED. But those are anyway not chiral.

For the chiral gauge actions, a lattice realization which gives only approximate gauge symmetry seems fine. No? If not, why not? Massive gauge fields being non-renormalizable is not an issue if we do effective field theory: in the large-distance limit they will be equivalent to the closest renormalizable theory, so they will look (have to look) like the renormalizable example we know, with exact but broken gauge symmetry.

But the chiral gauge action is the only problem here. Otherwise we have nothing chiral in the SM; everything comes in pairs of Dirac fermions.
 
  • #23
A. Neumaier said:
Haag's theorem is hardly related to that; it only says that the CCR representation of the free field cannot extend to interacting fields, so the interacting Hilbert space must look quite different from a Fock space. More precisely, while they have to be isomorphic as Hilbert spaces (all infinite-dimensional separable Hilbert spaces are isomorphic), the additional structure needed to set up quantum fields cannot be respected by such an isomorphism.

As I understand it, perturbation theory typically involves treating the interacting states as a perturbation of free-particle states. So does Haag's theorem imply that there is something fundamentally wrong with doing this? (In that case, the fact that it works pretty well is a little mysterious.)
 
  • #24
stevendaryl said:
As I understand it, perturbation theory typically involves treating the interacting states as a perturbation of free-particle states. So does Haag's theorem imply that there is something fundamentally wrong with doing this? (In that case, the fact that it works pretty well is a little mysterious.)
It works pretty well because in a lattice approximation with periodic boundary conditions no problem exists; thus, as far as a technique can be applied to the finite lattice theory, it has to work fine. Haag's theorem becomes relevant only beyond the lattice theory.
 
  • #25
vanhees71 said:
Well, if you need a cutoff to define the theory, then you admit that it's an effective theory valid at most up to the cutoff scale. The Epstein-Glaser approach avoids UV divergences because it is more careful in dealing with products of distribution-valued operators, and thus it solves a mathematical problem, but not the physical problem of the Landau pole. As far as I know, QED is an example of a QFT more likely not to exist in a strict sense than, e.g., QCD, which is asymptotically free; but for no realistic (i.e., (1+3)-dimensional) theory of interacting quantum fields has there been a proof that it exists in a strict sense. On the other hand, FAPP this doesn't matter much, because we have only a limited energy available anyway, and it's quite likely that our contemporary Standard Model will fail at high enough energies somewhere. Today nobody knows where that scale might be. Maybe there's really a desert up to the Planck scale, where one can definitely expect something to happen concerning quantum effects of gravity. Then HEP with accelerators is doomed, funding-wise :-(.
The Epstein-Glaser approach has no cutoff and hence no problem with the Landau pole. Everything is Poincare covariant at all stages, and nothing at all can go wrong at high energies. The renormalization parameter ##\Lambda## that has to be chosen in this approach has not the slightest effect on the infinite-order results, only on the finite loop approximations. The only forbidden thing is to pick ##\Lambda## close to the Landau pole, since there the Landau singularity strikes. For all other choices of ##\Lambda## exactly the same theory, namely QED (in the case under discussion), results, since its parameters are the physical mass ##m## and the physical charge ##e##, and not ##\Lambda##.

Note that all this is true independently of the fact that QED (or the Standard Model) does not reflect Nature at very short distances. This discrepancy is completely unrelated to Landau's old arguments.
 
  • #26
Denis said:
A lattice approximation [...] does not need renormalization
This is a gross misunderstanding of the nature of lattice theories. These all need finite renormalization, which is done by matching the parameters defining the lattice and the interaction with the physical data through appropriate data fits. The coupling constants needed to achieve this match depend very sensitively on the lattice spacing!
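For instance (a textbook fact, not spelled out in the thread): in lattice QCD, keeping physics fixed forces the bare coupling ##g_0## to run with the spacing ##a##; at two loops, asymptotic freedom gives
$$a\,\Lambda_{\text{lat}}=\big(b_0 g_0^2\big)^{-b_1/(2b_0^2)}\,e^{-1/(2b_0 g_0^2)}\,\big(1+O(g_0^2)\big),$$
with ##b_0, b_1## the universal beta-function coefficients, so ##g_0\to 0## only logarithmically as ##a\to 0##.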
 
  • #27
stevendaryl said:
The 12 decimal places of accuracy being talked about concern a very specific quantity, the magnetic moment of the electron, right?
Yes. Only very few quantities are measured to such high precision (and computed with the correspondingly large effort). For most observable quantities much less accuracy suffices for matching experiments.
 
  • #28
stevendaryl said:
As I understand it, perturbation theory typically involves treating the interacting states as a perturbation of free-particle states. So does Haag's theorem imply that there is something fundamentally wrong with doing this? (In that case, the fact that it works pretty well is a little mysterious.)
Yes. There is something fundamentally wrong! The glaring evidence for this is the infinities that delayed the development of QED (begun in 1928) by about 20 years.

The cure is the renormalization program. It sacrifices the Hilbert space (at finite times) to restore finiteness and predictability. In lower dimensions, mathematical physicists have been able to reconstruct the correct (non-Fock) Hilbert space and thus also to restore consistency with ordinary quantum mechanics. In 4 dimensions the resolution of this problem is still wide open; at present, formal, non-converging power series expansions replace actual functions. But formal power series do not form a Hilbert space. The real difficulty is to construct a good inner product on whatever takes the role of the state vectors.
 
  • #30
Demystifier said:
Haag's theorem is an artifact of taking the infinite-volume limit too seriously.
No. It is not an artifact, since only the infinite volume limit gives physically correct results. Even in lattice calculations, one must try to take the limit by extrapolation to get agreement with experiment.

The infinite volume is a necessary feature of Poincare invariance, which is the very basis of all relativistic particle physics.

Demystifier said:
It is brilliantly explained
Brilliant explanations may still be far off the mark! The power is in the facts, not in brilliant words.
 
  • #31
A. Neumaier said:
No. It is not an artifact, since only the infinite volume limit gives physically correct results. Even in lattice calculations, one must try to take the limit by extrapolation to get agreement with experiment.

The infinite volume is a necessary feature of Poincare invariance, which is the very basis of all relativistic particle physics.
I disagree, especially with the claim that only infinite volume gives physically correct results. In addition, Poincare invariance (translation invariance, to be more precise) is broken at large distances by general relativity. Infinite volume is just an approximation, which often works fine when the wavelength of the particles is much smaller than the distance at which translation invariance may be broken, e.g. by cosmological effects.
 
  • #32
Demystifier said:
Poincare invariance (translation invariance, to be more precise) is broken at large distances by general relativity.
But the standard model, on which all particle physics is based, is completely independent of general relativity.

Drop Poincare invariance, and you have essentially no constraints on the kind of action to consider. The terms kept in the standard model are distinguished solely by Poincare invariance, renormalizability, and the assumed internal symmetry group, all extracted from experimental data and very well verified.
 
  • #33
A. Neumaier said:
But the standard model, on which all particle physics is based, is completely independent of general relativity.

Drop Poincare invariance, and you have essentially no constraints on the kind of action to consider. The terms kept in the standard model are distinguished solely by Poincare invariance, renormalizability, and the assumed internal symmetry group, all extracted from experimental data and very well verified.
None of this is evidence that we live in an infinite volume.
 
  • #34
Demystifier said:
None of this is evidence that we live in an infinite volume.
This doesn't matter; I didn't claim that. Instead I claimed that
A. Neumaier said:
The infinite volume is a necessary feature of Poincare invariance, which is the very basis of all relativistic particle physics.
 
  • #35
A. Neumaier said:
This doesn't matter; I didn't claim that.
You claimed that "only the infinite volume limit gives physically correct results". Do you still stand by this claim?
 
  • #36
Demystifier said:
Haag's theorem is an artifact of taking the infinite-volume limit too seriously. It is brilliantly explained in
A. Duncan, The Conceptual Framework of Quantum Field Theory (2012)
https://www.amazon.com/dp/0199573263/?tag=pfamazon01-20
Sec. 10.5 How to stop worrying about Haag's theorem

But even if we take the infinite-volume limit too seriously, Haag's theorem is not a problem. It turns out that the results from the wrong derivations can be rigorously derived at the level of formal perturbation theory, which makes sense if the theory is also shown to exist.
 
  • #37
A. Neumaier said:
But the standard model, on which all particle physics is based, is completely independent of general relativity.

Drop Poincare invariance, and you have essentially no constraints on the kind of action to consider. The terms kept in the standard model are distinguished solely by Poincare invariance, renormalizability, and the assumed internal symmetry group, all extracted from experimental data and very well verified.

Couldn't you put the system on a torus?
 
  • #38
Demystifier said:
You claimed that "only the infinite volume limit gives physically correct results". Do you still stand by this claim?
Yes. This limit defines the theory, though of course every sufficiently accurate approximation to the limit is experimentally indistinguishable from it. But without having the limiting theory there is no way to tell what is sufficiently accurate...

In particular, current lattice calculations don't match experiment unless they extrapolate to the limit. This is typically done by doing computations for a number of lattice sizes, performing finite renormalization to match the defining physical parameters, and then extrapolating. On a fixed lattice, results are generally poor.
 
  • #39
atyy said:
if the theory is also shown to exist.
Though that is at present not the case in 4D. Moreover, even formal perturbation theory needs infinite renormalization, which destroys the Fock space structure. No Hilbert space is left, except the free, asymptotic one at times ##\pm\infty##. Finite-time calculations cannot be justified in this way.

atyy said:
Couldn't you put the system on a torus?
One can, but this preserves only translations, breaks Lorentz symmetry, and does not get rid of the infinities that characterize the perturbative approach based on Fock spaces.
 
  • #40
A. Neumaier said:
One can, but this preserves only translations, breaks Lorentz symmetry, and does not get rid of the infinities that characterize the perturbative approach based on Fock spaces.

But if we put the system on the torus, do all the formal reasons you gave for constraining the form of the standard model Lagrangian remain (with Haag's theorem not applying)?
 
  • #41
The fact that QFTs require renormalization almost certainly leads to inequivalent Hilbert spaces. Haag's theorem is just an independent argument for this fact; even if it doesn't hold, the renormalized Hilbert space will usually be different. A crucial difference between QM and QFT is that in QM we have the Stone-von Neumann uniqueness theorem, while in QFT there are infinitely many inequivalent representations of the field Weyl algebra and none of them is physically preferred.
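The classic illustration (standard lore, sketched schematically): shifting a free field of mass ##m## by a constant, ##\phi(\mathbf{x})\to\phi(\mathbf{x})+c##, is generated in a finite volume ##V## by the unitary ##U=\exp\big(ic\int_V d^3x\,\pi(\mathbf{x})\big)##, and the shifted vacuum contains a mean number of quanta
$$\langle N\rangle=\tfrac{1}{2}\,c^2 m V\;\xrightarrow{\;V\to\infty\;}\;\infty,$$
so in infinite volume no unitary implements the shift, and one lands in an inequivalent representation of the same canonical commutation relations.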
 
  • #42
atyy said:
But if we put the system on the torus, do all the formal reasons you gave for constraining the form of the standard model Lagrangian remain (with Haag's theorem not applying)?
It depends. If you first derive the model in Minkowski space, use the restrictions there, and then put the result on the torus, you get of course just the same limited number of terms as in Minkowski space, since this is determined by the still unregulated UV behavior and not by the IR behavior regulated by the compactness of the torus.

But if you start on the torus from scratch, without assuming a relation to Minkowski space, you have far too many possibilities, since there is no constraint from Lorentz invariance! Thus the number of possible parameters grows tremendously, as there is no longer a (symmetry) reason why many of them should be equal.
 
  • #43
A. Neumaier said:
Yes. This limit defines the theory, though of course every sufficiently accurate approximation to the limit is experimentally indistinguishable from it. But without having the limiting theory there is no way to tell what is sufficiently accurate...

In particular, current lattice calculations don't match experiment unless they extrapolate to the limit. This is typically done by doing computations for a number of lattice sizes, performing finite renormalization to match the defining physical parameters, and then extrapolating. On a fixed lattice, results are generally poor.
Well, it obviously does not define the theory we use, because we get well-defined answers with renormalized perturbation-theory techniques (sometimes you have to resum to get rid of IR problems or problems along the light cone, particularly in many-body QFT at finite temperature and/or density; see, e.g., the AMY photon rates in a (quark-gluon) plasma).

To calculate physical observables in perturbation theory you have to first regularize the theory. One way to get rid of the problems related to Haag's theorem is to put the system in a finite box (the example of a mass term as perturbation in Duncan is just great for understanding the principle behind it). Other ways are the more standard techniques of regularization in momentum space (cutoff, Pauli-Villars, dimensional regularization, heat-kernel/theta-function regularization), taking the limit after renormalization, or direct renormalization with BPHZ subtraction techniques. The results are the known successes of the Standard Model.

Another way is the lattice approach, which has its merits particularly in QCD in the vacuum (e.g., the mass spectrum of hadrons) and at finite temperature (at zero baryo-chemical potential, but even beyond that to study, e.g., the equation of state); I'd not call these results "poor".
 
  • #44
vanhees71 said:
it obviously does not define the theory we use
"Defines" was not meant here in the mathematical sense. It identifies the lattice theory with the corresponding Poincare-invariant continuum theory; renormalized perturbation theory is another approach to approximately defining the latter in the mathematical sense.

vanhees71 said:
the lattice approach, which has its merits particularly in QCD in the vacuum (e.g., the mass spectrum of hadrons) and at finite temperature (at zero baryo-chemical potential, but even beyond that to study, e.g., the equation of state); I'd not call these results "poor".
They are poor if evaluated on a fixed lattice, and give good results only when extrapolated to the limit, as I mentioned. You need to consider how the results change with the lattice spacing, and because convergence is extremely slow one must numerically extrapolate to the limit of zero lattice spacing to get the physically relevant approximations from the lattice calculations. These numerical limits can be quite different from the values obtained at the computationally feasible lattice spacings. The difference is like the difference between computing ##\sum_{k=1}^\infty 1/k^2## from the first few terms of the series (a very poor approximation) or from a Pade-accelerated numerical limiting procedure based on the same number of coefficients (which gives a good approximation of the sum).
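To make the analogy concrete, here is a small sketch (mine; it uses Richardson/Neville extrapolation rather than Pade acceleration, but it illustrates the same point) comparing the raw partial sums of ##\sum_{k=1}^\infty 1/k^2=\pi^2/6## with an extrapolated limit built from the same twelve terms:

```python
import numpy as np

# Raw partial sums of sum_{k>=1} 1/k^2 = pi^2/6 converge only like 1/n.
# Extrapolating the same few partial sums to n -> infinity (Richardson /
# Neville: polynomial extrapolation in h = 1/n to h = 0) is far more
# accurate, analogous to a continuum/infinite-volume extrapolation.

def extrapolate(s):
    h = 1.0 / np.arange(1, len(s) + 1)     # h_n = 1/n
    t = np.array(s, dtype=float)
    for k in range(1, len(s)):             # each pass cancels one power of h
        t = (h[k:] * t[:-1] - h[:-k] * t[1:]) / (h[k:] - h[:-k])
    return t[0]                            # value extrapolated to h = 0

n = 12
partial = np.cumsum(1.0 / np.arange(1, n + 1) ** 2)
exact = np.pi ** 2 / 6

print(f"error of 12th partial sum:  {abs(partial[-1] - exact):.1e}")  # ~8e-2
print(f"error after extrapolation:  {abs(extrapolate(partial) - exact):.1e}")
```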
 
  • #45
Sure, with a few lattice points you don't get the results I mentioned, and continuum extrapolation is mandatory.
 
  • #46
A. Neumaier said:
The cure is the renormalization program. It sacrifices the Hilbert space (at finite times) to restore finiteness and predictability.
I disagree. If one uses a lattice with periodic boundary conditions as the regularization, the regularized theory lives in a standard Hilbert space. And, of course, at finite times.
 
  • #47
A. Neumaier said:
The cure is the renormalization program. It sacrifices the Hilbert space (at finite times) to restore finiteness and predictability.
Denis said:
If one uses a lattice with periodic boundary conditions as the regularization, the regularized theory lives in a standard Hilbert space. And, of course, at finite times.
True, but this does not invalidate my statement.

First, your recipe requires a splitting into space and time, sacrificing covariance instead.

Second, the identification with the observables in these approximations is different at different lattice spacings and periods. Thus, to identify the observables in the physical limit, where the spacing goes to zero and the period to infinity, one needs renormalization. And in 4 dimensions nobody has so far been able to define a proper Hilbert space in the corresponding limit. Thus the physical Hilbert space is lost and replaced by an approximate one, different for each particular approximation scheme (which reflects its lack of physicality).

Third, the real-time lattice formulation proposed by you is numerically extremely poorly behaved, not really suitable for prediction. It cannot even be used for scattering calculations, since in a compact space scattering is absent.

Lattice QFT calculations are almost universally done in a Euclidean framework, where both space and imaginary time are discretized. This not only sacrifices covariance, but also time in favor of its imaginary version!
 
  • #48
PeterDonis said:
You're leaving out at least three key physical phenomena that QFT can predict and ordinary non-relativistic QM can't:

(1) The existence of processes where particles are created or destroyed;

(2) The existence of antiparticles;

(3) The connection between spin and statistics.
Relativistic QM (e.g. S-matrix theory) gives you all those. Field theory is not required.
 
  • #49
A. Neumaier said:
True, but this does not invalidate my statement.
First, your recipe requires a splitting into space and time, sacrificing covariance instead.
So what? Your statement was not about covariance but about the Hilbert space and finite times.
A. Neumaier said:
Second, the identification with the observables in these approximations is different at different lattice spacings and periods. Thus, to identify the observables in the physical limit, where the spacing goes to zero and the period to infinity, one needs renormalization. And in 4 dimensions nobody has so far been able to define a proper Hilbert space in the corresponding limit.
That this limit is not well-defined is also not a problem. Anyway, one has to expect that near the Planck length all this has to be replaced by a different theory. Once QFT is no more than a large-distance approximation, there is no need to have a valid limit.
A. Neumaier said:
Thus the physical Hilbert space is lost and replaced by an approximate one, different for each particular approximation scheme (which reflects its lack of physicality).
Anyway, everything you do is only approximate. So approximation is not at all a lack of physicality.
A. Neumaier said:
Third, lattice QFT calculations are almost universally done in a Euclidean framework where both space and imaginary time are discretized. This not only sacrifices covariance, but also time in favor of its imaginary version! (The real time lattice formulation is numerically extremely poorly behaved, not really suitable for prediction.)
Of course, imaginary time makes no sense in a Schroedinger equation. But it is fine as a numerical method to compute low-energy states. Probably it makes sense for something else too; I'm not sure. Whatever, it does not matter at all if something is numerically poor as long as it is conceptually clean. That one sometimes has to do some dirty things for numerical success, OK, such is life.
 
  • #50
Denis said:
Once QFT is no more than a large-distance approximation, there is no need to have a valid limit.
One needs the limiting concept, including renormalization, already in order to know how to extrapolate from the otherwise poor lattice calculations.

Denis said:
it does not matter at all if something is numerically poor as long as it is conceptually clean
Your lattice Hilbert space allows no scattering, hence it is far from conceptually clean.
 