PeterDonis said:
In other words, part of your definition of "the relativistic paradigm" is "not working on preferred frame theories".
Yes.
PeterDonis said:
The whole point of Symanzik's work that you so blithely dismiss was to show that the Schrodinger equation in QFT is relativistic, in the sense of being Lorentz invariant, even though the Lorentz invariance is not manifest.
I blithely dismiss it? I quoted what was claimed, without questioning this claim.
Then let's see: the word "relativistic" is not found in the paper at all, and "Lorentz" is found once, in the following remark:
For most of our considerations, a plane ##\partial \Gamma## is sufficient, and then, for calculations, dimensional regularization is the most convenient: ##3 - \varepsilon## space dimensions (Re ##\varepsilon > 0##), one (euclidean or minkowskian) time dimension. This is as effective as introducing a lattice in space while keeping time continuous, but by itself does not break Lorentz invariance such that renormalization of the speed of light is not needed.
That does not look like the whole point of the paper, which the author himself describes as: "We show here that in renormalizable models, the Schrödinger wave functional exists to all orders in perturbation theory".
In any case, the question of what Symanzik achieved seems off-topic here, as does what Coleman and Jackiw achieved:
A. Neumaier said:
Coleman, Jackiw and others used the Schroedinger picture for QFT to study solitonic degrees of freedom and instantons - without any input from Bohmian mechanics.
So, let's concentrate on what follows from BM for QFT.
First of all, the BFT approach itself has to be based on a Schrödinger picture of QFT. If such a picture did not exist, that would be bad news for BFT.
Then one has to identify the correct choice of configuration space. Different choices define different, competing lines of development of BFT. So if (for whatever reason) the approach based on a particle ontology (favored by Duerr et al.) fails, those working with a field ontology would not be impressed at all; they would even be happy, since their preferred field ontology would then appear preferable. (This choice may differ for bosons and fermions.)
Then one has to look at the Schrödinger equation: is it of a type that allows a dBB interpretation? This appears unproblematic, and beyond this point nothing problematic remains.
A. Neumaier said:
On the other hand, what did the Bohmian's do to develop this picture? Nothing at all. They just say that they have a realistic Schrödinger equation interpretation, and leave the rest to others, just as before. They are content to iterate their mantra that there is no need for them to do such work, as you did:
That's a misinterpretation. The dBB approach does not impose any restrictions on the use of the mathematical methods developed in quantum theory. So Bohmians will be happy about any progress reached with methods that do not rely on Bohmian trajectories; such methods can be used as they are, with no need to modify them or reinvent them in some Bohmian version. Once the QM formalism can be derived from BM, everything that can be reached based on the QM formalism can be reached using BM too.
This differs from the relativistic paradigm, which rejects whatever is not manifestly (fundamentally) Lorentz-covariant. Such things are acceptable only as mathematical tools, and only as long as no connection to reality is claimed.
To illustrate this, let's look at the role of regularizations. To get rid of the infinities in QFT, one has to regularize it. The usual way is to cut off large momentum values. But "large momentum value" is not a Lorentz-invariant notion, so straightforward regularizations yield theories which are not Lorentz-covariant. Sometimes one can circumvent this problem and find a Lorentz-covariant regularization: say, one adds some very massive particles and then invents interactions with them such that all the problematic terms cancel. But these are exceptions, not the rule. The most straightforward and simple way to regularize a field theory, a spatial lattice regularization, is obviously not Lorentz-covariant.
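A minimal numerical sketch of why a momentum cutoff is frame-dependent (all numbers here are illustrative, units with ##c = 1##): a mode kept by the cutoff in one frame can be thrown away by a boosted observer, so the set of "regularized" modes is not Lorentz-invariant.

```python
import math

def boost_momentum(E, p, v):
    """Lorentz boost of an (E, p) pair along the momentum axis, c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (E - v * p), gamma * (p - v * E)

m = 1.0          # mass of the mode (illustrative units)
cutoff = 10.0    # momentum cutoff Lambda
p = 9.0          # spatial momentum, below the cutoff in this frame
E = math.sqrt(p * p + m * m)

# View the same mode from a frame moving at v = -0.5:
E2, p2 = boost_momentum(E, p, v=-0.5)
print(abs(p) < cutoff)    # True: this frame keeps the mode
print(abs(p2) < cutoff)   # False: the boosted frame cuts it off
```

The same mode is inside the cutoff for one observer and outside it for another, which is exactly the sense in which the regularized theory singles out a frame.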
Once they are not covariant, they are, from the point of view of the relativistic paradigm, not even candidates for fundamental theories. Their only reasonable role is that of a dirty mathematical tool which allows one to compute some intermediate results. These cannot be the final results: one has to consider some limit leading to a relativistic, Lorentz-covariant theory, else the intermediate results are worthless, and their agreement with observation is nothing but an unexplained accident.
If we restrict ourselves here to lattice regularization, this Lorentz-covariant limit can be reached only in the continuum limit of lattice spacing to zero. Unfortunately, even for renormalizable theories this limit either does not exist at all (Landau poles) or is trivial (the interaction becomes zero). But even though physicists were quite uncomfortable with not having a really well-defined theory (and started research programs like algebraic QFT to solve these problems), this situation was more or less accepted: one could at least compute relations between observables which had well-defined limits. Unfortunately, important fields (the gravitational field and massive gauge fields) did not fit into this scheme. While a way was found to handle massive gauge fields, no way was found to handle gravity.
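The Landau pole obstruction can be seen already in the schematic one-loop running ##g(\mu) = g_0 / (1 - b\, g_0 \ln(\mu/\mu_0))## with ##b > 0## (QED-like); the coupling ##g_0## and coefficient ##b## below are illustrative, not taken from any real theory.

```python
def running_coupling(g0, b, log_ratio):
    """One-loop running g(mu) = g0 / (1 - b*g0*ln(mu/mu0)), with b > 0."""
    return g0 / (1.0 - b * g0 * log_ratio)

g0 = 0.1   # coupling at the reference scale mu0 (illustrative)
b = 0.05   # one-loop beta-function coefficient (illustrative)

# The denominator vanishes at ln(mu*/mu0) = 1/(b*g0): the Landau pole.
landau_log = 1.0 / (b * g0)   # = 200 here

print(running_coupling(g0, b, 0.0))               # 0.1 at the reference scale
print(running_coupling(g0, b, 0.5 * landau_log))  # 0.2: already doubled
print(running_coupling(g0, b, 1.1 * landau_log))  # negative: past the pole
```

Pushing the cutoff scale beyond the pole makes the bare coupling negative or undefined, which is the concrete sense in which the continuum limit "does not exist at all".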
So, the restrictions imposed by the relativistic paradigm created some problems with the quantization of gravity.
In a BM approach, there would be no objection against the regularized theories. Any assumption that the theory has to be Lorentz-covariant at the fundamental level is foreign to BM; indeed, given Bell's theorem, it is known that BM needs a preferred frame. Instead, one would even require a regularization which reduces the degrees of freedom to a finite number, as lattice regularizations on a large cube do (unlike other regularization procedures such as dimensional regularization, where it is unclear whether one could do any BM in the regularized theory). After this, there would be no BM-internal reason to consider a continuum limit of that lattice theory. In particular, a lattice regularization of GR would be acceptable, and it would work at large distances (greater than the Planck length) without any problems.
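A back-of-envelope count of why a lattice in a large cube gives finitely many degrees of freedom (assuming, for illustration, a cubic box of side ##L##, spacing ##a##, and one real field value per site):

```python
def lattice_dof(box_side, spacing, components=1):
    """Number of lattice degrees of freedom for a field in a cubic box."""
    sites_per_side = int(box_side / spacing)
    return components * sites_per_side ** 3

# A box of side 10 with spacing 1 has 10^3 = 1000 sites, so a field
# configuration is a point in a 1000-dimensional space: exactly the
# finite-dimensional configuration space a Bohmian field ontology needs.
print(lattice_dof(box_side=10.0, spacing=1.0))  # 1000
print(lattice_dof(box_side=10.0, spacing=0.5))  # 8000: finer lattice, still finite
```

However fine the lattice, the count stays finite, in contrast to dimensional regularization, which never provides such a configuration space.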
The approach now accepted by the mainstream is Wilsonian effective field theory: all the SM fields, as well as the field-theoretic version of GR, are only effective field theories. That means there is some critical length, and below that critical length these theories have to be replaced by unknown, different, more fundamental theories. The length is usually assumed to be the Planck length, but in principle it could be a different one.
Aside: Of course, Wilson had nothing to do with BM, at least AFAIK. But behind the Wilsonian approach there is an even more evil paradigm, the ether: there are sufficient similarities between classical condensed matter theories and the field theories of fundamental physics, so let's exploit these similarities by shamelessly applying the methods used in one domain in the other too. As a result, Wilson was able to apply renormalization techniques developed in fundamental physics for the SM to condensed matter theory, and to reach quite nontrivial results there about phase transitions.
In the other direction, the empirical success was less obvious. But if one starts with some fundamental theory at the Planck length, where all imaginable terms would be of the same order, one finds that in the large-distance limit the non-renormalizable terms are suppressed much more strongly than the renormalizable ones. This leads to the quite general prediction that we will observe renormalizable theories, except where no renormalizable term exists; there the leading term will be a heavily suppressed (much weaker) non-renormalizable one. This is gravity, which is indeed much weaker. So here too we have a qualitative but correct empirical prediction.
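The size of this suppression follows from naive dimensional analysis: a dimension-##d## operator contributes at energy ##E## with a factor ##(E/\Lambda)^{d-4}##. A quick numerical illustration (the 100 GeV scale and the rounded Planck-scale value are illustrative choices):

```python
def suppression(energy, cutoff, operator_dim):
    """Naive suppression factor (E/Lambda)^(d-4) of a dimension-d operator."""
    return (energy / cutoff) ** (operator_dim - 4)

E = 100.0          # typical collider energy scale in GeV (illustrative)
Lambda = 1.2e19    # Planck scale in GeV (rounded)

print(suppression(E, Lambda, 4))  # 1.0: renormalizable terms unsuppressed
print(suppression(E, Lambda, 6))  # ~7e-35: a dimension-6 term is negligible
```

A renormalizable (dimension-4) term survives untouched, while the leading non-renormalizable correction is suppressed by dozens of orders of magnitude, which is the quantitative content of "much weaker" above.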