I do a regular check through the new papers on arxiv and today I came across this http://arxiv.org/abs/0903.3176 which is a continuation of a paper which Peter Morgan published in the Journal of Mathematical Physics a couple of years ago. http://arXiv.org/abs/0704.3420 This is not something I find readily understandable. It's a variant of ordinary QFT which differs drastically from the conventional version in a fundamental way. Can one get away with this? The formal setup is familiar: *-algebra, operator-valued distributions, Schwartz test-functions on ordinary Minkowski space. But one of the algebraic (commutator) relations has been changed. You could say it's a mutant. This is the type of thing that a mathematical physicist would naturally find of interest. Take some common accepted axiomatic system or framework, and see how it behaves when you weaken or replace one of the basic axioms. In this case, surprisingly enough, the mutant QFT doesn't break down or blow up, at least it doesn't in some obvious way that I as naive observer can detect. Yet intuitively I feel it must. So this bothers me a little. Maybe someone else can spot a shortcoming. Sometimes examining a contrasting variant can give a new understanding of the original. It may highlight some feature.
Glanced through the paper. He wants to relax positive semi-definiteness of the Hamiltonian, and to not demand that the Poincare group act continuously on the derived Hilbert space. Also, these random Lie fields are essentially classical objects. Of course when you make your life difficult like that, reproducing even standard things (like fermions) becomes highly tentative. Also there's nothing really to discuss yet; he makes no claim about whether or not this can model anything we know of. Anyway that's fine, it's interesting to see where the math takes you and whether it's useful for modeling something physical, but the task of reproducing 40 years of nontrivial and observed quantum effects that have no apparent classical analogs still remains as an 800 lb gorilla in the room (like, say, anomalies). Heck, constructive field theorists, after decades of trying with far stronger and more natural constraints, can't even prove the existence of a single physical field theory in 4d, much less tackle any sort of phenomenology. It's just one of those fiendishly difficult topics in physics.
Thanks for checking it out! Just so any passers-by will see what we're talking about I will paste the abstract: http://arxiv.org/abs/0903.3176 Lie random fields Peter Morgan 9 pages (Submitted on 18 Mar 2009) "The algebras of interacting "Lie random fields" that were introduced in J. Math. Phys. 48, 122302 (2007) are developed further. The conjecture that the vacuum vector defines a state over a Lie random field algebra is proved. The difference between Lie random field algebras and quantum field algebras is the triviality of the field commutator at time-like separation, the field commutator being trivial at space-like separation in both cases. Many properties that are usually taken to be specific to quantum theory, such as the superposition of states, entanglement, quantum fluctuations, and the violation of Bell inequalities, are also properties of Lie random fields." I've highlighted what I found puzzling. We know Peter Morgan from his occasional contributions to "beyond" forum and also from the essay on the nature of time he entered in the recently concluded FQXi essay contest.
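For passers-by, the contrast the abstract draws between the two kinds of algebra can be written out schematically. This is my own shorthand as a reader, not notation taken from the paper:

```latex
% Ordinary QFT (microcausality): the field commutator vanishes
% only at space-like separation,
[\hat{\phi}(x), \hat{\phi}(y)] = 0 \quad \text{for } x - y \ \text{space-like},
% and is in general non-trivial at time-like separation.
%
% Lie random fields: the commutator is trivial at time-like
% separation as well,
[\hat{\phi}(x), \hat{\phi}(y)] = 0 \quad \text{for } x - y \ \text{space-like or time-like},
% which is the changed algebraic relation the thread is about.
```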
I'm free associating here, so it's probably not relevant. But a toy model (but for QM, not QFT) which also violates the Bell inequalities is van Enk's http://arxiv.org/abs/0705.2742.
Probably not, atyy, but I hadn't noticed the van Enk model before, so I'm pleased with your free association. Thanks. Violation of Bell inequalities is of course possible in any number of ways, but none of them has ever gained traction, mostly, I would say, because they look ad hoc (though I think the van Enk model crosses the line in a more interesting way than usual). The ad-hocness of models is especially significant because we can only do engineering if we can find a way of constructing models for experiments/projects that consistently predict the behavior ahead of time. It mustn't need a very delicate understanding of fine details to do new stuff. It's not only engineering, of course; Physics also needs tractable consistency of predictions, the more easily to see where our models are not quite right. Lie random fields are good as a classical way to violate Bell inequalities because the violation is so natural. With the introduction of an explicit model for quantum fluctuations (why would we not model something that makes a difference in experiments?), Bell inequalities are violated. Introducing fluctuations of any kind, quantum or thermal, moves us into a mathematics of fields that are not differentiable, which is a significant sense in which random fields are not classical and in which common sense can often fail. There are quantum fluctuations in QFT (which are different from thermal fluctuations, albeit some approaches to understanding QFT deny that there are quantum fluctuations), so why should classical Physics not introduce such fluctuations as a way to make working models? A second significance of Lie random fields as an approach to understanding quantum field theory, I would say, is that the quantum optics of free fields is identical in a Hilbert space formalism for random fields, so there's no need to prove that we can construct a model for each and every experiment someone presents as a challenge. That includes Bell inequality violating experiments.
The use of Hilbert space as a formalism for presenting a classical random field makes so much of the structure the same as in QM/QFT that we have a head start on many other classical approaches to quantum theory. Proving empirical equivalence with all of the standard model is definitely a different matter, sadly. Haelfix definitely has his assessment close to right. I claim that there is something curious here to see, but definitely not enough to interest someone who's happy with the regularization and renormalization stew. Pejoratives aside, I can't see how anyone can be happy with so many, so delicately balanced infinities. I take it that the trick is not to get rid of the infinities that arise whenever we work with continuum models, but to control them in a way that is different enough that it doesn't obstruct our understanding so much. Every paper on the violation of Bell inequalities that has made its way into Nature and PRL over the last few years has crowed that classical particle property models are ruled out by such and such an experiment. The ruling out of random field models requires a completely different class of experiment.
Thanks Haelfix, and thanks Marcus for the OP. Although I'm careful to mention the lack of a continuous action of the Poincare group on the Hilbert space, I personally think this requirement is a mathematics too far in conventional axiomatizations, and its absence is not a substantial limitation of the mathematics. I believe the restriction to a finite number of test functions could be taken away without special problems. I had an exchange with Fredenhagen, Rehren, and Seiler a year or two ago in which I queried the lack of a requirement for a continuous action in their paper "QFT: where we are". They require only "covariance under spacetime symmetries (in particular, Lorentz invariance of the dynamics)", but in correspondence it turned out that to them this of course means a continuous action. Still, I think calculations in perturbation theory, and practical calculations generally, never use the availability of a continuous action. I'd take small issue with the idea that "far stronger" constraints are "more natural". The restriction to positive frequency is just that, a restriction of the class of models we are willing to consider. If we take away this restriction, it gives us other models to consider. I'm just now pulling apart Hans Halvorson's and David Baker's paper "Antimatter", which gives as clear an analysis of complex scalar free QFTs as I've ever seen. In order to make a complex scalar free QFT have positive energy, they have to introduce a complex structure that could only be natural if you're determined to make the energy positive. If the vacuum is stable for thermodynamic reasons, not because there are no lower energy states for it to decay into, one can use the natural complex structure.
The vacuum state gives probability densities for observables and for joint measurements of compatible observables because the inner product is positive semi-definite (I keep a Hilbert space structure, after all), but the Hamiltonian does not have to be positive semi-definite for there to be a sensible probabilistic interpretation of the mathematics. I note that the Hamiltonian is irrelevant to calculating the Wightman functions in the vacuum state of a free quantum field. We only need the commutation relations between creation and annihilation operators to calculate everything. Of course, creation and annihilation operators are not observables; they are theoretical objects that are used pervasively in perturbative QFT, quantum optics, and almost all practical applications of QFT, but they are definitely not used in algebraic QFT. Also, the deformation that I'm discussing is definitely a baby step, insofar as other deformations of the creation and annihilation operators are possible. Of course, eliminating the Hamiltonian from all our discussions changes things a little, but I think it's all good to have a mathematics that is so different, because it puts what we have been using in such sharp perspective. Haelfix: I would be interested to know what you think we might weaken in the Wightman axioms (or in the Haag-Ruelle axioms, if you prefer)? [I think it's not unreasonable to answer that people should continue to try to construct a rigorous concrete QFT that satisfies the Wightman or Haag-Ruelle axioms, but I would like whatever is produced to be comprehensible and tractable after the fact.] As far as Fermions are concerned, I think I will have to say that they're only observable by their effects on bosons and the consequent effects of the bosons on thermodynamically metastable mesoscopic or macroscopic collections of fermions and bosons (which are called detectors), since we cannot construct gauge invariant observables out of Fermion fields alone.
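To illustrate the claim that the Hamiltonian plays no role here, the standard free real scalar field is the simplest case (this is textbook material, not anything specific to Lie random fields):

```latex
% Commutation relation and vacuum condition for the
% annihilation/creation operators, with \omega_k = \sqrt{\mathbf{k}^2 + m^2}:
[a(\mathbf{k}), a^{\dagger}(\mathbf{k}')] = (2\pi)^3\, 2\omega_k\, \delta^{3}(\mathbf{k} - \mathbf{k}'),
\qquad a(\mathbf{k})\,|0\rangle = 0.
% The two-point Wightman function follows from these relations
% alone, with no reference to the Hamiltonian:
\langle 0|\,\hat{\phi}(x)\,\hat{\phi}(y)\,|0\rangle
  = \int \frac{d^{3}k}{(2\pi)^{3}\, 2\omega_k}\, e^{-ik\cdot(x-y)},
\qquad k^{0} = \omega_k .
```

Higher Wightman functions of the free field reduce to sums of products of this two-point function in the same purely algebraic way.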
If the phenomenology that we have become accustomed to attributing to Fermions can be modeled in a different way, I'd like to see it. It would of course, if it's possible, lead to a change in our understanding about as great as removing our reliance on Hamiltonians. [I know this is evasive, but I've only been working ten years on my own so far. String theory has had tens of thousands of physicist-years of ingenuity and hard work. Of course if tens of thousands of physicist-years are expended on the mathematics of random fields, we may have as much to show for it as we would have if we expended the same effort on the worst kind of crank theory. The sociology of the adoption of an idea fascinates me as much as it fascinates any crank, almost as much as whether the mathematics is consistent.]
Hi Peter. I haven't thought too deeply about cQFT basically since I was in graduate school, precisely b/c I felt it was too hard to make progress in. I'm not sure it's been shown yet whether we *have* to weaken assumptions in the Wightman or Haag axiom sets; they do, after all, get the free fields right (and some interacting models in 2d), so at least that's something. Otoh I'd hazard a guess that big progress can't be made without help from the conventional field theory or lattice side. Like, for instance, a working analytic closed-form nonperturbative solution of N=4 SYM or something like that. The whole field suffers, in my opinion, from this chronic lack of examples, and absent those, finding the right technical tools is a bit too much of a fishing expedition for my taste. So my answer to your question is... I simply don't know =/
Thanks Haelfix. A large part of my problem at this point is to see how to make contact with existing QFT. I can make pretty direct contact with Quantum Optics, which is a better start than I think most approaches could make, but I'm trying to handle interactions in a different enough way from perturbative QFT that contact with it is so far very elusive. We'll see how I do over the next few years. I've been trying to make contact with Kreimer's Hopf-algebraic approach, but so far I've failed to do anything at all with it. If non-perturbative QFT can produce a relatively straightforward analytic solution that makes contact with experiment, with which I could try to make contact, then there would be no need for my approach. You're right about examples. It's precisely why I think there's scope for a re-examination of the Wightman axioms in particular. The article I cited above, "QFT: where we are", attempts a reassessment, but I think it is too embedded in the Haag-Ruelle axioms, which I think are considerably too abstract, specifically given that we already have no physical models of the Wightman axioms. It requires an acceptance that one is most likely to make no progress to take on foundational questions. Grad students, post-docs, and junior faculty are not allowed to make no progress, at least not without risking their careers, so foundations have to be kept firmly in the background in most areas of academia.
I'm not happy with the infinities, and the point you raise is also a key motivation for me. But I personally don't quite understand your original specific concern, and the logic by which it leads you to this exact thing. So it's interesting to see some of your concerns; it helps me at least. Can you comment on why you don't like the infinities? Is it because the renormalization story is ad hoc, or something else? Part of my concern lies in the information perspective. The continuum is effectively an extension of counting: counting states, counting evidence. And there is something about an infinite number of distinguishable states that is highly unphysical and at least goes against my intuition about how the world operates. Many people seem to have the capability to distinguish between "mathematical degrees of freedom", in the sense that you can distinguish one real number from another, and "physical degrees of freedom", which can be distinguished by means of a physical process. But still, there is something confusing and inconsistent about insisting on using a redundant language. Even if we realise that there is a difference between the language and what it describes, I think the idea (my idea at least) is that mathematical models should somehow be condensed and pure irreducible representations of the world. Usually, with the right language, sentences become simpler. I somehow have the opinion that it's not possible to give an optimal representation unless you have the optimal language. This, in combination with the idea that mathematics in physics is ultimately related to quantification of information, and thus counting, suggests to me that our usual application of continuum models to physics is not quite right. This leads me to look into a reconstruction of the continuum, constructing measures of how to count distinguishable states, and how these interact.
It seems to me that while some of your concerns have a similar basis, you also have more confidence in classical modelling, and somehow start your reasoning from a classical perspective, rather than the information/observational perspective that I've chosen. Maybe by putting the two together I can see the logic in your attempt. I don't know what demystifier has to say about this? He seems to also come from the classical path, being into Bohmian reasoning. Maybe he has something interesting to say? /Fredrik