Wilsonian viewpoint and wave function reality

The Wilsonian viewpoint treats quantum electrodynamics as an effective theory, emphasizing that low-energy predictions arise from coarse graining, which does not necessitate a real or classical lattice. The Copenhagen interpretation remains agnostic about the reality of the wave function, focusing instead on observable experimental results. Coarse graining involves restricting the algebra of observables to those that are measurable, with the understanding that these quantities are considered real. The discussion highlights that while quarks are not directly observable, their effects can be measured, reinforcing the notion that reality is tied to observability. Overall, the conversation underscores the complexities of defining reality within quantum field theory and the importance of effective descriptions based on observable phenomena.
  • #121
Demystifier said:
Since you are fan of it, can you recommend a good and not too technical review of the chiral fermion problem, which in simple terms explains what exactly this problem is? Or even better, could you write (perhaps as an insight entry on this forum) a non-technical review by yourself?

How about http://latticeqcd.blogspot.sg/2005/12/nielsen-ninomiya-theorem.html ?
 
  • #122
atyy said:
How about http://latticeqcd.blogspot.sg/2005/12/nielsen-ninomiya-theorem.html ?

Nielsen is a frequent guest at the institute where I am working, and sometimes I even share an office with him. But I had the impression that the fermion doubling problem (which the Nielsen-Ninomiya theorem is about) is not the same thing as the chiral fermion problem. Are you saying that those are different names for the same problem?
 
  • #123
Demystifier said:
Nielsen is a frequent guest at the institute where I am working, and sometimes I even share an office with him. But I had the impression that the fermion doubling problem (which the Nielsen-Ninomiya theorem is about) is not the same thing as the chiral fermion problem. Are you saying that those are different names for the same problem?

Well, they are closely related. The introduction of http://arxiv.org/abs/1307.7480 describes the relationship between the chiral fermion problem and the Nielsen-Ninomiya theorem.
 
  • #125
Demystifier said:
I don't think that's true. I think all so-called "rigorous" interacting QFTs need some sort of regularization of UV divergences, not much different from a non-zero lattice spacing.
There are several rigorously constructed QFTs that do not have an ultraviolet cutoff, e.g. ##\phi^{4}_{3}##.
 
  • #126
Demystifier said:
@atyy In the literature I have seen the claim that the problem can be solved by the Ginsparg-Wilson approach. Here are some links:
http://latticeqcd.blogspot.hr/2005/12/ginsparg-wilson-relation_21.html
http://arxiv.org/abs/hep-lat/9805015
What is your opinion on that approach?

It does work, if one can solve the Ginsparg-Wilson relation, and in many particular cases it can be used. However, it was probably believed to achieve more than it actually does, e.g.:

"The full strength of the Ginsparg-Wilson relation was realized by Luscher who discovered that it suggests a natural definition of lattice chiral symmetry which reduces to the usual one in the continuum limit. Based on this insight, Luscher achieved a spectacular breakthrough: the non-perturbative construction of lattice chiral gauge theories" http://www.itp.uni-hannover.de/saalburg/Lectures/wiese.pdf

But fzero pointed out to me here on PF that this was probably not correct: https://www.physicsforums.com/threads/status-of-lattice-standard-model.823860/
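For reference, the relation under discussion (standard formulas, stated here only to fix notation, not quoted from the links above): a lattice Dirac operator ##D## at lattice spacing ##a## satisfies the Ginsparg-Wilson relation if
$$ D\gamma_5 + \gamma_5 D = a\, D\gamma_5 D , $$
and Lüscher's observation is that the lattice action ##\bar\psi D \psi## is then exactly invariant under the modified chiral transformation
$$ \delta\psi = \gamma_5\Big(1 - \tfrac{a}{2} D\Big)\psi , \qquad \delta\bar\psi = \bar\psi\Big(1 - \tfrac{a}{2} D\Big)\gamma_5 , $$
which reduces to the usual chiral symmetry ##\delta\psi = \gamma_5\psi## in the continuum limit ##a \to 0##.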
 
Likes: Demystifier
  • #127
DarMM said:
There are several rigorously constructed QFTs that do not have an ultraviolet cutoff, e.g. ##\phi^{4}_{3}##.
Yes, but I meant in 4 dimensions.
 
  • #128
atyy said:
For it to be asymptotic series, the thing that it is approximating must exist. In other words, the theory must be constructed. Does a construction of QED exist?
Let's use the Wilsonian concept. That means we replace, in a first step, QED or the whole SM by a lattice theory, with some lattice spacing h and a finite size L with periodic boundary conditions, so that there are no IR infinities either.

This theory is well-defined in every sense. Now you can define, for computations in this well-defined theory, any sort of perturbation theory. Whether the resulting series is convergent, or only asymptotic, or whatever: this question is well-posed, because the theory which is approximated by this perturbation theory is well-defined and exists. (And, in particular, up to this point this holds for gravity and other non-renormalizable stuff too.)

Then you can consider the question of how this well-defined and hopefully well-approximated theory depends on the lattice approximation: what changes if you decrease h and increase L. The theories with different h and L are related to each other by some renormalization. But if the relation between the lattice theory and its perturbation series is well understood for one choice of h and L, it will probably be the same for other choices of h and L, maybe with a different radius of convergence or so, if there is such a thing.

But what about the limit? The limit is irrelevant, because to recover everything observable it is sufficient to consider the well-defined lattice theory for small enough h and large enough L. Then, in particular, all terms violating relativistic symmetry (which is not exact on the lattice) will be too small to be observable, and the theory with even smaller h will be indistinguishable.
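As a toy illustration of this last point (a minimal sketch: the free 1D lattice field and the specific numbers are illustrative assumptions, not anything computed in this thread), one can check numerically how fast lattice artifacts at a fixed physical momentum die away as the spacing h shrinks:

```python
import numpy as np

# Free massless field on a periodic 1D spatial lattice: the naive
# discrete Laplacian yields the dispersion relation
#   E_lat(p) = (2/h) * |sin(p h / 2)|,
# versus the continuum E(p) = |p|.  At fixed physical momentum p the
# deviation is O((p h)^2), so halving h quarters the artifact.

def lattice_energy(p: float, h: float) -> float:
    """Dispersion relation of the naive lattice discretization."""
    return (2.0 / h) * abs(np.sin(p * h / 2.0))

p = 1.0  # fixed physical momentum, arbitrary units
for h in (0.5, 0.25, 0.125, 0.0625):
    rel_dev = abs(lattice_energy(p, h) - p) / p
    print(f"h = {h:6.4f}   relative deviation from continuum: {rel_dev:.2e}")
```

Once ##ph## is far below any experimental resolution, a further decrease of h changes nothing observable, which is the sense in which the continuum limit is irrelevant here.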

(There is another point - to obtain a really well-defined theory it may be necessary to fix the gauge freedom and for gravity the freedom of choice of coordinates. With the Lorenz gauge and harmonic coordinates we have nice and simple candidates for this, so that this is unproblematic too. What has been said about relativistic symmetry holds for gauge and diff symmetry accordingly - for a fine enough grid it will be effectively recovered.)
 
  • #129
A. Neumaier said:
...the results obtained by truncating the covariant perturbation series at 5 loops (needed for the 10 digits) are provably equivalent (within computational accuracy) to those of a rigorously defined nearly relativistic QFT.
This is far better than what one has for the lattice approximations, which are completely uncontrolled.

I think one has to distinguish conceptual and pragmatic questions. It is one question how to compute QED predictions with maximal precision and minimal resources, and a completely different one to understand the concepts, and to understand how all this makes sense and can be, at least in principle, defined in a rigorous way.

For efficient computations we can easily use, say, dimensional regularization, even if a ##(4-\varepsilon)##-dimensional space is completely meaningless nonsense. For understanding the concepts, a lattice regularization is much better: it makes complete sense, in all details. It may be horribly inefficient for real computation, but so what? If we want to understand how a theory can be defined in a rigorous and meaningful way, the lattice regularization is the preferable way, because it makes sense. In particular, even if the limit is not well-defined, a lattice with small enough h is well-defined. Which is an important difference from QFT in ##(4-\varepsilon)##-dimensional space (which I have chosen here to illustrate the point).
 
  • #130
Ilja said:
a ##(4-\varepsilon)##-dimensional space is completely meaningless nonsense
But a ##(4-\varepsilon)##-dimensional integral of a (sufficiently well-behaved) covariant function of momenta is a completely well-defined object. QFT makes use only of the latter.
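A standard textbook example (Euclidean signature, with the conventional scale ##\mu##; added here only to make the statement concrete) is the one-loop integral
$$ \mu^{\varepsilon}\int \frac{d^dk}{(2\pi)^d}\,\frac{1}{(k^2+m^2)^2} \;=\; \frac{1}{16\pi^2}\left(\frac{2}{\varepsilon} - \gamma_E + \ln\frac{4\pi\mu^2}{m^2}\right) + O(\varepsilon), \qquad d = 4-\varepsilon , $$
an ordinary, well-defined analytic function of ##d##: no ##(4-\varepsilon)##-dimensional spacetime is needed to define it.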
 
  • #131
Demystifier said:
Since you are fan of it, can you recommend a good and not too technical review of the chiral fermion problem, which in simple terms explains what exactly this problem is? Or even better, could you write (perhaps as an insight entry on this forum) a non-technical review by yourself?
What I can tell you is how I see the problem.

First, there is the purely technical fermion doubling problem. If you put Dirac fermions naively on the lattice, you obtain not one but 16 Dirac fermions. In a more sophisticated version, this can be reduced to 4. What I propose is a further reduction to 2, which is reached by using the old, not manifestly relativistic form of the Dirac equation, ##i\partial_t \psi + i\alpha^i\partial_i \psi + m\beta\psi = 0## (or so, modulo signs), and then applying the standard staggered-fermion technique only in the spatial discretization. Reducing it to two fermions is sufficient, because in the SM all fermions appear in electroweak doublets only. The problem of putting a single Weyl fermion alone on the lattice may be a pragmatic problem, but it is irrelevant for the conceptual understanding of the SM.
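For reference, the standard counting behind the "16" (textbook material, not quoted from the post): the naive symmetric lattice derivative acts in momentum space as
$$ \frac{\psi(x+h\hat\mu)-\psi(x-h\hat\mu)}{2h} \;\longrightarrow\; \frac{i\sin(p_\mu h)}{h}\,\psi(p) , $$
and ##\sin(p_\mu h)/h## has a second zero at ##p_\mu = \pi/h## besides the one at ##p_\mu = 0##, i.e. one spurious species per lattice direction, hence ##2^4 = 16## fermions from a naive four-dimensional discretization. Full staggering reduces this to 4; the spatial-only staggering proposed above aims at 2.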

Then there is the problem of creating a gauge-invariant lattice model. Here I say: forget about it, use a non-gauge-invariant lattice model. Weak gauge fields are massive anyway, thus not gauge-invariant. You need gauge invariance for renormalizability? No: think about what Wilson tells us about this. The exact lattice theory contains renormalizable as well as non-renormalizable terms, and in the large-distance limit the non-renormalizable ones decrease, the more horrible ones much faster than the almost-renormalizable ones. So, conceptually, we do not have to care.
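(A rough dimensional estimate, added only to make "decrease" quantitative: an operator ##\mathcal{O}_\Delta## of mass dimension ##\Delta > 4## enters the lattice action with a coefficient ##c_\Delta\, h^{\Delta-4}##, so its contribution to observables at energy ##E## is suppressed as
$$ \delta \sim c_\Delta\,(E h)^{\Delta-4} , $$
and the higher the dimension ##\Delta##, the faster the term dies in the large-distance, small-##Eh## limit.)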

Or, in other words: once we start with a lattice theory which is not gauge-invariant, we have to expect that in the large-distance limit gauge invariance will not be recovered, and that the lowest-order non-gauge-invariant terms survive. That will be the mass term. Fine, this is what we need for electroweak theory anyway.
 
  • Like
Likes Demystifier
  • #132
A. Neumaier said:
But a ##(4-\varepsilon)##-dimensional integral of a (sufficiently well-behaved) covariant function of momenta is a completely well-defined object. QFT makes use only of the latter.
Fine, but this is not the point. The integral is what you need to compute the results.

What you need for a conceptual understanding, or for a rigorous definition, is either a rigorous construction of the theory without a cutoff, or at least a rigorous construction of a meaningful theory with a cutoff. And a quantum field theory in a ##(4-\varepsilon)##-dimensional space is something I have never seen.
 
  • #133
Ilja said:
[…] once we start with a lattice theory which is not gauge-invariant, we have to expect that in the large-distance limit gauge invariance will not be recovered, and that the lowest-order non-gauge-invariant terms survive. That will be the mass term. Fine, this is what we need for electroweak theory anyway.
Is such an approach compatible with zero mass of the photon?
 
  • #134
A. Neumaier said:
But a ##(4-\varepsilon)##-dimensional integral of a (sufficiently well-behaved) covariant function of momenta is a completely well-defined object. QFT makes use only of the latter.
True, but you don't need dim. reg. to define renormalization schemes, and it's indeed pretty unintuitive from a physics point of view. For me the most intuitive scheme is BPHZ, i.e., you define your renormalization scheme via conditions on the divergent proper vertex functions (a finite set for a Dyson-renormalizable theory, an infinite set for a non-renormalizable effective theory, where you need more and more low-energy constants the higher you go in the momentum expansion). There you introduce a renormalization scale: either a (space-like) momentum, if you need a mass-independent renormalization scheme, or a momentum-subtraction scheme where some mass in your physics defines the scale. The Wilsonian point of view comes in via the various renormalization-group equations.
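A minimal textbook instance of such conditions (massive ##\phi^4## theory with subtraction at zero external momentum; an illustration, not a quote): the divergent proper vertex functions are the self-energy ##\Sigma## and the four-point function ##\Gamma^{(4)}##, and BPHZ subtracts their Taylor expansion at the renormalization point,
$$ \Sigma_{\rm ren}(p^2) = \Sigma(p^2) - \Sigma(0) - p^2\,\Sigma'(0), \qquad \Gamma^{(4)}_{\rm ren}(s,t,u) = \Gamma^{(4)}(s,t,u) - \Gamma^{(4)}(0,0,0) , $$
equivalent to the renormalization conditions ##\Sigma_{\rm ren}(0) = 0##, ##\Sigma_{\rm ren}'(0) = 0##, ##\Gamma^{(4)}_{\rm ren}(0,0,0) = -\lambda##. A space-like subtraction point gives the mass-independent variant mentioned above.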

You can also introduce a "cutoff function", implementing the Wilsonian point of view directly. That technique is now well established in connection with functional RG methods (the Wetterich equation), which are used, e.g., to understand the QCD phase diagram (usually employing effective models like various chiral quark-meson models, with or without Polyakov loops) with some success.
 
  • #135
vanhees71 said:
For me the most intuitive scheme is BPHZ
The problem here is that one loses manifest gauge invariance, and hence has the problem of Gribov copies. Or is there a way around that?
 
  • #136
You have the problem of Gribov copies independently of the chosen regularization scheme. Also, I don't understand where you see a problem with gauge invariance: the WTIs ensure that your counterterms are in accordance with gauge invariance. This was the great step forward in 1971, when 't Hooft published the results of his PhD thesis (under the supervision of Veltman).
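The simplest instance, for reference: the QED Ward-Takahashi identity
$$ q_\mu\,\Gamma^\mu(p+q,\,p) = S^{-1}(p+q) - S^{-1}(p) $$
ties the vertex function to the electron propagator, and in particular forces the renormalization constants to obey ##Z_1 = Z_2##, so counterterms consistent with the WTIs cannot spoil gauge invariance.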
 
  • #137
vanhees71 said:
True, but you don't need dim. reg. to define renormalization schemes, and it's indeed pretty unintuitive from a physics point of view.
Yes, I used this as an example to illustrate that it is useful to distinguish things which allow fast, efficient, accurate computations from things which improve conceptual understanding or allow one to prove the consistency of the theory.

vanhees71 said:
You have the problem of Gribov copies independent of the chosen regularization scheme.
The question is whether Gribov copies are a problem or not.

If you think that gauge-equivalent gauge fields are really identical states, and your gauge-fixing condition is purely technical, then Gribov copies are clearly a problem: they mean that the same state is counted several times.
If you, instead, consider gauge-equivalent fields as physically different states (even if you have no way to distinguish them by observation), and the gauge condition as a physical equation for these degrees of freedom (so that you use a nice-looking equation like the Lorenz condition), then there is no problem at all with Gribov copies.
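For reference (standard statement, not part of the quoted argument): a Gribov copy is a second, gauge-equivalent configuration satisfying the same gauge condition. Infinitesimally, ##A'_\mu = A_\mu + D_\mu\theta## with
$$ \partial^\mu A'_\mu = \partial^\mu A_\mu = 0 \;\Longleftrightarrow\; \partial^\mu D_\mu \theta = 0 , $$
so copies of a Lorenz-gauge field exist exactly when the Faddeev-Popov operator ##-\partial^\mu D_\mu## has a normalizable zero mode; the condition then no longer selects a unique representative from each gauge orbit.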

Demystifier said:
Is such an approach compatible with zero mass of the photon?
Note that the EM field is a vector field, not chiral, so implementing an exact gauge symmetry on the lattice is not a problem.

The only question would be whether this is compatible with the approach to fermion doubling, where I have an electroweak doublet instead of a single Dirac fermion. In http://arxiv.org/abs/0908.0591, where the idea comes from, I have an exact gauge invariance for ##U(3)##, but with the condition that all parts of an electroweak doublet have the same charge. While the group is fine (the ##SU(3)## plus ##U(1)_{\rm em}## part is in reality ##U(3)## too), its representation is not. But the EM field can be understood as a deformation of the diagonal ##U(1)## symmetry. And for such a deformation the exact gauge symmetry remains, even if somehow deformed, so one can hope that the deformed symmetry remains an exact symmetry. Then the field would remain massless.

But this is, of course, a point which needs better understanding.
 
Likes: Demystifier
  • #138
Ilja said:
[…] In http://arxiv.org/abs/0908.0591, where the idea comes from, I have an exact gauge invariance for ##U(3)##, but with the condition that all parts of an electroweak doublet have the same charge. […] one can hope that the deformed symmetry remains an exact symmetry. Then the field would remain massless.

But this is, of course, a point which needs better understanding.

Not going to pretend I could follow it, but is the theory/proposal in the referenced paper significantly impacted by the discovery of the Higgs boson? A paragraph near the end seemed to suggest there might not be a Higgs sector. But it also seemed to suggest that mass due to symmetry breaking is emergent (sorry if I am misrepresenting that significantly), not necessarily that the Higgs stuff is wrong as a practical theory.
 
Likes: atyy
  • #139
Jimster41 said:
Not going to pretend I could follow it, but is the theory/proposal in the referenced paper significantly impacted by the discovery of the Higgs boson? A paragraph near the end seemed to suggest there might not be a Higgs sector. But it also seemed to suggest that mass due to symmetry breaking is emergent (sorry if I am misrepresenting that significantly), not necessarily that the Higgs stuff is wrong as a practical theory.
The Higgs sector is simply not considered in the model yet.

But let's note that there are candidates for the role of the Higgs. Not least, some degrees of freedom of the Higgs field are simply transformed, by the symmetry breaking, into degrees of freedom of the massive gauge bosons. These degrees of freedom exist in my model from the start, with no need to obtain them in such a subtle way, and they exist for the gauge-symmetric gauge fields too. And for the ##U(1)## gauge fields (the EM field, and the two additional ##U(1)## fields which are supposed to be suppressed because of vacuum neutrality and the anomaly) they are simply scalar fields. Then, for each electroweak doublet, we have a massive scalar field. So there are a lot of scalar fields. How closely they resemble the Higgs, or the scalar particle which has been observed, would be a question to be studied.
 
Likes: atyy and Jimster41
  • #140
Ilja said:
The Higgs sector is simply not considered in the model yet. […]

I am trying to follow (heuristically) your second, more general paper on GLET. For what it's worth, the exercise has helped me imagine what Smolin means when he talks about "Pure Relationism" in his recent book. As you may know, he's all over absolute time in that book, and he talks about Shape Dynamics and Causal Sets as relevant theories. Do you see them as such?

Heuristically, to me at least, your Ether seems like a Causal Set or LQG tetrahedral "foam" (or whatever quantization machine) indexed by absolute time, so that each chunk is unique and in some sense "located". I certainly hope I'm not missing the point entirely. Smolin's got this thing about similarity-distance that seems really appealing in this context.
 
  • #141
Jimster41 said:
I am trying to follow (heuristically) your second, more general paper on GLET. For what it's worth, the exercise has helped me imagine what Smolin means when he talks about "Pure Relationism" in his recent book. As you may know, he's all over absolute time in that book, and he talks about Shape Dynamics and Causal Sets as relevant theories. Do you see them as such?
No. I have an argument for why I think that a purely relational, diff-invariant theory cannot be quantized: http://arxiv.org/abs/0909.1408. It is essentially that without a background one cannot recover the Newtonian limit (in the sense of Newtonian quantum gravity). For a simple experiment, easy to describe in Newtonian quantum gravity, one cannot compute a reasonable prediction in the GR case, because one would need some information about what it means for a particle to be at the same position in different GR solutions.
Jimster41 said:
Heuristically, to me at least, your Ether seems like a Causal Set or LQG tetrahedral "foam" (or whatever quantization machine) indexed by absolute time, so that each chunk is unique and in some sense "located". I certainly hope I'm not missing the point entirely. Smolin's got this thing about similarity-distance that seems really appealing in this context.
Indexed not only by absolute time but also by absolute space. And therefore in no way purely relational, but instead explicitly rejecting relationalism.
 
Likes: Jimster41
  • #142
Thread closed for moderation.

Edit: This thread has run its course and will remain closed.
 
Likes: vanhees71 and dextercioby
