How to derive Born's rule for arbitrary observables from Bohmian mechanics?

  • #51
A. Neumaier said:
But you assume that an eigenstate always changes into another (or the same) eigenstate, which is essentially as severe a restriction.
No, I don't assume that.

A. Neumaier said:
Probably you cannot even give an example of a Hamiltonian where your formula results but k' does not equal k!
Of course I can. For example, when ##|k\rangle## is a photon momentum eigenstate, then ##|k'\rangle=|0\rangle##, the photon vacuum. That's because the measurement of a photon destroys the photon.
 
  • Like
Likes vanhees71
  • #52
Demystifier said:
I have tried to explain the justification to you in several ways, but you were not satisfied.
You only justified it by (i) calling it standard in measurement theory, (ii) referring to other papers which did not justify it either, and (iii) by a request that I should provide corrected right-hand sides for your equations. Of course, none of these is a satisfying justification.
Demystifier said:
You objected that it doesn't work for angular momentum, but I have not understood your objection. Can you try to rephrase your argument, why doesn't it work for angular momentum?
That it doesn't work for angular momentum was just a guess. I found a more detailed discussion of the angular momentum case in London and Bauer (also reprinted in Wheeler and Zurek, whose page numbers I am using). But precisely at the point where I'd have needed details the argument given there is incomplete: On p.251 they claim your equation (9) with ##v_k(y)## being eigenfunctions of the pointer, and refer to Section 12 for the argument. But there ##v_k(y)## denotes instead an arbitrary time-dependent function of ##y## (defined in the footnote * on p.254) satisfying (4). Thus there is a gap in their discussion of the measurement of angular momentum.

After having checked the literature myself, my most important objection is that the derivation of the formula (3) or (9) you assume is based on the far too strong assumption of nondemolition.
 
  • #53
In the SGE the "pointer" is the particle's position, right? If you accept this, it's the simplest example of a measurement completely describable by quantum dynamics (sic!), i.e., the motion of a neutral particle with a magnetic moment through an inhomogeneous magnetic field!
 
  • #54
Demystifier said:
No, I don't assume that.
So the ##|k'\rangle## are not the eigenstates of ##K## for the eigenvalue ##k'##? This is not what the notation would have suggested.

But you still assume without argument that separable states remain separable, while the evolution of a separable state by a general interacting Hamiltonian destroys separability. Thus you need a justification for this separability. The references you gave give none.
 
  • Like
Likes vanhees71
  • #55
A. Neumaier said:
my most important objection is that the derivation of the formula (3) or (9) you assume is based on the far too strong assumption of nondemolition.
Fine, but I explained how my (minor) generalization avoids this assumption without significantly affecting the further analysis. Again, if you think that dropping nondemolition could lead to a more radical change of the right-hand side of (3), then you should be able to write down how you imagine it might look.
 
  • #56
vanhees71 said:
Does anybody have a reference to a Stern-Gerlach measurement, where not the magnetic moment of a spin-1/2 angular momentum has been measured but some higher angular-momentum state, like an atomic SGE with atoms of larger total ##\vec{j}##?
London and Bauer (see post #52) discuss an atom with arbitrary spin, though (as mentioned) the discussion contains a gap.
 
  • #57
A. Neumaier said:
So the ##|k'\rangle## are not the eigenstates of ##K## for the eigenvalue ##k'##? This is not what the notation would have suggested.

But you still assume without argument that separable states remain separable, while the evolution of a separable state by a general interacting Hamiltonian destroys separability. Thus you need a justification for this separability. The references you gave give none.
Indeed for a measurement you need precisely the opposite: A good measurement entangles the measured observable with the pointer state of the apparatus!
 
  • #58
A. Neumaier said:
So the ##|k'\rangle## are not the eigenstates of ##K## for the eigenvalue ##k'##? This is not what the notation would have suggested.
You are right about that; I should have chosen a different notation or made an additional comment in the paper. But the paper has now been accepted for publication in that form, so I will not change it.

A. Neumaier said:
But you still assume without argument that separable states remain separable, while the evolution of a separable state by a general interacting Hamiltonian destroys separability. Thus you need a justification for this separability. The references you gave give none.
By separable, do you actually mean factorizable? If so, then you are of course right that a general Hamiltonian destroys it. But we are not studying a general Hamiltonian. We are studying a special Hamiltonian, chosen such that it serves as a measurement of K. A Hamiltonian that radically destroys that property could not be interpreted as a measurement of K.
 
  • #59
Demystifier said:
Fine, but I explained how my (minor) generalization avoids this assumption without significantly affecting the further analysis.
You assume a generalization of a conclusion of an analysis that holds only under the assumption of nondemolition. This does not imply that your generalization holds without the assumption of nondemolition, because this is not the way logic works.

To prove your formula you need to repeat the standard argument without the nondemolition assumption and show that your generalized formula still follows.
Demystifier said:
if you think that dropping nondemolition could lead to a more radical change of the right-hand side of (3), then you should be able to write down how you imagine it might look.
Without assuming nondemolition, (3) would look like
$$|k\rangle|A_0\rangle \to \sum_{k'} |k'\rangle|A_{kk'}\rangle,$$
summed over a complete set of basis vectors ##|k'\rangle## (which might or might not be the original basis).
 
  • #60
A. Neumaier said:
London and Bauer (see post #52) discuss an atom with arbitrary spin, though (as mentioned) the discussion contains a gap.
I am looking for an experimental (!) paper!
 
  • #61
A. Neumaier said:
Without assuming nondemolition, (3) would look like
$$|k\rangle|A_0\rangle \to \sum_{k'} |k'\rangle|A_{kk'}\rangle,$$
summed over a complete set of basis vectors ##|k'\rangle## (which might or might not be the original basis).
Consider ##|A_{kq}\rangle## and ##|A_{kp}\rangle## for ##q\neq p##. Are ##|A_{kq}\rangle## and ##|A_{kp}\rangle## macroscopically distinguishable? My further analysis will depend on your answer.
 
  • #62
Demystifier said:
By separable, do you actually mean factorizable?
Yes; for a state, separable and factorizable are synonymous.
Demystifier said:
But we are not studying a general Hamiltonian. We are studying a special Hamiltonian, chosen such that it serves as a measurement of K. A Hamiltonian that radically destroys that property could not be interpreted as measurement of K.
Yes, but it must be shown that there are reasonable Hamiltonians that represent (i) a sensible system dynamics in isolation, (ii) a sensible detector dynamics in isolation, and (iii) sensible (physically realizable) interaction terms mimicking the measurement setting for an arbitrary system observable.
And it must be shown that these Hamiltonians actually produce the form (3) or (9) you want to assume for the subsequent analysis.
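For orientation, the standard example of such a Hamiltonian in the nondemolition case is the von Neumann pointer model; the following sketch is textbook material, not taken from the paper under discussion. Take
$$H = H_S + H_D + g(t)\,K\otimes P,$$
where ##P## is the momentum conjugate to the pointer position ##x## and ##g(t)## is switched on only during the measurement. If ##H_S## and ##H_D## may be neglected during the interaction (precisely the contested assumption), the pointer is rigidly displaced by an amount proportional to the eigenvalue,
$$e^{-i\lambda K\otimes P}\,|k\rangle|A_0\rangle = |k\rangle|A_k\rangle,
\qquad A_k(x)=A_0(x-\lambda k),$$
with ##\lambda=\int dt\, g(t)##. This produces exactly the nondemolition form of (3); a demolition measurement requires a different interaction term, and then the resulting right-hand side must be derived rather than assumed.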

That this analysis is necessary but lacking in your paper is visible from the fact that you get the fully decohered result (15) by a very simple and general argument, while papers on decoherence must work quite hard to prove decoherence even for various simple idealized measurement settings.

Thus arriving at decoherence is quite nontrivial - it is indeed the only real difficulty of decoherence theory. Not that it cannot be done in particular settings, but it is done with sophisticated machinery, not with the tools of the 1930s that you employ. I haven't seen a decoherence analysis for measuring arbitrary system operators. Should you know one, it might solve the problem, and I'd be very interested in a reference.
 
Last edited:
  • Like
Likes Demystifier
  • #63
vanhees71 said:
In the SGE the "pointer" is the particle's position, right? If you accept this, it's the simplest example of a measurement completely describable by quantum dynamics (sic!), i.e., the motion of a neutral particle with a magnetic moment through an inhomogeneous magnetic field!
It is the recorded particle position (i.e., with irreversible amplification of the result). Otherwise the whole experiment is unitary and one is stuck with superpositions and no measurement.
 
  • #64
Demystifier said:
Consider ##|A_{kq}\rangle## and ##|A_{kp}\rangle## for ##q\neq p##. Are ##|A_{kq}\rangle## and ##|A_{kp}\rangle## macroscopically distinguishable? My further analysis will depend on your answer.
I don't know. I just know what to expect in general, and that it is nontrivial to prove that something more specific happens.

Thus deciding this is up to you, by analysing a specific Hamiltonian that models the desired measuring process. It would be part of the required justification.
 
  • #65
vanhees71 said:
I am looking for an experimental (!) paper!
Ah, this was not clear from your query. I don't know about Stern-Gerlach, perhaps
https://arxiv.org/abs/cond-mat/0401526?

But you should look at
  • Franke-Arnold, Allen & Padgett, Advances in optical angular momentum. Laser & Photonics Reviews, 2 (2008), 299-313.
The optical version produces nice pictures; I have seen live demonstrations!
 
  • #66
vanhees71 said:
I am looking for an experimental (!) paper!
A. Neumaier said:
I don't know about Stern-Gerlach, perhaps
https://arxiv.org/abs/cond-mat/0401526?
Actually, Stern-Gerlach experiments for higher spin started with
  • Breit & Rabi, Measurement of nuclear spin, Phys. Review 38 (1931), 2082.
In 1944, Rabi received the Nobel prize for related work. I also looked at another paper, which contains results for the fine structure of spin-1/2 nuclei; the 2 levels of the textbook treatment split due to relativistic corrections.
 
  • Like
Likes dextercioby and vanhees71
  • #67
Great! Thanks!
 
  • #68
Demystifier said:
We are studying a special Hamiltonian, chosen such that it serves as a measurement of K.
A. Neumaier said:
I haven't seen a decoherence analysis for measuring arbitrary system operators. Should you know one, it might solve the problem, and I'd be very interested in a reference.
Specifically, what is lacking is a valid argument that shows that, for some Hamiltonian with a sensible physical interpretation (i)-(iii) as stated in post #62, the pointer states have decohered, i.e., (15) is approximately true. If this were shown it would indeed follow by your argument that the Bohmian pointer positions reproduce Born's rule for the measurement of ##K##.
 
  • Like
Likes Demystifier
  • #69
A. Neumaier said:
Yes; for a state, separable and factorizable are synonymous.

Yes, but it must be shown that there are reasonable Hamiltonians that represent (i) a sensible system dynamics in isolation, (ii) a sensible detector dynamics in isolation, and (iii) sensible (physically realizable) interaction terms mimicking the measurement setting for an arbitrary system observable.
And it must be shown that these Hamiltonians actually produce the form (3) or (9) you want to assume for the subsequent analysis.

That this analysis is necessary but lacking in your paper is visible from the fact that you get the fully decohered result (15) by a very simple and general argument, while papers on decoherence must work quite hard to prove decoherence even for various simple idealized measurement settings.

Thus arriving at decoherence is quite nontrivial - it is indeed the only real difficulty of decoherence theory. Not that it cannot be done in particular settings, but it is done with sophisticated machinery, not with the tools of the 1930s that you employ. I haven't seen a decoherence analysis for measuring arbitrary system operators. Should you know one, it might solve the problem, and I'd be very interested in a reference.
OK, now we more or less agree. You are right that actually proving the existence of appropriate decoherence is nontrivial. I have assumed it, not proved it. But I hope you will agree that, from what is already known about decoherence (analytically solved toy models and numerically solved more complicated models), the existence of appropriate decoherence is a rather plausible and reasonable assumption. If a more rigorous analysis turned out to show that appropriate decoherence does not exist, it would be very surprising. So, strictly speaking, I have not rigorously proved my claim, but I have given a very plausible argument for it.
 
  • #70
A. Neumaier said:
I don't know. I just know what to expect in general, and that it is nontrivial to prove that something more specific happens.

Thus deciding this is up to you, by analysing a specific Hamiltonian that models the desired measuring process. It would be part of the required justification.
Well, I am sure that they are not macro distinct for some Hamiltonians but macro distinct for others. I have no intention of explicitly studying the evolution under such Hamiltonians, but I can easily explain the physical consequences of each case.

First let me discuss the aspects which are common to both cases. Instead of my Eq. (3), more generally we have
$$|k\rangle|A_0\rangle \rightarrow \sum_q a_q |q\rangle |A_{kq}\rangle$$
where, due to unitarity,
$$\sum_q |a_q|^2=1$$
Hence, due to linearity, we have
$$\sum_k c_k |k\rangle|A_0\rangle \rightarrow \sum_k c_k \sum_q a_q |q\rangle |A_{kq}\rangle \equiv |\Psi\rangle$$

Now consider the case in which ##|A_{kq}\rangle## with the same ##k## but different ##q## are macro distinct. This means that one value of ##k## may result in more than one measurement outcome, so in this case the interaction cannot be interpreted as a measurement of ##K##. Nevertheless, it is still some kind of measurement (because we do have some distinguishable measurement outcomes). In fact, it is a generalized measurement discussed in Sec. 3.3 of my paper. To see this, let us introduce the notation
$$(k,q)\equiv l, \;\;\; c_ka_q \equiv \tilde{c}_l, \;\;\; |q\rangle\equiv|R_l\rangle, \;\;\; |A_{kq}\rangle\equiv|A_l\rangle$$
With this notation, the ##|\Psi\rangle## above can be written as
$$|\Psi\rangle = \sum_l \tilde{c}_l |R_l\rangle |A_l\rangle$$
which is nothing but Eq. (17) in my paper.

Now consider the case in which ##|A_{kq}\rangle## with the same ##k## but different ##q## are not macro distinct. We write ##|\Psi\rangle## as
$$|\Psi\rangle = \sum_k c_k |\Psi_k\rangle $$
where
$$|\Psi_k\rangle \equiv \sum_q a_q |q\rangle |A_{kq}\rangle$$
In the multi-position representation we have
$$\Psi(\vec{x},\vec{y})=\sum_k c_k\Psi_k(\vec{x},\vec{y})$$
where
$$\Psi_k(\vec{x},\vec{y})=\sum_q a_q \psi_q(\vec{y}) A_{kq}(\vec{x})$$
Using the Born rule in the multi-position space we have
$$\rho(\vec{x},\vec{y}) =|\Psi(\vec{x},\vec{y})|^2
\simeq \sum_k|c_k|^2 |\Psi_k(\vec{x},\vec{y})|^2$$
In the second equality we have assumed that ##A_{kq}(\vec{x})## are macro distinct for different ##k##, which we must assume if we want to have a system that can be interpreted as a measurement of ##K##. Hence we obtain
$$\rho^{\rm (appar)}(\vec{x})=\int d\vec{y} \rho(\vec{x},\vec{y})
\simeq \sum_k|c_k|^2 \sum_q |a_q|^2 |A_{kq}(\vec{x})|^2$$
where we have used the orthogonality of the ##|q\rangle## basis in the form
$$\int d\vec{y} \psi^*_{q'}(\vec{y}) \psi_q(\vec{y}) =\delta_{q'q}$$
Finally, by denoting with ##\sigma_k## the region in the ##\vec{x}##-space in which all ##A_{kq}(\vec{x})## with the same ##k## are non-negligible, we have the probability
$$p_k^{\rm (appar)}=\int_{\sigma_k} d\vec{x}\, \rho^{\rm (appar)}(\vec{x})
\simeq|c_k|^2 \sum_q |a_q|^2 \int_{\sigma_k} d\vec{x}\, |A_{kq}(\vec{x})|^2
\simeq|c_k|^2 \sum_q |a_q|^2 = |c_k|^2$$
which is the derivation of the Born rule in the ##k##-space from the Born rule in the multi-position space. This derivation is nothing but a straightforward generalization of the derivation in Sec. 3.2 of my paper. The point is that the derivation works even when my (3) is replaced by a more general relation as you suggested.
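The bookkeeping above can be checked numerically in a toy finite-dimensional model (my own construction, not from the paper): pick random normalized ##c_k## and ##a_q##, orthonormal ##\psi_q## on a discrete ##\vec{y}##-grid, and pointer functions ##A_{kq}## supported on exactly disjoint regions ##\sigma_k##; the recovered probabilities then equal ##|c_k|^2##.

```python
import numpy as np

rng = np.random.default_rng(0)
K, Q, Y = 3, 4, 200  # outcomes k, hidden index q, points of the y-grid

# Random normalized amplitudes c_k and a_q
c = rng.normal(size=K) + 1j * rng.normal(size=K); c /= np.linalg.norm(c)
a = rng.normal(size=Q) + 1j * rng.normal(size=Q); a /= np.linalg.norm(a)

# Orthonormal psi_q(y): columns of a QR factor, so sum_y psi_q psi_q' = delta
psi = np.linalg.qr(rng.normal(size=(Y, Q)))[0]

# Pointer wavefunctions A_kq(x) supported on disjoint x-blocks sigma_k,
# i.e. "macro distinct" for different k, each normalized within its block
X_per_k = 50
A = np.zeros((K, Q, K * X_per_k), dtype=complex)
for k in range(K):
    for q in range(Q):
        block = rng.normal(size=X_per_k) + 1j * rng.normal(size=X_per_k)
        A[k, q, k * X_per_k:(k + 1) * X_per_k] = block / np.linalg.norm(block)

# Psi(x, y) = sum_k c_k sum_q a_q psi_q(y) A_kq(x)
Psi = np.einsum('k,q,yq,kqx->xy', c, a, psi, A)

rho_x = (np.abs(Psi) ** 2).sum(axis=1)  # integrate |Psi|^2 over y
p = [rho_x[k * X_per_k:(k + 1) * X_per_k].sum() for k in range(K)]

assert np.allclose(p, np.abs(c) ** 2, atol=1e-10)  # Born rule in k-space
```

Because the supports are exactly disjoint here, the approximate equalities of the derivation become exact up to rounding.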
 
  • Like
Likes Auto-Didact
  • #71
A. Neumaier said:
Thanks. But p. 180 has no proof at all, and the outline at the top of the next page makes the (in general unwarranted, see post #42 above) assumption that one can neglect both the system Hamiltonian and the detector Hamiltonian and that one may therefore only consider the interaction term.
I don't understand why this would be a problem. It is, as mentioned, a physics-level proof; many things which a mathematician would have to specify and prove are simply ignored as trivialities. Here, the critical part is clearly the interaction term, not the Hamiltonians of the parts.
A. Neumaier said:
Well, we are reaching experimentally smaller and smaller distances. Thus preferred frame effects should at some point become observable. When depends on the actual model you propose for an effective QED. None exists in the literature, there are only toy theories that significantly deviate from QED even in the large-distance limit.
At some point. This point may be far away. To propose models beyond effective field theory (QED is itself an effective theory and, as such, does not need a model of itself) which would give QED in the large-distance limit smells like ether theory and is thus essentially a no-go today. So, even if such models existed in the literature, they would be ignored and possibly could not even be discussed here.
 
  • #72
Demystifier said:
OK, now we more or less agree. You are right that actually proving the existence of appropriate decoherence is nontrivial. I have assumed it, not proved it. But I hope you will agree that, from what is already known about decoherence (analytically solved toy models and numerically solved more complicated models), the existence of appropriate decoherence is a rather plausible and reasonable assumption. If a more rigorous analysis turned out to show that appropriate decoherence does not exist, it would be very surprising. So, strictly speaking, I have not rigorously proved my claim, but I have given a very plausible argument for it.
Well, in (the arXiv version of) your paper you don't mention the assumption of the existence of appropriate spatial decoherence; instead you give a shallow argument "proving" decoherence (15) without stating the required nondemolition assumption.

I don't know whether the assumption of existence of appropriate spatial decoherence is plausible for the measurement of an arbitrary observable. It is plausible for some observables, but needs an argument in the general case.

Elias1960 said:
I don't understand why this would be a problem. It is, as mentioned, a physics-level proof; many things which a mathematician would have to specify and prove are simply ignored as trivialities. Here, the critical part is clearly the interaction term, not the Hamiltonians of the parts.
Well, that the neglect of the unperturbed part is a serious assumption - even on the level of physics - is pointed out in the measurement paper by Wigner quoted earlier; his analysis is valid only for nondemolition measurements (which are rare).
 
Last edited:
  • #73
Elias1960 said:
At some point. This point may be far away. To propose models beyond effective field theory (QED is itself an effective theory and, as such, does not need a model of itself) which would give QED in the large-distance limit smells like ether theory and is thus essentially a no-go today.
It is unreasonable to argue with questions of principle yourself while criticizing the use of questions of principle in my arguments.

The point where there is an effective QED tractable by Bohmian mechanics may also be far away. Lattice theories don't do it at present.
 
  • #74
A. Neumaier said:
It is unreasonable to argue with questions of principle yourself while criticizing the use of questions of principle in my arguments.
I don't get the point. My point, that one cannot use the requirement of fundamentality of some particular symmetry as a decisive argument, given that most symmetries in physics are only approximate, is indeed a question of principle. But your remark that "preferred frame effects should at some point become observable" is nothing but a rather optimistic side remark, which is essentially irrelevant, given that one cannot use it as an argument that alternatives in which such symmetries are merely approximate should not be studied.
A. Neumaier said:
The point where there is an effective QED tractable by Bohmian mechanics may also be far away. Lattice theories don't do it at present.
Why would a lattice theory not do it? Do you have in mind problems constructing appropriate lattice models, like putting chiral gauge fields as exact gauge symmetries on the lattice, or fermion doubling? For my part, I see none. The lattice theory on a large cube is finite-dimensional; that already removes the main technical problem, and what remains is standard BM theory.

For chiral gauge theories, there is no need for exact gauge symmetry; they are massive anyway. With staggered fermions, combined with the point that time can be left continuous (which reduces the problem by yet another factor of 2), we end up with two Dirac fermions, which is all that is necessary for the SM, where they appear only in electroweak doublets.
 
Last edited:
  • #75
Elias1960 said:
Why would a lattice theory not do it?
Lattice QED suffers from the triviality problem. In the continuum limit, it does not converge to covariant QED but to a free theory. Thus there is no cutoff that would result in a good approximation to QED. One would need an approximation accurate to 12 decimal digits...
 
  • #76
A. Neumaier said:
Lattice QED suffers from the triviality problem. In the continuum limit, it does not converge to covariant QED but to a free theory. Thus there is no cutoff that would result in a good approximation to QED. One would need an approximation accurate to 12 decimal digits...
According to what Wikipedia writes about the triviality problem, it is a problem of ##\Lambda \to \infty##, thus not of effective field theory.

If you cannot get 12 decimal digits with lattice theories on modern computers, so what? I do not object if those who do the real computations use even conceptually completely meaningless things like dimensional regularization. I'm interested in lattice theory for conceptual reasons, namely that it is a conceptually meaningful theory in itself and, at least in principle, also a candidate for a theory beyond QFT.
 
  • Like
Likes Tendex and Demystifier
  • #77
Elias1960 said:
According to what Wikipedia writes about the triviality problem, it is a problem of ##\Lambda \to \infty##, thus not of effective field theory.
You wanted to replace QED by a version with finite cutoff, hence it applies. It also applies to lattice approximations, since a lattice approximation corresponds to ##\Lambda=s^{-1}##, where ##s## is the lattice spacing. (It does not apply to lattice QCD since QCD is asymptotically free and hence has no triviality problem.)
Elias1960 said:
If you cannot get 12 decimal digits with lattice theories on modern computers, so what? I do not object if those who do the real computations use even conceptually completely meaningless things like dimensional regularization. I'm interested in lattice theory for conceptual reasons, namely that it is a conceptually meaningful theory in itself and, at least in principle, also a candidate for a theory beyond QFT.
It is not a matter of today's computers. No lattice approximation, even when solved with exact arithmetic and arbitrary lattice size, will be close to QED since for large lattice spacing the error is huge and for (not very) tiny lattice spacing triviality sets in. This happens already at currently feasible lattice sizes!

Thus one cannot use lattice approximations to make arguments of principle.
 
  • #78
A. Neumaier said:
You wanted to replace QED by a version with finite cutoff, hence it applies. It also applies to lattice approximations, since a lattice approximation corresponds to ##\Lambda=s^{-1}##, where ##s## is the lattice spacing.
The Wiki version describes it quite clearly as a problem which does not exist for finite cutoffs. They give the formula
$$ g_{obs}={\frac {g_{0}}{1+\beta _{2}g_{0}\ln \Lambda /m}}$$
This gives the problematic zero for ##g_{obs}## only in the limit ##\Lambda\to\infty##. For finite ##\Lambda## everything is fine, except when ##\Lambda## reaches the Landau pole, where ##g_0## diverges:
$$ g_{0}={\frac {g_{obs}}{1-\beta _{2}g_{obs}\ln \Lambda /m}}$$
The Landau pole is beyond anything imaginable, ##10^{286}\,{\rm eV}##, and we are interested here only in ##\Lambda## of at most the Planck scale, ##10^{28}\,{\rm eV}##.
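Those orders of magnitude are easy to check; here is a quick sketch using the standard one-loop running of the fine-structure constant for a single fermion species (the normalization here is mine, not Wikipedia's ##\beta_2##):

```python
import math

# One-loop QED: alpha(Lambda) = alpha(m) / (1 - (2/(3*pi)) * alpha(m) * ln(Lambda/m)),
# so the bare coupling blows up at the Landau pole ln(Lambda/m) = 3*pi/(2*alpha).
alpha = 1.0 / 137.036          # fine-structure constant at the electron scale
m_e_eV = 0.511e6               # electron mass in eV

log_pole = 3.0 * math.pi / (2.0 * alpha)             # ln(Lambda/m_e) ~ 646
log10_pole_eV = math.log10(m_e_eV) + log_pole / math.log(10.0)
print(f"Landau pole at ~10^{log10_pole_eV:.0f} eV")  # ~10^286 eV
```

This reproduces the quoted ##10^{286}\,{\rm eV}##, hundreds of orders of magnitude above the Planck scale ##10^{28}\,{\rm eV}##.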

Maybe you mean something like what Wiki describes in the following words:
Lattice gauge theory provides a means to address questions in quantum field theory beyond the realm of perturbation theory, and thus has been used to attempt to resolve this question.
Numerical computations performed in this framework seem to confirm Landau's conclusion that the QED charge is completely screened for an infinite cutoff.
But even here the problem is claimed to be one of the infinite cutoff. I have taken a look at arXiv:hep-th/9712244, cited as support there, and it suggests something different from
A. Neumaier said:
No lattice approximation, even when solved with exact arithmetic and arbitrary lattice size, will be close to QED since for large lattice spacing the error is huge and for (not very) tiny lattice spacing triviality sets in. This happens already at currently feasible lattice sizes!
First, the very point of lattice theory combined with renormalization is that one can compute the renormalized parameters from the original ones on quite small lattices. They use a ##16^4## lattice. Once one can iterate this from anywhere down to the large-distance limit, the limits on ##\Lambda## in this method have nothing to do with the limits on ##\Lambda## in lattice computations of, say, scattering coefficients.

Then, again, the point is simply that QED cannot be used down to arbitrary distances, because the limit fails. As an effective field theory, it is fine. For finite cutoffs below the Landau-pole region, a finite ##g_{0}## will give a nonzero ##g_{obs}##. The result of this paper, instead, removes the problem which could be created by a too-small Landau pole. If there were, say, a Landau pole below the Planck length, then the lattice theory at the Planck length could possibly fail to give the originally intended large-distance limit. But, according to the paper, there is no such danger at all for QED. And even if the SM were not similar, its Landau pole is, according to the paper, at ##10^{34}\,{\rm GeV}##, still much greater than the Planck scale ##10^{19}\,{\rm GeV}##.
A. Neumaier said:
Thus one cannot use lattice approximations to make arguments of principle.
One certainly can make such arguments: for example, that such a lattice theory is a well-defined theory, and that there is no problem in defining a Bohmian version of it.
 
  • #79
A. Neumaier said:
Lattice QED suffers from the triviality problem. In the continuum limit, it does not converge to covariant QED but to a free theory. Thus there is no cutoff that would result in a good approximation to QED. One would need an approximation accurate to 12 decimal digits...
I think it is a problem of the continuum (covariant) QED, not a problem of lattice QED. QED in the continuum is trivial, while QED on a lattice is not.
 
  • #80
Elias1960 said:
For finite ##\Lambda## everything is fine, except when ##\Lambda## reaches the Landau pole, where ##g_0## diverges:
$$ g_{0}={\frac {g_{obs}}{1-\beta _{2}g_{obs}\ln \Lambda /m}}$$
But for finite ##\Lambda## one is always far away from the covariant formulas that are used to make the predictions!
Elias1960 said:
I have taken a look into arxiv:hep-th/9712244 cited as support there [...] They use a ##16^4## lattice.
and they get near-triviality already at this crude resolution, not only in the continuum limit! At higher resolution it will be even closer to triviality, not closer to covariant QED!
Elias1960 said:
First, the very point of lattice theory combined with renormalization is that one can compute the renormalized parameters from the original ones on quite small lattices.
Only for asymptotically free theories such as QCD. It cannot be done for QED, hence there is no good lattice approximation for QED. If it could have been done it would have been done already!
Elias1960 said:
Then, again, the point is simply that QED cannot be used down to arbitrary distances, because the limit fails.
The limit only fails for lattice theories and other approximations with a fixed cutoff. This proves that lattice theories cannot approximate QED.

In causal perturbation theory there is no need for a cutoff. The formulas derived there can be used at any reasonable renormalization scale and are covariant from the start.
Demystifier said:
I think it is a problem of the continuum (covariant) QED, not a problem of lattice QED. QED in the continuum is trivial, while QED on a lattice is not.
No. On the contrary:

All experimentally verified predictions have been made with covariant QED, i.e., using the nontrivial, covariant renormalized continuum QED without cutoff (e.g., in causal perturbation theory, https://www.physicsforums.com/insights/causal-perturbation-theory/, expanded in powers of the fine structure constant).

On the other hand, no experimentally verified predictions have ever been made with lattice QED. Indeed, QED on a lattice, no matter how crude or fine its spacing, cannot give correct predictions, since its continuum limit is trivial, and triviality sets in already at lattice spacings that can be tested computationally.

This has been discussed already in other threads, e.g., here and here (and followups).
 
Last edited:
  • Like
Likes mattt
  • #81
Demystifier said:
I think it is a problem of the continuum (covariant) QED, not a problem of lattice QED. QED in the continuum is trivial, while QED on a lattice is not.
Like @A. Neumaier, I'm not sure this argument is very solid, even though it is presented in many texts. We have examples of theories which have Landau poles perturbatively and whose lattice formulation tends to the free theory as the continuum limit is approached, but which are actually perfectly well defined in the continuum limit with non-trivial interactions. An example is the Gross-Neveu model.

The case for the Standard Model is even weaker, since there we have indications that even perturbatively and numerically it's not trivial. See Callaway's well known paper:
https://www.sciencedirect.com/science/article/abs/pii/0370157388900087
 
  • Like
Likes Demystifier and mattt
  • #82
A. Neumaier said:
Thus arriving at decoherence is quite nontrivial - it is indeed the only real difficulty of decoherence theory. Not that it cannot be done in particular settings, but it is done with sophisticated machinery, not with the tools of the 1930s that you employ. I haven't seen a decoherence analysis for measuring arbitrary system operators. Should you know one, it might solve the problem, and I'd be very interested in a reference.
On a related note: Have you seen a decoherence analysis of something like a Bell test?

In the cases which have been studied a lot, we have a system in a superposition state which gets turned into a mixed state by interacting with the environment. What I haven't seen yet is an analysis where the initial superposition state is an entangled state and the interaction happens only between one of the subsystems and its environment.
 
  • #83
A. Neumaier said:
But for finite ##\Lambda## one is always far away from the covariant formulas that are used to make the predictions!
This is a claim based on nothing. You have no basis at all for claims about the size of the error of a lattice computation with, say, the Planck length as the lattice spacing.
A. Neumaier said:
and they get near triviality already at this crude resolution not only in the continuum limit! At higher resolution it will be even closer to triviality, not closer to covariant QED!
It looks like, first, you have not understood what they have computed on this ##16^4## lattice, and, second, you have not understood the problem with triviality.

The ##16^4## lattice has been used to compute the best approximation of the lattice equations on the ##16^4## lattice by equations on the ##8^4## sublattice. This is one step in the computation of the renormalized coefficients of the equations, which is something completely different from computing a physical prediction of QED on some particular lattice. They can use, say, a ##16^4## lattice with Planck-length spacing to compute the coefficients of the renormalized lattice theory on the ##8^4## sublattice with twice the Planck-length spacing. Then, in a next step, they can use a ##16^4## lattice with twice the Planck-length spacing to compute the coefficients of the renormalized lattice theory on the ##8^4## sublattice with four times the Planck-length spacing. And so on. Or they could start this business with ##10^{-100}## Planck lengths as the lattice spacing. This is a completely different type of lattice computation, which has nothing to do with lattice computations of any observable effects in QED.
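To give readers a feel for what a blocking step of this kind does, here is the simplest textbook analogue (the exact decimation RG of the 1D Ising chain; this has of course nothing to do with QED dynamics, it only exhibits the pattern "fine lattice with coupling ##K## to coarser lattice with renormalized coupling ##K'##"):

```python
import math

# Exact decimation RG step for the 1D Ising chain: summing out every
# second spin of a chain with nearest-neighbour coupling K leaves a
# chain with doubled lattice spacing and coupling K' = (1/2) ln cosh(2K).
def decimate(K):
    return 0.5 * math.log(math.cosh(2 * K))

K = 1.5
for step in range(6):
    print(step, round(K, 6))  # effective coupling on successively coarser lattices
    K = decimate(K)
# In 1D the flow runs toward K = 0 (the free theory), as it must.
```

Summing out every second spin can be done exactly here; in a gauge theory the analogous ##16^4 \to 8^4## step can only be done numerically, which is precisely what such computations do.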

And the problem with triviality does not even exist in the lattice theory; it exists only for the limit. All you have to do in the lattice theory is to compute the bare coefficients on the finest lattice by renormalization techniques, so that they give the appropriate observable values in the large-distance limit.
A. Neumaier said:
Only for asymptotically free theories such as QCD. It cannot be done for QED, hence there is no good lattice approximation for QED. If it could have been done it would have been done already!
Here you conflate two completely different problems: the problem of how to define a reasonable, meaningful lattice theory which gives QED in the limit (which is quite trivial), and the problem of finding a lattice theory which is good for making real computations. This second problem may be unsolvable, in the sense that conceptually meaningless methods like dimensional regularization will always give higher accuracy for the same computational effort than a lattice computation.
A. Neumaier said:
The limit only fails for lattice theories and other approximations with a fixed cutoff. This proves that lattice theories cannot approximate QED.
It shows only that QED is not a well-defined theory in the limit. Other methods have likewise failed to show that QED is a well-defined theory in the continuum limit. That means there is nothing mathematically well-defined to approximate. QED itself is meaningful only as an approximation of some more fundamental theory.
A. Neumaier said:
In causal perturbation theory there is no need for a cutoff. The formulas derived there can be used at any reasonable renormalization scale and are covariant from the start.
Your beloved causal method gives only an asymptotic series, which is nothing. Asymptotic series are comparable to computing sums like ##1 - 2 + 3 - 4 + \dots##.
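To make "asymptotic" concrete with a generic toy example (my own illustration, nothing specific to QED or the causal approach): the divergent series ##\sum_n (-1)^n n!\, x^n## is the asymptotic expansion of the perfectly finite Euler integral ##f(x)=\int_0^\infty e^{-t}/(1+xt)\,dt##. Its partial sums first approach the true value, are best at an order of roughly ##1/x##, and then blow up:

```python
import math

# The textbook divergent series sum_n (-1)^n n! x^n is the asymptotic
# expansion of the finite Euler integral
#   f(x) = integral_0^infinity exp(-t) / (1 + x t) dt
x = 0.1

# Reference value by brute-force midpoint integration (the tail beyond
# tmax contributes ~exp(-50), which is negligible)
def f(x, steps=200_000, tmax=50.0):
    h = tmax / steps
    return sum(math.exp(-(i + 0.5) * h) / (1 + x * (i + 0.5) * h) * h
               for i in range(steps))

exact = f(x)

# Partial sums: accuracy improves up to an order ~1/x, then the series diverges
partial, err = 0.0, []
for n in range(25):
    partial += (-1) ** n * math.factorial(n) * x ** n
    err.append(abs(partial - exact))

best = min(range(25), key=lambda n: err[n])
print(best, err[best], err[24])  # optimal truncation order, best error, late error
```

This is believed to be exactly the practical situation in QED perturbation theory: low orders in ##\alpha## are excellent, while the series as a whole diverges.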
A. Neumaier said:
All experimentally verified predictions have been made with covariant QED, i.e., using the nontrivial, covariant renormalized continuum QED without cutoff (e.g., in [URL='https://www.physicsforums.com/insights/causal-perturbation-theory/']causal perturbation theory[/URL], expanded in powers of the fine structure constant).
On the other hand, no experimentally verified predictions have ever been made with lattice QED.
Don't cry, these claims remain irrelevant even boldfaced. Again, I have no problem acknowledging that the most efficient way to compute predictions is to use such an asymptotic series or other ill-defined things like dimensional regularization. It would be an accident if the simple, boring, straightforward lattice theory were also the most efficient way to make computations.
A. Neumaier said:
Indeed, QED on a lattice, no matter how crude or fine its spacing, cannot give correct predictions, since its continuum limit is trivial, and triviality sets in already at lattice spacings that can be tested computationally.
Wrong. The straightforward limit would simply have an infinite interaction constant, which makes no sense. But for every finite ##\Lambda## everything else is fine: there would be some (very large, but so what) bare interaction constant which gives the correct interaction constant in the large-distance limit. Your "triviality sets in" makes no sense. The only fact behind this is that the bare interaction constant increases with ##\Lambda## in an unbounded way. This indeed starts immediately, but it proves nothing.
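For what it's worth, the standard one-loop running of the QED coupling (single charged fermion; a textbook formula used here purely as an illustration, not a substitute for a lattice computation) makes this quantitative: the bare coupling grows with the cutoff ##\Lambda## but is still small at any physically relevant scale, and diverges only at the fantastically remote Landau pole:

```python
import math

# One-loop QED running with a single charged fermion (textbook formula):
#   1/alpha(Lambda) = 1/alpha_0 - (2 / (3 pi)) * ln(Lambda / m_e)
alpha0 = 1 / 137.036       # measured low-energy coupling
m_e = 0.511e-3             # electron mass in GeV

def alpha(Lam):
    return 1 / (1 / alpha0 - (2 / (3 * math.pi)) * math.log(Lam / m_e))

# The bare coupling grows with the cutoff, but at any physically relevant
# scale it is still finite and small:
planck = 1.22e19  # Planck scale in GeV
print(alpha(planck))       # roughly 1/126, barely larger than 1/137

# It only diverges at the Landau pole, at an absurdly remote scale:
landau = m_e * math.exp(1.5 * math.pi / alpha0)
print(landau)              # on the order of 1e277 GeV
```

So at one loop the "unbounded increase" is real but extraordinarily slow; whether the full non-perturbative theory behaves like this is exactly what the triviality debate is about.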
 
  • Like
Likes Tendex
  • #84
Elias1960 said:
Your beloved causal method gives only some asymptotic series, which is nothing
In QM and QFT perturbation theory is always asymptotic, it's a bit extreme to say it is "nothing". @A. Neumaier 's point is that one has no Landau pole in this method. This is similar to the Gross-Neveu model where Landau poles show up in one method of perturbation theory and not another expansion, such as the ##\frac{1}{N}## expansion.

It shows we can't trust a Landau pole in one perturbative method to be a conclusive argument for triviality.
 
  • #85
DarMM said:
In QM and QFT perturbation theory is always asymptotic, it's a bit extreme to say it is "nothing".
In QFT there is serious doubt that the continuum theory is even well-defined. As a proof that the continuum theory is well-defined, an asymptotic series is nothing.
For finding reasonable empirical predictions, asymptotic series are fine. So it depends on what one wants to reach.
DarMM said:
@A. Neumaier 's point is that one has no Landau pole in this method.
This is also what the paper which used lattice theory computations has claimed to have shown. So if you are right, there is nothing to argue about.
DarMM said:
It shows we can't trust a Landau pole in one perturbative method to be a conclusive argument for triviality.
Agreed here too.

My point is a quite different one, namely that triviality is an issue only if one cares about having a well-defined continuum limit of QFT. In the effective field theory approach one does not care about this, and so the whole problem is nonexistent. For a finite cutoff there will be parameters which give the correct QED at large distances, and the triviality problem appears only because the zero-cutoff limit would give an infinite interaction constant (and if one replaces this by a finite value, the large-distance limit becomes zero).
 
  • Like
Likes Tendex and Demystifier
  • #86
DarMM said:
We have examples of theories ... whose Lattice formulation tends to the free theory as the continuum limit is approached, but which are actually perfectly well defined in the continuum limit with non-trivial interactions. An example is the Gross-Neveu model.
I would like to learn more about this, can you give a reference for the claims above on the lattice and continuum versions of the Gross-Neveu model?
 
  • #87
Demystifier said:
I would like to learn more about this, can you give a reference for the claims above on the lattice and continuum versions of the Gross-Neveu model?
It's not easy stuff to read, just to tell you.

I'd start with Vincent Rivasseau's "From perturbative to constructive Renormalization"
 
  • Like
Likes mattt and Demystifier
  • #88
Elias1960 said:
In QFT, there is serious doubt that the continuous theory is even well-defined
I wouldn't say this. We have several examples of continuum theories in 2D and 3D which are well-defined. Balaban also demonstrated the existence of a continuum limit for Yang-Mills in 4D, so I don't think any serious doubt remains. It's the infinite volume limit that is more difficult.
 
  • Like
Likes mattt, dextercioby, weirdoguy and 1 other person
  • #89
Elias1960 said:
For a finite cutoff, there will be parameters which give the correct QED in the large distance
No. Indeed, this cannot be proved without taking a continuum limit, since only then does the Lorentz invariance characteristic of QED appear. But the continuum limit is obstructed by the Landau pole.
Thus your claim is wishful thinking.
 
  • Like
Likes weirdoguy
  • #90
A. Neumaier said:
No. Indeed, this cannot be proved without taking a continuum limit, since only then does the Lorentz invariance characteristic of QED appear. But the continuum limit is obstructed by the Landau pole.
Thus your claim is wishful thinking.
Sorry, but Lorentz invariance is not something characteristic of QED but a quite general property of wave equations. If the lattice equation gives a wave equation in the large-distance limit, it also has Lorentz invariance.

That the continuum limit does not exist in a meaningful way is clear; with or without a Landau pole we have the triviality problem. Note also that minor distortions of Lorentz covariance are unproblematic as long as they cannot be observed at the distances accessible now.

DarMM said:
I wouldn't say this. We have several examples of continuum theories in 2D and 3D which are well-defined. Balaban also demonstrated the existence of a continuum limit for Yang-Mills in 4D, so I don't think any serious doubt remains. It's the infinite volume limit that is more difficult.
If you think so, your choice. The formulation "demonstrated the existence of" sounds dubious, not like "has constructed an example of". Whatever, if he gets the prize for solving the Millennium problem I will no longer use this claim.

In fact, even if one can somehow define them, it will not be worth much, given that non-renormalizable gravity is anyway only an effective theory.
 
  • Sad
Likes weirdoguy
  • #91
Elias1960 said:
If you think so, your choice. The formulation "demonstrated the existence of" sounds dubious, not like "has constructed an example of". Whatever, if he gets the prize for solving the Millennium problem I will no longer use this claim.
"Demonstrate the existence of" is completely normal language. Do you just disagree with everything?
He has constructed the continuum limit. What hasn't been shown is that the infinite volume limit exists with a mass gap, which is required for the Millennium problem.

You were saying there is serious doubt over the existence of the continuum limit. There isn't, due to completely well defined 2D and 3D theories, as well as existence results for the 4D continuum limit. People working in constructive QFT don't have doubts over continuum QFT existing.

Elias1960 said:
In fact, even if one can somehow define them, it will be not worth much, given that non-renormalizable gravity is anyway only an effective theory.
This is again a non sequitur. We were talking about whether QED and other QFTs have continuum limits. You said there were serious doubts; there aren't. Now what, there's a problem with this because nobody has formulated Quantum Gravity or something?
 
  • Like
Likes mattt, dextercioby and weirdoguy
  • #92
DarMM said:
He has constructed the continuum limit. What hasn't been shown is that the infinite volume limit exists with a mass gap, which is required for the Millenium problem.
I have tried to find the relevant paper and found this:
Balaban, T. (1987). Renormalization Group Approach to Lattice Gauge Field Theories I. Commun. Math. Phys. 109, 249-301
Balaban, T. (1988). Renormalization Group Approach to Lattice Gauge Field Theories II. Commun. Math. Phys. 116, 1-22
Are these the relevant papers?
(It would be funny if the best results about the very existence of continuum theories had been reached by the same lattice methods which Neumaier thinks cannot be applied to QED.)
DarMM said:
You were saying there is serious doubt over the existence of the continuum limit. There isn't, due to completely well defined 2D and 3D theories, as well as existence results for the 4D continuum limit. People working in constructive QFT don't have doubts over continuum QFT existing.
Ok, I will take this into account and formulate my position in the future differently.
DarMM said:
This is again a non sequitur. We were talking about whether QED and other QFTs have continuum limits. You said there were serious doubts; there aren't. Now what, there's a problem with this because nobody has formulated Quantum Gravity or something?
There is no problem with this. I based my statement about the serious doubts on what I have heard about this question from people less optimistic than you. I have not checked it myself, given that for me it was an irrelevant side issue. If the situation is better, fine, I will remember this. But I don't have to change anything else, and the point of my remark about gravity was to explain why it is, in my opinion, only a quite irrelevant side issue.
 
  • #93
DarMM said:
I wouldn't say this. We have several examples of continuum theories in 2D and 3D which are well-defined. Balaban also demonstrated the existence of a continuum limit for Yang-Mills in 4D, so I don't think any serious doubt remains. It's the infinite volume limit that is more difficult.

Is it the case that the 4D limit has been established, but not the 3D limit? In describing Balaban's 3D work, http://www.claymath.org/sites/default/files/yangmills.pdf says that the continuum limit has not been established: "This is an important step toward establishing the existence of the continuum limit on a compactified space-time. These results need to be extended to the study of expectations of gauge-invariant functions of the fields."

That article also seems to indicate that the 4D finite volume problem is open, and it is not just the infinite volume problem that remains: "These steps toward understanding quantum Yang–Mills theory lead to a vision of extending the present methods to establish a complete construction of the Yang–Mills quantum field theory on a compact, four-dimensional space-time. One presumably needs to revisit known results at a deep level, simplify the methods, and extend them."
 
  • #94
Balaban has established that there is a continuum limit of the action, i.e. there is a well defined theory in the continuum. He never established that expectation values of gauge invariant operators are unique, nor did he prove certain analyticity properties for them.

These are usually necessary to solve what is called the finite volume case in constructive field theory, but they're not really the issues the average physicist means when they say the continuum limit. They usually mean there being something well defined and nontrivial in the continuum limit.

Thus Balaban has shown that the continuum limit exists, but has not demonstrated that it has certain uniqueness and analyticity properties for Wilson loops.
 
  • Like
Likes vanhees71
  • #95
DarMM said:
Thus Balaban has shown the continuum limit exists, but not demonstrated it has certain uniqueness and analytic properties for Wilson loops.

Have you heard the story about why he gave up working on Yang Mills? He moved house, and the movers lost the box with his notes on Yang Mills.
 
  • #97
atyy said:
Have you heard the story about why he gave up working on Yang Mills? He moved house, and the movers lost the box with his notes on Yang Mills.
Yes, from yourself years ago! Makes one want to cry! :cry:
 
  • #98
Demystifier said:
$$\rho(\vec{x},\vec{y}) =|\Psi(\vec{x},\vec{y})|^2
\simeq \sum_k|c_k|^2 |\Psi_k(\vec{x},\vec{y})|^2 ~~~~~(1)$$
In the second (approximate) equality we have assumed that the ##A_{kq}(\vec{x})## are macroscopically distinct for different ##k##, which we must assume if we want to have a system that can be interpreted as a measurement of ##K##.
In (1) [equation label added by me] you assume without justification that the ##\Psi_k(\vec{x},\vec{y})## with different ##k## have approximately disjoint support. This is unwarranted without a convincing analysis.
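To make explicit what the disjoint-support assumption amounts to quantitatively, here is a toy one-dimensional pointer model (my own illustration; it assumes Gaussian pointer packets rather than deriving them from any measurement dynamics, so it is not an answer to the objection above): if the ##\Psi_k## are unit-norm packets whose centres are separated by many widths, the cross terms dropped in (1) are exponentially small.

```python
import numpy as np

# Toy 1D pointer variable y: Psi_k are unit-norm Gaussian packets at
# macroscopically distinct centres (separation >> width). This ASSUMES
# the disjoint-support form; it does not derive it from any dynamics.
y = np.linspace(-30, 30, 20001)
dy = y[1] - y[0]
sigma = 1.0
centres = [-15.0, 0.0, 15.0]
c = np.array([0.5, 0.5j, np.sqrt(0.5)])  # arbitrary amplitudes, sum |c_k|^2 = 1

packets = [np.exp(-(y - m) ** 2 / (4 * sigma ** 2)) for m in centres]
packets = [p / np.sqrt(np.sum(np.abs(p) ** 2) * dy) for p in packets]

psi = sum(ck * pk for ck, pk in zip(c, packets))

exact = np.abs(psi) ** 2                             # |Psi|^2
approx = sum(abs(ck) ** 2 * np.abs(pk) ** 2
             for ck, pk in zip(c, packets))          # sum_k |c_k|^2 |Psi_k|^2

# L1 error of dropping the cross terms: exponentially small in (separation/width)^2
l1_err = np.sum(np.abs(exact - approx)) * dy
print(l1_err)
```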
 
  • #99
A. Neumaier said:
In (1) [equation label added by me] you assume without justification that the ##\Psi_k(\vec{x},\vec{y})## with different ##k## have approximately disjoint support. This is unwarranted without a convincing analysis.
So you want to see an explicit calculation based on the theory of decoherence, right? If this is what would satisfy you, I will try to find one in the literature.
 
  • #100
Demystifier said:
So you want to see an explicit calculation based on the theory of decoherence, right? If this is what would satisfy you, I will try to find one in the literature.
Whatever you need to justify it without assuming nondemolition. Wigner's analysis indicates to me that this is impossible.
 
