Demystifier said: Fair enough, but I think in #116 I gave some additional heuristic arguments.

These only relate to the duration of measurements, not to properties of the resulting wave functions that would need to be established.
A. Neumaier said: These only relate to the duration of measurements, not to properties of the resulting wave functions that would need to be established.

Fine, but how about the general argument based on decoherence, essentially saying that wave functions of many-body systems tend to decohere into branches localized in the position space because the interactions (that cause decoherence) are local in the position space? That heuristic argument (that can be supported by some explicit calculations in the literature) helps to explain why macroscopic objects look well localized in space. If you can accept that argument (which, admittedly, is still only heuristic), then the readings of measuring apparatuses are just a special case.
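A standard way to make this heuristic quantitative (a sketch along the lines of the usual collisional decoherence models, e.g. Joos–Zeh, not an argument given in this thread): scattering of environmental particles suppresses the spatial off-diagonal elements of the reduced density matrix roughly as
$$\rho_S(x,x';t)\approx\rho_S(x,x';0)\,e^{-\Lambda t\,(x-x')^2},$$
with a localization rate ##\Lambda## set by the flux and cross section of the environment, so superpositions of well-separated positions decohere fastest and the surviving branches are localized in position.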
Demystifier said: Fine, but how about the general argument based on decoherence, essentially saying that wave functions of many-body systems tend to decohere into branches localized in the position space because the interactions (that cause decoherence) are local in the position space?

But in other cases, there is decoherence into coherent states, which are not local in the position space.
Demystifier said: That heuristic argument (that can be supported by some explicit calculations in the literature) helps to explain why macroscopic objects look well localized in space. If you can accept that argument (which, admittedly, is still only heuristic), then the readings of measuring apparatuses are just a special case. For more details with a quantitative analysis see e.g. https://journals.aps.org/prd/abstract/10.1103/PhysRevD.24.1516

The abstract of this paper says, "Thus the environment can be said to perform a nondemolition measurement of an observable diagonal in the pointer basis", confirming my reading of Wigner's analysis.
By "local" in position space, I mean small ##\sigma_x##, not zero ##\sigma_x##. In that sense coherent states can be local too.A. Neumaier said:But in other cases, there is decoherence into coherent states, which are not local in the position space.
A. Neumaier said: Assuming the results of this paper, the question remaining is whether for any selfadjoint operator ##K## of the measured system a suitable measuring apparatus and a suitable environment (suitable = not contrived) can be found such that
- this operator is diagonal in the resulting pointer basis, and
- each element of this pointer basis is well localized in space.
Maybe this (in my view nontrivial) question has a positive answer; then I am satisfied.

What do you mean by "can be found"? Found in nature or found mathematically on paper? If you mean found in nature, then it is certainly not true, which corresponds to the fact that not every self-adjoint operator can be measured in practice. But then I expect that a weaker claim is true, namely that ... for any selfadjoint operator ##K## that can be measured in practice ... operator is approximately diagonal in the resulting pointer basis ...
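To spell out why the two conditions matter (a schematic version of the standard decoherence argument, not something established in this exchange): if the measured system, apparatus and environment evolve as
$$\Big(\sum_n c_n|k_n\rangle\Big)|A_0\rangle|E_0\rangle \;\longrightarrow\; \sum_n c_n\,|k_n\rangle|A_n\rangle|E_n\rangle,\qquad \langle E_m|E_n\rangle\approx\delta_{mn},$$
then the reduced state of system plus apparatus is approximately diagonal in the pointer basis ##\{|k_n\rangle|A_n\rangle\}##, the ##n##-th pointer reading occurs with probability ##|c_n|^2##, and this reproduces Born's rule for ##K=\sum_n k_n|k_n\rangle\langle k_n|## precisely because ##K## is diagonal in that basis.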
vanhees71 said: ... basic postulates of the theory: Given an ideal measurement ...

What many people dislike about that is that the set of basic postulates involves a postulate on measurements. A fundamental microscopic theory should only have postulates on fundamental microscopic objects as such, not postulates on macroscopic measurements. Instead of being postulated, properties of macroscopic measurements should be derived from basic microscopic postulates.
Demystifier said: What do you mean by "can be found"? Found in nature or found mathematically on paper? If you mean found in nature, then it is certainly not true, which corresponds to the fact that not every self-adjoint operator can be measured in practice.

My question in post #1 was about arbitrary ##K##, and your answer in post #2 was that the proof can be found in many places. Indeed, on p.5 of your paper https://arxiv.org/pdf/1811.11643.pdf (reference 3 in post #2) you didn't have any restriction on the operator ##K## beyond selfadjointness (which is assumed in Born's rule). So you need to qualify your claim there.
Demystifier said: But then I expect that a weaker claim is true, namely that ... for any selfadjoint operator ##K## that can be measured in practice ... operator is approximately diagonal in the resulting pointer basis ...

Maybe. But to show this at least for some class of operators measured in practice that is not trivially nondemolition still requires a nontrivial argument.
vanhees71 said: I'm a bit lost about the claim that the Born rule should hold only for "nondemolition measurements", but then it's trivial, because "nondemolition measurement" basically means that you measure an observable which is determined in the prepared state of the system.

You should reread post #42 and Wigner's treatise in Section II.2 of the reprint collection by Wheeler and Zurek, "Quantum Theory and Measurement", which is the background of my discussion with Demystifier.
A. Neumaier said: But to show this at least for some class of operators measured in practice that is not trivially nondemolition still requires a nontrivial argument.

Agreed!
A. Neumaier said: My question in post #1 was about arbitrary ##K##, and your answer in post #2 was that the proof can be found in many places. Indeed, on p.5 of your paper https://arxiv.org/pdf/1811.11643.pdf (reference 3 in post #2) you didn't have any restriction on the operator ##K## beyond selfadjointness (which is assumed in Born's rule). So you need to qualify your claim there.

I added a note in post #2.
Demystifier said: Lorentz covariance for instrumentalists
$$\hat{E}\equiv\sqrt{\hat{p}^2+m^2}$$
Usually ##\hat{H}(\hat{p})=\hat{p}^2/2m##, but in general ##\hat{H}(\hat{p})## can be arbitrary. So as a special case of non-relativistic QM consider
$$\hat{H}(\hat{p})=\hat{E}$$
where ##\hat{E}## is defined as above and ##m## is interpreted as mass in units ##c=1##. One recognizes that this particular Hamiltonian of non-relativistic QM has a hidden Lorentz symmetry, the same symmetry that is typical for relativistic quantum theory.

But the first nontrivial case is that of two particles. Simply substituting the single particle kinetic energies ##\frac{p_k^2}{2m}## by their relativistic versions ##c\sqrt{p_k^2+(mc)^2}-mc^2## does not produce something Lorentz invariant.
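For readers following the math, here is a sketch of the standard reason the naive substitution fails (a textbook argument, stated in units ##\hbar=c=1## and in one common sign convention, not a claim made by either poster): a relativistic quantum theory needs boost generators ##K_j##, momenta ##P_j## and a Hamiltonian ##H## that close the Poincaré algebra, in particular
$$[K_j,H]=iP_j,\qquad [K_j,P_k]=i\delta_{jk}H.$$
For a single free particle with ##H=\sqrt{p^2+m^2}##, the choice ##K_j=\tfrac12(q_jH+Hq_j)## satisfies both relations, which is the "hidden" Lorentz symmetry mentioned in the quote. For two particles, keeping an instantaneous potential ##V(|q_1-q_2|)## while only replacing the kinetic terms by ##\sqrt{p_k^2+m^2}## leaves no such simple boost generators: interaction-dependent terms would have to appear in other generators as well (as in the Bakamjian–Thomas construction), which is why the substitution alone is not Lorentz invariant.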
A. Neumaier said: But the first nontrivial case is that of two particles.

This can be done too, it's relatively straightforward. But since I don't want to do all the work alone, here is a deal. You write down the theory within standard quantum theory (please, don't use the thermal interpretation, just the standard theory), and then I will explain how the same works in BM.
Demystifier said: This can be done too, it's relatively straightforward. But since I don't want to do all the work alone, here is a deal. You write down the theory within standard quantum theory (please, don't use the thermal interpretation, just the standard theory), and then I will explain how the same works in BM.

Ok. Given the nonrelativistic multiparticle Hamiltonian
$$H=\sum_k \frac{p_k^2}{2m} +\sum_{j<k} V(|q_j-q_k|)$$
where ##V(r)## is a Lennard-Jones potential, say, what would be the Lorentz covariant relativistic version?

A. Neumaier said: Ok. Given the nonrelativistic multiparticle Hamiltonian

That was not the deal. The deal was that you solve everything within standard quantum theory (including the relativistically covariant version of standard quantum theory; it's up to you whether you will use relativistic QM, relativistic QFT, or whatever you want) and make a relativistically covariant measurable prediction (e.g. some probability distribution of measurement outcomes). After you do all this (you can use existing results from the literature), I will explain how the same measurable results can be obtained from the point of view of Bohmian mechanics.
Demystifier said: That was not the deal. The deal was that you solve everything within standard quantum theory (including the relativistically covariant version of standard quantum theory; it's up to you whether you will use relativistic QM, relativistic QFT, or whatever you want) and make a relativistically covariant measurable prediction (e.g. some probability distribution of measurement outcomes). After you do all this (you can use existing results from the literature), I will explain how the same measurable results can be obtained from the point of view of Bohmian mechanics.

Well, then take the textbook description of QED in the book by Peskin and Schroeder, where everything needed to predict the anomalous magnetic moment of the electron is spelled out in Lorentz invariant terms. Your task is to explain how the anomalous magnetic moment of the electron can be obtained from the point of view of Bohmian mechanics.
A. Neumaier said: Well, then take the textbook description of QED in the book by Peskin and Schroeder, where everything needed to predict the anomalous magnetic moment of the electron is spelled out in Lorentz invariant terms. Your task is to explain how the anomalous magnetic moment of the electron can be obtained from the point of view of Bohmian mechanics.

That's not really an interesting example (in the context of post #131) because ##g-2## is a scalar, so it doesn't change under a change of Lorentz frame. I think you didn't take this example because you think it would help you to understand how BM does the trick. I think you took this example because you don't need to do any work, while my job would be hard, so you would set me up. In fact it wouldn't be that hard for me, but since I think you wouldn't learn anything from it (because that was not your intention when you gave me this task), I will not do it here.
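For reference, the quantity under discussion is the scalar anomaly (a standard textbook result, quoted here only to fix notation):
$$a_e=\frac{g-2}{2}=\frac{\alpha}{2\pi}+O(\alpha^2)\approx 0.00116,$$
where the leading term is Schwinger's one-loop result and the higher orders in ##\alpha## supply the remaining precision; being a pure number, it is indeed the same in every Lorentz frame.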
Demystifier said: you can pick up one segment of the theory (behind the calculation of ##g-2##) which is Lorentz-covariant and looks problematic to you from the Bohmian point of view.

Well, all Lorentz covariant QFT looks problematic to me from the Bohmian point of view, because nothing Bohmian survives renormalization.
A. Neumaier said: Well, all Lorentz covariant QFT looks problematic to me from the Bohmian point of view, because nothing Bohmian survives renormalization.

I would use a lattice regularization and would not try to go to the limit of the lattice distance to zero. Leave it at the Planck length. Renormalization between different lattices is unproblematic.
Elias1960 said: I would use a lattice regularization and would not try to go to the limit of the lattice distance to zero. Leave it at the Planck length. Renormalization between different lattices is unproblematic.

Well, how to get the anomalous magnetic moment of the electron from a lattice calculation to the known accuracy? You cannot get even close with present lattice technology!
A. Neumaier said: Well, how to get the anomalous magnetic moment of the electron from a lattice calculation to the known accuracy? You cannot get even close with present lattice technology!

I do not care about getting high accuracy first. Initially, I care about having a well-defined theory. Once one has a well-defined theory, one can start to improve the approximation methods. So, for theories with low interaction constants like 1/137 or so, it makes sense to look for approximation methods which make use of this, say, using some variant of a power series. Don't forget that this would be quite irrelevant for defining dBB trajectories - it is about methods to compute something well-defined in QT as well as dBB.
A. Neumaier said: Well, all Lorentz covariant QFT looks problematic to me from the Bohmian point of view, because nothing Bohmian survives renormalization.

It's good to know what really bothers you, so that we don't need to discuss all other technicalities that are not directly related to renormalization.
A. Neumaier said: You cannot get even close with present lattice technology!

Present is the key word. If we had much, much stronger computers which could handle lattices with a much, much bigger number of vertices, then there is not much doubt that ##g-2## could be computed on the lattice with great accuracy.
Elias1960 said: I do not care about getting high accuracy first. Initially, I care about having a well-defined theory.

Standard renormalized QED at 6 loops is a perfectly well-defined covariant quantum field theory that gives excellent predictions. Its only defect is that it (extremely slightly) violates the axioms of Wightman. Since you discard Wightman's axioms as well, you have no reason left to consider QED as ill-defined. Thus you should care about standard QED.
Elias1960 said: Once one has a well-defined theory, one can start to improve the approximation methods.

These are already well developed, to the point of giving results with 12 decimals of relative accuracy!
Elias1960 said: So, for theories with low interaction constants like 1/137 or so, it makes sense to look for approximation methods which make use of this, say, using some variant of a power series. Don't forget that this would be quite irrelevant for defining dBB trajectories - it is about methods to compute something well-defined in QT as well as dBB.

There are lots of well-defined theories completely unrelated to experiment. They are completely irrelevant. To claim physical content for a theory you need to show that you can reproduce the experimental results!
Demystifier said: Present is the key word. If we had much, much stronger computers which could handle lattices with a much, much bigger number of vertices,

Until this is the case (most likely never, since the computers would need more memory than the size of the universe allows) you only have a dream full of wishful thinking.
Demystifier said: then there is not much doubt that ##g-2## could be computed on the lattice with great accuracy.

According to the studies on triviality, there is even less doubt that ##g-2## would come out to be zero to whatever great accuracy your imagined super-supercomputer will be able to muster.
A. Neumaier said: According to the studies on triviality, there is even less doubt that ##g-2## would come out to be zero to whatever great accuracy your imagined super-supercomputer will be able to muster.

We discussed that in another thread and in fact didn't agree on this.
Demystifier said: We discussed that in another thread and in fact didn't agree on this.

Well, you didn't demonstrate the truth of your conjecture; it is just a belief. Beliefs don't count in physics; thus there is at present no Bohmian version of QED making contact with experiment, only a hope.
A. Neumaier said: Well, you didn't demonstrate the truth of your conjecture; it is just a belief. Beliefs don't count in physics; thus there is at present no Bohmian version of QED making contact with experiment, only a hope.

Fine, but the problem is not in Bohmian mechanics itself. Instead, the problem is in the lattice formulation of QED, irrespective of the interpretation (Copenhagen, Bohmian, thermal, or whatever). The standard practice is to work with a non-lattice type of regularization, which gives numbers that agree with experiments, but has its own mathematical problems because such non-lattice regularizations are not mathematically rigorous.
Demystifier said: Fine, but the problem is not in Bohmian mechanics itself. Instead, the problem is in the lattice formulation of QED, irrespective of the interpretation (Copenhagen, Bohmian, thermal, or whatever). The standard practice is to work with a non-lattice type of regularization, which gives numbers that agree with experiments, but has its own mathematical problems because such non-lattice regularizations are not mathematically rigorous.

Yes, and the reason is that QED is not defined on the lattice but on the continuum. At any fixed loop order it is Lorentz covariant and mathematically well-defined (in causal perturbation theory, which constructs everything: the S-matrix, the Hilbert space and the field operators). Already loop order 1 gives an excellent match with experiment, though for very high accuracy one needs orders up to six.
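Schematically, the construction referred to here (a sketch of the Epstein–Glaser approach, with ##g## a test function that switches on the interaction; details and normalizations vary by convention): the S-matrix is built as the formal power series
$$S(g)=\mathbf{1}+\sum_{n\ge 1}\frac{1}{n!}\int d^4x_1\cdots d^4x_n\,T_n(x_1,\dots,x_n)\,g(x_1)\cdots g(x_n),$$
where the time-ordered products ##T_n## are constructed inductively from the first-order interaction using only causality and Lorentz covariance, so every fixed order is finite without a UV cutoff; the delicate step is the adiabatic limit ##g\to 1##.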
Demystifier said: That was not the deal. The deal was that you solve everything within standard quantum theory (including the relativistically covariant version of standard quantum theory; it's up to you whether you will use relativistic QM, relativistic QFT, or whatever you want) and make a relativistically covariant measurable prediction (e.g. some probability distribution of measurement outcomes). After you do all this (you can use existing results from the literature), I will explain how the same measurable results can be obtained from the point of view of Bohmian mechanics.

All the hard work had already been done in 1948 and was rewarded in 1965 with a Nobel prize. The results of the hard work can be found in any textbook treating QED; many thousands of students learn it every year. If you complain that it's unfair because you must do the hard part while my part is easy, that's exactly my point.
Demystifier said: I think you took this example because you don't need to do any work, while my job would be hard

I don't understand how you can call your job hard, given that you said before that

Demystifier said: Bohmian mechanics is easy, once one understands how standard quantum theory works.

How standard QED works is understood very well. If you don't like the anomalous magnetic moment, pick instead your preferred scattering amplitude.
A. Neumaier said: Standard renormalized QED at 6 loops is a perfectly well-defined covariant quantum field theory that gives excellent predictions.

Is it a theory at all? It is nothing but an approximation for a particular experiment, namely scattering of particles which start and end with free particles far away.
A. Neumaier said: Its only defect is that it (extremely slightly) violates the axioms of Wightman. Since you discard Wightman's axioms as well, you have no reason left to consider QED as ill-defined. Thus you should care about standard QED.

No, it is not even a consistent theory. And I do not care about the accuracy of an approximation of a not even well-defined theory; I care first about having a well-defined theory.
A. Neumaier said: These are already well developed, to the point of giving results with 12 decimals of relative accuracy!

Once I have a well-defined theory, which I have if I use a lattice regularization, then I can start using your renormalized QED at 6 loops to compute approximations. So, no problem. Nobody forbids me to use such not-even-theories as approximations for particular situations like scattering.
A. Neumaier said: Thus to make a Bohmian version of QED based on a lattice you need to spell out which precise lattice field theory (at which lattice spacing, with which interaction constants) you want to consider.

I can consider a particular lattice theory in general, using unspecified constants. Who was it who referenced the paper where lattice computations were used to compute the renormalization down to the place where the Landau pole should appear, but it did not appear on the lattice? So, computing the renormalization is possible and has already been done, and in this case all the lattice approximations are well-defined theories. All one has to do is to compute with this program the resulting large-distance limit of the constants and to compare them with observation.
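For context, the "Landau pole" referred to here is the scale at which the familiar one-loop running of the QED coupling formally blows up (the standard leading-log formula, quoted only to fix terminology):
$$\alpha(Q^2)\approx\frac{\alpha(m_e^2)}{1-\dfrac{\alpha(m_e^2)}{3\pi}\ln\dfrac{Q^2}{m_e^2}},$$
whose denominator vanishes at an astronomically large ##Q##; whether anything like this pole survives beyond perturbation theory or on a lattice is exactly what such lattice computations of the running coupling probe.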
A. Neumaier said: For lack of computational evidence you would have to prove theoretically (not just say some handwaving words!) - which is probably impossible in the face of QED triviality and the fermion doubling problem - that you can accurately approximate this lattice theory in some way that reproduces the standard low energy results of QED. Only then do you have a substantiated claim.

QED triviality is not a problem of lattice theory; it is a problem which appears only in the limit of the lattice distance going to zero, which I propose explicitly not to take. To go with the lattice distance below the Planck length simply makes no sense at all. Don't forget that a lattice theory remains well-defined if the interaction constant is greater than 1, while you will fail completely with your Feynman diagrams.
Elias1960 said: Is it a theory at all? It is nothing but an approximation for a particular experiment, namely scattering of particles which start and end with free particles far away.

Of course QED at a fixed number of loops is a theory, an established part of theoretical physics. It is mathematically as well-defined and as consistent as lattice field theory, and gives far superior results.
Elias1960 said: I use a lattice regularization, then I can start using your renormalized QED at 6 loops to compute approximations. So, no problem. Nobody forbids me to use such not-even-theories as approximations for particular situations like scattering.

The problem is that you need to show that renormalized QED at 6 loops is actually a valid approximation - which is dubious in the light of the triviality results!
Elias1960 said: QED triviality is not a problem of lattice theory; it is a problem which appears only in the limit of the lattice distance going to zero.

QED triviality is a problem of relating the lattice QED to the continuum QED. Lacking this relation means lacking support for the claim that one approximates the other at the physical values of the parameters defining the specific theory.
Elias1960 said: For details, with the explicit 3D lattice, see arXiv:0908.0591

This says nothing about how well the successful continuum theory for the Standard Model approximates the proposed lattice theory, hence does not do what you want it to do.