Who is Ballentine and why is he important in the world of quantum mechanics?

  • Thread starter: joneall
  • Tags: Second quantization
  • #51
Fra said:
Indeed such an approximation is terribly bad.
Yes. I am approximately the richest man in the world! And at the same time the poorest!
 
  • #52
A. Neumaier said:
Yes. I am approximately the richest man in the world! And at the same time the poorest!
Makes perfect sense! :smile:

/Fredrik
 
  • #55
A. Neumaier said:
No, we are at the very basis of our eternal differences!

It might be called a random position measurement of the photon.

But it is certainly not a measurement of a field. The latter would produce approximate values for the field.

A photon has no position to begin with. A photon is a one-quantum Fock state of the electromagnetic field. There is a probability distribution for detecting it at a given place and a given time (with finite resolution of both of course). The probability is given by the (normalized) energy density of the electromagnetic field, which results from the analysis of the photoelectric effect on the detector material in the standard dipole approximation. I thought that's your view: the observables are given by correlators of local observable-operators, and indeed the energy density of the em. field is such an observable.

What else should be a measurement of "a field" than that? Also in classical electrodynamics, what's observable of the field are precisely such things as the "intensity", which also classically is given by the energy density.

That's also how the electromagnetic field is operationally defined, i.e., by its actions on charged matter.
 
  • #56
vanhees71 said:
A photon has no position to begin with. A photon is a one-quantum Fock state of the electromagnetic field. There is a probability distribution for detecting it at a given place and a given time (with finite resolution of both of course).
This is commonly called an (approximate) position measurement. It measures the transverse position orthogonal to the beam direction. This is represented by a well-defined operator with two commuting components.
vanhees71 said:
The probability is given by the (normalized) energy density of the electromagnetic field,
Measuring the probability would therefore be a measurement of the field intensity. But from a single photon one cannot get a probability, hence no measurement of the field.
vanhees71 said:
I thought that's your view: the observables are given by correlators of local observable-operators, and indeed the energy density of the em. field is such an observable.
Yes, the observables are the correlation functions but:
vanhees71 said:
A correlation function is a statistical quantity and cannot be empirically studied with just a single experiment. One photon leaves one dot at a random spot, not a distribution.
Thus the observation of a single photon impact (which is what we were discussing) does not measure these observables.
vanhees71 said:
What else should be a measurement of "a field" than that?
Anything that results in an approximate value of the smeared field at some point.
vanhees71 said:
Also in classical electrodynamics, what's observable of the field are precisely such things as the "intensity", which also classically is given by the energy density.
The classical intensity is a field, and observing it at x gives the value of the field averaged near x. The same holds in the quantum case with my definition of measurement, but not with your contrived one.
 
  • #57
A. Neumaier said:
Thus the observation of a single photon impact (which is what we were discussing) does not measure these observables.

What if we see the general pattern here as similar to "deep learning" methods with layers of abstraction, where we are talking about determining/measuring an abstraction in a higher layer based on data flowing from the lower levels? Indeed the higher abstractions are driven by data, and single samples will not drive the process. And while it is true that the confidence in higher-level constructs depends on, and requires, a certain "amount" of data from lower levels, the detection of lower-level "events" still provides the building blocks of the "measurement process"?

The association I can't help making is that, from the "quantization step", we can define a "field" as a higher-layer construct, defined in terms of processing of lower layers (in its simplest form this can be average formation, but it can also be a less trivial transformation). It seems that the "measurement" of ANYTHING must necessarily start with the detection of some elementary events. The question is, how many data points do we need to motivate a given construct? This would also make such higher constructs contextual, as expected, since they are supported by the available observations.

It seems to me this can be made a deep question: what is a field, what is an observable... how are they defined conceptually and operationally, rather than merely mathematically (which as discussed is in part fiction)?

/Fredrik
 
  • #58
A. Neumaier said:
This is commonly called an (approximate) position measurement. It measures the transverse position orthogonal to the beam direction. This is represented by a well-defined operator with two commuting components.
The position is the position of the detector. There's no position operator for the photon. In relativistic QFT the time and position (four-vector) components are parameters with precisely this meaning.
A. Neumaier said:
Measuring the probability would therefore be a measurement of the field intensity. But from a single photon one cannot get a probablilty, hence no measurement of the field.
As in any QT the state refers to probabilistic properties of ensembles, of course.
A. Neumaier said:
Yes, the observables are the correlation functions but:

Thus the observation of a single photon impact (which is what we were discussing) does not measure these observables.
A photodetector registers a single photon at a given space-time point (within a finite resolution). That's a measurement par excellence as it is defined in standard QT.
A. Neumaier said:
Anything that results in an approximate value of the smeared field at some point.
This can of course only be achieved by measuring an ensemble (or rather a "statistical sample") of equally prepared systems.
A. Neumaier said:
The classical intensity is a field, and observing it at x gives the value of the field averaged near x. The same holds in the quantum case with my definition of measurement, but not with your contrived one.
I don't see where we differ in this respect: the expectation value of a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## can of course again only be measured on an ensemble, not a single system, and the expectation value, as only one of the moments of the corresponding probability distribution, only describes a small aspect of the state.
 
  • #59
vanhees71 said:
I don't see where we differ in this respect: the expectation value of a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## can of course again only be measured on an ensemble, not a single system, and the expectation value, as only one of the moments of the corresponding probability distribution, only describes a small aspect of the state.
In your view the "statistical samples" approximate the "ensemble". But the ensemble is a fiction in the sense of requiring infinite repeats etc. This is the "problem".

If I understand Neumaier, he thinks the "statistical sample" approximates not some fictional ensemble but the value of an actual "real" field (that is defined by accounting for ALL the actual variables in the universe, even those the local observer isn't informed about).

So I think the disagreement is more: WHAT does our "statistical sample" approximate?

/Fredrik
 
  • #60
There is no problem, or if there is a problem, it's a problem of all kinds of measurement also within classical physics. You can always only prepare a finite number of systems and measure them. In general both the preparation and the measurement are only approximate, etc. All this is covered by the standard procedures of the experimentalists, i.e., you have to do a careful analysis of the statistical and systematic errors in an experiment.

A statistical sample approximates an ensemble. Since ##\vec{E}## and ##\vec{B}## don't commute and since their possible values are continuous, they can never be precisely determined. It's as with position and momentum in non-relativistic physics.
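Concretely, in units with ##\hbar=c=1## (and up to sign conventions, which depend on the choice of field definitions), the equal-time commutator reads
$$[\hat{E}_i(t,\vec{x}),\hat{B}_j(t,\vec{y})]=\mathrm{i}\,\epsilon_{ijk}\,\frac{\partial}{\partial x_k}\,\delta^{(3)}(\vec{x}-\vec{y}),$$
so smeared ##\vec{E}## and ##\vec{B}## components in overlapping regions obey uncertainty relations, just as ##\hat{x}## and ##\hat{p}## do.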
 
  • #61
vanhees71 said:
The position is the position of the detector.
The detector is a screen and has many positions; one of them responds to the photon. The two coordinates of the responding position define the measured transverse position of the photon.
vanhees71 said:
There's no position operator for the photon.
For the photon, in the observer frame, there is no 3-component position operator with commuting components transforming properly under rotations.

But there are commuting operators for the two components of position transverse to the beam direction. Thus the transverse position can be measured with, in principle, arbitrary accuracy.
vanhees71 said:
A photodetector registers a single photon at a given space-time point (within a finite resolution). That's a measurement par excellence as it is defined in standard QT.
But it is a measurement of a particle, not of a field. If it measured a field, as you claim (the energy intensity), which value do we get for the incident field at the impact point? And which values at non-impact points? (Not seeing a response is also a measurement of photon presence, but not a field measurement.)
vanhees71 said:
This can of course only be achieved by measuring an ensemble (or rather a "statistical sample") of equally prepared systems.

I don't see where we differ in this respect: the expectation value of a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## can of course again only be measured on an ensemble, not a single system, and the expectation value, as only one of the moments of the corresponding probability distribution, only describes a small aspect of the state.
An engineer measures a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## with a single measurement at x, not by statistical means. This works well, although only a single electromagnetic field is prepared, not an ensemble of fields.

Statistics is needed only for extremely weak fields, such as that defined by a single photon state, and only to accumulate responses, not to average field values.
 
Last edited:
  • Likes mattt and gentzen
  • #62
Fra said:
If I understand Neumaier, he thinks the "statistical sample" approximates not some fictional ensemble but the value of an actual "real" field (that is defined by accounting for ALL the actual variables in the universe, even those the local observer isn't informed about).
An engineer records no statistical sample but only one value at any point where a measurement is made.

For very low intensity fields, the interpretation of the statistical sample is as usual - it approximates the fictional ensemble and produces in a large number of detector responses the q-expectation = 1-point function of the intensity field at the positions of the screen.

But a single detector response has no quantitative information about the intensity (except that it is positive at the response position). Thus it cannot be called a field measurement.
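To make the contrast concrete, here is a minimal numerical sketch (an illustration only, assuming a Gaussian transverse intensity profile and an ideal position-resolving screen): a single dot says nothing quantitative about the intensity, while a histogram of many dots reproduces the 1-point function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: a Gaussian transverse intensity profile,
# i.e. the normalized 1-point function of the intensity on the screen.
sigma = 1.0
def intensity(x):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Each single-photon detection yields one random dot drawn from this density.
dots = rng.normal(0.0, sigma, size=100_000)

# One dot carries no quantitative intensity information...
print("single dot at x =", dots[0])

# ...but a histogram of many dots approximates the intensity profile.
hist, edges = np.histogram(dots, bins=50, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max deviation from profile:", np.abs(hist - intensity(centers)).max())
```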
 
  • #63
vanhees71 said:
it's a problem of all kinds of measurement also within classical physics. You can always only prepare a finite number of systems and measure them.
In classical physics we prepare one electromagnetic field and can measure it anywhere with a single measurement, provided the intensity of the field is large enough. The smaller the intensity, the larger the exposure time needed for an accurate measurement.

The same holds verbatim in quantum field theory: We prepare one electromagnetic field and can measure it anywhere with a single measurement, provided the intensity of the field is large enough. The smaller the intensity, the larger the exposure time needed for an accurate measurement.
 
  • #64
Independently of the measured system, be it describable with good accuracy within classical physics or only within QT, you always have to repeat an experiment on a "sample of equally prepared systems" very often to be able to evaluate the statistical and systematic errors.

In the discussion about the measurement of the electromagnetic field and its possible approximation with classical electrodynamics, it's clear that a classical electromagnetic field like the field from a laser pointer is described by a coherent state of QT. The intensity (i.e., the em. field energy density) is a measure of the field strength. If the coherent state is of high intensity, the photon number (i.e., the total energy divided by ##\hbar \omega##, where ##\omega## is the frequency of the excited laser mode) is Poisson distributed. In particular ##\Delta N=\sqrt{\langle N \rangle}##. This means that ##\Delta N/\langle N \rangle=1/\sqrt{\langle N \rangle}## is small for ##\langle N \rangle \gg 1##, i.e., for high-intensity coherent states. Then you'll find that the repeated measurement indeed scatters only weakly around the average value, and thus in such a case the description as a classical em. field is a good approximation. For very "dim laser light", where in the extreme you can have ##\langle N \rangle <1##, this is no longer the case, and the quantum description is needed. In this case the coherent state is mostly "vacuum", and it's very unlikely to measure even one photon in a given time. That's why in this case you'll see the "quantum noise" and the discreteness of the registration processes of single photons.
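A short numerical check of this scaling (a sketch only, assuming ideal photon counting):

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon number in a coherent state is Poisson distributed, so the
# relative fluctuation Delta N / <N> should scale like 1 / sqrt(<N>).
for mean_n in (0.5, 10, 1_000, 100_000):
    counts = rng.poisson(mean_n, size=200_000)
    rel = counts.std() / counts.mean()
    print(f"<N> = {mean_n:>8}: Delta N/<N> = {rel:.4f}, "
          f"1/sqrt(<N>) = {1 / np.sqrt(mean_n):.4f}")
```

For large ##\langle N \rangle## the relative spread is tiny (the classical regime), while for ##\langle N \rangle < 1## the count record is dominated by vacuum and shot noise.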
 
  • #65
vanhees71 said:
Independently from the measured system, be it describable with good accuracy within classical physics or be it only describable within QT, you always have to repeat an experiment on a "sample of equally prepared systems" very often to be able to evaluate the statistical and systematical errors.
For a macroscopic measurement (when an engineer measures a field; in particular, for most classical measurements) one rarely takes a sample; one just measures a single time. One needs a sample only in those cases where the measurement results are so noisy that one needs to average a large number of measurement results.
vanhees71 said:
For very "dim laser light", where in the extreme you can have ##\langle N \rangle <1## this is no longer the case, and the quantum description is needed.
No, only a longer exposure is needed before a measurement of the field results.

The reason is that for a coherent state input, the quantum description gives the same results for the final intensity as the classical description.

But when only one photon arrived, neither the classical nor the quantum description allows you to obtain a value for the intensity from the recorded dot. Thus you don't have an intensity measurement, only a photon detection.
vanhees71 said:
That's why in this case you'll see the "quantum noise" and the discreteness of the registration processes of single photons.
Yes, and that's why you don't have an intensity measurement.
 
Last edited:
  • #66
A. Neumaier said:
The smaller the intensity, the larger the exposure time needed for an accurate measurement.
This is a solution only if a stationarity assumption holds, right? Isn't the stationarity assumption a kind of "repeated preparation" in disguise?

/Fredrik
 
  • #67
vanhees71 said:
There is no problem, or if there is a problem, it's a problem of all kinds of measurement also within classical physics.
Yes, I think it is in principle a problem in classical physics too, but since classical physics is non-contextual, in practice it remains a practical problem of the physicist's ignorance...
vanhees71 said:
You can always only prepare a finite number of systems and measure them. In general both the preparation and the measurement are only approximate, etc.
...because the right value, of which we have approximations, is independent of the measurement. It's just ignorance.

Neumaier has some idea that a similar argument can be applied to QM in his interpretation. But this does not work in the ensemble interpretation, as you cannot repeat all experiments, for example in cosmology. (But I don't find that a satisfactory solution, as I take the measurement and observer perspective as central.)

/Fredrik
 
  • #68
Fra said:
This is a solution only if a stationarity assumption holds, right?
It must be nearly stationary during the time of observation. For less stationary sources with a known evolution law one gets a 1-point function heavily smeared in time.
Fra said:
Isn't the stationarity assumption a kind of "repeated preparation" in disguise?
If you think only statistically then you need to use this disguise. But if you think in terms of 1-point functions, stationarity is not needed and one just has different smearing functions.
 
  • #69
A. Neumaier said:
For less stationary sources with a known evolution law one
In your interpretation, is this evolution law merely "in principle" knowable by some "omnipresent superobserver" but still assumed objectively determined?

How do you put that in terms of process tomography? Do you treat an actual finite observer's processing of statistical samples as just an "approximation" of something "real"?

Or how is an observer-independent definition of law (Hamiltonian?) defined for, say, arbitrary observers in non-inertial frames? (Conceptually, that is! As we know, there is no full quantum gravity theory yet.)

/Fredrik
 
  • #70
Fra said:
In your interpretation, is this evolution law merely "in principle" knowable by some "omnipresent superobserver" but still assumed objectively determined?
The evolution law of the universe is known only to God. But it is assumed to exist and to be deterministic and observer independent. In this sense it is objective. Observers form their own approximate models of this evolution law.
Fra said:
How do you put that in terms of process tomography? Do you treat an actual finite observer's processing of statistical samples as just an "approximation" of something "real"?
Yes. Any collection of observations informs the observer about properties of the universe near its spacetime position. From this information observers construct, based on statistics and subjective plausibility (= prejudice), their models for prediction.
Fra said:
Or how is an observer independet definition of law (hamiltonian?) defined for say arbitrary observers in non-inertial frames? (Conceptually that is! as we know there is no full quantum gravity theory yet)
For the universe, it is given by a classical action for fields defined on spacetime, to be interpreted somehow as a quantum dynamical law.
 
  • #71
I'm in no position to answer what a quantum field is, but questions such as this one might often lead astray into rather philosophical inquiries. If that's not what is asked for, more concrete questions might help in understanding the concept. Such as:

1. How does it transform under coordinate change?
2. If it can be considered as a mapping, between which spaces does it map?
3. What physical information can be calculated from a quantum field?
 
Last edited:
  • #72
A. Neumaier said:
It must be nearly stationary during the time of observation. For less stationary sources with a known evolution law one gets a 1-point function heavily smeared in time.
Well, then you don't like this year's physics Nobel prize?
 
  • #73
A. Neumaier said:
The evolution law of the universe is known only to God. But it is assumed to exist and to be deterministic and observer independent. In this sense it is objective. Observers form their own approximate models of this evolution law.
Wow, those assumptions fly in the face of almost every known quantum phenomenon, whether quantum fields or particles or systems. I guess I'm surprised to hear this coming from you. Maybe I haven't read enough of your posts in this thread to understand your context, and if so, my apologies.

a) There is no currently known cause of apparently indeterministic outcomes of quantum measurements, so why would anyone assume there is such a cause?

b) Every prediction made on entangled systems (such as Bell or GHZ) is strictly observer dependent; and in fact the only relevant components of that prediction lie in a future measurement context freely* chosen by... observers. How does that point to objectivity? The field - whatever you may think it is - is not a factor in that calculation at all. (Obviously, any contribution it makes must cancel out, and can effectively be ignored completely.)

c) I won't even mention the usage of the word "evolution" in regards to quantum (as opposed to classical) systems or fields - other than to ask: How does the passage of time (whether a nanosecond or a millennium) change anything? Time doesn't seem to be a factor (in general; obviously there are exceptions where time is an explicit dependency).

*Apparently.
 
  • #74
A. Neumaier said:
The evolution law of the universe is known only to God.
He should have been invited to the Solvay Conference!
 
  • #75
DrChinese said:
Wow, those assumptions fly in the face of almost every known quantum phenomenon, whether quantum fields or particles or systems. I guess I'm surprised to hear this coming from you. Maybe I haven't read enough of your posts in this thread to understand your context, and if so, my apologies.
The context is my thermal interpretation of quantum physics.
This is a deterministic theory in which the traditional stochastic observables of quantum mechanics are denied to be observables. Instead, the N-point functions of quantum field theory, which have a deterministic evolution law, are identified as the beables of quantum field theory in the sense of Bell.
DrChinese said:
a) There is no currently known cause of apparently indeterministic outcomes of quantum measurements, so why would anyone assume there is such a cause?
Because one can hold the philosophical view that there is no effect without a cause. Then having found no cause only means one has not understood enough.

1000 years ago all that we now call science was unknown, and no causes were known for what we now understand as being caused. Why should today be different compared to 1000 years in the future?

DrChinese said:
b) Every prediction made on entangled systems (such as Bell or GHZ) is strictly observer dependent; and in fact the only relevant components of that prediction lie in a future measurement context freely* chosen by... observers. How does that point to objectivity?
In a deterministic universe, free choices are only apparent. Indeed, all our free choices are actually determined by our motivations, experiences, goals, feelings, fits of the moment, etc.!

DrChinese said:
The field - whatever you may think it is - is not a factor in that calculation at all. (Obviously, any contribution it makes must cancel out, and can effectively be ignored completely.)
The field carries whatever causes the detection events. Without fields no transmission of information.

DrChinese said:
c) I won't even mention the usage of the word "evolution" in regards to quantum (as opposed to classical) systems or fields - other than to ask: How does the passage of time (whether a nanosecond or a millennium) change anything? Time doesn't seem to be a factor (in general; obviously there are exceptions where time is an explicit dependency).
Time is a parameter in the evolution of a quantum state, hence the state changes with time (unless the state is stationary).
 
Last edited:
  • #76
PeroK said:
He should have been invited to the Solvay Conference!
He is outside the universe, but determined the Solvay Conference to happen in the form it actually happened....
 
Last edited:
  • #77
vanhees71 said:
Well, then you don't like this year's physics Nobel prize?
How does that relate to my statement that you quoted?
 
  • #78
I mean, in the work leading to this year's physics Nobel prize, one indeed observes the time evolution of electrons in atoms/molecules at the atto-second resolution level.
 
  • #79
A. Neumaier said:
In classical physics we prepare one electromagnetic field and can measure it anywhere with a single measurement, provided the intensity of the field is large enough. The smaller the intensity, the larger the exposure time needed for an accurate measurement.

The same holds verbatim in quantum field theory: We prepare one electromagnetic field and can measure it anywhere with a single measurement, provided the intensity of the field is large enough. The smaller the intensity, the larger the exposure time needed for an accurate measurement.
A. Neumaier said:
It must be nearly stationary during the time of observation. For less stationary sources with a known evolution law one gets a 1-point function heavily smeared in time.

If you think only statistically then you need to use this disguise. But if you think in terms of 1-point functions, stationarity is not needed and one just has different smearing functions.
vanhees71 said:
I mean, in the work leading to this year's physics Nobel prize, one indeed observes the time evolution of electrons in atoms/molecules on the atto-second resolution level.
This does not contradict my claims in this thread. For clarity I provided again the context for these claims, namely measuring the intensity of a field.

The observation of the short-time dynamics of single electrons is quite different - it is a stochastic process where the observation is very noisy. But the extracted information is again based on a statistical evaluation of the correlation function and its interpretation through the short-time Fourier transform of a 2-point function. The latter is the true observable carrying the information of interest; the stochastic trajectory is just a means for arriving at this information.
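As a toy illustration of the last point - extracting time-resolved frequency content from a noisy record via a short-time Fourier transform - here is a sketch with made-up parameters (a noisy chirp standing in for the actual attosecond data):

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(2)

# Toy record: a chirp (time-dependent frequency) buried in noise,
# standing in for a noisy time-resolved observation.
fs = 1_000.0                       # sampling rate
t = np.arange(0, 2.0, 1 / fs)
f_inst = 50 + 100 * t              # instantaneous frequency, 50 -> 250 Hz
signal = np.sin(2 * np.pi * np.cumsum(f_inst) / fs) + rng.normal(0, 1.0, t.size)

# Short-time Fourier transform: windowed spectral (2-point) information
# as a function of time and frequency.
f, tau, Z = stft(signal, fs=fs, nperseg=256)

# The ridge of |Z| tracks the instantaneous frequency despite the noise.
ridge = f[np.abs(Z).argmax(axis=0)]
print("recovered frequency at selected window centers:", np.round(ridge[1::4], 1))
```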
 
  • #80
A. Neumaier said:
Because one can hold the philosophical view that there is no effect without a cause. Then having found no cause only means one has not understood enough.

1000 years ago all that we now call science was unknown, and no causes were known for what we now understand as being caused. Why should today be different compared to 1000 years in the future?
Even holding such a philosophical view (which admittedly is shared by many), you must admit the reasoning is essentially circular. No evidence is still no evidence, regardless of hope for the future.

I would agree it is a reasonable (and in some ways useful) assumption. But it is the requirement that the causes precede the effect that gives me pause. I realize that requirement will be part and parcel of any interpretation with "thermal" in the name. But some degree of time symmetry in physics seems a relevant consideration too. Nonlocal contextuality has been demonstrated in so many recent experiments, and that seems to fly in the face of traditional concepts of causality. Ordering just doesn't matter, and in a deterministic universe, you'd think it would.

If you changed your "1000 years" to "100 years", your analogy might appear quite different (which is probably why you chose 1000 and not 100). By 1935, it was argued that QM was complete and there were no underlying "causes" to be found. 88 years later, it is even more evident - and certainly not less - that there is no meaningful root cause of quantum indeterminacy*.

*The Bohmians got the jump on everyone by saying it is everywhere and now (but manifestly nonlocal), but is unknowable even in principle. Somehow their "unknowable in principle" seems to be a good bet no matter what interpretation we end up with. :smile:
 
  • #81
DrChinese said:
Even holding such a philosophical view (which admittedly is shared by many), you must admit the reasoning is essentially circular. No evidence is still no evidence, regardless of hope for the future.
The thermal interpretation is positive evidence for the possibility of interpreting QFT deterministically and causally.
DrChinese said:
some degree of time symmetry in physics seems a relevant consideration too.
There is no evidence at all that Nature is time symmetric, but overwhelming evidence for the opposite. Time symmetry is simply a theoretical assumption made, supplemented by heuristics for nevertheless having a causal arrow.
DrChinese said:
Nonlocal contextuality has been demonstrated in so many recent experiments, and that seems to fly in the face of traditional concepts of causality. Ordering just doesn't matter, and in a deterministic universe, you'd think it would.
Since the thermal interpretation is nonlocal contextual, there is no conflict here.
DrChinese said:
If you changed your "1000 years" to "100 years", your analogy might appear quite different (which is probably why you chose 1000 and not 100). By 1935, it was argued that QM was complete and there were no underlying "causes" to be found. 88 years later, it is even more evident - and certainly not less - that there is no meaningful root cause of quantum indeterminacy*.
I found meaningful causes in the chaoticity of the hierarchical dynamics of the N-point functions of QFT.

This is just as meaningful as the causes for randomness in classical mechanics.
 
  • #82
A. Neumaier said:
Time symmetry is simply a theoretical assumption made, supplemented by heuristics for nevertheless having a causal arrow.
I wouldn't say it's an assumption--it's a property of what have turned out to be our best current theories, that their equations are time symmetric. Since, as you say, we observe that Nature itself is not time symmetric, we then have to figure out how time symmetric equations can produce outcomes that aren't time symmetric, which then leads to the heuristics you mention.
 
  • #83
PeterDonis said:
I wouldn't say it's an assumption--it's a property of what have turned out to be our best current theories, that their equations are time symmetric.
Even time-symmetric equations do not imply that Nature is time-symmetric and only our perception of it is asymmetric.

To get sensible physics one has to assume causal asymmetry in addition to the time-symmetric equations! Causality is at the very root of theoretical physics!

Both in classical and quantum field theory, time symmetry is explicitly broken by choosing the retarded solutions as the ones that have physical reality, and ignoring the advanced solutions. This is done on the basis of the law of causality, that nonlocal causes never conspire to have exclusively local effects.
 
  • #84
A. Neumaier said:
Even time-symmetric equations do not imply that Nature is time-symmetric and only our perception of it is asymmetric.
If the equations are just approximations, yes, that's true; the more fundamental equations they are approximations of could be time asymmetric.

A. Neumaier said:
To get sensible physics one has to assume causal asymmetry in addition to the time-symmetric equations!
More precisely, if one grants that the equations are time-symmetric, one has to make use of the fact that time-symmetric equations can still have time-asymmetric solutions--as long as the solutions occur in pairs, each the time reverse of the other. And then one has to explain why we live in one solution of such a pair instead of the other. I don't know if this is what you mean by "assume causal asymmetry"; your next paragraph suggests that it might be.
 
  • #85
PeterDonis said:
If the equations are just approximations, yes, that's true; the more fundamental equations they are approximations of could be time asymmetric.
Independent of whether the equations are just approximations, the solutions of linear hyperbolic differential equations (such as the wave equation) form a vector space that is the direct sum of a space of retarded solutions and a space of advanced solutions. Causality is indispensable to explain why only the retarded solutions, and neither purely advanced nor mixed solutions, occur in Nature.
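For the inhomogeneous scalar wave equation this splitting can be written explicitly via the two Green's functions (in units with ##c=1## and a common sign convention):
$$(\partial_t^2-\nabla^2)\,u=s,\qquad u_{\mathrm{ret/adv}}(t,\vec{x})=\int\mathrm{d}^3y\,\frac{s(t\mp|\vec{x}-\vec{y}|,\vec{y})}{4\pi|\vec{x}-\vec{y}|},$$
and what occurs in Nature is the retarded solution (upper sign).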

This cannot be explained by the common argument that we simply experience the direction in which entropy increases.
PeterDonis said:
More precisely, if one grants that the equations are time-symmetric, one has to make use of the fact that time-symmetric equations can still have time-asymmetric solutions--as long as the solutions occur in pairs, each the time reverse of the other.
This would still allow superpositions of retarded and advanced solutions as long as the retarded one dominates.
PeterDonis said:
And then one has to explain why we live in one solution of such a pair instead of the other.
But we have to explain why these unphysical superpositions never occur! I don't know of any other explanation than to assume causality.
 
  • #86
A. Neumaier said:
This would still allow superpositions of retarded and advanced solutions as long as the retarded one dominates.
Hm, I see; if you have linear equations the solution set is not limited to the "pure" ones, so to speak.

My personal "solution" would be that the ultimate fundamental equations are not linear. But I don't know if that option is being investigated.
 
  • #87
PeterDonis said:
My personal "solution" would be that the ultimate fundamental equations are not linear. But I don't know if that option is being investigated.
In general, the linear equations define the 1-particle spaces upon which the quantum field theories are erected. The equations are necessarily linear, otherwise one has no superposition principle. But the latter is essential for the particle interpretation in scattering processes.

For fermions one gets the Dirac equation. But only positive-energy solutions are permitted in the superpositions. Positive energy is a causality constraint.
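Explicitly, in a standard textbook form (metric convention ##(+,-,-,-)##):
$$(\mathrm{i}\gamma^\mu\partial_\mu-m)\,\psi=0,\qquad \psi_+\propto u(\vec{p})\,e^{-\mathrm{i}p\cdot x},\quad \psi_-\propto v(\vec{p})\,e^{+\mathrm{i}p\cdot x},\quad p^0=+\sqrt{\vec{p}^{\,2}+m^2},$$
where the causality (positive-energy) constraint admits only the ##\psi_+## modes in one-particle superpositions.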
 
  • #88
PeterDonis said:
My personal "solution" would be that the ultimate fundamental equations are not linear. But I don't know if that option is being investigated.
https://arxiv.org/abs/2312.01992

In this work, we review and extend a version of the old attempt made by Louis de Broglie for interpreting quantum mechanics in realistic terms, namely the double solution. In this theory quantum particles are localized waves, i.e., solitons, that are solutions of relativistic nonlinear field equations. The theory that we present here is the natural extension of this old work and relies on a strong time-symmetry requiring the presence of advanced and retarded waves converging on particles.
 
  • #89
A. Neumaier said:
But we have to explain why these unphysical superpositions never occur! I don't know of any other explanation than to assume causality.
If one looks from the observer side (which you don't, in the same way):

How fit would a physical agent/observer be that had to maintain and encode a picture containing what most of us think of as "unphysical things"?

Would such an observer be stable? It would likely take an incredibly complex amount of processing to make up for the "poor ansatz". So the selection pressure is to adjust the ansatz and gain stability.

So the alternative to a "constraint" is to consider the probability of agent survival when some agents have strange ways of encoding the same data - ways that may be "logically possible" but are at odds with a more natural or more economical interpretation of the data. In this view one would presume that, among the crazy options, the distribution would be sharply peaked around the "physically reasonable" ones, not because the other ones are impossible, but because they are inefficient for nature.

This is how I interpret the KG to Dirac transition as well. Let's suppose that an observer actually DID observe something going backwards in time (whatever that means). The question is still: would the agent likely INFER that it was something going backwards in time, or would the discovery of a new particle simply be more natural? The latter is how I see it. Why and in what sense it is more "natural" is something to be clarified and made more precise, of course. But fitness is a possible intuitive angle.

(This would then indirectly relate back to the dynamics of the "background spacetime", as the background spacetime on which QM is formulated might be emergent in a similar way.)

/Fredrik
 
  • #90
A. Neumaier said:
This does not contradict my claims in this thread. For clarity I provided again the context for these claims, namely measuring the intensity of a field.

The observation of the short-time dynamics of single electrons is quite different - it is a stochastic process where the observation is very noisy. But the extracted information is again based on a statistical evaluation of the correlation function and its interpretation through the short-time Fourier transform of a 2-point function. The latter is the true observable carrying the information of interest; the stochastic trajectory is just a means for arriving at this information.
Of course. If you measure the intensity of the em. field prepared in single- or few-photon Fock states, it's also a very noisy observation.
 
  • #91
vanhees71 said:
Of course. If you measure the intensity of the em. field prepared in single- or few-photon Fock states, it's also a very noisy observation.
But it is too noisy to extract any useful information; hence it cannot be called a measurement of the intensity of the field, which would produce numerical values for the intensity.
 
  • #92
Of course, it's very useful information to register a photon with a photon detector. It's the only useful information you can get about a single photon. "Intensity" for a single photon has of course the meaning of a probability (density) for detecting a photon at a given region in space at a given time.
 
  • #93
vanhees71 said:
Of course, it's very useful information to register a photon with a photon detector. It's the only useful information you can get about a single photon. "Intensity" for a single photon has of course the meaning of a probability (density) for detecting a photon at a given region in space at a given time.

Casually

https://www.azooptics.com/News.aspx?newsID=28527
.....
 
Last edited:
  • #94
vanhees71 said:
Of course, it's very useful information to register a photon with a photon detector. It's the only useful information you can get about a single photon. "Intensity" for a single photon has of course the meaning of a probability (density) for detecting a photon at a given region in space at a given time.
No. For a single photon, probability is operationally meaningless.
 
  • #95
So you say all quantum-optics experiments with single photons are meaningless? That doesn't make sense. What you can know about a single photon operationally is the detection probability at a given time and place, given its state.
 
  • #96
vanhees71 said:
So you say all quantum-optics experiments with single photons are meaningless? That doesn't make sense. What you can know about a single photon operationally is the detection probability at a given time and place, given its state.
He takes "single photon" literally. What makes no sense is his cryptic way of restarting that old discussion just now. No idea what he wants to achieve. Yes, you use words and concepts too much in the "you know what I mean" way, but this cryptic way of taking words literally won't change your mind either.

(We can have that discussion another time, but not today, or tomorrow, or ...)
 
  • Likes physika and vanhees71
  • #97
Nowadays we can indeed take "single photon" literally, since quantum opticians can prepare true single-photon states (e.g., using parametric down-conversion and using one of the entangled photons to herald the other one used for experiments). I don't know what he means by the claim that probabilities for single photons are meaningless. The only thing, however, that we have in QT for single quanta are probabilities. These are also not meaningless in an operational sense, because it simply means that when repeating the experiment with many equally prepared photons (operationally defining the "state" as the corresponding preparation procedure) you get, in the limit of infinitely many such experiments, the probability distribution for registering the photon at the place of the detector.
 
  • #98
vanhees71 said:
So you say all quantum-optics experiments with single photons are meaningless?
No, he's saying that you can't use a single instance to test a statistical prediction, and QM's predictions are statistical.
 
  • Likes gentzen, vanhees71 and Kontilera
  • #99
Ok, but that's self-evident and not specific to photons.
 
  • #100
DrChinese said:
Nonlocal contextuality has been demonstrated in so many recent experiments, and that seems to fly in the face of traditional concepts of causality. Ordering just doesn't matter, and in a deterministic universe, you'd think it would.
And yet there is always a simple causal explanation for these experiments where order does not matter that you seem to ignore.

For instance, in the entanglement swapping experiment that you often cite as evidence for violating causality there is a simple explanation as to why photons 2 & 3 can tell you whether photons 1 & 4 were entangled. When photon 1 is measured, through the mechanism of entanglement it can provide all the information about the measurement angle to photon 2. When photon 4 is measured, through the mechanism of entanglement it can provide all of the information about the measurement angle to photon 3. So when photons 2 & 3 come together they have all the information about the measurements being used in the experiment to decide whether photons 1 & 4 were entangled or not.
 