Who is Ballentine and why is he important in the world of quantum mechanics?

  • #36
@strangerep if you want to continue the MOND subthread it should be moved to its own thread in the appropriate forum.
 
  • #37
A. Neumaier said:
Photon particles are (both in the sense of vanhees71 and in my sense) very special states of the quantum field along a beam, namely sequences of 1-photon Fock states or, in the entangled case, 2-photon Fock states (generated by parametric down-conversion, for use in Aspect-type experiments).
So your point is that those sequences of 1-photon Fock states or entangled 2-photon Fock states are not beables in the thermal interpretation, hence vanhees71's point does not apply? I tentatively concluded this from your answer, together with your earlier answers to vanhees71, like:
A. Neumaier said:
It is about measuring fields, not particles - cf. the subject of the thread.

In the thermal interpretation, particles are not beables; only their probability distributions are. For this is what objectively distinguishes particles prepared in different states; see your answer and my reply here.

A. Neumaier said:
Nevertheless a detector event at a photosensitive screen (or a pair of them in Aspect-type experiments) is not at all a measurement of the electromagnetic field at the screen(s). What would be the measured value of E(x) or B(x) at the screen?
Fine. But ... I can't really blame vanhees71 in this specific case for not getting your point. If you want to distinguish between fields and detector events, you have to make that clear. Or rather, you have to make it clearer what you mean when you talk of fields. Apparently some measurable effects occurring in the context of fields in the QFT sense are not included in your notion of measuring fields.
 
  • #38
vanhees71 said:
Of course it is a measurement of the electromagnetic field.
If it is, what are the values of E(x) and B(x) obtained through the measurement?
vanhees71 said:
What else do you think you measure with a photoplate exposed by some electromagnetic radiation?
One measures the response of the detector to the field. One needs to use a theory of the detector to know how to translate this into a statement about the electric field. The theory tells you the relation after a sufficiently long exposure to a stationary field, but not for the exposure to a single 1-photon state.
vanhees71 said:
It's also clear that a photon detector measures "smeared" correlation functions
What value of the smeared correlation function is known once the photon has left its mark? How does one get a whole function from a single mark???
 
  • #39
The electromagnetic field is operationally defined by its actions on charged matter already in classical electrodynamics.

After repeating a single-photon detection experiment many times, the photoplate depicts the energy density of the em. field (of course, as you stress yourself all the time, "coarse grained" due to finite resolution).

A correlation function is a statistical quantity and cannot be empirically studied with just a single experiment. One photon leaves one dot at a random spot, not a distribution.
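This statistical point can be sketched numerically (a toy simulation, not tied to any specific experiment; the intensity profile below is an arbitrary stand-in): a single sampled dot carries no distributional information, while the histogram of many dots converges to the normalized intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "screen": an arbitrary normalized intensity profile of the field.
x = np.linspace(-5, 5, 200)
intensity = np.cos(2 * x) ** 2 * np.exp(-x**2 / 4)
p = intensity / intensity.sum()          # detection probability per pixel

def dots(n):
    """Sample n single-photon impact positions (pixel indices)."""
    return rng.choice(len(x), size=n, p=p)

one = dots(1)                 # one photon: a single random pixel, no distribution
many = dots(100_000)          # many photons: histogram approximates the intensity
hist = np.bincount(many, minlength=len(x)) / len(many)

# The empirical frequencies converge to p (the coarse-grained intensity).
max_err = np.abs(hist - p).max()
print(len(one), max_err)
```

One run gives a single pixel index, from which neither `p` nor any correlation function can be reconstructed; only the accumulated dots recover the profile.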
 
  • #40
gentzen said:
So your point is that those sequences of 1-photon Fock states or entangled 2-photon Fock states
States of the field are beables, but not the traveling photons.
gentzen said:
If you want to distinguish between fields and detector events, you have to make that clear.
I had clearly emphasized it in my answer.
gentzen said:
Or rather, you have to make it clearer what you mean when you talk of fields.
In our case, the fields are the electric field E(x) and the magnetic field B(x).
They are spread out over the lab where the experiment is performed, and E(x) has a non-negligible intensity in the beam where the alleged photons are supposed to travel.
gentzen said:
Apparently some measurable effects occurring in the context of fields in the QFT sense are not included in your notion of measuring fields.
Yes. Not every measurable effect is a measurement of its cause. If rain caused your trousers to be wet, did you measure the rain?

Measuring the electromagnetic field in some region means getting approximate values for E(x) and B(x) in that region. Just like measuring the trajectory of a classical particle in some time interval means getting approximate values for x(t) during that time.
 
  • #41
vanhees71 said:
After repeating a single-photon detection experiment many times, the photoplate depicts the energy density of the em. field (of course, as you stress yourself all the time, "coarse grained" due to finite resolution).
Yes. Thus scanning the plate after long exposure is indeed a measurement of the coarse-grained field intensity.
vanhees71 said:
A correlation function is a statistical quantity and cannot be empirically studied with just a single experiment. One photon leaves one dot at a random spot, not a distribution.
So the observation of the dot is not a measurement of the field or its correlation function.
 
  • #42
I think we are splitting hairs here. Of course the detection of a single photon is a measurement. How else would you call it?
 
  • #43
vanhees71 said:
I think, we are splitting hairs here.
No, we are at the very basis of our eternal differences!
vanhees71 said:
Of course the detection of a single photon is a measurement. How else would you call it?
It might be called a random position measurement of the photon.

But it is certainly not a measurement of a field. The latter would produce approximate values for the field.
 
  • #44
vanhees71 said:
Of course the detection of a single photon is a measurement
But it's not a measurement of the field, since, as you have said, such a measurement requires a large number of dots so you can compute statistics from them.

A "detection of a single photon" is a measurement of where on the detector the single photon hit. (Even that requires theoretical interpretation based on knowledge of the detector, as @A. Neumaier has pointed out.)
 
  • #45
A. Neumaier said:
No, we are at the very basis of our eternal differences!

It might be called a random position measurement of the photon.

But it is certainly not a measurement of a field. The latter would produce approximate values for the field.
I've thought of this in terms of the moment when an analog signal out of some device is converted to a digital form that we then record. The output current from an Avalanche PhotoDiode or similar device is noisy but near zero, then it is noisy but near some value that is distinctly different from zero, which we commonly say is because of an "avalanche" within the device. Discriminating hardware typically monitors the current and decides what time to record as the time at which the output current became non-zero, an event. Either such hardware is from a trusted manufacturer, off a slightly less trusted storeroom shelf, or else an experimenter has designed and built it. In any case, the experimenter will have to verify that the hardware responds appropriately to the signal it receives, and perhaps debug its operation, presumably as shown on a good oscilloscope that can also measure the current from the device at GHz rates.

The output current at the input to the hardware discriminator is a proxy for the electromagnetic field in a small region of the wire where it connects to the discriminating hardware. Loosely, that's a smeared measurement of E(x), not in the original device but at the connection to the discriminating hardware. That's a proxy for what is happening within the APD. The precise timing of recorded events depends on the noisy EM field environment that surrounds the APD, which is in turn determined by whatever other apparatus has been assembled around it. If we change the apparatus around the APD in any way, the statistics of events in the APD will change, although the response to a given modulation might be small enough that it would take months of data collection to be sure any change at all was made.

From a noisy EM perspective such as this, we don't suppose that every event in an APD is caused by one particle. Instead, the statistics of the recorded events allow us to infer properties of the whole apparatus. We have effectively replaced the "'particle and its properties' metaphysics" by a "'prepared noisy EM field and the events it causes in devices that we designed so that events would happen' metaphysics". I like to say that we don't use rocks as measurement devices in physics experiments, we use materials and supporting hardware that two centuries or more of experience have found to be fit for the purpose of providing us with events that we can record. Photographic plates were a good beginning, but we've come a long way since.

I've seen something like this all too many times, but the example I always use as an illustration is Gregor Weihs's Bell-violating experiment from the 1990s because I'm most familiar with it and because his schematic is significantly simpler and clearer than one usually sees for more modern experiments:
[Schematic of Weihs's Bell-test setup, with a notional detector-signal plot at bottom left]

Obviously the output from an APD doesn't look much like the purely notional plot at the bottom left, which is directly below what I have called the discriminating hardware that is attached to the "Silicon Avalanche Diode" by a signal line, but certainly there has to be some feature of the signal on the signal line that the discriminating hardware can identify so that it decides to put a time into the "Time-Tag List".
With apologies that I'm no kind of experimentalist and not much of a theorist to be saying all this. This kind of thinking works for me, for what it's worth.
 
  • #46
I interpret these questions, and the "construction" of complex observables from elementary inputs, in a context where the observer/agent/physicist just processes and categorizes inputs, where the representation can also be changed, for example by creating higher phase spaces and embedding dimensions. All this is internal to the observer, postprocessing that per se has nothing to do with "external reality". So in a way I think all "measurements" may well be hierarchical, and that is what I would expect from a unification approach anyway.

It's tempting to ask: can we, from a sequence of "one-type clicks" (say quanta of a "unification field"), find patterns and create other "layers" with the illusion of more "complex/composite clicks", time, and space? To the point where, at the other end, we have the Standard Model phenomenology, where other fields are defined in terms of postprocessing the clicks from our uberfield counter?

I liked this old but suggestive mind-challenge of how to create dimensions from almost nothing by iterating processing in layers: https://math.ucr.edu/home/baez/nth_quantization.html. I think this is even key to grasping what a quantum field is, or can be, at least.

This is, I think, the perspective linking registering a click event to thinking "we measured the field".

A. Neumaier said:
How to get a whole function from a single mark???
The same way you get (or don't get) a whole function from n marks?

/Fredrik
 
  • #47
Peter Morgan said:
The output current at the input to the hardware discriminator is a proxy for the electromagnetic field in a small region of the wire where it connects to the discriminating hardware. Loosely, that's a smeared measurement of E(x), not in the original device but at the connection to the discriminating hardware.
It is a measurement of the current in the wire, not of the electromagnetic field that caused the ejection of an electron. That's quite different!
Fra said:
The same way you get (or don't get) a whole function from n marks?
But for n=1 (as in the case under discussion) you don't get a function but only a point.
 
  • #48
A. Neumaier said:
But for n=1 (as in the case under discussion) you don't get a function but only a point.
Yes, would you get a "function" for n=100? So to find the "best match" of a function from that one point, perhaps in practice it is some Dirac or Gaussian, sharply peaked around that point, with a width corresponding to the "size" of the point. The size of a "real point" would be like the size of the pixel, or whatever the sensor element is like.

Indeed such an approximation is terribly bad. But it's still an approximation, and one can argue it is the best one possible, given only one sample? What I am getting at is that I think the distinction is not a conceptual one, but a matter of scale: when is the uncertainty "small enough"?
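As a toy numerical sketch of this (the Gaussian width standing in for the "pixel size" is an arbitrary choice, as is the underlying distribution): a density estimate built by placing one bump per mark is a terrible approximation for n = 1 and a much better one for n = 1000.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(-4.0, 4.0, 161)
dx = x[1] - x[0]
true_pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # distribution behind the marks

def estimate(marks, width=0.3):
    """One Gaussian bump per mark; `width` plays the role of the pixel size."""
    bumps = np.exp(-(x[None, :] - marks[:, None])**2 / (2 * width**2))
    est = bumps.sum(axis=0)
    return est / (est.sum() * dx)                  # normalize to a density

marks = rng.normal(size=1000)                      # simulated impact positions
err_1 = np.abs(estimate(marks[:1]) - true_pdf).max()    # one mark: a lone spike
err_1000 = np.abs(estimate(marks) - true_pdf).max()     # many marks: much closer
print(err_1, err_1000)
```

With one mark the "best fit" is just a sharp bump at the dot; the error shrinks only gradually as n grows, which is the scale (not a conceptual boundary) the argument is about.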

/Fredrik
 
  • #49
A. Neumaier said:
It is a measurement of the current in the wire, not of the electromagnetic field that caused the ejection of an electron. That's quite different!
It is different, but it can at least be thought of as the most direct record we have of a measurement of the electromagnetic field. In particular, such measurements are —I think by construction— the only recorded measurement results. From the totality of all such actually recorded measurement results, we infer what we expect we would have recorded as the results of other measurements.
For some physicists, and certainly for some philosophers, this is too empiricist, but I don't claim that those inferences are empty, only that they are of a somewhat different kind than the actually recorded measurement results that are stored on hard drives contemporaneously with the experiment. To be sure, there is a computational and hardware path (for which the finest details are hopefully well-documented) to be considered when trying to understand those actually recorded measurement results as a foundation for inferences about what happened in the rest of the experiment, so that we can think of all measurement results as inferred if we think in a wide enough sense, but it feels as if there are different levels of directness that are substantive enough for it to be worthwhile to keep them in mind.
 
  • #50
Peter Morgan said:
It is different, but it can at least be thought of as the most direct record we have of a measurement of the electromagnetic field. In particular, such measurements are —I think by construction— the only recorded measurement results.
But here what is measured is inside the measurement apparatus. However, a measurement apparatus is designed to measure something outside it; otherwise it would not serve its purpose.
Peter Morgan said:
From the totality of all such actually recorded measurement results, we infer what we expect we would have recorded as the results of other measurements.
vanhees71 claimed that the apparatus measures the electric field intensity impinging from a source on the screen. Thus we should be able to infer approximate values of the electric field intensity. But this is not the case. So whatever is measured in (inferred from) a single detector event, it is not a measurement of the electric field intensity in the beam.
 
  • #51
Fra said:
Indeed such an approximation is terribly bad.
Yes. I am approximately the richest man in the world! And at the same time the poorest!
 
  • #52
A. Neumaier said:
Yes. I am approximately the richest man in the world! And at the same time the poorest!
Makes perfect sense! :smile:

/Fredrik
 
  • #55
A. Neumaier said:
No, we are at the very basis of our eternal differences!

It might be called a random position measurement of the photon.

But it is certainly not a measurement of a field. The latter would produce approximate values for the field.

A photon has no position to begin with. A photon is a one-quantum Fock state of the electromagnetic field. There is a probability distribution for detecting it at a given place and a given time (with finite resolution of both of course). The probability is given by the (normalized) energy density of the electromagnetic field, which results from the analysis of the photoelectric effect on the detector material in the standard dipole approximation. I thought that's your view: the observables are given by correlators of local observable-operators, and indeed the energy density of the em. field is such an observable.

What else should be a measurement of "a field" than that? Also in classical electrodynamics, what's observable of the field are precisely such things as the "intensity", which also classically is given by the energy density.

That's also how the electromagnetic field is operationally defined, i.e., by its actions on charged matter.
 
  • #56
vanhees71 said:
A photon has no position to begin with. A photon is a one-quantum Fock state of the electromagnetic field. There is a probability distribution for detecting it at a given place and a given time (with finite resolution of both of course).
This is commonly called an (approximate) position measurement. It measures the transverse position orthogonal to the beam direction. This is represented by a well-defined operator with two commuting components.
vanhees71 said:
The probability is given by the (normalized) energy density of the electromagnetic field,
Measuring the probability would therefore be a measurement of the field intensity. But from a single photon one cannot get a probability, hence no measurement of the field.
vanhees71 said:
I thought that's your view: the observables are given by correlators of local observable-operators, and indeed the energy density of the em. field is such an observable.
Yes, the observables are the correlation functions but:
vanhees71 said:
A correlation function is a statistical quantity and cannot be empirically studied with just a single experiment. One photon leaves one dot at a random spot, not a distribution.
Thus the observation of a single photon impact (which is what we were discussing) does not measure these observables.
vanhees71 said:
What else should be a measurement of "a field" than that?
Anything that results in an approximate value of the smeared field at some point.
vanhees71 said:
Also in classical electrodynamics, what's observable of the field are precisely such things as the "intensity", which also classically is given by the energy density.
The classical intensity is a field, and observing it at x gives the value of the field averaged near x. The same holds in the quantum case with my definition of measurement, but not with your contrived one.
 
  • #57
A. Neumaier said:
Thus the observation of a single photon (which is what we were discussing) impact does not measure these observables.

What if we see the general pattern here as similar to "deep learning" methods with layers of abstraction, where we are talking about "determining/measuring one abstraction" in a higher layer based on data flowing from the lower levels? Indeed the higher abstractions are driven by data, and single samples will not drive the process. And while it is true that the confidence in higher-level constructs depends on, and requires, a certain "amount" of data from lower levels, the detection of lower-level "events" is still the building block of the "measurement process"?

The association I can't help making is that, from the "quantization step", we can define a "field" as a higher-layer construct, defined in terms of processing lower layers (in its simplest form this can be averaging, but it can also be a less trivial transformation). It seems that the "measurement" of ANYTHING must necessarily start with the detection of some elementary events. The question is: how many data points do we need to motivate a given construct? This would also make such higher constructs contextual, as expected, since they are supported by the available observations.

It seems to me this can be made a deep question: what is a field, what is an observable? How are they defined conceptually and operationally, rather than merely mathematically (which, as discussed, is in part fiction)?

/Fredrik
 
  • #58
A. Neumaier said:
This is commonly called an (approximate) position measurement. It measures the transverse position orthogonal to the beam direction. This is represented by a well-defined operator with two commuting components.
The position is the position of the detector. There's no position operator for the photon. In relativistic QFT time and position (four-vector) components are parameters with precisely this meaning.
A. Neumaier said:
Measuring the probability would therefore be a measurement of the field intensity. But from a single photon one cannot get a probablilty, hence no measurement of the field.
As in any QT the state refers to probabilistic properties of ensembles, of course.
A. Neumaier said:
Yes, the observables are the correlation functions but:

Thus the observation of a single photon (which is what we were discussing) impact does not measure these observables.
A photodetector registers a single photon at a given space-time point (within a finite resolution). That's a measurement par excellence, as it is defined in standard QT.
A. Neumaier said:
Anything that results in an approximate value of the smeared field at some point.
This can of course only be achieved by measuring an ensemble (or rather a "statistical sample") of equally prepared systems.
A. Neumaier said:
The classical intensity is a field, and observing it at x gives the value of the field averaged near x. The same holds in the quantum case with my definition of measurement, but not with your contrived one.
I don't see where we differ in this respect: the expectation value of a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## can of course again only be measured on an ensemble, not a single system, and the expectation value, as only one of the moments of the corresponding probability distribution, describes only a small aspect of the state.
 
  • #59
vanhees71 said:
I don't see where we differ in this respect: the expectation value of a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## can of course again only be measured on an ensemble, not a single system, and the expectation value, as only one of the moments of the corresponding probability distribution, describes only a small aspect of the state.
In your view the "statistical samples" approximate the "ensemble". But the ensemble is a fiction in the sense of requiring infinite repeats etc. This is the "problem".

If I understand Neumaier, he thinks the "statistical sample" approximates not some fictional ensemble but the value of an actual "real" field (that is defined by accounting for ALL the actual variables in the universe, even those the local observer isn't informed about).

So I think the disagreement is more: WHAT does our "statistical sample" approximate?

/Fredrik
 
  • #60
There is no problem, or if there is, it's a problem of all kinds of measurement within classical physics as well. You can always only prepare a finite number of systems and measure them. In general both the preparation and the measurement are only approximate, etc. All this is covered by the standard procedures of the experimentalists, i.e., you have to do a careful analysis of the statistical and systematic errors in an experiment.

A statistical sample approximates an ensemble. Since ##\vec{E}## and ##\vec{B}## don't commute, and since their possible values are continuous, they can never be precisely determined. It's as with position and momentum in non-relativistic physics.
 
  • #61
vanhees71 said:
The position is the position of the detector.
The detector is a screen and has many positions, one of them responds to the photon. The two coordinates of the responding position define the transverse position of the photon measured.
vanhees71 said:
There's no position operator for the photon.
For the photon, in the observer frame, there is no 3-component position operator with commuting components transforming properly under rotations.

But there are commuting operators for the two components of position transversal to the beam direction. Thus transverse position can be measured with in principle arbitrary accuracy.
vanhees71 said:
A photodetector registers a single photon at a given space-time point (within a finite resolution). That's a measurement par excellance as it is defined in standard QT.
But it is a measurement of a particle, not of a field. If it measured a field (the energy intensity, as you claim), which value would we get for the incident field at the impact point? And which values at non-impact points? (Not seeing a response is also a measurement of photon presence, but not a field measurement.)
vanhees71 said:
This can of course only be achieved by measuring an ensemble (or rather a "statistical sample") of equally prepared systems.

I don't see, where we differ in this respect: the expecation value of a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x)## can of course again only be measured on an ensemble not a single system, and the expectation value as only one of the moments of the corresponding probability distribution only describes a small aspect of the state.
An engineer measures a local observable like the electromagnetic field ##(\vec{E}(x),\vec{B}(x))## with a single measurement at x, not by statistical means. This works well, although only a single electromagnetic field is prepared, not an ensemble of fields.

Statistics is needed only for extremely weak fields, such as that defined by a single photon state, and only to accumulate responses, not to average field values.
 
  • #62
Fra said:
If I understand Neumaier, he thinks the "statistical sample" approximates not some fictional ensemble but the value of an actual "real" field (that is defined by accounting for ALL the actual varialbes the in universe, even those the local observer isn't informed about).
An engineer records no statistical sample but only one value at any point where a measurement is made.

For very low intensity fields, the interpretation of the statistical sample is as usual - it approximates the fictional ensemble and produces in a large number of detector responses the q-expectation = 1-point function of the intensity field at the positions of the screen.

But a single detector response has no quantitative information about the intensity (except that it is positive at the response position). Thus it cannot be called a field measurement.
 
  • #63
vanhees71 said:
it's a problem of all kinds of measurement also within classical physics. You can always only prepare a finite number of systems and measure them.
In classical physics we prepare one electromagnetic field and can measure it anywhere with a single measurement, provided the intensity of the field is large enough. The smaller the intensity, the larger the exposure time needed for an accurate measurement.

The same holds verbatim in quantum field theory: we prepare one electromagnetic field and can measure it anywhere with a single measurement, provided the intensity of the field is large enough. The smaller the intensity, the larger the exposure time needed for an accurate measurement.
 
  • #64
Independently of the measured system, be it describable with good accuracy within classical physics or only within QT, you always have to repeat an experiment on a "sample of equally prepared systems" very often to be able to evaluate the statistical and systematic errors.

In the discussion about the measurement of the electromagnetic field and its possible approximation within classical electrodynamics, it's clear that a classical electromagnetic field like the field from a laser pointer is described by a coherent state of QT. The intensity (i.e., the em. field energy density) is a measure of the field strength. If the coherent state is of high intensity, the photon number (i.e., the total energy divided by ##\hbar \omega##, where ##\omega## is the frequency of the excited laser mode) is Poisson distributed. In particular, ##\Delta N=\sqrt{\langle N \rangle}##. This means that ##\Delta N/\langle N \rangle=1/\sqrt{\langle N \rangle}## is small for ##\langle N \rangle \gg 1##, i.e., for high-intensity coherent states. Then you'll find that the repeated measurement indeed scatters only weakly around the average value, and thus in such a case the description as a classical em. field is a good approximation. For very "dim laser light", where in the extreme you can have ##\langle N \rangle <1##, this is no longer the case, and the quantum description is needed. In this case the coherent state is mostly "vacuum" and it's very unlikely to measure even one photon in a given time. That's why in this case you'll see the "quantum noise" and the discreteness of the registration processes of single photons.
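A short simulation illustrates these counting statistics (a sketch; the particular ##\langle N \rangle## values and shot counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def coherent_counts(mean_n, shots=200_000):
    """Photon counts for a coherent state: Poisson distributed with mean <N>."""
    return rng.poisson(mean_n, size=shots)

bright = coherent_counts(10_000)  # high-intensity mode: classical-looking beam
dim = coherent_counts(0.5)        # "dim laser light" with <N> < 1

# Relative fluctuation Delta N / <N> = 1/sqrt(<N>) is tiny for the bright beam...
print(bright.std() / bright.mean())     # ~ 1/sqrt(10000) = 0.01
# ...while for <N> = 0.5 most shots register no photon at all (mostly "vacuum").
print((dim == 0).mean())                # ~ exp(-0.5) ~ 0.61
```

The bright beam's counts scatter weakly around the mean, so a classical field description works; for the dim beam the discreteness of the individual registrations dominates.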
 
  • #65
vanhees71 said:
Independently of the measured system, be it describable with good accuracy within classical physics or only within QT, you always have to repeat an experiment on a "sample of equally prepared systems" very often to be able to evaluate the statistical and systematic errors.
For a macroscopic measurement (when an engineer measures a field; in particular, for most classical measurements) one rarely takes a sample; one just measures a single time. One needs a sample only in those cases where the measurement results are so noisy that one needs to average a large number of measurement results.
vanhees71 said:
For very "dim laser light", where in the extreme you can have ##\langle N \rangle <1## this is no longer the case, and the quantum description is needed.
No, only a longer exposure is needed before a measurement of the field results.

The reason is that for a coherent state input, the quantum description gives identical results for the final intensity as the classical description.

But when only one photon arrived, neither the classical nor the quantum description allows you to obtain a value for the intensity from the recorded dot. Thus you don't have an intensity measurement, only a photon detection.
vanhees71 said:
That's why in this case you'll see the "quantum noise" and the discreteness of the registration processes of single photons.
Yes, and that's why you don't have an intensity measurement.
 
  • #66
A. Neumaier said:
The smaller the intensity, the larger the exposure time needed for an accurate measurement.
This is a solution only if a stationarity assumption holds, right? Isn't the stationarity assumption a kind of "repeated preparation" in disguise?

/Fredrik
 
  • #67
vanhees71 said:
There is no problem or if there is a problem it's a problem of all kinds of measurement also within classical physics.
Yes, I think it is in principle a problem in classical physics too, but since classical physics is non-contextual, in practice it remains a practical problem of the physicist's ignorance...
vanhees71 said:
You can always only prepare a finite number of systems and measure them. In general also both the preparation and the measurement are only approximate etc.
...because the right value, of which we have approximations, is independent of the measurement. It's just ignorance.

Neumaier has some idea that a similar argument can be applied to QM in his interpretation. But this does not work in the ensemble interpretation, as you cannot repeat all experiments, for example in cosmology. (But I don't find that a satisfactory solution, as I take the measurement and observer perspective as central.)

/Fredrik
 
  • #68
Fra said:
This is a solution only if stationarity assumptions holds, right?
It must be nearly stationary during the time of observation. For less stationary sources with a known evolution law one gets a 1-point function heavily smeared in time.
Fra said:
Isn't the stationarity assumption a kind of "repeated preparation" in disguise?
If you think only statistically, then you need to use this disguise. But if you think in terms of 1-point functions, stationarity is not needed and one just has different smearing functions.
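The role of the smearing function can be sketched numerically (a toy example; the signal and the Gaussian windows are arbitrary stand-ins): the recorded number is the 1-point function averaged against the window, and heavier smearing simply washes out the faster variation.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
signal = 1.0 + 0.5 * np.sin(2 * np.pi * t)   # a non-stationary "1-point function"

def smeared(center, width):
    """Average the signal against a normalized Gaussian window around `center`."""
    w = np.exp(-(t - center)**2 / (2 * width**2))
    w /= w.sum() * dt                          # normalize: integral of w is 1
    return float(np.sum(signal * w) * dt)

narrow = smeared(5.25, 0.05)   # short exposure: close to the local value ~ 1.5
wide = smeared(5.25, 3.0)      # long exposure: the oscillation averages out, ~ 1.0
print(narrow, wide)
```

A nearly stationary source gives the same number for either window; a varying source just comes back heavily smeared in time, as described above.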
 
  • #69
A. Neumaier said:
For less stationary sources with a known evolution law one
In your interpretation, is this evolution law merely "in principle" knowable by some "omnipresent superobserver" but still assumed objectively determined?

How do you put that in terms of process tomography? Do you treat an actual finite observer's processing of statistical samples as just an "approximation" of something "real"?

Or how is an observer-independent definition of law (Hamiltonian?) defined for, say, arbitrary observers in non-inertial frames? (Conceptually, that is! As we know, there is no full quantum gravity theory yet.)

/Fredrik
 
  • #70
Fra said:
In your interpretation, is this evolution law merely "in principle" knowable by some "omnipresent superobserver" but still assumed objectively determined?
The evolution law of the universe is known only to God. But it is assumed to exist and to be deterministic and observer-independent. In this sense it is objective. Observers form their own approximate models of this evolution law.
Fra said:
How do you put that in terms of process tomography? Do you treat an actual finite observer's processing of statistical samples as just an "approximation" of something "real"?
Yes. Any collection of observations informs the observer about properties of the universe near its spacetime position. From this information observers construct, based on statistics and subjective plausibility (= prejudice), their models for prediction.
Fra said:
Or how is an observer-independent definition of law (Hamiltonian?) defined for, say, arbitrary observers in non-inertial frames? (Conceptually, that is! As we know, there is no full quantum gravity theory yet.)
For the universe, it is given by a classical action for fields defined on spacetime, to be interpreted somehow as a quantum dynamical law.
 
