QM Eigenstates and the Notion of Motion

  • #51
vanhees71 said:
how you define expectation values when you forbid the use of Born's rule
Where in the formula for expectation values does Born's rule get used?
 
  • #52
You need the probability (distribution) to calculate the expectation value. Let ##\hat{A}## be the self-adjoint operator representing the observable and ##|a,\alpha \rangle## a complete orthonormal set of (generalized) eigenvectors of ##\hat{A}##. According to Born's rule, the probability to obtain the value ##a## when measuring ##A## on a system prepared in the state ##\hat{\rho}## is
$$P(a)=\sum_{\alpha} \langle a,\alpha |\hat{\rho}|a,\alpha \rangle.$$
From this you get the expectation value
$$\langle A \rangle = \sum_{a} P(a) a = \sum_{a,\alpha} \langle a,\alpha| \hat{\rho} \hat{A}|a,\alpha \rangle=\mathrm{Tr} (\hat{\rho} \hat{A} ).$$
Of course you can calculate the trace using any complete orthonormal set you like, but to derive this formula you need the assumption about the probabilities of finding the eigenvalue ##a## when measuring the observable ##A##, given the state ##\hat{\rho}##.

Of course, if there are continuous spectra involved, then instead of the sums you have to take the corresponding integrals.
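
For concreteness, here is a small numerical check (my own toy Python sketch, with a randomly generated state and observable) that summing ##a\,P(a)## over the eigenvalues reproduces the trace formula:
```python
# Toy check: Born-rule probabilities P(a) = <a|rho|a>, summed as sum_a a*P(a),
# reproduce Tr(rho A) for a random 3-level state and observable.
import numpy as np

rng = np.random.default_rng(0)

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = M @ M.conj().T
rho /= np.trace(rho).real                 # random density matrix (positive, unit trace)

B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (B + B.conj().T) / 2                  # random Hermitian observable

eigvals, eigvecs = np.linalg.eigh(A)      # eigenvalues a and eigenvectors |a>
P = np.array([np.real(eigvecs[:, i].conj() @ rho @ eigvecs[:, i]) for i in range(3)])

print(np.sum(eigvals * P))                # sum_a a * P(a)
print(np.real(np.trace(rho @ A)))         # Tr(rho A) -- the same number
```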
 
  • #53
vanhees71 said:
The only thing I do not understand concerning your interpretation is how you define expectation values when you forbid the use of Born's rule, but I don't think that we'll ever come to a consensus about this.
vanhees71 said:
You need the probability (distribution) to calculate the expectation value.
I once worked out a "simple" example of how expectation values can be used in an idealized model without any underlying stochastic features (which would give rise to probabilities):
gentzen said:
... "The" real system could be a class of similarly prepared systems, it could be a system exhibiting stochastic features, or it could also be just some object with complicated features that we want to omit in our idealized model.

The absence of stochastic (and dynamic) features for this last case makes it well suited for clarifying the role of expectation values for the interpretation, and for highlighting the differences to ensemble interpretations.
As an example, we might want to describe the position and extent of Earth by the position and radius of a solid sphere. Because Earth is not a perfect sphere, there is no exactly correct radius. Therefore we use a simple probability distribution for the radius in the model, say a uniform distribution between a minimal and a maximal radius. We can compare our probabilistic model to the real system by a great variety of different observations, however only very few of them are "intended". (This is independent of the observations that are actually possible on the real system with reasonable effort.) ...

Of course, you will have to also read the mathematical part following the above quote, and try to understand for yourself what it shows, and why I included it, i.e. which question is answered by that part, and why that question was important to me.

Skipping the "philosophy" at the beginning of that post should be possible, but please still keep in mind the following disclaimer in the introduction:
gentzen said:
My intention is to follow the advice to "use your own words and speak in your own authority". Because "I prefer being concrete and understandable over being unobjectionable," this will include objectionable details that could be labeled as "my own original ideas".
 
  • #54
vanhees71 said:
What I also don't understand is your narrow interpretation of the "ensemble".
Ditto. If you want a really flexible ensemble, then you have the microstate of the agent! :oldbiggrin:

It encodes the history of all its interactions, which one can see as one long flexible "preparation" of the present. So every "now" of the agent is "prepared/forged" by its own history.

vanhees71 said:
Of course quantum theory also applies to statistics made with a single system.
Yes, as there is only one "now", and only one "tomorrow". No agent cares if it was wrong a week ago; it always focuses on the one future. And the decision about what to do tomorrow is made only once.

Indeed the narrow ensemble view makes no sense :wink:

I am slowly making some associations to Neumaier's view, where I think that instead of a fictive ensemble or microstate, he assumes (or defines) some sort of equilibrium parameters: one assumes the single case to be in some sort of equilibrium with the total ensemble, just like a small piece of matter can be in equilibrium with the bulk via temperature etc.? This is some sort of equilibrium assumption. In the agent picture I also at some point envision that an agent is in "equilibrium" in some abstract sense with the environment, but the question is HOW, and at what configuration? Here is where Neumaier loses me; I don't think it's covered. I am sympathetic to the critique against the ensemble, but there seem to be several ways around it, and maybe the paths meet at some point.

/Fredrik
 
  • #55
What's your definition of "agents"? I'm doing well with mundane measurement devices...
 
  • #56
vanhees71 said:
The only thing I do not understand concerning your interpretation is how you define expectation values when you forbid the use of Born's rule
The statistical (measured) expectation is the sample mean of measured single data instances.

The quantum (theoretical) expectation is defined in terms of the density operator ##\rho## characterizing the state by the formula ##\langle A\rangle:=Tr(\rho A)##. This mathematical formula is a pure definition, and has nothing to do with measurement, hence it does not involve Born's rule (which depends on measurement).

The law of large numbers implies that under the right conditions (sufficiently large sample of uncorrelated single data instances) the statistical (measured) expectation is arbitrarily close to the quantum (theoretical) expectation.
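
As a rough illustration (a sketch of my own; the qubit state and observable below are arbitrary choices), one can simulate uncorrelated single measurements and watch the sample mean approach ##\mathrm{Tr}(\rho A)##:
```python
# Simulated single measurements of an observable A on a fixed qubit state rho:
# the sample mean of the outcomes approaches Tr(rho A) as the sample grows.
import numpy as np

rng = np.random.default_rng(1)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # some density matrix
A = np.array([[0, 1], [1, 0]], dtype=complex)             # observable (Pauli-x here)

eigvals, eigvecs = np.linalg.eigh(A)
P = np.real(np.einsum('ij,jk,ki->i', eigvecs.conj().T, rho, eigvecs))  # P(a) = <a|rho|a>

theory = np.real(np.trace(rho @ A))
for n in (10, 1_000, 100_000):
    samples = rng.choice(eigvals, size=n, p=P)            # n uncorrelated single outcomes
    print(n, samples.mean(), theory)                      # sample mean -> Tr(rho A)
```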
 
Last edited:
  • #57
vanhees71 said:
You need the probability (distribution) to calculate the expectation value.
No, you don't. You just need the wave function or density matrix. The expectation value is ##\bra{\psi} A \ket{\psi}## or ##Tr \rho A##. You can calculate that however you want, by hook or by crook; there is no need to expand ##\ket{\psi}## in a basis corresponding to any particular observable, whether it's ##A## or any other.
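
For instance (a toy example of my own; the basis ##U## below is just a random unitary), the trace can be evaluated in a basis that has nothing to do with ##A##:
```python
# Tr(rho A) evaluated directly and in an arbitrary orthonormal basis U: by the
# cyclic invariance of the trace, the result is the same.
import numpy as np

rng = np.random.default_rng(2)

rho = np.diag([0.5, 0.3, 0.2]).astype(complex)             # some density matrix
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (B + B.conj().T) / 2                                   # some observable

U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))  # random unitary

print(np.real(np.trace(rho @ A)))                          # direct evaluation
print(np.real(np.trace(U.conj().T @ rho @ A @ U)))         # same value in the U basis
```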
 
  • Like
Likes Simple question, mattt and gentzen
  • #58
vanhees71 said:
to derive this formula you need the assumption about the probabilities
Perhaps on your preferred interpretation you do; but as I understand it, the whole point of the thermal interpretation is that it doesn't require this; the expectation value formula for any operator and state is simply taken as a given and does not have the interpretation it normally has.
 
  • #59
Ok, as a physicist I want to have some argument for why I choose such a definition. If you do pure math, of course you can state anything you like as "axioms". For me, the relation to physics, i.e., to observations and measurements, is then lost.

The main point is that I don't understand the operational meaning of this "trace formula" according to @A. Neumaier, since he explicitly doesn't want to interpret it in terms of probabilities at all. I never understood how I should instead think about the operational meaning of the formula within this new interpretation.
 
Last edited:
  • #60
vanhees71 said:
The main point is that I don't understand the operational meaning of this "trace formula" according to @A. Neumaier, since he explicitly doesn't want to interpret it in terms of probabilities at all. I never understood how I should instead think about the operational meaning of the formula within this new interpretation.
I gave the following very clear operational meaning:
A. Neumaier said:
The law of large numbers implies that under the right conditions (sufficiently large sample of uncorrelated single data instances) the statistical (measured) expectation is arbitrarily close to the quantum (theoretical) expectation.
I elaborated on this operational meaning in much more detail in my quantum tomography paper.
 
  • Like
Likes mattt and PeterDonis
  • #61
vanhees71 said:
What's your definition of "agents"? I'm doing well with mundane measurement devices...
Conceptually, an agent is an internal observer, meaning an observer that is an active participant, unlike a passive external observer who only prepares and records. Conceptually, an agent also has a limited capacity for information processing, unlike the normal external observer, which can process and record unlimited information about the "system".

Conceptually, the external observer should be recovered in the limit where the agent becomes infinitely massive and dominant relative to its environment, for example where a classical laboratory "observes" a subatomic event.

So my notion of agent is compatible with the standard notion for the normal corroborated domain of QM. But differences are expected when exploring extremes, such as unification and gravity.

And asking what an "agent" is, physically, is simply the same as asking "what is matter," in my opinion.

/Fredrik
 
  • #62
A. Neumaier said:
Note that there is nothing more to my interpretation than that! Everything else is just application of this to various issues regarding single systems!

The quantum (theoretical) expectation is defined in terms of the density operator ##\rho## characterizing the state by the formula ##\langle A\rangle:=Tr(\rho A)##. This mathematical formula is a pure definition, and has nothing to do with measurement, hence it does not involve Born's rule (which depends on measurement).

The law of large numbers implies that under the right conditions (sufficiently large sample of uncorrelated single data instances) the statistical (measured) expectation is arbitrarily close to the quantum (theoretical) expectation.
What about response rates? The law of large numbers implies that under the right conditions the statistical relative frequency of an outcome ##a## is arbitrarily close to the theoretical response rate ##Tr(\rho P_a)##. Does the thermal interpretation privilege the quantum expectation over quantum response rates?
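
For concreteness, here is a small numerical check (hypothetical qubit numbers of my own) of that relative-frequency statement for a single outcome ##a##:
```python
# Relative frequency of a single outcome a versus the response rate Tr(rho P_a),
# where P_a projects onto the eigenvector |a> of the measured observable.
import numpy as np

rng = np.random.default_rng(3)

rho = np.array([[0.6, 0.1], [0.1, 0.4]], dtype=complex)
A = np.array([[0, 1], [1, 0]], dtype=complex)
eigvals, eigvecs = np.linalg.eigh(A)

a = 1                                                    # index of the chosen outcome
P_a = np.outer(eigvecs[:, a], eigvecs[:, a].conj())      # projector onto |a>
rate = np.real(np.trace(rho @ P_a))                      # theoretical response rate

probs = np.real(np.einsum('ij,jk,ki->i', eigvecs.conj().T, rho, eigvecs))
outcomes = rng.choice(len(eigvals), size=100_000, p=probs)
print((outcomes == a).mean(), rate)                      # relative frequency vs Tr(rho P_a)
```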
 
  • #63
Morbert said:
What about response rates? The law of large numbers implies that under the right conditions the statistical relative frequency of an outcome ##a## is arbitrarily close to the theoretical response rate ##Tr(\rho P_a)##. Does the thermal interpretation privilege the quantum expectation over quantum response rates?
The discussion in my quantum tomography paper is phrased in terms of response rates!
 
  • #64
A. Neumaier said:
I gave the following very clear operational meaning:

I elaborated on this operational meaning in much more detail in my quantum tomography paper.
The law of large numbers implies that under the right conditions (sufficiently large sample of uncorrelated single data instances) the statistical (measured) expectation is arbitrarily close to the quantum (theoretical) expectation.
So finally, I'm allowed to interpret the formula as probabilistic? Then, I guess, there's no discrepancy between your and the minimal statistical interpretation anymore. Of course, in practice all ensembles (i.e., repetitions of measurements on equally prepared systems) are finite, and thus the measured statistics are only approximations to the predicted probabilities. That's of course implied by the frequentist interpretation of probabilities, which in my opinion still is the only operational meaning they have. I've no clue what the QBists' idea means in the practice of measurements.
 
  • #65
vanhees71 said:
I've no clue what the QBists' idea means in the practice of measurements.
If the agent aims for the bull's eye and finds that it consistently does hit (within some reasonable distribution), then there is nothing it can learn or improve. I.e., the agent's expectations are well in tune with its environment, and the situation is trivial (all is unitary).

The descriptive probability of the QBist is, I think, in line with yours. The new thing is that there is no such meaning (and consequence) of a "guiding" or normative probability in your minimalist view?

If the agent aims for the bull's eye and finds that it is consistently off, it can learn something: the agent adjusts its aim. This does not mean there was a pathology, because being wrong is entirely normal. And correcting the expectations means you need to change the information somehow. State revision is the simplest way; the more extreme measure is to revise the whole Hilbert space.

To test this, agent2 can "prepare" agent1 by deliberately disinforming it (relative to agent2), and it should then not hit the bull's eye. This would indicate that the agent's interaction depends on its state of information, not on some matter of objective facts.

/Fredrik
 
  • #66
WernerQH said:
Thanks for trying to moderate. :-) But I can't understand why it should be off-topic to criticize what I perceive as a distortion of the term "motion" as most people use it.
I guess what I perceived as being off-topic was the suggestion on how to teach quantum mechanics, perhaps because I have zero experience with that topic, especially when it comes to school teachers.

With respect to the term "motion," I initially simply didn't get that there were different possible interpretations. For me, it was more a discussion about the properties of stationary states, especially bound normalizable states. (For unbound non-normalizable states, it was already clarified before that the question has a clear unambiguous answer in each specific case.) I don't necessarily want to defend "our" use of the word "motion," but it "should" have been clear from the context what I meant. But the same could also be said for vanhees71's use, so we both (i.e. vanhees71 and I) simply didn't notice that we were talking past each other.

WernerQH said:
Does a harmonic oscillator oscillate? Only if it is not in an energy eigenstate?
That was indeed the question I was interested in. And I was slightly frustrated, because vanhees71 would simply assert that it didn't, without making any serious effort to explain to me why. Instead, he simply redefined "the question" to mean something much more trivial, at least that was how it felt to me.
WernerQH said:
How do you prepare it in the state ## n = 3 ## ? The formula in my previous post #26 is most easily derived by summing over energy eigenstates, but using coherent states you can obtain the exact same formula. I can't make sense of @vanhees71's point that one is allowed to speak of motion only when one uses coherent states.
In a coherent state, the oscillator will certainly show the type of systematic ("classical") movement (or oscillation) that vanhees71 is expecting when he "speaks of motion". My guess is that it does not "necessarily oscillate systematically" in an energy eigenstate, not even in a state like ##n=3##. On the other hand, "systematic oscillation" will probably also be able to generate the required statistics, it is just that many more "other fluctuations" are also "not excluded" by the required statistics. But I guess that the constant Bohmian solution is indeed excluded, if one investigates the context thoroughly enough. But I am not sure.
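
To make my guess concrete, here is a small sketch (my own, in Python, with ##\hbar=m=\omega=1## and an arbitrarily chosen amplitude ##\alpha=2##) comparing ##\langle x(t)\rangle## in a coherent state with the energy eigenstate ##n=3##:
```python
# <x(t)> for a harmonic oscillator: oscillates in a coherent state, but is
# constantly zero in an energy eigenstate such as n = 3.
import numpy as np
from math import factorial

N = 40                                        # Fock-space truncation
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)              # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)             # position operator (hbar = m = omega = 1)

alpha = 2.0                                   # coherent-state amplitude (arbitrary choice)
coherent = np.exp(-abs(alpha)**2 / 2) * np.array(
    [alpha**k / np.sqrt(factorial(k)) for k in n])
fock3 = np.zeros(N)
fock3[3] = 1.0                                # the energy eigenstate n = 3

for t in np.linspace(0.0, 2 * np.pi, 5):
    U = np.diag(np.exp(-1j * (n + 0.5) * t))  # time evolution exp(-iHt) in the energy basis
    for label, psi in (("coherent", coherent), ("n=3     ", fock3)):
        psi_t = U @ psi
        print(label, round(t, 2), np.real(psi_t.conj() @ x @ psi_t))
# coherent state: <x(t)> = sqrt(2)*alpha*cos(t); eigenstate n=3: <x(t)> = 0 for all t
```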
 
  • #67
vanhees71 said:
So finally, I'm allowed to interpret the formula as probabilistic?
That was always the case: If the circumstances allow one to do enough statistics, yes.

Otherwise you don't have enough samples to claim an ensemble, but my interpretation still tells you what actually happens, since this is determined by the state and not by measurement. In particular, single measurements on macroscopic objects produce accurate results in spite of having no statistics at all!
vanhees71 said:
Then, I guess, there's no discrepancy between your and the minimal statistical interpretation anymore.
My point was always that my thermal interpretation needs much less baggage than the standard way of discussing everything in terms of Born's rule, and is more general since it accommodates measurements described by POVMs without any special trickery such as postulating an ancilla that no one ever sees.
 
Last edited:
  • Like
Likes Simple question and dextercioby
  • #68
vanhees71 said:
I've no clue what the QBists' idea means in the practice of measurements.
It's the practice of decision making, I'd assume. Even if you only play Russian roulette once, it's better to play with one bullet in the chamber than with 3. Usually this kind of decision making is connected to measurement via the notion of bets: QBists see probabilities as instructions for betting on measurement outcomes.
 
Last edited:
  • #69
gentzen said:
In a coherent state, the oscillator will certainly show the type of systematic ("classical") movement (or oscillation) that vanhees71 is expecting when he "speaks of motion". My guess is that it does not "necessarily oscillate systematically" in an energy eigenstate, not even in a state like ##n=3##.
My guess is the exact opposite. It's the nature of a harmonic oscillator to perform systematic oscillations, as expressed by its response function. The more so, the higher the energy is. That the average deflection is constantly zero for an energy eigenstate is just because the phase of the oscillation is maximally uncertain. The position is averaged over a complete period. But this doesn't mean that there is no motion at all. That would be taking an inappropriate theoretical picture too literally.
gentzen said:
On the other hand, "systematic oscillation" will probably also be able to generate the required statistics, it is just that many more "other fluctuations" are also "not excluded" by the required statistics.
Looking at the paper about the motion of the LIGO-mirrors (that vanhees71 quoted) it strikes me that "quantum" and "thermal" fluctuations enter in the same (familiar) way.
 
  • Like
Likes vanhees71 and gentzen
  • #70
From "Quantum Tomography Explains Quantum Mechanics" by @A. Neumaier (https://arxiv.org/abs/2110.05294):
"When a source is stationary, it has a time independent state. In this case, response rates and probabilities can also be measured in principle with arbitrary accuracy. These probabilities, and hence everything computable from them – quantum values and the density operator but not the individual detector events – are operationally quantifiable, independent of an observer, in a reproducible way. Thus the density operator is an objective property of a stationary quantum system, in the same sense as in classical mechanics, positions and momenta are objective properties."


If motion of a subsystem is stipulated to be the change in expectation value of the kinetic energy operator of the centre of mass of that subsystem, then everything is consistent, though it's not clear how this maps to the intuitive notion of motion as change in position, since position is no longer an objective property of the stationary quantum system like it is in classical physics.

[edit] - Or maybe the lesson is to technically apply the same "tomographist" attitude to classical physics
 
Last edited:
  • #71
Morbert said:
It's the practice of decision making, I'd assume. Even if you only play Russian roulette once, it's better to play with one bullet in the chamber than with 3. Usually this kind of decision making is connected to measurement via the notion of bets: QBists see probabilities as instructions for betting on measurement outcomes.
Yes, I understand this, but physical experiments are not about decision making but about figuring out how Nature behaves, and for that I need to gain "enough statistics" in any experiment.

As an application, take the weather forecast: if they say tomorrow there's a 30% probability of rain, it's clear that this means that experience (and models built on this experience) tells us that in about 30% of equal or sufficiently similar weather conditions it will rain, and I can base a decision on this, whether I take the risk of planning a garden party or not. But to figure out reliable probability estimates with some certainty, you need to repeat the random experiment and use the frequentist interpretation of probabilities, and that's also well established within probability theory ("central limit theorem" etc.).
 
  • #72
vanhees71 said:
As an application, take the weather forecast: if they say tomorrow there's a 30% probability of rain, it's clear that this means that experience (and models built on this experience) tells us that in about 30% of equal or sufficiently similar weather conditions it will rain, and I can base a decision on this, whether I take the risk of planning a garden party or not. But to figure out reliable probability estimates with some certainty, you need to repeat the random experiment ...
It is not at all clear what a 30% probability of rain tomorrow means. (And the meaning of sufficiently similar feels quite subjective to me.) Say another weather forecaster works with another model and another systematic, and predicts a 33% probability instead. And yet another one has a systematic that can only predict 0%, 50%, and 100%, and predicts a 0% probability of rain tomorrow. How can you compare their predictions, and decide whose predictions are better or worse? Let's say you try to evaluate all three forecasters over a period of 3 months, i.e. based on approx. 90 predictions by each forecaster. Is there some "objective" way to do this?

"The Art of Statistics" by David Spiegelhalter is a nice book, which also gives an overview of how this type of problem gets approached in practice. One nice way is to compute their Brier score. I find this nice, because it is about the simplest score imaginable, and it has the nice property that predictions of 0% or 100% probability are not special in any way. This reminds me of David Mermin's QBist contribution: predictions with 0% or 100% probability in quantum mechanics are still personal judgments, and don't give any more objectivity than any other prediction.
 
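To illustrate (with toy numbers of my own invention, not taken from the book), scoring the three forecasters over roughly 90 days with the Brier score might look like this:
```python
# Brier score: the mean of (predicted probability minus 0/1 outcome) squared.
# Lower is better, and 0%/100% forecasts get no special treatment.
import numpy as np

rng = np.random.default_rng(7)
days = 90
p_true = rng.uniform(0.0, 0.6, size=days)                 # hidden "true" rain probabilities
rained = (rng.uniform(size=days) < p_true).astype(float)  # what actually happened

forecasters = {
    "model A (well calibrated)": np.clip(p_true + rng.normal(0, 0.03, days), 0, 1),
    "model B (slightly biased)": np.clip(p_true + 0.03, 0, 1),
    "model C (only 0/50/100%)":  np.round(2 * p_true) / 2,
}

for name, p_hat in forecasters.items():
    brier = np.mean((p_hat - rained) ** 2)
    print(name, round(brier, 3))
```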
  • Like
Likes dextercioby
  • #73
Morbert said:
If motion of a subsystem is stipulated to be the change in expectation value of the kinetic energy operator of the centre of mass of that subsystem, then everything is consistent, though it's not clear how this maps to the intuitive notion of motion as change in position, since position is no longer an objective property of the stationary quantum system like it is in classical physics.
A subsystem simply has a smaller observable algebra, and its state is therefore the state obtained by tracing out all the other degrees of freedom.

A nonrelativistic system consisting of ##N## distinguishable particles has ##N## position vectors ##q_i## and ##N## momentum vectors ##p_i##, whose components belong to the observable algebra. A subsystem simply has fewer particles. Their uncertain position and momentum is given by the associated quantum expectation, with uncertainty given by their quantum deviation.

Motion is as intuitive as in the classical case, except that the world lines are now fuzzy world tubes because of the inherent uncertainty in observing quantum values. For example, an electron in flight has an extremely narrow world tube, while an electron orbiting a hydrogen atom in the ground state has a fuzzy world tube along the world line of the nucleus, with a fuzziness of the size of the Compton wavelength. I find this very intuitive!
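
As a minimal illustration of "tracing out" the other degrees of freedom (a discrete two-qubit toy example of my own, not the ##N##-particle case above):
```python
# Reduced state of a subsystem by partial trace: for a Bell state of two qubits,
# tracing out the second qubit leaves the maximally mixed state for the first.
import numpy as np

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
rho_full = np.outer(psi, psi.conj())

rho_4idx = rho_full.reshape(2, 2, 2, 2)                    # indices (i, k, j, l)
rho_sub = np.einsum('ikjk->ij', rho_4idx)                  # trace over the second qubit

print(rho_sub)                                             # diag(0.5, 0.5)
```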
 
  • Like
Likes Simple question and Morbert
  • #74
gentzen said:
It is not at all clear what a 30% probability of rain tomorrow means. (And the meaning of sufficiently similar feels quite subjective to me.) Say another weather forecaster works with another model and another systematic, and predicts a 33% probability instead. And yet another one has a systematic that can only predict 0%, 50%, and 100%, and predicts a 0% probability of rain tomorrow. How can you compare their predictions, and decide whose predictions are better or worse? Let's say you try to evaluate all three forecasters over a period of 3 months, i.e. based on approx. 90 predictions by each forecaster. Is there some "objective" way to do this?
The interpretation given by
vanhees71 said:
As an application, take the weather forecast: if they say tomorrow there's a 30% probability of rain, it's clear that this means that experience (and models built on this experience) tells us that in about 30% of equal or sufficiently similar weather conditions it will rain
can be made quite precise, and allows one to rate the quality of different prediction algorithms. Of course over the long run (which is what counts statistically), one algorithm can be quite accurate (and hence is trustworthy) while another one can be far off (and hence is not trustworthy).

The precise version of the recipe by @vanhees71 is the following:

For an arbitrary forecast algorithm that predicts a sequence ##\hat p_k## of probabilities for a sequence of events ##X_k##,
$$\sigma:=\sqrt{\mathrm{mean}_k\big((X_k-\hat p_k)^2\big)}\ \ge\ \sigma^*:=\sqrt{\mathrm{mean}_k\big(p_k(1-p_k)\big)},$$
where ##p_k## is the true probability of ##X_k##. (Asymptotically, ##\sigma^2\approx\sigma^{*2}+\mathrm{mean}_k\big((p_k-\hat p_k)^2\big)##, so the excess of ##\sigma## over ##\sigma^*## measures how far the forecast probabilities are from the true ones.)

##\sigma## is called the RMSE (root mean squared error) of the forecast algorithm, while ##\sigma^*## is the unavoidable error. The closer ##\sigma## is to ##\sigma^*## the better the forecast algorithm.

Of course, in complex situations the unavoidable error ##\sigma^*## is unknown. Nevertheless, choosing among the available forecast algorithms the one with the smallest RMSE (based on past predictions) is the most rational choice. Nothing subjective is left.
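
A toy simulation (with probabilities invented purely for illustration) of this criterion: a well-calibrated forecaster has an RMSE ##\sigma## close to the unavoidable error ##\sigma^*##, while a biased one stays above it:
```python
# RMSE sigma of two forecasters compared with the unavoidable error sigma*.
import numpy as np

rng = np.random.default_rng(8)
K = 200_000
p = rng.uniform(0.1, 0.9, size=K)             # true probabilities p_k
X = (rng.uniform(size=K) < p).astype(float)   # observed events X_k (0 or 1)

sigma_star = np.sqrt(np.mean(p * (1 - p)))    # unavoidable error

for label, p_hat in (("calibrated", np.clip(p + rng.normal(0, 0.02, K), 0, 1)),
                     ("biased    ", np.clip(p + 0.2, 0, 1))):
    sigma = np.sqrt(np.mean((X - p_hat) ** 2))
    print(label, round(sigma, 4), "unavoidable:", round(sigma_star, 4))
```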
 
Last edited:
  • #75
A. Neumaier said:
Motion is as intuitive as in the classical case, except that the world lines are now fuzzy world tubes because of the inherent uncertainty in observing quantum values. For example, an electron in flight has an extremely narrow world tube, while an electron orbiting a hydrogen atom in the ground state has a fuzzy world tube along the world line of the nucleus, with a fuzziness of the size of the Compton wavelength. I find this very intuitive!
It's the meaning of these world lines and tubes that I am curious about. If we say that, in classical physics, normalised density operators are to replace phase space variables as the objective properties of the system, then if I have a toy ball on a spring, the real properties of the ball are not its position and momentum, but rather those quantities to which, under the right conditions, the measured statistics come arbitrarily close.

But maybe this is fine, and the objective properties of classical theories are also density operators but with zero quantum deviation. Maybe there is a future paper "Classical Tomography Explains Classical Mechanics"
 
  • #76
Morbert said:
It's the meaning of these world lines and tubes that I am curious about. If we say that, in classical physics, normalised density operators are to replace phase space variables as the objective properties of the system,
In classical physics, phase space is of course classical, and world lines are curves, not fuzzy tubes. The relevant limit in which physics becomes classical is the macroscopic limit ##N\to \infty##, and the law of large numbers then provides the right limiting behavior.

[To achieve a good formal (but unphysical) classical limit for microsystems one would have to scale both ##\hbar## and the uncertainties by factors that vanish in the limit.]
 
Last edited:
  • #77
A. Neumaier said:
The precise version of the recipe by @vanhees71 is the following:
It might be a precise recipe, but it is certainly not what vanhees71 is claiming. His claim is simply that there is no problem at all with probabilities for single non-reproducible events. And the weather tomorrow seems to me to be such a non-reproducible event. If it is still "too reproducible," then you may look at earthquake prediction instead. vanhees71's claim that the ensemble interpretation would be perfectly applicable to such scenarios will remain exactly the same.

From my point of view, the task is to produce some intuitive understanding (for people like vanhees71, who have trouble understanding the difference) for why such scenarios contain elements absent from scenarios where the ensemble interpretation just works. And talking about the randomness of prime numbers would be too far from what such people perceive as randomness. So weather or earthquake prediction seem to me to be the scenarios where you have to try to clarify the difficulties.
 
  • #78
gentzen said:
It might be a precise recipe, but it is certainly not what vanhees71 is claiming. His claim is simply that there is no problem at all with probabilities for single non-reproducible events.
He claims (as I understand him) that there is no problem with the interpretation of probabilities made by some prediction algorithm for arbitrary sequences of events, each one single and non-reproducible. This is what I formalized.
gentzen said:
And talking about the randomness of prime numbers would be too far from what such people perceive as randomness.
But this is also well-defined. There are canonical measures (in the mathematical sense) used in analytic number theory to be able to employ stochastic methods. Nevertheless, everything there is deterministic, and indeed, in my book 'Coherent quantum physics', I gave this as an example showing that stochastic formalisms may have deterministic interpretations.

It is like using vector methods for applications that never employ vectors in their geometric interpretation as arrows.
gentzen said:
So weather or earthquake prediction seem to me to be the scenarios where you have to try to clarify the difficulties.
Weather predictions are made every day, so there is lots of statistics available. Thus my formalization of the meaning of probability forecasts applies. In particular, one can objectively test prediction algorithms for their quality. For example, the statement ''Nowadays weather forecasts are much better than those 20 years ago'' can be objectively proved.
 
Last edited:
  • #79
WernerQH said:
It's the nature of a harmonic oscillator to perform systematic oscillations, as expressed by its response function.
But it can oscillate with a zero amplitude. A pendulum at small amplitude is a harmonic oscillator. Does it lose this property whenever it stands still at its equilibrium position? I don't think so, because it can be excited again!
 
  • #80
vanhees71 said:
I need to gain "enough statistics"
The idea is that in a real game, with the agent being a participant, it doesn't have the option to await or process arbitrary amounts of data. I.e., there is a race condition.

For the external observer, however, asymptotic data and in principle any required post-processing can be realized. Then "enough statistics" can be achieved.

vanhees71 said:
Yes, I understand this, but physical experiments are not about decision making but about figuring out how Nature behaves, and for that I need to gain "enough statistics" in any experiment.
The problem is when "nature" changes before we have "enough statistics". This is why the paradigm you describe works for small subsystems, where in some sense we can realize these repeats, because the timescale of the measurement AND post-processing is then small relative to the lifetime of the observing context. In this case it seems natural that, given enough post-processing, SOME patterns WILL be found, or can be "invented". Then we can see these as "timeless", but that is just because they are "timeless" relative to the timescale of the process, which in this case is small.

But this all breaks down for "inside observers", where one is put in a cosmological perspective and the observer is necessarily saturated with more data than it can store and process in real time. But this is also, IMO, when things get more interesting!

Asymptotic observables alone do not seem very interesting.

/Fredrik
 