What Intuitive Insights Explain Heisenberg's Uncertainty Principle?

  • #201


That's a good post, Ben. Some of it had not been mentioned in this thread.

Most of this thread has been about what a "measurement" is, and specifically what a "momentum measurement" is. The article by Ballentine that was linked to a few times early in this thread describes a single-slit experiment, where the particle has a wall of detectors in front of it after going through the slit. One of the detectors will signal detection. This is obviously a position measurement, but Ballentine argues that it's also a momentum measurement. We can certainly calculate a value of momentum that we can call "the result". If we accept this as a valid way to measure momentum, i.e. if we in fact measure momentum by measuring the position, then von Neumann's axiom that all measurements project the state vector onto an eigenspace of the measured observable contradicts itself (since there's no state with a sharply defined position and a sharply defined momentum).

So we either have to modify that axiom, or refuse to call this a "measurement" of momentum.
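To make the inference concrete, here is a minimal sketch, assuming the total momentum p is known from the preparation and that the particle flies in a straight line from the slit to the detector (the numbers are illustrative):

```python
import numpy as np

# Minimal sketch of the momentum "inference" in Ballentine's setup.
# Assumptions (mine, for illustration): the total momentum p is known from
# the preparation, and the particle flew in a straight line from the slit
# (at the origin) to the detector that fired.
p = 1.0e-24   # total momentum magnitude, kg*m/s (assumed known)
L = 2.0       # distance from slit to the wall of detectors, m
y = 0.05      # transverse coordinate of the detector that fired, m

theta = np.arctan2(y, L)   # inferred flight direction
p_y = p * np.sin(theta)    # the number we would call "the result"
print(f"inferred p_y = {p_y:.3e} kg*m/s")
```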
 
  • #202


Ben Niehoff said:
[...]
Another way to explain the HUP, however, is from merely looking at the Schrodinger equation. In classical mechanics, the equations of motion typically have two time derivatives; the Schrodinger equation, however, has only one. As you well know from basic differential equations, you may specify as many initial conditions as you have derivatives. In classical mechanics, we have two time derivatives, and hence we can specify two initial conditions: position and velocity. But in quantum mechanics, we have only one time derivative. Hence position and velocity cannot be independently specified. They are not independent quantities, and so it should not be too surprising that they cannot be measured to arbitrary accuracy at the same time. [...]

I don't find your argument valid. The state in the SE encodes the information about the system on an equal footing. A valid physical state is one on which all possible observables of the system can be measured (momentum, energy, position, spin, electric charge, parity, etc.), which mathematically translates into the state vector being in the Gårding domain of the maximal symmetry algebra (the \Phi space in a rigged Hilbert space \Phi \subset \mathcal{H} \subset \Phi').
A valid physical state is part of the space of all solutions to the Schroedinger equation, which is nothing but a merger of the principle of temporal conservation of observable statistics and the need to have the time translations as a subgroup of the maximal symmetry group.
 
Last edited:
  • #203


Fredrik said:
[...] The article by Ballentine that was linked to a few times early in this thread describes a single-slit experiment, where the particle has a wall of detectors in front of it after going through the slit. One of the detectors will signal detection. This is obviously a position measurement, but Ballentine argues that it's also a momentum measurement. We can certainly calculate a value of momentum that we can call "the result". If we accept this as a valid way to measure momentum, i.e. if we in fact measure momentum by measuring the position, then von Neumann's axiom that all measurements project the state vector onto an eigenspace of the measured observable contradicts itself (since there's no state with a sharply defined position and a sharply defined momentum).

So we either have to modify that axiom, or refuse to call this a "measurement" of momentum.

Based on personal preferences, I'd say we should leave aside von Neumann's axiom, because it forces us to add the following words to the SE: <In the absence of measurement, all possible physical states are solutions to the following 1st-order differential equation: ...>,
which automatically puts a severe and artificial restriction on the time-evolution postulate itself.

Any valid argument against Ballentine's?
 
Last edited:
  • #204


dextercioby said:
I don't find your argument valid. The state in the SE encodes the information about the system on an equal footing. A valid physical state is one on which all possible observables of the system can be measured (momentum, energy, position, spin, electric charge, parity, etc.), which mathematically translates into the state vector being in the Gårding domain of the maximal symmetry algebra (the \Phi space in a rigged Hilbert space \Phi \subset \mathcal{H} \subset \Phi').
A valid physical state is part of the space of all solutions to the Schroedinger equation, which is nothing but a merger of the principle of temporal conservation of observable statistics and the need to have the time translations as a subgroup of the maximal symmetry group.

I'm not sure that what I said contradicts this, but maybe I'm not understanding you correctly. "Cannot be measured" was a poor choice of words; of course position and momentum can be measured simultaneously. What I mean is that given that they are not independent, it stands to reason that they can't have precise values at the same time (because if we could imagine a state with definite position X and definite momentum P, then it seems we could specify X and P independently, which is not allowed).

This was just meant to be a heuristic argument for understanding "why" X and P happen to be noncommuting observables. It fails in the relativistic case, because the Klein-Gordon equation has two time derivatives.
 
  • #205


I was arguing about the not-allowed part. Why can't they be specified independently?

Because I think they are essentially independent, and independent measurements of them can be made with arbitrary precision, regardless of the (however properly chosen) state. It's only that the statistics of measurements (mean squared deviations) are related by an inequality, which could very well have had 0 on the right-hand side.
 
  • #206


My heuristic argument was that X and P can't be specified independently because the Schrodinger equation has only one time derivative, and hence the entire time evolution is determined by one initial condition.

As for the rest, you are making statements about measurements, while I was making statements about the quantum state. The quantum state cannot be both an X eigenstate and a P eigenstate at the same time, because there is no such state. And the more concentrated the state vector is around a particular eigenstate in the X basis, the more spread out it must necessarily be among eigenstates of the P basis.
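Here is a minimal numerical sketch of that tradeoff, assuming a Gaussian wavefunction and units where \hbar = 1:

```python
import numpy as np

# Narrow a Gaussian wavefunction in the X basis and watch its spread in the
# P basis grow (units with hbar = 1; the Gaussian family is an assumption
# made for illustration). The width product stays at the Gaussian minimum 1/2.
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # momentum grid
dk = 2 * np.pi / (x.size * dx)

for sigma in (2.0, 0.5, 0.1):                  # position-space widths to try
    psi = np.exp(-x**2 / (4 * sigma**2))       # |psi|^2 has std dev sigma
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    prob_p = np.abs(np.fft.fft(psi))**2
    prob_p /= np.sum(prob_p) * dk              # normalized momentum density
    sigma_p = np.sqrt(np.sum(k**2 * prob_p) * dk)   # <p> = 0 by symmetry
    print(f"sigma_x = {sigma:4.1f} -> sigma_p = {sigma_p:.3f}"
          f"  (product = {sigma * sigma_p:.3f})")
```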

One can argue whether the quantum state has any real existence independent of measurements, but that's not something I want to get into. Consider my statements to be merely about the mathematical formalism if it makes you uncomfortable to think of them as statements about reality.
 
  • #207


dextercioby said:
Based on personal preferences, I'd say we should leave aside von Neumann's axiom, because it forces us to add the following words to the SE: <In the absence of measurement, all possible physical states are solutions to the following 1st-order differential equation: ...>,
which automatically puts a severe and artificial restriction on the time-evolution postulate itself.
That's a good point. I didn't even think about that during this discussion, but we should be able to drop it completely and derive it (a version of it that's not inconsistent) as a theorem using decoherence theory.

Uh, now that I think about it even more, I think we can also derive a version of it from the following two correspondence rules: a) the average value in a series of measurements goes to the expectation value as the number of measurements goes to infinity; b) if f is a polynomial function, and A is an observable that represents measuring device M, then the measuring device that outputs the value f(a) when M outputs a corresponds to the operator f(A). (I may remember this all wrong, but I think Isham did this in his QM book).

The way I see it, a set of statements isn't a theory unless it is falsifiable. So the purely mathematical part of QM isn't a theory. It has to be supplemented by a specification of what sort of devices should be used to test the accuracy of the theory's predictions. When we make these specifications, we have the option to only define position measurements, or to define measurements of every member of some set of interesting observables (like a list of generators of the theory's symmetry group).

If we choose the former option, then von Neumann's axiom isn't logically inconsistent, but it would still be preferable (at least in my opinion) to derive it as a theorem. If we instead choose the latter option, then von Neumann's axiom is logically inconsistent in its standard form, but we probably only need to change it to say that all measurements of observables that commute with position project the system onto an eigenspace of the measured observable. I would still prefer to derive it.

It might actually be better (and by that I mean that it would make this stuff simpler) to go with the former option. This is what I'm thinking: since we would be measuring every variable by measuring position anyway, we wouldn't really be able to make a wider range of predictions. An experiment that tries to falsify a prediction about momentum, for example, has no chance of doing that as long as the position measurements that are performed in the process are consistent with the predictions about position. So if someone wants to try to falsify the theory, it doesn't seem like he has any reason to look at anything other than the results of position measurements.

A slightly different option is to drop the concept of measuring device from the terminology altogether. Particles are detected, no properties are measured (not even position). They are all inferred from the experimental setup and the coordinates of the detection events.
 
Last edited:
  • #208


dextercioby said:
I was arguing about the not-allowed part. Why can't they be specified independently?
I assumed that he meant that to specify an initial position is to specify a sharply peaked wavefunction, and that to specify an initial momentum is to specify a wavefunction with a sharply peaked Fourier transform. Since no wavefunction has both of these properties, you would have to specify two wavefunctions in order to specify both an initial position and an initial momentum. But that gives us two initial conditions, and we only need one. So this argument is just a different aspect of the usual stuff about Fourier transforms.
 
  • #209


Fredrik said:
Anyway, I suppose you would also be interested in the answer to a related question: If we turn Ballentine's thought experiment into an actual experiment, and perform this momentum "inference" over and over on identically prepared systems, how accurately will the distribution of results agree with the values of |u(p)|^2, where u is the Fourier transform of the wavefunction \psi?

I suspect it isn't. Raymer, "Uncertainty principle for joint measurement of noncommuting variables", American Journal of Physics 62:986 (1994) does give Ballentine's method of measuring position at large L as a way of measuring momentum. However, he seems to use it as the momentum conjugate to position at small L, not large L as Ballentine would need to claim simultaneous measurement of conjugate variables.

But even if that's true, I'm not sure this would get me off the hook due to the comments of Bell I quoted in post #89. Perhaps the qualifier "if the initial state is arbitrary and unknown" is still needed.
 
Last edited:
  • #210


atyy said:
I suspect it isn't. Raymer, "Uncertainty principle for joint measurement of noncommuting variables", American Journal of Physics 62:986 (1994) does give Ballentine's method of measuring position at large L as a way of measuring momentum. However, he seems to use it as the momentum conjugate to position at small L, not large L as Ballentine would need to claim simultaneous measurement of conjugate variables.

But even if that's true, I'm not sure this would get me off the hook due to the comments of Bell I quoted in post #89. Perhaps the qualifier "if the initial state is arbitrary and unknown" is still needed.
Bell's first comment is that it's "largely a question of semantics". That's consistent with what I've been saying, but I guess it's consistent with a lot of things. :smile: Then he starts talking about the (non-)existence of a joint probability distribution that, among other things, is linear in the wavefunction. That's something I don't see why we would need. At least I don't see why we would need it for Ballentine's thought experiment. We might need a joint probability distribution (p,q)\mapsto\rho_\psi(p,q) for each \psi with a sharply defined position, but we don't need (or want) the map \psi\mapsto\rho_\psi to be linear (which would mean that we're forced to consider states that aren't localized). Hm, now that I think about it, since our p is a function of q, I'd say that all we need is a distribution q\mapsto\rho_\psi(q), and we have that already: \rho_\psi(q)=|\psi(q)|^2.

Ballentine doesn't need a large L to claim simultaneous measurement. It's a simultaneous measurement of y and p_y regardless of L, and even regardless of the margins of error \delta y and \delta p_y. The reason he mentioned a large L is that he didn't just want to show that you can measure both at the same time. (The margins of error don't even enter into that). He wanted to show that you can do it in a way that makes the product (\delta y)(\delta p_y) smaller than what a naive application of the uncertainty relations suggests it can be. Choosing L large is just the easiest way to make that product small. A large L gives us a small \delta p_y, and it's much easier to believe that we can make L large enough than that we can make \delta y small enough.

I haven't looked at the Raymer article yet, but I also feel that if any kind of limit is supposed to be a part of the definition of this momentum measurement, it's L→0, not L→∞. The reason is that what we're measuring is more like an average momentum than the "momentum right now". The position measurement is performed on a particle with a wavefunction that has had some time to spread out. To claim that we have really performed a simultaneous measurement, we should measure the momentum when the particle is in the same state as when we measure the position, but the momentum measurement involves two different times, and the wavefunction is spreading out over time. So it seems that we are closer to a "true" simultaneous measurement when L is small.

Hm, this could possibly be developed into an argument that Ballentine is wrong about how \delta p_y depends on L.
 
Last edited:
  • #211


It is uncontested that you can measure both at the same time if you don't care about accuracy.
 
  • #212


atyy said:
It is uncontested that you can measure both at the same time if you don't care about accuracy.
Uncontested by you perhaps. :smile:

I have added some stuff to my previous post that you might be interested in...and now I have to get some sleep.
 
  • #213


Fredrik said:
Uncontested by you perhaps. :smile:

Surely uncontested by everyone - what else could the tracks in a cloud chamber be but simultaneous position and momentum measurements? The only question is whether simultaneous accurate measurements of both are possible.

Fredrik said:
I haven't looked at the Raymer article yet, but I also feel that if any kind of limit is supposed to be a part of the definition of this momentum measurement, it's L→0, not L→∞. The reason is that what we're measuring is more like an average momentum than the "momentum right now". The position measurement is performed on a particle with a wavefunction that has had some time to spread out. To claim that we have really performed a simultaneous measurement, we should measure the momentum when the particle is in the same state as when we measure the position, but the momentum measurement involves two different times, and the wavefunction is spreading out over time. So it seems that we are closer to a "true" simultaneous measurement when L is small.

Yes, I think that's where Ballentine is wrong - it must be accurate measurements of the same state. However, if I read Raymer correctly, an accurate momentum measurement of the state at small L is done by taking L large, whereas an accurate position measurement of that state is done at small L. So Ballentine's error is that his accurate position and momentum measurements, both performed at large L, are accurate position and momentum measurements of different states. So he doesn't have accurate conjugate position and momentum.
 
Last edited:
  • #214


atyy said:
It is uncontested that you can measure both at the same time if you don't care about accuracy.
I'll have to agree with this. Confidence levels and confidence intervals are what the SD is all about anyway.

Maybe Fredrik insists that even a non-proper measurement (like the one we are talking about here in the Ballentine case) or "incomplete" measurement should be incorporated in a measurement theory. And here I agree; but this IMO requires a reconstruction of measurement theory, as there are then more subtle points around.

A measurement without qualifying confidence measures is IMO not complete. And to make it complete in the conventional picture, you need a complete ensemble or an infinity of reruns.

If we are to get away from this, how can we understand and construct intrinsic confidence measures without referring to unreal ensembles?

Technically, to falsify QM, one or two detector clicks are not enough. You need an infinite number of them, to the point where you effectively simulate the full ensemble. This is also acknowledged by Popper. Falsification is only a statistical process as well. Any single data point can be explained away as noise.

/Fredrik
 
  • #215


Fredrik said:
It also seems to me that this is precisely the type of "inference" that measuring devices do when they test the accuracy of the theory's predictions, so how can anyone not call it a measurement?
But theory does not predict the outcomes of single measurements (single data points) anyway. It only predicts the ensemble properties.

If we stick to the ensemble interpretation, one could even argue that it's completely meaningless to even bother speaking about single measurements, because our theory doesn't make any statement about them; it only makes statements about the statistics.

However, I think that makes no sense because it leaves out many real-life situations. But this is to me a conceptual problem of the ensemble interpretation. It shows that it's absurd as a basis for decision making in interactions, and to me a theory is an interaction tool more than a description. I want the theory to guide me through the future, not describe the past that is already history.

Like we discussed briefly in another thread, in a cosmological perspective we actually do replace the ensemble with "counting evidence" from several interactions with the same system. I'm suggesting that a similar perspective may be used in QM. Here the information encoded in the "ensemble" can instead be thought of as physically encoded in the observing system's microstructure. In that way, the "ensemble" is indirectly defined by the state of the observer (which is a function of its history). This means you always have an "effective ensemble" whenever you have an observer. Then "single measurements" would simply slowly evolve the effective ensemble.

/Fredrik
 
  • #216


Okay..

Both CM and QM say that "position" and "momentum" are different. They're used differently mathematically, and operationally defined differently. The experimental data for a "position" (a point) and a "momentum" (two or more points, with or without a "path") are different.

So why is it that we read the HUP as "weird" because of something Galileo once said?

Rather, the HUP should correct Galileo's misconception and note that the "arbitrary" degree of accuracy only extends to the point where the large numbers of atoms in the system (e.g., in human-scale phenomena) mask the effects of underlying quantum events at Planck scales.
 
Last edited:
  • #217


I also suspect that Raymer's momentum "measurement" isn't a true momentum measurement. If it were, we would expect the state to collapse into a momentum eigenstate before the "measurement", since the "measurement" at infinite time is supposed to reflect the momentum at finite t. Raymer says "This mapping of the momentum distribution into position for large L is analogous to far-field diffraction in optics". My guess is that it isn't a true momentum measurement, because it uses some knowledge of the state. At one extreme, if one knows the state of the particle, one can get both position and momentum distributions with no measurement and no collapse at all.
 
Last edited:
  • #218


atyy said:
I also suspect that Raymer's momentum "measurement" isn't a true momentum measurement. If it were, we would expect the state to collapse into a momentum eigenstate before the "measurement", since the "measurement" at infinite time is supposed to reflect the momentum at finite t. Raymer says "This mapping of the momentum distribution into position for large L is analogous to far-field diffraction in optics". My guess is that it isn't a true momentum measurement, because it uses some knowledge of the state. At one extreme, if one knows the state of the particle, one can get both position and momentum distributions with no measurement and no collapse at all.
Can you describe his method or quote the relevant part of the article?

What do you mean by a "true momentum measurement"? Is it that it works on an unknown state? Is it that it involves collapse to a momentum eigenstate? (I would only require that it gives us results with dimensions of momentum, and that those results will be distributed approximately as described by the squared absolute value of the Fourier transform of the wavefunction).
 
  • #219


atyy said:
...what else could the tracks in a cloud chamber be but simultaneous position and momentum measurements?
I would describe it as a series of approximate position measurements that together can be considered a single momentum measurement. Do we want to call this a simultaneous measurement of both? Maybe. It seems to be a matter of semantics, and taste. The argument in favor of calling it a simultaneous measurement is of course that by the end of it, we have obtained a value of each of the position components and each of the momentum components. The argument against it would be that we only need one of the liquid drops to obtain the values of the position components, but we need several to obtain the values of the momentum components.

I just skimmed through parts of section 4.4 (titled "Particle detectors") in "Nuclear and particle physics" by Brian Martin. I was hoping that it would tell me what particle physicists consider "momentum measurements", and I believe it did. The bottom line is that it always involves a series of approximate position measurements. The momentum is then inferred from the shape of the particle track. The difference between the older types of detectors (cloud chambers, bubble chambers) and the more recent ones (gas detectors, wire detectors, semiconductor detectors) is that the new ones don't bother to make the track visible. They just use electrodes to collect the electrically charged products of the interactions, and (I presume) calculate the shape of the track from the amplitude and timing of the electrical signals.
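As a toy illustration of that inference (my sketch, not from Martin's book, assuming a uniform magnetic field B, a unit-charge particle, and the standard rule of thumb p_T [GeV/c] ≈ 0.3 B[T] r[m]), one can fit a circle to the hit coordinates:

```python
import numpy as np

# Toy version of the track-based momentum inference (illustrative numbers):
# fit a circle to noisy hit coordinates left by a charged particle in a
# magnetic field B, then use the rule of thumb
# p_T [GeV/c] ~= 0.3 * B[T] * r[m] for a particle of unit charge.
rng = np.random.default_rng(0)
r_true, B = 1.5, 2.0                          # track radius (m), field (T)
angles = np.linspace(0.1, 0.6, 20)            # a short arc of the track
hits_x = r_true * np.cos(angles) + rng.normal(0.0, 1e-3, angles.size)
hits_y = r_true * np.sin(angles) + rng.normal(0.0, 1e-3, angles.size)

# Algebraic circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c is linear in (a, b, c),
# with center (a, b) and radius r = sqrt(c + a^2 + b^2).
A = np.column_stack([2 * hits_x, 2 * hits_y, np.ones_like(hits_x)])
a, b, c = np.linalg.lstsq(A, hits_x**2 + hits_y**2, rcond=None)[0]
r_fit = np.sqrt(c + a**2 + b**2)
print(f"fitted radius = {r_fit:.4f} m, p_T ~= {0.3 * B * r_fit:.3f} GeV/c")
```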

So for the purposes of this discussion, it seems that we can take a bubble chamber (or any other of these devices) as a definition of what is meant by a "momentum measuring device". But I don't know if we really should say that this is a way to measure the momentum of a particle in a given state (unknown and completely arbitrary). What I'm thinking is that the interactions that produced the first bubble, or interactions before that, must have put the particle in a state such that the wavefunction and its Fourier transform have approximately the same width in units such that \hbar=1. So maybe we should just say that this is the definition of how to measure momentum when the particle is known to be in that kind of state.

However, since all momentum measurements seem to involve at least two approximate position measurements (or at least a preparation of a state with sharply defined position, followed by a position measurement), I don't think there can exist a meaningful definition of what it would mean to measure the momentum of a particle in an arbitrary state. This is probably as good as it gets. Momentum measurements of the type suggested by von Neumann's projection axiom don't exist.
 
Last edited:
  • #220


Note that if we replace the wall of detectors in Ballentine's thought experiment with something like a bubble chamber, the wavefunction will become somewhat localized again as soon as the particle enters the chamber, no later than at the time of the interaction that creates the first bubble. Immediately after this, I guesstimate that the width of the wavefunction and the width of its Fourier transform will be of the same order of magnitude for the rest of the passage through the chamber. The other bubble events are approximate position measurements that don't localize the particle any more than it already is. The momentum calculated from the shape of the track will tell us the approximate momentum of the state that the particle was put into after it entered the chamber. This is a different state than the one we wanted to perform the momentum measurement on.

Because of this, I'm starting to think that the method suggested by Ballentine is the only thing that can be called a p_y measurement of a particle that hasn't interacted with its environment since it passed through the slit. The funny thing is that this doesn't make it obvious that the distribution of results will agree with the squared absolute value of the Fourier transform of the wavefunction. It's possible that the agreement is poor for large L (the distance from the slit to the wall of detectors), and in that case, his estimate of the margin of error on p_y is questionable. And his argument that we can measure y and p_y accurately at the same time may fall with it.
 
Last edited:
  • #221


Here's a free article that describes the same thing as Raymer: http://tf.nist.gov/general/pdf/1283.pdf .

It talks about the position and momentum "shadows" of an initial state. The shadow of position occurs at a different time from the shadow of momentum. So Ballentine is wrong because he is not talking about canonically conjugate position and momentum.
 
  • #222


Fredrik said:
However, since all momentum measurements seem to involve at least two approximate position measurements (or at least a preparation of a state with sharply defined position, followed by a position measurement), I don't think there can exist a meaningful definition of what it would mean to measure the momentum of a particle in an arbitrary state. This is probably as good as it gets. Momentum measurements of the type suggested by von Neumann's projection axiom don't exist.

I agree .. that is exactly what I was saying (or at least trying to), back on the first page of this thread :wink:.

The interesting question raised by that is, why not? Is it due to some fundamental limitation (i.e. your last statement should be strengthened to "... axiom can't exist.")? Or is it just that we haven't figured out how to build one yet?

The thing I find most troubling and bizarre is that not even a tiny hint of what we have been discussing here appears in any QM text that I have ever seen. They just state the measurement axiom, explain how it works for eigenstates and superpositions of some unspecified operator O, and then move on. But what is the point of having the axiom in the first place if the only thing we can actually measure directly is position, and all other quantities must be inferred? It seems like all of this should have been hashed out by "the heavyweights" back during the development of QM, but it seems to have been overlooked. Can that really be true?

[EDIT] The more I think about this .. the more wrong it seems. The whole discussion of eigenstates as "the only possible results" of a "measurement" clearly has some kernel of truth to it, but it seems like a drastic over-simplification. On the other hand, it seems like any oversimplification must not matter very much, given the long, strong history of agreement between QM theory and experiment. I am getting more confused by the minute here. :confused:
 
Last edited:
  • #223


atyy said:
Here's a free article that describes the same thing as Raymer: http://tf.nist.gov/general/pdf/1283.pdf .

It talks about the position and momentum "shadows" of an initial state. The shadow of position occurs at a different time from the shadow of momentum. So Ballentine is wrong because he is not talking about canonically conjugate position and momentum.

Thanks! That looks like a very interesting article, but I am not sure how it gets at the measurement problem that we are discussing. Perhaps it will be more clear after I have had more time to read it carefully.
 
  • #224


SpectraCat said:
[EDIT] The more I think about this .. the more wrong it seems. The whole discussion of eigenstates as "the only possible results" of a "measurement" clearly has some kernel of truth to it, but it seems like a drastic over-simplification. On the other hand, it seems like any oversimplification must not matter very much, given the long, strong history of agreement between QM theory and experiment. I am getting more confused by the minute here. :confused:

The reason we have been discussing position and momentum being exactly measurable is that otherwise Ballentine is trivially wrong, and there is no discussion. However, position and momentum cannot be exactly measured, and are always jointly measured approximately. A less approximate measurement of momentum means a more approximate measurement of position. This seems to be found in all standard quantum optics textbooks. This is also found in QFT notes such as http://www.kitp.ucsb.edu/members/PM/joep/Web221A/Lecture8.pdf and http://www.kitp.ucsb.edu/members/PM/joep/Web221A/LSZ.pdf . So yes, the elementary textbook stuff is a lie, but not in the way Ballentine advocates. And the more rigorous way of dealing with it makes it clear that the standard lie is in fact the correct heuristic (not Ballentine's).
 
Last edited by a moderator:
  • #225


I think we all (including Ballentine) agree that the HUP refers to expectations defined by a STATE, nothing else. And Ballentine's point was not to confuse different "error measures".

I don't think this is what we debate here. What seems to be up for debate is whether the example in the Ballentine notes Fredrik posted, where one is using an inference to "measure" momentum, can qualify as a measurement, and thus whether one can define, at least loosely speaking (until a proper full analysis is made), some "effective state" that derives from the mixed measurements + inference.

I think this is what we discuss here. And if so, I have an objection to Ballentine's elaboration. If there is to be any sense in the inference he makes - the "state of information" that we end up with after the detection, plus the kind of inference from the angle and p_y - then I propose one has to at least somewhat sensibly consider the "effective" uncertainty of both y and p_y in the STATE that is inferred by Ballentine's idea, following from the ENTIRE set of information.

In particular this means that all we know is that y was somewhere between the slit input and the detection hit (time also passes, but this should not matter for the inferences; information is information, no matter how old - the expectations don't care):

\Delta y \approx L\tan\theta + \delta y, not just \delta y

It also seems reasonable to think that roughly

\delta y \gtrsim h/p,

Also, since \delta\theta \approx \delta y \cos^2\theta / L,

we seem to end up with - by inference in Ballentine's example - loosely speaking something like

\Delta y \, \Delta p_y \gtrsim h \, (1 + \lambda_{\text{de Broglie}}/L).

So I'm tempted to think that if we ARE to try to make an inference like Ballentine wants, and INFER something like the uncertainties of the INFORMATION (without an explicit statistical ensemble) in a way that has anything at all to do with the original discussion, wouldn't the above be more reasonable? And if so, we certainly do get something in the ballpark of the original HUP even for this inference. So when I read Ballentine's notes, they seem flawed?
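For reference, here is the error-propagation step behind these estimates, as I read them (assuming straight-line flight from slit to detector, with \theta the inferred flight angle and p the total momentum):

p_y = p\sin\theta, \qquad \tan\theta = y/L

\delta\theta \approx \frac{\partial\theta}{\partial y}\,\delta y = \frac{\cos^2\theta}{L}\,\delta y, \qquad \delta p_y = p\cos\theta\,\delta\theta \approx \frac{p}{L}\,\delta y\,\cos^3\theta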

/Fredrik
 
  • #226


Fra said:
What seems to be up for debate is whether the example in the Ballentine notes Fredrik posted, where one is using an inference to "measure" momentum, can qualify as a measurement, and thus whether one can define, at least loosely speaking (until a proper full analysis is made), some "effective state" that derives from the mixed measurements + inference.
I agree that one of the things we're discussing is if what Ballentine is describing is a momentum measurement, but I consider the issue of what new state is prepared by the measurement trivial. Either the particle is absorbed by the detector and no new state is prepared, or the particle makes it all the way through the detector and escapes on the other side. In that case, the wavefunction is sharply peaked at the location of the detector, and is close to zero outside of it.

The rest of what you said seems to be based on the assumption that the new state is going to be spread out all over the region between 0 and y+δy. I don't see a reason to think that.

When we try to decide if this should be considered a momentum measurement, I don't think the properties of the state that's prepared by the interaction should influence us in any way. The only thing that should concern us is this: If we define "quantum mechanics" so that this is a momentum measurement, will the theory's predictions about momenta be better or worse than if we define the theory so that this isn't a momentum measurement?

My opinion about Ballentine's argument has changed during the course of this thread. This is what I'm thinking now: The best way to define a "momentum measurement" of a particle prepared in a localized state such as the particle that emerges from the slit in this thought experiment is, roughly speaking, to do what Ballentine does and then take the limit L→0. To be more precise, we say that a detection of the particle followed by this sort of inference of the momentum is an approximate momentum measurement, and the approximation is exact in the limit L→0. The margin of error \delta p_y will depend on L. When L is small, it should therefore be proportional to L. When L is larger, terms with higher exponents will become important.

Ballentine's argument relies on his claim that \delta p_y can be made arbitrarily small by making L large. This claim appears to be false. (It's correct if we leave out the L→0 statement from the definition of momentum measurement, but it's false if we include it).
 
  • #227


I think Ballentine's claim that the distribution of position values at large L corresponds to the momentum distribution at small L is true. However, the position distribution at large L is not conjugate to the momentum distribution at small L. It is the position distribution at small L that is conjugate to the momentum distribution at small L. So I think Ballentine's claim that conventional wisdom is wrong is false, because he isn't talking about conjugate variables.
 
  • #228


atyy said:
I think Ballentine's claim that the distribution of position values at large L corresponds to the momentum distribution at small L is true.
I don't understand what this means.
 
  • #229


Fredrik said:
I don't understand what this means.

Try http://tf.nist.gov/general/pdf/1283.pdf, figure 2. In the text on the left column of p25, they say: "Figure 2c shows the results predicted by theory for atoms with a wide range of propagation times. In the extreme Fresnel regime, we recognize the spacelike shadow of the two slits. With increasing t_d, the wavepackets start to overlap and interfere until, for large t_d, we arrive at the Fraunhofer regime in which the diffraction pattern embodies the momentum-like shadow of the state."

These guys http://www.mpq.mpg.de/qdynamics/publications/library/Nature395p33_Duerr.pdf say something similar: "Figure 2 shows the spatial fringe pattern in the far field for two different values of t_sep. We note that the observed far-field position distribution is a picture of the atomic transverse momentum distribution after the interaction."

So the position distribution on the screen at large L (Fraunhofer regime) corresponds to the momentum distribution of the initial state, whereas the position distribution on the screen at small L (Fresnel regime) corresponds to the position distribution of the initial state.
 
Last edited by a moderator:
  • #230


Fredrik said:
The rest of what you said seems to be based on the assumption that the new state is going to be spread out all over the region between 0 and y+δy. I don't see a reason to think that.

My estimates are handwaving and semiclassical IMO, and are supposed to be a ballpark estimate only. I'll try to add more later, but I think the perhaps interesting discussion to keep going here is exactly how to understand a "state preparation". What I tried to do is suggest that one can make state preparations without statistical ensembles, if you instead think in terms of "counting evidence". And the spread of y above IS the spread of the information set we use for the inference. This is why I think it is relevant. I'll try to get back later and explain my logic.

Fredrik said:
Ballentine's argument relies on his claim that \delta p_y can be made arbitrarily small by making L large. This claim appears to be false. (It's correct if we leave out the L→0 statement from the definition of momentum measurement, but it's false if we include it).

Not sure what you mean. It seems to me that Ballentine is right on that point. My disagreement with his notes isn't that. Roughly it seems like

\Delta p_y \approx (p/L)\,\delta y \cos^3\theta \rightarrow 0, \quad \text{if } L \rightarrow \infty

but then also, in my estimate,
\Delta y \rightarrow \infty

Maybe I'm missing something from the quick estimate?

/Fredrik
 
  • #231


If we allow L → 0, then it seems to me that

\Delta y \rightarrow \delta y
\Delta p_y \approx (p/L)\,\delta y \cos^3\theta \rightarrow \infty, \quad \text{if } L \rightarrow 0

So it reduces to a plain y measurement, where the inference of p yields no information. What is the problem with this?

/Fredrik
 
  • #232


The momentum of an electron can be measured just by letting it fall on a photographic plate, and so we know both the position and momentum. The point, however, is that in any given situation the energy-momentum or space-time relations must be used at least twice; otherwise they are not defined.
 
  • #233


atyy said:
I think Ballentine's claim that the distribution of position values at large L corresponds to the momentum distribution at small L is true.

Fredrik said:
I don't understand what this means.

atyy said:
Try http://tf.nist.gov/general/pdf/1283.pdf, figure 2. In the text on the left column of p25, they say: "Figure 2c shows the results predicted by theory for atoms with a wide range of propagation times. In the extreme Fresnel regime, we recognize the spacelike shadow of the two slits. With increasing t_d, the wavepackets start to overlap and interfere until, for large t_d, we arrive at the Fraunhofer regime in which the diffraction pattern embodies the momentum-like shadow of the state."
...
So the position distribution on the screen at large L (Fraunhofer regime) corresponds to the momentum distribution of the initial state, whereas the position distribution on the screen at small L (Fresnel regime) corresponds to the position distribution of the initial state.

What figure 2c seems to indicate is that in a double-slit experiment, we won't get the typical "both slits open" interference pattern if the particles are moving too fast. Making L small should have the same effect. In either case, the wavefunction won't have spread out enough in the y direction by the time its peaks reach the screen.

I see that the pattern will depend on the initial wavefunction (and therefore on its Fourier transform), and that it will "look like" the wavefunction itself when L is small. I don't see how the pattern will "correspond to the momentum distribution of the initial state" when L is large. Do you mean that it will actually "look like" the Fourier transform of the wavefunction?

I still don't see how to interpret your statement in the first quote above. Did Ballentine even say something like that? Which one of his statements have you translated into what you're saying now?

I also don't see what this implies about the single-slit experiment.

I'm not saying that you're wrong, only that I don't understand what you're thinking.
 
  • #234


dx said:
The momentum of an electron can be measured just by letting it fall on a photographic plate, and so we know both the position and momentum.
Only if it was known to be in a state with a sharply defined position earlier. Maybe not even then. This is still a matter of some debate in this thread. Ballentine's thought experiment is just a particle going through a single slit, and then reaching a wall of detectors. This wall of detectors could be a photographic plate. Those details aren't important here.

I think we have to define what we mean by a "momentum measurement" in this situation. I don't think it can be derived. We should choose the definition that gives us the best agreement between theory and experiment. I'm thinking that since we want to measure the momentum of the particle when it's in the state prepared by the slit, we should do it as soon as possible. The longer we wait, the more the state will have changed, and we're not really measuring what we want to measure. A longer "wait" corresponds to a larger L (the distance to the wall of detectors).

So I want to use a definition that implies that the value of p_y that's inferred from the y measurement is only an approximate measurement, and that the inaccuracy of the y measurement isn't the only thing that contributes to the total error. There's also a contribution that depends on L (and goes to zero when L goes to zero) that must be added to the contribution from the inaccuracy in the y measurement.

Since the error depends on L, it should grow at least linearly with L. So I want to define a "momentum measurement with minimum error L" as a y measurement at x coordinate L, followed by a calculation of p_y. Maybe that should be "with minimum error kL", where k is some number, but right now I don't know what number that would be, so I'm just setting it to 1.
 
  • #235


Fra said:
And the spread of y above IS the spread of the information set we use for the inference.
I'm not sure I understand what you're saying. What do you mean by "spread of the information set"? If you're talking about the width of the wavefunction after the detection, how could it be larger than the detector?

Fra said:
Not sure what you mean. It seems to me that Ballentine is right on that point.
See my answer to dx above. Does this help you understand what I mean at least?

Regarding your calculations, I haven't really tried to understand them. It would be much easier to do that if you explained how you got those results. Are the upper case deltas supposed to be "uncertainties" of the kind that appear in the uncertainty relations?
 
  • #236


What I had in mind was simply a state which is prepared with a definite momentum, which is then measured by the photographic plate. So when the particle falls on the plate, we know its position and also its momentum because we have measured ('prepared') the momentum before.
 
  • #237


dx said:
What I had in mind was simply a state which is prepared with a definite momentum, which is then measured by the photographic plate. So when the particle falls on the plate, we know its position and also its momentum because we have measured ('prepared') the momentum before.
So we know that it's a momentum eigenstate, and just need to find out which one that is? I think we would need to detect the particle twice to be able to calculate a momentum, and if we do, the first detection will change the state of the particle. I don't know if this should be called a "momentum measurement". (I'm thinking it probably shouldn't).

I think this approach to momentum measurements (the idea that we can calculate the momentum from the coordinates of two detection events) only works when both the wavefunction and its Fourier transform are peaked, but obviously not so sharply peaked that this statement contradicts itself. If the initial state is such that the width of the wavefunction and the width of its Fourier transform are of the same order of magnitude in units such that \hbar=1, then we can make two (or more) position measurements that aren't accurate enough to change the state by much, and calculate a momentum from that.
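A minimal sketch of that two-detection estimate, with illustrative numbers of my own choosing - for a free particle the inferred momentum is just m times the average velocity between the two position fixes:

```python
import numpy as np

# Minimal sketch (illustrative numbers, not from the thread) of the
# "two detections" momentum estimate for a free particle: p is m times the
# average velocity between the two approximate position fixes, with an error
# set by the per-detection position resolution.
m = 9.109e-31                                      # electron mass, kg
(x1, t1), (x2, t2) = (0.0, 0.0), (0.10, 1.0e-6)    # two detections: (m, s)
delta_x = 1.0e-4                                   # position resolution, m

p_est = m * (x2 - x1) / (t2 - t1)
delta_p = m * np.sqrt(2.0) * delta_x / (t2 - t1)   # errors added in quadrature
print(f"p ~= {p_est:.3e} +/- {delta_p:.3e} kg*m/s")
```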
 
  • #238


Fredrik said:
I think we have to define what we mean by a "momentum measurement" in this situation
Yes, I think this is what we are discussing, and I was proposing something in that direction.
Fredrik said:
I'm not sure I understand what you're saying. What do you mean by "spread of the information set"? If you're talking about the width of the wavefunction after the detection, how could it be larger than the detector?
I think by detector you mean the resolution of the detectors at the wall.

But IMO, the entire slit setup is part of the "detector", simply because in this "generalized" "measurement", where we also try to infer momentum, the inference depends on the entire setup, including L. So I think in the case where we try to, as you say, define or generalize some kind of inference of p_y in parallel to inferring y, the entire setup is the "detector" IMO. The actual counter on the wall does not alone allow inferring p_y.
Fredrik said:
So I want to use a definition that implies that the value of p_y that's inferred from the y measurement is only an approximate measurement, and that the inaccuracy of the y measurement isn't the only thing that contributes to the total error. There's also a contribution that depends on L (and goes to zero when L goes to zero) that must be added to the contribution from the inaccuracy in the y measurement.

Since the error depends on L, it should grow at least linearly with L. So I want to define a "momentum measurement with minimum error L" as a y measurement at x coordinate L, followed by a calculation of p_y. Maybe that should be "with minimum error kL", where k is some number, but right now I don't know what number that would be, so I'm just setting it to 1.

Why would the uncertainty of the inference increase with L? It seems to be the other way around? Holding \delta y fixed and increasing L decreases \delta\theta, and thus the error?

OTOH, since this "inference" is defined with respect to a time interval where the particle goes from the slit input to a detector cell, the matching uncertainty in y, loosely speaking the one "conjugate to this momentum inference", should be L\sin\theta.

Also, I'm not thinking in terms of wavefunctions here; I'm thinking in terms of an information state, and this information state is inferred. I don't think it's consistent to simultaneously think that \Delta y \approx \delta y and have confidence in an inference of p_y that DEPENDS on a path or transition through the slit construction of extent L\sin\theta. I think it's an inconsistent inference.

I'm just suggesting that if you DO insist on the inference like you do, then I think we need to acknowledge that the uncertainty in y is also a function of L. This is IMO the consequence of L you might seek.

/Fredrik
 
  • #239
Fredrik said:
What figure 2c seems to indicate is that in a double-slit experiment, we won't get the typical "both slits open" interference pattern if the particles are moving too fast. Making L small should have the same effect. In either case, the wavefunction won't have spread out enough in the y direction by the time its peaks reach the screen.

I see that the pattern will depend on the initial wavefunction (and therefore on its Fourier transform), and that it will "look like" the wavefunction itself when L is small. I don't see how the pattern will "correspond to the momentum distribution of the initial state" when L is large. Do you mean that it will actually "look like" the Fourier transform of the wavefunction?

I still don't see how to interpret your statement in the first quote above. Did Ballentine even say something like that? Which one of his statements have you translated into what you're saying now?

I also don't see what this implies about the single-slit experiment.

I'm not saying that you're wrong, only that I don't understand what you're thinking.

I don't know the derivation, but I believe what those papers say is this. Let's say the transverse wavefunction at the slit is u(x). If we measure its transverse position accurately, we expect it to be distributed as |u(x)|^2; if we measure its transverse momentum accurately, we expect it to be distributed as |v(p)|^2, where v is the Fourier transform of u. If you measure the transverse position at large L, and for each measured position x_L you take the corresponding \sin\theta_L, where \tan\theta_L = x_L/L, then \sin\theta_L is distributed like |v(p)|^2.

Although the paper talks about a double slit, I expect it to be true for a single slit, where a single slit is a double slit with zero distance between the slits. Also, the paper at http://tf.nist.gov/general/pdf/1283.pdf , Durr, and Raymer all seem to use this trick, even though the Durr and Raymer papers don't assume a double slit.

This is the same procedure Ballentine uses to get the momentum. So I believe that his momentum distribution is an accurate reflection of the momentum at an earlier time.
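Here is a minimal numerical check of that claim, assuming a Gaussian u(x) and units where \hbar = m = 1; in the far field the position density approaches (1/T)|v(x/T)|^2, a rescaled copy of the initial momentum density:

```python
import numpy as np

# Numerical check of the far-field claim (assumptions: Gaussian u(x) at the
# slit, units with hbar = m = 1). Freely propagate u for a long "time" T
# (large L) and compare the position density on the screen with the initial
# momentum density |v(p)|^2 mapped through x = p * T.
x = np.linspace(-400, 400, 8192)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

sigma, T = 1.0, 200.0
u = np.exp(-x**2 / (4 * sigma**2))                  # transverse state at slit
u /= np.sqrt(np.sum(np.abs(u)**2) * dx)

V = np.fft.fft(u)                                   # momentum amplitudes
u_T = np.fft.ifft(V * np.exp(-1j * p**2 * T / 2))   # exact free evolution

x_probe = 30.0                                      # a point on the screen
i_x = np.argmin(np.abs(x - x_probe))
i_p = np.argmin(np.abs(p - x_probe / T))
screen = np.abs(u_T[i_x])**2
# continuum-normalized |v(p)|^2 from the FFT: |v|^2 = |V|^2 * dx^2 / (2*pi)
farfield = np.abs(V[i_p])**2 * dx**2 / (2 * np.pi) / T
print(f"|u_T(x)|^2 = {screen:.4e}, (1/T)|v(x/T)|^2 = {farfield:.4e}")
```

For this Gaussian the two numbers agree closely once T is large compared to \sigma^2, which is the Fraunhofer condition.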
 
Last edited by a moderator:
  • #240


One on my personal quests is:

How to understand and generalize measurement theory as a way to intrinsically count and represent information, while respecting constraints on information capacity. And how, from this, to construct rational expectations and ultimately rational actions.

Current QM does not do this. It grossly violates the information capacity bounds, just to mention one thing (the environment is used as an information sink; this works fine for typical collider experiments, but not for cosmology or for the unification of forces). Also, it's an extrinsic theory, depending on a classical observer context. RG theory does not accomplish what I want, so we need something new.

So the first step:

This is a way to understand information states without statistical ensembles. Or rather, the "statistics" does not refer to infinite repeats or "ensembles of trials"; it refers to "counting evidence", and instead we can do a form of observer-state statistics on data points. This generalizes the information statistics to cases where we clearly can't repeat experiments nor represent enough data.

In this way, it should be possible to generalize "measurements in QM" to general inferences. It's an information interpretation taken to some new depths.

So I agree that the notion of measurement in QM certainly isn't general enough to describe all relevant inferences. This is why a new "inference theory" is needed, in QM style but more creative. QM was designed to solve different problems than we face today. Unification and QG weren't, I think, on the map when QM was defined. It's just that we are so deep into this now that it's hard to imagine a different framework.

/Fredrik
 
  • #241


Fra said:
I think by detector you mean the resolution of the detectors at the wall.
Yes, I meant one of the little boxes to the right in the figure in Ballentine's article.

Fra said:
But IMO, the entire slit setup is part of the "detector", simply because in this "generalized" "measurement", where we also try to infer momentum, the inference depends on the entire setup, including L. So I think in the case where we try to, as you say, define or generalize some kind of inference of p_y in parallel to inferring y, the entire setup is the "detector" IMO.
I disagree. A measuring device (an idealized one) only interacts with the system during the actual measurement, and the measurement is performed on the last state the system was in before the interaction with the measuring device began. In this case, we're clearly performing the measurement on the state that was prepared by the slit, so it can't be considered part of the momentum measuring device. The momentum measuring device consists of the wall of detectors and any computer or whatever that calculates and displays the momentum that we're going to call "the result". The coordinates and size of the slit will of course be a part of that calculation, but those are just numbers typed manually into the computer. Those numbers are part of the measuring device, but the slit isn't physically a part of it.

Fra said:
Why would the uncertainty of the inference increase with L? It seems to be the other way around? Holding \delta y fixed and increasing L decreases \delta\theta, and thus the error?
You're talking about the contribution to the total error that's caused by the inaccuracy of the y measurement. I was talking about a different contribution to the total error. I started explaining it here, but I realized that my explanation (an elaboration of what I said in my previous posts) was wrong. I've been talking about how to define a momentum measurement on a state with a sharply defined position, but now that I think about it again, I'm not sure that even makes sense.

What we need here is a definition of a "momentum measurement" on the state the particle is in immediately before it's detected, and the only argument I can think of against Ballentine's method being the only correct one is that classically, it would measure the average momentum of the journey from the slit to the detector. However, classically, there's no difference between "momentum" and "average momentum" when the particle is free, as it is here. I don't see a reason to think this is different in the quantum world, so I no longer have a reason to think we're measuring "the wrong thing", and that means I can no longer argue for a second contribution to the total error that comes from "measuring the wrong thing". (That was the contribution I said would grow with L).

Fra said:
Also, I'm not thinking in terms of wavefunctions here; I'm thinking in terms of an information state;
Huh? What's an information state? Are you even talking about quantum mechanics?
 
Last edited:
  • #242


I don't think anyone believes quantum theory is fine as it is :)
 
  • #243


atyy said:
I don't know the derivation, but I believe what those papers say is this. Let's say the transverse wavefunction at the slit is u(x). If we measure its transverse position accurately, we expect it to be distributed as |u(x)|^2; if we measure its transverse momentum accurately, we expect it to be distributed as |v(p)|^2, where v is the Fourier transform of u. If you measure the transverse position at large L, and for each measured position x_L you take the corresponding \sin\theta_L, where \tan\theta_L = x_L/L, then \sin\theta_L is distributed like |v(p)|^2.

OK, thanks. If anyone knows a derivation (or a reason to think this is wrong), I'd be interested in seeing it. (I haven't tried to really think about this myself).

atyy said:
This is the same procedure Ballentine uses to get the momentum. So I believe that his momentum distribution is an accurate reflection of the momentum at an earlier time.
I still don't understand the significance of this. If we replace the wall of detectors with a photographic plate and make L large, how does it help us to know that the image we're looking at is the momentum distribution of the initial state (the state that was prepared by the slit)?

I know that I've been talking about how to define a momentum measurement on that initial state (sorry if that has caused confusion), but what we really need to know is how to define a momentum measurement on the state immediately before detection. I mean, we're performing the position measurement on that state, so if we're going to be talking about simultaneous measurements, the momentum measurement had better be on that state too.
 
  • #244


Fredrik said:
I still don't understand the significance of this. If we replace the wall of detectors with a photographic plate and make L large, how does it help us to know that the image we're looking at is the momentum distribution of the initial state (the state that was prepared by the slit)?

I know that I've been talking about how to define a momentum measurement on that initial state (sorry if that has caused confusion), but what we really need to know is how to define a momentum measurement on the state immediately before detection. I mean, we're performing the position measurement on that state, so if we're going to be talking about simultaneous measurements, the momentum measurement had better be on that state too.

Ballentine's procedure gives the position distribution of the state just before detection. It also gives the momentum distribution of the initial state (just after the slit), which is not the momentum distribution of the state just before detection. So he does not have simultaneous accurate measurement of both position and momentum.
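Edit: the far-field statement itself follows from a standard stationary-phase estimate. A sketch, with \hbar = m = 1 and conventions assumed rather than taken from the papers: the freely evolved state is

\psi(x,t) = \frac{1}{\sqrt{2\pi}} \int v(p)\, e^{i(px - p^2 t/2)}\, dp,

and for large t the phase is stationary at p^* = x/t, giving

\psi(x,t) \approx \frac{e^{-i\pi/4}}{\sqrt{t}}\, e^{i x^2/(2t)}\, v(x/t), \qquad |\psi(x,t)|^2 \approx \frac{1}{t}\, |v(x/t)|^2,

so the position distribution on a distant screen is the initial momentum distribution re-expressed in the variable p = x/t (p = mx/t with the mass restored).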
 
Last edited:
  • #245


atyy said:
Ballentine's procedure gives the position distribution of the state just before detection. It also gives the momentum distribution of the initial state (just after the slit), which is not the momentum distribution of the state just before detection. So he does not have simultaneous accurate measurement of both position and momentum.
Aha. You're saying that because of what you described in the post before the one I'm quoting now, the position distribution (which we are measuring) is the same function as the momentum distribution of the initial state, and that this means that we're performing the momentum measurement on the wrong state.

That makes proving (or disproving) that claim the main issue right now.
 
  • #246


Fra said:
it's an extrinsic theory; depending on a classical observer context.
Isn't this a problem with all theories, i.e. all sets of statements that satisfy some kind of falsifiability criterion?
 
  • #247


Fredrik said:
Aha. You're saying that because of what you described in the post before the one I'm quoting now, the position distribution (which we are measuring) is the same function as the momentum distribution of the initial state, and that this means that we're performing the momentum measurement on the wrong state.

That makes proving (or disproving) that claim the main issue right now.

Yes, that's what I'm thinking.
 
  • #248


Fredrik said:
I disagree. A measuring device (an idealized one) only interacts with the system during the actual measurement, and the measurement is performed on the last state the system was in before the interaction with the measuring device began. In this case, we're clearly performing the measurement on the state that was prepared by the slit, so it can't be considered part of the momentum measuring device. The momentum measuring device consists of the wall of detectors and any computer or whatever that calculates and displays the momentum that we're going to call "the result". The coordinates and size of the slit will of course be a part of that calculation, but those are just numbers typed manually into the computer. Those numbers are part of the measuring device, but the slit isn't physically a part of it.

This is where I think we either disagree or aren't trying to do the same thing. Normally I'd agree with you, i.e., if all we are doing is measuring position at the plate, then I agree.

But I thought the whole point here is that we are trying to generalize some kind of "measurement" as an inference, from the picture outlined in Ballentine. And in THIS case, since, as you acknowledge below, we really have an "average" throughout the construct, this has to be respected by the y measurement as well; otherwise we are IMO not inferring y and p_y from the same information, and thus the comparison of uncertainties makes no sense at all.
Fredrik said:
I've been talking about how to define a momentum measurement on a state with a sharply defined position, but now that I think about it again, I'm not sure that even makes sense.

Mmm, OK. Then we were trying to accomplish different things. I don't think this makes sense either. I mean, sure, we could come up with some type of calculation of dy and dp, but defined the way you seek, I think it would not correspond to the same information state (see below).

Fredrik said:
Huh? What's an information state? Are you even talking about quantum mechanics?

Yes, but in a generalized sense (as you were the one seeking to define new measurements).

I just mean that the wavefunction gives it a partly classical flavour. I think more in terms of an abstract state vector (which is of course supposedly encoding the same info as the wavefunction) but interpreted differently from Ballentine's statistical interpretation.

The interpretation is that, instead of thinking of the state vector as encoding information about a statistical ensemble, realized as an infinity of identically prepared systems etc., I'm thinking of the observer's state of information/knowledge about the system.

Technically this is not a property of the system, it's a property of the STATE of the observing system. Only at equilibrium does the state of the observer's information about the system match, at least in some sense, the system. The point is that this interpretation allows understanding the concept of an information state even when no ensemble can be realized, or when the information that "should need to go into the ensemble" must be truncated simply because the observing system is NOT an infinite environment serving as an information sink, but rather a finite-mass subsystem of the universe.

But it's not news that my interpretation of QM is not at all like Ballentine's statistical view.
Fredrik said:
What we need here is a definition of a "momentum measurement" on the state the particle is in immediately before it's detected, and the only argument I can think of against Ballentine's method being the only correct one is that classically, it would measure the average momentum of the journey from the slit to the detector. However, classically, there's no difference between "momentum" and "average momentum" when the particle is free, as it is here. I don't see a reason to think this is different in the quantum world, so I no longer have a reason to think we're measuring "the wrong thing", and that means I can no longer argue for a second contribution to the total error that comes from "measuring the wrong thing". (That was the contribution I said would grow with L).
This sounds like the objection I have too.

I phrased it differently, but the objection is similar. What we do infer is the momentum "spread out" over the time from which the information used for the inference originates. This is why this is also the "time stamp" for any y measurement we want to "associate" with the same information. I.e., this is why I argue for the extra uncertainty in y. It's not because the error at the screen is larger than the detector cell, but because we are forced to add this error if we insist on associating it with the inferred p_y average.

So instead of saying "we infer the wrong thing", I took whatever we measured as the starting point, given Ballentine's scheme, and then suggested that to make it a coherent inference we need to adjust the y-inference as well and add the error.

When you look at what information is used for an inference, this becomes clearer. The only "time stamps" we have are parameterizations of how the information set evolves. An intrinsic comparison must work on the same information set (corresponding to the generalization of "conjugate at the same time").

Atyy's points have been similar, although I didn't read all the quoted papers, except that the way you put it depends on your interpretation.

/Fredrik
 
Last edited:
  • #249


Fredrik said:
Isn't this a problem with all theories, i.e. all sets of statements that satisfy some kind of falsifiability criterion?

Partly, but I think we can do a lot better.

Just because something is tradition doesn't make it satisfactory.

To address your example, it's the notion of falsifiability that needs to be developed, in particular what happens when a theory IS falsified. Then an extrinsic theory simply fails, as there is no rational mechanism for using the information that caused the falsification to evolve the theory.

So my proposal is that we should abandon the descriptive picture of a theory, which in Popperian spirit is simply either corroborated or wrong, in favour of a picture where a theory is an interaction tool, where being wrong is in fact an essential part of the learning curve, and where we add some analysis of the induction part: how a new theory is induced from a falsified theory. This is the completely ignored part in the descriptive view.

This is also why the old scheme works perfectly fine for subsystems, but not for cosmological-scale theories, and also presents problems for understanding the unification of interactions. The cosmological theory issues here (where the ensemble and subsystem idealizations obviously do break down) become a hot topic in understanding unification if you think that any subsystem acts rationally upon what information it does have about its environment. Then it's clear that the "inference" that ultimately results in "theories" here is an essential part of a proper "theory scaling", which IMO is essentially at the core of unification.

/Fredrik
 
  • #250


Example: what is the important trait of life? It's not just the fact that we are mortal. No, the magic lies in the variation/adaptation/reproduction, learning from mistakes, etc. What one needs to assert is not that the theory can be wrong, but that the theory comes with a framework that allows progress THROUGH falsification in a rational way.

Biology has accomplished this, and I think so have physical laws; it's just that we human scientists haven't understood it "that way" YET ;)

This is the difference between seeing a theory as a "static description" or as an "interaction tool", where the latter obviously means we instead have "evolving expectations".

/Fredrik
 
Last edited: