Measurements and The Copenhagen Interpretation


jambaugh

Science Advisor
Insights Author
Gold Member
2,175
231
Also, I was always wondering what is a status of Quantum Decoherence in CI? Is it accepted, rejected or simply ignored?
Decoherence is a qualitative description of a quantum mechanical process, just as is, say, entanglement. CI, as with the other interpretations, is just that, an interpretation, and neither "accepts" nor "rejects" the process but rather again interprets it.

Understanding CI as interpreting the wave-function and density operator as descriptions of our knowledge about the physical system, one then understands that within CI decoherence describes a particular class of evolutions of our knowledge about the system.
 

jambaugh

I can't figure out how that even makes sense, let alone decide whether I agree or disagree with it.


Formal logic has a word for such a thing: contradictory.

A collection of formal statements is consistent if and only if there is a mathematical model of those statements. If there isn't a model, then there is a contradiction.
Exactly! But my analogy was to distinguish the model from that which is being modeled. In this case we are now speaking of an ontological model for observed quantum phenomena. I assert that no such model exists, which is equivalent to saying that no "collection of formal statements" about the system is possible, i.e. no state of reality.

In classical physics what we think of as "reality" is actually a model existing in our minds. What is being modeled is the actuality of phenomena. At the classical level it is possible to find a unique (up to gauge) "reality model". At the quantum level it is impossible (even with the non-CI interpretations).

Hence the "no deep reality" and "the wave function describes (models) our knowledge about the system" of CI.

Our insistence on an underlying reality = system state = complete set of formal statements about the system itself leads to Bell's inequality. Quantum nature continues to ignore our insistence and violates BI at every turn.
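The link between assuming an underlying state and Bell's inequality can be made concrete. Below is a minimal numerical sketch (my own illustration, not from the thread, assuming NumPy) of the CHSH form of Bell's inequality: any local-realist "complete set of formal statements" bounds |S| <= 2, while the singlet state reaches 2*sqrt(2).

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) (x) B(b) |psi> for the singlet: equals -cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return np.real(psi.conj() @ op @ psi)

# Standard CHSH angle choices
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828: exceeds the local-realist bound of 2
```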
 

jambaugh

...
No, I cannot agree. CI goes above and beyond simply acknowledging that quantum mechanics has to yield approximately classical results when classical mechanics works. CI insists that the wavefunction, whatever it is, undergoes a collapse,
yes...
and the system described by the wavefunction is left in a definite state corresponding to the collapsed wavefunction.
Arrrrrg! Again that word "state". CI says nothing about the system's state beyond what outcomes are actually measured. But yes, the collapsing wave function expresses the new knowledge due to the physical change the act of measurement necessarily effects on the system.
This contrasts with decoherence-based interpretations of quantum mechanics, which posit that Schrödinger's equation remains valid through a measurement -- the (relative) wavefunction merely evolves into a mixed state.
Keeping the CI view of the wave function as modeling our knowledge about the system, this is fine; one is simply examining the measurement process in more detail. It is not a re-interpretation. But you still have a classical collapse of probabilities into actualities when you select which value of the observable was made. Again the collapse is no different in quality than the collapse of the expected winnings of a lottery ticket once you know which number actually was drawn.
For example:
. MWI takes the wavefunction as a complete description of the system, and thus asserts that the system remains in an indeterminate state. However, each component of the mixture exhibits the desired "controlled stability".

. The Bohm interpretation hypothesizes additional variables. It accepts the wavefunction as being correct, and so retains its evolution into a mixture of "stable" components, but it includes particles which I believe (I don't understand BI well) tend to organize themselves in some way corresponding to that.

CI rejects both of these possibilities (and more) -- it fixes upon specific properties of classical measurement, such as definite outcomes1, and rather than allowing for it to be an approximately correct emergent property of the underlying mechanics, CI insists that it is the literal truth of the matter, and even makes some insistence on how it's the literal truth.
You mis-characterize. You still update the wave-function/density operator/system description in MWI and Bohm's Pilot-Wave Interp. once you assert that a given outcome to the measurement has been made! It is simply Bayesian conditional probabilities. The probability distribution for future outcomes, given that a particular measurement had a particular value, is "suddenly" different from the probability distribution prior to incorporating this information. No mystery. You can examine the process by which "quantum probabilities" become "classical probabilities" but there is in fact no distinction in the probabilities; they are relative frequencies for hypothetical outcomes and obey Bayesian inference rules. The only distinction is that in CM we can model the probability distribution with a pdf over a state manifold, whereas in QM, given the absence of a state manifold, we must revert to an amplitude distribution over a complete set of commuting observables along with the Born probability formula.
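To make the "collapse = Bayesian conditioning" reading concrete, here is a small sketch (my own illustration, assuming NumPy): the post-measurement description is literally the conditional obtained from the Born rule, in exact parallel with updating a lottery ticket's odds once the draw is known.

```python
import numpy as np

# Knowledge of the spin encoded as a density operator: electron prepared in |+x>
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus_x, plus_x.conj())

# Projectors for the two outcomes of a sigma_z measurement
P_up = np.diag([1.0, 0.0]).astype(complex)
P_down = np.diag([0.0, 1.0]).astype(complex)

# Born rule: prior probabilities of the outcomes
p_up = np.real(np.trace(P_up @ rho))      # 0.5
p_down = np.real(np.trace(P_down @ rho))  # 0.5

# "Collapse" as Bayesian conditioning: the description GIVEN outcome "up"
rho_given_up = P_up @ rho @ P_up / p_up

# Predictions for a later sigma_x measurement change once we condition,
# exactly as a ticket's expected winnings change once the number is drawn.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
print(np.real(np.trace(sx @ rho)))           # 1.0 before conditioning
print(np.real(np.trace(sx @ rho_given_up)))  # 0.0 after conditioning on "up"
```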

[EDIT: Let me add that it is only in the MWI or Bohmian interpretations, where the wave function is given ontological status that there is an issue with wave-function collapse because they are asserting some physical collapse must be occurring. In MWI the whole friggin' universe has to split and Bohm's pilot waves must execute some fancy fast stepping (>C or back in time) to update. In CI we simply are updating our records given new information.

Like the IRS registering a change of address. Confusing the company records with the person's actual location leads one to think they moved in the blink of an eye because the computer record changed in the blink of an eye.]

1: The ironic thing is that definite outcomes aren't required classically. It's just that in the classical case, the math behind mixed states is exactly identical to the math behind ignorance probabilities, and for whatever reason, people preferred to think in terms of the latter rather than the former.
The math is identical because the meanings are identical. CI => "mixed states" = "mixed states of probability".
 

jambaugh

I can't figure out how that even makes sense, let alone decide whether I agree or disagree with it.
Mathematical objects are completely defined. E.g. a line through two points on the Euclidean plane. Some properties are associated with the line, e.g. dimension, some are not, e.g. color or mass. For every associated property there is a definitive value e.g. dimension=1.

If one is talking about a dynamically changing line one can speak of its objective state (complete set of properties and their values) as a function of a "time" parameter.

Physical systems, when treated classically, are likewise assumed to have objective properties with constantly defined values. This means we can map the physical system to a mathematical representation. This is an ontological model of the system. The canonical method for so doing is to represent any system as a point on a state manifold (phase space). However this doesn't always work well and we introduce gauge degrees of freedom wherein the system is represented by gauge orbits (branes) in an extended phase space. One can always "fix the gauge", i.e. mod out the gauge orbits, and again recover a point-on-state-manifold description.

To give this operational meaning we must assume that there exists a complete set of observables, "complete" meaning that the values determine the state of the classical system.

So classically, properties correspond to observables, with property value = observable value. In quantum mechanics we can have observables ("q-properties"?) which do not have well defined values at all times. The shift from ontological to epistemological description is complete. Note classical "completeness" can no longer be given meaning since we negate the classical assumption of an underlying state. We replace it with the epistemological equivalent of "maximal", i.e. no more information can be gleaned without invalidating one of the prior measurements.

To make the classical to quantum transition we translate the ontological aspects of classical systems to their epistemological equivalents: state variables -> observables.
We pay attention to the dynamics of the epistemological variables instead of the ontological ones.
We then map to the epistemological quantum description preserving the dynamics of the observables. In fact we take the dynamics more seriously, preserving the non-commutative structure of the observables, guided by Noether's correspondence between observables and generators of system/frame transformations. By invalidating the classical completeness assertion we invalidate the use of ontological models and stick to the heart of the physics: the epistemological model = the dynamics of what we know about the system = the dynamics of the wave-function/density operator.
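The incompatibility of observables and the "maximal" (rather than complete) character of quantum measurement can be seen in the simplest case, spin-1/2. A rough sketch (my own illustration, assuming NumPy):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Classically, a complete description assigns values to every observable at once.
# These two spin observables fail to commute, so no such joint assignment exists:
commutator = sx @ sz - sz @ sx
print(np.allclose(commutator, 0))  # False: sigma_x and sigma_z are incompatible

# A "maximal" measurement fixes one commuting set -- for spin-1/2, sigma_z alone.
# Its eigenvector pins down the wave-function completely, yet sigma_x stays open:
eigvals, eigvecs = np.linalg.eigh(sz)  # eigh returns eigenvalues in ascending order
up = eigvecs[:, 1]                     # the eigenvalue +1 ("spin up") eigenvector
exp_sx = np.real(up.conj() @ sx @ up)
print(exp_sx)  # 0.0: sigma_x is maximally uncertain once sigma_z is sharp
```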

Remember, science, and thus physics, is an epistemological discipline. There is no metaphysics in physics. Ontological models are useful -- sometimes -- as scaffolds for constructing epistemologically meaningful theories. But although the scaffolding dictates the shape of the building, it is not the building. The model is not the theory. Carrying the analogy further, the use of a scaffolding limits the form our buildings can take. To go beyond, we must rather abandon the scaffolding method and grow the building up from its epistemological foundation.
 

Hurkyl

Staff Emeritus
Science Advisor
Gold Member
14,845
17
Decoherence is a qualitative description of a quantum mechanical process, just as is say entanglement. CI as with the other interpretations is just that, an interpretation and neither "accepts" nor "rejects" the process but rather again interprets it.
This is by no means clear. The collapse postulate of CI solves a particular problem related to measurement. Decoherence appears to solve that exact same problem. This leads to a very fundamental dichotomy in interpretations of quantum mechanics: are the properties of measurement
(1) The result of decoherence that occurs in the process of unitary evolution?
(2) The result of some other process that isn't described by unitary evolution?
(With (2) commonly resulting in a wavefunction collapse)

If you retain the collapse postulate of CI, that means you are rejecting the hypothesis that measurement could be described (in principle) via unitary evolution, and that the effects of measurement are the result of decoherence.
 

Hurkyl

At the classical model it is possible to find a unique (up to gauge) "reality model".
(Depending on what you actually mean) mathematics does not support that level of specificity -- for example, given any formal language and mathematical structure into which that language can be interpreted, one can find a different, nonisomorphic mathematical structure that is completely indistinguishable from the original using just the chosen language.


Our insistence on an underlying reality = system state = complete set of formal statements about the system itself leads to Bell's inequality. Quantum nature continues to ignore our insistence and violates BI at every turn.
You're making a crucial mistake -- you are saying "complete set of formal statements about the system itself" but you are thinking "a complete set of formal statements about the system itself that has the qualities of being local, realistic, and counterfactually definite" (and whatever other assumptions Bell used in his theorem).

In quantum mechanics we can have observables ("q-properties"?) which do not have well defined values at all times.
There is nothing wrong with "wavefunction == deep reality", as demonstrated, for example, by the many worlds interpretation.
 

Hurkyl

But you still have a classical collapse of probabilities into actualities when you select which value of the observable was made.
Which is an unwarranted assumption! It's sufficient for the "perceived actuality" to be conditional on "observed observable" without requiring that there is a definite value of the observable which was observed. (i.e. it's enough for the observed value to be conditioned upon which value was observed)


It is simply Bayesian conditional probabilities.
No it's not. Pre-measurement, CI will compute conditional probabilities. Post-measurement, CI insists that a definite outcome occurred, and asserts that the probabilities are now absolute probabilities.

e.g. When using the CI, after measuring the electron, one might say something like "the electron is definitely spin up", rather than saying "given that we measured the electron to be spin up, it is definitely spin up". Doing so is definitely good for economy of speech, but CI means it literally, not as a shortcut.
 

jambaugh

I think a good paradigm example of a measurement is to consider the Stern-Gerlach measurement of an electron's spin.

1.) We prepare the electron in a crisply defined momentum mode roughly orthogonal to the direction along which we wish to measure the spin; w.l.o.g. define that as the z-direction.

2.) We allow the electron to evolve in a non-uniform B field, said non-uniformity in the direction of the component of spin to be measured. The S-G magnets couple the electron's spin component to its momentum, resulting in entanglement between the z-component of momentum and the z-component of spin. The coupling must occur to a sufficient degree that the z-component of momentum differs for distinct spin values by more than the Delta P_z of the initial preparation.

3.) We then measure the z-component of momentum to within a resolution sufficient to distinguish the effect of spin. This is where the actual measurement occurs.

(And typically this is done by also preparing the electrons in step 1 with a certain Delta z resolution of z-position, allowing the vacuum propagation of the electron after the S-G magnet to couple z-momentum to z-position, and thence measuring position to within a resolution sufficient to resolve the momentum and thence the spin.)

Note that prior to some decoherence-inducing registration of the electron's position/momentum we cannot really say the value of the spin has been measured, and yes, we can in principle "undo" the entanglement created and recombine the two electron paths (modulo some random momentum variables, for those who worry about determinism).
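The entanglement step (2) and the decoherence remark can be caricatured numerically. In this toy sketch (my own, assuming NumPy) the spin couples to a two-valued momentum label via a controlled-NOT standing in for the S-G coupling; tracing out the momentum leaves a spin description with no coherences left:

```python
import numpy as np

# Toy Stern-Gerlach: spin (up/down) couples to a coarse momentum label (+pz/-pz).
spin0 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # initial spin |+x>
mom0 = np.array([1, 0], dtype=complex)                # initial momentum mode

psi = np.kron(spin0, mom0)  # product state, components ordered |spin, momentum>

# The magnet's action: leave the momentum label alone for spin up, flip it for
# spin down (a controlled-NOT with spin as control -- a stand-in for the field).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi_out = CNOT @ psi  # entangled: (|up,+pz> + |down,-pz>)/sqrt(2)

# Reduced spin description after ignoring (tracing out) the momentum:
rho = np.outer(psi_out, psi_out.conj()).reshape(2, 2, 2, 2)
rho_spin = np.trace(rho, axis1=1, axis2=3)  # partial trace over momentum index
print(np.round(rho_spin.real, 3))  # diagonal 0.5/0.5: the coherences are gone
```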

I'm trying to think of the simplest (rough) non-destructive (to spin) position or momentum measurement device. Consider a loop of wire with straight segments running close and parallel to the two beam paths. I'll draw a quick diagram:
[Diagram: a loop of wire with straight segments running close and parallel to the two beam paths, feeding an amplifier.]

Not shown in the diagram but implicit is the heat sink cooling the amplifier and wire down so that thermal fluctuations are below the signal level. Note the amplifier must be producing heat because it must maintain a subsystem with effective "negative temperature" in order to amplify the signal to the point of classical observability.

We then will see either a positive or negative spike in the output voltage indicating whether the electron took one path or the other. I am pretty sure (but not absolutely certain) that the interaction between electron and wire will not further affect the electron's spin. There should only be a slight back reaction as the magnetic field of the induced current in the wire acts on the electron. We should actually place the wire behind the beam path in the diagram so that the B-field for current in the wire will be parallel to the z-component of spin and not precess it any.

To keep things as simple as possible we can let the wire itself be near zero temp and super-conducting so that we may further treat the electron field in the wire as a quantum system. Ultimately the classical-quantum cut occurs within the amplifier. But due to the very close interaction between wire and amp we can treat the wire as a quantum system coupled immediately to an entropy dump and let it be the source of measurement.

Can anyone think of a simpler (actualizable in principle) non-destructive (of the quantity to be measured) measurement process? I first considered a cloud/bubble chamber but wasn't sure we could reliably assume the spin didn't get scrambled in the process.
 


jambaugh

Which is an unwarranted assumption! It's sufficient for the "perceived actuality" to be conditional on "observed observable" without requiring that there is a definite value of the observable which was observed. (i.e. it's enough for the observed value to be conditioned upon which value was observed)



No it's not. Pre-measurement, CI will compute conditional probabilities. Post-measurement, CI insists that a definite outcome occurred, and asserts that the probabilities are now absolute probabilities.
But CI doesn't assert which outcome will occur. It lets QM tell us the probabilities of the various outcomes. The distinct modified wave-functions and their subsequently calculated probabilities are still conditional probabilities, said probabilities conditioned on the measured value. It is only when we stop considering the probabilistic array of possible outcomes and say something like "suppose it was outcome 3!" that we update the wave function (or pick a universe, or assert that the pilot waves have done some communicating).

In your head you are still thinking in terms of "definite wave-function = definite state of the system" rather than watching as the information gets updated in CI.
e.g. When using the CI, after measuring the electron, one might say something like "the electron is definitely spin up", rather than saying "given that we measured the electron to be spin up, it is definitely spin up". Doing so is definitely good for economy of speech, but CI means it literally, not as a shortcut.
Let's take that example. Before measurement,
1.) we agree that we are going to measure "spin up" vs "spin down" and set things up.
2.) we write down the wave function or density matrix for the electron coming from our source S.
[You imagine this expresses the state of the electron, and I that it is just our encoding of what we know about the electron.]

3.) we calculate the probabilities of each measurement and say determine them to be 50% vs 50%.

4.) We do the experiment but neither of us looks at the result just yet; a computer prints it on a card which we set face down. While we are at it we decide to see who gets to turn over the card, so we rattle a die in a cup which is also hidden. Even and I get to flip; odd and you get to flip.

Now at this point I say: "OK, now that we've made the measurement we can express the system in terms of a density operator projecting with weight 50% onto one 'collapsed wave-function', and projecting with weight 50% onto the other 'collapsed wave function'."

I add "Oh by the way we can represent the outcome of the die toss as a probability distribution 50% even vs 50% odd".

5.) We look at the die and I win so I say "I'll update our die pdf to register 100% even"
I then flip the card and we see that the electron's spin was measured as "up" and so I say: "OK now that we know what was measured we can update the density operator to a projection onto the up 'collapsed' wave-function."

Now we didn't have to first write down a density operator between measuring and looking at the card. The original wave function and the updated density operator both expressed the same information about what we could observe after the fact. The only distinction is that the two descriptions may differ in calculated predictions about other outcomes which are now impossible since we cannot "undo" the measurement once the result is printed on the card.

(I thought of including your part of the conversation but didn't want to put words in your mouth.)

I assert that, applying CI, the wave function collapse was no different from the die pdf collapse. It is something we do when we actually incorporate new information into our description of what we know about the system.
 

jambaugh

I actually have a slight mistake in my characterization of Hurkyl's example. We shouldn't use a density operator to express our knowledge of the system post-measurement but pre-acknowledgment of the measured value. Rather we should use a classical discrete pdf over the two "collapsed" modes. The reason is that an entropy calculation on the density operator would yield non-zero entropy, whereas having access to a record of the measurement outcome implies zero entropy. It should be even more in parallel with the die case. In the case where we actually use a density operator, we should be sure that the classical probabilities stem from correlation (entanglement) of the observable with variables in the entropy dump which are thereby inaccessible. The classical probabilities in the density operator stem from tracing over the inaccessible parts of the environment.
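The entropy claim can be checked directly. A rough sketch (my illustration, assuming NumPy) of the von Neumann entropy S = -Tr(rho ln rho): the 50/50 post-measurement mixture carries ln 2 of entropy, while the "collapsed" pure projection carries none, consistent with holding a record of the outcome:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 * ln 0 = 0 by convention
    return float(-np.sum(evals * np.log(evals)))

# Post-measurement, pre-lookup mixture of the two "collapsed" modes:
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])
print(von_neumann_entropy(rho_mixed))  # ln 2 ~ 0.693: one bit still missing

# Once the card is read, the description collapses to a pure projection:
rho_pure = np.diag([1.0, 0.0])
print(von_neumann_entropy(rho_pure))   # 0.0: the record leaves nothing uncertain
```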

I believe this is just a technical qualification of the notation; we could choose to utilize the same mathematical object, the density operator, to represent both kinds of "classical" probabilities, but with the type distinction made explicit. One is dealing with the same kind of entropy issues which arise in interpreting probabilities in a purely classical setting. Maxwell's demon is starting to rear his little pointy head.

I'll have to consider what this means in terms of my understanding of entropy. Hmmm....
I think essentially the CI must extend also into the classical realm. We should understand the meaning of entropy as expressing not a property of the system but a property of our knowledge about the system... said knowledge being of course linked, via the empirical epistemology of physics, to the physical constraints we impose in defining (physically setting up an example of) the system.

Hmmm... this would make the second law of thermodynamics an abstract law about information. I'll meditate on this.
 
I think essentially the CI must extend also in the classical realm. We should understand the meaning of entropy as expressing not a property of the system but a property of our knowledge about the system...

Hmmm... this would make the second law of thermodynamics an abstract law about information. I'll meditate on this.
Ok, so the 2nd LOT is an abstraction of and generalization based on the information available to us. What's there to meditate about?

Regarding your OP. I've learned to think of the meaning of wavefunction collapse in the CI in the same way that your posts indicate that you do. Wrt the CI wavefunction collapse isn't a problem. Given definite qualitative instrumental results (ie., measurements), info is updated. That's all it means.

Measurements (ie., information) are distinguished from models of the measurement process. Measurements refer to recorded instrumental events or output. These records are amenable to our senses and can be described using classical concepts. This provides the objective basis for the science of physics.

The CI, as I've learned it, says that definitive, objective answers to questions about deep reality are prohibited by the quantum theory.
 

Hurkyl

Here's an important thought experiment, I think.


We're all familiar with the scourge of thermodynamics: Maxwell's demon. Given a container with a small sliding window, this is the little imp that is capable of creating a vacuum simply by opening the window to let inside air molecules exit, and closing it before outside air molecules would enter.

We're all familiar with the half-silvered mirror experiment: we fire a beam of light through the mirror and it splits. We guide the split beams to another mirror, and they recombine.

We also know that if we add something along one of the beam paths that would interact with the light, then the two beams do not recombine at the end.

We also know the quantum eraser experiment: in some cases, it is possible to add a second interaction that "undoes" the first, and the two beams again recombine at the end.



Now, what happens if we combine Maxwell's demon with the idea of a quantum eraser? We have a setup like the half-silvered mirror experiment; let's make it an incredibly huge experiment, to give the demon ample time to do his stuff.

We pass one of the beams through a closed room of the demon's design. In the room, we place some sort of measuring device along the beam's path that will print results onto a sheet of paper. We give the demon free rein to do whatever he wants inside his room, as long as he doesn't disable/disconnect/whatever our measuring device.

Now, the $1,000,000 question1: under these restrictions, can the demon build a quantum eraser?

If so, then what about if we put Wigner's friend in the room, and he's allowed to read the sheet of paper?



1: If you believe in hidden variables, then also consider putting another restriction on the demon that he's only allowed to use quantum mechanics, and isn't allowed to peek at the hidden variables.
 

jambaugh

Here's an important thought experiment, I think.
...
Now, what happens if we combine Maxwell's demon with the idea of a quantum eraser?
...
This is a horribly ill posed thought experiment. Maxwell's demon is an inherently classical entity having deific knowledge of the objective state of an inherently classical system.

To set up the details of your experiment you must necessarily invoke counter-factuals. Maxwell's demon is supposed to simultaneously know values for all observables including mutually incompatible ones.

Make your demon a "quantum Maxwell's demon" and he ceases to be demonic. One of the great triumphs of QM was its resolution of Maxwell's demonic paradox. You can't invoke the demon without invalidating QM.
 

jambaugh

Ok, so the 2nd LOT is an abstraction of and generalization based on the information available to us. What's there to meditate about?
I have the same predilection toward objectifying systems as the next fellow. It is easy to fall into the trap of thinking of entropy as a system variable instead of a description variable. I see the 2nd law in action every time I eat and excrete waste. Just as it is hard to fight off the notion that [itex]\Psi[/itex] is the system's "quantum state", it is hard to fight off the notion that S is a concrete system observable. These notions, though consciously treated correctly, need to be incorporated into our intuition or we tend to waste time re-educating ourselves with what we already know consciously.

When I run across a realization which makes me say to myself "Oh yeah, I should have known that, it should have been obvious!" I take this as a warning bell that I haven't fully integrated the facts into my intuitive understanding, and hence I "meditate" on the issue to remediate the deficit.

Regarding your OP...The CI, as I've learned it, says that definitive, objective answers to questions about deep reality are prohibited by the quantum theory.
Yes, and indeed the questions themselves are ill posed. But we evolved our thought processes over the last eon or so dealing with our environment at the classical level. We are fighting a great deal of instinct in trying to properly understand QM.
 

Hurkyl

This is a horribly ill posed thought experiment.
Quantum erasers aren't an ill-posed thought experiment -- they are things we actually build in the real world and observe in action.

Make your demon a "quantum Maxwell's demon" and he ceases to be demonic.
:confused:

One of the great triumphs of QM was its resolution of Maxwell's demonic paradox. You can't invoke the demon without invalidating QM.
No, seriously, :confused:
 

jambaugh

Quantum erasers aren't an ill-posed thought experiment -- they are things we actually build in the real world and observe in action.
Right! But mixing such with a "Maxwell Demon" is ill-posed. To invoke M's D you must be in a purely classical description. The demon is able to divine the objective state of the system without interaction with it. It thus knows values for observables which in QM are not simultaneously measurable and thus not simultaneously well defined. He grossly violates the uncertainty principle.

If you disagree then formulate your experiment in more detail and show me where I err.
 
Right! But mixing such with a "Maxwell Demon" is ill-posed. To invoke M's D you must be in a purely classical description. The demon is able to divine the objective state of the system without interaction with it. It thus knows values for observables which in QM are not simultaneously measurable and thus not simultaneously well defined. He grossly violates the uncertainty principle.

If you disagree then formulate your experiment in more detail and show me where I err.

I also like to see more details about Hurkyl's thought experiment, but I don't see how MD necessarily violates the uncertainty principle. In a previous posting you wrote:

One of the great triumphs of QM was its resolution of Maxwell's demonic paradox. You can't invoke the demon without invalidating QM.
which I don't think is true at all. Even if a hypothetical MD in some setting would have to violate the uncertainty principle to work with 100% efficiency, there is already a problem: its decisions don't need to be correct all the time; what matters is that its decisions are more often correct than wrong. A coarse-grained measurement of the local density on both sides of the door is enough to make a decision that is statistically correct.

The total entropy will go down and the memory of the Demon will contain more and more random information about its past decisions. According to quantum mechanics, the whole process must be unitary, so no information can be lost. If the entropy of the gas is going down, then that means that specifying the exact microstate given the macrostate requires less information. Given the (pseudo) random nature of the initial state, this is impossible unless all of the information that you now don't need anymore to point out the exact microstate is present in the memory of the Demon.

The impossibility follows from counting states. The process would have worked starting from an initial state being any of the possible microstates that the system could have been in. But the gas ends up in a final state that is one out of a smaller set of microstates. Then for the process to be unitary requires the possible final memory states of the Demon to compensate for this, otherwise two different initial states would be mapped under time evolution to the same final state.
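The counting argument can be illustrated with toy numbers (my own choices, not from the post): a reversible (unitary) map is injective, so if the set of gas microstates shrinks, the demon's memory must pick up exactly the difference.

```python
import math

# Toy version of the counting argument: a reversible (unitary/permutation) map
# is injective, so it cannot send N distinct joint microstates to fewer than N.
n_gas_initial = 2**20  # microstates compatible with the initial macrostate
n_gas_final = 2**12    # fewer microstates: the demon lowered the gas entropy

# Injectivity forces the demon's memory to make up the difference:
n_memory_needed = math.ceil(n_gas_initial / n_gas_final)
bits_recorded = math.log2(n_memory_needed)
entropy_drop = math.log2(n_gas_initial) - math.log2(n_gas_final)
print(bits_recorded, entropy_drop)  # 8.0 8.0: the memory absorbs exactly the drop
```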


This is then also an answer to an earlier criticism about me invoking the 2nd law to argue in favor of unitary evolution. While it sounds contradictory, the above argument explains that if you can violate unitary evolution, you can actually make an effective Maxwell's Demon that can really lower the entropy of a system.
 

Hurkyl

Right! But mixing such with a "Maxwell Demon" is ill-posed. To invoke M's D you must be in a purely classical description. The demon is able to divine the objective state of the system without interaction with it. It thus knows values for observables which in QM are not simultaneously measurable and thus not simultaneously well defined. He grossly violates the uncertainty principle.
:confused: If something isn't well-defined, then it cannot be part of the objective state of the system.

The superpower of Maxwell's demon is that he is not limited by what is practical or feasible -- he is only limited by what is, in principle, possible.
 

jambaugh

:confused: If something isn't well-defined, then it cannot be part of the objective state of the system.

The superpower of Maxwell's demon is that he is not limited by what is practical or feasible -- he is only limited by what is, in principle, possible.
I see that I am thinking of a different version of Maxwell's demon, i.e. an entity with "god like" knowledge of the system's objective state and dynamics and thus able to predict the outcome of all acts of measurement. This version is not meaningful in the quantum setting.

However the definition you are giving is more robust and I think more in line with Maxwell's original definition. With this definition I retract my objections of invoking such in a quantum context.

I however do not see the point. In all our thought experiments we do not seem to be limiting the experimenter beyond what is possible, and thus they are demonesque in this sense.
 

jambaugh

:confused: If something isn't well-defined, then it cannot be part of the objective state of the system.
The point here is that in a quantum setting "the objective state of the system" is itself ill-defined. That is at least within CI.
 
