Fundamental Quantum Probability

In summary: [itex]|\Psi\rangle[/itex] is just a ket representing a state in the Hilbert space [itex]\mathcal{H}[/itex]. (The original question contained a mistake: the expression should have been [itex]|\langle\phi|\Psi\rangle|^2[/itex] instead of [itex]|\langle\phi^*|\Psi\rangle|^2[/itex].) The [itex]|\psi_n\rangle[/itex] represent the possible states that can be measured, and the "occurrence(s) of measuring particular state(s)" refers to the number of times a particular state [itex]|\psi_n\rangle[/itex] is measured in a given ensemble of experiments.
  • #1
Geremia
Let [itex]|\psi_n\rangle\in\mathcal{H}[/itex], where [itex]\mathcal{H}[/itex] is a Hilbert space, be orthonormal states forming a complete set, and [itex]n\in\mathbb{N}[/itex]. Let

[tex]|\Psi\rangle=\sum_{n=1}^N c^{(1)}_n|\psi_n\rangle,[/tex]

where the [itex]c^{(1)}_n[/itex] are normalized coefficients and [itex]N[/itex] is either finite or infinite. Let [itex]m[/itex] be an eigenvalue of the observable [itex]\hat O[/itex] corresponding to the eigenket [itex]|\phi\rangle\in\mathcal{H}[/itex], where

[tex]|\phi\rangle=\sum_{n=1}^N c^{(2)}_n|\psi_n\rangle.[/tex]

The probability of measuring [itex]m[/itex] is

[tex]|\langle\phi|\Psi\rangle|^2=\frac{\mathrm{occurrence(s)\;of\;measuring\;particular\;state(s)\;}|\psi_n\rangle}{\mathrm{total\;possible\;measurable\;states\;}N},[/tex]

which follows from the definition of probability. What specifically is "occurrence(s) of measuring particular state(s)" for given [itex]c_n[/itex]s and [itex]N[/itex]?
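As a concrete numerical illustration of the left-hand side (a minimal sketch; the basis size and coefficient values below are made up), orthonormality of the [itex]|\psi_n\rangle[/itex] gives [itex]\langle\phi|\Psi\rangle=\sum_{n=1}^N c^{(2)*}_n c^{(1)}_n[/itex]:

[code]
import numpy as np

# Hypothetical coefficients in the shared orthonormal basis {|psi_n>}, N = 3.
c1 = np.array([0.6, 0.8j, 0.0])              # |Psi> = sum_n c1[n] |psi_n>
c2 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # |phi> = sum_n c2[n] |psi_n>
assert np.isclose(np.linalg.norm(c1), 1) and np.isclose(np.linalg.norm(c2), 1)

# <phi|Psi> = sum_n conj(c2[n]) * c1[n], by orthonormality of the |psi_n>.
overlap = np.vdot(c2, c1)
print(abs(overlap) ** 2)  # Born-rule probability of measuring m
[/code]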
 
  • #2
According to the Copenhagen interpretation, there is no explanation for this.
According to hidden-variable theories, it is due to our lack of knowledge about the initial conditions (like the probabilities in classical statistical physics).
 
  • #3
Geremia said:
What specifically is "occurrence(s) of measuring particular state(s)" for given [tex]c_n[/tex]s and [tex]N[/tex]?

It's true that it's an interpretational thing. I personally see it from a kind of subjective conditional-probability view.

Imagine "a measure of", or equivalently "a way to count", the set of physically distinguishable possibilities, given a premise (seen as information); i.e., each observer encodes this measure.

Then the quantum probability, counted the way you would imagine defining a probability, is simply the count of the physically distinguishable possibilities that are consistent with the constraint implied by your prior premise, [tex]\{c_n\}_{n=1}^N;\ \mathcal{H},[/tex] which supposedly represents the observer's "information", on which the "probability" is conditional.

The mystery is: given such a view, why does the counting evaluate exactly like the structure of QM for a given operator? I personally see this as a so-far open problem. But the general idea is that a bounded observer can only encode/hold finite information, and therefore its "measure" is incomplete, which makes the emergence of non-commutative observables possible. Given that, the next question could be the CHOICE of observables/information that defines the observer. Given that the observer has limited capacity, WHAT information should be retained/encoded and what should be discarded?

I think all these questions need to be answered in order to give a somewhat satisfactory answer to your original question.

/Fredrik
 
  • #4
Fra said:
observer, can only encode/hold finite information, and therefore his "measure" is incomplete, and therefore it is possible for the emergence of non-commutative observables.

This incompleteness should, though, not be confused with the realist type of incompleteness of QM that Einstein was talking about. This incompleteness is not an incompleteness of our theory; it's an incompleteness intrinsic to the makeup of nature and the interactions in nature.

/Fredrik
 
  • #5
Geremia said:
Let [tex]|\psi_n\rangle\in\mathcal{H}[/tex], where [tex]\mathcal{H}[/tex] is Hilbert space, be orthonormal states forming a complete set, and [tex]n\in\mathbb{N}[/tex]. Let

[tex]|\Psi\rangle=\sum_{n=1}^N c^{(1)}_n|\psi_n\rangle,[/tex]

where [tex]c_n[/tex]s are normalized coefficients and [tex]N[/tex] is either finite or infinite. Let [tex]m[/tex] be an eigenvalue of observable [tex]\widehat{O}[/tex] corresponding to the eigenket [tex]|\phi\rangle\in\mathcal{H}[/tex], where

[tex]|\phi\rangle=\sum_{n=1}^N c^{(2)}_n|\psi_n\rangle.[/tex]

The probability of measuring [tex]m[/tex] is

[tex]|\langle\phi^*|\Psi\rangle|^2=\frac{\mathrm{occurrence(s)\;of\;measuring\;particular\;state(s)\;}|\psi_n\rangle}{\mathrm{total\;possible\;measurable\;states\;}N},[/tex]

which follows from the definition of probability. What specifically is "occurrence(s) of measuring particular state(s)" for given [tex]c_n[/tex]s and [tex]N[/tex]?
Most of this looks very strange to me. There certainly shouldn't be a * on that phi near the end. It's not clear what the [itex]\psi_n[/itex] are. Are they the members of an arbitrary orthonormal basis? In that case, they shouldn't appear at all in the final expression. Are they the eigenstates of a complete set of commuting observables? In that case, is [itex]\hat O[/itex] one of them? And why are the states labeled by n instead of by the eigenvalues? Also, you didn't actually say that [itex]\hat O[/itex] is the observable being measured.
 
  • #6
Fredrik said:
Most of this looks very strange to me. There certainly shouldn't be a * on that phi near the end.
Yes, that was a mistake; I changed it.
Fredrik said:
It's not clear what the [itex]\psi_n[/itex] are.
I should have been more clear about that, too. I am leaving them unspecified. What if I were to say that the [itex]|\psi_n\rangle[/itex]s are eigenstates of [tex]\hat O[/tex]; how would that change things? I meant that they are the complete set of orthonormal states in all of Hilbert space which can represent any arbitrary state [itex]|\phi\rangle[/itex]. Is this not possible?
Fredrik said:
Are they the members of an arbitrary orthonormal basis? In that case, they shouldn't appear at all in the final expression.
The [itex]|\phi_n\rangle[/itex]s only appear implicitly in the last expression.
Fredrik said:
Are they the eigenstates of a complete set of commuting observables? In that case, is [tex]\hat O[/tex] one of them? And why are the states labled by n instead of the eigenvalues?
Why should "yes" be the answer to any of these three questions?
Fredrik said:
Also, you didn't actually say that [tex]\hat O[/tex] is the observable being measured.
It is.
 
  • #7
Geremia said:
I meant that they are the complete set of orthonormal states in all of Hilbert space which can represent any arbitrary state [itex]|\phi\rangle[/itex].
So it's an arbitrary basis. Then it doesn't seem to have anything to do with the rest of what you're talking about, but I probably still don't understand what you were trying to say.

Geremia said:
What if I were to say that the [itex]|\psi_n\rangle[/itex]s are eigenstates of [tex]\hat O[/tex]; how would that change things?
I thought you were trying to say something similar to this: If the system is in state [itex]|\psi\rangle[/itex] just before we measure an observable [tex]\hat O[/tex] with eigenvectors [itex]|\phi_n\rangle[/itex], then [itex]|\langle\phi_m|\psi\rangle|^2[/itex] is the probability that the result will be the eigenvalue corresponding to [itex]|\phi_m\rangle[/itex]. (Note that there's no need to mention a basis).

This is the simplest version of the probability rule. It only works when the space of eigenvectors corresponding to the eigenvalue we're interested in is 1-dimensional.
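When the eigenvalue is degenerate, the standard generalization replaces the single eigenvector by the projector onto the eigenspace:

[tex]P(O_m)=\langle\psi|\hat P_m|\psi\rangle,\qquad \hat P_m=\sum_i|\phi_{m,i}\rangle\langle\phi_{m,i}|,[/tex]

where the [itex]|\phi_{m,i}\rangle[/itex] form an orthonormal basis of the eigenspace belonging to [itex]O_m[/itex]; when that eigenspace is 1-dimensional, this reduces to [itex]|\langle\phi_m|\psi\rangle|^2[/itex].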

Geremia said:
The [itex]|\phi_n\rangle[/itex]s only appear implicitly in the last expression.
You haven't defined any [itex]|\phi_n\rangle[/itex]. I asked about [itex]|\psi_n\rangle[/itex], and they appear on the right. But I don't know what you meant by that right-hand side. Maybe something like this:

"Number of measurements with result Om" / "Total number of measurements"

...where [itex]O_m[/itex] is the eigenvalue corresponding to [itex]|\phi_m\rangle[/itex].

You seemed to be going for something else, something like

"number of [itex]|\psi_n\rangle[/itex] with a certain property" / "total number of [itex]|\psi_n\rangle[/itex]",

but this certainly doesn't make sense when the Hilbert space is infinite dimensional, and I don't think there's a way to make sense of it in the finite-dimensional case either.
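To make the first reading above concrete, here is a small simulation sketch (the state and basis size are made up for illustration): sample outcomes with the Born weights and compare the relative frequency of each result against [itex]|\langle\phi_m|\psi\rangle|^2[/itex].

[code]
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state |psi> expanded in the eigenbasis {|phi_m>} of the
# measured observable; the Born weights are |<phi_m|psi>|^2.
psi = np.array([0.5, 0.5, np.sqrt(0.5)])  # normalized: 0.25 + 0.25 + 0.5 = 1
weights = np.abs(psi) ** 2

# Simulate N independent measurements on identically prepared systems.
N = 100_000
outcomes = rng.choice(len(psi), size=N, p=weights)

for m in range(len(psi)):
    freq = np.mean(outcomes == m)  # "number of results O_m" / "total measurements"
    print(m, freq, weights[m])     # relative frequency is close to the Born weight
[/code]

Of course this only checks that the first reading is self-consistent; the Born rule is assumed in order to generate the samples in the first place.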

Geremia said:
Why should "yes" be the answer to any of these three questions?
It doesn't have to be. I just wanted to know what you meant.
 
  • #8
Maybe it was I who misunderstood, but I didn't interpret his question as having much to do with these issues or typos.

I thought his question was, and it was what I responded to, simply how to interpret something that evaluates as a superposition, [tex]P(\phi|\psi_1 \vee \psi_2)[/tex], with a frequentist probability; i.e., look at the expression, compare it with how classical logic would handle the same thing, and try to interpret it in terms of "frequency counts" of classical (non-quantum) logic.

If you see this as a manipulation of conditional probability, I thought the OP's question was simply a confusion over how to understand the superposition principle? But it's possible that I totally missed the point.

/Fredrik
 
  • #9
Fra said:
I thought his question was, and it was what I responded to, simply how to interpret something that evaluates as a superposition, [tex]P(\phi|\psi_1 \vee \psi_2)[/tex], with a frequentist probability; i.e., look at the expression, compare it with how classical logic would handle the same thing, and try to interpret it in terms of "frequency counts" of classical (non-quantum) logic.
This is basically what I was trying to ask: how does [itex]|\langle\phi|\Psi\rangle|^2[/itex] relate to frequentist probability, as you call it.
 
  • #10
Geremia said:
This is basically what I was trying to ask: how does [itex]|\langle\phi|\Psi\rangle|^2[/itex] relate to frequentist probability, as you call it.

Yes, that's exactly what I thought you asked: you are trying to "understand" the superposition construction (quantum statistics), term by term, in terms of a "probability count" (or classical statistics)?

It's what I tried to say in the first two posts. In the regular introduction to QM, there is no physical understanding of this. It's basically part of the postulated structure of QM, in one form or another.

This connection is, I think, still an open issue. I have my own private understanding of it, but various attempts at this have also been published.

Past related threads discussing this are:
"Quantum gravity and the 'measurement problem'"
- https://www.physicsforums.com/showthread.php?t=360080&page=2

"Wetterich's derivation of QM from classical"
- https://www.physicsforums.com/showthread.php?t=322346

"MWI bugging me"
- https://www.physicsforums.com/showthread.php?t=198571&page=10

I think, depending on where you start, this is also related to understanding the Born rule.

My personally preferred view is that the "counting" that can restore a "count picture" counts not primary events, but rather distinguishable "information quanta" in a retained and generally lossily compressed time history of events. Superposition and non-commutativity are then a result of counting the information states of a memory record, rather than the actual past time history.

Then, depending on the compression and structure of this memory record, indeterminism and non-commutative measurements result. This is how I picture x and p: they represent different counting domains, and when one tries something like [tex]P(x \wedge p)[/tex], we have two information streams that compete for a fixed "information capacity"; this is why confidence in one stream reduces the confidence in the other one.

I find this remotely related to Wetterich's idea.

I personally am not aware of a paper that describes this in a way I think is good. Eventually I hope to get around to writing a paper on this, but I don't want to publish anything that's isolated from the big picture, since I am quite convinced that it would be misunderstood.

/Fredrik
 
  • #11
Geremia said:
This is basically what I was trying to ask: how does [itex]|\langle\phi|\Psi\rangle|^2[/itex] relate to frequentist probability, as you call it.
The way I see it, the definition of science requires that we test each prediction (assignment of a probability to each possible result of an experiment) by performing a large number of equivalent experiments and comparing the predictions to the relative frequencies of the different possible results. If anyone claims otherwise, I'd like to know what definition of science they're using.

Note that the probabilities that appear in the theory are just numbers between 0 and 1, which the theory associates with results of experiments, and that science requires us to compare those numbers to relative frequencies in large but finite ensembles. There's no need, and it's actually quite irrational, to assume the existence of a limit that the relative frequencies tend to when the number of experiments go to infinity.

A longer version of this appears in this thread, starting at post #20. (But I think I expressed my ideas more clearly in some of the posts later in the thread).
 
  • #12
Geremia said:
How does [itex]|\langle\phi|\Psi\rangle|^2[/itex] relate to frequentist probability [...]
If you're up for a bit more math, you could try this paper:

J. B. Hartle, "Quantum Mechanics of Individual Systems", Am. J. Phys. 36, No. 5, 704 (1968).

(If you don't have easy access to this journal, look it up on Google Scholar and you'll find a couple of places from which to download it as a PDF.)
 
  • #13
strangerep said:
J. B. Hartle, "Quantum Mechanics of Individual Systems", Am. J. Phys. 36, No. 5, 704 (1968)
I just read that paper again, and I have to say, I still don't get it (or maybe I do and that's the problem). Hartle defines an operator [itex]f_\infty{}^k[/itex] and proves that an infinite tensor product [itex]|s\rangle\otimes|s\rangle\otimes\cdots[/itex] is an eigenvector of that operator with eigenvalue [itex]|\langle k|s\rangle|^2[/itex]. What I don't get is why that should be interpreted as a derivation of the probability rule.

Let's look at the frequency operator for finite ensembles, just to avoid the technical difficulties. It can be defined by specifying its action on a tensor product of eigenstates of the operator we're interested in:

[tex]f_N{}^k|i_1,1\rangle\otimes\cdots\otimes|i_N,N\rangle=\bigg(\frac 1 N\sum_j\delta_{ki_j}\bigg)|i_1,1\rangle\otimes\cdots\otimes|i_N,N\rangle[/tex]

The eigenvalue is the fraction of the systems that are in state |k>. So [itex]f_N{}^k[/itex] has an obvious interpretation as a frequency operator when it acts on eigenstates of the relevant observable. But the interpretation is not so obvious when it acts on an arbitrary tensor product of states of identically prepared systems.

[tex]f_N{}^k|s,1\rangle\otimes\cdots\otimes|s,N\rangle=f_N{}^k\sum_{i_1\cdots i_N}|i_1,1\rangle\otimes\cdots\otimes|i_N,N\rangle\langle i_N,N|\otimes\cdots\otimes\langle i_1,1|\Big(|s,1\rangle\otimes\cdots\otimes|s,N\rangle\Big)[/tex]

If we use my definition of [itex]f_N{}^k[/itex] here, we see that it's equivalent to Hartle's definition:

[tex]f_N{}^k=\sum_{i_1\cdots i_N}|i_1,1\rangle\otimes\cdots\otimes|i_N,N\rangle\bigg(\frac 1 N\sum_j\delta_{ki_j}\bigg)\langle i_N,N|\otimes\cdots\otimes\langle i_1,1|[/tex]

The expectation value of this operator is (in a very oversimplified notation which I hope will be obvious enough):

[tex]\langle s|f_N{}^k|s\rangle=\sum_I\langle s|f_N{}^k|I\rangle\langle I|s\rangle=\sum_I\bigg(\frac 1 N\sum_j\delta_{ki_j}\bigg)|\langle I|s\rangle|^2[/tex]

These are the probabilities that N measurements will yield a specific sequence of results, multiplied by the frequency of k in each sequence, and added up. So the expectation value is the expected frequency of k in a sequence of N measurements, i.e. it's close to what we would get if we performed the entire sequence of measurements M times, recorded the frequency of k in each sequence, and computed the average of those frequencies, assuming that M is a large enough number.

Does this justify the interpretation of [itex]f_N{}^k[/itex] as a frequency operator when it's acting on a tensor product of identical but arbitrary states? I would say that it does, but note that we used the probability rule to interpret [itex]|\langle I|s\rangle|^2[/itex] as a probability. We seem to need the result we're trying to prove in order to justify interpreting the result we get as the result we're trying to prove!
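For what it's worth, the finite-[itex]N[/itex] identity can be checked numerically. Here is a sketch (qubits, [itex]N=3[/itex], all values chosen for illustration) that builds [itex]f_N{}^k[/itex] as a diagonal matrix in the product eigenbasis and verifies that [itex]\langle s|f_N{}^k|s\rangle=|\langle k|s\rangle|^2[/itex] for a product state:

[code]
import numpy as np
from itertools import product

d, N, k = 2, 3, 1  # qubit basis {|0>, |1>}, N copies, count occurrences of |1>

# f_N^k is diagonal in the product eigenbasis: its eigenvalue on
# |i_1> x ... x |i_N> is the fraction of factors with i_j = k.
diag = [sum(i == k for i in idx) / N for idx in product(range(d), repeat=N)]
f = np.diag(diag)

# Product state |s> x |s> x |s> for an arbitrary single-system state |s>.
s = np.array([np.sqrt(0.3), np.sqrt(0.7)])  # so |<1|s>|^2 = 0.7
S = s
for _ in range(N - 1):
    S = np.kron(S, s)

expval = S @ f @ S  # <s| f_N^k |s>; amplitudes are real, so no conjugation needed
print(expval, abs(s[k]) ** 2)  # both 0.7: expected frequency equals the Born weight
[/code]

(Which, as argued above, demonstrates the identity but not a non-circular derivation of it.)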
 
  • #14
Fredrik said:
I just read that paper again, and I have to say, I still don't get it (or maybe I do and that's the problem).
...
Does this justify the interpretation of [itex]f_N{}^k[/itex] as a frequency operator when it's acting on a tensor product of identical but arbitrary states? I would say that it does, but note that we used the probability rule to interpret [itex]|\langle I|s\rangle|^2[/itex] as a probability. We seem need the result we're trying to prove to justify the interpretation of the result we get as the result we're trying to prove!
I think you are right. Indeed, it is often claimed in the literature that all attempts to explain the Born rule in MWI are circular - they "explain" it by tacitly assuming it in some place. A detailed discussion of this is given in the last chapter (written by N. Graham) of the book
The Many-Worlds Interpretation of Quantum Mechanics, edited by B. S. DeWitt and N. Graham (Princeton University Press, 1973). In this chapter Graham also attempts to avoid this problem. His attempt does not look convincing to me, but I cannot say that I really understood all this, so I would like to see your opinion on the Graham arguments. If you don't have this book, see also Private Messages.
 
  • #15
Demystifier said:
I think you are right. Indeed, it is often claimed in the literature that all attempts to explain the Born rule in MWI are circular - they "explain" it by tacitly assuming it in some place.
Yes, but I think Hartle had already done that by assuming that the Hilbert space of a system that consists of many subsystems is the tensor product of the Hilbert spaces of the subsystems. (That can also be justified by appealing to the Born rule). So it seems that in this case, the implicit use of the Born rule isn't enough. We also need to use it explicitly. :smile:
 
  • #16
Fredrik said:
... by assuming that the Hilbert space of a system that consists of many subsystems is the tensor product of the Hilbert spaces of the subsystems. (That can also be justified by appealing to the Born rule).
The question is: Is there any OTHER justification of that assumption?
 
  • #17
Meopemuk recently posted a couple of references to articles that might have an answer. I haven't read them myself yet. Here's the link to the thread in case you want to check them out.
 
  • #19
Fredrik said:
The way I see it, the definition of science requires that we test each prediction (assignment of a probability to each possible result of an experiment) by performing a large number of equivalent experiments and comparing the predictions to the relative frequencies of the different possible results. If anyone claims otherwise, I'd like to know what definition of science they're using.

Note that the probabilities that appear in the theory are just numbers between 0 and 1, which the theory associates with results of experiments, and that science requires us to compare those numbers to relative frequencies in large but finite ensembles. There's no need, and it's actually quite irrational, to assume the existence of a limit that the relative frequencies tend to when the number of experiments go to infinity.

How can you, from a conceptual-consistency point of view, on one hand make use of the existence of limits in a mathematical sense, and at the same time say that whether these limits exist is irrelevant?

I can buy this if we just see QM as a "theory" that we are testing, without caring about how it's constructed, and mainly focus on falsification or corroboration. But then we ignore the most important step, which I think is the revision of a falsified hypothesis into a new one. Popper made the same mistake in his analysis.

In my view, the subjective conditional probability is defined as the inverse of the number of complexions in the observer's memory microstructure consistent with the condition. If the information capacity is bounded, I prefer to think of this measure as not exhausting the entire real segment [0,1]; it would rather be a subset of it.

The question is then whether the microstructure can naturally be given a Hilbert-like structure. It is then clear that since, in principle, this microstructure can, informationally, be decomposed in many different ways into substructures, each substructure representing an observable, it is not possible, due to the information-capacity constraint, to encode all the different substructures at once. Non-commutative substructures result. And the choice of structures is a result of the interaction history, as optimum data-compression representations.

But in my view at least, the key to this conclusion is the limited information capacity. And without a careful assessment of the information-theoretic basis, such as the probability and limits or non-limits, this key is lost.

One can afterwards without problems scale up the complexity to the effective continuum limit, but then we have defined the physical limiting procedure. If we start at the wrong end, the limiting procedure needed to make sense of computations becomes, as usual, ambiguous, because we have no clue what the physical counting measure is.

I have a feeling - to jump a little - that what you call "irrationality" is the quest for improving hypothesis generation. This is exactly what Karl Popper also called irrational and dismissed as psychology of the human brain. I think we can perform better than that.

/Fredrik
 
  • #20
Fra said:
How can you, from a conceptual-consistency point of view, on one hand make use of the existence of limits in a mathematical sense, and at the same time say that whether these limits exist is irrelevant?
I have no idea why you think I've done that.

It doesn't make sense to assume that the average value after N measurements goes to some value as N→∞, for many different reasons, including that the universe will not support intelligent life (or machines) long enough for infinitely many experiments to be performed.

People who feel that we need that limit to exist must have failed to understand that only mathematical concepts have exact definitions, and that theories of science are already (and will always be) associating mathematical concepts that have exact definitions with real-world concepts that don't have an exact definition.

Fra said:
I can buy this if we just see QM as a "theory" that we are testing, without caring about how it's constructed, and mainly focus on falsification or corroboration. But then we ignore the most important step, which I think is the revision of a falsified hypothesis into a new one. Popper made the same mistake in his analysis.
QM is a theory that we're testing, and what does "revision of a falsified hypothesis to a new one" have to do with anything I've been saying? It's a completely different subject, so how could it be a mistake not to mention it?

Fra said:
In my view, the subjective conditional probability is defined as the inverse of the number of complexions in the observer's memory microstructure consistent with the condition.
Mathematical concepts should be defined mathematically, not in terms of real world concepts that don't have an exact definition.

QM seems to work fine even without observers. For example, the nuclear reactions in a star in a distant galaxy seem to work just fine without anyone thinking about the probabilities of those particular interactions.

Fra said:
The question is then whether the microstructure can naturally be given a Hilbert-like structure.
There are already at least three fully developed ways to get to quantum theory by associating a mathematical structure with something in the real world. (See e.g. the first paragraph in this post). I don't see a need for another one, especially not one that starts with an attempt to define probability in terms of psychology.

Fra said:
I have a feeling - to jump a little - that what you call "irrationality" is the quest for improving hypothesis generation. This is exactly what Karl Popper also called irrational and dismissed as psychology of the human brain. I think we can perform better than that.
That's not a little jump. This stuff has nothing to do with anything I've said.
 
  • #21
Fra said:
Born rule
This is a good article in Science about the historical development of the Born rule: http://www.sciencemag.org/cgi/content/abstract/218/4578/1193. It has a complex history, but basically probability as [itex]|\psi|^2[/itex] originated from a footnote in a paper by Born on collision theory and independently by Schrödinger in his wave description.
 
  • #22
Fredrik said:
It doesn't make sense to assume that the average value after N measurements goes to some value as N→∞, for many different reasons, including that the universe will not support intelligent life (or machines) long enough for infinitely many experiments to be performed.
I fully agree.
Fredrik said:
People who feel that we need that limit to exist must have failed to understand that only mathematical concepts have exact definitions, and that theories of science are already (and will always be) associating mathematical concepts that have exact definitions with real-world concepts that don't have an exact definition.
Fully agree too, but my intended point was the reverse:

The fact that there is no physical correspondence to the limit suggests that there is a redundancy / a very poor isomorphism between the model and reality.

My point is not just to nitpick the obvious - that the model-reality connection is always fuzzy or incomplete. I raise this because I see a specific way in which this insight can IMPROVE our mathematical model by taking into account known and obvious flaws. This is how models have been improved in the past, so I'm not suggesting anything mad.

I'm not sure how rejecting these "flaws" - which we seem to loosely agree on - as irrational fits into the process of trying to improve the models. How do we improve a model, if not by improving the details where we see flaws?

If it wasn't for improvements, I don't know what all these repeating threads are about.

To discuss "interpretations" without ambitions of improvement, and extensions (unification or QG etc), is what is irrational to me. If we stick to the well tested domains of QM using it for engineering, I have no problem to adapt to a shut up and calculate view. But I think the ambitions for anyone in this discussion is higher.
Fredrik said:
I have no idea why you think I've done that.
From the above arguments, it seems you are happy to settle for the mathematical formalism, which implicitly uses limits, yet acknowledge that it isn't quite right, but then reject this mismatch as due to irrationality?

Fredrik said:
QM is a theory that we're testing, and what does "revision of a falsified hypothesis to a new one" have to do with anything I've been saying? It's a completely different subject, so how could it be a mistake not to mention it?

Mathematical concepts should be defined mathematically, not in terms of real world concepts that don't have an exact definition.
There are open problems. Some think we need a new framework, some don't - I do. So I think the quest for how to improve/revise the current framework into something that more easily allows solving the open problems is what we are discussing here. It's at least what I try to discuss.

One sub-point in such a discussion is the physical basis of notions like "state vector", "information", and "probability".

As I see it, we are not arguing about whether or not QM is right or wrong in its tested domains; the question is whether QM needs a reformulation in order to extend to the open problems.

I think we can agree that even though the distinction between model and reality is clear, our ambition is to find as close a fit as possible. Whenever a mismatch is seen, it's rational to attempt to fix it.
Fredrik said:
QM seems to work fine even without observers. For example, the nuclear reactions in a star in a distant galaxy seem to work just fine without anyone thinking about the probabilities of those particular interactions.
As inferred by an Earth-based observer, yes :)

Each observer can, without contradiction, infer so-called observer-invariant descriptions of subsystems in its own environment. But this description itself is not with certainty observer-independent - it could be, but that's not an inference that can be established by an inside observer. It's a consistent possibility, but not a decidable proposition - the main problem is that there is more than one possible, but not decidable, such proposition.

Those who try to do that are structural realists - which I'm not.
Fredrik said:

There are already at least three fully developed ways to get to quantum theory by associating a mathematical structure with something in the real world. (See e.g. the first paragraph in this post). I don't see a need for another one, especially not one that starts with an attempt to define probability in terms of psychology.

Psychology has nothing to do with this. As I've said a number of times, a sensible view that acknowledges the observer does not equate the observer with a human.

An observer is any subsystem interacting with its own environment. An observer is just an abstraction. The only traits needed are information capacity and the ability to sense/react and respond to the environment through a communication channel.

/Fredrik
 
  • #23
I am surprised no one has mentioned Gleason's theorem (http://www.iumj.indiana.edu/IUMJ/FULLTEXT/1957/6/56050). That is a good article, too.
 
  • #24
There is also this:

Ariel Caticha's
Consistency, Amplitudes and Probabilities in Quantum Theory

"Quantum theory is formulated as the only consistent way to manipulate
probability amplitudes. The crucial ingredient is a consistency constraint:
if there are two different ways to compute an amplitude the two
answers must agree. This constraint is expressed in the form of functional
equations the solution of which leads to the usual sum and product rules
for amplitudes. A consequence is that the Schr¨odinger equation must be
linear: non-linear variants of quantum mechanics are inconsistent. The
physical interpretation of the theory is given in terms of a single natural
rule. This rule, which does not itself involve probabilities, is used to
obtain a proof of Born’s statistical postulate. Thus, consistency leads to
indeterminism."
-- http://arxiv.org/PS_cache/quant-ph/pdf/9804/9804012v2.pdf

But to speak from my own angle, this does not quite answer my questions, and neither does Gleason's theorem.

There are premises/assumptions in both of those attempts that aren't obvious enough to me.

What if we take a step back and just ponder what properties the set of possible information states has? For example, why does this space define a fixed Hilbert space?

It is a matter of taste where you start, and how much and which baggage you accept to start with.

It seems correct that if you accept a lot of the baggage of the Hilbert structure, you can prove certain things by a consistency requirement. But the value of this "proof" is not worth more than your confidence in the premises.

The only way I can, in my analysis of this, physically identify a linear space of states is in a differential sense, where an observer reflects over possible futures. But I see no good reason why this translates to non-infinitesimal time evolutions. This is why I expect something like evolving Hilbert spaces, where the Born rule can probably be seen to make sense in some kind of tangent space of "expected variations". But in this view, a radical view of probability is needed, since by construction it's not really possible, in a fundamental sense, to actually REPEAT the same experiment finitely or indefinitely, since technically, as time evolves, the Hilbert space is another one. And some kind of transformation to the new Hilbert space is needed.

The case of unitary time evolution in a fixed Hilbert space can, as far as I see, be nothing but a special case.

These things - that I think QM is a special case - are the real motivators for me to understand the foundations of QM. The unitary structure and the Hilbert-space structure are things I want to understand as well; they are not plausible starting premises. On the contrary, they seem not to qualify for a "minimally speculative" starting point, since they probably just apply to special cases.

/Fredrik
 
  • #25
Fra said:
But to speak for my own angle, this does not quite respond to my questions, neither does Gleason's theorem.

There are premises/assumptions in both of those attempts that aren't obvious enough to me.
But I think it shows how probability is so ingrained in the quantum formalism.

Also, http://dx.doi.org/10.1016/0003-4916(89)90141-3 is another good article.
 
  • #26
Geremia said:
But I think it shows how probability is so ingrained in the quantum formalism.

Also, http://dx.doi.org/10.1016/0003-4916(89)90141-3 is another good article.

I can agree with that :) To keep all the other QM structure and just change the Born rule would hardly make much sense, to me at least.

I just wanted to suggest that the Born rule isn't the only thing that is questionable.

Since my opinion is different here, I sometimes don't understand the effort spent on trying to "prove" internal consistencies in an overall framework that is questionable anyway (in the context of the open problems, not normal atomic/particle physics, say). Maybe there exists a more constructive focus. The problems are most obvious where the idea of an isolated subsystem totally fails - such as cosmological models, and observers probing into an unknown environment rather than into a small subsystem.

/Fredrik
 
  • #27
Demystifier said:
I think you are right. Indeed, it is often claimed in the literature that all attempts to explain the Born rule in MWI are circular - they "explain" it by tacitly assuming it in some place. A detailed discussion of this is given in the last chapter (written by N. Graham) of the book
The Many-Worlds Interpretation of Quantum Mechanics, edited by B. S. DeWitt and N. Graham (Princeton University Press, 1973). In this chapter Graham also attempts to avoid this problem. His attempt does not look convincing to me, but I cannot say that I really understood all this, so I would like to see your opinion on the Graham arguments. If you don't have this book, see also Private Messages.
I have only had a quick look at it so far, so I don't have a clear opinion yet, but Adrian Kent does:

Against many-worlds interpretations

He ends the section by saying that what Graham does in that article is "not relevant to physics".
 
  • #28
Superposition Principle

Fra said:
superposition principle
Yes, this is essentially what I want to know: What justifies the superposition principle?
 
  • #29


Geremia said:
Yes, this is essentially what I want to know: What justifies the superposition principle?

I'm trying to find my own way of understanding this, because I have yet to find something published that satisfies me.

But the path I envision is that the linear space of "state vectors" (Hilbert space), the superposition principle, as well as the Born rule, will follow from an underlying information-theoretic model, where the linear Hilbert space is associated with a kind of tangent space to a system of statistical manifolds (think of a direct product of statistical manifolds), but where the manifolds are related and all constrained by total complexity (one can grow at the expense of another, preserving total complexity).

But in this view (which is not yet worked out), the QM formalism of Hilbert space is only valid in a differential sense, since large evolutions deform the linear structures.

As for the why, I expect the quantum logic (implicit in the Hilbert space and superposition) to be emergent as the most stable representation of information in a differential sense.

But I think that to understand this, we need to find the general case explaining how and why the simple linear structure of QM is a special case.

/Fredrik
 

What is fundamental quantum probability?

Fundamental quantum probability is a mathematical framework that describes the behavior and interactions of quantum particles at the most fundamental level. It is based on the principles of quantum mechanics and provides a way to calculate the probabilities of different outcomes of a quantum system.

How does quantum probability differ from classical probability?

Quantum probability differs from classical probability in that it takes into account the unique properties of quantum particles, such as superposition and entanglement, which do not exist in classical systems. This leads to different rules and equations for calculating probabilities in quantum systems.
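A minimal illustration of that difference (the amplitudes below are made up): for two exclusive classical alternatives the probabilities add, while quantum amplitudes add before being squared, which produces an interference cross-term.

[code]
# Two hypothetical paths with probability amplitudes a1 and a2
# (think of the two slits in a double-slit experiment).
a1, a2 = 0.6, -0.6

p_classical = abs(a1) ** 2 + abs(a2) ** 2  # 0.72: probabilities simply add
p_quantum = abs(a1 + a2) ** 2              # 0.00: the amplitudes cancel

# The difference is the interference cross-term 2*Re(conj(a1)*a2) = -0.72 here.
print(p_classical, p_quantum)
[/code]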

What is the role of uncertainty in quantum probability?

Uncertainty is a fundamental aspect of quantum probability. In quantum mechanics, it is impossible to know both the position and momentum of a particle with complete certainty. Instead, quantum probability allows us to calculate the likelihood of a particle having a particular position or momentum.

How is quantum probability used in practical applications?

Quantum probability has many practical applications, particularly in the field of quantum computing. It is used to model and predict the behavior of quantum systems, which is essential for designing and operating quantum technologies. It also has applications in cryptography, quantum communication, and quantum sensing.

What are some current challenges in understanding and using quantum probability?

One of the main challenges in understanding and using quantum probability is the complex mathematics involved. It can be difficult to interpret and apply quantum probability without a strong background in mathematics and physics. Additionally, the nature of quantum systems and the uncertainty principle can make it challenging to accurately measure and predict outcomes, leading to potential errors and limitations in practical applications.
