Probability expectations -> measurement outcomes

In summary, probabilities do influence measurements, but they don't determine the outcome of any single measurement.
  • #1
Probability expectations --> measurement outcomes

Is there any physical significance to the question "How do probability expectations influence measurement outcomes?"
 
  • #2
I think that there's only a mathematical significance to the question, which relates to what probability really is.

If for a given measurement our knowledge is probabilistic, we can't predict what will occur, but we can predict the proportions of the different results if we repeat the measurement many times.
 
  • #3
Been studying Bayesian statistics? If you only make a finite number of measurements, it is inevitable that your prior will colour your conclusion.
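
Roughly what I mean, as a toy Python sketch (the coin, the prior, and the numbers here are just my own illustrative choices): with a strongly opinionated Beta prior on a coin's bias, a handful of flips leaves the posterior stuck near the prior, and only a large number of flips washes it out.

[code]
import numpy as np

# Beta-Binomial model: prior Beta(a, b) on a coin's bias theta.
# After k heads in n flips, the posterior is Beta(a + k, b + n - k),
# whose mean is (a + k) / (a + b + n).

def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

true_theta = 0.5
rng = np.random.default_rng(0)

# An opinionated prior that believes the coin is biased toward heads.
a, b = 20, 5   # prior mean 0.8

for n in (5, 50, 5000):
    k = rng.binomial(n, true_theta)
    print(f"n={n:5d}  observed freq={k/n:.3f}  posterior mean={posterior_mean(a, b, k, n):.3f}")

# With few flips the posterior mean stays near the prior mean (0.8);
# only with many flips does it converge to the true bias (0.5).
[/code]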
 
  • #4
Is there any physical significance to the question "How do probability expectations influence measurement outcomes?"

Are you asking a quantum mechanical question, or a statistical mechanical one? I'll assume the former.

It's important to keep in mind that the wave function is not something that can be deduced empirically. For any given system, the wave function can only be determined by solving the Schrodinger Equation.

That said, probabilities certainly do influence measurements, and xnick alluded to an important consequence of performing multiple experiments on identically prepared systems. For example, if a particle encounters a potential barrier whose height is less than the particle's energy, then the particle has a certain probability of being reflected and a certain probability of being transmitted through the barrier. These are called the reflection and transmission coefficients, respectively. For any single particle you can't predict which will happen, but if you send a large number of particles at the barrier, then these coefficients determine the proportion of particles that are reflected or transmitted.
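Here's a rough numerical sketch of that in Python (using the simple step-potential formulas with E above the step, rather than a finite-width barrier, and units where hbar = m = 1; the particular energies are just illustrative choices):

[code]
import numpy as np

hbar = m = 1.0          # natural units
E, V0 = 2.0, 1.0        # particle energy above the step height

k1 = np.sqrt(2 * m * E) / hbar
k2 = np.sqrt(2 * m * (E - V0)) / hbar

R = ((k1 - k2) / (k1 + k2)) ** 2   # reflection coefficient
T = 4 * k1 * k2 / (k1 + k2) ** 2   # transmission coefficient (R + T = 1)

rng = np.random.default_rng(42)
N = 100_000
reflected = rng.random(N) < R      # each particle independently reflects with probability R

print(f"R = {R:.4f}, T = {T:.4f}")
print(f"fraction reflected in {N} trials: {reflected.mean():.4f}")

# A single particle either reflects or transmits; only the ensemble
# reproduces the coefficients as proportions.
[/code]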

Or if a particle's position has a certain expectation value, then position measurements on a large number of identically prepared particles will, on average, give this expectation value (note that you can't just measure the same particle repeatedly, since the first measurement collapses, and thereby alters, the wave function).
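
And a toy sketch of that averaging (assuming a Gaussian wave packet, so |psi(x)|^2 is a normal distribution; the centre, width, and sample sizes are arbitrary choices of mine):

[code]
import numpy as np

# Gaussian wave packet centred at x0 with width sigma:
# |psi(x)|^2 is a normal density, so <x> = x0.
x0, sigma = 1.5, 0.3
rng = np.random.default_rng(1)

for n in (1, 10, 10_000):
    measurements = rng.normal(x0, sigma, size=n)  # one measurement per fresh copy of the system
    print(f"n={n:6d}  mean position = {measurements.mean():.4f}")

# The single measurement is essentially random; the large-n average sits at <x> = 1.5.
[/code]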

So yes, probability expectations certainly do influence measurement; if they didn't, they wouldn't be very useful! But you can't empirically deduce a system's wave function.
 
  • #5
arunma:
For example, if a particle encounters a potential barrier in which the potential is less than the particle's energy, then the particle has a certain probability of being reflected, and a certain probability of being transmitted through the barrier. These are called the reflection and transmission coefficients respectively. But if you send a large number of particles through the potential barrier, then these coefficients will determine the proportion of particles that are reflected/transmitted.
Is there a duality here, first alluded to by xnick, inherent between "probabilities" and "proportions" - i.e., the correspondence principle?

Can someone give me a simple definition of Bayesian statistics?
If you only make a finite number of measurements, it is inevitable that your prior will colour your conclusion.

I had suspected such, cesiumfrog.
 
  • #6
arunma: Is there a duality here, first alluded to by xnick, inherent between "probabilities" and "proportions" - i.e., the correspondence principle?

Well, no, this isn't quite the same thing as the correspondence principle. The correspondence principle states that in the classical limit, quantum systems reduce to classical systems. For example, the kinetic term in the Schrodinger Equation carries a [tex]\dfrac{\hbar^{2}}{2m}[/tex] coefficient, so if you apply the equation to a bowling ball, the large mass makes that coefficient vanishingly small. But sending a large number of small particles through a potential barrier is still a quantum mechanical problem. In fact, it's an excellent example of how quantum mechanical phenomena can have observable effects.
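
To put rough numbers on that classical limit, here's a quick comparison of de Broglie wavelengths, lambda = h/(mv) (the speeds are arbitrary choices of mine, just for illustration):

[code]
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / (m v)"""
    return h / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.11e-31, 1.0e6)   # electron at ~10^6 m/s
bowling_ball = de_broglie_wavelength(7.0, 5.0)      # 7 kg ball at 5 m/s

print(f"electron:     {electron:.2e} m  (comparable to atomic length scales)")
print(f"bowling ball: {bowling_ball:.2e} m  (absurdly small; quantum effects unobservable)")
[/code]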

Can someone give me a simple definition of Bayesian statistics?

Bayesian statistics is a branch of statistics that deals with conditional probabilities. A conditional probability is the recalculated probability of one event occurring, given that another event has already occurred. Unfortunately my probability and statistics course as an undergrad was all probability and no statistics, so I can't tell you much more than that.
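
About the best I can do is show the basic rule in action. A short Python sketch of Bayes' theorem (the scenario and the numbers are made up purely for illustration):

[code]
# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
# Toy example: a test for a rare condition.

p_disease = 0.01             # prior: 1% of the population has the condition
p_pos_given_disease = 0.95   # test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

# Total probability of testing positive (law of total probability).
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Conditional (posterior) probability of having the condition given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")   # ~0.161, despite the 95% sensitivity
[/code]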
 
  • #7
I stress that we can't predict the outcome of ONE measurement.

Examples:
1) A lottery game has 1000 numbers. I buy 500 numbers, so I have a probability of winning p = 0.5. Some guy buys only one number, so he has probability p = 0.001, and I laugh at him saying "you're so screwed"... Yet he wins the lottery, and then he laughs at me.

2) Pick any real number between 0 and 1. The probability of finding your number within an interval [a,b] with 0<= a <= b <= 1 would equal the length of the interval p=b-a. We should conclude that the probability of picking any single point is 0. But a number is picked anyway.

(Observe that I have implicitly assumed equi-probability in both examples)
------------------------
The moral is that the influence of probabilities on measurements can only be seen by repeating the measurement many times (ideally an infinite number of times).
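
A quick Python sketch of that point, reusing example 2 above (the interval endpoints and sample sizes are arbitrary choices of mine):

[code]
import numpy as np

# Example 2 revisited: draws uniform on [0,1]; the chance of landing in [a,b] is b - a.
a, b = 0.25, 0.60
rng = np.random.default_rng(7)

for n in (10, 1_000, 1_000_000):
    draws = rng.random(n)
    frequency = np.mean((draws >= a) & (draws <= b))
    print(f"n={n:8d}  fraction in [a,b] = {frequency:.4f}  (b - a = {b - a:.4f})")

# Each single draw lands somewhere, even though every individual point has probability zero;
# the probability b - a only shows up as a long-run proportion.
[/code]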

Bayesian statistics doesn't change what I've stated; IMO it's just a way to juggle different relations among conditional probabilities, reflecting different ways of regrouping the "events" or outcomes of the "experiment".
 
  • #8
The denominator in calculating the sample standard deviation is (n-1), where n is the number of measurements. Does that choice of denominator result from a single measurement not being statistically meaningful (definable)?
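
For concreteness, here's the kind of calculation I mean, as a small Python sketch of the (n-1) denominator (Bessel's correction); the data are simulated and the numbers are arbitrary:

[code]
import numpy as np

rng = np.random.default_rng(3)
true_sigma = 2.0

data = rng.normal(0.0, true_sigma, size=10)

print("divide by n   (ddof=0):", np.std(data, ddof=0))   # tends to underestimate the spread
print("divide by n-1 (ddof=1):", np.std(data, ddof=1))   # Bessel's correction

single = rng.normal(0.0, true_sigma, size=1)
print("one measurement, ddof=1:", np.std(single, ddof=1))  # 0/0 -> nan: one point gives no spread estimate
[/code]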
 
  • #9
Pick any real number between 0 and 1. The probability of finding your number within an interval [a,b] with 0<= a <= b <= 1 would equal the length of the interval p=b-a. We should conclude that the probability of picking any single point is 0. But a number is picked anyway.
I think that reasoning is false.
A point does not constitute an interval and has no length.
 
  • #10
I think that reasoning is false.
A point does not constitute an interval and has no length.

Confusing as it is, he does happen to be right. When a sample space is continuous rather than discrete (as with the reals, or any Cartesian product of the reals), the probability of any particular outcome is zero. That's why we talk about probability mass functions when dealing with discrete sample spaces, and probability density functions when dealing with continuous sample spaces such as the interval [0,1]. The familiar example to a physicist is the wave function. Note that we never discuss the probability that a particle will be observed at a particular point, but rather the probability that a particle will be observed within some region. Thus, the probability of picking any particular point in [0,1] is 0. We can only talk about the probability of the number falling within some subset of [0,1].
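
To make the density-versus-probability distinction concrete, here's a short Python sketch (the Gaussian below just stands in for some |psi(x)|^2; the width and the region are arbitrary choices of mine):

[code]
import numpy as np
from scipy.integrate import quad

# A normalized Gaussian probability density (think |psi(x)|^2 for a Gaussian wave packet).
sigma = 1.0

def density(x):
    return np.exp(-x**2 / sigma**2) / (sigma * np.sqrt(np.pi))

# The density at a point is a finite number, but the probability of any exact point is zero;
# only regions carry probability, obtained by integrating the density.
p_region, _ = quad(density, -0.5, 0.5)
print(f"P(-0.5 <= x <= 0.5) = {p_region:.4f}")
print(f"density at x = 0:     {density(0.0):.4f}  (not a probability)")
[/code]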

Well, I hope that my rambling was at least marginally helpful. But now we're starting to discuss math instead of physics. And math is the devil, so I'll stop here.
 
