Loren Booda
Is there a significance to the probability value -1 or i?
Loren Booda said:I'm sorry that I cannot recall where I had seen this concept, but that it may have arisen from quantum mechanics.
Stephen Tashi said:A difficulty of making sense of i or -1 as a probability might come in dealing with P(A|B) = P(A and B)/ P(B) when P(A|B) = 1. But perhaps clever people have figured out how to make the ratio come out correctly with complex valued probabilities.
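The ratio formula mentioned above can be checked concretely by brute-force enumeration. Here is a small sketch over two fair dice; the particular events are illustrative, not from the thread:

```python
# Conditional probability by direct enumeration: P(A|B) = P(A and B) / P(B).
# Illustrative events over two fair dice (not taken from the thread).
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Exact probability of an event over the 36 equally likely outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] + o[1] == 8   # the dice sum to 8
B = lambda o: o[0] == 3          # the first die shows 3

p_a_given_b = prob(lambda o: A(o) and B(o)) / prob(B)
print(p_a_given_b)  # 1/6
```

With real-valued probabilities the ratio is well defined whenever P(B) > 0; the open question in the thread is what such a ratio should mean if the numerator and denominator were allowed to be complex.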
The unfortunate property of probability theory is that it has such a tenuous connection with things that are definite and real. People very much desire to connect probabilities with the actual observed frequencies of events. Theorems of statistics and probability that say anything about actual frequencies only talk about the probabilities of actual frequencies. (This is a rather circular situation!) The law of large numbers is about the only thing you can get hold of that connects to reality. It involves a limit of probabilities-of-actual-frequencies, but the limit is 1. If there is a fundamental objection to complex valued probabilities based on the fact that (in some probability space) an event either happens or doesn't, I'd think that the law of large numbers would have to come into play to support the objection.
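The law of large numbers invoked above can be sketched numerically: the empirical frequency of heads over n fair-coin flips approaches the probability 0.5 as n grows. This is a minimal illustration, not a proof:

```python
# Minimal sketch of the law of large numbers: the observed frequency of
# successes in n Bernoulli(p) trials converges to p as n grows.
import random

def empirical_frequency(p, n, seed=0):
    """Fraction of successes in n independent Bernoulli(p) trials."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, empirical_frequency(0.5, n))
```

Note the circularity the post describes: even here, the guarantee that the printed frequencies are close to 0.5 is itself only a probabilistic statement.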
lavinia said:Conditional amplitudes are well defined just as are conditional probabilities.
It seems to me that, since the Schrödinger equation predicts observations exactly, the probabilities of observing a free particle at some point in space are not tenuous connections to reality but exact descriptions of it.
Stephen Tashi said:But what does it mean for a theory to predict a probability "exactly"? Don't the tests of such theories boil down to applying the law of large numbers? Or were you referring to some other testable thing that the Schrödinger equation predicts?
kai_sikorski said:And how do you interpret statements like "There is a 60% chance of rain tomorrow"?
I think any attempt to verify a theoretical probability with real-world data is based on Bayesian inference in one way or another, which is totally unsatisfying because there isn't really a good way to justify the prior probabilities.
kai_sikorski said:I think the only principle that I've seen for assigning priors that I've heard of that is even remotely objective is maximum entropy. However even there you would often seek a maximum entropy distribution satisfying certain constraints, and picking those constraints can again be subjective. However the fact that basically all of the famous probability distributions are actually obtainable as maximum entropy distributions subject to some simple constraint is somehow very satisfying to me.
micromass said:I actually mailed some meteorologists about that.
It means: if you look at all the days with the same initial conditions, then on 60% of those days it will rain. So the weather people will look at all the data from previous days, search the records for days with very similar conditions, and see on which of those days it rained.
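The analog-day procedure micromass describes can be sketched as a toy lookup. The field name, tolerance, and data below are all hypothetical illustrations, not anything a real forecaster uses:

```python
# Toy sketch of the analog-day interpretation of "60% chance of rain":
# among past days with similar conditions, the fraction on which it rained.
# The "pressure" field, tolerance, and records are hypothetical.

def chance_of_rain(today, history, tolerance=2.0):
    similar = [d for d in history
               if abs(d["pressure"] - today["pressure"]) < tolerance]
    if not similar:
        return None  # no comparable days on record
    return sum(d["rained"] for d in similar) / len(similar)

history = [
    {"pressure": 1008, "rained": True},
    {"pressure": 1009, "rained": True},
    {"pressure": 1007, "rained": True},
    {"pressure": 1008, "rained": False},
    {"pressure": 1009, "rained": False},
    {"pressure": 1030, "rained": False},  # too different to count as an analog
]
print(chance_of_rain({"pressure": 1008}, history))  # 0.6
```

Real forecasts compare many more variables than one, but the frequency-over-analogs idea is the same.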
chiro said:Maximum entropy basically implies a uniform distribution, which ends up giving results that are more 'classical', since the prior is proportional to a constant.
kai_sikorski said:No, that's only true if the only restriction you put on the distribution is the range over which it can be positive.
If for example instead you restrict only the mean and variance, then the maximum entropy distribution is Normal. If you specify it has to be positive and have a certain mean then you get the Exponential distribution.
You can specify any number of moments and get different distributions.
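The claim that the Normal maximizes entropy at fixed variance can be checked against closed-form differential entropies. This sketch compares the Normal against two common alternatives, Laplace and Uniform, each parameterized by its variance (the formulas are standard; the comparison distributions are my choice, not from the thread):

```python
import math

# Closed-form differential entropies as functions of the variance, to check
# that the Normal has the largest entropy among distributions with a fixed
# variance (a standard result; Laplace and Uniform serve as comparisons).

def entropy_normal(var):
    return 0.5 * math.log(2 * math.pi * math.e * var)

def entropy_laplace(var):          # Laplace(b) has variance 2*b**2
    return 1 + math.log(math.sqrt(2 * var))

def entropy_uniform(var):          # Uniform on [a, b] has variance (b-a)**2/12
    return math.log(math.sqrt(12 * var))

for var in (0.5, 1.0, 4.0):
    assert entropy_normal(var) > entropy_laplace(var) > entropy_uniform(var)
    print(var, entropy_normal(var), entropy_laplace(var), entropy_uniform(var))
```

Each entropy is a constant plus 0.5*log(var), so the ordering is the same at every variance; only the constant, fixed by the shape of the distribution, differs.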
kai_sikorski said:chiro,
I don't actually work in this field, so I only have a cursory familiarity with it. This section of the Wikipedia entry on prior distributions is along the lines I was thinking of, but it seems you're right: in almost all cases, if you're trying to get an objective prior using maximum entropy, you'll end up with a uniform distribution.
http://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors
The cases I mentioned are mentioned in the article as well, but since you have to specify the mean or mean and variance or other such parameter, this still seems like a subjective prior to me.
Stephen Tashi said:I'm not sure how the existence of complex valued things whose modulus is used to compute real valued probabilities bears on the question of whether there could be a sensible theory of probability that allowed complex values for probabilities. Is your point that we could create a complex valued probability theory (for ordinary macroscopic situations like coin tossing, etc.) by making a theory that parallels the formalism of quantum mechanics? (I've always wondered why quantum mechanics books never seem to worry about things like measure theory.)
The values -1 and i are not used as probabilities and have no direct real-world meaning as such. They may arise in intermediate mathematical steps, for instance as complex amplitudes in quantum mechanics whose squared modulus gives a probability, but they have no practical interpretation as probabilities themselves.
No, in standard probability theory a probability cannot be negative or imaginary. A probability represents the likelihood of an event occurring and is always a real number between 0 and 1 inclusive. Negative or imaginary probabilities do not make sense under the standard axioms and are not used in calculations.
The values -1 and i may appear in the mathematics used in scientific research, but they are not themselves probabilities. They can occur in complex-valued calculations or models without carrying any specific meaning in terms of probability.
No, there is no established practical application in which -1 or i serves as a probability. Such values are not used that way in real-world scenarios and have no meaningful interpretation as probabilities; they are simply quantities that may arise in certain equations.
Probabilities are nonnegative because they represent the relative likelihood of events, and the probabilities of all possible mutually exclusive outcomes must sum to 1, which forces each individual probability to lie between 0 and 1.