B Multiply Probabilities vs. Sum of the Squares

jaydnul
Hi! I'm getting confused by these two things. If I have two uncorrelated probabilistic events and I want to know the probability of seeing them both land beyond 3.3 sigma (for example), do I multiply the probabilities, .001 * .001, or do I take the root sum of the squares, sqrt(.001^2 + .001^2)? I assume it is the former, but can you explain in what context we would use the sum of the squares instead?
 
jaydnul said:
If I have two uncorrelated probabilistic events, and I want to know the probability of seeing them both land beyond 3.3 sigma (for example), do I multiply the probabilities .001*.001 or do I take the root sum of the squares sqrt(.001^2 + .001^2)?
Uncorrelated and independent are different things. If they are independent then you multiply the probabilities. Correlation is simply one specific type of dependence. However, it is easy to come up with examples where one event causes the other with 100% certainty (so the probability of seeing them both is equal to the probability of the cause) and yet they are uncorrelated.
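Not part of the original thread, but here is a quick numerical check of the multiplication rule, assuming "beyond 3.3 sigma" means the two-sided tail of a standard normal (which matches the ~.001 figure in the question). The tail probability is computed exactly with `math.erfc`:

```python
import math

def p_beyond(sigma: float) -> float:
    """Two-sided tail probability P(|Z| > sigma) for a standard normal Z."""
    return math.erfc(sigma / math.sqrt(2))

p = p_beyond(3.3)   # roughly 0.000967, close to the 0.001 quoted above
p_both = p * p      # independence: multiply the individual probabilities

print(p, p_both)
```

Multiplying gives about 1e-6, whereas sqrt(.001^2 + .001^2) would give about 0.0014, larger than either individual probability, which cannot be the probability of both events happening.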

jaydnul said:
explain what context we would use sum of the squares instead?
I don't know of a context where you use the sum of the squares of the probabilities. That could easily lead to a number greater than 1, which couldn’t be a probability.

Often you use the sum of the squares of the standard deviations, for example to calculate the variance of the sum of two random variables: for uncorrelated X and Y, Var(X + Y) = Var(X) + Var(Y), so the standard deviation of the sum is sqrt(σ_X² + σ_Y²).
 
Ok thanks that makes sense!
 