Probability and sample spaces: definition

ak416
Ok, I am taking a stats course right now and I am trying to understand exactly how probability is defined. It says in the textbook that there are a few ways it can be defined. I understand the first one: assume an experiment with n possible outcomes, each equally likely. If some event is satisfied by m of the n outcomes, then the probability of that event is m/n. However, if the outcomes are not all equally likely, then this definition can't be used. There are also other definitions, like empirical probability and subjective probability, but these don't really give you a precise answer. Then there's axiomatic probability with 4 axioms. But all it says is
1. P(A) >= 0,
2. P(S) = 1,
3. P(A U B) = P(A) + P(B) for mutually exclusive events A and B
4. P(A1 U A2 U A3 U ...) = sum from i=1 to infinity of P(Ai), for a sequence of pairwise mutually exclusive events A1, A2, A3, ...

this still doesn't give an explicit answer for what the probability of any event A would be! Using 3, to know P(A) I would need to know P(A U B) and P(B), and to know either of those I would need to know the other probabilities.

I think the best definition is the first definition, but then there must be a way to reduce all elements of a sample space to being equally likely.

Any insight would be greatly appreciated.
 
Suppose you have 3 independent outcomes A, B, and C, where A and B are equally likely and C is twice as likely as A or B. Then you can define events C1 and C2, each as likely as A or B, and run the experiment using the following routine: the first time C is observed, it is credited to C1; the second time C is observed, it is credited to C2; and so on, alternating.

Prob{C} can then be defined as Prob{C1} + Prob{C2}.
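The splitting trick above can be sketched in a short simulation (the function and variable names are my own, not from the thread): since C is twice as likely as A or B, the experiment is equivalent to drawing from four equally likely sub-outcomes A, B, C1, C2 and tallying C1 and C2 together as C.

```python
import random

def run_trials(n_trials, seed=0):
    """Draw from four equally likely sub-outcomes; count C1 and C2 as C."""
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0, "C": 0}
    for _ in range(n_trials):
        sub = rng.choice(["A", "B", "C1", "C2"])  # all equally likely
        counts[sub[0]] += 1  # "C1" and "C2" both tally under "C"
    return counts

counts = run_trials(100_000)
# Prob{C} = Prob{C1} + Prob{C2} = 1/4 + 1/4 = 1/2, so the observed
# frequency of C should be near 0.5, with A and B each near 0.25.
```

This is exactly the reduction the original poster asked about: a non-uniform space rewritten as a uniform one so the m/n definition applies.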
 
this still doesn't give an explicit answer for what the probability of any event A would be!
And it shouldn't! There are lots of possible probability measures on any given set of events.

For example, to model an ordinary coin, you would use the uniform distribution on {heads, tails}: P(heads) = P(tails) = 1/2. To model a double-headed coin, you would use the distribution where P(heads) = 1 and P(tails) = 0.
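The point that many measures satisfy the axioms can be made concrete with a small sketch (my own illustration, not from the thread): on a finite sample space, any assignment of nonnegative numbers summing to 1 is a valid probability measure, and an event's probability is just the sum over its outcomes.

```python
def event_prob(dist, event):
    """Probability of an event (a set of outcomes) under a distribution."""
    return sum(dist[outcome] for outcome in event)

fair = {"heads": 0.5, "tails": 0.5}    # ordinary coin
rigged = {"heads": 1.0, "tails": 0.0}  # double-headed coin

# Both distributions satisfy the axioms (nonnegative, summing to 1),
# yet they assign different probabilities to the same event:
assert event_prob(fair, {"heads"}) == 0.5
assert event_prob(rigged, {"heads"}) == 1.0
assert event_prob(fair, {"heads", "tails"}) == 1.0  # P(S) = 1
```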
 
Oh, I see, so you just have to find the probability of each event relative to the others, and using the fact that the probability of the whole sample space is 1, you would be able to find the absolute probability. It still doesn't really give an answer to how you would know which events are relatively more likely than the others (unless you can reduce the sample space to a bunch of equally likely outcomes), but I guess that must be found experimentally?
 
ak416 said:
Oh, I see, so you just have to find the probability of each event relative to the others... but I guess that must be found experimentally?
The definition you posted made an axiomatic determination of the relative probabilities ("they are all equal"); my thought experiment was an extension of that axiomatic statement.

Now you are going one step beyond the original post and asking "how did they know all outcomes were equally likely?" My guess is they could've formulated it as a hypothesis and then statistically tested it, so it could've been experimental in that respect. But they did not need to. One can start with an axiomatic statement and arrive at an axiomatic definition without being experimental.
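The "formulate it as a hypothesis and test it" idea can be sketched with a chi-square goodness-of-fit test (my own example, not from the thread): test whether a die's six faces are equally likely. The critical value 11.07 is the standard table entry for 5 degrees of freedom at the 0.05 significance level.

```python
import random

def chi_square_stat(counts, expected):
    """Pearson chi-square statistic for observed vs. expected counts."""
    return sum((obs - exp) ** 2 / exp for obs, exp in zip(counts, expected))

# Simulate 6000 rolls of a (pseudo-random, hence approximately fair) die.
rng = random.Random(1)
rolls = [rng.randint(1, 6) for _ in range(6000)]
counts = [rolls.count(face) for face in range(1, 7)]
expected = [1000.0] * 6  # "all faces equally likely" predicts 1000 each

stat = chi_square_stat(counts, expected)
is_consistent = stat < 11.07  # below the critical value: fail to reject
```

If the statistic exceeds the critical value, the hypothesis of equal likelihood is rejected; otherwise the data are consistent with it. This is one way the "equally likely" assumption could be checked experimentally rather than taken as an axiom.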
 