Basic Probability: Questions & Clarification

In summary, the conversation is about probability and the difference between "independent" and "dependent" events (loosely called "variables" in the original question). The probability of an event is the number of outcomes that correspond to the event divided by the total number of possible outcomes. When two events are independent, the occurrence of one does not affect the probability of the other; when they are dependent, it does. These probabilities can be calculated using the axiomatic rules of probability.
  • #1
Imparcticle
Probability has never been my best subject in math, for some reason, and I really want to improve my understanding of it. We just started a chapter on the basics and, as you can imagine, I felt like saying goodbye to my A. I just thought I'd come here and ask a bunch of questions about things I need clarification on.
What exactly is the difference between "independent" and "dependent" [variables]?
I could use all the examples I can get for this one.

Thanks.
 
  • #2
If two events A and B are independent, then:

[tex]P(A | B) = P(A)[/tex].

That is to say, the probability of A given B is just the probability of A, because B happening doesn't affect A.

From the above you can derive that when A and B are independent:

[tex]P(A \cap B) = P(A) P(B)[/tex]
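A quick numerical check can make this concrete. The sketch below (purely illustrative; the names are my own) simulates two fair coin flips many times and compares the estimated P(A and B) with P(A)P(B), and the estimated P(A | B) with P(A):

[code]
# Estimate P(A), P(B), P(A and B) and P(A | B) for two fair coin flips,
# where A = "heads on the first flip" and B = "heads on the second flip".
import random

random.seed(0)
trials = 100_000
count_a = count_b = count_ab = 0

for _ in range(trials):
    first_heads = random.random() < 0.5   # event A
    second_heads = random.random() < 0.5  # event B
    count_a += first_heads
    count_b += second_heads
    count_ab += first_heads and second_heads

p_a = count_a / trials
p_b = count_b / trials
p_ab = count_ab / trials

print(p_a, p_b)            # each close to 0.5
print(p_ab, p_a * p_b)     # both close to 0.25, i.e. P(A and B) = P(A)P(B)
print(count_ab / count_b)  # estimate of P(A | B), close to P(A)
[/code]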
 
  • #3
The probability of Event A happening is
How many times you take the chance divided by how many possible outcomes there are. Let's take the simplest example...

You flip a coin one time; there are two possible outcomes (heads or tails). [tex] \frac {1} {2} = .5[/tex]. Therefore that chance is .5, 50%, 1/2, whatever you want to call it.

Now if you flip that coin two times, your chances are [tex] \frac {2} {2} = 1 [/tex] So you have a 100% chance of getting either a heads or a tails (if you flip the coin twice).

This reminds me...I have been wanting to post a thread on this question, but I suppose it won't harm to throw it in right now. If you flip a coin twice, your chance of getting a heads (for instance) is 100%, but there still IS a chance that you won't get heads. Is this just how the mathematics of chance operate?
 
  • #4
eNathan said:
The probability of Event A happening is
How many times you take the chance divided by how many possible outcomes there are. Let's take the simplest example...

You flip a coin one time; there are two possible outcomes (heads or tails). [tex] \frac {1} {2} = .5[/tex]. Therefore that chance is .5, 50%, 1/2, whatever you want to call it.

Now if you flip that coin two times, your chances are [tex] \frac {2} {2} = 1 [/tex] So you have a 100% chance of getting either a heads or a tails (if you flip the coin twice).

This reminds me...I have been wanting to post a thread on this question, but I suppose it won't harm to throw it in right now. If you flip a coin twice, your chance of getting a heads (for instance) is 100%, but there still IS a chance that you won't get heads. Is this just how the mathematics of chance operate?

But why does it matter if you flip the coin twice?

It would still be 100% heads or tails regardless of the number of tosses, no?

And that statement obviously doesn't work in practice because, as you said, it won't definitely land on heads if you try it.
 
  • #5
eNathan, probability is actually well defined, contrary to what many people who haven't studied it say.

Let's look at a coin problem.

Let's call A the event of getting heads on the first flip and B the event of getting heads on the second flip. Now:

[tex]P(A) = \frac{1}{2} \; \text{and} \; P(B) = \frac{1}{2}[/tex]

If event A happens, P(B) is still 1/2. That is to say, given event A, the probability of B is unchanged:

[tex]P(B | A) = P(B)[/tex]

This tells us the two events A and B are independent. From this we can say that the probability of both A and B occurring is equal to the probability of A occurring multiplied by the probability of B occurring, or:

[tex]P(A \cap B) = P(A) P(B) = \frac{1}{4}[/tex]

Try to tackle every problem with rigorous use of the axiomatic probability rules if you don't yet have a feel for probability; otherwise, more often than not, you just end up confused.
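In exact arithmetic this is just the multiplication rule; a minimal sketch (illustrative only):

[code]
# P(A and B) = P(A) * P(B | A); for independent flips P(B | A) = P(B) = 1/2.
from fractions import Fraction

p_a = Fraction(1, 2)           # heads on the first flip
p_b_given_a = Fraction(1, 2)   # heads on the second flip, given the first was heads
print(p_a * p_b_given_a)       # 1/4
[/code]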
 
  • #6
Where to begin! In addition to having absolutely nothing to do with the original question about the difference between "dependent" and "independent" events, eNathan's answer is simple nonsense!

"The probability of Event A happening is
How many times you take the chance divided by how many possible outcomes there are."
No, it has nothing to do with "how many times you take the chance". In simple discrete probability it is the number of outcomes that correspond to event A divided by the total number of possible outcomes.

"Now if you flip that coin two times, your chances are [tex]\frac{2}{2}= 1[/tex] So you have a 100% chance of getting either a heads or a tails (if you flip the coin twice)."
Number of "times you flip" divided by number of outcomes (head or tail: 2)! But as I said, that's completely wrong. The correct calculation here is: if you flip a coin twice, there are four possible outcomes: (H,H) (heads on both first and second flips), (H,T) (heads on first flip, tails on second), (T,H) (tails on first flip, heads on second), (T,T) (tails on both first and second flip). I'm not sure what he means by "getting either a head or a tail. If you flip a coin ONCE you are certain to get either heads of tails! eNathan may be saying that since the probability of getting heads on one flip is 1/2 and the probability of getting heads on one flip is 1/2, the probabililty of getting "heads and tails" (not "or") is 1/2+ 1/2= 1. No, the probability of getting heads on the first flip and heads on the second is (1/2)*(1/2)= 1/4- you multiply, not add.
Or, he may be arguing that since the probability of "a or b" is prob(a)+ prob(b) (for "mutally exclusive" events- the more general formula is prob(a)+prob(b)-prob(a and b)), the probability of getting heads or tails is 1/2+ 1/2. That's true- on one flip, not two. Since the coin can only come up heads or tails, the probability of getting heads or tails on one flip 1- it's certain to happen.

"This reminds me...I have been wanting to post a thread on this question, but I suppose it won't harm to throw it in right now. If you flip a coin twice, your chances of getting a heads (for instsance) is 100%, but there still IS a chance that you won't get heads. Is this just how the mathematics of chance operate?"

No, if you flip a coin twice, the probability of getting a head (at least one) is 75%: you could get two heads, probability (1/2)(1/2) = 1/4; a head on the first flip and a tail on the second, probability (1/2)(1/2) = 1/4; or a tail on the first flip and a head on the second, probability (1/2)(1/2) = 1/4. The probability of getting at least one head is 1/4 + 1/4 + 1/4 = 3/4. (The probability of getting exactly one head, if that was what was meant, is 1/4 + 1/4 = 1/2.)
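For concreteness, a small sketch (illustrative, not from any textbook) that simply lists the four equally likely outcomes of two flips and counts them:

[code]
# Enumerate the four equally likely outcomes of two fair coin flips.
from itertools import product

outcomes = list(product("HT", repeat=2))   # ('H','H'), ('H','T'), ('T','H'), ('T','T')
at_least_one_head = sum(1 for o in outcomes if "H" in o)
exactly_one_head = sum(1 for o in outcomes if o.count("H") == 1)

print(at_least_one_head / len(outcomes))   # 0.75 -> 3/4
print(exactly_one_head / len(outcomes))    # 0.5  -> 1/2
[/code]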

eNathan, if the result of your reasoning is nonsense, you should at least consider the possibility (if not "probability") that it is your reasoning that is at fault rather than the mathematics!
 
  • #7
In response to the OP.

A dependent variable is a variable whose value is determined by the value of an independent variable; simply put, the dependent variable (y) is a function of the independent variable (x): y = f(x).

An independent variable is the observed variable, and is not dependent on any other variables in the environment.

NewScientist
 
  • #8
NewScientist said:
In response to the OP.

A dependent variable is a variable whose value is determined by the value of an independent variable; simply put, the dependent variable (y) is a function of the independent variable (x): y = f(x).

An independent variable is the observed variable, and is not dependent on any other variables in the environment.

NewScientist
That's right, but I believe that when the OP said "variables", he/she meant "random variables" (given that the subject is probability), which aren't exactly defined the way you have done here.

-- AI
 
  • #9
Well,

The dependent variable is not a random variable; it is a function of other variables - it DEPENDS on their inputs.

In probability, especially in forecasting or simulations, some data are input as independent variables, and the dependent variables follow.
 
  • #10
Hmmm, looking back I see that the original post did say "(variables)" but he also was talking about probability. I suspect he really did mean dependent and independent "events" rather than "variables".

Here's an example that I thought of when I was reading eNathan's ravings: Suppose a coin has probability 0.5 of either heads or tails (a fair coin): P(H) = 1/2. I flip the coin twice. Since what happens on the first flip does not affect the second flip, they are "independent". The probability that the first flip is heads and the second tails is (1/2)(1/2) = 1/4. The probability of "first tails, then heads" is also 1/4, and the probability of "one heads, the other tails, in any order" is 1/4 + 1/4 = 1/2.
Now suppose I have an "unfair" coin that has probability 1/3 of coming up heads, 2/3 of coming up tails: P(H) = 1/3, P(T) = 2/3. The probability that, when I flip it twice, the first flip is heads and the second tails is (1/3)(2/3) = 2/9. The two flips are still independent. The probability of "first tails, second heads" is still 2/9, and the probability of "one heads, the other tails, in any order" is 2/9 + 2/9 = 4/9.

NOW, I flip my fair coin. If it comes up heads I will flip it again; if it comes up tails, I will flip the unfair coin. The probability it comes up heads on the first flip is 1/2, and then I flip it again. Now the probability of getting tails on the second flip is also 1/2, and the probability of getting "first heads, then tails" is (1/2)(1/2) = 1/4. But there is a 1/2 probability that the fair coin will come up tails. If that happens, I flip the unfair coin, and now the probability that it will come up heads is only 1/3. The probability of getting "first tails, then heads" is (1/2)(1/3) = 1/6. The probabilities on the second flip depend on what happens on the first flip. The probability of getting "one heads, one tails, in any order" is now 1/4 + 1/6 = 5/12.
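A brief enumeration of this two-stage experiment (an illustrative sketch using the probabilities above) confirms the 5/12:

[code]
# Flip a fair coin; on heads, flip the fair coin again; on tails, flip an
# unfair coin with P(H) = 1/3.  Exact probability of "one head and one tail".
from fractions import Fraction

fair = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
unfair = {"H": Fraction(1, 3), "T": Fraction(2, 3)}

total = Fraction(0)
for first, p1 in fair.items():
    second_coin = fair if first == "H" else unfair
    for second, p2 in second_coin.items():
        if {first, second} == {"H", "T"}:
            total += p1 * p2

print(total)   # 5/12 = 1/4 + 1/6
[/code]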
 
  • #11
Yes HallsofIvy, I meant independent or dependent events. Right now I have much more time to describe (much more specifically) my problem with probability.

My math book defines the basic counting principle the following way:

Suppose an event can occur in p different ways. Another event can occur in q different ways. There are (p)(q) ways both events can occur.

The book then pointed out that this principle can be extended to accommodate events that are defined as being either dependent or independent. I thought I understood how this could be done after looking at the examples, but then became lost when I had to do the practice problems.

For example, here is one problem that I do not understand:

1.) How many 4 digit patterns are there in which all the digits are different?

answer: 480

When I look at this problem, all I can come up with is this:
The question can be restated as "How many 4 digit patterns can be formed from the numbers 0-9 using each number exactly once?"
I would thus do the problem by finding the quotient of [9!/(4-1)!] , in which case I'd get 60480.
I merely chose this method because it worked on the previous problem:

How many different 4 letter patterns can be formed from the letters a, e, i, o, r, s and t if no letter occurs more than once?
answer: 840
how to get the answer: [7!/(4-1)!], where 7 represents the number of letters and 4 is the number of letter patterns. Why the minus 1? It just worked, so I accepted it.

If there is anything I abhor, it is doing math using algorythms (<--gee, great speller I am!) that I memorize, instead of understand. So please don't give me formulae, but just some conceptual examples that'll make this as close to intuitive as it can be.

Thanks again.
 
  • #12
HallsofIvy said:
Hmmm, looking back I see that the original post did say "(variables)" but he also was talking about probability. I suspect he really did mean dependent and independent "events" rather than "variables".

Here's an example that I thought of when I was reading eNathan's ravings: Suppose a coin has probability 0.5 of either heads or tails (a fair coin): P(H) = 1/2. I flip the coin twice. Since what happens on the first flip does not affect the second flip, they are "independent". The probability that the first flip is heads and the second tails is (1/2)(1/2) = 1/4. The probability of "first tails, then heads" is also 1/4, and the probability of "one heads, the other tails, in any order" is 1/4 + 1/4 = 1/2.
Now suppose I have an "unfair" coin that has probability 1/3 of coming up heads, 2/3 of coming up tails: P(H) = 1/3, P(T) = 2/3. The probability that, when I flip it twice, the first flip is heads and the second tails is (1/3)(2/3) = 2/9. The two flips are still independent. The probability of "first tails, second heads" is still 2/9, and the probability of "one heads, the other tails, in any order" is 2/9 + 2/9 = 4/9.

So if you flip it three times, you would multiply (1/2)(1/2)(1/2) (if it is a fair coin)?
Why does this (the method you explained) work?
 
  • #13
Why wouldn't it work? If event A has probability p of occurring on any given trial (0 <= p <= 1) and you do n independent trials, then the probability that A will occur every time is p^n.

One way to see that is to write B for "A does not happen", so this becomes a binomial situation. The probability of A is 1/2, so the probability of B is also 1/2. Doing the experiment n times means that there are 2^n different equally likely outcomes (from AAA...A to BBB...B: there are two things (A or B) that can happen each time, so there are 2^n ways they can happen in n trials). Of those, only one is "AAA...A", so the probability is 1/2^n. In particular, the probability of getting heads once is 1/2, the probability of getting "HH" (heads two consecutive times in two flips) is 1/2^2 = 1/4, and the probability of getting "HHH" (heads three consecutive times in three flips) is 1/2^3 = 1/8.
If you flip a fair coin n times, the probability of getting i heads and j = n - i tails (in any order) is

[tex]\binom{n}{i} \left( \frac{1}{2} \right)^n[/tex]

where the binomial coefficient "n choose i" counts the orders in which the i heads can occur.
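A short sketch with Python's math.comb (illustrative only) reproduces these numbers:

[code]
# Probability of exactly i heads in n flips of a fair coin: C(n, i) / 2**n.
from math import comb

def prob_heads(n, i):
    return comb(n, i) / 2**n

print(prob_heads(2, 2))                        # 0.25  -> "HH"
print(prob_heads(3, 3))                        # 0.125 -> "HHH"
print(prob_heads(2, 1))                        # 0.5   -> exactly one head in two flips
print(sum(prob_heads(2, i) for i in (1, 2)))   # 0.75  -> at least one head in two flips
[/code]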
 
  • #14
HallsofIvy said:
Where to begin! In addition to having absolutely nothing to do with the original question about the difference between "dependent" and "independent" events, eNathan's answer is simple nonsense!

"The probability of Event A happening is
How many times you take the chance divided by how many possible outcomes there are."
No, it has nothing to do with "how many times you take the chance". In simple discrete probability it is the number of outcomes that correspond to event A divided by the total number of possible outcomes.

"Now if you flip that coin two times, your chances are [tex]\frac{2}{2}= 1[/tex] So you have a 100% chance of getting either a heads or a tails (if you flip the coin twice)."
Number of "times you flip" divided by number of outcomes (head or tail: 2)! But as I said, that's completely wrong. The correct calculation here is: if you flip a coin twice, there are four possible outcomes: (H,H) (heads on both first and second flips), (H,T) (heads on first flip, tails on second), (T,H) (tails on first flip, heads on second), (T,T) (tails on both first and second flip). I'm not sure what he means by "getting either a head or a tail. If you flip a coin ONCE you are certain to get either heads of tails! eNathan may be saying that since the probability of getting heads on one flip is 1/2 and the probability of getting heads on one flip is 1/2, the probabililty of getting "heads and tails" (not "or") is 1/2+ 1/2= 1. No, the probability of getting heads on the first flip and heads on the second is (1/2)*(1/2)= 1/4- you multiply, not add.
Or, he may be arguing that since the probability of "a or b" is prob(a)+ prob(b) (for "mutally exclusive" events- the more general formula is prob(a)+prob(b)-prob(a and b)), the probability of getting heads or tails is 1/2+ 1/2. That's true- on one flip, not two. Since the coin can only come up heads or tails, the probability of getting heads or tails on one flip 1- it's certain to happen.

"This reminds me...I have been wanting to post a thread on this question, but I suppose it won't harm to throw it in right now. If you flip a coin twice, your chances of getting a heads (for instsance) is 100%, but there still IS a chance that you won't get heads. Is this just how the mathematics of chance operate?"

No, if you flip a coin twice, the probability of getting a head (at least one) is 75%: you could get two heads, probability (1/2)(1/2) = 1/4; a head on the first flip and a tail on the second, probability (1/2)(1/2) = 1/4; or a tail on the first flip and a head on the second, probability (1/2)(1/2) = 1/4. The probability of getting at least one head is 1/4 + 1/4 + 1/4 = 3/4. (The probability of getting exactly one head, if that was what was meant, is 1/4 + 1/4 = 1/2.)

eNathan, if the result of your reasoning is nonsense, you should at least consider the possibility (if not "probability") that it is your reasoning that is at fault rather than the mathematics!

Yes, you are right. I forgot it :lol: I don't know how I forgot that, I just learned it a year ago.

No, if you flip a coin twice, the probability of getting a head (at least one) is 75%:
Actually, I was just thinking about the answer after I posted the question.

Anyway, thanks for correcting me; I'll make sure to double-check what I write from now on. :grumpy:

Would this be correct? The chance of event A occurring is
[tex]P = 1 - \frac { (\frac {1} {O}) } {T} [/tex]
where P is the probability, O is the possible number of outcomes, and T is how many times you take this chance? :smile:
 
  • #15
NewScientist said:
The dependent variable is not a random variable; it is a function of other variables - it DEPENDS on their inputs.

In probability, especially in forecasting or simulations, some data are input as independent variables, and the dependent variables follow.
Let us be careful with our vocabulary here. If you were doing regression analysis, just the opposite is true. In the regression equation y(t) = b_0 + b_1 x_1(t) + ... + b_k x_k(t) + u(t), the x's are independent, nonrandom variables; y is a dependent, random variable; and u is a random error. (And the b's are the coefficients to be estimated by regression analysis.)

In general, if y = f(x,u) where x is either random or nonrandom and u is random, then y is random. In this general form, too, y is the dependent variable and x is the independent variable.

{P.S. If you know function f, then you can uniquely estimate the distribution of y from the distribution of u (assuming x is nonrandom, as usually assumed in regression analysis). Conversely, if you knew the distributions of both y and u (and provided one or two standard assumptions hold), then you can derive the function f uniquely (as a combination of b's and x's). This is the statistical theory of regression analysis explained in a few words while straining to sit upright in front of the computer screen.}
{P.P.S. Regression analysis uses "independent" and "dependent" differently from the way they are used in general probability theory. For this general usage, independence between two variables is defined in terms of conditional probabilities. Someone has already posted that definition above in this thread. An alternative definition of independence in that general sense is "two variables are independent if and only if their joint distribution is identical to the product of their individual (i.e. marginal) distributions." Moreover, linear independence is operationalized as the covariance between X and Y being zero. In other words, "two variables are linearly independent if and only if their covariance is zero." General independence implies linear independence; but the reverse is not true: two variables that have zero covariance may be dependent in a nonlinear way.}
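To illustrate that last point, here is a small sketch (made-up data, just for illustration) where the sample covariance is essentially zero even though Y is completely determined by X:

[code]
# X symmetric about 0 and Y = X**2: Cov(X, Y) is (essentially) zero,
# yet Y is a deterministic function of X, so X and Y are not independent.
import random

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200_000)]
ys = [x * x for x in xs]

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / len(xs)

print(cov)   # near zero (up to sampling noise), despite the exact dependence
[/code]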
 
  • #16
For example, here is one problem that I do not understand:

1.) How many 4 digit patterns are there in which all the digits are different?

answer: 480

When I look at this problem, all I can come up with is this:
The question can be restated as "How many 4 digit patterns can be formed from the numbers 0-9 using each number exactly once?"
I would thus do the problem by finding the quotient of [9!/(4-1)!] , in which case I'd get 60480.

No, neither of those answers is correct. I believe the answer in the book is incorrect (should be 5040).

You need to learn about combinations and permutations. There are general formulae for these, but they are easily derived and understood. I'll take this "4 digits out of 10" example to help explain the formula.

Firstly, assuming that each different ordering of the same four digits counts as a different pattern (e.g. that 1,2,3,4 is a different pattern from 2,1,3,4), the number of possible patterns is simply 10*9*8*7. That is, there are 10 ways you can choose the first digit of each pattern (any one of 0..9), but only 9 ways you can choose the second digit (because the problem stipulates that each digit occurs only once), only 8 ways you can choose the third digit, and so forth.

This type of counting problem occurs so often that we give it a name (permutation) and generalize the formula to a compact form. I think you can see that another way to write 10*9*8*7 is 10!/(10-4)!. This is then generalized to the number of permutations of "r" items drawn from a set of "n" items:
Perm(n,r) = n!/(n-r)!

A closely related problem is the case where each group of four digits is only counted once, regardless of how many ways that group can be internally rearranged. That is, this time we don't count 1,2,3,4 as a different pattern from 2,1,3,4. It is fairly easy to see that there are 4*3*2*1 = 4! ways of rearranging any particular 4 digits (4 ways to choose the first digit of the arrangement, 3 ways to choose the second, and so on), and hence there are 4! times fewer combinations of the 4 digits than there are permutations. In general there are r! times fewer combinations than permutations (i.e. divide by r!), so the general formula is:
comb(n,r) = n!/( (n-r)! r! ).
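As a quick sanity check, both formulas are easy to evaluate; a minimal sketch using Python's factorial (illustrative only):

[code]
# Permutations: n!/(n-r)!    Combinations: n!/((n-r)! r!)
from math import factorial

def perm(n, r):
    return factorial(n) // factorial(n - r)

def comb(n, r):
    return factorial(n) // (factorial(n - r) * factorial(r))

print(perm(10, 4))   # 5040 -> ordered 4-digit patterns with all digits different
print(perm(7, 4))    # 840  -> ordered 4-letter patterns from 7 letters
print(comb(10, 4))   # 210  -> unordered sets of 4 distinct digits
[/code]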

Hope that helps.
 
  • #17
Imparcticle said:
How many different 4 letter patterns can be formed from the letters a, e, i, o, r, s and t if no letter occurs more than once?
answer: 840
how to get the answer: [7!/(4-1)!], where 7 represents the number of letters and 4 is the number of letter patterns. Why the minus 1? It just worked, so I accepted it.

OK, this is one example where your book does have the correct *answer*; only it's not 7!/(4-1)!, it's 7!/(7-4)!. (The two happen to be equal here only because 7 - 4 = 3 = 4 - 1.) Just take a look at the definition and derivation of the permutation formula above and you'll easily understand this one.
 
  • #18
Or, more simply, there are 10 possible digits for the first place. Since you cannot repeat digits, there are 9 possible digits for the second place, 8 for the third, and 7 for the fourth: by the "fundamental counting principle" the number of ways of writing 4 distinct digits is 10*9*8*7. Of course, if you really want to use factorial notation,
that is the same as (10*9*8*7*6*5*4*3*2*1)/(6*5*4*3*2*1)= 10!/6!.
In either case, that is 5040 as uart said.

For the other problem: "How many different 4 letter patterns can be formed from the letters a, e, i, o, r, s and t if no letter occurs more than once?"

I would argue that there are 7 choices for the first letter, 6 for the second, 5 for the third, and 4 for the last. That means there are 7*6*5*4= (7*6*5*4*3*2*1)/(3*2*1)= 7!/3!= 7!/(7-4)!= 840.
 

1. What is the definition of probability?

Probability refers to the measure of the likelihood that an event will occur. It is expressed as a number between 0 and 1, where 0 represents impossibility and 1 represents certainty.

2. What is the difference between theoretical and experimental probability?

Theoretical probability is the expected probability based on mathematical calculations and assumptions, while experimental probability is the observed probability based on actual experiments or data.

3. How do you calculate probability?

To calculate probability, you divide the number of favorable outcomes by the total number of possible outcomes. For example, if you are rolling a six-sided die and want to know the probability of rolling a 3, you would divide 1 (the number of favorable outcomes) by 6 (the total number of possible outcomes) to get a probability of 1/6 or approximately 16.67%.
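A short sketch (illustrative) contrasting the theoretical value with an experimental estimate for a fair six-sided die:

[code]
# Theoretical probability of rolling a 3 vs. the frequency observed
# over many simulated rolls of a fair die.
import random

random.seed(1)
theoretical = 1 / 6

rolls = 60_000
hits = sum(1 for _ in range(rolls) if random.randint(1, 6) == 3)
experimental = hits / rolls

print(theoretical)    # 0.1666...
print(experimental)   # close to 1/6, but it varies from run to run
[/code]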

4. What is the difference between independent and dependent events?

Independent events are events where the outcome of one event does not affect the outcome of another event. For example, flipping a coin twice is an independent event because the outcome of the first flip does not affect the outcome of the second flip. Dependent events are events where the outcome of one event does affect the outcome of another event. For example, drawing two cards from a deck without replacing the first card is a dependent event because the probability of drawing the second card is affected by the outcome of the first card.
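For example, a minimal sketch (illustrative) comparing drawing two aces with and without replacement:

[code]
# Probability of drawing two aces from a standard 52-card deck.
from fractions import Fraction

with_replacement = Fraction(4, 52) * Fraction(4, 52)      # independent draws
without_replacement = Fraction(4, 52) * Fraction(3, 51)   # second draw depends on the first

print(with_replacement)      # 1/169
print(without_replacement)   # 1/221
[/code]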

5. How do you use the addition and multiplication rules in probability?

The addition rule states that the probability of two mutually exclusive events occurring is the sum of their individual probabilities. The multiplication rule states that the probability of two independent events occurring together is the product of their individual probabilities. These rules can be used to calculate the probability of more complex events or to determine the probability of multiple events occurring in a sequence.
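A tiny sketch (illustrative) of both rules with a fair six-sided die:

[code]
# Addition rule (mutually exclusive events) and multiplication rule
# (independent events) for a fair six-sided die.
from fractions import Fraction

p_face = Fraction(1, 6)

p_one_or_two = p_face + p_face   # P(roll a 1 or a 2) = 1/6 + 1/6
p_two_sixes = p_face * p_face    # P(6 on each of two independent rolls)

print(p_one_or_two)   # 1/3
print(p_two_sixes)    # 1/36
[/code]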
