Bernoulli Distribution / Random Variables

In summary: A and B are independent if P(A and B) = P(A)P(B) holds for every pair of events, i.e. P(X = x and Y = y) = P(X = x)P(Y = y) for all x and y. This identity is the way to check independence, and it is a stronger condition than the PDFs of the two variables merely being equal. To conclude *dependence* you only need a single violating pair: choose X and Y so that P(X = x and Y = y) != P(X = x)P(Y = y) for some x and y.
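A quick worked check of that criterion (a minimal example of my own, not from the thread): let X be Bernoulli(1/3) and put Y = X, so both have the same distribution. Then

$$P(X = 1,\, Y = 1) = P(X = 1) = \tfrac{1}{3} \;\neq\; \tfrac{1}{9} = P(X = 1)\,P(Y = 1),$$

so this pair is dependent even though the marginal distributions match.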
  • #1
AleksIlia94

Homework Statement



Take Ω = [0, 1] and P the uniform probability.
(a) Give an example of two random variables X and Y defined on Ω that
both have the same Bernoulli distribution with parameter θ = 1/3.

(b) Give an example of such random variables that are independent, and not
independent.

Homework Equations


For a Bernoulli distribution:

Pr(X = 0) = 1 - θ
Pr(X = 1) = θ

The Attempt at a Solution



I have that Pr(X=0)=Pr(Y=0)=2/3
Pr(X=1)=Pr(Y=1)=1/3
What I am not sure about is the part where I am asked for examples of two random variables with this distribution. Can I simply say that X represents rolling a 1 on a three-sided die, as an example? Then what would I choose for Y so that X and Y are independent, for the second part?

Thanks for your help.
 
  • #2
Hey AleksIlia94 and welcome to the forums.

Typically, if we say that two variables have the same distribution, we mean their PDFs are equal. So the PDF of X is the same as that of Y (which is what you have defined above with θ and 1 - θ), but X and Y are still distinct random variables: X need not equal Y even though they share the same PDF.

As a hint of a physical situation: remember that a single Bernoulli trial represents an outcome that is either true or false. So if you have N of these individual processes, what can you say?

In terms of independence: two random variables are certainly dependent if one is a function of the other. So if Y = X^2, where X and Y are random variables, then they are dependent.

Formally, P(A and B) = P(A)P(B) means the two random variables may be independent, but they don't have to be.

If P(A and B) != P(A)P(B) (think of P(A) as the PDF of A and P(B) as the PDF of B), then we know for sure that they are not independent; but if the two sides are equal, we can't say for sure that they are.

Basically, independence means that the outcome of a trial for X doesn't change the result for Y, no matter what you do. So if X and Y have fixed distributions, then the way you get non-independence between the two is to have X = f(Y) or Y = g(X) for some f or g; if they are independent, it means no such f or g exists.

However, if you are allowed to choose any two random variables that are not independent, then you can just create any probability function where P(A and B) != P(A)P(B) for some a in A and b in B and that's enough (or just define Y = f(X) or X = g(Y) as above).
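A minimal sketch of that last construction (my own illustration, not from the thread): define Y = f(X) with f the identity, so Y has the same Bernoulli(1/3) distribution as X, then check empirically that the product rule fails.

```python
import random

# Build a dependent pair by setting Y = f(X), then check the product rule.
def trial():
    x = 1 if random.random() < 1/3 else 0  # X ~ Bernoulli(1/3)
    y = x                                  # Y = f(X) with f = identity
    return x, y

n = 100_000
samples = [trial() for _ in range(n)]
p_x1 = sum(x for x, _ in samples) / n
p_y1 = sum(y for _, y in samples) / n
p_both = sum(x * y for x, y in samples) / n

# For independence we would need p_both ≈ p_x1 * p_y1 (≈ 1/9);
# here p_both ≈ 1/3, so the pair is clearly dependent.
print(p_both, p_x1 * p_y1)
```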
 
  • #3
AleksIlia94 said:

Homework Statement



Take Ω = [0, 1] and P the uniform probability.
(a) Give an example of two random variables X and Y defined on Ω that
both have the same Bernoulli distribution with parameter θ = 1/3.

(b) Give an example of such random variables that are independent, and not
independent.

Homework Equations


For a Bernoulli distribution:

Pr(X = 0) = 1 - θ
Pr(X = 1) = θ




The Attempt at a Solution



I have that Pr(X=0)=Pr(Y=0)=2/3
Pr(X=1)=Pr(Y=1)=1/3
What I am not sure about is the part where I am asked for examples of two random variables with this distribution. Can I simply say that X represents rolling a 1 on a three-sided die, as an example? Then what would I choose for Y so that X and Y are independent, for the second part?

Thanks for your help.

You have not really solved the problem yet. You are supposed to say how you would start with a random variable uniformly distributed between 0 and 1 (that is, one whose values range over all of [0, 1]) and somehow turn that into a Bernoulli random variable that takes only the two values 0 and 1, with the probabilities specified. Note: this is what we do when we perform Monte Carlo simulation on a computer: we start with a uniform distribution, then do some manipulation to get a non-uniform distribution.
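A minimal sketch of that recipe in Python (my own illustration; the choice of the subinterval [0, 1/3) is one of many that work):

```python
import random

def bernoulli_from_uniform(omega: float) -> int:
    """Map a uniform sample omega in [0, 1] to a Bernoulli(1/3) value."""
    return 1 if omega < 1/3 else 0  # P(omega < 1/3) = 1/3 under the uniform law

# Empirical check: the fraction of 1s should be close to 1/3.
n = 100_000
freq = sum(bernoulli_from_uniform(random.random()) for _ in range(n)) / n
print(freq)  # ≈ 0.333
```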

After you have figured out how to get a single Bernoulli sample X, you need to figure out how to get a second Bernoulli sample Y that is not independent of X, and another Y that is independent of X. Note: this means that given a *single* sample point ω in [0,1], you need to generate both X and Y from it; that is, you need to describe X(ω) and Y(ω), both using the same ω.

RGV
 
  • #4
So you're saying that the random variables X and Y I choose must be defined via events inside the sample space [0,1]? I.e., X can be the indicator of choosing a number in [0, 1/3] and Y the indicator of choosing a number in [1/2, 5/6], since the probabilities of both events fit the given distribution? That makes sense to me, but for the part asking for independent X and Y with this distribution on this sample space, I am struggling to find X and Y such that Pr(X = x)Pr(Y = y) = Pr(X = x and Y = y).
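To see concretely why that particular pair fails the product rule, one can compute the interval lengths directly (a quick check of my own, using the intervals named above):

```python
# X = indicator of [0, 1/3], Y = indicator of [1/2, 5/6] on Omega = [0, 1].
# Under the uniform probability, P(event) is just the length of the interval.
p_x1 = 1/3 - 0          # length of [0, 1/3]
p_y1 = 5/6 - 1/2        # length of [1/2, 5/6]
p_both = 0.0            # the intervals don't overlap, so P(X=1 and Y=1) = 0

print(p_x1 * p_y1)  # 1/9 ≈ 0.111...
print(p_both)       # 0.0 -> the product rule fails, so X and Y are dependent
```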
 
  • #5
What other constraints do you need between X and Y? The first thing you should think about: if that is all that is required, what is the simplest way to get independent variables?
 
  • #6
There are no other restrictions on X and Y except that they must have the same distribution and, in the second part, that they must be independent. From what I know, the simplest way to have independent variables is to choose them so that P(a)P(b) = P(a and b)... I'm not really sure what you are getting at, or whether there is a simpler way?
 
  • #7
That identity doesn't imply independence, but it is implied by independence.

Basically, something is independent of something else if there is no direct function linking them: if Y = f(X) or X = f(Y), then X is not independent of Y.

Under uncertainty this is a difficult thing to pin down, but if you can say that getting X does not affect getting Y in terms of the process (with a good enough argument), then that is enough.

Think about how you would describe it process-wise: you have already figured out the PDF, but again, having a PDF with these properties only guarantees that the variables are uncorrelated, not that they are independent; independence comes from knowing what the variables actually mean in the context of some process.
 
  • #8
I don't see how that is relevant, though; the question is just asking me for an example of two random variables that fit this distribution. My example of X and Y on the sample space [0,1] in the previous post fits the distribution, although they are not independent, since getting X restricts Y and vice versa. What I want to know is whether I am on the right track with my example. I'm not entirely sure what you are saying about independence: are you saying that we cannot choose X and Y in the sample space such that they are independent without knowing more? I am at a bit of a loss. Thanks.
 
  • #9
AleksIlia94 said:
I don't see how that is relevant, though; the question is just asking me for an example of two random variables that fit this distribution. My example of X and Y on the sample space [0,1] in the previous post fits the distribution, although they are not independent, since getting X restricts Y and vice versa. What I want to know is whether I am on the right track with my example. I'm not entirely sure what you are saying about independence: are you saying that we cannot choose X and Y in the sample space such that they are independent without knowing more? I am at a bit of a loss. Thanks.

Although your description of generating X was not as clearly presented as it should be, I presume you have in mind something like this: if 0 ≤ ω < 2/3, set X = 0, and if 2/3 ≤ ω ≤ 1, set X = 1 (or some equivalent scheme). OK: so if you do it as above and you generate X = 0, what does that tell you about the possible values of ω and their new probabilities? How can you still ensure that under the new ω-distribution you get P{Y = 0|X = 0} = 2/3 and P{Y = 1|X = 0} = 1/3? Similarly, if you generate X = 1, what does that say about ω? How can you now generate Y so as to get P{Y = 0|X = 1} = 2/3 and P{Y = 1|X = 1} = 1/3?
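For what it's worth, here is one standard way to build an independent pair on Ω = [0, 1] (a sketch of my own, not necessarily the construction RGV has in mind; the thresholds also differ from his scheme above, with X = 1 on the first third rather than the last): let X record which third ω falls in, and let Y record the position of ω *within* that third, rescaled back to [0, 1].

```python
import random

def x_of(omega: float) -> int:
    # X = 1 exactly when omega lands in the first third of [0, 1].
    return 1 if omega < 1/3 else 0

def y_of(omega: float) -> int:
    # (3 * omega) % 1 is uniform on [0, 1] and carries no information
    # about *which* third omega fell in, so Y is independent of X.
    within = (3 * omega) % 1.0
    return 1 if within < 1/3 else 0

n = 300_000
pairs = [(x_of(w), y_of(w)) for w in (random.random() for _ in range(n))]
p_x1 = sum(x for x, _ in pairs) / n
p_y1 = sum(y for _, y in pairs) / n
p_both = sum(x * y for x, y in pairs) / n
print(p_both, p_x1 * p_y1)  # both ≈ 1/9: consistent with independence
```

A dependent pair with the same marginals is even easier: just take Y(ω) = X(ω).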

RGV
 
  • #10
All I'm saying is that P(A and B) = P(A)P(B) follows from independence, but having it alone doesn't imply independence (it only implies that A and B are uncorrelated random variables).

The probability needs to be put into context with something else if you want to justify its independence.

I think for your purposes, though, using this restriction on A and B for independence and then violating it for dependence is good enough; but keep in mind that this identity for independence is not a two-way thing: independence creates the identity, but it doesn't run backwards (i.e., if you just have the property that P(A and B) = P(A)P(B) for some distributions A and B, with a mathematical description and nothing else for the PDF, then you can't say they are independent).
 
  • #11
chiro said:
All I'm saying is that P(A and B) = P(A)P(B) follows from independence, but having it alone doesn't imply independence (it only implies that A and B are uncorrelated random variables).

The probability needs to be put into context with something else if you want to justify its independence.

I think for your purposes, though, using this restriction on A and B for independence and then violating it for dependence is good enough; but keep in mind that this identity for independence is not a two-way thing: independence creates the identity, but it doesn't run backwards (i.e., if you just have the property that P(A and B) = P(A)P(B) for some distributions A and B, with a mathematical description and nothing else for the PDF, then you can't say they are independent).

Most of what you said above is incorrect: the _definition_ (or one of the definitions) of two events A and B being independent is that P(A & B) = P(A)*P(B). Don't take my word for it; look it up! In the context of random variables X and Y, the definition of their independence is that the events A = {X in S1} and B = {Y in S2} are independent for any S1 and S2; in particular, P{X ≤ x and Y ≤ y} = P{X ≤ x}*P{Y ≤ y}, etc.

RGV
 
  • #12
Yeah, you're right: I was thinking about correlation (i.e. uncorrelated random variables) as opposed to independence.

The easiest way to see this intuitively is that P(A|B) = P(A) says that knowing B tells you nothing about A, which ends up equating to P(A and B) = P(A)P(B) (and then inductively for more variables).
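Spelled out as a one-line derivation (standard, assuming P(B) > 0):

$$P(A \cap B) = P(A \mid B)\,P(B) = P(A)\,P(B) \quad \text{whenever } P(A \mid B) = P(A).$$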
 

1. What is the Bernoulli Distribution?

The Bernoulli Distribution is a discrete probability distribution that models the outcome of a single binary event, such as flipping a coin or a yes/no experiment.

2. What are the characteristics of a Bernoulli random variable?

A Bernoulli random variable can only take on two possible outcomes, usually represented as 0 or 1. It has a fixed probability of success, denoted by p, and a fixed probability of failure, denoted by 1-p.

3. What is the formula for calculating the mean of a Bernoulli random variable?

The mean of a Bernoulli random variable is calculated as p, or the probability of success. This can also be interpreted as the expected value of the random variable.
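Worked out from the definition of expectation (the variance formula is the standard companion fact, added here for completeness):

$$E[X] = 0\cdot(1-p) + 1\cdot p = p, \qquad \operatorname{Var}(X) = E[X^2] - (E[X])^2 = p - p^2 = p(1-p).$$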

4. How is the Bernoulli Distribution related to the Binomial Distribution?

The Bernoulli Distribution is the special case of the Binomial Distribution with n = 1. More generally, the Binomial Distribution counts the number of successes in a fixed number n of independent trials, each with the same probability of success p, so it can be thought of as the sum of n Bernoulli trials.
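A minimal sketch of that relationship (the function names here are illustrative, not from any particular library):

```python
import random

def bernoulli(p: float) -> int:
    """One Bernoulli(p) trial: 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

def binomial(n: int, p: float) -> int:
    """A Binomial(n, p) sample as the sum of n independent Bernoulli trials."""
    return sum(bernoulli(p) for _ in range(n))

print(binomial(10, 1/3))  # a count of successes in 10 trials, between 0 and 10
```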

5. How is the Bernoulli Distribution used in real life?

The Bernoulli Distribution is used in various fields, including statistics, economics, and engineering, to model binary events or outcomes. It can be applied in real life scenarios such as predicting the success of a marketing campaign, analyzing the results of a medical treatment, or determining the probability of a stock market trend.
