
Bernoulli Distribution / Random Variables

  Sep 8, 2012 #1
    1. The problem statement, all variables and given/known data

    Take Ω = [0, 1] and P the uniform probability.
    (a) Give an example of two random variables X and Y defined on Ω that
    both have the same Bernoulli distribution with parameter θ = 1/3.

    (b) Give an example of such random variables that are independent, and an
    example of such random variables that are not independent.

    2. Relevant equations
    For a Bernoulli distribution:

    Pr(X = 0) = 1 − θ
    Pr(X = 1) = θ
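    For concreteness, here is a minimal Python sketch of this PMF with θ = 1/3 (just an illustration; the names are my own):

        # Minimal sketch of the Bernoulli(theta) PMF with theta = 1/3.
        theta = 1 / 3
        pmf = {0: 1 - theta, 1: theta}  # Pr(X=0) = 2/3, Pr(X=1) = 1/3
        assert abs(sum(pmf.values()) - 1.0) < 1e-12  # probabilities sum to 1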




    3. The attempt at a solution

    I have that Pr(X=0) = Pr(Y=0) = 2/3 and
    Pr(X=1) = Pr(Y=1) = 1/3.
    What I am not sure about is the part where I am asked for examples of two random variables with this distribution. Can I simply say, as an example, that X represents the chance of rolling a 1 on a three-sided die? Then what would I choose for Y so that X and Y are independent for the second part?

    Thanks for your help.
     
  Sep 8, 2012 #2

    chiro

    Science Advisor

    Hey AleksIlia94 and welcome to the forums.

    Typically, if we say that two variables have the same distribution, we mean that their PDFs are equal. So the PDF of X is the same as that of Y (which is what you have defined above with theta and 1 - theta), but X and Y are still separate variables (X need not equal Y even though they have the same PDF).

    As a hint toward a physical situation: remember that one Bernoulli trial represents an outcome of either true or false. So if you have N of these individual processes, what can you say?

    In terms of independence: two random variables are dependent if one is a function of the other; so if Y = X^2, where X and Y are random variables, then they are dependent.

    Formally, independence gives P(A and B) = P(A)P(B); seeing that identity alone suggests two random variables may be independent, but they don't have to be.

    If P(A and B) != P(A)P(B) (think of P(A) as the PDF of A and P(B) as the PDF of B), then we know for sure that they are not independent; but if the two sides are equal, we can't say for sure that they are.

    Basically, independence means that the outcome of a trial for X does not change the result in Y, no matter what you do. So if X and Y have fixed distributions, then one way to get non-independence between the two is to have X = f(Y) or Y = g(X) for some f or g; if they are independent, then no such f or g exists.

    However, if you are allowed to choose any two random variables that are not independent, then you can just create any probability function where P(A and B) != P(A)P(B) for some a in A and b in B, and that's enough (or just define Y = f(X) or X = g(Y) as above).
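    As a concrete illustration, here is a minimal Python sketch of the Y = f(X) case (my own example, with f taken to be the identity):

        import random

        # X ~ Bernoulli(1/3) and Y = f(X) with f the identity: both have the
        # same marginal distribution, but P(X=1 and Y=1) = 1/3 != 1/9 =
        # P(X=1)P(Y=1), so the product rule fails and they are dependent.
        n = 100_000
        xs = [1 if random.random() < 1 / 3 else 0 for _ in range(n)]
        ys = xs  # Y = f(X), f = identity
        p_joint = sum(x * y for x, y in zip(xs, ys)) / n
        print(p_joint)  # close to 1/3, not 1/9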
     
  Sep 8, 2012 #3

    Ray Vickson

    Science Advisor
    Homework Helper

    You have not at all solved the problem. You are supposed to say how you would start with a random variable uniformly distributed between 0 and 1 (that is, one that takes values throughout [0, 1]) and somehow turn it into a Bernoulli random variable that takes only the two values 0 and 1, with the specified probabilities. Note: this is what we do when we perform Monte Carlo simulation on a computer: we start with a uniform distribution, then do some manipulations to get a non-uniform distribution.
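    For instance, here is a minimal Python sketch of that uniform-to-Bernoulli step (the threshold convention matches post #9 below; the function name is made up):

        import random

        def bernoulli_from_uniform(omega):
            # Map omega ~ Uniform[0, 1] to Bernoulli(1/3) by thresholding.
            return 0 if omega < 2 / 3 else 1  # P(X=0) = 2/3, P(X=1) = 1/3

        # Quick Monte Carlo check of the marginal distribution.
        samples = [bernoulli_from_uniform(random.random()) for _ in range(100_000)]
        print(sum(samples) / len(samples))  # close to 1/3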

    After you have figured out how to get a single Bernoulli sample X, you need to figure out how to get a second Bernoulli sample Y that is not independent of X, and another Y that is independent of X. Note: this means that given a *single* sample point ω in [0, 1], you need to generate both X and Y from it; that is, you need to describe X(ω) and Y(ω), both using the same ω.
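    For the dependent case, one possible sketch (my own choice of construction, not the only one): build Y from the same ω so that it has the right marginal but can never equal 1 at the same time as X:

        def X(omega):
            return 0 if omega < 2 / 3 else 1  # Bernoulli(1/3), as above

        def Y_dependent(omega):
            return 1 if omega < 1 / 3 else 0  # also Bernoulli(1/3)

        # P(X=1 and Y=1) = P(omega >= 2/3 and omega < 1/3) = 0 != 1/9,
        # so this X and Y are certainly not independent.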

    RGV
     
    Last edited: Sep 8, 2012
  Sep 10, 2012 #4
    So you're saying that the random variables X and Y I choose must correspond to events inside the sample space [0, 1]? I.e., X can represent choosing a number in the interval [0, 1/3] and Y can represent choosing a number in the interval [1/2, 5/6], since the probabilities of both of these events fit the given distribution? That makes sense to me. But for the part where I am asked to find independent X and Y with this distribution on this sample space, I am struggling to find X and Y such that Pr(X = x) Pr(Y = y) = Pr(X = x and Y = y).
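    Checking my example numerically with a quick sketch (overlap is just a helper I made up):

        def overlap(a1, b1, a2, b2):
            # Length of the intersection of [a1, b1] and [a2, b2].
            return max(0.0, min(b1, b2) - max(a1, a2))

        p_x1 = 1 / 3          # P(X = 1): length of [0, 1/3]
        p_y1 = 5 / 6 - 1 / 2  # P(Y = 1): length of [1/2, 5/6]
        p_both = overlap(0, 1 / 3, 1 / 2, 5 / 6)  # intervals are disjoint
        print(p_x1, p_y1, p_both)  # 1/3, 1/3, 0.0 -> 0 != 1/9, not independent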
     
  Sep 10, 2012 #5

    chiro

    Science Advisor

    What other constraints do you need between X and Y? The first thing you should think about is: given that, what is the simplest way to get independent variables?
     
  Sep 10, 2012 #6
    There are no other restrictions on X and Y, except that they must have the same distribution and that in the second part they must be independent. From what I know, the simplest way to have independent variables is to choose them so that P(a)P(b) = P(a and b). I'm not really sure what you are getting at, or whether there is a simpler way?
     
  Sep 10, 2012 #7

    chiro

    Science Advisor

    That identity doesn't imply independence, but it is implied by independence.

    Basically, something is independent of something else if there is no direct function linking them: if Y = f(X) or X = f(Y), then X is not independent of Y.

    Under uncertainty this is a difficult thing, but if you can argue well enough that getting X does not affect getting Y in terms of the process, then that is enough.

    Think about how you would describe it process-wise: you have already figured out the PDF, but again, having a PDF with these properties only guarantees that the variables are uncorrelated, not that they are independent: independence comes from knowing what the variables actually mean in the context of some process.
     
  Sep 10, 2012 #8
    I don't see how that is relevant, though; the question is just asking me for an example of two random variables that fit this distribution. My example of X and Y in the sample space [0, 1] in the previous post fits this distribution, although they are not independent, since the two random variables affect each other: getting X restricts Y and vice versa. What I want to know is whether I am on the right track with my example. I'm not entirely sure what you are saying here about independence; are you saying that we cannot choose an X and Y in the sample space such that they are independent without knowing more? I am at a bit of a loss. Thanks
     
  Sep 10, 2012 #9

    Ray Vickson

    Science Advisor
    Homework Helper

    Although your description of generating X was not as clearly presented as it should be, I presume you have in mind something like this: if 0 ≤ ω < 2/3, set X = 0, and if 2/3 ≤ ω ≤ 1, set X = 1 (or some equivalent scheme). OK, so if you do it as above and you generate X = 0, what does that tell you about the possible values of ω and their new probabilities? How can you ensure that under this new ω-distribution you still get P{Y = 0 | X = 0} = 2/3 and P{Y = 1 | X = 0} = 1/3? Similarly, if you generate X = 1, what does that say about ω? How can you now generate Y so as to get P{Y = 0 | X = 1} = 2/3 and P{Y = 1 | X = 1} = 1/3?
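    One construction consistent with these questions (a sketch under my own assumptions, not necessarily the intended answer): split each X-region in the same 2:1 ratio again:

        def X(omega):
            return 0 if omega < 2 / 3 else 1

        def Y(omega):
            if omega < 2 / 3:                     # region where X = 0
                return 0 if omega < 4 / 9 else 1  # P(Y=1|X=0) = (2/9)/(2/3) = 1/3
            else:                                 # region where X = 1
                return 0 if omega < 8 / 9 else 1  # P(Y=1|X=1) = (1/9)/(1/3) = 1/3

    Then P(X=1 and Y=1) = 1/9 = P(X=1)P(Y=1), and similarly for the other three cells, so this X and Y are independent.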

    RGV
     
  Sep 10, 2012 #10

    chiro

    Science Advisor

    All I'm saying is that P(A and B) = P(A)P(B) follows from independence, but having that identity alone doesn't imply independence (it only implies that A and B are uncorrelated random variables).

    The probability needs to be put into context with something else if you want to justify its independence.

    I think for your purposes, though, using this restriction on A and B for independence, and violating it for dependence, is good enough. But keep in mind that this identity is not a two-way thing: the independence assumption produces the identity, but not the other way around (i.e., if you just have the property that P(A and B) = P(A)P(B) for some distributions A and B, with a mathematical description and nothing else for the PDF, then you can't say that they are independent).
     
  Sep 11, 2012 #11

    Ray Vickson

    Science Advisor
    Homework Helper

    Most of what you said above is incorrect: the _definition_ (or one of the definitions) of two events A and B being independent is that P(A & B) = P(A)*P(B). Don't take my word for it; look it up! In the context of random variables X and Y, the definition of their independence is that the events A = {X in S1} and B = {Y in S2} are independent for any S1 and S2; in particular, P{X ≤ x and Y ≤ y} = P{X ≤ x}*P{Y ≤ y}, etc.
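    A small sketch that checks this definition on a finite joint table (the helper name is my own):

        def is_independent(joint, tol=1e-12):
            # joint: dict mapping (x, y) -> P(X = x and Y = y)
            px = {x: sum(p for (a, _), p in joint.items() if a == x) for (x, _) in joint}
            py = {y: sum(p for (_, b), p in joint.items() if b == y) for (_, y) in joint}
            return all(abs(p - px[x] * py[y]) < tol for (x, y), p in joint.items())

        # Two Bernoulli(1/3) variables: an independent coupling vs. Y = X.
        indep = {(0, 0): 4/9, (0, 1): 2/9, (1, 0): 2/9, (1, 1): 1/9}
        dep = {(0, 0): 2/3, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1/3}
        print(is_independent(indep), is_independent(dep))  # True False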

    RGV
     
  Sep 11, 2012 #12

    chiro

    Science Advisor

    Yeah, you're right: I was thinking about correlation (i.e., uncorrelated random variables) as opposed to independence.

    The easiest way to see this intuitively is that P(A|B) = P(A) says that A does not depend on B in any way, which ends up being equivalent to P(A and B) = P(A)P(B) (and then inductively for more variables).
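    (Spelling that out: for P(B) > 0, P(A|B) = P(A and B)/P(B), so P(A|B) = P(A) holds exactly when P(A and B) = P(A)P(B).)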
     