Expected value of two uniformly distributed random variables

In summary: ##\mathbb{E}(Z) = \int_0^1 z f_{Z}(z)\, dz##, where the density ##f_Z(z)## is found by differentiating ##F_Z(z)##.
  • #1
DottZakapa
Homework Statement
##X_1## and ##X_2## are uniformly distributed with parameters ##(0,1)##
then:
##E[\min\{X_1, X_2\}] = ##
Relevant Equations
Probability
##X_1## and ##X_2## are uniformly distributed random variables with parameters ##(0,1)##
then:
##E \left[ \min \left\{ X_1 , X_2 \right\} \right] = ##

what should I do with that min?
 
  • #2
If ##Z = \min(X_1, X_2)##, then ##Z > z## iff both ##X_1## and ##X_2## are greater than ##z##. It follows that$$P(Z > z) = P(X_1 > z)P(X_2 > z)$$You can use this to find the cumulative distribution function ##F_Z(z) = P(Z \leq z)## in terms of ##F_{X_1}(z)## and ##F_{X_2}(z)##, both of which are known since we're given that ##X_1## and ##X_2## are uniformly distributed on the interval ##[0, 1]##.
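The product identity above can be sanity-checked numerically. The sketch below (my addition, not part of the thread) draws independent Uniform(0, 1) pairs with Python's `random` module and compares the empirical ##P(Z > z)## against ##(1-z)^2## at a few values of ##z##.

```python
# Sanity check of P(Z > z) = P(X_1 > z) P(X_2 > z) for independent
# Uniform(0, 1) variables; illustrative sketch, not from the thread.
import random

random.seed(0)
n = 200_000
samples = [(random.random(), random.random()) for _ in range(n)]

for z in (0.25, 0.5, 0.75):
    # empirical probability that the minimum exceeds z
    empirical = sum(1 for x1, x2 in samples if min(x1, x2) > z) / n
    product = (1 - z) * (1 - z)  # P(X_1 > z) P(X_2 > z) = (1 - z)^2
    print(f"z={z}: empirical={empirical:.4f}, (1-z)^2={product:.4f}")
```

With 200,000 pairs the empirical and analytic values agree to about three decimal places.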
 
  • #3
It should be something like this?
##f_{x_{1}} \left(z\right) = \frac 1 {b-a}##
##f_{x_{2}} \left(z\right) = \frac 1 {b-a}##
##F_{x_{1}} \left(z\right) = \frac {z-a} {b-a}##
##F_{x_{2}} \left(z\right) = \frac {z-a} {b-a}##

but, being ##a=0## and ##b=1##

##F_{x_{1}} \left(z\right) = z ##
##F_{x_{2}} \left(z\right) = z ##
so
##F_{x_{1}} \left(z\right)*F_{x_{2}} \left(z\right)=z^2##

being by definition

##E \left[X\right] = \int_a^b \left(1-F_X\left(z\right)\right) \, dz##

so I'll have to integrate

##\int_0^1 \left(1-z^2 \right ) \, dz ## is it correct?
 
  • #4
Your ##F_{X_1}(z) = z## and ##F_{X_2}(z) = z## are correct, but the rest is not. Start with$$\begin{align*}P(Z > z) &= P(X_1 > z)P(X_2 > z) \\ \\ 1-F_Z(z) &= (1-F_{X_1}(z)) (1-F_{X_2}(z)) = 1 - F_{X_1}(z) - F_{X_2}(z) + F_{X_1}(z)F_{X_2}(z) \\ \\ F_Z(z) &= F_{X_1}(z) + F_{X_2}(z) - F_{X_1}(z)F_{X_2}(z) = 2z - z^2 \end{align*}$$and this last expression you might notice is a statement of the inclusion-exclusion principle.

Now that you have the cumulative function ##F_Z(z)##, you can find ##\mathbb{E}(Z)## either by integrating ##1 - F_Z(z)##, which is the way you suggested, or by finding ##f_Z(z) = \frac{dF_Z(z)}{dz}## and integrating ##z f_Z(z)##.
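Both routes can be checked with a simple numerical integration (an illustrative sketch, not part of the thread): a midpoint rule applied to ##1 - F_Z(z)## and to ##z f_Z(z)## for ##F_Z(z) = 2z - z^2## returns the same value.

```python
# Check that integrating 1 - F_Z(z) and integrating z * f_Z(z) over [0, 1]
# give the same expectation; sketch using a plain midpoint rule.
def F_Z(z):
    return 2 * z - z**2        # CDF of Z = min(X1, X2)

def f_Z(z):
    return 2 - 2 * z           # density, dF_Z/dz

n = 100_000
h = 1.0 / n
mids = [(k + 0.5) * h for k in range(n)]   # midpoints of the subintervals

E_via_survival = sum(1 - F_Z(z) for z in mids) * h   # integral of 1 - F_Z(z)
E_via_density = sum(z * f_Z(z) for z in mids) * h    # integral of z * f_Z(z)
print(E_via_survival, E_via_density)                 # both come out ≈ 1/3
```

Either way the integral evaluates to 1/3, so the two definitions of the expectation agree.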
 
  • #5
OK, so the thinking is:
given that ##X_1>z## and ##X_2>z##,
for each of them I consider the Survival Function ##\overline{F_{x}}##, that is
##\overline{F_{x_1}}=1-F_{x_1}\left(z\right)= 1-z ## same for the other ##\overline{F_{x_2}}=1-F_{x_2}\left(z\right)= 1-z ##
and
##\overline{F_{x_{1}}} \left(z\right)*\overline{F_{x_{2}}} \left(z\right)=\left(1-z\right)*\left(1-z\right)= \left(1-z\right)^2= z^2-2z+1##

and integrate as
##\int_0^1 \left(z^2-2z+1 \right ) \, dz ##
 
  • #6
I haven't come across the term 'survival function', but from how you use it I'll assume it means 1 minus the cumulative distribution function. Then yes,$$\overline{F_{Z}}(z) = \overline{F_{X_1}}(z) \overline{F_{X_2}}(z) = (1-z)^2$$which you can integrate directly:$$\mathbb{E}(Z) = \int_0^1 \overline{F_{Z}}(z)\, dz$$
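A quick Monte Carlo sketch (my addition, assuming independent Uniform(0, 1) draws) confirms that this integral, which evaluates to ##\int_0^1 (1-z)^2\, dz = 1/3##, matches simulation.

```python
# Monte Carlo estimate of E[min(X1, X2)] for independent Uniform(0, 1)
# variables; sketch, not from the thread.
import random

random.seed(42)
n = 500_000
total = sum(min(random.random(), random.random()) for _ in range(n))
print(total / n)   # prints a value close to 1/3
```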
 

1. What is the definition of expected value?

The expected value of a random variable is the sum of all possible outcomes multiplied by their respective probabilities (or, for a continuous variable, the integral of each value weighted by its density). In other words, it is the average value that we would expect to see if we repeated the experiment an infinite number of times.

2. How is the expected value of two uniformly distributed random variables calculated?

Each variable alone has expected value equal to the midpoint of its interval, so a Uniform(0,1) variable has expected value 1/2, and the average of the two variables also has expected value 1/2. For a function of the pair, such as the minimum, you first find the distribution of that combined quantity: for independent Uniform(0,1) variables, ##E[\min\{X_1, X_2\}] = \int_0^1 (1-z)^2 \, dz = 1/3##.

3. What is the significance of the expected value in probability theory?

The expected value is an important concept in probability theory because it helps us understand the long-term behavior of random variables. It allows us to make predictions about the average outcome of an experiment, even if we do not know the exact outcome of each trial.

4. Can the expected value of two uniformly distributed random variables be negative?

Not when the variables are uniform on ##(0,1)##: every outcome, and hence the expected value, lies in ##[0,1]##. The expected value can be negative only if the interval of the uniform distribution extends below zero, for example Uniform(-1, 0).

5. How does the expected value of two uniformly distributed random variables change as the sample size increases?

The expected value itself does not depend on sample size; it is a fixed property of the distribution. What changes with sample size is the sample average: by the law of large numbers, the average of a large number of independent trials approaches the expected value.
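This convergence can be illustrated with a short simulation (a sketch, not from the original page): the running sample mean of ##\min(X_1, X_2)## for independent Uniform(0, 1) draws settles near the true expected value 1/3 as the number of trials grows.

```python
# Law-of-large-numbers demo: the running mean of min(X1, X2) for
# independent Uniform(0, 1) draws approaches 1/3; illustrative sketch.
import random

random.seed(1)
running_sum = 0.0
for k in range(1, 200_001):
    running_sum += min(random.random(), random.random())
    if k in (100, 10_000, 200_000):
        print(k, running_sum / k)   # the mean tightens around 1/3
```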
