# Homework Help: Probability theory: Understanding some steps

1. Oct 11, 2018

### WWCY

1. The problem statement, all variables and given/known data
Hi all, I have some difficulty understanding the following problem, help is greatly appreciated!

Let $U_1, U_2, U_3$ be independent random variables, uniform on $[0,1]$. Find the probability that the roots of the quadratic $U_1 x^2 + U_2 x + U_3$ are real.

2. Relevant equations

3. The attempt at a solution

From the discriminant, the following must hold for the roots to be real
$$U_2 ^2 \geq 4U_1 U_3$$
And the corresponding probability to compute is
$$P(U_1 \leq \frac{U_2 ^2}{4U_3} )$$
I fixed $U_2 = u_2$, which gives me the following function of $u_2$,
$$P(U_1 \leq \frac{u_2 ^2}{4U_3} | U_2 = u_2)$$
which I solved for by integrating over the domain specified by inequality, the result was
$$u_2 ^2 /4 - \frac{u_2 ^2}{2} \ln (u_2 / 2)$$
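A quick numerical sanity check of this closed form, fixing an arbitrary test point $u_2 = 0.5$ and estimating the conditional probability by Monte Carlo:

```python
import math
import random

random.seed(0)

def g(u2):
    # closed form derived above for P(U1 <= u2^2/(4*U3)) at fixed u2
    return u2**2 / 4 - (u2**2 / 2) * math.log(u2 / 2)

u2 = 0.5       # arbitrary test point in (0, 1)
n = 10**6
hits = 0
for _ in range(n):
    u1 = random.random()
    u3 = 1.0 - random.random()   # uniform on (0, 1], avoids division by zero
    hits += u1 <= u2**2 / (4 * u3)
estimate = hits / n
print(estimate, g(u2))  # both ≈ 0.236
```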

Now I want the 2nd equation from the top, which gives the total probability of having $U_1 \leq \frac{U_2 ^2}{4U_3}$. My intuition was to integrate the result over $u_2 \in [0,1]$, which according to my solutions manual turns out to be correct. But doesn't this imply that
$$P(U_1 \leq \frac{U_2 ^2}{4U_3} ) = \int_0^1 P(U_1 \leq \frac{u_2 ^2}{4U_3} \mid U_2 = u_2) \ du_2$$
which is something I've not really seen before. But it leads to the answer, so my question is, why would this step be correct?
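For what it's worth, the step does check out numerically: if the algebra above is right, the antiderivative gives $\int_0^1 \big( u_2^2/4 - \tfrac{u_2^2}{2}\ln(u_2/2) \big)\, du_2 = 5/36 + \ln(2)/6$, and a midpoint-rule quadrature agrees:

```python
import math

def g(u2):
    # conditional probability P(U1 <= u2^2/(4*U3) | U2 = u2) from above
    return u2**2 / 4 - (u2**2 / 2) * math.log(u2 / 2)

n = 200_000
quad = sum(g((k + 0.5) / n) for k in range(n)) / n   # midpoint rule on [0, 1]
exact = 5 / 36 + math.log(2) / 6                     # closed-form integral
print(quad, exact)  # both ≈ 0.2544
```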

I have tried looking around the net for something relating a conditional CDF to a joint CDF, and I found this,
$$F_{X|Y}(x \mid y)\, f_Y (y) = \frac{\partial F_{X,Y}(x,y)}{\partial y}$$
could it be that
$$F_{X,Y}(x,y) = \int_{-\infty}^{y} F_{X|Y}(x \mid u)\, f_Y (u) \ du$$
$$P(X\leq x) = \lim_{y \to \infty} F_{X,Y}(x,y) = \int_{-\infty}^{\infty} F_{X|Y}(x \mid u)\, f_Y (u) \ du$$
in my case $f_Y$ would simply be $1$ on $[0,1]$, and the upper limit of my integral would also be $1$. Is this sound?

Many thanks in advance for any assistance!

2. Oct 11, 2018

### Ray Vickson

Let's go back to your formula
$$P\{ (U_1,U_2,U_3) \in A\} = \int P\{(U_1,U_2,u_3) \in A | U_3 = u_3 \} f_{U_3}(u_3) \; du_3 \hspace{4em}(1)$$ To avoid technical issues that just get in the way of understanding, let's replace the $U_i$ by discrete versions and integrals by finite summations. We then have
$$P\{ (U_1,U_2,U_3) \in A\} = \sum_{(u_1,u_2,u_3) \in A} P(U_1=u_1, U_2=u_2,U_3=u_3).$$ However,
$$P(U_1=u_1, U_2=u_2, U_3=u_3) = P\{ (U_1,U_2) = (u_1,u_2) \ \& \ U_3 = u_3 \}\\ = \frac{P\{ (U_1,U_2) = (u_1,u_2) \ \& \ U_3 = u_3\}}{P(U_3=u_3)} P(U_3 = u_3)\\ = P\{(U_1,U_2) = (u_1,u_2) | U_3=u_3 \} P(U_3 = u_3).$$
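A small sketch of the discrete version of (1), with the $U_i$ discretised to a uniform grid $\{1/m, \dots, 1\}$ (the grid size $m$ is an arbitrary choice); exact rational arithmetic makes the two sides agree exactly:

```python
from fractions import Fraction

m = 12                        # arbitrary grid size
vals = [Fraction(k, m) for k in range(1, m + 1)]
p = Fraction(1, m)            # P(U_i = u) for each grid point (independent, uniform)

def in_A(u1, u2, u3):
    # the real-roots event: u2^2 >= 4*u1*u3
    return u2 * u2 >= 4 * u1 * u3

# left-hand side: direct triple sum over the joint pmf
lhs = sum(p * p * p
          for u1 in vals for u2 in vals for u3 in vals
          if in_A(u1, u2, u3))

# right-hand side: condition on U3 = u3 as in formula (1)
rhs = sum(
    sum(p * p for u1 in vals for u2 in vals if in_A(u1, u2, u3)) * p
    for u3 in vals
)
print(lhs == rhs, float(lhs))
```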

You should try to get comfortable with formulas like (1) above; they are among the most common results/tools in probability, and using them can save hours of work.

Anyway, I agree with all of your equations, and aside from not seeing your final result, I get the same as you.

Last edited: Oct 11, 2018
3. Oct 11, 2018

### StoneTemplePython

Shouldn't there be a more satisfying approach, one that basically reads

$X:=U_2^2$
$Y:= 4U_1 U_3$

noting that $Y\in [0,4]$ and $X \in [0,1]$ but there's no harm in considering $[0,4]$ -- it is understood that $Pr\{X \gt 1\} = 0$

now get the pdf for $X$ and the cdf for $Y$

If you want the total probability associated with $X$ you integrate
$\int_0^1 f_X(x) dx = 1$
or in extended form

$\int_0^4 f_X(x) dx = 1$

(sometimes this may be written as $\int_0^4 dF_X(x)$ which has some machinery implications but should look about the same and in my view helps make the connection between discrete, continuous and mixed cases... CDFs are quite nice.)

So the total probability of $X$ is one, as we thought. But we want to partition this into the cases where $X$ is at least as big as $Y$ vs. not, so you end up with

$\text{desired probability} = \int_0^4 f_X(x)F_Y(x) dx = \int_0^4 f_X(x)\cdot Pr\{Y \leq x\} dx$
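A sketch of that integral, assuming the standard product-of-uniforms CDF $P(U_1 U_3 \leq s) = s(1 - \ln s)$ on $(0,1]$, so that $F_Y(t) = \frac{t}{4}\big(1 - \ln \frac{t}{4}\big)$ on $(0,4]$, and $f_X(x) = \frac{1}{2\sqrt{x}}$ on $(0,1]$ (zero beyond $1$, so the integrand vanishes there); substituting $x = u^2$ tames the singularity at $0$:

```python
import math

def F_Y(t):
    # CDF of Y = 4*U1*U3 on (0, 4]: P(U1*U3 <= t/4) = (t/4)*(1 - log(t/4))
    return (t / 4) * (1 - math.log(t / 4))

# desired = ∫_0^1 f_X(x) F_Y(x) dx with f_X(x) = 1/(2*sqrt(x));
# substituting x = u^2 gives ∫_0^1 F_Y(u^2) du (midpoint rule below)
n = 200_000
desired = sum(F_Y(((k + 0.5) / n) ** 2) for k in range(n)) / n
print(desired)  # ≈ 5/36 + ln(2)/6 ≈ 0.2544
```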
- - - -
I think Ray's suggestion of a discrete case, at least for $X$, is right in that it makes the result even clearer -- in such a case you have

$\sum_x p_X(x) = 1$

and for the matter at hand:

$\text{desired probability}= P(A)= \sum_x p_X(x)F_Y(x) = \sum_x p_X(x)\cdot Pr\{Y \leq x\}$
and the formula here is just a direct application of total probability -- i.e. we are bi-partitioning total probability into

$1= P(A) + P(A^c)$
$= \Big(\sum_x p_X(x)\cdot Pr\{Y \leq x\}\Big) + \Big(\sum_x p_X(x)\cdot Pr\{Y \gt x\}\Big)$
$= \sum_x \Big(p_X(x)\cdot Pr\{Y \leq x\} + p_X(x)\cdot Pr\{Y \gt x\}\Big)$
$= \sum_x \Big(p_X(x)\cdot \big(Pr\{Y \leq x\} + Pr\{Y \gt x\}\big)\Big)$
$= \sum_x \Big(p_X(x)\cdot 1\Big) = 1$

- - - -

One of the nice things about CDFs is that no matter what kind of underlying r.v., the CDF is a probability, and it has an inequality form which is exactly what your problem is looking for.

- - - -
edit: you'll see a lot about expectations of products of random variables, but in general not so much about the underlying distribution of the product, and coming up with the CDF of a product is not easy. As is often the case, the uniform r.v. case is tractable, but other cases not so much. E.g. for reading, see this:

Last edited: Oct 11, 2018
4. Oct 13, 2018

### WWCY

Many thanks for your replies. I think I'm starting to see things a bit more clearly, though I need some time to fully work through what's been said. I'll do that ASAP and get back with any problems I need help with. Cheers.

5. Oct 15, 2018

### WWCY

I'll try to rephrase what's been said to see if I actually get the point.

For @Ray Vickson 's method:

I need to find the probability that $(U_1, U_2, U_3)$ lies in some region $A$.
$$P\big( (U_1, U_2, U_3) \in A \big) = \iiint_{(u_1, u_2, u_3) \in A} f(u_1, u_2, u_3) \ du_1\, du_2\, du_3$$
which equals
$$P\big( (U_1, U_2, U_3) \in A \big) = \iiint_{(u_1, u_2, u_3) \in A} f( u_1, u_3 \mid u_2)\, f_{U_2}(u_2) \ du_1\, du_2\, du_3$$
Integrating over $u_1, u_3$ for fixed $u_2$ gives
$$\int_0^1 P\big( (U_1, u_2, U_3) \in A \mid U_2 = u_2 \big)\, f_{U_2}(u_2) \ du_2 = \int_0^1 P(U_1 \leq \frac{u_2 ^2}{4U_3} \mid U_2 = u_2)\, f_{U_2}(u_2) \ du_2$$

Is this right?

For @StoneTemplePython 's method:
$$P\big( (X,Y) \in A \big) =\int_0^4 \int_0^x f_X(x)f_Y(y) \ dy dx =\int_0^4 f_X(x)F_Y(x) dx$$
which can then be written in this case as
$$\int_0^1 f_X(x) P(Y\leq x) dx = \int_0^1 f_{U_2 ^2}( u_2 ^2 )P(U_1 \leq \frac{u_2 ^2}{4U_3}) du_2$$

But how do I show that $f_{U_2 ^2} = f_{U_2}$?

6. Oct 15, 2018

### StoneTemplePython

well, there are standard change of variable formulas related to integration that you'd follow. Note that $g(a) = a^2$ is invertible over non-negative numbers which is nice. However in your case, see this:
https://math.stackexchange.com/ques...of-uniform-distribution-have-density-function

It's getting late here but my sense is that there is a subtle and unpleasant bug lurking related to change of variables...
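The bug is that squaring changes the density: for $U$ uniform on $[0,1]$, $P(U^2 \leq x) = P(U \leq \sqrt{x}) = \sqrt{x}$, so $f_{U^2}(x) = \frac{1}{2\sqrt{x}}$, which is not $1$. A quick empirical check of the CDF of $U^2$:

```python
import random

random.seed(0)
n = 10**6
sq = [random.random() ** 2 for _ in range(n)]   # samples of U^2

# empirical CDF of U^2 at a few test points vs the claimed CDF sqrt(x)
ecdfs = {x: sum(s <= x for s in sq) / n for x in (0.09, 0.25, 0.64)}
for x, ecdf in ecdfs.items():
    print(x, ecdf, x ** 0.5)
```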
- - - -
Note that it is perhaps a matter of taste, but $X$ and $Y$ are random variables that can stand on their own two feet. If it were me, at time zero I would separately calculate the PDF for $X$ and the CDF for $Y$ and put them in the corner so you can abstract away from them. Switching back to the $U_k$'s seems confusing to me.

I suppose we could also do this using the tower property of conditional expectations, i.e. have an indicator random variable $\mathbb I_A$ which takes the value $1$ if $X \geq Y$ and zero otherwise. So

$P(A) = E\Big[\mathbb I_A\Big] = E\Big[E\big[\mathbb I_A \big \vert X \big] \Big]$

Last edited: Oct 15, 2018
7. Oct 15, 2018

### Stephen Tashi

As @StoneTemplePython suggests, maybe you have seen it before, if you are familiar with "conditional expectation": $E_{F_X} (X) = E_{F_Y} \big( E_{F_{X|Y} }(X) \big)$

What does the notation "$P(U_1 < \frac{ U_2^2}{4 U_3})$" mean? The P-notation requires that the expression in parentheses be a set. The way to interpret "$U_1 < \frac{ U_2^2}{4 U_3}$" as a set is that it is the set where a boolean (or "indicator") function of the random variables $U_1,U_2,U_3$ takes the value 1.

For a boolean random variable $X$, can we say that $P(X) = E(X)$?

This is, again, a matter of notation. For an arbitrary real valued random variable $X$, it makes sense to write things like "$P(X = 3)$", but it doesn't make sense to write "$P(X)$". In the case of a boolean random variable $X$, we interpret "$P(X)$" to mean $P(X=1)$. So the question is whether $P(X=1) = E(X)$ for a boolean random variable $X$. It would be an interesting exercise to prove that from the fundamental definitions of measure theory.

Define the function $b(u_1,u_2,u_3)$ by:
$b(u_1,u_2,u_3) = 1$ if $u_1 < \frac{u_2^2}{4u_3}$
$b(u_1, u_2,u_3) = 0$ otherwise

Define the random variable $B$ as a function of the random variables $U_1,U_2,U_3$ by $B = b(U_1,U_2,U_3)$.

Assume "$P(B)$" denotes $E(B) = E_{F_{U_1,U_2,U_3}} (B)$ where $F_{U_1,U_2,U_3}$ is the joint distribution of $(U_1,U_2,U_3)$.

You can express the expectation $E(B)$ in various ways using the concept of conditional expectation, one of which is the equation you ask about.
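As a sketch, the sample mean of $B$ estimates $E(B) = P\big(U_1 < \frac{U_2^2}{4U_3}\big)$ directly (the comparison value $5/36 + \ln(2)/6$ assumes the earlier closed-form integral is right):

```python
import math
import random

random.seed(1)

def b(u1, u2, u3):
    # the indicator function b(u1, u2, u3) defined above
    return 1 if u1 < u2**2 / (4 * u3) else 0

n = 10**6
mean_B = sum(
    b(random.random(), random.random(), 1.0 - random.random())  # u3 in (0, 1]
    for _ in range(n)
) / n
print(mean_B, 5 / 36 + math.log(2) / 6)  # sample mean of B vs P ≈ 0.2544
```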