Proving Markov's Inequality (Probability)

In summary, the conversation discusses how to prove the inequality ##P(g(X) \geq c) \leq \frac{E((g(X))^r)}{c^r}##, given that ##g(x) \geq 0##, ##c > 0##, and ##r > 0##. The original attempt follows a similar proof found online, but the inequalities found there involve ##X## rather than ##g(X)## and have no exponent ##r##. The suggested solution notes that if ##g(X)## has a pdf ##f##, then ##P(g(X) \geq c)## can be written as an integral over ##y \geq c##, where ##y^r \geq c^r## (equivalently ##1 \leq y^r/c^r##) gives the required bound. For the case in which ##g(X)## does not have a pdf, the same argument goes through with a Stieltjes integral.
  • #1
TelusPig

Homework Statement



If [itex]g(x)\ge 0[/itex], then for any constants ##c>0, r>0##:

[itex]P(g(X)\ge c)\le \frac{E((g(X))^r)}{c^r}[/itex]

Homework Equations



I know that [itex]E(g(X))=\int_{-\infty}^\infty g(x)f(x)\,dx[/itex], where ##f(x)## is the pdf of ##X##; since ##g(x)\ge 0##, this expectation is non-negative.

The Attempt at a Solution



I tried following a similar proof to what I found here (on page 2: http://www.stat.cmu.edu/~larry/=stat705/Lecture2.pdf).

I'm really stuck on this. I've looked online for a proof, but the inequalities I find involve ##X## rather than ##g(X)##, and there isn't any exponent ##r##. I tried something similar by replacing ##X## with ##g(X)##, but it doesn't quite work out: I still need the exponent ##r##, and I don't see how the last step would let me write something like [itex]c^r P(g(X)\ge c)[/itex].
 
  • #2
Suppose [itex]g(X)[/itex] has a pdf. Let's call the pdf [itex]f[/itex]. Then
$$P(g(X) \geq c) = \int_c^\infty f(y)\, dy$$
In this integral we have [itex]y \geq c[/itex]. If [itex]r > 0[/itex] and [itex]c > 0[/itex], this implies [itex]y^r \geq c^r[/itex], which in turn is equivalent to [itex]1 \leq y^r/c^r[/itex]. What's the natural next step?

For the more general case in which [itex]g(X)[/itex] does not necessarily admit a pdf, the proof is similar, except you would use a Stieltjes integral.
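
One way to carry that hint through (a sketch, assuming ##g(X)## has pdf ##f## and using only the inequality ##1 \leq y^r/c^r## on the region of integration):
$$P(g(X)\ge c)=\int_c^\infty f(y)\,dy \;\le\; \int_c^\infty \frac{y^r}{c^r}\,f(y)\,dy \;\le\; \frac{1}{c^r}\int_0^\infty y^r f(y)\,dy \;=\; \frac{E\!\left[(g(X))^r\right]}{c^r},$$
where the last integral runs over ##[0,\infty)## because ##g(x)\ge 0## forces ##f## to vanish on the negative axis.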
 

FAQ: Proving Markov's Inequality (Probability)

1. What is Markov's Inequality in probability?

Markov's Inequality is a mathematical tool that allows us to estimate the probability of an event occurring based on the expected value of a random variable. It states that for any non-negative random variable ##X## and any constant ##c > 0##, the probability that ##X## is greater than or equal to ##c## is at most the expected value of ##X## divided by ##c##, that is, ##P(X \ge c) \le E(X)/c##.
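
As a quick numerical illustration (a hypothetical sketch, not part of the thread; the distribution and threshold below are arbitrary choices), the bound can be checked by simulation:

```python
import numpy as np

# Hypothetical sanity check of Markov's inequality by simulation.
# X ~ Exponential(mean = 2) and c = 5 are arbitrary choices.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)

c = 5.0
empirical = np.mean(x >= c)     # Monte Carlo estimate of P(X >= c)
markov_bound = x.mean() / c     # sample version of E(X) / c

print(f"P(X >= {c}) ~ {empirical:.4f} <= Markov bound {markov_bound:.4f}")
```

The empirical tail probability lands well below the sample bound, as the inequality guarantees.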

2. Why is Markov's Inequality important in probability?

Markov's Inequality is important because it provides a simple and general bound on tail probabilities using only the expected value. It is often used in probability and statistics as a building block for other important theorems and inequalities, such as Chebyshev's inequality, sketched below.
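
For instance, applying Markov's inequality to the non-negative random variable ##(X-\mu)^2##, where ##\mu = E(X)##, immediately gives Chebyshev's inequality:
$$P(|X-\mu|\ge k) = P\!\left((X-\mu)^2 \ge k^2\right) \le \frac{E\!\left[(X-\mu)^2\right]}{k^2} = \frac{\operatorname{Var}(X)}{k^2}, \qquad k > 0.$$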

3. How do you prove Markov's Inequality in probability?

The standard proof uses basic properties of expectation together with the definition of expected value: for a non-negative random variable ##X## and ##c > 0##, compare ##X## with ##c## times the indicator of the event ##\{X \ge c\}## and take expectations. Chebyshev's inequality is then a consequence of Markov's inequality (obtained by applying it to ##(X-\mu)^2##), rather than the other way around.
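
Written out, the indicator-function argument is a one-line sketch:
$$c\,\mathbf{1}\{X\ge c\}\le X \quad\Longrightarrow\quad c\,P(X\ge c)=E\!\left[c\,\mathbf{1}\{X\ge c\}\right]\le E[X].$$
Applying the same argument to the non-negative variable ##(g(X))^r## with threshold ##c^r## gives the version in the homework statement above, since ##\{g(X)\ge c\}=\{(g(X))^r\ge c^r\}## when ##c>0## and ##r>0##.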

4. Can Markov's Inequality be applied to any type of random variable?

Yes, Markov's Inequality can be applied to any non-negative random variable, regardless of its distribution. This makes it a versatile tool in probability and statistics, as it can be used in a wide range of applications.

5. What are the limitations of Markov's Inequality?

Markov's Inequality generally provides a loose bound: the estimated probability can be far larger than the actual probability, because the bound uses only the expected value (or a single moment) and ignores the rest of the shape of the distribution. Sharper bounds are often available when more about the distribution is known.
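
As a hypothetical illustration of how loose the bound can be, and of how the exponent ##r## from the thread above can tighten it, consider ##X \sim \text{Exponential}(1)## and ##c = 10## (arbitrary choices); the exact tail is available in closed form, so the comparison needs no simulation:

```python
import math

# Hypothetical illustration: X ~ Exponential(rate = 1), threshold c = 10.
# Exact tail: P(X >= c) = exp(-c).  Moment bound: E(X^r) / c^r = r! / c^r
# (for Exponential(1), E(X^r) equals r! when r is a positive integer).
c = 10.0
print(f"exact P(X >= {c}) = {math.exp(-c):.2e}")

for r in (1, 2, 5, 10):
    bound = math.factorial(r) / c**r
    print(f"r = {r:2d}: bound E(X^r)/c^r = {bound:.2e}")
```

Here the plain Markov bound (r = 1) is 0.1, while the exact probability is about 4.5e-5; taking higher moments shrinks the gap, but the bound remains conservative.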
