Markov's Inequality for Geometric Distribution.

In summary: I also understand the statement that it is better than nothing, but is it really? The formula can give an upper bound much larger than P(X≥a). I suppose it is better than nothing because it is at least an upper limit. So I understand the point of Markov's inequality, but the language still confuses me: I follow the actual mathematics, but the meaning of the words in the "real world" is less clear. Chebyshev's inequality I think I understand better: it measures how much the values can deviate from the mean, so the difference between the two is limited by the variance.
  • #1
whitejac

Homework Statement


Let X∼Geometric(p). Using Markov's inequality find an upper bound for P(X≥a), for a positive integer a. Compare the upper bound with the real value of P(X≥a).

Then, using Chebyshev's inequality, find an upper bound for P(|X - EX| ≥ b).

Homework Equations


P(X ≥ a) ≤ E[X] / a
P(|X − E[X]| ≥ b) ≤ Var(X) / b²

The Attempt at a Solution



Part 1:

So the first inequality is easy enough...
Markov's inequality:
E[X] = 1/p for the geometric distribution ⇒ P(X ≥ a) ≤ 1/(ap).

Real value:
P(X ≥ a) = 1 − P(X ≤ a) = P(X = 0) + P(X = 1) + ... + P(X = a−1) + P(X = a) = p + pq + ... + pq^{a−1}
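The comparison can be sanity-checked numerically. A minimal sketch (not part of the original post; the function names are mine), assuming the convention P(X = k) = p(1 − p)^{k−1} for k = 1, 2, ..., so that E[X] = 1/p and P(X ≥ a) = (1 − p)^{a−1}:

```python
# Compare Markov's bound with the exact tail probability of a
# geometric distribution supported on 1, 2, 3, ...
# Convention assumed here: P(X = k) = p * (1 - p)**(k - 1),
# so E[X] = 1/p and P(X >= a) = (1 - p)**(a - 1).

def markov_bound(p, a):
    """Markov's inequality: P(X >= a) <= E[X] / a = 1 / (a * p)."""
    return 1.0 / (a * p)

def exact_tail(p, a):
    """Exact tail probability: P(X >= a) = (1 - p)**(a - 1)."""
    return (1.0 - p) ** (a - 1)

p = 0.5
for a in (1, 2, 3, 5, 10):
    print(a, exact_tail(p, a), markov_bound(p, a))
```

For every a the exact tail sits below the bound, and the gap widens as a grows, since the tail decays geometrically while the bound decays only like 1/a.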
What I'm having trouble with is understanding the definition of Markov's inequality and how to explain it mathematically compared to the real value of P(X ≥ a).

A lot of people simply say that the real value is less than Markov's bound and call that a comparison. This doesn't make much sense to me in the general form, because all I'd be saying is:

1 − P(X ≤ a) < 1/(ap)

Part 2:
By definition, the upper bound is Var(X) / b² = (1 − p) / (b²p²).
I have a similar issue with Chebyshev's inequality... I can "do" it, but I don't really know what I'm doing. My book simply states that it bounds how far X can deviate from E[X] in terms of the variance, which is fine and intuitive, but I don't really think it should be this simple...

So, are my interpretations correct? And are they explained properly...
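The Chebyshev side of the question can be checked the same way. Again a sketch with illustrative names, assuming the support 1, 2, 3, ... with E[X] = 1/p and Var(X) = (1 − p)/p²:

```python
# Compare Chebyshev's bound with the exact deviation probability
# P(|X - E[X]| >= b) for a geometric distribution on 1, 2, 3, ...
# Assumed here: E[X] = 1/p and Var(X) = (1 - p) / p**2.

def chebyshev_bound(p, b):
    """Chebyshev's inequality: P(|X - E[X]| >= b) <= Var(X) / b**2."""
    return (1.0 - p) / (b * b * p * p)

def exact_deviation(p, b, n_terms=10_000):
    """Sum the pmf over all k with |k - 1/p| >= b (series truncated)."""
    mean = 1.0 / p
    return sum(p * (1.0 - p) ** (k - 1)
               for k in range(1, n_terms + 1)
               if abs(k - mean) >= b)

p, b = 0.5, 3.0
print(exact_deviation(p, b), chebyshev_bound(p, b))
```

With p = 1/2 and b = 3, the exact probability is P(X ≥ 5) = 1/16 = 0.0625, against a Chebyshev bound of 2/9 ≈ 0.222.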
 
  • #2
whitejac said:
So, are my interpretations correct? And are they explained properly...

Well, first off, be more careful with your inequalities. For integer values of "a", ##P(X \geq a) = P(X > a-1) = (1-p)^{a-1}## for the geometric case. So, the Markov inequality in this case is saying that ##(1-p)^{a-1} \leq \frac{1}{pa}##. This IS true, but do you really think it is "obvious"?

Not only do a lot of people say that the Markov inequality is a bound, they are speaking the 100% truth, whether it makes sense to you or not. The point is: sometimes an exact evaluation of ##P(X \geq a)## is very difficult, but calculation of ##E(X)/a## is relatively easy. In such a case the Markov result is better than nothing, and sometimes is good enough in particular applications. Ditto for Chebyshev's inequality.
 
  • #3
Ray Vickson said:
Well, first off, be more careful with your inequalities. For integer values of "a", ##P(X \geq a) = P(X > a-1) = (1-p)^{a-1}## for the geometric case. So, the Markov inequality in this case is saying that ##(1-p)^{a-1} \leq \frac{1}{pa}##. This IS true, but do you really think it is "obvious"?
I apologize, your inequality was correct. I meant to fix that in my edit. Now, I guess that this is why the geometric seems a little obvious to me...
(1 − p)^{a−1} is a fraction raised to an exponent, so it becomes exponentially smaller for large "a." Markov's bound 1/(ap), by contrast, shrinks only like 1/a. So the bound will always be much higher.

Take the case (a = 3, p = 1/2).
We already see a large difference between the two sides:
P(X ≥ 3) = (1 − p)² = 1/4

E[X] / a = (1/p) / a = 2/3, and that's only at a = 3. I've heard that because these bounds apply so generally they cannot be very tight, but knowing the results of this case I can easily see the gap being even larger elsewhere.
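The growing gap described here can be tabulated directly (a small sketch, not from the original thread):

```python
# The exact tail (1 - p)**(a - 1) shrinks geometrically, while the
# Markov bound 1/(a*p) shrinks only like 1/a, so their ratio blows up.

p = 0.5
for a in (3, 10, 30):
    exact = (1 - p) ** (a - 1)
    bound = 1 / (a * p)
    print(a, exact, bound, bound / exact)
```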
 

1. What is Markov's Inequality for Geometric Distribution?

Markov's Inequality for Geometric Distribution is a mathematical formula that provides an upper bound, P(X ≥ a) ≤ E[X]/a, on the probability that a random variable takes on a value greater than or equal to a given threshold. It is commonly used in probability and statistics to estimate the likelihood of rare events.

2. How is Markov's Inequality for Geometric Distribution derived?

Markov's Inequality for Geometric Distribution is derived from Markov's Inequality, which is a general formula for bounding the probability of a random variable being greater than or equal to a certain value. The geometric distribution is a specific type of discrete probability distribution that models the number of trials needed to achieve a success in a series of independent trials.
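The derivation for a nonnegative integer-valued X can be sketched in one line, using the thread's ##\ldots## notation: dropping the terms with ##k < a## and then replacing ##k## by ##a## only makes the sum smaller,

##E[X] = \sum_{k \geq 0} k\,P(X=k) \;\geq\; \sum_{k \geq a} k\,P(X=k) \;\geq\; a \sum_{k \geq a} P(X=k) = a\,P(X \geq a),##

which rearranges to ##P(X \geq a) \leq E[X]/a##. Specializing to the geometric case with ##E[X] = 1/p## gives the ##1/(ap)## bound used above.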

3. What is the significance of Markov's Inequality for Geometric Distribution in real-world applications?

Markov's Inequality for Geometric Distribution has many practical applications, such as in risk analysis, quality control, and financial modeling. It is often used to estimate the probability of rare events, which can be useful in decision making and risk management.

4. How is Markov's Inequality for Geometric Distribution related to other statistical concepts?

Markov's Inequality for Geometric Distribution is closely related to other statistical concepts, such as the geometric distribution itself, as well as other probability distributions and inequalities, such as Chebyshev's inequality and the Central Limit Theorem. It is also related to concepts in inferential statistics, such as hypothesis testing and confidence intervals.

5. Can Markov's Inequality for Geometric Distribution be applied to continuous random variables?

Not directly: the geometric distribution is discrete, so this particular bound does not describe a continuous random variable. However, Markov's inequality itself applies to any nonnegative random variable, continuous or discrete, and Chebyshev's inequality likewise bounds deviations from the mean in both cases.
