Finding E(X) from distribution function

In summary, the theorem states: let F(x) be the distribution function of X. If X is any r.v. (discrete, continuous, or mixed) defined on the interval [a,∞) (or some subset of it), then E(X) = a + ∫_a^∞ [1 − F(x)] dx. The formula holds for any real number a, including negative values, applies even to singular distributions, and is useful for computing expected values without needing the density function.
  • #1
kingwinner
Theorem: Let F(x) be the distribution function of X.
If X is any r.v. (discrete, continuous, or mixed) defined on the interval [a,∞) (or some subset of it), then
[tex]E(X) = \int_a^\infty [1 - F(x)]\,dx + a[/tex]

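As a quick numerical sanity check of the theorem (a hypothetical sketch, not part of the thread), take X ~ Exponential(1) on [0,∞), where F(x) = 1 − e^{−x} and the true mean is E(X) = 1; the integration cutoff and step count below are arbitrary choices:

```python
import math

def tail_integral(F, a, upper=50.0, n=200_000):
    """Midpoint-rule approximation of the integral of 1 - F(x) from a to upper."""
    h = (upper - a) / n
    return sum((1.0 - F(a + (i + 0.5) * h)) for i in range(n)) * h

# CDF of the Exponential(1) distribution, defined on [0, infinity)
F = lambda x: 1.0 - math.exp(-x)

a = 0.0
estimate = a + tail_integral(F, a)   # should approximate E(X) = 1
print(estimate)
```

The truncation at x = 50 is harmless here because 1 − F(x) = e^{−x} is negligible beyond that point.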
1) Is this formula true for any real number a? In particular, is it true for a<0?

2) When is this formula actually useful computationally? Why not just get the density function and then integrate to find E(X)?

Thanks for clarifying!
 
  • #2
1) If there were a restriction on a, the statement of the theorem would have said so. If the statement is right for positive a, then it's surely right for negative a as well.

2) Why would you go through all the trouble of looking for the density*, multiplying by x, and integrating, when you could just use this formula?


*: The hypotheses cover the situation where the density cannot be written as a function
 
  • #3
The formula for general RV X is [tex]EX = -\int_{-\infty}^{0} F(x) dx + \int_{0}^{\infty} (1 - F(x)) dx[/tex]. This formula works in a much more general setting than you might expect. Some distributions don't have densities (singular distributions), for example http://en.wikipedia.org/wiki/Cantor_distribution but the formula still applies.
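To illustrate the two-sided formula above with a hypothetical check (not from the thread), take X uniform on [−1, 1], whose CDF is F(x) = (x + 1)/2 on that interval and whose mean is 0; both tail integrals vanish outside [−1, 1], and the midpoint rule is essentially exact here because the integrands are linear:

```python
def midpoint(f, lo, hi, n=10_000):
    """Midpoint-rule approximation of the integral of f from lo to hi."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def F(x):
    """CDF of the Uniform(-1, 1) distribution."""
    if x < -1.0:
        return 0.0
    if x > 1.0:
        return 1.0
    return (x + 1.0) / 2.0

# EX = -integral of F over (-inf, 0] + integral of (1 - F) over [0, inf)
EX = -midpoint(F, -1.0, 0.0) + midpoint(lambda x: 1.0 - F(x), 0.0, 1.0)
print(EX)   # should be very close to 0
```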
 
  • #4
Mandark said:
The formula for general RV X is [tex]EX = -\int_{-\infty}^{0} F(x) dx + \int_{0}^{\infty} (1 - F(x)) dx[/tex]. This formula works in a much more general setting than you might expect. Some distributions don't have densities (singular distributions), for example http://en.wikipedia.org/wiki/Cantor_distribution but the formula still applies.

I've seen this general formula. But does it imply that the "theorem" above is true for a<0 (e.g. a=-2, or a=-2.4) as well? I've done some manipulations and I think the theorem above is true for ANY a, but I would like someone to confirm this.

Thanks!
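One way to convince yourself numerically for a negative a (a hypothetical check, not posted in the thread): take a = −2 and X = Y − 2 with Y ~ Exponential(1), so F(x) = 1 − e^{−(x+2)} for x ≥ −2 and the true mean is E(X) = −1:

```python
import math

def tail_integral(F, a, upper=50.0, n=200_000):
    """Midpoint-rule approximation of the integral of 1 - F(x) from a to upper."""
    h = (upper - a) / n
    return sum((1.0 - F(a + (i + 0.5) * h)) for i in range(n)) * h

def F(x):
    """CDF of Exponential(1) shifted to start at a = -2."""
    return 1.0 - math.exp(-(x + 2.0)) if x >= -2.0 else 0.0

a = -2.0
estimate = tail_integral(F, a) + a   # should approximate E(X) = -1
print(estimate)
```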
 
  • #5
Try to prove it by manipulating the general formula I posted; it's not hard.
 

What is the formula for finding E(X) from a distribution function?

For a discrete random variable with probability mass function f(x), the expected value is E(X) = ∑ x · f(x), where the sum runs over the possible outcomes x and f(x) is the probability of each outcome. Directly from the distribution function F(x), it can also be computed as E(X) = ∫_0^∞ [1 − F(x)] dx − ∫_{−∞}^0 F(x) dx.
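For example, applying the discrete formula to a fair six-sided die (an illustrative choice):

```python
# Probability mass function of a fair six-sided die
die = {x: 1.0 / 6.0 for x in range(1, 7)}

EX = sum(x * p for x, p in die.items())
print(EX)   # ~ 3.5, the familiar mean of a die roll
```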

How do you interpret the expected value of a distribution function?

The expected value of a distribution function represents the average value of all possible outcomes, weighted by their respective probabilities. It can also be interpreted as the long-term average of a random variable.

Can the expected value of a distribution function be negative?

Yes, the expected value of a distribution function can be negative if the potential outcomes have a combination of positive and negative values, and the probabilities are such that the negative outcomes outweigh the positive ones.
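A tiny worked example (values chosen for illustration): with outcomes −3 and 2 occurring with probabilities 0.6 and 0.4, E(X) = (−3)(0.6) + (2)(0.4) = −1:

```python
# pmf where the negative outcome dominates
pmf = {-3: 0.6, 2: 0.4}

EX = sum(x * p for x, p in pmf.items())
print(EX)   # ~ -1.0: the negative outcome outweighs the positive one
```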

What is the relationship between the expected value and variance of a distribution function?

The expected value is a measure of central tendency: it represents the average value of the distribution. The variance, by contrast, is a measure of dispersion: it represents the spread of the distribution around the expected value. A higher variance indicates a wider spread of values, while a lower variance indicates a more concentrated distribution.
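The distinction can be seen with two hypothetical distributions that share the same mean but differ in spread:

```python
def mean_var(pmf):
    """Return (mean, variance) of a discrete pmf given as {outcome: probability}."""
    mu = sum(x * p for x, p in pmf.items())
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())
    return mu, var

die = {x: 1.0 / 6.0 for x in range(1, 7)}   # fair six-sided die
tight = {3: 0.5, 4: 0.5}                    # concentrated around 3.5

mu_die, var_die = mean_var(die)        # mean 3.5, variance 35/12 ~ 2.92
mu_tight, var_tight = mean_var(tight)  # mean 3.5, variance 0.25
print(mu_die, var_die, mu_tight, var_tight)
```

Both distributions have expected value 3.5, but the die's outcomes are far more spread out, which the variance captures.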

How can finding E(X) from a distribution function be useful in scientific research?

Finding E(X) from a distribution function can be useful in scientific research as it provides a measure of central tendency for a random variable. This can help researchers understand the average value of a phenomenon and make predictions based on the expected outcome. It can also be used in statistical analysis to compare different groups or treatments and determine if there is a significant difference in their expected values.
