What does it mean for X and Y to be compared in terms of their expected values?

  • Thread starter kingwinner
  • Start date
In summary, the statement "If X ≤ Y, then E(X) ≤ E(Y)" means that if X(ω) ≤ Y(ω) for every outcome ω in the underlying probability space, then the expected value of X is less than or equal to the expected value of Y. This monotonicity property is important in statistics because it lets us compare two random variables through their expected values. It can be proven from basic properties of expectation: since Y − X ≥ 0, we have E(Y − X) ≥ 0, and linearity gives E(X) ≤ E(Y). The main caveat is the hypothesis itself: the pointwise inequality X ≤ Y must hold (at least almost surely) and both expectations must exist.
  • #1
kingwinner
There is a theorem that says:
"Let X and Y be random variables. If X ≤ Y, then E(X) ≤ E(Y)."

But I don't really understand the meaning of "X ≤ Y". What does it mean?
For example, if X takes on the values 0,1,2,3, and Y takes on the values -1,2,5. Is X ≤ Y??

Any help is appreciated!
 
  • #2
The theorem assumes that X and Y are defined on the same probability space [itex]\Omega[/itex]. [itex]X\leq Y[/itex] means [itex]X(\omega)\leq Y(\omega),\quad \forall\omega\in\Omega[/itex]. Actually, it would be enough to have [itex]X(\omega)\leq Y(\omega)[/itex] for P-almost all [itex]\omega\in\Omega[/itex], where P is the probability measure on [itex]\Omega[/itex].
 
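To make the pointwise definition concrete, here is a small sketch on a made-up finite probability space. The outcomes, probabilities, and the variables X and Y are all illustrative assumptions, not taken from the thread; the point is only that X ≤ Y is a condition checked outcome by outcome, and that the expectations then compare the same way.

```python
# Hypothetical finite probability space Omega = {a, b, c} with measure P,
# and random variables X, Y as functions on Omega with X(w) <= Y(w) everywhere.
omega = ["a", "b", "c"]
prob = {"a": 0.5, "b": 0.3, "c": 0.2}   # probability measure P
X = {"a": 0, "b": 1, "c": 2}            # X as a function on Omega
Y = {"a": 1, "b": 1, "c": 5}            # Y as a function on Omega

# The pointwise condition X <= Y: checked at every outcome w.
assert all(X[w] <= Y[w] for w in omega)

# Expectations as probability-weighted sums over Omega.
EX = sum(prob[w] * X[w] for w in omega)
EY = sum(prob[w] * Y[w] for w in omega)
assert EX <= EY  # the conclusion of the theorem
```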
  • #3
Pere Callahan said:
The theorem assumes that X and Y are defined on the same probability space [itex]\Omega[/itex]. [itex]X\leq Y[/itex] means [itex]X(\omega)\leq Y(\omega),\quad \forall\omega\in\Omega[/itex]. Actually, it would be enough to have [itex]X(\omega)\leq Y(\omega)[/itex] for P-almost all [itex]\omega\in\Omega[/itex], where P is the probability measure on [itex]\Omega[/itex].
Thanks! Now I understand what X ≤ Y means in the theorem.

Consider a separate problem. How about X ≤ Y in the context of finding P(X ≤ Y)? In this case, do X and Y have to be defined as random variables with X([itex]\omega[/itex]) ≤ Y([itex]\omega[/itex]) for ALL [itex]\omega\in\Omega[/itex]?
 
  • #4
No. In order to compute P(X ≤ Y), you have to take the probability of all omega such that X(omega) ≤ Y(omega). There might be other omega which do not satisfy this inequality but then they don't contribute to P(X ≤ Y).

[tex]
P(X\leq Y)=P\left(\omega\in\Omega: X(\omega)\leq Y(\omega)\right)
[/tex]

You may have noticed that, strictly speaking, the probability measure P has two different meanings here. On the right-hand side it is a function which takes as its argument a subset of [itex]\Omega[/itex], while on the left-hand side it is only a shorthand for the right side. :smile:
 
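The formula above can be sketched directly on a small finite space. Everything here (the outcomes, the uniform measure, the values of X and Y) is a hypothetical example chosen for illustration; note that some outcomes fail the inequality and simply do not contribute to P(X ≤ Y).

```python
# Hypothetical finite space with a uniform measure; X(w) <= Y(w) holds for
# some outcomes but not others, so P(X <= Y) is strictly between 0 and 1.
omega = ["a", "b", "c", "d"]
prob = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
X = {"a": 3, "b": 0, "c": 2, "d": 5}
Y = {"a": 1, "b": 4, "c": 2, "d": 4}

# The event {w in Omega : X(w) <= Y(w)} as a subset of Omega...
event = {w for w in omega if X[w] <= Y[w]}

# ...and P(X <= Y) as the measure of that subset.
p = sum(prob[w] for w in event)
```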
  • #5
Thanks! I love your explanations!
 
  • #6
Two follow-up questions:

1) For P(X ≤ Y), do X and Y have to be defined on the SAME sample space [itex]\Omega[/itex]?

2) In order statistics, when they say X(1) ≤ X(2) ≤ ... ≤ X(n), they actually mean X(1)(ω) ≤ X(2)(ω) ≤ ... ≤ X(n)(ω) for all ω ∈ [itex]\Omega[/itex] (or almost all), right?
 
  • #7
kingwinner said:
Two follow-up questions:

1) For P(X ≤ Y), do X and Y have to be defined on the SAME sample space [itex]\Omega[/itex]?

They need to be on the same probability space. Having the same sample space is not enough.
2) In order statistics, when they say X(1) ≤ X(2) ≤ ... ≤ X(n), they actually mean X(1)(ω) ≤ X(2)(ω) ≤ ... ≤ X(n)(ω) for all ω ∈ [itex]\Omega[/itex] (or almost all), right?

If they don't say a.e. or a.s., you can assume they mean for all [itex] \omega \in \Omega [/itex].
 
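The order-statistics point can be sketched in a few lines: fixing one outcome ω amounts to fixing one realized sample, and the order statistics at that ω are just the sorted sample values, so the chain of inequalities holds at every ω by construction. The sample below is randomly generated purely for illustration.

```python
import random

# One outcome w corresponds to one realized sample of size n = 5.
random.seed(0)
sample = [random.random() for _ in range(5)]  # X_1(w), ..., X_n(w)

# The order statistics at this w are the sorted values: X_(1)(w), ..., X_(n)(w).
order_stats = sorted(sample)

# By construction, X_(1)(w) <= X_(2)(w) <= ... <= X_(n)(w) at this outcome.
assert all(order_stats[i] <= order_stats[i + 1] for i in range(len(order_stats) - 1))
```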

What does the statement "If X ≤ Y, then E(X) ≤ E(Y)" mean?

The statement means that if X(ω) ≤ Y(ω) for every outcome ω in the underlying probability space, then the expected value of X is less than or equal to the expected value of Y.

Why is this statement important in statistics?

This statement is important in statistics because it captures the monotonicity of expectation: it lets us compare two random variables through their expected values, and to bound one expectation by another without computing either one exactly.

Can this statement be proven mathematically?

Yes, this statement can be proven from basic properties of expectation: if X ≤ Y, then Y − X is a nonnegative random variable, so E(Y − X) ≥ 0, and by linearity E(Y) − E(X) ≥ 0.
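A short sketch of the standard argument, assuming both expectations exist, in the thread's own notation:

[tex]
X \leq Y \;\Longrightarrow\; Y - X \geq 0 \;\Longrightarrow\; E(Y-X) \geq 0 \;\Longrightarrow\; E(Y) - E(X) \geq 0.
[/tex]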

What are the implications of this statement in real-world scenarios?

This statement has various implications in real-world scenarios. For example, it can be used to compare the effectiveness of different treatment methods in healthcare or to analyze the risk and return of different investment options in finance.

Are there any exceptions to this statement?

The main caveat is the hypothesis itself: X ≤ Y must hold for (almost) every outcome, not merely for typical values, and both expectations must exist. Independence plays no role here, and outliers do not break the result as long as the pointwise inequality holds. It is important to check these assumptions in the specific context before applying the statement.
