# Difficulty understanding ∫ P(X).X^2 dx = <X^2> ?

I know for discrete random variables Σ P(x).x = <x>

Translating to continuous random variables, I'm also aware of the result ∫ P(x).x dx = <x>.

In my lecture notes (more or less transcribed from what the lecturer said):
∫ P(x).x^2 dx = <x^2>. Should it not be ∫ P(x^2).x^2 dx = <x^2>?

Does P(x^2) even mean anything in relation to P(x)? I find it difficult to link the two.

EDIT: Touching on the ∫ P(x^2).x^2 dx = <x^2> confusion again: for that to equal <x^2>, wouldn't you also have to be integrating with respect to x^2? Much confusion.

Could somebody please clear this up for me? Any examples would be much appreciated.

FactChecker
Gold Member
I know for discrete random variables Σ P(x).x = <x>
What is your definition of the notation <x>? Is it the expected value? If so, the expected value of any function f(x) of x is defined as E(f) = Σ P(x).f(x). That might clarify your remaining questions.
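That definition is easy to check numerically. Here is a minimal sketch for the discrete case, using a made-up three-point distribution (the values and probabilities are purely illustrative, not from the thread):

```python
# Discrete expectation: E[f(X)] = sum over x of P(x) * f(x).
# The pmf below is a hypothetical example; any P summing to 1 works.

pmf = {1: 0.2, 2: 0.5, 3: 0.3}  # P(X = x) for each value x

def expect(pmf, f):
    """Expected value of f(X): sum of P(x) * f(x) over the support."""
    return sum(p * f(x) for x, p in pmf.items())

mean = expect(pmf, lambda x: x)        # <x>   = 0.2*1 + 0.5*2 + 0.3*3 = 2.1
mean_sq = expect(pmf, lambda x: x**2)  # <x^2> = 0.2*1 + 0.5*4 + 0.3*9 = 4.9
```

Note that the same P(x) weights both x and x^2; only the function inside the sum changes, which is exactly why no separate "P(x^2)" appears.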
Translating for continuous random variables
I'm also aware of the result ∫ P(x).x dx
= E(X)
In my lecture notes ( I more or less transcribed from what the lecturer said ):
∫ P(x).x^2 dx = <x^2>
If your <x^2> notation means the expected value E(X^2), then this is true by the definition of expected value.
Should it not be ∫ P(x^2).x^2 dx = <x^2>?
No. This must be interpreted as ∫ P(X = x^2) · x^2 dx. Suppose we have a case where the random variable X is always negative. Then P(X = x^2) ≡ 0 for every x, since x^2 ≥ 0, so this integral must be zero.

For instance, if P(X = -2) = 1, then obviously the expected value of X^2 is 4, since X is always -2: E(X^2) = 4. Your integral, on the other hand, would be zero.
