Variance of a function in an infinite space

In summary: The mean and variance of the function depend on the interval over which they are taken. Over [0, +∞), the time-average of y(t) = a·erf(t) tends to a and the variance tends to 0.
  • #1
AcesofSpades
10
0

Homework Statement



It is given a function y(t) = a·erf(t), where erf(t) is the error function.
I am looking for the variance of this function on an infinite interval. Since t is time, I assume this interval is [0, +∞). The usual variance formulas do not seem to apply, since they need a finite interval. Any suggestions?

The Attempt at a Solution



I found a thread in the forum (https://www.physicsforums.com/showthread.php?t=114629) discussing the average value of a function as its argument is sent to infinity. I am not sure if this is what I am looking for, though it seems valid. So I guess the next step is to find the variance through the average.
 
  • #2
No, the "usual variance of functions" does NOT require a finite space. It only requires that the relevant integrals converge as x goes to infinity, which is true for the error function.
 
  • #3
Thank you for answering!
So I opened MATLAB to calculate the variance, and it seems there is no command for the variance of a function.
So I calculate Variance = E(x^2) - [E(x)]^2

However, when I calculate E(x^2) = int(x^2*erf^2) MATLAB returns this:

Warning: Explicit integral could not be found.

Ex2 =

int(x^2*erf(x^2), x)

So can I really find the variance like this? Is there something I am missing?
 
  • #4
I am sorry for double posting, but it seems I have hit a wall.
Could you point me in the right direction regarding the variance of the error function?
 
  • #5
Why are you calculating [itex] E(x^2) [/itex] as

[tex]
\int x^2 erf^2(x) \, dx
[/tex]
?

(Specifically, why is the error function squared?)
 
  • #6
Sorry, forget about the infinities in the previous post.

So I want to calculate the variance with Variance = E(x^2) - [E(x)]^2.

I calculate the mean as
[tex]
\mu = E(a\,\mbox{erf}(x)) = \lim_{L \to \infty} \frac{1}{L} \int_0^L a\,\mbox{erf}(x)\, dx = a
[/tex]

So then I want to calculate E(erf(x)^2), and by the same formula I have an integral of (erf(x))^2. With a Taylor series I find a solution something like:

-(4*L^3*(2*L^2 - 5))/(15*pi)

but when I take the lim(L->inf) of this expression the result is -inf.
I am pretty sure that this is not right.
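For what it's worth, here is a quick numerical check (a Python sketch using the standard-library math.erf, taking a = 1 for illustration, since any a just scales the results). It suggests the Taylor-series result above is off: the time-averages converge, and the variance tends to 0, not -inf.

```python
import math

def time_average(f, L, n=100_000):
    """Midpoint-rule approximation of (1/L) * integral of f over [0, L]."""
    h = L / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h / L

# Time-averages of erf and erf^2 over [0, L] for growing L.
for L in (10.0, 100.0, 1000.0):
    mean = time_average(math.erf, L)                        # tends to 1
    mean_sq = time_average(lambda t: math.erf(t) ** 2, L)   # tends to 1
    var = mean_sq - mean ** 2                               # tends to 0
    print(L, mean, mean_sq, var)
```

The variance here is small and positive for every L, shrinking as L grows, which is what one expects for a function that saturates at a constant value.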
 
  • #7
AcesofSpades said:
Sorry, forget about the infinities in the previous post.

So I want to calculate the variance with Variance = E(x^2) - [E(x)]^2.

I calculate the mean as
[tex]
\mu = E(a\,\mbox{erf}(x)) = \lim_{L \to \infty} \frac{1}{L} \int_0^L a\,\mbox{erf}(x)\, dx = a
[/tex]

So then I want to calculate E(erf(x)^2), and by the same formula I have an integral of (erf(x))^2. With a Taylor series I find a solution something like:

-(4*L^3*(2*L^2 - 5))/(15*pi)

but when I take the lim(L->inf) of this expression the result is -inf.
I am pretty sure that this is not right.

As statdad has pointed out in post #5, [itex]E(X^2)[/itex] is definitely not:
[tex]\int x^2 \mbox{erf}^{{\color{Red}2}}(x) dx[/tex]

The variance of a random variable X with probability density function f(x) is calculated by the following formula:

[tex]\mbox{Var}(X) = E[X^2] - (E[X]) ^ 2 = \int x^2 {\color{Red}f(x)} dx - \left( \int x {\color{Red}f(x)} dx \right) ^ 2[/tex]
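This formula can be sanity-checked numerically. Here is a minimal Python sketch (standard library only, midpoint-rule integration) for a simple density where the answer is known: the uniform density on [0, 1], whose variance is 1/12.

```python
import math

def expectation(g, f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of g(x)*f(x) over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) * f(a + (k + 0.5) * h) for k in range(n)) * h

# Uniform density on [0, 1]: f(x) = 1 on the interval, 0 elsewhere.
f = lambda x: 1.0
ex = expectation(lambda x: x, f, 0.0, 1.0)          # E[X]   = 1/2
ex2 = expectation(lambda x: x * x, f, 0.0, 1.0)     # E[X^2] = 1/3
var = ex2 - ex ** 2                                 # 1/3 - 1/4 = 1/12
print(ex, ex2, var)
```

The point is that f(x) must be a probability density; without one, the two integrals in the formula above are not defined.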

statdad said:
Why are you calculating [itex] E(x^2) [/itex] as

[tex]
\int x^2 erf^2(x) \, dx
[/tex]
?

(Specifically, why is the error function squared?)
 
  • #8
So I cannot find the variance without having the probability density function?

Another thought: can I say from the plot that the mean of erf(x) is zero and the variance of erf(x) is one?

[URL]http://upload.wikimedia.org/wikipedia/commons/2/2f/Error_Function.svg[/URL]
 
  • #9
AcesofSpades said:
So I cannot find the variance without having the probability density function?
What do you think "variance" means? You cannot define variance without a probability density function.

Another thought: Can I say from the plot that mean of erf(x) is zero and variance of erf(x) is one??
Well, you can't tell it exactly from the plot, but erf(x) is, by definition, tied to the standard normal distribution, "standard" meaning here that the mean is 0 and the standard deviation is 1: the standard normal CDF is [itex]\Phi(x) = \frac{1 + \mbox{erf}(x/\sqrt{2})}{2}[/itex]. The variance is the square of the standard deviation and so is also 1.

http://upload.wikimedia.org/wikipedia/commons/2/2f/Error_Function.svg
 
  • #10
Thank you, you have been very helpful!
One last question: since t is time, I want erf(t) on [0, +∞).
How do the mean and the variance adjust to this?
(Is the mean 1 and the variance 0?)
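A numerical sketch of this question (Python, standard-library math.erf, with a = 1 assumed for illustration): the time-average mean and variance of y(t) = a·erf(t) over a window [0, T], for growing T.

```python
import math

a = 1.0  # amplitude of y(t) = a * erf(t); any other value just scales the results

def stats_over(T, n=100_000):
    """Time-average mean and variance of a*erf(t) over [0, T] (midpoint samples)."""
    h = T / n
    ys = [a * math.erf((k + 0.5) * h) for k in range(n)]
    mean = sum(ys) / n
    var = sum(y * y for y in ys) / n - mean ** 2
    return mean, var

for T in (0.1, 1.0, 10.0, 1000.0):
    print(T, stats_over(T))
```

For small T the mean is near 0 (erf starts at 0) and the variance is small but nonzero; as T grows, the mean approaches a and the variance approaches 0, since erf(t) saturates at 1.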
 

FAQ: Variance of a function in an infinite space

1. What is the definition of variance in a function in an infinite space?

Variance in a function in an infinite space is a measure of how much the values of the function vary from the average value. It is calculated by taking the average of the squared differences between each value and the mean value of the function.
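This definition can be illustrated with a small Python sketch on a finite sample (the values here are made up for illustration):

```python
# Population variance: the average of squared differences from the mean.
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(values) / len(values)                               # 5.0
variance = sum((v - mean) ** 2 for v in values) / len(values)  # 4.0
print(mean, variance)
```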

2. How does the variance of a function in an infinite space differ from that in a finite space?

In a finite space, the variance of a function can be calculated by the same formula as in an infinite space. In an infinite space, however, the variance may be undefined or infinite if the relevant integrals do not converge.

3. What is the significance of the variance of a function in an infinite space?

The variance of a function in an infinite space can provide insight into the behavior of the function and how much it deviates from its average value. It can also be used to compare the variability of different functions in the same infinite space.

4. How is the variance of a function in an infinite space related to its standard deviation?

The standard deviation of a function is the square root of its variance. The standard deviation therefore measures the spread of the function's values around the mean in the same units as the function itself, while the variance measures the average squared deviation from the mean.

5. Can the variance of a function in an infinite space be negative?

No, the variance of a function in an infinite space cannot be negative. This is because the squared differences used in its calculation will always result in a positive value. However, the variance can be zero if all the values of the function are the same, indicating no variability or deviation from the mean.
