Finding E(ln x) and Var(ln x): Cramer-Rao Lower Bound

In summary: to find the expected value and variance of ln(X), integrate ln(x) and its square against the density of X (the inner-product definition of expectation), or approximate with the delta method; if X is lognormal, the moments follow directly from the normal moment generating function. All of these approaches reduce to integrals over the domain of X.
  • #1
safina
May I ask how to find [tex]E\left(\ln X\right)[/tex] and [tex]Var\left(\ln X\right)[/tex]?
The [tex]X_{i}[/tex] are a random sample from [tex]f\left(x; \theta\right) = \theta x^{\theta - 1}I_{\left(0, 1\right)}\left(x\right)[/tex] where [tex]\theta > 0[/tex].

I need this information to finally solve the Cramer-Rao lower bound for the variance of an unbiased estimator of a function of [tex]\theta[/tex], and also to check whether I have an unbiased estimator.
 
  • #2
safina said:
May I ask how to find [tex]E\left(\ln X\right)[/tex] and [tex]Var\left(\ln X\right)[/tex]?

A random variable X has a lognormal distribution if [tex]X=e^{Y}[/tex] where Y is normally distributed with mean [tex]\mu[/tex] and standard deviation [tex]\sigma[/tex].

Then: [tex] E(X)=e^{\mu+\sigma^2/2}, \qquad Var(X)=e^{2\mu+2\sigma^2}-e^{2\mu+\sigma^2}[/tex]

The moments are given by [tex]E(X^n)=e^{n\mu+(n^2\sigma^2/2)}[/tex], obtained from the moment generating function of Y.

Note that the moments of the lognormal distribution do not uniquely determine the distribution, and the mgf of X, [tex]E(e^{tX})[/tex], is finite only on the interval [tex](-\infty,0][/tex].

EDIT: You can assign other distributions to Y, but without further specification I don't think there's a single answer to your question. The lognormal is a natural default: by the Central Limit Theorem, [tex]\ln X[/tex] is approximately normal whenever X is a product of many independent positive factors.
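
One way to see the moment formula: since [itex]X = e^{Y}[/itex], the n-th moment of X is just the moment generating function of Y evaluated at n:

[tex]E(X^n) = E\left(e^{nY}\right) = M_Y(n) = e^{n\mu + n^2\sigma^2/2}[/tex]

Setting n = 1 and n = 2 recovers the mean and variance quoted above.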
 
  • #3
Let g(x) = ln x. For E[g(X)], see the inner product definition here. For Var[g(X)], look under the continuous case here, with g(x) replacing x and [itex]\mu[/itex] interpreted as E[g(X)]; alternatively, see the delta method.
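
For reference, the delta method mentioned above approximates the moments of g(X) by a first-order Taylor expansion of g around [itex]\mu = E(X)[/itex]:

[tex]E[g(X)] \approx g(\mu), \qquad Var[g(X)] \approx \left(g'(\mu)\right)^2 Var(X)[/tex]

With g(x) = ln x this gives [itex]Var(\ln X) \approx Var(X)/\mu^2[/itex]. It is only an approximation; the exact integrals are worked out below.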
 
  • #4
That is, [itex]Var[g(X)][/itex] is the integral of [itex]\left(g(x) - E[g(X)]\right)^2 f(x)[/itex] over the domain of x.

Alternatively,

[tex]Var[g(X)] = E\left[g(X)^2\right] - \left(E[g(X)]\right)^2 = \int g(x)^2 f(x)\, dx - \left(\int g(x) f(x)\, dx\right)^2[/tex]

where each integral is over the domain of x.
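
As a quick numerical sanity check that the two formulas agree, here is a minimal sketch using scipy's quad for the density in this thread, with an illustrative value θ = 2.5 (the value is my choice, not from the thread):

[code]
import numpy as np
from scipy.integrate import quad

theta = 2.5                                    # illustrative value
f = lambda x: theta * x**(theta - 1)           # density on (0, 1)
g = np.log                                     # g(x) = ln x

e_g, _ = quad(lambda x: g(x) * f(x), 0, 1)                  # E[g(X)]
e_g2, _ = quad(lambda x: g(x)**2 * f(x), 0, 1)              # E[g(X)^2]
var_def, _ = quad(lambda x: (g(x) - e_g)**2 * f(x), 0, 1)   # definition of Var

print(e_g2 - e_g**2)   # shortcut formula
print(var_def)         # definition; the two numbers agree
[/code]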
 
  • #5


To find the expected value of ln(X), use the definition of expected value: the integral of the function against the probability density function. Here:

[tex]E(\ln X) = \int_{0}^{1} \ln(x)\, \theta x^{\theta - 1}\, dx[/tex]

To solve this integral, use integration by parts. Let [itex]u = \ln(x)[/itex] and [itex]dv = \theta x^{\theta - 1} dx[/itex]. Then [itex]du = \frac{1}{x} dx[/itex] and [itex]v = x^{\theta}[/itex]. Substituting these into the integral:

[tex]E(\ln X) = \left[\ln(x)\, x^{\theta}\right]_{0}^{1} - \int_{0}^{1} \frac{x^{\theta}}{x}\, dx = \left[\ln(x)\, x^{\theta}\right]_{0}^{1} - \int_{0}^{1} x^{\theta - 1}\, dx = \left[\ln(x)\, x^{\theta}\right]_{0}^{1} - \left[\frac{x^{\theta}}{\theta}\right]_{0}^{1}[/tex]

The boundary term vanishes ([itex]\ln 1 = 0[/itex], and [itex]x^{\theta}\ln x \to 0[/itex] as [itex]x \to 0^{+}[/itex] since [itex]\theta > 0[/itex]), so

[tex]E(\ln X) = -\frac{1}{\theta}[/tex]
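
As a sanity check, the integral can also be verified symbolically; a minimal sketch with sympy (the positive=True assumption on θ is needed for the integral to converge):

[code]
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)
f = theta * x**(theta - 1)                      # the given density on (0, 1)

E_lnx = sp.integrate(sp.log(x) * f, (x, 0, 1))  # E(ln X)
print(sp.simplify(E_lnx))                       # -1/theta
[/code]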

To find the variance of ln(X), use the definition of variance: the expected value of the squared difference between the random variable and its mean. Here:

[tex]Var(\ln X) = E\left[\left(\ln X - E(\ln X)\right)^2\right][/tex]

Substituting the value we found for [itex]E(\ln X)[/itex]:

[tex]Var(\ln X) = E\left[\left(\ln X + \frac{1}{\theta}\right)^2\right][/tex]

Expanding the square and integrating term by term (integration by parts twice gives [itex]E\left[(\ln X)^2\right] = \frac{2}{\theta^2}[/itex]):

[tex]Var(\ln X) = \frac{2}{\theta^2} - \frac{2}{\theta^2} + \frac{1}{\theta^2} = \frac{1}{\theta^2}[/tex]
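
There is also a shortcut that avoids integration by parts entirely. The distribution function here is [itex]F(x) = x^{\theta}[/itex] on (0, 1), so for [itex]W = -\ln X[/itex]:

[tex]P(W \le w) = P\left(X \ge e^{-w}\right) = 1 - \left(e^{-w}\right)^{\theta} = 1 - e^{-\theta w}, \qquad w > 0[/tex]

That is, [itex]-\ln X \sim \text{Exponential}(\theta)[/itex], whose mean and variance are [itex]1/\theta[/itex] and [itex]1/\theta^2[/itex]. This reproduces [itex]E(\ln X) = -1/\theta[/itex] and [itex]Var(\ln X) = 1/\theta^2[/itex] immediately.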

Now, to apply the Cramer-Rao lower bound, we need the Fisher information [itex]I(\theta)[/itex] for the given distribution, defined as:

[tex]I(\theta) = -E\left[\frac{\partial^2}{\partial \theta^2} \ln f(X; \theta)\right][/tex]

where f(x; θ) is the probability density function. In this case [itex]f(x; \theta) = \theta x^{\theta - 1}[/itex] on (0, 1), so [itex]\ln f(x; \theta) = \ln\theta + (\theta - 1)\ln x[/itex] and [itex]\frac{\partial^2}{\partial \theta^2} \ln f(x; \theta) = -\frac{1}{\theta^2}[/itex]. Hence [itex]I(\theta) = \frac{1}{\theta^2}[/itex] per observation, and for a sample of size n the Cramer-Rao lower bound for an unbiased estimator of [itex]\theta[/itex] is [itex]\frac{\theta^2}{n}[/itex].
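
The score function ties the earlier moments together and cross-checks I(θ):

[tex]\frac{\partial}{\partial \theta} \ln f(X; \theta) = \frac{1}{\theta} + \ln X[/tex]

The identity E[score] = 0 recovers [itex]E(\ln X) = -\frac{1}{\theta}[/itex], and [itex]I(\theta) = Var(\text{score}) = Var(\ln X) = \frac{1}{\theta^2}[/itex], matching the second-derivative computation above.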
 

1. What is the Cramer-Rao lower bound?

The Cramer-Rao lower bound is a theoretical limit on the variance of any unbiased estimator of a given parameter. It states that the variance of an unbiased estimator is always at least the reciprocal of the Fisher information of the sample, where the Fisher information measures how much information the sample carries about the parameter being estimated.
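
In symbols: for an unbiased estimator T of g(θ) based on a sample of size n with per-observation Fisher information I(θ),

[tex]Var(T) \ge \frac{\left[g'(\theta)\right]^2}{n\, I(\theta)}[/tex]

with the special case g(θ) = θ giving [itex]Var(T) \ge \frac{1}{n\, I(\theta)}[/itex].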

2. Why is the Cramer-Rao lower bound important?

The Cramer-Rao lower bound is important because it provides a benchmark for evaluating different estimators of a given parameter. An unbiased estimator whose variance equals the bound is called efficient; the closer an estimator's variance is to the bound, the better it performs. Conversely, an estimator whose variance is far above the bound may be a poor choice for estimating the parameter.

3. How is the Cramer-Rao lower bound related to the logarithm of x?

In this problem the score function is [itex]\frac{\partial}{\partial \theta} \ln f(X; \theta) = \frac{1}{\theta} + \ln X[/itex], so the Fisher information equals [itex]Var(\ln X)[/itex]. The moments E(ln X) and Var(ln X) are therefore exactly what is needed both to compute the Cramer-Rao lower bound and to check whether an estimator built from [itex]\sum \ln X_{i}[/itex] is unbiased.

4. What is the process for finding E(ln x) and Var(ln x) using the Cramer-Rao lower bound?

First compute E(ln X) and Var(ln X) directly, by integrating ln(x) and (ln x)² against the density f(x; θ); for this density they are −1/θ and 1/θ². Separately, compute the Fisher information by taking the expected value of minus the second derivative of ln f(x; θ) with respect to θ. The reciprocal of the Fisher information of the sample is then the Cramer-Rao lower bound: the smallest variance any unbiased estimator of θ can achieve.
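
For the density in this thread the whole pipeline is short. Take g(θ) = 1/θ: since [itex]-\ln X \sim \text{Exponential}(\theta)[/itex], the estimator [itex]T = -\frac{1}{n}\sum \ln X_{i}[/itex] is unbiased for 1/θ, and

[tex]Var(T) = \frac{1}{n\theta^2} = \frac{\left[g'(\theta)\right]^2}{n\, I(\theta)}, \qquad g'(\theta) = -\frac{1}{\theta^2},\quad I(\theta) = \frac{1}{\theta^2}[/tex]

so T attains the Cramer-Rao lower bound exactly.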

5. Can the Cramer-Rao lower bound be used for any type of data?

Yes, the Cramer-Rao lower bound applies quite generally, provided the usual regularity conditions hold: the density is differentiable with respect to the parameter, the support does not depend on the parameter, and differentiation and integration can be interchanged so that the Fisher information exists. Under these conditions it can be applied to a wide range of statistical models and distributions, making it a valuable benchmark for evaluating estimators.
