Maximum likelihood to fit a parameter of this model

In summary, the thread discusses using maximum likelihood to estimate the parameter lambda in an EWMA model for time-series data. This involves assuming a distribution for the data, writing the likelihood in terms of lambda, and solving the resulting equations for the ML estimators of the parameters. The density function of the data is needed to write down the likelihood.
  • #1
member 428835
Hi PF!

Given random time series data ##y_i##, we assume the data follows a EWMA (exponential weighted moving average) model: ##\sigma_t^2 = \lambda\sigma_{t-1}^2 + (1-\lambda)y_{t-1}^2## for ##t > 250##, where ##\sigma_t## is the standard deviation, and ##\sigma_{M=250}^2 = \sum_{i=1}^{250}y_i^2/250## to initialize. How would we use maximum likelihood to estimate ##\lambda##?
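The recursion in the question can be sketched in a few lines of Python. This is a minimal, purely illustrative implementation of the EWMA variance update defined above; the window length `M` defaults to 250 as in the post, and any data passed in is hypothetical.

```python
def ewma_variances(y, lam, M=250):
    """Return [sigma_M^2, sigma_{M+1}^2, ..., sigma_N^2] for the EWMA model
    sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * y_{t-1}^2,
    initialized with sigma_M^2 = mean of y_i^2 over the first M points."""
    # Initialization: sample mean of squares over the warm-up window.
    s = sum(v * v for v in y[:M]) / M
    variances = [s]
    for t in range(M + 1, len(y) + 1):
        # y is 0-indexed, so the 1-based y_{t-1} is y[t-2].
        s = lam * s + (1 - lam) * y[t - 2] ** 2
        variances.append(s)
    return variances
```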

In general, the principle seems to be: first choose a distribution ##P(y_i)## the data likely came from (e.g. a Bernoulli variable for coin flips when estimating the probability of heads ##p##, or a normal distribution if we've been given a sample of people's heights and want to estimate the mean and standard deviation). Next, since the data are assumed i.i.d., we optimize ##\prod_i P(y_i)## with respect to the parameter we seek (##p## or ##\mu## in the previous examples; here it should be ##\lambda##). I'm just confused about how the assumed model with ##\sigma## plays a role. Any help is greatly appreciated.
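The generic recipe described above can be made concrete with the normal-distribution example: maximize ##\prod_i P(y_i)## over the mean. The sketch below does this numerically by grid search; the heights and the fixed ##\sigma## are hypothetical, chosen only to illustrate that the maximizer lands at the sample mean.

```python
import math

# Hypothetical sample of heights (metres).
heights = [1.62, 1.75, 1.80, 1.68, 1.71]

def normal_loglik(mu, data, sigma=0.1):
    """Log of prod_i N(y_i; mu, sigma^2), with sigma held fixed."""
    return sum(
        -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        for y in data
    )

# Grid search over candidate means; the log-likelihood is concave in mu,
# so the best grid point sits at (or next to) the sample mean.
mu_grid = [1.5 + i / 1000 for i in range(500)]
mu_hat = max(mu_grid, key=lambda m: normal_loglik(m, heights))
```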
 
  • #2
The squaring just adds unnecessary superscripts for this exercise so let's write ##S_i## for ##\sigma^2_i## and ##X_i## for ##y_i^2##.

Typically we assume the ##X_i## are iid. Say the distribution of ##X_i## has parameters ##\mathbf \beta=\langle \beta_1,...,\beta_K\rangle##, and the probability density function of ##X_i## is ##f_{\mathbf \beta}##. We need to estimate ##\mathbf\beta## and ##\lambda## given observations ##s_1, ..., s_N## of the random variables ##S_1, ..., S_N##.

Given the equations
$$S_t = \lambda S_{t-1} + (1-\lambda)X_{t-1}$$
for ##t=2,...,N##
and the missing equation ##S_1=X_1##
we can write the realized values ##x_1,..., x_N## of the random variables ##X_1,..., X_N## in terms of just ##\lambda## by inserting the observed values ##s_1, ..., s_N##. Write these as ##x_1(\lambda),..., x_N(\lambda)## to emphasise this dependence.
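Concretely, rearranging ##S_t = \lambda S_{t-1} + (1-\lambda)X_{t-1}## gives ##x_{t-1}(\lambda) = (s_t - \lambda s_{t-1})/(1-\lambda)##. A minimal sketch of this inversion, with hypothetical observed values:

```python
def implied_x(s, lam):
    """Given observed s_1, ..., s_N (as a 0-indexed list) and a candidate
    lambda, return the implied x_1(lambda), ..., x_{N-1}(lambda) obtained
    by inverting the recursion s_t = lam*s_{t-1} + (1 - lam)*x_{t-1}."""
    return [(s[t] - lam * s[t - 1]) / (1 - lam) for t in range(1, len(s))]
```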

The likelihood of the observed data given ##\mathbf \beta,\lambda## is
$$\mathscr L (\mathbf \beta,\lambda)=
\prod_{i=1}^N
f_{\mathbf\beta}(x_i(\lambda))$$

This expression has ##K+1## unknowns: ##\beta_1, ..., \beta_K, \lambda##. We partially differentiate it (in practice its logarithm, which has the same maximizers and is easier to work with) with respect to each unknown in turn and set the result to zero, giving ##K+1## equations, the same number as we have unknowns. Solving those equations yields the ML estimators of the unknowns.

Note how we needed the density function of ##X_i## to form the expression for ##\mathscr L##.
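The whole procedure can be sketched numerically. Purely for illustration, assume ##f_{\mathbf\beta}## is an exponential density ##f_\beta(x)=\beta e^{-\beta x}## (a plausible hedge, since the ##X_i = y_i^2## are nonnegative); that choice, the grid search in place of solving the score equations, and the observed values in the test are all assumptions, not part of the original post. For each candidate ##\lambda## we invert the recursion to get ##x_i(\lambda)##, plug in the closed-form ##\hat\beta = 1/\bar x##, and keep the ##\lambda## with the largest log-likelihood.

```python
import math

def profile_loglik(s, lam):
    """Log-likelihood of observed s under an assumed exponential f_beta,
    with beta profiled out at its MLE for the given lambda."""
    # Invert s_t = lam*s_{t-1} + (1 - lam)*x_{t-1} for the implied x values.
    x = [(s[t] - lam * s[t - 1]) / (1 - lam) for t in range(1, len(s))]
    if any(v <= 0 for v in x):
        return -math.inf  # outside the exponential's support
    beta_hat = len(x) / sum(x)  # closed-form MLE of beta given lambda
    return sum(math.log(beta_hat) - beta_hat * v for v in x)

def fit_lambda(s, grid=None):
    """Grid-search stand-in for solving the score equations in lambda."""
    grid = grid or [i / 100 for i in range(1, 100)]  # lambda in (0, 1)
    return max(grid, key=lambda lam: profile_loglik(s, lam))
```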
 

1. What is maximum likelihood?

Maximum likelihood is a statistical method used to estimate the parameters of a probability distribution by maximizing the likelihood function. It is based on the principle that the most likely values for the parameters are those that make the observed data most probable.

2. How is maximum likelihood used to fit a parameter of a model?

Maximum likelihood is used to find the values of the parameters that make the observed data most probable. This is done by calculating the likelihood function, which is a measure of how likely the observed data is given the values of the parameters. The parameter values that maximize the likelihood function are considered the best fit for the model.
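The coin-flip case makes this concrete: for ##n## flips with ##h## heads, the Bernoulli likelihood ##p^h(1-p)^{n-h}## is maximized at ##\hat p = h/n##. The toy sketch below (hypothetical counts: 7 heads in 10 flips) confirms this by a coarse grid search.

```python
import math

def bernoulli_loglik(p, heads, n):
    """Log-likelihood of observing `heads` successes in n Bernoulli(p) trials."""
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

# Maximize over a grid of candidate p values in (0, 1).
p_grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(p_grid, key=lambda p: bernoulli_loglik(p, heads=7, n=10))
# p_hat lands at the closed-form estimate 7/10
```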

3. What types of models can be fitted using maximum likelihood?

Maximum likelihood can be used to fit a wide range of models, including linear regression, logistic regression, and time series models. It is a versatile method that can be applied to any model that specifies a probability distribution for the data.

4. How does maximum likelihood compare to other parameter estimation methods?

Maximum likelihood is one of the most widely used methods for parameter estimation and, under standard regularity conditions, is asymptotically efficient. It is often preferred over alternatives such as least squares because it uses the full assumed distribution of the data (indeed, least squares coincides with maximum likelihood when the errors are normal) and supports standard measures of uncertainty for the estimated parameters.

5. What are the assumptions of maximum likelihood?

The primary assumption of maximum likelihood is that the data follow a specific probability distribution. Additionally, it assumes that the observations are independent and that the model is correctly specified. Violations of these assumptions can affect the accuracy of the parameter estimates.
