EM algorithm convergence: KF log likelihood decrease

In summary, the EM (Expectation-Maximization) algorithm is a computational method that iteratively infers latent variables and updates model parameters until convergence, the point at which the parameter estimates stop changing significantly. Progress comes from alternating an expectation step and a maximization step in each iteration. The log likelihood, which measures how probable the observed data are under the current parameter estimates, is guaranteed not to decrease across exact EM iterations; an observed decrease therefore usually indicates an implementation bug or a numerical problem, rather than poor initialization or a local maximum.
  • #1
MikeLowri123
Hi everyone,

I'm running the KF to learn the parameters of a model; however, the log likelihood of p(Y_{k}|Y_{k-1}) decreases.

Can anyone advise: does this mean my implementation is wrong, or can this just be the case?

Advice appreciated

Thanks
 
  • #2
Can anyone offer a small piece of advice, or even a reference?
 
  • #3
Hey MikeLowri123.

If you are using EM to fit the parameters of a parametric distribution, have you tried grabbing some reference code from a repository such as CRAN for R and comparing its results with yours?
 

1. What is the EM algorithm and how does it work?

The EM (Expectation-Maximization) algorithm is a computational method for finding maximum likelihood estimates in statistical models with latent (unobserved) variables. It works by iteratively inferring the latent variables from the current parameter estimates and then using that inference to update the parameters, repeating until convergence; a concrete sketch is given below.
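For concreteness, here is a minimal sketch of EM for a toy model: a two-component, one-dimensional Gaussian mixture, where the latent variable is each observation's component label. The model choice, variable names, and the use of NumPy/SciPy are illustrative assumptions, not something from this thread.

import numpy as np
from scipy.stats import norm

def em_gmm_1d(y, n_iter=50):
    # Illustrative EM for a 2-component 1-D Gaussian mixture.
    rng = np.random.default_rng(0)
    mu = rng.choice(y, size=2, replace=False)   # crude initialization
    sigma = np.array([y.std(), y.std()])
    pi = np.array([0.5, 0.5])
    loglik_history = []
    for _ in range(n_iter):
        # Component-weighted densities under the current parameters.
        dens = np.stack([pi[k] * norm.pdf(y, mu[k], sigma[k]) for k in range(2)])
        # Observed-data log likelihood; exact EM must never decrease this.
        loglik_history.append(np.log(dens.sum(axis=0)).sum())
        # E-step: posterior probability (responsibility) of each component.
        resp = dens / dens.sum(axis=0)
        # M-step: closed-form updates weighted by the responsibilities.
        nk = resp.sum(axis=1)
        mu = (resp @ y) / nk
        sigma = np.sqrt((resp * (y - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(y)
    return mu, sigma, pi, loglik_history

Running this on data simulated from a known mixture and plotting loglik_history gives the monotone curve discussed in questions 4 and 5 below.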

2. What does it mean for the EM algorithm to converge?

Convergence in the EM algorithm refers to the point at which the parameter estimates (equivalently, the log likelihood) stop changing significantly from one iteration to the next. The algorithm has then reached a stable solution, and further iterations will not change the estimates appreciably; in practice this is tested against a numerical tolerance, as sketched below.
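A common way to implement "stop changing significantly" is a relative-change test on the log likelihood; the specific tolerance and criterion here are illustrative assumptions, since the thread does not specify one.

def has_converged(loglik_history, tol=1e-6):
    # True once the relative improvement in log likelihood falls below tol.
    if len(loglik_history) < 2:
        return False
    prev, curr = loglik_history[-2], loglik_history[-1]
    return abs(curr - prev) <= tol * (abs(prev) + tol)

An equivalent test on the parameters themselves (e.g., the largest absolute change in any parameter) is also common.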

3. How does the EM algorithm ensure convergence?

The EM algorithm makes guaranteed progress through the expectation step (E-step) and the maximization step (M-step) in each iteration. In the E-step, the algorithm computes the posterior expectations of the latent variables under the current parameter estimates. In the M-step, it updates the parameters by maximizing the expected complete-data log likelihood built from those expectations; both steps are labeled in the mixture sketch above. Because each pair of steps can only increase (or leave unchanged) the log likelihood, and the log likelihood is typically bounded above, the sequence of likelihood values must converge; the iteration is stopped once the estimates no longer change significantly.

4. What is the role of the log likelihood in the EM algorithm convergence?

The log likelihood is the logarithm of the probability (density) of the observed data given the current parameter estimates, and EM's goal is to maximize it. A key property of exact EM is that each iteration is guaranteed not to decrease the log likelihood, so as the algorithm runs, the log-likelihood values increase monotonically toward a (local) maximum. In the Kalman filter setting this thread asks about, the log likelihood is computed from the one-step prediction errors, as sketched below.
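For a Kalman filter, the total log likelihood decomposes into one-step predictive terms, log p(Y_{1:N}) = sum_k log p(Y_k | Y_{1:k-1}), each a Gaussian density of the innovation. Below is a minimal sketch for a scalar linear-Gaussian state-space model x_k = A x_{k-1} + w_k, y_k = H x_k + v_k; the model and all names are illustrative assumptions, not the poster's actual setup.

import numpy as np

def kf_loglik(y, A, H, Q, R, x0, P0):
    # Accumulates log p(y_k | y_{1:k-1}) via the innovation decomposition.
    x, P = x0, P0
    loglik = 0.0
    for yk in y:
        # Predict the state and its variance.
        x_pred = A * x
        P_pred = A * P * A + Q
        # Innovation and its variance.
        v = yk - H * x_pred
        S = H * P_pred * H + R
        # log p(y_k | y_{1:k-1}) = log N(v; 0, S).
        loglik += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        # Measurement update.
        K = P_pred * H / S
        x = x_pred + K * v
        P = (1.0 - K * H) * P_pred
    return loglik

If EM is used to learn A, H, Q, or R, this quantity evaluated after each M-step is exactly the curve that should never go down; recomputing it after every iteration is a cheap sanity check.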

5. Why does the log likelihood sometimes decrease during the EM algorithm convergence?

For an exact EM algorithm, the log likelihood cannot decrease, regardless of the starting values; poor initialization or a local maximum leads to convergence to a suboptimal value, not to a decreasing likelihood. An observed decrease therefore almost always signals an implementation bug (in the E-step, the M-step, or the likelihood computation itself) or a numerical problem, such as a covariance matrix losing positive definiteness. The main exception is a variant that only approximates the E-step or M-step (generalized or approximate EM), where small decreases are possible. The standard argument for the monotonicity guarantee is sketched below.
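For reference, here is the standard argument in generic notation, with observed data y, latent variables z, and current parameters \theta^{(t)}. For any \theta,

\log p(y \mid \theta) = Q(\theta \mid \theta^{(t)}) + H(\theta \mid \theta^{(t)}),

where

Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{z \sim p(z \mid y, \theta^{(t)})}\left[ \log p(y, z \mid \theta) \right],
\qquad
H(\theta \mid \theta^{(t)}) = -\,\mathbb{E}_{z \sim p(z \mid y, \theta^{(t)})}\left[ \log p(z \mid y, \theta) \right].

The M-step guarantees Q(\theta^{(t+1)} \mid \theta^{(t)}) \ge Q(\theta^{(t)} \mid \theta^{(t)}), and Gibbs' inequality guarantees H(\theta^{(t+1)} \mid \theta^{(t)}) \ge H(\theta^{(t)} \mid \theta^{(t)}), so

\log p(y \mid \theta^{(t+1)}) \ge \log p(y \mid \theta^{(t)}).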
