Gaussian Mixture Model Confusion

SUMMARY

The discussion focuses on implementing the Gaussian Mixture Model (GMM) for background subtraction as described by Chris Stauffer and W.E.L. Grimson in their paper "Adaptive background mixture models for real-time tracking." The user is confused by the recursive update formulas for the mean (μ) and variance (σ²): the update weight ρ, defined as the learning rate α times the Gaussian density with covariance matrix Σ, is always tiny, which leads to very slow convergence. The user asks whether a normalization step is missing. A reply points out that the recursive update is only applied under certain conditions; a drastic change in a pixel's value is handled by a different procedure.

PREREQUISITES
  • Understanding of Gaussian Mixture Models (GMM)
  • Familiarity with recursive statistical formulas
  • Knowledge of covariance matrices and their properties
  • Experience with background subtraction techniques in computer vision
NEXT STEPS
  • Research normalization techniques for Gaussian Mixture Models
  • Explore advanced background subtraction algorithms
  • Learn about the impact of learning rate (α) on convergence in GMM
  • Study the conditions under which parameter updates occur in GMMs
USEFUL FOR

Computer vision practitioners, machine learning engineers, and researchers implementing Gaussian Mixture Models for real-time tracking and background subtraction applications.

mp6250
Hi All,

I'm trying to implement the Gaussian Mixture Model for background subtraction as described by Chris Stauffer and W.E.L. Grimson in their paper "Adaptive background mixture models for real-time tracking."

I'm having a little trouble with the logic in the step that updates the mean and variance of the models. According to the paper, when new image data comes in, you follow a recursive formula to get exponential moving statistics for these parameters based on the following formulas:

μ_t = (1 − ρ) μ_{t−1} + ρ X_t
σ²_t = (1 − ρ) σ²_{t−1} + ρ (X_t − μ_t)^T (X_t − μ_t)

where μ and σ² are the mean and variance of the model, X_t is the incoming data vector, and the subscripts indicate relative time. ρ is defined as:

ρ = α \frac{1}{(2π)^{n/2} |Σ|^{1/2}} e^{−\frac{1}{2} (X_t − μ_t)^T Σ^{−1} (X_t − μ_t)}

where Σ is the covariance matrix (taken to be diagonal for simplicity) and α is a parameter that controls the learning rate.
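For concreteness, the two update equations can be sketched as follows for a single channel (with a diagonal covariance, each channel can be treated independently). This is a minimal sketch; the function names are my own, not from the paper:

```python
import math

def gaussian_density(x, mu, var):
    """1-D Gaussian pdf. With a diagonal covariance, the multivariate
    density factors into a product of these, one per channel."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def update(mu, var, x, alpha):
    """One recursive update step: rho = alpha * density, then
    exponentially weighted updates of the mean and variance."""
    rho = alpha * gaussian_density(x, mu, var)
    mu_new = (1 - rho) * mu + rho * x
    var_new = (1 - rho) * var + rho * (x - mu_new) ** 2
    return mu_new, var_new
```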

My confusion is this: ρ will always be tiny. The algorithm assumes large variances to begin with, and the tiny probabilities that come out of this density cause very slow convergence regardless of the choice of α (usually taken to be around 0.05 or so). It's my understanding that you would never set α > 1.0, so where could this be corrected for? Is there a normalization I am missing somewhere?
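A quick numeric check makes the concern concrete (the initial standard deviation here is an assumed illustrative value, not one prescribed by the paper):

```python
import math

# With a large initial variance, the Gaussian density, and hence
# rho = alpha * density, is tiny even for a pixel exactly at the mean.
alpha = 0.05
var = 30.0 ** 2  # initial std dev of 30 intensity levels (assumed)
density_at_mean = 1.0 / math.sqrt(2 * math.pi * var)
rho = alpha * density_at_mean
print(rho)  # on the order of 1e-4, hence the slow convergence
```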
 
mp6250 said:
My confusion is this: ρ will always be tiny. The algorithm assumes large variances to begin with, and the tiny probabilities that come out of this density cause very slow convergence regardless of the choice of α (usually taken to be around 0.05 or so).

Just glancing at that paper, the updating you described only happens under certain conditions. A drastic change in a pixel's value is handled by a different procedure.
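To spell that out: in the paper, a new value is first checked for a match against the existing distributions (within 2.5 standard deviations). Only a matched distribution receives the slow ρ-weighted update above; an unmatched value instead replaces the least probable distribution with a new one centered on it. A minimal single-channel sketch, where `INIT_WEIGHT` and `INIT_VAR` are assumed initialization values rather than values from the paper:

```python
import math

MATCH_THRESHOLD = 2.5   # std devs, as in the paper
INIT_WEIGHT = 0.05      # assumed values for a replaced distribution
INIT_VAR = 30.0 ** 2

def process_pixel(x, models, alpha):
    """models: list of [weight, mu, var] for one pixel (single channel).
    Only a matched distribution gets the slow rho-weighted update;
    with no match, the least probable distribution is replaced."""
    matched = None
    for i, (w, mu, var) in enumerate(models):
        if matched is None and abs(x - mu) < MATCH_THRESHOLD * math.sqrt(var):
            matched = i
            rho = (alpha * math.exp(-0.5 * (x - mu) ** 2 / var)
                   / math.sqrt(2 * math.pi * var))
            mu = (1 - rho) * mu + rho * x
            var = (1 - rho) * var + rho * (x - mu) ** 2
            models[i] = [(1 - alpha) * w + alpha, mu, var]
        else:
            models[i] = [(1 - alpha) * w, mu, var]  # unmatched weights decay
    if matched is None:
        # No match: replace the least probable (lowest-weight) distribution.
        j = min(range(len(models)), key=lambda k: models[k][0])
        models[j] = [INIT_WEIGHT, x, INIT_VAR]
    return matched is not None
```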
 
