Gaussian Mixture Model Confusion

  • Thread starter mp6250
Hi All,

I'm trying to implement the Gaussian Mixture Model for background subtraction as described by Chris Stauffer and W.E.L. Grimson in their paper "Adaptive background mixture models for real-time tracking."

I'm having a little trouble with the logic in the step that updates the mean and variance of the models. According to the paper, when new image data comes in, you follow a recursive update to get exponentially weighted moving estimates of these parameters, based on the following formulas:

[itex]μ_t = (1-ρ)μ_{t-1} + ρX_t[/itex]
[itex]σ^2_t = (1-ρ)σ^2_{t-1} + ρ(X_t-μ_t)^T(X_t-μ_t)[/itex]

where μ and σ² are the mean and variance of the model, [itex]X_t[/itex] is the incoming data vector, and the subscripts indicate the relative times of the variables. ρ is defined as:

[itex] ρ=α \frac{1}{(2π)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}} e^{-\frac{1}{2}(X_t-μ_t)^T \Sigma^{-1}(X_t-μ_t)}[/itex]

where Σ is the covariance matrix (taken to be diagonal for simplicity) and α is a parameter that controls the learning rate.
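For reference, here's roughly what my update step looks like in Python. This is only a sketch with my own variable names, and I'm using the paper's simplification Σ = σ²I so the density reduces to a single scalar variance per component:

[code]
import numpy as np

def update_component(x, mu, var, alpha):
    """x: incoming pixel vector (e.g. length-3 RGB), mu: mean vector,
    var: shared scalar variance (the paper's Sigma = var * I),
    alpha: learning rate. All names here are my own."""
    n = x.size
    diff = x - mu
    # Gaussian density for Sigma = var * I: |Sigma|^(1/2) = var^(n/2)
    norm = 1.0 / ((2.0 * np.pi) ** (n / 2.0) * var ** (n / 2.0))
    density = norm * np.exp(-0.5 * np.dot(diff, diff) / var)

    rho = alpha * density                       # the rho that worries me
    mu_new = (1.0 - rho) * mu + rho * x
    diff_new = x - mu_new
    var_new = (1.0 - rho) * var + rho * np.dot(diff_new, diff_new)
    return mu_new, var_new, rho
[/code]

I follow the paper in re-computing the difference against the updated mean for the variance update, though with ρ this small it makes almost no difference whether the old or new mean is used.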

My confusion is this: ρ will always be tiny. The algorithm assumes large variances to begin with, and the tiny probabilities that come out of this density will cause very slow convergence, regardless of the choice of α (usually taken to be around 0.05). It's my understanding that you would never set α > 1.0, so where is this corrected for? Is there a normalization I am missing somewhere?
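To put numbers on it (using an initial σ of 30 per channel purely as an example): with three color channels, [itex]|\Sigma|^{\frac{1}{2}} = 30^3 = 27000[/itex], so even for a pixel sitting exactly on the mean the density is at most [itex]\frac{1}{(2π)^{\frac{3}{2}} \cdot 27000} ≈ 2.4 \times 10^{-6}[/itex]. With α = 0.05 that gives ρ ≈ 1.2×10⁻⁷, which suggests the mean would need on the order of 10⁷ frames to adapt.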
 

Stephen Tashi

Science Advisor
I'm trying to implement the Gaussian Mixture Model for background subtraction as described by Chris Stauffer and W.E.L. Grimson in their paper "Adaptive background mixture models for real-time tracking."
http://www.ai.mit.edu/projects/vsam/Publications/stauffer_cvpr98_track.pdf

My confusion is this: ρ will always be tiny. The algorithm assumes large variances to begin with, and the tiny probabilities that come out of this density will cause very slow convergence, regardless of the choice of α (usually taken to be around 0.05).
Just glancing at that paper, the update you described only happens under certain conditions. A pixel whose value changes drastically (one that doesn't match any of the existing Gaussians) is handled by a different procedure.
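Schematically, the per-pixel logic in that section of the paper looks something like this. This is only a sketch of how I read it, not code from the paper; the 2.5σ match test is the one they state, while the initial variance, initial weight, and the choice of "least probable" component here are typical values an implementation would pick:

[code]
import numpy as np

MATCH_SIGMA = 2.5      # a pixel "matches" a Gaussian if it lies within 2.5 std devs
INIT_VAR = 900.0       # a replaced component starts with a large variance...
INIT_WEIGHT = 0.05     # ...and a small weight

def process_pixel(x, components, alpha):
    """components: list of dicts {'w', 'mu', 'var'} for one pixel, with
    Sigma = var * I as in the paper. Only a matched component gets the
    rho update from the first post; a drastic change replaces a component."""
    n = x.size
    matched = None
    for c in components:
        diff = x - c['mu']
        # squared Mahalanobis distance test against 2.5 standard deviations
        if np.dot(diff, diff) < (MATCH_SIGMA ** 2) * c['var']:
            matched = c
            break

    # every weight gets the simple exponential update (M = 1 for the match, else 0)
    for c in components:
        c['w'] = (1.0 - alpha) * c['w'] + alpha * (1.0 if c is matched else 0.0)

    if matched is not None:
        # only the matched component's mean and variance are updated with rho
        diff = x - matched['mu']
        density = np.exp(-0.5 * np.dot(diff, diff) / matched['var']) / (
            (2.0 * np.pi) ** (n / 2.0) * matched['var'] ** (n / 2.0))
        rho = alpha * density
        matched['mu'] = (1.0 - rho) * matched['mu'] + rho * x
        diff = x - matched['mu']
        matched['var'] = (1.0 - rho) * matched['var'] + rho * np.dot(diff, diff)
    else:
        # drastic change: replace the least probable component with a new one
        # centred on the current value, with high variance and low weight
        lowest = min(components, key=lambda c: c['w'])
        lowest['mu'] = np.asarray(x, dtype=float).copy()
        lowest['var'] = INIT_VAR
        lowest['w'] = INIT_WEIGHT

    # renormalize the weights
    total = sum(c['w'] for c in components)
    for c in components:
        c['w'] /= total
[/code]

Notice the ρ update is only reached when the pixel already matches an existing Gaussian; a drastically different value goes through the replacement branch instead.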
 
