Smoother EWMA that mean-reverts

Thread starter: cppIStough
EWMA (exponentially weighted moving average) is one way to estimate the variance of time-series data, and it is pretty well known. The issue I have with EWMA is that the maxima aren't smooth, especially when recovering from a large spike in the series, and it can take a while for the estimate to fall back to pre-spike levels. I'm wondering if you know of (or are creative enough to come up with yourself) a smoother EWMA that reverts to pre-spike levels more quickly.
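Concretely, the recursion I have in mind is the standard one sketched below (Python; the decay parameter lam and the toy spike are only illustrative):

Python:
import numpy as np

def ewma_variance(x, lam=0.94):
    """Standard EWMA variance: var[t] = lam * var[t-1] + (1 - lam) * x[t]**2."""
    var = np.empty(len(x), dtype=float)
    var[0] = x[0] ** 2
    for t in range(1, len(x)):
        var[t] = lam * var[t - 1] + (1.0 - lam) * x[t] ** 2
    return var

# A single spike enters the estimate immediately, but afterwards it only decays
# geometrically at rate lam, so it takes about log(0.01) / log(lam) ~ 75 steps
# (for lam = 0.94) for the spike's contribution to shrink to 1% of its initial jump.
x = np.zeros(200)
x[50] = 10.0
print(ewma_variance(x)[48:60])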

Let me know if I'm not clear, and thanks again for your advice!
 
You might consider a fixed-window moving average. The Covid-19 death data is a good example: it is often presented with 3-day and 7-day moving-average options. The 7-day MA has the advantage of always including one full weekend, when reporting is low, and a Monday/Tuesday, when the reports catch up for the weekend (either the current one or the prior one). That greatly smooths out the daily numbers and suppresses the weekly cycle. The disadvantage is that any spike or variation is watered down by the surrounding six days.

Alternatively, you could use your own weights.
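A rough sketch of both options, assuming the daily counts sit in a NumPy array (the window length and the example weights are only placeholders):

Python:
import numpy as np

def moving_average(x, window=7):
    """Fixed-window moving average: each output is the plain mean of `window` consecutive samples."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def weighted_moving_average(x, weights):
    """Same idea with custom weights, listed oldest-to-newest within the window.
    np.convolve reverses its second argument, so the weights are flipped here."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so the result stays an average
    return np.convolve(x, w[::-1], mode="valid")

# Example: a uniform 7-day average versus weights that favour the most recent days.
daily = np.random.default_rng(0).poisson(100, size=60).astype(float)
print(moving_average(daily)[:5])
print(weighted_moving_average(daily, [1, 1, 1, 2, 2, 3, 3])[:5])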
 
Covid is an interesting example that @FactChecker mentioned. It raises the questions I had but didn't post, because they lacked rigor until I saw the Covid case.

What is a spike: a potential (random) data error, or a systematic (repeated) error, as in the Covid case? Is there a specific threshold above which you would call a data point a spike? The word spike has the connotation of something you see in the data, not of something you measure. You first have to make it measurable in order to deal with it.
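One way to make it measurable, purely as an illustration (the window and threshold below are arbitrary), is a robust z-score against a trailing median:

Python:
import numpy as np

def flag_spikes(x, window=30, threshold=5.0):
    """Flag x[t] as a spike when it lies more than `threshold` robust z-scores
    from the median of the previous `window` points. Median and MAD are used
    instead of mean and standard deviation so the spike itself does not inflate
    the scale it is judged against."""
    x = np.asarray(x, dtype=float)
    spikes = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        past = x[t - window:t]
        med = np.median(past)
        mad = np.median(np.abs(past - med))
        scale = 1.4826 * mad + 1e-12     # 1.4826 makes the MAD comparable to a standard deviation
        spikes[t] = abs(x[t] - med) / scale > threshold
    return spikes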
 
fresh_42 said:
Covid is an interesting example that @FactChecker mentioned. It raises the questions I had but didn't post, because they lacked rigor until I saw the Covid case.

What is a spike: a potential (random) data error, or a systematic (repeated) error, as in the Covid case? Is there a specific threshold above which you would call a data point a spike? The word spike has the connotation of something you see in the data, not of something you measure. You first have to make it measurable in order to deal with it.
Those are the big questions: what do you measure, and how do you measure it? Perhaps too many people deal only with the technical aspects and don't dwell on such important questions.
 