Time series analysis and data transformation

  • #1
fog37
TL;DR Summary
time series analysis and transformations
Hello,
Many time-series forecasting models (AR, ARMA, ARIMA, SARIMA, etc.) require the time-series data to be stationary.

But often, due to seasonality, trend, etc., the observed time series we start with is not stationary. So we apply transformations to the data to make it stationary. Essentially, we get a new, stationary time series, which we use to build the model (AR, ARMA, etc.). But the transformed data is very different from the original data... Isn't the model supposed to work with data like the original data, i.e. isn't the goal to build a model that describes, and can make forecasts on, data that looks like the original data, not like the transformed data?

Thanks!
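As a toy sketch of that stationarizing step (all data synthetic, generated just for illustration), first differencing removes a linear trend:

```python
import numpy as np

# Synthetic example: a linear trend makes the mean non-constant, so the
# raw series is not stationary; the first difference y_t - y_{t-1}
# removes the trend and leaves a (roughly) stationary series.
rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.5 * t + rng.normal(0.0, 1.0, size=t.size)   # trend + noise

diff = np.diff(y)   # transformed (differenced) series

# The mean of the raw series drifts between halves; the differenced mean does not.
print(abs(y[100:].mean() - y[:100].mean()))        # large (roughly 50)
print(abs(diff[100:].mean() - diff[:100].mean()))  # small
```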
 
  • #2
fog37 said:
TL;DR Summary: time series analysis and transformations

Isn't the model supposed to work with data like the original data, i.e. isn't the goal to build a model that describes, and can make forecasts on, data that looks like the original data, not like the transformed data?
As long as there is an inverse transform then you can get back to the original scale.

The usual problem with computing the statistics on the transformed data is that the residuals usually have different properties. Assumptions on the residual distribution hold on the transformed scale, and when inverse transformed they may be quite different.
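A small numeric sketch of this retransformation effect, assuming log-normally distributed data: the naive inverse transform of the log-scale mean under-predicts the original-scale mean, because ##E[e^X] = e^{\mu + \sigma^2/2} > e^{\mu}## for normal ##X##.

```python
import numpy as np

# Assumed log-normal data: fit on the log scale, then inverse transform.
rng = np.random.default_rng(1)
mu, sigma = 1.0, 0.8
y = np.exp(rng.normal(mu, sigma, size=100_000))  # log-normal values

log_mean = np.log(y).mean()                       # mean on the log scale
naive = np.exp(log_mean)                          # naive inverse transform
corrected = np.exp(log_mean + np.log(y).var() / 2)  # log-normal bias correction

print(naive, corrected, y.mean())  # naive is biased low; corrected is close
```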
 
  • #3
There are many levels and definitions of "stationary". See Stationary process. A lot of people would not consider an ARIMA or SARIMA to be stationary in the simplest sense.
 
  • #4
Dale said:
As long as there is an inverse transform then you can get back to the original scale.

The usual problem with computing the statistics on the transformed data is that the residuals usually have different properties. Assumptions on the residual distribution hold on the transformed scale, and when inverse transformed they may be quite different.
Ok, I guess the key phrase is "inverse transformation". We convert the original signal into a new signal, create a model for the new signal, make predictions, and finally apply an inverse transformation to the predictions, which then make sense for the original data...
It is the same thing as when we convert a time-domain signal ##f(t)## into its frequency version ##F(\omega)##, solve the problem in the frequency domain, get a frequency-domain solution, and convert that solution back to the time domain...
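That round trip can be sketched numerically (synthetic random-walk data; the "model" on the differenced scale is deliberately trivial, just the mean difference):

```python
import numpy as np

# Round trip: difference the series, model/forecast on the differenced
# scale, then invert the differencing with a cumulative sum anchored at
# the last observed value.
rng = np.random.default_rng(2)
steps = rng.normal(0.3, 1.0, size=300)   # drift of 0.3 per step
y = np.cumsum(steps)                     # non-stationary original series

d = np.diff(y)                           # transformed (stationary) series
drift = d.mean()                         # trivial model on the transformed scale

h = 10
d_forecast = np.full(h, drift)              # forecasts of future differences
y_forecast = y[-1] + np.cumsum(d_forecast)  # inverse transform back to y's scale

print(y_forecast[-1] - y[-1])  # equals h * drift by construction
```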
 
  • #5
Yes, that is a good example.
 
  • #6
One realization I just had is that time-series models like AR, MA, ARMA, etc. seem to just be discrete-time versions of ODEs, i.e. difference equations... But these linear models are generally used to make predictions/extrapolations of unknown values of ##y_t## without ever reaching a closed-form solution ##y = f(t)##, correct? Why not?

For example, a fitted AR(1) model is something like this: $$y_t = a y_{t-1}$$ which can be converted to the ODE model $$y_t = \frac{a}{a-1} y'$$

Why not solve for ##y_t## instead of keeping it as ##y_t = a y_{t-1}##?
 
  • #7
fog37 said:
Why not solve for ##y_t## instead of keeping it as ##y_t = a y_{t-1}##?
Because the direct solution for ##y_t## includes the cumulative random terms of all the preceding time steps. That can have a huge random variance. On the other hand, if you know the value of ##y_{t-1}##, why not use it? The random variance is then from just one time step and is relatively small.
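A Monte Carlo sketch of this variance growth, for an assumed AR(1) process ##y_t = a y_{t-1} + e_t##: the ##k##-step-ahead forecast-error variance is ##\sigma^2 (1 + a^2 + \dots + a^{2(k-1)})##, so one-step errors are the smallest.

```python
import numpy as np

# Simulate many AR(1) paths from y_0 = 0; the forecast from y_0 is 0,
# so each path's value at step k IS the k-step forecast error.
rng = np.random.default_rng(3)
a, sigma, n_paths, k = 0.9, 1.0, 20_000, 10

e = rng.normal(0.0, sigma, size=(n_paths, k))
y = np.zeros(n_paths)
err = np.empty((n_paths, k))
for i in range(k):
    y = a * y + e[:, i]   # advance every path one step
    err[:, i] = y         # k-step forecast error

mc_var = err.var(axis=0)  # simulated error variance at each horizon
theory = sigma**2 * np.cumsum(a ** (2 * np.arange(k)))
print(mc_var[0], mc_var[-1])  # variance grows with the horizon
```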
 
  • #8
I was thinking the following with regard to transformations, inverse transformations, ARMA, ARIMA, and SARIMA.

ARMA is meant to model time-series that are weakly stationary (constant mean, variance, autocorrelation). To train an ARMA model, the training signal ##y(t)## must therefore be stationary. If it is not, we need to apply transformations to make it so and apply inverse transformations at the very end.

With ARIMA, we avoid doing the stationarizing step manually, since the ##I(d)## part of ARIMA automatically makes a trending input signal stationary, if it is not already, by differencing it ##d## times...
But I guess ordinary differencing does not remove the seasonal component from ##y(t)##? Does that mean that we would need to remove seasonality manually before using ARIMA?

The best solution then seems to be SARIMA, which does not care whether the training signal has trend and/or seasonality, because it takes care of both internally: we don't need to manually apply any transformations to the raw time series ##y(t)## or inverse transformations to the prediction outputs of the SARIMA model...

Any mistakes in my understanding? I would definitely choose SARIMA as the more convenient option, since we can skip all those preprocessing transformations to make ##y(t)## stationary, and the inverse transformations after forecasting...
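The seasonal part of that question can be checked numerically (synthetic series with an assumed period-12 seasonality): an ordinary first difference leaves the seasonal swing in place, while a lag-##s## (seasonal) difference ##y_t - y_{t-s}## removes it:

```python
import numpy as np

# Synthetic series with trend + period-12 seasonality + noise.
rng = np.random.default_rng(4)
s = 12
t = np.arange(240)
y = 0.2 * t + 5 * np.sin(2 * np.pi * t / s) + rng.normal(0.0, 0.5, size=t.size)

d1 = y[1:] - y[:-1]   # ordinary first difference: seasonal swing remains
ds = y[s:] - y[:-s]   # lag-s seasonal difference: sine term cancels exactly

print(d1.std(), ds.std())  # the seasonal difference has much smaller spread
```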
 
  • #9
fog37 said:
Any mistake in my understanding? I would definitely choose SARIMA, more convenient, since we can skip all those preprocessing transformations to make ##y(t)## stationary and inverse transformations after the forecasting...
That is a natural thought. But you should avoid anything that would be like "throwing everything at the wall to see what sticks". A time-series analysis tool-set might allow you to automate finding a SARIMA solution that only includes terms that are statistically significant. But you should have some subject-matter reason to include the trend and seasonal terms. A good tool-set should allow you to prevent the inclusion of terms that do not make sense.
 

1. What is time series analysis?

Time series analysis involves statistical techniques used to model and analyze time-ordered data points. It helps identify patterns such as trends, seasonality, and cycles in the data, which can be crucial for forecasting and making informed decisions in various fields such as finance, economics, and environmental science.

2. What are the common methods used in time series analysis?

Common methods include moving average (MA), autoregressive (AR), and autoregressive integrated moving average (ARIMA) models. More advanced techniques involve seasonal-trend decomposition using LOESS (STL), vector autoregression (VAR), and machine learning approaches like recurrent neural networks (RNNs).
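As a minimal sketch of the simplest of these models, the AR(1) coefficient ##a## in ##y_t = a y_{t-1} + e_t## can be estimated by least squares on lagged pairs (synthetic data; `a_true` is an assumed value, not from any real series):

```python
import numpy as np

# Simulate an AR(1) series, then recover the coefficient by regressing
# y_t on y_{t-1} with no intercept: a_hat = sum(y_t*y_{t-1}) / sum(y_{t-1}^2).
rng = np.random.default_rng(6)
a_true, n = 0.7, 5000
y = np.zeros(n)
for i in range(1, n):
    y[i] = a_true * y[i - 1] + rng.normal()

a_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
print(a_hat)  # close to a_true
```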

3. Why is data transformation important in time series analysis?

Data transformation is crucial because it helps stabilize the variance, make the data conform more closely to a normal distribution, and improve the model's predictive accuracy. Common transformations include logarithmic, square root, and differencing methods, which help stabilize the variance and remove trends and seasonality from the data.
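A brief sketch of variance stabilization (assumed multiplicative noise, with a growth rate chosen just for illustration): when fluctuations scale with the level, a log transform makes them roughly constant:

```python
import numpy as np

# Exponentially growing level with noise proportional to the level.
rng = np.random.default_rng(5)
t = np.arange(400)
level = np.exp(0.01 * t)
y = level * rng.lognormal(0.0, 0.2, size=t.size)

raw_ratio = y[200:].std() / y[:200].std()   # spread grows with the level
log_detr = np.log(y) - 0.01 * t             # log transform, known trend removed
log_ratio = log_detr[200:].std() / log_detr[:200].std()

print(raw_ratio, log_ratio)  # raw spread grows; log-scale spread is stable
```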

4. How do you handle missing data in time series analysis?

Handling missing data is essential to avoid biased or inaccurate results. Techniques include imputation, where missing values are replaced with substituted values based on methods like linear interpolation, last observation carried forward, or more sophisticated approaches like multiple imputation or using machine learning algorithms.
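A minimal sketch of linear-interpolation imputation with NumPy (toy data made up for the example):

```python
import numpy as np

# Fill NaN gaps by linear interpolation between the nearest observed points.
t = np.arange(10, dtype=float)
y = np.array([1.0, 2.0, np.nan, np.nan, 5.0, 6.0, np.nan, 8.0, 9.0, 10.0])

missing = np.isnan(y)
y_filled = y.copy()
y_filled[missing] = np.interp(t[missing], t[~missing], y[~missing])
print(y_filled)  # gaps filled with 3, 4, and 7
```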

5. What is the difference between time series analysis and cross-sectional analysis?

Time series analysis deals with data collected at regular intervals over a period of time, focusing on temporal dependencies, trends, and seasonal patterns. Cross-sectional analysis, on the other hand, examines data collected at a single point in time across different subjects or categories, focusing on differences or relationships across these categories without considering the time factor.
