Correlation Function of the Ornstein-Uhlenbeck Process

The Ornstein-Uhlenbeck (OU) process is a stationary, stochastic, and Markovian model that describes the velocity of a Brownian particle. A numerical simulation of the OU process was conducted, and the discussion focused on confirming its properties through empirical data. Methods such as histogram analysis for Gaussianity and power spectral density calculations were suggested to verify the Markovian and stationary characteristics. The Wiener-Khinchin theorem was applied successfully to demonstrate stationarity, while the empirical distribution function showed a strong overlap with the Gaussian CDF. The conversation concluded with inquiries about estimating uncertainty in statistical properties, emphasizing the need for robust methods like hypothesis testing.
Basically, the Ornstein-Uhlenbeck (OU) process describes the velocity of a Brownian particle (and its time-integral describes the particle's position). The OU process is stationary (in time), stochastic, AND Markovian.
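For reference, the defining stochastic differential equation, in Gillespie's notation with relaxation time $\tau$ and diffusion constant $c$, is

$$\mathrm{d}X(t) = -\frac{1}{\tau}\,X(t)\,\mathrm{d}t + \sqrt{c}\,\mathrm{d}W(t),$$

where $W(t)$ is a standard Wiener process.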

Now, I've done an exact, one-dimensional numerical simulation of the OU process following D. T. Gillespie's article "Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral", Phys. Rev. E 54, 2084 (1996).
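For concreteness, here is a minimal Python/NumPy sketch of the exact update rule from that article; the function and parameter names are my own, and the parameter values are placeholders:

```python
import numpy as np

def simulate_ou(n_steps, dt, tau, c, x0=0.0, rng=None):
    """Exact OU update (Gillespie 1996): no discretization error in dt."""
    rng = np.random.default_rng() if rng is None else rng
    mu = np.exp(-dt / tau)                              # relaxation factor
    sigma = np.sqrt((c * tau / 2.0) * (1.0 - mu**2))    # exact one-step std
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = mu * x[i] + sigma * rng.standard_normal()
    return x

# Example run: tau = 1, c = 2 gives unit stationary variance (c*tau/2 = 1).
path = simulate_ou(n_steps=10_000, dt=0.01, tau=1.0, c=2.0)
```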

The thing is, I was reading the "Correlation Function" article on Wikipedia, which states, and I quote:

"(...), the study of correlation functions is similar to the study of probability distributions. Many stochastic processes can be completely characterized by their correlation functions; the most notable example is the class of Gaussian processes."

I wonder if the OU process is completely characterized by its correlation functions, and if so, how do we derive them AND show this, assuming we have empirical/numerical data of the process?

Any help, tips, or constructive advice is most appreciated.
 
Having given the problem another look, I'm now able to specify it a little:

An OU process is characterized as the only non-trivial process having all three of the following properties:
  • Gaussian
  • Stationary (in time)
  • Markovian

Showing and confirming these for my numerical data should then prove the simulation to be an OU process. I'd appreciate any help anyone may offer on how to deduce these properties from numerical/experimental data.

Thanks!
 
I am not sure I can give you a completely satisfactory answer to your question. If you can generate many realizations of the process (hopefully hundreds or thousands or more) and want to examine whether the process is Gaussian: pick a fixed time index n for the time series (n should not be too small; not the first or second sample, but maybe 50), collect the value at that index from each realization, and histogram them. Does the result look Gaussian? For a quantitative examination, refer to the Kolmogorov-Smirnov test or similar methods to come up with a metric for whether the density or distribution functions agree.
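To illustrate the ensemble test described above, here is a rough Python sketch (the parameter values are arbitrary; the simulation uses the exact OU update):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_real, n_steps, dt, tau, c = 2000, 200, 0.01, 1.0, 2.0
mu = np.exp(-dt / tau)
sigma = np.sqrt((c * tau / 2) * (1 - mu**2))

# Simulate many independent OU paths at once with the exact update.
x = np.zeros((n_real, n_steps + 1))
for i in range(n_steps):
    x[:, i + 1] = mu * x[:, i] + sigma * rng.standard_normal(n_real)

# Histogram/test the ensemble at a fixed, not-too-early time index.
sample = x[:, 50]
d, p = stats.kstest(sample, 'norm', args=(sample.mean(), sample.std(ddof=1)))
print(f"KS D = {d:.4f}, p = {p:.3f}")
# Caveat: fitting mean/std from the same sample makes p optimistic
# (the Lilliefors effect); a large ensemble mitigates this somewhat.
```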

As for stationary and Markovian: determine the autocovariance and power spectral density of your process (MATLAB's xcov, or Fortran). For a first-order Markov process, the autocovariance should be a decaying exponential (equivalently, the power spectral density is a Lorentzian). For a second-order Markov process, it should be a damped sine function (or thereabouts). I have done this many times and can try to provide further help if needed. These are just ideas. (I am somewhat familiar with Gillespie's article, but I have my own methods for generating these processes.)
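As a sketch of the autocovariance check in Python (NumPy only; parameter values arbitrary), using the same exact update as earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau, c = 200_000, 0.01, 1.0, 2.0
mu = np.exp(-dt / tau)
sigma = np.sqrt((c * tau / 2) * (1 - mu**2))

# One long path, started from the stationary distribution.
x = np.empty(n)
x[0] = np.sqrt(c * tau / 2) * rng.standard_normal()
for i in range(n - 1):
    x[i + 1] = mu * x[i] + sigma * rng.standard_normal()

# Empirical autocovariance up to lag 5*tau, versus (c*tau/2)*exp(-lag/tau).
x = x - x.mean()
lags = np.arange(int(5 * tau / dt))
acov = np.array([np.mean(x[: n - k] * x[k:]) for k in lags])
theory = (c * tau / 2) * np.exp(-lags * dt / tau)
print(acov[::100])   # should track theory[::100] within estimator noise
```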
 
Hi, and thanks for the reply! I forgot to update this as I figured it out. I hope it may be of help to anyone else who's struggling with this.

I did just as you say to demonstrate the Markovian and stationary properties. For stationarity, I applied the Wiener-Khinchin theorem and found the power spectral density, which was flat at low frequencies and decayed at high frequencies (the Lorentzian shape expected for an OU process). I also calculated the autocorrelation function, which decayed exponentially, suggesting memory loss (hence, the data being Markovian).
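For anyone repeating this, here is a rough sketch of the PSD check (Python/SciPy; the one-sided factor-of-2 convention and the parameter values are my own assumptions):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
n, dt, tau, c = 200_000, 0.01, 1.0, 2.0
mu = np.exp(-dt / tau)
sigma = np.sqrt((c * tau / 2) * (1 - mu**2))

x = np.empty(n)
x[0] = np.sqrt(c * tau / 2) * rng.standard_normal()   # stationary start
for i in range(n - 1):
    x[i + 1] = mu * x[i] + sigma * rng.standard_normal()

# Welch estimate of the PSD versus the Lorentzian expected for an OU
# process: S(f) = 2*c*tau^2 / (1 + (2*pi*f*tau)^2), one-sided convention.
f, pxx = signal.welch(x, fs=1.0 / dt, nperseg=4096)
lorentzian = 2 * c * tau**2 / (1 + (2 * np.pi * f * tau)**2)
print(pxx[1:4])   # should be close to lorentzian[1:4]
```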

For the Gaussian property, I calculated the empirical distribution function (EDF) and plotted it against the cumulative distribution function (CDF) of an ideal Gaussian with the same mean and variance as the data. They overlapped impressively well!
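The EDF-versus-CDF comparison takes only a few lines; here is a sketch (the `values` array is a stand-in for your own stationary samples):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Stand-in data; replace `values` with your own stationary OU samples.
values = np.random.default_rng(3).normal(0.0, 1.0, 5000)

sample = np.sort(values)
edf = np.arange(1, sample.size + 1) / sample.size
plt.step(sample, edf, where='post', label='empirical EDF')
plt.plot(sample, stats.norm.cdf(sample, sample.mean(), sample.std(ddof=1)),
         label='Gaussian CDF (matched mean/variance)')
plt.xlabel('x'); plt.ylabel('F(x)'); plt.legend(); plt.show()
```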

Now, I'm really grateful for the answer, as it somewhat confirms I did something right. To end this: do you have any short comments on how to estimate the uncertainty when concluding statistical properties from empirical data (in general)? I mean, is it most common to conduct a statistical hypothesis test, or is there some other, more powerful method?

Again, thank you so much! :)
 
Excellent! Although I was late in replying, I am relieved to see I was on the right track.
 
I do not have any quick answers to your last question. For example, I do not see many papers that outline how they determine error bars on their power spectral densities. I know some of my colleagues have taken time series analysis courses.
 