Autocorrelation function from PDF?

AI Thread Summary
To determine the autocorrelation function Rxx(τ) from a probability density function (pdf), a joint pdf relating multiple random variables is necessary, as a single pdf describes only one variable. The autocorrelation function Rxx(τ) is specifically applicable to wide-sense stationary processes, where it depends solely on the time difference τ. If only a single pdf is available, only Rxx(0) can be computed, which represents the autocorrelation at zero time offset. For strict sense stationary processes, expectation values can be calculated using integrals over the joint pdf of the variables involved. Ultimately, the joint pdf is crucial for accurately finding the autocorrelation in stochastic processes.
iVenky
What is the exact procedure for finding the autocorrelation function Rxx(τ) for a given pdf?
Is it possible at all to find the autocorrelation function from the pdf? If not, what is usually given when you find the autocorrelation function Rxx(τ)?

Thanks
 
The autocorrelation is applied to a stochastic process, which is a family of random variables. A pdf by itself might describe only a single random variable. To find the autocorrelation, you need the joint pdf that relates the random variables.

Some terms worth learning are "stationary process" and "wide-sense stationary". You wrote the autocorrelation as Rxx(τ), but in general the autocorrelation is Rxx(t1, t2). It is written Rxx(τ) only if the process is wide-sense stationary, because for a wide-sense stationary process the autocorrelation depends only on the difference τ between the two times.

With only a single pdf for X that is not a joint pdf, you would only be able to find Rxx(0), the autocorrelation at zero time offset.
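For instance, if the single pdf you had happened to be a zero-mean Gaussian with variance ##\sigma^2## (just a convenient example), the only thing you could compute would be

$$R_{xx}(0) = E[X_t^2] = \int_{-\infty}^{\infty} x^2\,p_X(x)\,dx = \sigma^2,$$

which tells you nothing about how the process is correlated across different times.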
 
MisterX said:
To find the autocorrelation, you need the joint pdf that relates the random variables.


Yeah, if it is a "strict-sense stationary process", can we then find Rxx(τ) using the pdf?
 
You should remember how to find expectation values of functions of continuous random variables:

$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\,p_X(x)\,dx$$

If you have a joint pdf for two variables X and Y, it is similar, except the integral has to cover all possibilities for X and Y:

$$E[g(X, Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\,p_{XY}(x,y)\,dx\,dy$$

For example, if you wanted to find the auto-covariance of a wide-sense stationary stochastic process, you'd be finding

$$E\left[\left(X_t - E[X_t]\right)\left(X_{t + \tau} - E[X_{t + \tau}]\right)\right]$$

For such a process you should have a joint pdf that depends on τ: ##p_{XX}(x_1, x_2, \tau)##. This gives the joint pdf for two variables from the process that are separated by τ. You should not integrate over τ; it does not correspond to one of the random variables.
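Putting the pieces together, the procedure the original question asks about amounts to taking ##g(x_1, x_2) = x_1 x_2## in the double-integral expectation above:

$$E[X_t X_{t+\tau}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, p_{XX}(x_1, x_2, \tau)\,dx_1\,dx_2$$

(this expectation is what engineering texts usually write as ##R_{xx}(\tau)##). As a sanity check, if ##X_t## and ##X_{t+\tau}## happened to be jointly Gaussian with zero mean, common variance ##\sigma^2##, and correlation coefficient ##\rho(\tau)##, the integral works out to ##\rho(\tau)\,\sigma^2##.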

It's also useful to know

$$E\left[\left(X - E[X]\right)\left(Y - E[Y]\right)\right] = E\big[XY - E[X]Y - E[Y]X + E[Y]E[X]\big] = E[XY] - E[X]E[Y] - E[X]E[Y] + E[X]E[Y]$$
$$= E[XY] - E[X]E[Y]$$

So

$$E\left[\left(X_t - E[X_t]\right)\left(X_{t + \tau} - E[X_{t + \tau}]\right)\right] = E[X_t X_{t + \tau}] - E[X_t]E[X_{t + \tau}]$$

The normalized autocorrelation (autocorrelation coefficient) is the autocovariance divided by the standard deviations of the two variables:

$$R_{xx}(t_1, t_2) = \frac{E\left[\left(X_{t_1} - E[X_{t_1}]\right)\left(X_{t_2} - E[X_{t_2}]\right)\right]}{\sigma_{t_1}\,\sigma_{t_2}}$$

In the problem you are attempting to solve, the standard deviations ##\sigma_{t_1}## and ##\sigma_{t_2}## might be equal.
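Spelling that out for the wide-sense stationary case in the original question: the mean and variance do not depend on time, so with ##\mu = E[X_t]## and ##\sigma_{t_1} = \sigma_{t_2} = \sigma## the expression reduces to

$$\frac{E[X_t X_{t+\tau}] - \mu^2}{\sigma^2}.$$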
 
So we need to have the joint pdf to find the autocorrelation, right?
 
iVenky said:
So we need to have the joint pdf to find the autocorrelation, right?

Yes, and the assumption that the random process is ergodic. Then you can turn any time average into a probabilistic average.
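Concretely, "ergodic" (in correlation) means the time average of a single realization equals the ensemble average taken over the joint pdf, so

$$R_{xx}(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\,dt = E[X_t X_{t+\tau}].$$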
 