- #1


Is it possible at all to find out the auto correlation function from the pdf? If not then what is given usually when you find out the auto correlation function Rxx(τ)?

Thanks

- Thread starter iVenky


- #2


Some terms that might be worth learning are "stationary process" and "wide-sense stationary". You describe an auto-correlation function Rxx(τ), but in general the autocorrelation will be Rxx(t_{1}, t_{2}). It is only written Rxx(τ) if the process is a wide-sense stationary process, because for such a process the autocorrelation depends only on the difference τ between the two times.

With only a single pdf for X that is not a joint pdf, you would only be able to find Rxx(0), the value at zero [time] offset.

- #3


> Some terms that might be worth learning are "stationary process" and "wide-sense stationary". You describe an auto-correlation function Rxx(τ), but in general the autocorrelation will be Rxx(t_{1}, t_{2}). It is only written Rxx(τ) if the process is a wide-sense stationary process. This is because for a wide-sense stationary process, the autocorrelation only depends on the difference τ between the two times.
>
> With only a single pdf for X that is not a joint pdf, you would only be able to find Rxx(0), which is for zero [time] offset.

Yeah, if it is a "strict-sense stationary process", then can we find Rxx(τ) using the pdf?

- #4


You should remember how to find expectation values of functions of continuous random variables.

[itex]E[g(X)] = \int _{-\infty}^{\infty} g(x)p_{X}(x)dx[/itex]
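As a quick numerical sanity check of that formula (a sketch, assuming X is standard normal and g(x) = x², so the exact answer is E[X²] = 1 — these choices are illustrative, not from the thread):

```python
import numpy as np

# pdf of a standard normal random variable (assumed for illustration)
def p_X(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# E[g(X)] = integral of g(x) p_X(x) dx, approximated by a Riemann sum
# over a grid wide enough that the tails are negligible
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
g = x**2                        # g(x) = x^2, so E[g(X)] is the second moment
E_gX = np.sum(g * p_X(x)) * dx

print(E_gX)  # ≈ 1.0, the variance of a standard normal
```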

If you have a joint PDF for two variables X and Y, it is similar, except the integral has to cover all possibilities for X and Y.

[itex]E[g(X, Y)] = \int _{-\infty}^{\infty}\int _{-\infty}^{\infty} g(x, y)p_{XY}(x,y)dxdy[/itex]

For example if you wanted to find the auto-covariance of a wide sense stationary stochastic process you'd be finding

[itex]E\left[\left(X_t - E[X_t]\right)\left(X_{t + \tau} - E[X_{t + \tau}]\right)\right] [/itex]

For such a process you should have a joint pdf that depends on tau, [itex]p_{XX}(x_1, x_2, \tau)[/itex]. This gives the joint PDF for two variables from the process that are separated by τ. You should not integrate over tau; it does not correspond to one of the random variables.
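As a concrete, hypothetical example of that double integral: suppose X_t and X_{t+τ} are jointly Gaussian with zero mean, unit variance, and correlation ρ(τ) = e^{-|τ|} (an assumed model, chosen only for illustration). Then E[X_t X_{t+τ}] computed from the joint pdf should come out to e^{-|τ|}, with τ entering only as a parameter of the pdf, never as an integration variable:

```python
import numpy as np

# Assumed joint pdf for (X_t, X_{t+tau}): bivariate Gaussian, zero means,
# unit variances, correlation rho(tau) = exp(-|tau|)
def p_XX(x1, x2, tau):
    rho = np.exp(-abs(tau))
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    q = (x1**2 - 2 * rho * x1 * x2 + x2**2) / (2 * (1 - rho**2))
    return norm * np.exp(-q)

tau = 0.5
x = np.linspace(-8, 8, 801)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x)

# E[X_t X_{t+tau}]: double integral over x1 and x2 only; tau stays fixed
Rxx = np.sum(X1 * X2 * p_XX(X1, X2, tau)) * dx * dx

print(Rxx, np.exp(-tau))  # both ≈ 0.607
```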

It's also useful to know

[itex]E\left[\left(X - E[X]\right)\left(Y - E[Y]\right)\right] = E[XY - E[X]Y - E[Y]X + E[Y]E[X] ] = E[XY] - E[X]E[Y] - E[X]E[Y] + E[X]E[Y] [/itex]

[itex]= E[XY] - E[X]E[Y][/itex]
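The same identity holds exactly for sample averages, which makes it easy to check on simulated data (the correlated samples below are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two correlated samples: Y depends linearly on X plus independent noise
x = rng.normal(size=100_000)
y = 0.7 * x + rng.normal(size=100_000)

# Left-hand side: E[(X - E[X])(Y - E[Y])]
lhs = np.mean((x - x.mean()) * (y - y.mean()))
# Right-hand side: E[XY] - E[X]E[Y]
rhs = np.mean(x * y) - x.mean() * y.mean()

print(lhs, rhs)  # agree up to floating-point rounding
```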

So

[itex]E\left[\left(X_t - E[X_t]\right)\left(X_{t + \tau} - E[X_{t + \tau}]\right)\right] = E[X_t X_{t + \tau}] - E[X_t]E[X_{t + \tau}] [/itex]

The autocorrelation is the autocovariance divided by the standard deviations of both variables.

[itex]R_{xx}(t_1, t_2) = \frac{E\left[\left(X_{t_1} - E[X_{t_1}]\right)\left(X_{t_2} - E[X_{t_2}]\right)\right] }{\sigma_{t_1} \sigma_{t_2}}[/itex]

In the problem you are attempting to solve, the standard deviations [itex]\sigma_{t_1}[/itex] and [itex]\sigma_{t_2}[/itex] might be equal.
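Putting the pieces together, here is a sketch that estimates this normalized autocorrelation from an ensemble of sample paths. The AR(1) process with coefficient φ = 0.9 is assumed purely for illustration; once its transient dies out it is approximately wide-sense stationary with correlation φ^|τ|, and σ_{t_1} ≈ σ_{t_2}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of AR(1) sample paths: X_t = phi * X_{t-1} + noise (assumed model)
phi, n_steps, n_paths = 0.9, 200, 20_000
X = np.zeros((n_paths, n_steps))
for t in range(1, n_steps):
    X[:, t] = phi * X[:, t - 1] + rng.normal(size=n_paths)

t1, t2 = 150, 155  # well past the transient, so the process is ~stationary

# Autocovariance divided by the two standard deviations, as in the formula
cov = np.mean((X[:, t1] - X[:, t1].mean()) * (X[:, t2] - X[:, t2].mean()))
rho = cov / (X[:, t1].std() * X[:, t2].std())

print(rho, phi ** (t2 - t1))  # both ≈ 0.59 for this AR(1) model
```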



- #5


So we need to have the joint pdf to find out the Autocorrelation, right?

- #6


> So we need to have the joint pdf to find out the Autocorrelation, right?

Yes, and the assumption that this random process is wide-sense stationary.
