weetabixharry
I understand the concept of covariance as it relates two complex random (scalar) variables. However, I get confused when I have both deterministic and random variables. Therefore, what I write might make very little sense -- I'm really only looking for general advice on where to start reading. [In all my work, all variables are zero-mean and the random variables are complex Gaussian].
For example, let's say I have two (zero-mean) independent random variables, [itex]r_1[/itex] and [itex]r_2[/itex], which vary as a function of time with known second-order statistics. Similarly, I have two (zero-mean) deterministic variables, [itex]d_1[/itex] and [itex]d_2[/itex], which vary in a known manner as a function of time.
Firstly, I don't even know if the term 'covariance' (or, more generally, 'expectation') can be applied to the deterministic variables. I don't see why I couldn't design, for example, [itex]d_1[/itex] and [itex]d_2[/itex] such that:
[itex]\mathcal{E}\{d_1d_2^*\}=\rho[/itex]
for some suitable complex scalar, [itex]\rho[/itex]. If such things are possible, then can I say my two (zero-mean) deterministic variables are independent? That is:
[itex]\mathcal{E}\{d_1d_2^*\}= \mathcal{E}\{d_1\}\mathcal{E}\{d_2^*\}=0[/itex]
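To make concrete what I have in mind (and this is my assumption -- reading [itex]\mathcal{E}\{\cdot\}[/itex] over deterministic signals as a time average), here is a quick numerical sketch where two complex exponentials at different frequencies average to zero over a whole number of periods:

```python
import numpy as np

# Assumption: for deterministic signals, interpret E{.} as a time average.
# Two complex exponentials at different frequencies then have zero
# "covariance" over an integer number of periods.
N = 1024
t = np.arange(N) / N
d1 = np.exp(2j * np.pi * 1 * t)   # one cycle over the window
d2 = np.exp(2j * np.pi * 2 * t)   # two cycles over the window

rho = np.mean(d1 * np.conj(d2))   # time average of d1 * d2^*
# rho is (numerically) zero here; choosing d2 = d1 instead gives rho = 1.
```

So, under that time-average reading, it does seem possible to design [itex]d_1[/itex] and [itex]d_2[/itex] with any prescribed [itex]\rho[/itex] -- but I don't know whether calling that "covariance" is legitimate.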
Even if all that is possible, I get particularly confused when random variables are combined with deterministic variables. Specifically, I feel like there must be certain statistical properties of the random variables which are sort of 'unchangeable' after multiplication by deterministic variables (or, at least, ones which don't do anything weird like take zero values all the time). For example, I'd like to be able to write something along the lines of:
[itex]\mathcal{E}\{d_1d_2^*r_1r_2^*\}=\mathcal{E}\{d_1d_2^*\}\mathcal{E}\{r_1r_2^*\}[/itex]
which would somehow illustrate that the behaviour of the random variables can be separated from the known behaviour of the deterministic variables.
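My intuition for why something like this should hold: at any fixed time instant, [itex]d_1[/itex] and [itex]d_2[/itex] are just complex constants, so they should pull straight out of the (ensemble) expectation. A quick Monte Carlo sketch of that, with made-up example values, and with [itex]r_1, r_2[/itex] deliberately correlated here so that [itex]\mathcal{E}\{r_1r_2^*\}\neq 0[/itex] and the identity isn't just [itex]0=0[/itex]:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Deterministic values at one fixed time instant (arbitrary example values).
d1, d2 = 0.8 + 0.3j, -0.5 + 1.2j

# Two correlated zero-mean circular complex Gaussians (unit variance each),
# constructed so that E{r1 r2^*} = 0.6.
g = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
r1 = g[0]
r2 = 0.6 * g[0] + 0.8 * g[1]

lhs = np.mean(d1 * np.conj(d2) * r1 * np.conj(r2))   # E{d1 d2^* r1 r2^*}
rhs = d1 * np.conj(d2) * np.mean(r1 * np.conj(r2))   # d1 d2^* E{r1 r2^*}
# lhs and rhs agree, because the deterministic factors pass straight
# through the sample mean (linearity of expectation).
```

So at a fixed instant the separation is just linearity of expectation. What I'm less sure about is whether the same factorisation survives when everything is time-varying and the expectations become time averages.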
How can I begin to approach problems such as this, which involve both random and deterministic variables? Any advice is greatly appreciated!