Covariance of Fourier conjugates for Gaussian distributions

In summary, this thread asks how the covariance between two variables behaves when the variables are Fourier conjugates and their distributions are Gaussian.
  • #1
redtree
Given two variables ##x## and ##k##, the covariance between the variables is as follows, where ##E## denotes the expected value:
\begin{equation}
\begin{split}
\mathrm{Cov}(x,k) &= E[x k]-E[x]E[k]
\end{split}
\end{equation}
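As a quick numerical illustration of this definition (a sketch; the dependent pair below is hypothetical, chosen only so that the two variables are correlated):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)
k = 0.5 * x + rng.normal(size=200_000)  # hypothetical correlated variable

# Cov(x, k) = E[xk] - E[x]E[k]
cov = np.mean(x * k) - np.mean(x) * np.mean(k)
print(cov)                            # ~0.5
print(np.cov(x, k, bias=True)[0, 1])  # agrees with numpy's estimator
```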

If ##x## and ##k## are Fourier conjugates and ##f(x)## and ##\hat{f}(k)## are Gaussian distributions, how does that affect the covariance?

This is not a homework problem. I am just trying to understand the covariance of Fourier conjugates, particularly for Gaussians.
 
  • #2
Okay, nobody answered yet, so I'll try to give an answer.

In ordinary probability theory, I would say that what you're asking for doesn't have an answer without clarification.

You have a variable ##x## that has a certain probability distribution, ##f(x)##. For example, suppose ##x## is a person's height in centimeters. What does ##k## mean in that case? You can certainly ask what the expected height is, but what does it mean to be the expected value of the Fourier conjugate of ##x##?

In quantum mechanics, you can make sense of these things in terms of viewing ##x## and ##k## as operators, rather than simple variables. Is that what you mean?
 
  • #3
I apologize if I was unclear. I am concerned with probability, specifically probability amplitude functions and Fourier transforms. This is not a question about quantum mechanics.

I will restate the question more explicitly. Given the normalized Gaussian probability amplitude in ##x##-space, where ##E(x^2)## denotes the variance:
\begin{equation}
\begin{split}
f(x) &= \left(2 \pi E(x^2) \right)^{-\frac{1}{4}} e^{-\frac{x^2}{4 E(x^2)}}
\end{split}
\end{equation}

Denote its Fourier transform:
\begin{equation}
\begin{split}
\mathscr{F}\left[f(x) \right]&= \hat{f}(k)
\end{split}
\end{equation}

Then ##\hat{f}(k)## is the normalized probability amplitude in ##k##-space, where ##E(k^2)## denotes the variance:
\begin{equation}
\begin{split}
\hat{f}(k) &= \left(2 \pi E(k^2) \right)^{-\frac{1}{4}} e^{-\frac{k^2}{4 E(k^2)}}
\end{split}
\end{equation}

Clearly ##x## and ##k## are related as Fourier conjugates. For example, it is easy to derive the relationship between ##E(x^2)## and ##E(k^2)##. Given:
\begin{equation}
\begin{split}
\hat{f}(k) &\doteq \int_{-\infty}^{\infty} e^{2 \pi i k x} f(x) dx
\end{split}
\end{equation}

Thus ##E(x^2)## and ##E(k^2)## are related as follows (the relationship holds with equality for the ##f(x)## and ##\hat{f}(k)## defined above):
\begin{equation}
\begin{split}
E(k^2) E(x^2) &= \frac{1}{16 \pi^2}
\end{split}
\end{equation}
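This product can be checked numerically; a sketch using the ##e^{2 \pi i k x}## convention above, with an arbitrary test value for ##E(x^2)##:

```python
import numpy as np
from scipy.integrate import quad

var_x = 0.7  # E(x^2); arbitrary test value

def f(x):
    # the normalized Gaussian probability amplitude defined above
    return (2 * np.pi * var_x) ** -0.25 * np.exp(-x**2 / (4 * var_x))

def fhat(k):
    # Fourier transform with the e^{2 pi i k x} convention; f is even,
    # so the sine part vanishes and fhat is real
    val, _ = quad(lambda x: f(x) * np.cos(2 * np.pi * k * x), -np.inf, np.inf)
    return val

# E(k^2), using fhat(k)^2 as the density in k-space
var_k, _ = quad(lambda k: k**2 * fhat(k) ** 2, -1, 1)

print(var_x * var_k)        # ~0.0063326
print(1 / (16 * np.pi**2))  # ~0.0063326
```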

However, I am having trouble finding the covariance of ##x## and ##k## where covariance is defined as follows:
\begin{equation}
\begin{split}
\mathrm{Cov}(x,k) &= E[x k]-E[x]E[k]
\end{split}
\end{equation}

Obviously, ##x## and ##k## are related, but how does that manifest in their covariance?
 
  • #4
##x## is a random variable, but what is ##k##?
 
  • #5
##k## is the Fourier conjugate of ##x##.

My understanding is that ##k## is not completely independent of ##x##. If ##k## and ##x## were completely independent (which would imply ##\mathrm{Cov}(x,k) = 0##), why would ##E(x^2) E(k^2) = \frac{1}{16 \pi^2}## hold?
 
  • #6
If ##f(x)## is an arbitrary probability distribution, then ##\hat{f}(k)## will not necessarily be real. For example, if ##f(x) = 1/A## for ##0 \leq x \leq A## and 0 everywhere else, then ##\hat{f}(k) = \dfrac{e^{ikA} - 1}{ikA}##. So in general, taking a Fourier transform of a probability distribution for ##x## does not give a probability distribution for ##k##, and I don't understand what it means to compute an expectation value for ##k## based on a probability distribution for ##x##. Now, it happens in your case that ##\hat{f}(k)## is real, but why should it be interpreted as a probability distribution for ##k##?
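Returning to the uniform example above, a quick numerical check (a sketch with arbitrary test values for ##A## and ##k##) confirms the transform has a nonzero imaginary part, so it cannot be a probability distribution:

```python
import numpy as np
from scipy.integrate import quad

A, k = 2.0, 1.3  # arbitrary test values

# Fourier transform of the uniform density 1/A on [0, A], evaluated at k
re, _ = quad(lambda x: np.cos(k * x) / A, 0, A)
im, _ = quad(lambda x: np.sin(k * x) / A, 0, A)

print(complex(re, im))                          # (0.198+0.714j): complex-valued
print((np.exp(1j * k * A) - 1) / (1j * k * A))  # matches the closed form above
```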
 
  • #7
##f(x)## is not an arbitrary probability distribution. I have defined ##f(x)## as the probability amplitude of the normalized Gaussian. Furthermore, the Fourier transform of a Gaussian probability distribution is also a Gaussian probability distribution (and the Fourier transform of the probability amplitude of the normalized Gaussian is also the probability amplitude of a normalized Gaussian). See for example: http://mathworld.wolfram.com/FourierTransformGaussian.html
 
  • #8
redtree said:
##f(x)## is not an arbitrary probability distribution. I have defined ##f(x)## as the probability amplitude of the normalized Gaussian. Furthermore, the Fourier transform of a Gaussian probability distribution is also a Gaussian probability distribution (and the Fourier transform of the probability amplitude of the normalized Gaussian is also the probability amplitude of a normalized Gaussian). See for example: http://mathworld.wolfram.com/FourierTransformGaussian.html

Yes, ##\hat{f}(k)## happens to be a Gaussian, but why do you interpret it as a probability distribution for ##k##?

And ##f(x)## is not a probability amplitude; it is a probability distribution.
 
  • #9
Using ##\hat f(k)## as a distribution function and defining a random variable with this distribution says nothing about the relationship between this variable and the one with distribution ##f(x)##.
 
  • #10
stevendaryl said:
Yes, ##\hat{f}(k)## happens to be a Gaussian, but why do you interpret it as a probability distribution for ##k##?

And ##f(x)## is not a probability amplitude; it is a probability distribution.

The mathematical uncertainty principle is a description of the relationship between the variances of Fourier conjugates with normalized Gaussian distributions. In order to calculate both variances, one must interpret both ##f(x)## and ##\hat{f}(k)## as probability functions. See for example: http://www.ams.org/publicoutreach/feature-column/fcarc-uncertainty

In order for normalization to hold in both position and wavenumber space, one must transform the square root of the normalized Gaussian probability density function, which I call the probability amplitude function. Happy to demonstrate if necessary.

The relationship between the variances of the two related functions is well-known, i.e., the mathematical uncertainty principle. I am merely trying to understand the relationship between their expected values as described by their covariance.
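
To demonstrate: a minimal numerical sketch (assuming the ##e^{2 \pi i k x}## convention from post #3, with variance 1 for concreteness). Transforming the p.d.f. itself breaks normalization in ##k##-space, while transforming its square root preserves it, by Plancherel:

```python
import numpy as np
from scipy.integrate import quad

def pdf(x):
    # normalized zero-mean Gaussian p.d.f. with variance 1
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def transform(g, k):
    # Fourier transform at k with the e^{2 pi i k x} convention;
    # g is even, so the sine part vanishes and the transform is real
    val, _ = quad(lambda x: g(x) * np.cos(2 * np.pi * k * x), -np.inf, np.inf)
    return val

# squared transform of the p.d.f. itself: integrates to ~0.28, not 1
print(quad(lambda k: transform(pdf, k) ** 2, -1, 1)[0])

# squared transform of the amplitude sqrt(pdf): integrates to ~1 (Plancherel)
print(quad(lambda k: transform(lambda x: np.sqrt(pdf(x)), k) ** 2, -1, 1)[0])
```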
 
  • #11
mathman said:
Using ##\hat f(k)## as a distribution function and defining a random variable with this distribution says nothing about the relationship between this variable and the one with distribution ##f(x)##.

The mathematical uncertainty principle says a lot about the relationship between Fourier conjugates, specifically the relationship between their variances. Again, I am merely trying to see what has been discovered regarding the relationship between their expected values as represented by the covariance.
 
  • #12
redtree said:
The mathematical uncertainty principle is a description of the relationship between the variances of Fourier conjugates with normalized Gaussian distributions. In order to calculate both variances, one must interpret both ##f(x)## and ##\hat{f}(k)## as probability functions. See for example: http://www.ams.org/publicoutreach/feature-column/fcarc-uncertainty

It is certainly true that if ##f(x) = A e^{-\lambda x^2}##, then its Fourier transform ##\hat{f}(k)## has the same form:

##\hat{f}(k) = A' e^{-\lambda' k^2}##

where ##A' = A/\sqrt{2\lambda}## and ##\lambda' = 1/(4 \lambda)##
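
This transform pair is easy to check symbolically; a minimal sympy sketch, using the same unitary ##\frac{1}{\sqrt{2\pi}} \int e^{-ikx}## convention as the amplitude transform below:

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
A, lam = sp.symbols('A lambda', positive=True)

f = A * sp.exp(-lam * x**2)
# unitary convention: fhat(k) = (1/sqrt(2 pi)) * integral of f(x) e^{-ikx} dx
fhat = sp.integrate(f * sp.exp(-sp.I * k * x), (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)

print(sp.simplify(fhat))  # A*exp(-k**2/(4*lambda))/sqrt(2*lambda)
```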

That's a fact about functions and their Fourier transforms. But that has nothing to do with interpreting ##f(x)## and ##\hat{f}(k)## as probability distributions. There is no meaning to the expectation value you are asking about, ##E(xk)##, as far as I know. That quantity makes no sense for probability distributions.

It does make sense for probability amplitudes in the sense of quantum mechanics. If ##\psi(x)## is a probability amplitude, so that ##|\psi(x)|^2## is the corresponding probability distribution, then we can Fourier-transform ##\psi(x)## to get an amplitude for ##k##: ##\hat{\psi}(k) = \frac{1}{\sqrt{2\pi}} \int \psi(x) e^{-ikx} dx##. Then we can interpret the operator ##\hat{k}## via:

##\hat{k} \psi(x) = -i \frac{d \psi}{dx}##

Then the expectation value is ##\langle x \hat{k} \rangle = \int \psi^*(x) \, x \, \hat{k} \psi(x) \, dx##.
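
For the Gaussian amplitude defined in post #3 this can be evaluated directly; a sympy sketch (##\psi## is real there, so ##\psi^* = \psi##):

```python
import sympy as sp

x = sp.symbols('x', real=True)
var_x = sp.symbols('sigma2', positive=True)  # stands for E(x^2)

# the Gaussian amplitude from post #3 (real, so psi* = psi)
psi = (2 * sp.pi * var_x) ** sp.Rational(-1, 4) * sp.exp(-x**2 / (4 * var_x))

# the operator khat acting on psi: -i d(psi)/dx
khat_psi = -sp.I * sp.diff(psi, x)

expectation = sp.integrate(psi * x * khat_psi, (x, -sp.oo, sp.oo))
print(sp.simplify(expectation))  # I/2, independent of sigma2
```

The result ##i/2## is purely imaginary and independent of the variance, and the symmetrized version vanishes: ##\langle \tfrac{1}{2}(x\hat{k} + \hat{k}x) \rangle = 0##, since ##\langle \hat{k} x \rangle## is the complex conjugate ##-i/2##.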
 
  • #13
I understand what you are saying, and I don't want to make this a debate about definitions. I note that the American Mathematical Society considers the Fourier transform of the square root of the normalized Gaussian also to be the square root of a p.d.f., i.e., both ##f(x)^2## and ##\hat{f}(k)^2## are p.d.f.'s.
See: http://www.ams.org/publicoutreach/feature-column/fcarc-uncertainty

Specifically:
And More Generally ...
This property does not generalize to arbitrary probability distribution functions (p.d.f.'s) because ##\mathscr{F}f## may not be a p.d.f. at all (it was not for our ##f_h## examples) and its variance may not be defined. On the other hand, the Fourier transform preserves the integral of the square of a function (this is the Plancherel Theorem). So if ##f^2## is a p.d.f., then ##(\mathscr{F}f)^2##, which is automatically non-negative, will also have total integral equal to 1: it is also a p.d.f.

Perhaps we can just agree to disagree on this point.

What I really want to understand is the covariance relationship between ##x## and ##k##. One can easily derive the following (it is just the Cauchy-Schwarz inequality):

\begin{equation}
\begin{split}
E(x^2) E(k^2) &\geq E(x k)^2
\end{split}
\end{equation}
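As a sanity check of the inequality itself (it holds for any joint samples), a small numerical sketch with a hypothetical dependent pair:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
k = np.tanh(x) + 0.1 * rng.normal(size=200_000)  # hypothetical dependent pair

lhs = np.mean(x**2) * np.mean(k**2)
rhs = np.mean(x * k) ** 2
print(lhs >= rhs)  # True, consistent with Cauchy-Schwarz
```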

I am trying to figure out how ##E(x^2) E(k^2)## relates to ##E(x)^2 E(k)^2## and for that I need to determine the covariance of ##x## and ##k##.

Of course, if the covariance is 0, then ##E(x k) = E(x) E(k)##, which is fine. However, considering that ##E(x^2)## and ##E(k^2)## are closely related via uncertainty, I am not inclined to assume ##\mathrm{Cov}(x,k) = 0##.
 