# I Covariance of Fourier conjugates for Gaussian distributions

#### redtree

Given two variables $x$ and $k$, the covariance between them is defined as follows, where $E$ denotes the expected value:

\begin{split}
COV(x,k)&= E[x k]-E[x]E[k]
\end{split}

If $x$ and $k$ are Fourier conjugates and $f(x)$ and $\hat{f}(k)$ are Gaussian distributions, how does that affect the covariance?

This is not a homework problem. I am just trying to understand the covariance of Fourier conjugates, particularly for Gaussians.


#### stevendaryl

Staff Emeritus

In ordinary probability theory, I would say that what you're asking for doesn't have an answer without clarification.

You have a variable $x$ that has a certain probability distribution, $f(x)$. For example, suppose $x$ is a person's height in centimeters. What does $k$ mean in that case? You can certainly ask what the expected height is, but what does it mean to take the expected value of the Fourier conjugate of $x$?

In quantum mechanics, you can make sense of these things in terms of viewing $x$ and $k$ as operators, rather than simple variables. Is that what you mean?

#### redtree

I apologize if I was unclear. I am concerned with probability, specifically probability amplitude functions and Fourier transforms. This is not a question about quantum mechanics.

I will restate the question more explicitly. Given the normalized zero-mean Gaussian probability amplitude in $x$-space, where $E(x^2)$ denotes the variance:

\begin{split}
f(x) &= \left(2 \pi E(x^2) \right)^{-\frac{1}{4}} e^{-\frac{x^2}{4 E(x^2)}}
\end{split}

Thus:

\begin{split}
\mathscr{F}\left[f(x) \right]&= \hat{f}(k)
\end{split}

The equation for $\hat{f}(k)$ is the normalized probability amplitude in $k$-space, where $E(k^2)$ denotes the variance, as follows:

\begin{split}
\hat{f}(k) &= \left(2 \pi E(k^2) \right)^{-\frac{1}{4}} e^{-\frac{k^2}{4 E(k^2)}}
\end{split}
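The two amplitudes above can be sanity-checked numerically; a minimal sketch (the value used here for $E(x^2)$ is arbitrary, chosen only for illustration):

```python
import numpy as np

# Sanity check of the amplitude defined above; sigma2 plays the role of
# E(x^2), and its value here is arbitrary, chosen only for illustration.
sigma2 = 2.0
x = np.linspace(-50, 50, 100001)
dx = x[1] - x[0]
f = (2 * np.pi * sigma2) ** -0.25 * np.exp(-x**2 / (4 * sigma2))

norm = (f**2).sum() * dx        # total probability carried by f(x)^2
var = (x**2 * f**2).sum() * dx  # variance of f(x)^2 (the mean is zero)

assert abs(norm - 1.0) < 1e-6
assert abs(var - sigma2) < 1e-4
```

So $f(x)^2$ is a normalized p.d.f. with variance $E(x^2)$, which is what makes $f(x)$ a "probability amplitude" in the sense used here.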

Clearly $x$ and $k$ are related as Fourier conjugates. For example, it is easy to derive the relationship between $E(x^2)$ and $E(k^2)$. Given:

\begin{split}
\hat{f}(k) &\doteq \int_{-\infty}^{\infty} e^{2 \pi i k x} f(x) dx
\end{split}

Thus $E(x^2)$ and $E(k^2)$ are related as follows (equality holds given the definitions of $f(x)$ and $\hat{f}(k)$ above):

\begin{split}
E(k^2) E(x^2) &= \frac{1}{16 \pi^2}
\end{split}
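This product can be checked numerically under the $e^{2 \pi i k x}$ convention above; a sketch, with $E(x^2)$ set to an arbitrary illustrative value:

```python
import numpy as np

# Numeric check of E(k^2) * E(x^2) = 1/(16*pi^2) under the
# exp(2*pi*i*k*x) transform convention above; a plays the role of
# E(x^2), and its value is arbitrary.
a = 0.5
x = np.linspace(-30, 30, 20001)
dx = x[1] - x[0]
f = (2 * np.pi * a) ** -0.25 * np.exp(-x**2 / (4 * a))

k = np.linspace(-3, 3, 601)
dk = k[1] - k[0]
# direct (slow but transparent) evaluation of the Fourier integral
fhat = np.array([(np.exp(2j * np.pi * kk * x) * f).sum() * dx for kk in k])

Ek2 = (k**2 * np.abs(fhat) ** 2).sum() * dk  # E(k^2) read off |f_hat|^2
assert abs(Ek2 * a - 1 / (16 * np.pi**2)) < 1e-6
```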

However, I am having trouble finding the covariance of $x$ and $k$ where covariance is defined as follows:

\begin{split}
COV(x,k)&= E[x k]-E[x]E[k]
\end{split}

Obviously, $x$ and $k$ are related, but how does that manifest in their covariance?

#### mathman

$x$ is a random variable, but what is $k$?

#### redtree

$k$ is the Fourier conjugate of $x$.

My understanding is that $k$ is not completely independent of $x$. Were $k$ and $x$ completely independent, i.e., $Cov(x,k) = 0$, then why would $E(x^2) E(k^2) = \frac{1}{16 \pi^2}$?

#### stevendaryl

Staff Emeritus
If $f(x)$ is an arbitrary probability distribution, then $\hat{f}(k)$ will not necessarily be real. For example, if $f(x) = 1/A$ for $0 \leq x \leq A$ and $0$ everywhere else, then $\hat{f}(k) = \dfrac{e^{ikA} - 1}{ikA}$, which is complex-valued. So in general, taking the Fourier transform of a probability distribution for $x$ does not give a probability distribution for $k$, and I don't understand what it means to compute an expectation value for $k$ from a probability distribution for $x$. Now, it happens that in your case $\hat{f}(k)$ is real, but why should it be interpreted as a probability distribution for $k$?
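The box example can be checked directly; a short sketch in which the values of $A$ and $k$ are arbitrary:

```python
import numpy as np

# Check of the box example above; the values of A and k are arbitrary.
A, k = 2.0, 1.3
fhat = (np.exp(1j * k * A) - 1) / (1j * k * A)  # closed form from the post

# the same transform evaluated as a numeric integral of f(x) = 1/A on [0, A]
x = np.linspace(0, A, 20001)
dx = x[1] - x[0]
num = (np.exp(1j * k * x) / A).sum() * dx

assert abs(fhat - num) < 1e-3
assert abs(fhat.imag) > 0.1  # genuinely complex, so not a p.d.f. for k
```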

#### redtree

$f(x)$ is not an arbitrary probability distribution. I have defined $f(x)$ as the probability amplitude of the normalized Gaussian. Furthermore, the Fourier transform of a Gaussian probability distribution is also a Gaussian probability distribution (and the Fourier transform of the probability amplitude of the normalized Gaussian is also the probability amplitude of a normalized Gaussian). See for example: http://mathworld.wolfram.com/FourierTransformGaussian.html

#### stevendaryl

Staff Emeritus
> $f(x)$ is not an arbitrary probability distribution. I have defined $f(x)$ as the probability amplitude of the normalized Gaussian. Furthermore, the Fourier transform of a Gaussian probability distribution is also a Gaussian probability distribution (and the Fourier transform of the probability amplitude of the normalized Gaussian is also the probability amplitude of a normalized Gaussian). See for example: http://mathworld.wolfram.com/FourierTransformGaussian.html

Yes, $\hat{f}(k)$ happens to be a Gaussian, but why do you interpret it as a probability distribution for $k$?

And $f(x)$ is not a probability amplitude, it is a probability distribution.

#### mathman

Using $\hat f(k)$ as a distribution function and defining a random variable with this distribution says nothing about the relationship between this variable and the one with distribution $f(x)$.

#### redtree

> Yes, $\hat{f}(k)$ happens to be a Gaussian, but why do you interpret it as a probability distribution for $k$?
>
> And $f(x)$ is not a probability amplitude, it is a probability distribution.

The mathematical uncertainty principle is a description of the relationship between the variances of Fourier conjugates with normalized Gaussian distributions. In order to calculate both variances, one must interpret both $f(x)$ and $\hat{f}(k)$ as probability functions. See for example: http://www.ams.org/publicoutreach/feature-column/fcarc-uncertainty

In order for normalization to hold in both position and wavenumber space, one must transform the square root of the normalized Gaussian probability density function, which I call the probability amplitude function. Happy to demonstrate if necessary.
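A minimal numeric sketch of that demonstration (the value used for $E(x^2)$ is arbitrary): transforming the p.d.f. itself generally loses normalization, while transforming its square root preserves it, by Plancherel.

```python
import numpy as np

# Contrast of the two routes; a plays the role of E(x^2), value arbitrary.
a = 0.5
x = np.linspace(-30, 30, 20001); dx = x[1] - x[0]
k = np.linspace(-3, 3, 601);     dk = k[1] - k[0]

pdf = (2 * np.pi * a) ** -0.5 * np.exp(-x**2 / (2 * a))  # normalized Gaussian
amp = np.sqrt(pdf)                                       # its square root

def ft(g):
    # exp(2*pi*i*k*x) convention, evaluated as a plain numeric integral
    return np.array([(np.exp(2j * np.pi * kk * x) * g).sum() * dx for kk in k])

pdf_route = ft(pdf).real.sum() * dk            # integral of F[pdf]
amp_route = (np.abs(ft(amp)) ** 2).sum() * dk  # integral of |F[amp]|^2

assert abs(amp_route - 1.0) < 1e-6  # normalization preserved (Plancherel)
assert abs(pdf_route - 1.0) > 0.1   # generally lost when transforming the p.d.f.
```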

The relationship between the variances of the two related functions is well-known, i.e., the mathematical uncertainty principle. I am merely trying to understand the relationship between their expected values as described by their covariance.


#### redtree

> Using $\hat f(k)$ as a distribution function and defining a random variable with this distribution says nothing about the relationship between this variable and the one with distribution $f(x)$.

The mathematical uncertainty principle says a lot about the relationship between Fourier conjugates, specifically the relationship between their variances. Again, I am merely trying to see what has been discovered regarding the relationship between their expected values as represented by the covariance.

#### stevendaryl

Staff Emeritus
> The mathematical uncertainty principle is a description of the relationship between the variances of Fourier conjugates with normalized Gaussian distributions. In order to calculate both variances, one must interpret both $f(x)$ and $\hat{f}(k)$ as probability functions. See for example: http://www.ams.org/publicoutreach/feature-column/fcarc-uncertainty

It is certainly true that if $f(x) = A e^{-\lambda x^2}$, then its Fourier transform $\hat{f}(k)$ has the same form:

$\hat{f}(k) = A' e^{-\lambda' k^2}$

where $A' = A/\sqrt{2\lambda}$ and $\lambda' = 1/(4 \lambda)$
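This transform pair can be verified symbolically, assuming the convention $\hat{f}(k) = \frac{1}{\sqrt{2\pi}} \int f(x) e^{-ikx} dx$ stated later in this post:

```python
import sympy as sp

# Symbolic check of the Gaussian transform pair above, assuming the
# convention f_hat(k) = (1/sqrt(2*pi)) * Integral(f(x)*exp(-I*k*x)).
x, k = sp.symbols('x k', real=True)
A, lam = sp.symbols('A lamda', positive=True)

f = A * sp.exp(-lam * x**2)
fhat = sp.integrate(f * sp.exp(-sp.I * k * x), (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)

# A' = A/sqrt(2*lam), lam' = 1/(4*lam), as claimed
expected = A / sp.sqrt(2 * lam) * sp.exp(-k**2 / (4 * lam))
assert sp.simplify(fhat - expected) == 0
```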

That's a fact about functions and their Fourier transforms. But that has nothing to do with interpreting $f(x)$ and $\hat{f}(k)$ as probability distributions. There is no meaning to the expectation value you are asking about: $E(xk)$, as far as I know. That quantity makes no sense for probability distributions. It does make sense for probability amplitudes in the sense of quantum mechanics. If $\psi(x)$ is a probability amplitude, so that $|\psi(x)|^2$ is the corresponding probability distribution, then we can Fourier-transform $\psi(x)$ to get an amplitude for $k$: $\hat{\psi}(k) = \frac{1}{\sqrt{2\pi}} \int \psi(x) e^{-ikx} dx$. Then we can interpret the operator $\hat{k}$ via:

$\hat{k} \psi(x) = -i \frac{d \psi}{dx}$

Then the expectation value $\langle x \hat{k} \rangle = \int \psi^*(x) x \hat{k} \psi(x) dx$
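A numeric sketch of this operator expectation for a real Gaussian $\psi$ (the width is arbitrary): the result comes out purely imaginary, $i/2$, so its real (symmetrized) part, the natural covariance candidate, vanishes.

```python
import numpy as np

# Numeric version of the expectation above for a real Gaussian psi;
# sigma2 is an arbitrary illustrative width.
sigma2 = 1.5
x = np.linspace(-40, 40, 80001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma2) ** -0.25 * np.exp(-x**2 / (4 * sigma2))
dpsi = np.gradient(psi, dx)  # k_hat psi = -i * dpsi

expval = (psi * x * (-1j) * dpsi).sum() * dx  # <x k_hat>

# for any real normalized psi (vanishing at infinity) this is purely
# imaginary, i/2; its real (symmetrized) part is zero
assert abs(expval - 0.5j) < 1e-4
```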

#### redtree

I understand what you are saying, and I don't want to make this a debate about definitions. I note that the American Mathematical Society considers the Fourier transform of the square root of the normalized Gaussian also to be the square root of a p.d.f., i.e., both $f(x)^2$ and $\hat{f}(k)^2$ are p.d.f.'s.
See: http://www.ams.org/publicoutreach/feature-column/fcarc-uncertainty

Specifically:
> **And More Generally ...**
>
> This property does not generalize to arbitrary probability distribution functions (p.d.f.'s) because $\mathscr{F}f$ may not be a p.d.f. at all (it was not for our $f_h$ examples) and its variance may not be defined. On the other hand, the Fourier transform preserves the integral of the square of a function (this is the Plancherel Theorem). So if $f^2$ is a p.d.f., then $(\mathscr{F}f)^2$, which is automatically non-negative, will also have total integral equal to 1: it is also a p.d.f.

Perhaps, we can just agree to disagree on this point.

What I really want to understand is the covariance relationship between $x$ and $k$. One can easily derive the following:

\begin{split}
E(x^2) E(k^2) &\geq E(x k)^2
\end{split}

I am trying to figure out how $E(x^2) E(k^2)$ relates to $E(x)^2 E(k)^2$ and for that I need to determine the covariance of $x$ and $k$.

Of course, if the covariance is 0, then $E(x k) = E(x) E(k)$, which is fine. However, considering that $E(x^2)$ and $E(k^2)$ are closely related via uncertainty, I am not inclined to assume $Cov(x,k) = 0$.
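The inequality quoted above can be checked for any joint law one might posit for $(x, k)$; a sketch with a purely illustrative correlated-Gaussian joint distribution:

```python
import numpy as np

# Check of E(x^2) E(k^2) >= E(xk)^2 for a *posited* joint law;
# the correlated-Gaussian joint used here is purely illustrative.
rng = np.random.default_rng(0)
cov = [[1.0, 0.6], [0.6, 0.5]]
xs, ks = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

lhs = (xs**2).mean() * (ks**2).mean()  # E(x^2) E(k^2)
rhs = (xs * ks).mean() ** 2            # E(xk)^2
assert lhs >= rhs                      # Cauchy-Schwarz, holds for any joint law
```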


"Covariance of Fourier conjugates for Gaussian distributions"

### Physics Forums Values

We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving