Continuous limit of a multivariate normal distribution

AI Thread Summary
The discussion focuses on the challenges of calculating Fisher information for a parameter θ in a multivariate normal distribution as the variables approach a continuous distribution. The covariance matrix Σ becomes singular in this limit, complicating calculations. Participants suggest that the issue may relate to the need for defining a Gaussian process with a suitable mean and covariance function indexed by time. While stationary processes allow for some approximations in frequency space, the non-stationary nature of the discussed process requires alternative approaches. The thread highlights the importance of understanding covariance functions to address the singularity problem and explore potential solutions.
QuantizedFun
Hello everyone,

I am currently considering a set of random variables, \vec{x} = [x_1,x_2,...x_N], which are known to follow a multivariate normal distribution,
P(\vec{x}) \propto \mathrm{exp}(-\frac{1}{2}(\vec{x}-\vec{\mu})^\mathrm{T}\Sigma^{-1}(\vec{x}-\vec{\mu}))
The covariance matrix Σ and the vector of mean values μ are constructed numerically from quantum optics, and the purpose is to calculate the Fisher information for estimating a parameter θ,
\mathcal{I}(\theta) = \frac{\partial \vec{\mu}^\mathrm{T}}{\partial \theta}\Sigma^{-1} \frac{\partial \vec{\mu}}{\partial \theta}
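Numerically, this formula is best evaluated without forming Σ⁻¹ explicitly. A minimal sketch (the mean function `mean(theta)` and the toy check are hypothetical; the derivative is taken by central differences):

```python
import numpy as np

# Minimal sketch (hypothetical names): mean(theta) returns the mean
# vector mu(theta) of length N; Sigma is the N x N covariance matrix,
# assumed independent of theta as in the formula above.
def fisher_information(mean, Sigma, theta, eps=1e-6):
    # Central-difference estimate of dmu/dtheta.
    dmu = (mean(theta + eps) - mean(theta - eps)) / (2.0 * eps)
    # Solve Sigma x = dmu rather than forming Sigma^{-1} explicitly;
    # this is cheaper and more stable when Sigma is poorly conditioned.
    x = np.linalg.solve(Sigma, dmu)
    return float(dmu @ x)

# Toy check: mu(theta) = theta * v with Sigma = I gives I(theta) = |v|^2.
v = np.array([1.0, 2.0, 2.0])
I = fisher_information(lambda th: th * v, np.eye(3), theta=0.5)
```

Solving Σx = ∂μ/∂θ avoids the explicit inverse, which matters once Σ approaches singularity as described below.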

Now, I wish to take the limit as the set of variables approach a continuous distribution,
x_i \rightarrow x(t), \quad \quad \Sigma_{i,j} \rightarrow \Sigma(t,t')
In this case Σ⁻¹ is ill-defined, and numerically the calculations break down because Σ becomes (nearly) singular as neighbouring columns become almost identical in this limit.
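This breakdown is easy to reproduce. A small sketch (the grid and squared-exponential covariance are illustrative): as the grid is refined, the condition number of the sampled covariance matrix explodes, so its numerical inverse becomes meaningless.

```python
import numpy as np

# Illustration: a squared-exponential covariance
# Sigma_ij = exp(-(t_i - t_j)^2) sampled ever more finely on [0, 1].
# As neighbouring columns become nearly identical, the condition
# number blows up and Sigma^{-1} is numerically meaningless.
def condition_number(N):
    t = np.linspace(0.0, 1.0, N)
    Sigma = np.exp(-(t[:, None] - t[None, :]) ** 2)
    return np.linalg.cond(Sigma)

conds = [condition_number(N) for N in (5, 10, 20, 40)]
```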

I would be very grateful for any suggestions to solve a problem like this.

Is there perhaps a "standard" way to take this continuum limit of a multivariate normal distribution or maybe just of the inverse matrix?
 
What you are looking for is a Gaussian process (https://en.wikipedia.org/wiki/Gaussian_process), which includes countably and uncountably infinite collections of Gaussian random variables. As your variables are suitably indexed (by time), you just need to define the mean as a function of time and the covariance as a function of a pair of time points. I'm not sure how you would get the continuum limit of your covariance matrix however, as that should depend on its specific form.
 
madness said:
I'm not sure how you would get the continuum limit of your covariance matrix however, as that should depend on its specific form.

Are we looking for an "autocovariance function"? If so, there should be literature on how to estimate it from data.
 
Stephen Tashi said:
Are we looking for an "autocovariance function"? If so, there should be literature on how to estimate it from data.

In the continuous case? That would at least require the specification of some kind of functional form a priori, and then the fitting of parameters.
 
madness said:
What you are looking for is a Gaussian process (https://en.wikipedia.org/wiki/Gaussian_process), which includes countably and uncountably infinite collections of Gaussian random variables. As your variables are suitably indexed (by time), you just need to define the mean as a function of time and the covariance as a function of a pair of time points. I'm not sure how you would get the continuum limit of your covariance matrix however, as that should depend on its specific form.

Thank you, the process is indeed a Gaussian process. I have defined my mean μ(t) and the covariance function Σ(t,t') as functions of the continuous time variables t and t'.

This, however, does not allow me to construct a probability density P(x(t)) for the continuous time series x(t) (which would be needed to obtain the Fisher information from the definition).

After you made me aware that I am dealing with a Gaussian process, however, I was able to narrow my search.
It turns out that for such processes, the mean and covariance function can be considered in frequency space to yield approximate formulas for the Fisher information if the process is stationary (Σ(t,t') = Σ(t-t')),
see http://www.ese.wustl.edu/~nehorai/paper/papersadd/ieeetass90-2.pdf
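A discrete way to see this frequency-space structure (a hedged sketch, not the paper's derivation): on a regular grid with periodic boundary, a stationary covariance matrix is circulant, so the FFT diagonalizes it and its eigenvalues are samples of the spectral density. The toy periodic covariance below is assumed for illustration:

```python
import numpy as np

# On a regular periodic grid, a stationary covariance Sigma(t, t') = c(t - t')
# gives a circulant matrix, whose eigenvalues are the FFT of its first row.
N = 64
t = np.arange(N)
c = np.exp(-np.minimum(t, N - t) ** 2 / 50.0)   # symmetric periodic covariance row
Sigma = np.array([np.roll(c, k) for k in range(N)])  # circulant matrix

eig_fft = np.fft.fft(c).real                 # eigenvalues via FFT (real: c symmetric)
eig_np = np.sort(np.linalg.eigvalsh(Sigma))  # direct eigendecomposition for comparison

# A Whittle-type Fisher information would then read, schematically,
# I(theta) ~ sum_k |d mu_hat_k / d theta|^2 / eig_fft[k],
# i.e. the quadratic form of the formula above split over frequencies.
```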

The problem is that my process is not stationary, so I somehow need to extend these results or find another way around the problem.
 
QuantizedFun said:
The problem is that my process is not stationary, so I somehow need to extend these results or find another way around the problem.

That's an interesting problem and I don't want to see this thread die out. You might not appreciate an intuitive, uninformed discussion - but I can't resist.

If I understood whether your covariance function implies that you have an "ordinary" Gaussian process, I would understand whether the paper http://bpuig.perso.univ-pau.fr/articles/PuigPoirion.pdf would help. It's about simulation.

In terms of generalities, your problem involves the concept of integrating a functional over all possible paths. In general, it's not clear that such an integral exists, and if it exists, it may not be unique. From a simplistic point of view, the general pattern for an integral over all possible paths would be to define it as a limit of sums. Each sum would be a sum of terms of the form f(s_i) p(s_i), where f(s_i) is a number that "characterizes" some property of all the paths in a set s_i of paths, and p(s_i) gives the probability that the realized trajectory of the stochastic process falls in the set s_i. We'd like the sets s_i to partition the space of trajectories.

If a trajectory is simulated as a finite set of points then the finite vector of points can be taken to represent the set s_i defined as the set of all trajectories that pass through those points. The crucial question is whether one may assign a number f(s_i) to that set that "characterizes" the entire set of values \{ f(v): v \in s_i \}.
 
Stephen Tashi said:
If I understood whether your covariance function implies that you have an "ordinary" Gaussian process, I would understand whether the paper http://bpuig.perso.univ-pau.fr/articles/PuigPoirion.pdf would help. It's about simulation.

My covariance function looks like this:

[Attached image: plot of the covariance function Σ(t,t')]


A simpler example, where the problem of an ill-defined inverse arises, is

\Sigma(t,t') = \exp(-(t-t')^2)

If one could understand why this fails in contrast to e.g.

\Sigma(t,t') = \exp(-|t-t'|) or \Sigma(t,t') = 1-|t-t'|
where the inverse exists, a solution might be possible to find. These processes are of course stationary, but I believe the root of my problem is the same.
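The contrast between these two kernels shows up directly in the conditioning of the sampled covariance matrices (illustrative grid): the eigenvalues of the smooth Gaussian kernel decay extremely fast, while the spectrum of the non-smooth exponential kernel decays only polynomially, so its discretization stays invertible.

```python
import numpy as np

# Compare conditioning of the two kernels from the post on the same grid.
# The smooth squared-exponential kernel is numerically rank-deficient;
# the exponential (Ornstein-Uhlenbeck-type) kernel remains well behaved.
t = np.linspace(0.0, 1.0, 50)
D = np.abs(t[:, None] - t[None, :])
cond_rbf = np.linalg.cond(np.exp(-D ** 2))  # Sigma(t,t') = exp(-(t-t')^2)
cond_ou = np.linalg.cond(np.exp(-D))        # Sigma(t,t') = exp(-|t-t'|)
```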
 