Understanding Conditional Expectation, Variance, and Precision Matrices

Summary: The discussion focuses on understanding the derivation of conditional expectation and variance in the context of Gaussian Markov Random Fields (GMRFs), specifically referencing Lindgren, Rue, and Lindström's work. The conditional expectation is expressed as a function of neighbouring values, while the variance is defined as a constant dependent on a parameter ##a##. The precision matrix is introduced, with its structure changing when extending to second-order neighbours. The user seeks clarification on the origins of these expressions and the significance of the parameter ##a## in the scaling. Overall, the inquiry emphasizes the relationship between the conditional distributions and the precision matrix in GMRFs.
MAXIM LI
Homework Statement
$$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$
Relevant Equations
$$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$
My question relates to subsection 2.2.1 of [this article][1]. This subsection recalls the work of Lindgren, Rue, and Lindström (2011) on Gaussian Markov Random Fields (GMRFs). The subsection starts with a two-dimensional regular lattice where the 4 first-order neighbours of $u_{i,j}$ are identified. The article defines the full conditional distribution through the expectation $$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$ and variance $$Var(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}.$$
This is then rewritten in terms of the precision matrix ##Q##, where only the upper right quadrant is displayed (the diagonal element ##a## sits in the lower-left corner of the quadrant; the remaining quadrants follow by symmetry):
$$\begin{array}{cc}
-1 & \\
a & -1
\end{array}$$
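
To convince myself that this quadrant really encodes the full conditionals above, I wrote a small numerical check (my own sketch, with an arbitrary lattice size and a hypothetical value ##a = 5##; it only uses the standard Gaussian conditioning formulas, not anything from the article):

```python
import numpy as np

# Build the first-order precision matrix on a small n x n lattice:
# Q[k, k] = a and Q[k, l] = -1 whenever l is a first-order neighbour of k.
n, a = 5, 5.0                       # hypothetical choices; a > 4 keeps Q positive definite
idx = lambda i, j: i * n + j        # map lattice coordinates to a single index

Q = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        Q[idx(i, j), idx(i, j)] = a
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            if 0 <= i + di < n and 0 <= j + dj < n:
                Q[idx(i, j), idx(i + di, j + dj)] = -1.0

Sigma = np.linalg.inv(Q)            # joint covariance of the zero-mean field

k = idx(2, 2)                       # an interior node with all four neighbours
rest = [m for m in range(n * n) if m != k]

# Standard Gaussian conditioning: Var(u_k | rest) and the weights in E(u_k | rest)
S_inv = np.linalg.inv(Sigma[np.ix_(rest, rest)])
cond_var = Sigma[k, k] - Sigma[k, rest] @ S_inv @ Sigma[rest, k]
weights = Sigma[k, rest] @ S_inv

print(np.isclose(cond_var, 1 / a))  # True: Var(u_ij | rest) = 1/a
nbrs = [rest.index(idx(1, 2)), rest.index(idx(3, 2)),
        rest.index(idx(2, 1)), rest.index(idx(2, 3))]
print(np.allclose(weights[nbrs], 1 / a))          # True: weight 1/a on each neighbour
print(np.allclose(np.delete(weights, nbrs), 0))   # True: weight 0 everywhere else
```

This reproduces the stated conditional mean and variance for that choice of ##a##, but I still do not see how one derives the quadrant above from the full conditionals in general.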

Extending to second-order neighbours (i.e. the neighbours of the first-order neighbours), the precision matrix becomes (again, only the upper right quadrant is shown)
$$\begin{array}{ccc}
1 & & \\
-2a & 2 & \\
4+a^2 & -2a & 1
\end{array}$$
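
One pattern I did notice (this is my own observation, not something stated in the part I quoted, so I may be misreading it): the second-order coefficients look like the first-order stencil convolved with itself, as if the operator were applied twice. A quick check with a hypothetical numerical value of ##a##:

```python
import numpy as np
from scipy.signal import convolve2d

a = 4.5                                   # hypothetical value, only for this check
first = np.array([[ 0, -1,  0],
                  [-1,  a, -1],
                  [ 0, -1,  0]])

second = convolve2d(first, first)         # 5x5 array of second-order coefficients
expected = np.array([[0,    0,         1,      0,   0],
                     [0,    2,      -2*a,      2,   0],
                     [1, -2*a,  4 + a**2,   -2*a,   1],
                     [0,    2,      -2*a,      2,   0],
                     [0,    0,         1,      0,   0]])
print(np.allclose(second, expected))      # True
```

If that is indeed the construction, I would still like to understand why convolving the stencil with itself (i.e. squaring the operator) is the right way to extend to second-order neighbours.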

I am new to this topic and am trying to understand where the expressions for the conditional expectation and variance come from, and how the precision matrices were derived. I'd appreciate a thorough explanation and derivation for both the first-order and second-order cases. I tried looking in the book 'Gaussian Markov Random Fields: Theory and Applications', and this looks very similar to a conditional autoregression (CAR) model, defined in Chapter 1. However, there the full conditionals are written as

$$
x_i \vert \mathbf{x}_{-i} \sim N\left(\sum_{j\neq i}\beta_{ij}x_{j},\kappa_i^{-1} \right)
$$

and the elements of the corresponding precision matrix are stated to be ##Q_{ii} = \kappa_i## and ##Q_{ij} = -\kappa_{i}\beta_{ij}## for ##i\neq j##. This seems to be more general, which leaves me wondering how the conditional mean and variance at the start of this post were derived (along with the precision matrices). Where did ##a## come from, and why do we scale by this amount? Any help addressing this is much appreciated.

Note that ##\mathbf{x}_{-i}## means the vector of random variables excluding ##x_i##.
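
For what it's worth, here is my own attempt at matching the two parameterisations (I am not sure this is the intended reading): comparing the general CAR form with the specific conditional mean and variance above suggests
$$\kappa_i = a, \qquad \beta_{ij} = \begin{cases} 1/a & j \text{ a first-order neighbour of } i,\\ 0 & \text{otherwise,}\end{cases}$$
which would give ##Q_{ii} = \kappa_i = a## and ##Q_{ij} = -\kappa_i\beta_{ij} = -1## for the four neighbours, i.e. the first quadrant above. But I do not know whether this identification is how the article actually derives it, or why the conditional mean is scaled by exactly ##1/a## in the first place.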

[1]: https://becarioprecario.bitbucket.io/spde-gitbook/ch-intro.html#sec:spde
 
