Understanding Conditional Expectation, Variance, and Precision Matrices

MAXIM LI
Homework Statement
$$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$
Relevant Equations
$$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$
My question relates to subsection 2.2.1 of [this article][1]. This subsection recalls the work of Lindgren, Rue, and Lindström (2011) on Gaussian Markov Random Fields (GMRFs). The subsection starts with a two-dimensional regular lattice where the 4 first-order neighbours of $u_{i,j}$ are identified. The article defines the full conditional distribution through the expectation $$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$ and variance $$Var(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}.$$
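To make the question concrete, here is a small numerical check I put together (a sketch only: the value ##a = 5## and the periodic ##5\times 5## lattice are my own choices, with ##a > 4## so the matrix is diagonally dominant). It builds a joint Gaussian whose precision matrix has ##a## on the diagonal and ##-1## between first-order neighbours, as in the quadrant quoted below, and confirms via the standard multivariate-normal conditioning formulae that the weight on each of the four neighbours is ##1/a## and the conditional variance is ##1/a##:

```python
import numpy as np

a = 5.0  # my choice; needs a > 4 here so Q is diagonally dominant
n = 5    # 5 x 5 lattice, made periodic to avoid boundary effects

def idx(i, j):
    """Map (periodic) lattice coordinates to a flat index."""
    return (i % n) * n + (j % n)

# Precision matrix: a on the diagonal, -1 between first-order neighbours
Q = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        Q[idx(i, j), idx(i, j)] = a
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            Q[idx(i, j), idx(i + di, j + dj)] = -1.0

Sigma = np.linalg.inv(Q)  # joint covariance

# Condition the centre site on all the others with the usual MVN formulae
k = idx(2, 2)
rest = [m for m in range(n * n) if m != k]
S_rr_inv = np.linalg.inv(Sigma[np.ix_(rest, rest)])
weights = np.zeros(n * n)
weights[rest] = Sigma[k, rest] @ S_rr_inv
cond_var = Sigma[k, k] - Sigma[k, rest] @ S_rr_inv @ Sigma[rest, k]

print(cond_var, 1 / a)                     # both 0.2
print(np.round(weights.reshape(n, n), 4))  # 1/a on the four neighbours, 0 elsewhere
```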
This conditional specification is then rewritten in terms of the precision matrix, where the upper-right quadrant is
$$\begin{array}{cc}
-1 & \\
a & -1
\end{array}$$
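Unfolding this quadrant by symmetry, I read the full first-order stencil around ##u_{i,j}## as
$$\begin{array}{ccc}
 & -1 & \\
-1 & a & -1 \\
 & -1 &
\end{array}$$
i.e. row ##(i,j)## of the precision matrix has ##a## at the site itself and ##-1## at each of its four neighbours (my reading; corrections welcome).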

Extending to second-order neighbours (i.e. the neighbours of the first-order neighbours), the precision matrix becomes (again, just the upper-right quadrant)
$$\begin{array}{ccc}
1 & & \\
-2a & 2 & \\
4+a^2 & -2a & 1
\end{array}$$
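My guess is that this second-order stencil is just the first-order stencil convolved with itself, but I have not seen this stated explicitly. The following quick check (again with the arbitrary choice ##a = 5##) is at least consistent with that guess:

```python
import numpy as np
from scipy.signal import convolve2d

a = 5.0  # arbitrary choice with a > 4

# First-order precision stencil: a at the centre, -1 at the four neighbours
h1 = np.array([[ 0.0, -1.0,  0.0],
               [-1.0,    a, -1.0],
               [ 0.0, -1.0,  0.0]])

# Guess: second-order stencil = first-order stencil convolved with itself
h2 = convolve2d(h1, h1)

# The stencil quoted above, unfolded to the full 5 x 5 neighbourhood
expected = np.array([
    [0,      0,         1,      0,  0],
    [0,      2,    -2 * a,      2,  0],
    [1, -2 * a, 4 + a**2, -2 * a,  1],
    [0,      2,    -2 * a,      2,  0],
    [0,      0,         1,      0,  0],
])
print(np.allclose(h2, expected))  # True
```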

I am new to this topic and am trying to understand where the expressions for the conditional expectation and variance came from, and how the precision matrices were derived. I'd appreciate a full explanation and derivation for both the first-order and the second-order case. I tried looking in the book 'Gaussian Markov Random Fields: Theory and Applications' (Rue and Held), and this looks very similar to the conditional autoregression (CAR) model defined in its Chapter 1. There, however, the full conditionals are written as

$$
x_i \vert \mathbf{x}_{-i} \sim N\left(\sum_{j\neq i}\beta_{ij}x_{j},\kappa_i^{-1} \right)
$$

and the elements of the corresponding precision matrix are stated to be ##Q_{ii} = \kappa_i## and ##Q_{ij} = -\kappa_{i}\beta_{ij}## for ##i\neq j##. This seems to be more general, which leaves me wondering how the conditional mean and variance at the start of this post were derived (along with the precision matrices). Where did ##a## come from, and why do we scale by this amount? Any help addressing this is much appreciated.

Note that ##\mathbf{x}_{-i}## means the vector of random variables excluding ##x_i##.
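For what it's worth, here is my attempt at matching the two parameterisations, which I would be grateful to have checked. Taking ##\kappa_i = a## and ##\beta_{ij} = 1/a## for the four first-order neighbours ##j## of ##i## (and ##\beta_{ij} = 0## otherwise), the CAR full conditionals reduce to exactly the mean and variance quoted at the top, and the stated precision elements become
$$Q_{ii} = \kappa_i = a, \qquad Q_{ij} = -\kappa_i\beta_{ij} = -a\cdot\frac{1}{a} = -1,$$
which matches the ##a## and ##-1## entries of the first-order quadrant. What I still cannot see is why this is the natural scaling, or how the second-order entries follow from it.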

[1]: https://becarioprecario.bitbucket.io/spde-gitbook/ch-intro.html#sec:spde
 
