I need to combine several vector-valued estimates of a physical quantity in order to obtain a better estimate with less uncertainty.

As in the scalar case, the weighted mean of multiple estimates provides a maximum likelihood estimate. For independent estimates we simply replace the variance ##\sigma^2## by the covariance matrix ##\Sigma## and the arithmetic inverse by the matrix inverse (both denoted the same way, via the superscript ##-1##); the weight matrix then reads (see https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Vector-valued_estimates)

$$W_i = \Sigma_i^{-1}$$

The weighted mean in this case is:

$$\bar x = \Sigma_{\bar x} \left(\sum_{i=1}^n W_i \mathbf{x}_i\right)$$

(note that the order of the matrix-vector products matters here).

The covariance of the weighted mean is:

$$\Sigma_{\bar x} = \left(\sum_{i=1}^n W_i\right)^{-1}$$

For example, consider the weighted mean of the point ##[1~0]^\top##, which has high variance in the second component, and the point ##[0~1]^\top##, which has high variance in the first component. Take

$$x_1 := \begin{bmatrix}1\\0\end{bmatrix}, \qquad \Sigma_1 := \begin{bmatrix}1 & 0\\ 0 & 100\end{bmatrix}$$

$$x_2 := \begin{bmatrix}0\\1\end{bmatrix}, \qquad \Sigma_2 := \begin{bmatrix}100 & 0\\ 0 & 1\end{bmatrix}$$

The weighted mean is then:

$$ \bar x = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1} \left(\Sigma_1^{-1} \mathbf{x}_1 + \Sigma_2^{-1} \mathbf{x}_2\right) \\[5pt] =\begin{bmatrix} 0.9901 &0\\ 0& 0.9901\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0.9901 \\ 0.9901\end{bmatrix}$$
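As a quick numerical check, the worked example above can be reproduced in a few lines of NumPy (a sketch of the calculation, not part of the original question):

```python
import numpy as np

# Two vector-valued estimates of the same quantity, with their covariances
# (the worked example from the text).
x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])
S1 = np.diag([1.0, 100.0])   # high variance in the second component
S2 = np.diag([100.0, 1.0])   # high variance in the first component

# Weights are the inverse covariances; the covariance of the weighted
# mean is the inverse of the summed weights.
W1, W2 = np.linalg.inv(S1), np.linalg.inv(S2)
S_bar = np.linalg.inv(W1 + W2)
x_bar = S_bar @ (W1 @ x1 + W2 @ x2)

print(np.round(x_bar, 4))   # [0.9901 0.9901]
```

The result matches the hand calculation: each component is ##1/1.01 \approx 0.9901##.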

On the other hand, for scalar quantities it is well known that correlations between estimates can easily be accounted for. In the general case (see https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Accounting_for_correlations), suppose that ##X=[x_1,\dots,x_n]^\top##, ##C## is the covariance matrix relating the quantities ##x_i##, ##\bar x## is the common mean to be estimated, and ##W## is the design matrix ##[1, \dots, 1]^\top## (of length ##n##). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by:

$$\bar x = \sigma^2_{\bar x} \left(W^\top C^{-1} X\right)$$

with

$$\sigma^2_{\bar x} = \left(W^\top C^{-1} W\right)^{-1}$$
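The scalar correlated case can likewise be sketched numerically. The covariance matrix ##C## below is an assumed illustrative value, not taken from the original post:

```python
import numpy as np

# Two scalar estimates of a common mean with a known correlation.
# C is an assumed illustrative covariance matrix.
X = np.array([1.0, 2.0])
C = np.array([[1.0, 0.5],
              [0.5, 2.0]])
W = np.ones((2, 1))          # design matrix [1, 1]^T

Ci = np.linalg.inv(C)
var_bar = np.linalg.inv(W.T @ Ci @ W)    # variance of the combined estimate
x_bar = var_bar @ (W.T @ Ci @ X)         # minimum-variance (BLUE) mean

print(x_bar.item(), var_bar.item())      # 1.25 0.875
```

Note that the off-diagonal term pulls the combined estimate away from the naive inverse-variance weighting of the two values.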

The question is, how can correlated vector-valued estimates be combined?

In our case, how to proceed if ##x_1## and ##x_2## are not independent and all the terms in the covariance matrix are known?

In other words, are there analogous expressions to the last two for vector-valued estimates?

Any suggestion or reference, please?

**Physics Forums | Science Articles, Homework Help, Discussion**


# How to combine correlated vector-valued estimates
