Notation: Var(Y) is the variance-covariance matrix of a random vector Y, and B' is the transpose of the matrix B.

1) Let A be an m x n matrix of constants, and Y an n x 1 random vector. Then Var(AY) = A Var(Y) A'.

Proof:
Var(AY) = E[(AY - A E(Y)) (AY - A E(Y))']
        = E[A (Y - E(Y)) (Y - E(Y))' A']
        = A E[(Y - E(Y)) (Y - E(Y))'] A'
        = A Var(Y) A'

I don't understand the third equality, where A and A' are pulled outside the expectation. What theorem is that step using? I remember a theorem that says if B is an m x n matrix of constants and X is an n x 1 random vector, then BX is an m x 1 random vector and E(BX) = B E(X). But that theorem doesn't even apply here, since it requires X to be a column vector, while (Y - E(Y)) (Y - E(Y))' is an n x n random matrix.

2) Theorem: Let Y be an n x 1 random vector and B an n x 1 vector of constants (nonrandom). Then Var(B + Y) = Var(Y).

I don't see why this is true. How can we prove it? Is it also true that Var(Y + B) = Var(Y)?

Any help is greatly appreciated!
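
Edit: to convince myself that the identity in 1) at least holds numerically, I ran a quick Monte Carlo check with numpy. The normal distribution, the matrix A, and the covariance matrix are arbitrary choices of mine, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example: A is a 2 x 3 constant matrix, Y is a 3 x 1
# random vector with a chosen covariance matrix cov_Y.
A = np.array([[1.0, 2.0,  0.0],
              [0.0, 1.0, -1.0]])
cov_Y = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# Draw many samples of Y ~ N(0, cov_Y); each row is one sample of Y'.
samples = rng.multivariate_normal(np.zeros(3), cov_Y, size=200_000)

# Empirical Var(AY): transform each sample, then take the sample covariance.
AY = samples @ A.T                # rows are (A Y)'
empirical = np.cov(AY, rowvar=False)

# Theoretical A Var(Y) A'.
theoretical = A @ cov_Y @ A.T

print(np.round(empirical, 3))
print(np.round(theoretical, 3))   # the two should agree closely
```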
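
And similarly for 2): empirically, shifting the samples by a constant vector B leaves the sample covariance unchanged (again, B and the covariance are arbitrary values I picked):

```python
import numpy as np

rng = np.random.default_rng(1)

cov_Y = np.array([[2.0, 0.5],
                  [0.5, 1.0]])    # Var(Y) for a 2 x 1 random vector Y
B = np.array([10.0, -3.0])        # constant (nonrandom) shift

samples = rng.multivariate_normal(np.zeros(2), cov_Y, size=200_000)

# Sample covariance before and after adding the constant vector B.
var_Y = np.cov(samples, rowvar=False)
var_B_plus_Y = np.cov(samples + B, rowvar=False)

print(np.round(var_Y, 3))
print(np.round(var_B_plus_Y, 3))  # identical up to floating-point noise
```

So both statements look right numerically; I just can't justify the steps in the proofs.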