I have a large weighted least squares system with millions of observations. I can't store the least squares matrix in memory, so I only store the normal equations matrix [tex] A^T W A [/tex] where [itex]A[/itex] is dense and n-by-p and [itex]W = diag(w_1,...,w_n)[/itex]. Because of the large number of observations, building this normal equations matrix takes a very long time.

However, I also want to iteratively re-weight the observations to reduce the effect of outliers, so at the next iteration the weight matrix will be the product of the original weights [itex]W[/itex] with some new weights [itex]V = diag(v_1,...,v_n)[/itex]. Therefore I need to construct the matrix [tex] A^T W' A [/tex] where [itex]W' = diag(w_1 v_1, w_2 v_2, ..., w_n v_n)[/itex].

Does anyone know of a slick (fast) way to compute [itex]A^T W' A[/itex] given the matrix [itex]A^T W A[/itex] and the new weights [itex]V[/itex]? I am hoping I don't have to start over from scratch building [itex]A^T W' A[/itex] at each iteration, since it takes so long. I searched the literature and didn't find much on this topic.
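To make the setup concrete, here is a minimal NumPy sketch (all names and data are hypothetical, and the toy problem is small enough to hold [itex]A[/itex] in memory just for checking) of the two constructions described above: the one-row-at-a-time accumulation of [itex]A^T W A[/itex], and the full second pass that would be needed to build [itex]A^T W' A[/itex] from scratch with the combined weights [itex]w_i v_i[/itex].

```python
import numpy as np

# Hypothetical toy data; in the real problem the rows of A would be
# streamed from disk rather than stored as a full n-by-p array.
rng = np.random.default_rng(0)
n, p = 1000, 5
A = rng.standard_normal((n, p))
w = rng.uniform(0.5, 2.0, size=n)   # original weights
v = rng.uniform(0.0, 1.0, size=n)   # new outlier-downweighting factors

# Streaming accumulation of the normal-equations matrix A^T W A,
# one observation (row) at a time, so A never has to be in memory.
AtWA = np.zeros((p, p))
for i in range(n):
    a = A[i]
    AtWA += w[i] * np.outer(a, a)

# The matrix needed at the next iteration, built the slow way:
# a full second pass over all observations with weights w_i * v_i.
AtWpA = np.zeros((p, p))
for i in range(n):
    a = A[i]
    AtWpA += w[i] * v[i] * np.outer(a, a)

# Sanity checks against the dense matrix formulation.
assert np.allclose(AtWA, A.T @ np.diag(w) @ A)
assert np.allclose(AtWpA, A.T @ np.diag(w * v) @ A)
```

Each pass costs O(n p^2), which is what makes rebuilding from scratch at every re-weighting iteration so expensive when n is in the millions.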