constraints on matrix-variate normal distributions
I'm wondering about matrix-variate normal distributions. The standard setup assumes an n x p random matrix X, an n x n row covariance matrix Omega, and a p x p column covariance matrix Sigma. I'm wondering how the probability density function changes if X is constrained to be a square, symmetric matrix that still has covariances among its rows and columns.
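For context, here is a quick numerical check I find helpful (variable names are my own): the matrix-normal density of X agrees with the multivariate-normal density of vec(X), the column-stacked vectorization, with Kronecker covariance kron(Sigma, Omega). This is the unconstrained case, before imposing any symmetry on X.

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(0)
n, p = 3, 3

# Mean matrix and arbitrary positive-definite row/column covariances.
M = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
Omega = A @ A.T + n * np.eye(n)   # row covariance (n x n)
B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)   # column covariance (p x p)

X = matrix_normal.rvs(mean=M, rowcov=Omega, colcov=Sigma, random_state=rng)

# Matrix-normal density of X ...
pdf_matrix = matrix_normal.pdf(X, mean=M, rowcov=Omega, colcov=Sigma)

# ... equals the multivariate-normal density of vec(X) (stack columns)
# with covariance kron(Sigma, Omega).
vec = lambda Z: Z.flatten(order="F")
pdf_vec = multivariate_normal.pdf(vec(X), mean=vec(M), cov=np.kron(Sigma, Omega))

print(np.isclose(pdf_matrix, pdf_vec))   # prints True
```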
Can it be shown that, even given these covariances, the matrix can be vectorized without loss of information?
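To make the vectorization question concrete, here is a small sketch of what I have in mind (constructions are my own): for a symmetric X, the half-vectorization vech(X), which keeps only the lower triangle, already determines the full vec(X) through a duplication matrix, so the redundant upper-triangular entries carry no extra information.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A symmetric random matrix.
A = rng.standard_normal((n, n))
X = (A + A.T) / 2

# vec stacks columns; vech keeps only the lower triangle (incl. diagonal).
vec = X.flatten(order="F")
rows, cols = np.tril_indices(n)
vech = X[rows, cols]

# Duplication matrix D: maps vech(X) back to vec(X) for any symmetric X.
D = np.zeros((n * n, n * (n + 1) // 2))
for k, (i, j) in enumerate(zip(rows, cols)):
    D[j * n + i, k] = 1.0   # entry (i, j) in column-stacked order
    D[i * n + j, k] = 1.0   # mirrored entry (j, i); same slot when i == j

print(np.allclose(D @ vech, vec))   # prints True
```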
Is it the case, for this constrained example, that the row and column covariance matrices are equal (Omega = Sigma)? If so, the determinant factor in the denominator of the normalizing constant outside the exponential, normally |Sigma|^(n/2) |Omega|^(p/2), would reduce to |Sigma|^(n/2) multiplied by itself, i.e. |Sigma|^n.
I'm ultimately interested in maximum likelihood estimation of Omega and Sigma under this simplified example.
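As a starting point, I understand that in the unconstrained model the MLE is usually computed with the "flip-flop" alternating scheme, iterating closed-form updates for Sigma given Omega and vice versa. Here is a minimal sketch (function name and details are my own); note it does not impose the symmetry constraint I'm asking about, and the two factors are only identifiable up to a scalar, so I normalize Omega:

```python
import numpy as np
from scipy.stats import matrix_normal

def flip_flop_mle(X_samples, n_iter=200, tol=1e-10):
    """Alternating ('flip-flop') MLE for matrix-normal covariances.

    X_samples: array of shape (N, n, p); the mean is estimated by the
    sample mean. Returns (Omega_hat, Sigma_hat), with Omega normalized
    so Omega[0, 0] == 1 (the pair is identifiable only up to a scalar).
    """
    N, n, p = X_samples.shape
    R = X_samples - X_samples.mean(axis=0)   # residuals
    Omega, Sigma = np.eye(n), np.eye(p)
    for _ in range(n_iter):
        Sigma_old = Sigma
        # Sigma update: (1 / (N n)) * sum_a R_a^T Omega^{-1} R_a
        Oinv = np.linalg.inv(Omega)
        Sigma = np.einsum('aji,jk,akl->il', R, Oinv, R) / (N * n)
        # Omega update: (1 / (N p)) * sum_a R_a Sigma^{-1} R_a^T
        Sinv = np.linalg.inv(Sigma)
        Omega = np.einsum('aij,jk,alk->il', R, Sinv, R) / (N * p)
        Omega = Omega / Omega[0, 0]          # fix scale indeterminacy
        if np.max(np.abs(Sigma - Sigma_old)) < tol:
            break
    return Omega, Sigma

# Usage sketch: recover the (identifiable) Kronecker covariance.
rng = np.random.default_rng(2)
Omega_true = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_true = np.array([[1.0, -0.3], [-0.3, 0.5]])
Xs = matrix_normal.rvs(mean=np.zeros((2, 2)), rowcov=Omega_true,
                       colcov=Sigma_true, size=5000, random_state=rng)
Omega_hat, Sigma_hat = flip_flop_mle(Xs)
print(np.allclose(np.kron(Sigma_hat, Omega_hat),
                  np.kron(Sigma_true, Omega_true), atol=0.15))
```

For the symmetric case I'm asking about, I imagine the updates would have to be modified (e.g. tying Omega and Sigma together), which is exactly the part I'm unsure how to do.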