# Linear Algebra: Question about Inverse of Diagonal Matrices

## Homework Statement

Not for homework, but just for understanding.

So we know that if a matrix (M) is orthogonal, then its transpose is its inverse.

Using that knowledge for a diagonalised matrix (with the eigenvalues on the diagonal): its column vectors are all mutually orthogonal, and thus you would assume that its inverse is its transpose...

However, that is wrong: the inverse is actually just the diagonalised matrix with its non-zero entries reciprocated. I understand the proof, but fail to see why the above logic wouldn't apply (beyond the fact that it wouldn't multiply to the identity matrix).

Can anyone help me see where I have gone wrong in trying to combine the two theorems?
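The two diagonal-matrix facts in question can be checked numerically. A minimal pure-Python sketch, using the arbitrary example diagonal entries 2 and 5 (not taken from the thread):

```python
# Check: for diagonal D, the inverse is D with entries reciprocated,
# while D times its own transpose (D itself, since D is symmetric)
# is NOT the identity. Entries 2 and 5 are arbitrary example values.

d = [2.0, 5.0]                    # diagonal entries of D
d_inv = [1.0 / x for x in d]      # entrywise reciprocals = diagonal of D^{-1}

# D @ D^{-1}, computed entrywise since both matrices are diagonal:
print([a * b for a, b in zip(d, d_inv)])   # [1.0, 1.0] -> the identity
# D @ D^T = D @ D: entrywise squares, clearly not the identity
print([a * a for a in d])                  # [4.0, 25.0]
```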

DrClaude
Mentor
Simply because the columns of a matrix are orthogonal doesn't mean that the matrix is orthogonal. The columns also have to be unit vectors.
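This point can be illustrated with a small pure-Python sketch; the matrix diag(2, 3) is a hypothetical example, not from the thread:

```python
# M = diag(2, 3): its columns are mutually orthogonal but not unit
# vectors, so M is not an orthogonal matrix (example values only).
M = [[2.0, 0.0],
     [0.0, 3.0]]

col0 = [row[0] for row in M]
col1 = [row[1] for row in M]
print(sum(a * b for a, b in zip(col0, col1)))   # 0.0 -> columns orthogonal

# M^T M is diag(4, 9), not the identity, so M^T is not M's inverse
MtM = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(MtM)   # [[4.0, 0.0], [0.0, 9.0]]
```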

> Simply because the columns of a matrix are orthogonal doesn't mean that the matrix is orthogonal. The columns also have to be unit vectors.
Thank you for your response. So what are the other conditions for orthogonality? Even if you normalised the column vectors, it still wouldn't be equal to the reciprocal.

Never mind, I realised that if you normalise the vectors, the product ##M^TM## becomes the identity matrix.

StoneTemplePython
Gold Member

> ## Homework Statement
>
> Not for homework, but just for understanding.
>
> So we know that if a matrix (M) is orthogonal, then its transpose is its inverse.
>
> Using that knowledge for a diagonalised matrix (with eigenvalues), its column vectors are all mutually orthogonal [should say orthonormal] and thus you would assume that its inverse is its transpose...
>
> However, that is wrong and the inverse is actually just the diagonalised matrix with its non-zero entries reciprocated. I understand the proof, but fail to see why the above logic wouldn't apply (beyond that it wouldn't multiply to the identity matrix)
>
> Can anyone help me see where I have gone wrong in trying to combine the two theorems?

From what I can tell, the issue is that you haven't carefully written out the math of what you think you're saying -- you're just writing sentences, and in the process confusing the underlying matrix/operator with a similarity transform.

##A = UDU^T##
##A^{-1} = \big(UDU^T\big)^{-1} = \big(U^T\big)^{-1}D^{-1}U^{-1} = UD^{-1} U^T##
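The identity above can be sanity-checked numerically. A pure-Python sketch with hypothetical example values: ##U## is a 45-degree rotation (so its columns are orthonormal) and ##D = \mathrm{diag}(2, 3)##; we verify that ##\big(UDU^T\big)\big(UD^{-1}U^T\big)## comes out as the identity.

```python
import math

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
U = [[c, -s], [s, c]]                       # rotation: U^T = U^{-1}
D = [[2.0, 0.0], [0.0, 3.0]]
D_inv = [[0.5, 0.0], [0.0, 1.0 / 3.0]]      # reciprocated diagonal

A = matmul(matmul(U, D), transpose(U))      # A = U D U^T
A_inv = matmul(matmul(U, D_inv), transpose(U))
prod = matmul(A, A_inv)
print(prod)    # the 2x2 identity, up to floating-point rounding
```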

PeroK
> From what I can tell, the issue is that you haven't carefully written out the math of what you think you're saying -- you're just writing sentences, and in the process confusing the underlying matrix/operator with a similarity transform.
>
> ##A = UDU^T##
> ##A^{-1} = \big(UDU^T\big)^{-1} = \big(U^T\big)^{-1}D^{-1}U^{-1} = UD^{-1} U^T##
Thanks for the response. I understand the maths that you have written, but perhaps the area I have gone wrong is this orthogonal / orthonormal confusion. Does an orthogonal matrix need to have normalised column vectors?

PeroK
Homework Helper
Gold Member
2020 Award
> Thanks for the response. I understand the maths that you have written, but perhaps the area I have gone wrong is this orthogonal / orthonormal confusion. Does an orthogonal matrix need to have normalised column vectors?

I thought you had proved this for yourself.

StoneTemplePython
Gold Member
> Thanks for the response. I understand the maths that you have written, but perhaps the area I have gone wrong is this orthogonal / orthonormal confusion. Does an orthogonal matrix need to have normalised column vectors?

so, quoting this:

> So we know that if a matrix (M) is orthogonal, then its transpose is its inverse.

you have ##M##, or in my write-up ##U##, as the matrix in question. Suppose it is square and its column vectors are mutually orthogonal.

Then ##U^T U = D## where ##D## is some diagonal matrix... but based on your quote, you want ##D = I##, so what does that tell you about ##\big \Vert \mathbf u_k \big \Vert_2^2##, the entry in the ##k##th diagonal spot of ##D##?
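A concrete (hypothetical) illustration of this: take columns (3, 4) and (-4, 3), which are mutually orthogonal with 2-norm 5 each, so ##U^TU## comes out as ##\mathrm{diag}(25, 25)## until the columns are normalised.

```python
import math

# Columns (3, 4) and (-4, 3): mutually orthogonal, each of 2-norm 5
# (example values only, not from the thread).
U = [[3.0, -4.0],
     [4.0,  3.0]]

def gram(X):
    """Return X^T X for a 2x2 matrix."""
    return [[sum(X[k][i] * X[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(gram(U))   # [[25.0, 0.0], [0.0, 25.0]] = diag(||u_k||_2^2)

# Normalising each column makes U^T U the identity (up to rounding)
norms = [math.sqrt(sum(U[k][j] ** 2 for k in range(2))) for j in range(2)]
U_unit = [[U[i][j] / norms[j] for j in range(2)] for i in range(2)]
print(gram(U_unit))
```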