Linear Algebra: Question about Inverse of Diagonal Matrices

Homework Help Overview

The discussion revolves around the properties of diagonal matrices and orthogonal matrices in linear algebra. Participants explore the relationship between a matrix's orthogonality and its inverse, particularly focusing on diagonalized matrices and their eigenvalues.

Discussion Character

  • Conceptual clarification, Assumption checking

Approaches and Questions Raised

  • Participants examine the assumption that orthogonal column vectors imply that the matrix itself is orthogonal. They question the conditions necessary for orthogonality and the implications for the inverse of a diagonal matrix.

Discussion Status

There is an ongoing exploration of the definitions and properties of orthogonal versus orthonormal matrices. Some participants have provided mathematical insights, while others are reflecting on their understanding of these concepts and how they relate to the original poster's confusion.

Contextual Notes

Participants note the importance of normalization in defining orthogonal matrices and discuss the implications of this for the relationship between a matrix and its inverse. There is a recognition of the need for careful mathematical expression to avoid confusion.

Master1022
Homework Statement


Not for homework, but just for understanding.

So we know that if a matrix (M) is orthogonal, then its transpose is its inverse.

Using that knowledge for a diagonalised matrix (with eigenvalues), its column vectors are all mutually orthogonal and thus you would assume that its inverse is its transpose...

However, that is wrong and the inverse is actually just the diagonalised matrix with its non-zero entries reciprocated. I understand the proof, but fail to see why the above logic wouldn't apply (beyond that it wouldn't multiply to the identity matrix)

Can anyone help me see where I have gone wrong in trying to combine the two theorems?
 
Simply because the columns of a matrix are orthogonal doesn't mean that the matrix is orthogonal. The columns also have to be unit vectors.
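This is easy to check numerically. The sketch below (Python/NumPy, with a hypothetical example matrix) shows a diagonal matrix whose columns are mutually orthogonal but not unit length: its transpose times itself is not the identity, and its actual inverse just reciprocates the diagonal entries.

```python
import numpy as np

# Diagonal matrix with non-unit entries: columns are orthogonal...
D = np.diag([2.0, 3.0])
assert np.isclose(D[:, 0] @ D[:, 1], 0.0)

# ...but not unit vectors, so D^T D is diag(4, 9), not the identity:
print(D.T @ D)

# The true inverse reciprocates the non-zero diagonal entries:
print(np.linalg.inv(D))  # diag(1/2, 1/3)
```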
 
DrClaude said:
Simply because the columns of a matrix are orthogonal doesn't mean that the matrix is orthogonal. The columns also have to be unit vectors.
Thank you for your response. So what are the other conditions for orthogonality? Even if you normalised the column vectors, the result still wouldn't equal the matrix with its entries reciprocated.

Never mind, I realised that if you normalise the columns of a diagonal matrix, it just becomes the identity matrix.
 
Master1022 said:

Homework Statement


Not for homework, but just for understanding.

So we know that if a matrix (M) is orthogonal, then its transpose is its inverse.

Using that knowledge for a diagonalised matrix (with eigenvalues), its column vectors are all mutually orthogonal [should say orthonormal] and thus you would assume that its inverse is its transpose...

However, that is wrong and the inverse is actually just the diagonalised matrix with its non-zero entries reciprocated. I understand the proof, but fail to see why the above logic wouldn't apply (beyond that it wouldn't multiply to the identity matrix)

Can anyone help me see where I have gone wrong in trying to combine the two theorems?

From what I can tell, the issue is that you haven't carefully written out the math of what you think you're saying -- you're just writing sentences, and in the process confusing the underlying matrix/operator with a similarity transform.

##A = UDU^T##
##A^{-1} = \big(UDU^T\big)^{-1} = \big(U^T\big)^{-1}D^{-1}U^{-1} = UD^{-1} U^T##
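As a numerical sanity check of that identity (NumPy, with a hypothetical symmetric example matrix): the inverse reciprocates the eigenvalues in ##D## while the same orthogonal ##U## stays on both sides.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric, invertible

eigvals, U = np.linalg.eigh(A)   # A = U D U^T with U orthogonal
D_inv = np.diag(1.0 / eigvals)

# A^{-1} = U D^{-1} U^T: reciprocate D, keep U and U^T in place.
A_inv = U @ D_inv @ U.T
assert np.allclose(A_inv, np.linalg.inv(A))
```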
 
StoneTemplePython said:
From what I can tell, the issue is that you haven't carefully written out the math of what you think you're saying -- you're just writing sentences, and in the process confusing the underlying matrix/operator with a similarity transform.

##A = UDU^T##
##A^{-1} = \big(UDU^T\big)^{-1} = \big(U^T\big)^{-1}D^{-1}U^{-1} = UD^{-1} U^T##
Thanks for the response. I understand the maths that you have written, but perhaps the area I have gone wrong is this orthogonal / orthonormal confusion. Does an orthogonal matrix need to have normalised column vectors?
 
Master1022 said:
Thanks for the response. I understand the maths that you have written, but perhaps the area I have gone wrong is this orthogonal / orthonormal confusion. Does an orthogonal matrix need to have normalised column vectors?

I thought you had proved this for yourself.
 
Master1022 said:
Thanks for the response. I understand the maths that you have written, but perhaps the area I have gone wrong is this orthogonal / orthonormal confusion. Does an orthogonal matrix need to have normalised column vectors?

so, quoting this:

Master1022 said:
So we know that if a matrix (M) is orthogonal, then its transpose is its inverse.

you have ##M##, or in my writeup ##U##, is orthogonal. Suppose it is square and its column vectors are mutually orthogonal.

Then ##U^T U = D## where ##D## is some diagonal matrix... but based on your quote, you want ##D = I##, so what does that tell you about ##\big \Vert \mathbf u_k\big \Vert_2^2## which are the entries in the kth diagonal spot of ##D##?
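The point above can be sketched numerically (NumPy, hypothetical example): with mutually orthogonal but non-unit columns, ##U^T U## is diagonal with the squared column norms ##\big \Vert \mathbf u_k\big \Vert_2^2## on the diagonal, and it equals ##I## only when every column norm is 1.

```python
import numpy as np

# Columns are mutually orthogonal, with lengths 3 and 5:
U = np.array([[3.0,  0.0],
              [0.0, -5.0]])

# U^T U = diag(||u_1||^2, ||u_2||^2) = diag(9, 25), not I:
G = U.T @ U
print(G)

# Normalizing each column yields a genuinely orthogonal matrix:
Q = U / np.linalg.norm(U, axis=0)
assert np.allclose(Q.T @ Q, np.eye(2))
```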
 