Definition of Orthogonal Matrix: Case 1 or 2?

  • Thread starter: sjeddie
  • Tags: Matrix, Orthogonal
sjeddie
Is the definition of an orthogonal matrix:

1. a matrix where all rows are orthonormal AND all columns are orthonormal

OR

2. a matrix where all rows are orthonormal OR all columns are orthonormal?

My textbook says it is AND (case 1), but if that is true, there's a problem:
Say we have a square matrix A, and we find its eigenvectors. The eigenvalues are all distinct, so A is diagonalizable. We put the normalized eigenvectors of A as the columns of a matrix P, and (our prof told us) P becomes orthogonal and ##P^{-1} = P^T##. (A here is symmetric, so its eigenvectors belonging to distinct eigenvalues are already orthogonal to each other.) My question is: how did P become orthogonal straight away? By only normalizing its columns, how did we guarantee that its rows are also orthonormal?
 
It turns out that the rows of a square matrix are orthonormal if and only if the columns are orthonormal. Another way to express the condition that all columns of ##A## are orthonormal is ##A^T A = I## (think about why this is). Then we see that for ##x \in \mathbb{R}^n##, ##\| x \|^2 = x^T x = x^T (A^T A) x = (Ax)^T (Ax) = \| Ax \|^2##, and therefore ##A## is injective (##Ax = 0## forces ##\|x\| = 0##). Since we are working with finite-dimensional spaces, ##A## must also be surjective, so for every ##v \in \mathbb{R}^n## there exists ##w \in \mathbb{R}^n## with ##v = Aw##, and therefore ##A A^T v = A A^T A w = A w = v##, so ##A A^T = I## as well. You can check that this implies the rows of ##A## are orthonormal. The proof of the converse is similar.
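For anyone who wants to see this numerically, here is a minimal NumPy sketch (the symmetric test matrix is an arbitrary choice of mine) mirroring the construction in the original post: build ##P## from the normalized eigenvectors of a symmetric matrix, and check that orthonormal columns come with orthonormal rows for free.

```python
import numpy as np

# An arbitrary symmetric matrix; its eigenvalues happen to be distinct.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# For a symmetric input, np.linalg.eigh returns the normalized
# eigenvectors as the columns of P.
eigenvalues, P = np.linalg.eigh(A)

I = np.eye(3)
print(np.allclose(P.T @ P, I))             # columns orthonormal: True
print(np.allclose(P @ P.T, I))             # rows orthonormal too: True
print(np.allclose(np.linalg.inv(P), P.T))  # P^{-1} equals P^T: True
```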

Note that this argument relies on the finite-dimensionality of our vector space. If you move up to infinite-dimensional spaces, there may be transformations ##T## with ##T^* T = I## but ##T T^* \neq I##. This type of behavior is what makes functional analysis and operator algebras fun! :smile:
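There is a finite-dimensional cousin of this one-sided behavior that is easy to play with (a sketch of my own, not the actual ##\ell^2## operator): a tall rectangular matrix with orthonormal columns satisfies ##S^T S = I## but not ##S S^T = I##, precisely because it maps into a strictly larger space and so cannot be surjective.

```python
import numpy as np

# A truncated unilateral shift: sends (x1, x2, x3) to (0, x1, x2, x3).
# It is a 4x3 matrix with ones just below the diagonal, an isometry
# from R^3 into R^4 loosely imitating the shift operator on l^2.
S = np.zeros((4, 3))
for i in range(3):
    S[i + 1, i] = 1.0

print(np.allclose(S.T @ S, np.eye(3)))  # S^T S = I: True (isometry)
print(np.allclose(S @ S.T, np.eye(4)))  # S S^T = I: False (e1 is missed)
```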
 
There's actually an easier way to see that ##A^T A = I## implies ##A## is injective; I just tend to think in terms of isometries, as I wrote above. If ##v## is such that ##Av = 0##, then ##0 = A^T 0 = A^T A v = v##, so ##A## is injective. Some may prefer this purely algebraic argument.
 
Ah I see, thank you rochfor1, the ##A^T A = I## thing makes a lot of sense :)
 