Proving only one normalized modal matrix for a normal matrix

In summary, every normal matrix can be diagonalized by a unitary matrix, and when its eigenvalues are distinct the columns of the modal matrix are mutually orthonormal eigenvectors, making the modal matrix itself unitary. This normalized modal matrix is unique only up to two ambiguities: the ordering of its columns (equivalently, right-multiplication by a permutation matrix) and a unit-modulus scale factor on each column. Once a column ordering and a sign/phase convention are fixed, the solution is pinned down, because for distinct eigenvalues each eigenspace is one-dimensional.
  • #1
swampwiz
AIUI, every normal matrix has a full eigenvector solution, and there is only 1 *normalized* modal matrix as the solution (let's presume unique eigenvalues so as to avoid the degenerate case of shared eigenvalues), and the columns of the modal matrix, which are the (normalized) eigenvectors, are unitary vectors. (I am presuming that there is only 1 such solution, a proof of which I don't think I am familiar with.)

But I'd like to prove this uniqueness from the opposite direction. I know that the sesquilinear quadratic product, for the case of unitary matrices as the side matrix, is such that the normality of the product is the same as the normality of the central matrix, and thus there must exist a unitary matrix that diagonalizes any particular normal matrix, since the sesquilinear product of a diagonal matrix and a unitary matrix (namely the inverse of the original one, which is the transpose because it is unitary, and thus unitary itself) must be normal to follow the normality of a diagonal matrix (which is de facto normal). But I can't seem to prove that there exists only 1 unitary matrix that accomplishes this.
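A minimal numpy sketch of this direction (a numerical check, not a proof): build a random unitary ##\mathbf U## via QR, a diagonal ##\mathbf D##, and verify that ##\mathbf{UDU}^*## comes out normal.

```python
# Sketch (not a proof): U D U* is normal whenever U is unitary and D is diagonal.
import numpy as np

rng = np.random.default_rng(0)
n = 4
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)                          # Q factor of QR is unitary
D = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))

A = U @ D @ U.conj().T                          # the sesquilinear product
print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # True: A is normal
```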

EDIT: Each column can be ##\pm##, so there are ##2^n## modal matrices, but if the columns are limited such that the element that corresponds to the pivot element is given a fixed sign, then this situation goes away.
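A quick numerical sketch of that sign ambiguity, using a real symmetric example so the modal matrix is real (for complex eigenvectors the ambiguity is a full unit-modulus phase factor, not just a sign):

```python
# Sketch: flipping an eigenvector column's sign still diagonalizes A,
# so an unconstrained normalized modal matrix has 2^n sign choices.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                      # symmetric, eigenvalues 1 and 3
_, M = np.linalg.eig(A)                         # normalized eigenvector columns
S = np.diag([1.0, -1.0])                        # flip the second column
print(np.allclose((M @ S).T @ A @ (M @ S), M.T @ A @ M))   # True
```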

So I am hung up as to why there is only 1 solution (unless I am wrong about this!) for the unitary diagonalization, as well as the uniqueness of the eigenvector solution. Obviously, once it is proven that there is only 1 of each, then it is proven that these matrices are one and the same.
 
  • #2
There's a lot of jargon in your post and it's not quite right. I assume we're talking about n-dimensional vectors and in general the scalars are in ##\mathbb C##.
E.g.
swampwiz said:
(namely the inverse of the original one, which is the transpose because it is unitary, and thus unitary itself) must be normal to follow the normality of a diagonal matrix (which is de facto normal).

This is technically wrong -- you mean to say conjugate transpose. It's a technical point and matters in cases like dealing with the DFT which is symmetric but not Hermitian.

swampwiz said:
AIUI, every normal matrix has a full eigenvector solution, and there is only 1 *normalized* modal matrix as the solution (let's presume unique eigenvalues so as to avoid the degenerate case of shared eigenvalues), and the columns of the modal matrix, which are the (normalized) eigenvectors, are unitary vectors. (I am presuming that there is only 1 such solution, a proof of which I don't think I am familiar with.)

I'm not sure there's such a thing as 'unitary vectors' (and while I know matrices can be treated as a vector space, that is not what we're talking about here). In general, when you have ##n## mutually orthonormal vectors in ##\mathbb C^n## and you collect them as the columns of an ##n \times n## matrix, you call the matrix unitary (in the all-real case it is also called orthogonal).

swampwiz said:
So I am hung up as to why there is only 1 solution (unless I am wrong about this!) for the unitary diagonalization, as well as the uniqueness of the eigenvector solution. Obviously, once it is proven that there is only 1 of each, then it is proven that these matrices are one and the same.

I'm not really sure that there is 'one solution'. You've already identified that the eigenvectors can be rescaled. What about the use of permutation matrices?

I.e. suppose you have ##\mathbf A = \mathbf {UDU}^*##

where ##\mathbf A## is normal and has been unitarily diagonalized. Well we can also say

##\mathbf A = \mathbf {UIDIU}^* = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf P^* \mathbf U^*\big) = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf U\mathbf P\big)^*##

where ##\mathbf P## is any permutation matrix that you like. So you may infer that ordering doesn't matter.
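A numerical sketch of this permutation argument (the permutation matrix is built by permuting the rows of the identity):

```python
# Sketch: U P and P* D P reproduce the same A, so the column ordering
# of a unitary diagonalization is not unique.
import numpy as np

rng = np.random.default_rng(1)
n = 4
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)                          # random unitary
D = np.diag(rng.standard_normal(n))
A = U @ D @ U.conj().T

P = np.eye(n)[rng.permutation(n)]               # a permutation matrix
UP, DP = U @ P, P.T @ D @ P                     # DP is D with entries reordered
print(np.allclose(A, UP @ DP @ UP.conj().T))    # True
```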

Ok, what does the set of eigenvectors look like? You've already said to assume all eigenvalues are unique, so there must be ##n## linearly independent eigenvectors, one for each eigenvalue. (Why? Note that this creates a pigeonhole problem: a basis has exactly ##n## linearly independent vectors, each with positive length. If even one of your eigenvalues had extra eigenvectors -- i.e. geometric multiplicity ##\gt 1## -- you'd have more linearly independent eigenvectors than is possible for forming a basis, since a set of linearly independent vectors with positive length has cardinality of at most ##n##.)

So for each eigenvalue, the nullspace of ##\big(\mathbf A - \lambda_k \mathbf I\big)## is one-dimensional: it contains the zero vector and the scalar multiples of a single non-zero eigenvector, which you may scale as you choose. That is all.
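A numerical sketch of that nullspace claim: for each eigenvalue ##\lambda_k## of a normal matrix with distinct eigenvalues, ##\mathbf A - \lambda_k \mathbf I## has rank ##n-1##, so its nullspace is one-dimensional.

```python
# Sketch: rank(A - lam*I) = n - 1 for each distinct eigenvalue lam,
# i.e. each eigenspace is one-dimensional.
import numpy as np

rng = np.random.default_rng(2)
n = 4
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)
D = np.diag([1.0, 2.0, 3.0, 4.0])               # distinct eigenvalues
A = U @ D @ U.conj().T                          # normal by construction

for lam in np.diag(D):
    print(lam, np.linalg.matrix_rank(A - lam * np.eye(n)))   # all n - 1 = 3
```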
 
  • #3
StoneTemplePython said:
There's a lot of jargon in your post and it's not quite right. I assume we're talking about n-dimensional vectors and in general the scalars are in ##\mathbb C##.
E.g.

This is technically wrong -- you mean to say conjugate transpose. It's a technical point and matters in cases like dealing with the DFT which is symmetric but not Hermitian.

Yes, I meant conjugate transpose. What is DFT?

StoneTemplePython said:
I'm not sure there's such a thing as 'unitary vectors' (and while I know matrices can be treated as a vector space, that is not what we're talking about here). In general, when you have ##n## mutually orthonormal vectors in ##\mathbb C^n## and you collect them as the columns of an ##n \times n## matrix, you call the matrix unitary (in the all-real case it is also called orthogonal).

Yes, I meant the eigenvectors are such that the matrix is unitary.

StoneTemplePython said:
I'm not really sure that there is 'one solution'. You've already identified that the eigenvectors can be rescaled. What about the use of permutation matrices?

I.e. suppose you have ##\mathbf A = \mathbf {UDU}^*##

where ##\mathbf A## is normal and has been unitarily diagonalized. Well we can also say

##\mathbf A = \mathbf {UIDIU}^* = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf P^* \mathbf U^*\big) = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf U\mathbf P\big)^*##

where ##\mathbf P## is any permutation matrix that you like. So you may infer that ordering doesn't matter.

The modal matrix here is normalized, so there is no arbitrary scaling other than the signs. I didn't consider the fact that the modes could be in any order, but I presumed that they would be in order as per the arithmetic value, which removes the permutation. Of course, this only works if the eigenvalues are real, since complex numbers can't be ordered as such -- although they could be, by sorting by real component first, then imaginary component.
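For what it's worth, that real-then-imaginary ordering is easy to realize numerically; a sketch with numpy's lexsort (the last key passed is the primary one):

```python
# Sketch: fix the permutation ambiguity by sorting eigenvalues
# lexicographically, real part first, then imaginary part.
import numpy as np

eigs = np.array([2 - 1j, 1 + 2j, 2 + 1j, 1 - 2j])
order = np.lexsort((eigs.imag, eigs.real))      # primary key: real part
print(eigs[order])                              # [1.-2.j 1.+2.j 2.-1.j 2.+1.j]
# The same `order` would be applied to the columns of the modal matrix.
```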

StoneTemplePython said:
Ok, what does the set of eigenvectors look like? You've already said to assume all eigenvalues are unique, so there must be ##n## linearly independent eigenvectors, one for each eigenvalue. (Why? Note that this creates a pigeonhole problem: a basis has exactly ##n## linearly independent vectors, each with positive length. If even one of your eigenvalues had extra eigenvectors -- i.e. geometric multiplicity ##\gt 1## -- you'd have more linearly independent eigenvectors than is possible for forming a basis, since a set of linearly independent vectors with positive length has cardinality of at most ##n##.)

So for each eigenvalue, the nullspace of ##\big(\mathbf A - \lambda_k \mathbf I\big)## is one-dimensional: it contains the zero vector and the scalar multiples of a single non-zero eigenvector, which you may scale as you choose. That is all.

Yes, right after I posted this, I thought that the fact that this all derives from the terms of the nullspace solution explains my question in some way. Thanks
 
  • #4
swampwiz said:
Yes, I meant conjugate transpose. What is DFT?
Yes, I meant the eigenvectors are such that the matrix is unitary.
The modal matrix here is normalized, so there is no arbitrary scaling other than the signs. I didn't consider the fact that the modes could be in any order, but I presumed that they would be in order as per the arithmetic value, which removes the permutation. Of course, this only works if the eigenvalues are real, since complex numbers can't be ordered as such -- although they could be, by sorting by real component first, then imaginary component.
Yes, right after I posted this, I thought that the fact that this all derives from the terms of the nullspace solution explains my question in some way. Thanks
DFT = Discrete Fourier Transform matrix.

It may be easier to think about this stuff in the more narrow confines of Hermitian matrices --- then you get real eigenvalues, which certainly makes ordering seem more natural. You also have the fact that for any ##\mathbb C^{n \times n}## matrix the left eigenvectors and right eigenvectors are orthogonal if the eigenvalues are different -- i.e. for any matrix ##\mathbf A## in ##\mathbb C^{n \times n}##, you have left eigenpairs ##(\lambda_j, \mathbf v_j)## and right eigenpairs ##(\lambda_k, \mathbf x_k)##; when all eigs are unique and ##k \neq j##, then

##\big(\mathbf v_j^* \mathbf A\big) \mathbf x_k = \lambda_j \mathbf v_j^* \mathbf x_k = \lambda_k \mathbf v_j^* \mathbf x_k = \mathbf v_j^* \big(\mathbf A \mathbf x_k\big)##

so ##(\lambda_j - \lambda_k)\mathbf v_j^* \mathbf x_k = 0##, but ##\lambda_j \neq \lambda_k##, hence ##\mathbf v_j^* \mathbf x_k = 0##.
- - - -
But a Hermitian matrix has the same left and right eigenvectors, hence all eigenvectors are mutually orthogonal in your matrix example with unique eigs.
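A numerical sketch of this biorthogonality (assumed setup: the left eigenvectors are computed as eigenvectors of ##\mathbf A^*## and matched to the right ones by eigenvalue):

```python
# Sketch: for a generic matrix with distinct eigenvalues, left and right
# eigenvectors belonging to different eigenvalues satisfy v_j* x_k = 0.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

w_r, X = np.linalg.eig(A)                       # right: A x = lam x
w_l, V = np.linalg.eig(A.conj().T)              # left: v* A = lam v*  <=>  A* v = conj(lam) v

# Match each left eigenvector to the right eigenvalue it belongs to.
V = V[:, [int(np.argmin(np.abs(w_l.conj() - lam))) for lam in w_r]]
G = V.conj().T @ X                              # entries are v_j* x_k
print(np.allclose(G, np.diag(np.diag(G))))      # True: off-diagonals vanish
```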
 

1. What does it mean for a vector to be normalized?

A vector is considered normalized when its length or magnitude is equal to 1. This means that the vector has been scaled to have a length of 1 while maintaining its direction.
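For example (a trivial numpy illustration):

```python
# Normalizing a vector: divide by its Euclidean norm.
import numpy as np

v = np.array([3.0, 4.0])
v_hat = v / np.linalg.norm(v)
print(v_hat, np.linalg.norm(v_hat))             # [0.6 0.8] 1.0
```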

2. Why is it important to prove that there is only one normalized modal matrix for a normal matrix?

Proving that the normalized modal matrix of a normal matrix is unique (up to column ordering and a unit-modulus factor on each column) is important because it helps us understand the properties of these types of matrices. It also justifies treating the unitary diagonalization as canonical in various mathematical calculations and applications.

3. How do you prove that there is only one normalized modal matrix for a normal matrix?

By the spectral theorem, a normal matrix can be diagonalized by a unitary matrix. If the eigenvalues are distinct, each eigenspace is one-dimensional, so the normalized eigenvector for each eigenvalue is determined up to a unit-modulus phase factor; fixing a phase convention and a column ordering then determines the modal matrix uniquely.
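A sketch of this construction with numpy (note that np.linalg.eig already returns unit-norm eigenvector columns; the re-normalization is defensive):

```python
# Sketch: collect the normalized eigenvectors of a normal matrix with
# distinct eigenvalues into a modal matrix M and check it is unitary.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])                 # real symmetric, hence normal

eigvals, M = np.linalg.eig(A)                   # columns are eigenvectors
M = M / np.linalg.norm(M, axis=0)               # enforce unit-length columns
print(np.allclose(M.conj().T @ M, np.eye(3)))               # unitary
print(np.allclose(M.conj().T @ A @ M, np.diag(eigvals)))    # diagonalizes A
```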

4. Can a normal matrix have more than one normalized modal matrix?

Only up to the ambiguities above. When the eigenvalues are distinct, each eigenspace is one-dimensional, so each column is unique up to a unit-modulus factor, and the columns can only be reordered. If an eigenvalue is repeated, its eigenspace has dimension greater than one and there are infinitely many valid orthonormal choices, which is why distinct eigenvalues are assumed throughout this thread.

5. What are some applications of this uniqueness result?

The unitary diagonalization of normal matrices, and the essential uniqueness of the normalized modal matrix, have various applications in fields such as quantum mechanics, signal processing, and data analysis. They are also used in problems such as finding the extreme values of quadratic forms and finding best approximations to a given matrix.
