Proving there is only 1 normalized unitary modal matrix for a normal matrix

Summary
Every normal matrix has a complete set of eigenvectors, and under the assumption of distinct eigenvalues the normalized modal matrix built from these eigenvectors is unitary and is unique up to the scaling and ordering of its columns. The discussion highlights the challenge of proving the uniqueness of this unitary diagonalization, noting that the eigenvectors can be rescaled and that permutation matrices reorder the eigenvalues, which complicates the notion of uniqueness. It is clarified that the eigenvectors form a basis, and that an eigenvalue with geometric multiplicity greater than one would produce more linearly independent eigenvectors than a basis can contain. The conversation also touches on Hermitian matrices, whose real eigenvalues and mutually orthogonal eigenvectors make the analysis more straightforward.
swampwiz
AIUI, every normal matrix has a full eigenvector solution, and there is only 1 *normalized* modal matrix as the solution (let's presume distinct eigenvalues so as to avoid the degenerate case of repeated eigenvalues), and the columns of the modal matrix, which are the (normalized) eigenvectors, are unitary vectors. (I am presuming that there is only 1 such solution, a proof of which I don't think I am familiar with.)

But I'd like to prove this uniqueness from the opposite direction. I know that the sesquilinear product with a unitary matrix as the side matrix is such that the normality of the product is the same as the normality of the central matrix. Thus there must exist a unitary matrix that diagonalizes any particular normal matrix, since the sesquilinear product of a diagonal matrix and a unitary matrix (namely the inverse of the original one, which is the transpose because it is unitary, and thus unitary itself) must be normal to follow the normality of a diagonal matrix (which is de facto normal). But I can't seem to prove that there exists only 1 unitary matrix that accomplishes this.
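A quick numerical sketch of this claim (a minimal illustration, assuming Python with NumPy; the test matrices here are randomly constructed for demonstration, not from any particular application):

```python
# Sketch: a normal matrix is unitarily diagonalizable, and a unitary
# similarity transform of a normal matrix is again normal.
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a random normal matrix A = Q D Q* from a random unitary Q
# (QR of a complex Gaussian matrix) and a random diagonal D.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
D = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
A = Q @ D @ Q.conj().T

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A is normal
print(np.allclose(Q.conj().T @ A @ Q, D))           # True: Q diagonalizes A

# Any sesquilinear product V* A V with V unitary is again normal.
V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = V.conj().T @ A @ V
print(np.allclose(B @ B.conj().T, B.conj().T @ B))  # True: B is normal
```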

EDIT: Each column can be multiplied by ##\pm 1##, so there are ##2^n## modal matrices, but if the columns are chosen such that the element corresponding to the pivot element has the same sign, then this ambiguity goes away.
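To see this sign freedom concretely (a minimal sketch, assuming NumPy; note that over ##\mathbb C## the freedom is actually any unit phase ##e^{i\theta}## per column, of which ##\pm 1## is the real special case):

```python
# Sketch: flipping the sign of any column of the modal matrix still
# diagonalizes A, so there are 2^n sign choices.
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
D = np.diag(rng.standard_normal(n))
A = Q @ D @ Q.conj().T

S = np.diag([1.0, -1.0, 1.0])  # one of the 2^n diagonal sign matrices
Q2 = Q @ S                     # flip the sign of the middle column
print(np.allclose(Q2.conj().T @ A @ Q2, D))  # True: still diagonalizes A
```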

So I am hung up as to why there is only 1 solution (unless I am wrong about this!) for the unitary diagonalization, and likewise for the uniqueness of the eigenvector solution. Obviously, once it is proven that there is only 1 of each, then it is proven that these matrices are one and the same.
 
There's a lot of jargon in your post and it's not quite right. I assume we're talking about ##n##-dimensional vectors and in general the scalars are in ##\mathbb C##.
E.g.
swampwiz said:
(namely the inverse of the original one, which is the transpose because it is unitary, and thus unitary itself) must be normal to follow the normality of a diagonal matrix (which is de facto normal).

This is technically wrong -- you mean to say conjugate transpose. It's a technical point, but it matters in cases like the DFT matrix, which is symmetric but not Hermitian.
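A small check of the DFT example (a sketch assuming NumPy; ##F_{jk} = e^{-2\pi i jk/n}/\sqrt{n}## is the unitary normalization of the DFT matrix):

```python
# Sketch: the unitary DFT matrix is symmetric but not Hermitian, so
# "transpose" and "conjugate transpose" really do differ here.
import numpy as np

n = 4
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

print(np.allclose(F, F.T))                      # True: symmetric
print(np.allclose(F, F.conj().T))               # False: not Hermitian
print(np.allclose(F.conj().T @ F, np.eye(n)))   # True: unitary
```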

swampwiz said:
AIUI, every normal matrix has a full eigenvector solution, and there is only 1 *normalized* modal matrix as the solution (let's presume distinct eigenvalues so as to avoid the degenerate case of repeated eigenvalues), and the columns of the modal matrix, which are the (normalized) eigenvectors, are unitary vectors. (I am presuming that there is only 1 such solution, a proof of which I don't think I am familiar with.)

I'm not sure there's such a thing as 'unitary vectors' (and while I know matrices can be treated as a vector space, that is not what we're talking about here). In general, when you have ##n## mutually orthonormal vectors in ##\mathbb C^n## and you collect them as the columns of an ##n \times n## matrix, you call the matrix unitary (if all the entries happen to be real, it is usually just called orthogonal).
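For instance (a minimal sketch, assuming NumPy; QR is just a convenient way to manufacture a set of orthonormal columns):

```python
# Sketch: collecting n mutually orthonormal vectors in C^n as the columns
# of a matrix U gives a unitary matrix, i.e. U* U = U U* = I.
import numpy as np

rng = np.random.default_rng(2)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

print(np.allclose(U.conj().T @ U, np.eye(n)))  # True
print(np.allclose(U @ U.conj().T, np.eye(n)))  # True
```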

swampwiz said:
So I am hung up as to why there is only 1 solution (unless I am wrong about this!) for the unitary diagonalization, and likewise for the uniqueness of the eigenvector solution. Obviously, once it is proven that there is only 1 of each, then it is proven that these matrices are one and the same.

I'm not really sure that there is 'one solution'. You've already identified that the eigenvectors can be rescaled. What about the use of permutation matrices?

I.e. suppose you have ##\mathbf A = \mathbf {UDU}^*##

where ##\mathbf A## is normal and has been unitarily diagonalized. Well we can also say

##\mathbf A = \mathbf {UIDIU}^* = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf P^* \mathbf U^*\big) = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf U\mathbf P\big)^*##

where ##\mathbf P## is any permutation matrix that you like, and ##\mathbf P^* \mathbf D \mathbf P## is still diagonal. So any ordering of the eigenvalues gives a valid unitary diagonalization; the ordering is arbitrary.
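A numerical illustration of the permutation point (a sketch assuming NumPy; the matrices are randomly generated):

```python
# Sketch: if A = U D U*, then for any permutation matrix P, UP is also
# unitary and (UP)(P* D P)(UP)* = A, with P* D P still diagonal.
import numpy as np

rng = np.random.default_rng(3)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
D = np.diag(rng.standard_normal(n))
A = U @ D @ U.conj().T

P = np.eye(n)[:, rng.permutation(n)]  # a random permutation matrix
UP = U @ P
D2 = P.conj().T @ D @ P               # same eigenvalues, reordered

print(np.allclose(UP @ D2 @ UP.conj().T, A))  # True: another diagonalization
print(np.allclose(D2, np.diag(np.diag(D2))))  # True: D2 is still diagonal
```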

Ok, what does the set of eigenvectors look like? You've already said to assume all eigenvalues are distinct, so there must be ##n## linearly independent eigenvectors, one for each eigenvalue. (Why? Note that this creates a pigeonhole problem: a basis has exactly ##n## linearly independent vectors, each with positive length. If even one of your eigenvalues had extra eigenvectors, i.e. geometric multiplicity ##\gt 1##, you'd have more linearly independent eigenvectors than is possible for forming a basis, since a set of linearly independent vectors with positive length has cardinality of at most ##n##.)

So the nullspace of ##\big(\mathbf A - \lambda_k \mathbf I\big)## is one-dimensional for each eigenvalue: it contains the zero vector and, up to scaling, exactly one non-zero vector. You may scale the non-zero vector as you choose. That is all.
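This one-dimensionality is easy to check numerically (a sketch assuming NumPy, with hand-picked distinct eigenvalues):

```python
# Sketch: with n distinct eigenvalues, each A - lambda_k I has rank n - 1,
# so its nullspace is one-dimensional and the eigenvector is unique up to
# scaling.
import numpy as np

rng = np.random.default_rng(4)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
evals = np.array([1.0, 2.0, 3.0, 4.0])  # distinct eigenvalues
A = U @ np.diag(evals) @ U.conj().T

for lam in evals:
    r = np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, r)  # rank 3 = n - 1 each time, i.e. nullity 1
```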
 
StoneTemplePython said:
There's a lot of jargon in your post and it's not quite right. I assume we're talking about ##n##-dimensional vectors and in general the scalars are in ##\mathbb C##.
E.g.

This is technically wrong -- you mean to say conjugate transpose. It's a technical point, but it matters in cases like the DFT matrix, which is symmetric but not Hermitian.

Yes, I meant conjugate transpose. What is DFT?

StoneTemplePython said:
I'm not sure there's such a thing as 'unitary vectors' (and while I know matrices can be treated as a vector space, that is not what we're talking about here). In general, when you have ##n## mutually orthonormal vectors in ##\mathbb C^n## and you collect them as the columns of an ##n \times n## matrix, you call the matrix unitary (if all the entries happen to be real, it is usually just called orthogonal).

Yes, I meant the eigenvectors are such that the matrix is unitary.

StoneTemplePython said:
I'm not really sure that there is 'one solution'. You've already identified that the eigenvectors can be rescaled. What about the use of permutation matrices?

I.e. suppose you have ##\mathbf A = \mathbf {UDU}^*##

where ##\mathbf A## is normal and has been unitarily diagonalized. Well we can also say

##\mathbf A = \mathbf {UIDIU}^* = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf P^* \mathbf U^*\big) = \big(\mathbf U\mathbf P\big) \big(\mathbf P^* \mathbf D \mathbf P\big) \big(\mathbf U\mathbf P\big)^*##

where ##\mathbf P## is any permutation matrix that you like, and ##\mathbf P^* \mathbf D \mathbf P## is still diagonal. So any ordering of the eigenvalues gives a valid unitary diagonalization; the ordering is arbitrary.

The modal matrix here is normalized, so there is no arbitrary scaling other than the signs. I didn't consider the fact that the modes could be in any order, but I presumed that they would be in order by arithmetic value, which removes the permutation. Of course, this only works if the eigenvalues are real, since complex numbers can't be ordered as such - although they could be, by sorting by real component first, then imaginary component.
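That sorting convention is easy to implement (a sketch assuming NumPy; np.lexsort treats its last key as the primary sort key):

```python
# Sketch: order eigenvalues by real part, then imaginary part, and permute
# the eigenvector columns to match, removing the ordering freedom.
import numpy as np

rng = np.random.default_rng(5)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = Q @ np.diag(d) @ Q.conj().T  # normal, with complex eigenvalues

w, V = np.linalg.eig(A)
order = np.lexsort((w.imag, w.real))  # real part first, then imaginary part
w, V = w[order], V[:, order]

print(w)  # eigenvalues in the canonical order
print(np.allclose(np.linalg.inv(V) @ A @ V, np.diag(w)))  # True: still diagonalizes
```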

StoneTemplePython said:
Ok, what does the set of eigenvectors look like? You've already said to assume all eigenvalues are distinct, so there must be ##n## linearly independent eigenvectors, one for each eigenvalue. (Why? Note that this creates a pigeonhole problem: a basis has exactly ##n## linearly independent vectors, each with positive length. If even one of your eigenvalues had extra eigenvectors, i.e. geometric multiplicity ##\gt 1##, you'd have more linearly independent eigenvectors than is possible for forming a basis, since a set of linearly independent vectors with positive length has cardinality of at most ##n##.)

So the nullspace of ##\big(\mathbf A - \lambda_k \mathbf I\big)## is one-dimensional for each eigenvalue: it contains the zero vector and, up to scaling, exactly one non-zero vector. You may scale the non-zero vector as you choose. That is all.

Yes, right after I posted this, I thought that the fact that this all derives from the nullspace solution answers my question in some way. Thanks
 
swampwiz said:
Yes, I meant conjugate transpose. What is DFT?
Yes, I meant the eigenvectors are such that the matrix is unitary.
The modal matrix here is normalized, so there is no arbitrary scaling other than the signs. I didn't consider the fact that the modes could be in any order, but I presumed that they would be in order by arithmetic value, which removes the permutation. Of course, this only works if the eigenvalues are real, since complex numbers can't be ordered as such - although they could be, by sorting by real component first, then imaginary component.
Yes, right after I posted this, I thought that the fact that this all derives from the nullspace solution answers my question in some way. Thanks
DFT = Discrete Fourier Transform matrix.

It may be easier to think about this stuff in the more narrow confines of Hermitian matrices --- then you get real eigenvalues, which certainly makes ordering seem more natural. You also have the fact that for any matrix in ##\mathbb C^{n \times n}##, a left eigenvector and a right eigenvector are orthogonal if their eigenvalues are different. I.e. for any matrix ##\mathbf A## in ##\mathbb C^{n \times n}##, you have left eigenpairs ##(\lambda_j, \mathbf v_j)## and right eigenpairs ##(\lambda_k, \mathbf x_k)##; when all eigenvalues are distinct and ##k \neq j##, then

##\lambda_j \mathbf v_j^* \mathbf x_k = \big(\mathbf v_j^* \mathbf A\big) \mathbf x_k = \mathbf v_j^* \big(\mathbf A \mathbf x_k\big) = \lambda_k \mathbf v_j^* \mathbf x_k##

but ##\lambda_j \neq \lambda_k##, so ##\mathbf v_j^* \mathbf x_k = 0##.
- - - -
But a Hermitian matrix has the same left and right eigenvectors, hence all eigenvectors are mutually orthogonal in your matrix example with distinct eigenvalues.
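A numerical check of this bi-orthogonality argument (a sketch assuming SciPy, whose eig routine can return left eigenvectors aligned with the right ones):

```python
# Sketch: for a matrix with distinct eigenvalues, the left eigenvector for
# lambda_j is orthogonal to the right eigenvector for lambda_k when j != k.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # generic matrix

w, VL, VR = eig(A, left=True, right=True)

G = VL.conj().T @ VR               # G[j, k] = v_j* x_k
off_diag = G - np.diag(np.diag(G))
print(np.allclose(off_diag, 0))    # True: v_j* x_k = 0 for j != k
```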
 
