Can one find a matrix that's 'unique' to a collection of eigenvectors?

  • #31
renormalize said:
What if the matrix A has repeated eigenvalues and not all the eigenvectors are linearly independent?
I admit that the cases of eigenvalue degeneracy should be considered next. I am optimistic, though: for a doubly degenerate eigenvalue, for example, the eigenvectors span a plane, and we may choose any 2 basis vectors in that plane as we like, which still gives us n eigenvectors spanning the whole space.
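To make this concrete, here is a minimal numpy sketch (my own illustration, not from the thread): a matrix with a doubly degenerate eigenvalue is reproduced by ##PDP^{-1}## no matter which two independent vectors of the degenerate eigenplane we pick as columns of ##P##.

```python
import numpy as np

# A matrix with eigenvalue 1 (multiplicity 2) and eigenvalue 4.
A = np.diag([1.0, 1.0, 4.0])

# Any two independent vectors in the x-y plane are eigenvectors for eigenvalue 1,
# so we are free to pick a non-standard basis of that plane.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -2.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0])   # eigenvector for eigenvalue 4

P = np.column_stack([v1, v2, v3])
D = np.diag([1.0, 1.0, 4.0])

# P D P^{-1} reproduces A even though v1, v2 were chosen arbitrarily in the plane.
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```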
 
  • #32
As for the diagonalization of an n × n matrix A,
$$A=PDP^{-1}$$
P has n real parameters, which determine the length (with a plus or minus sign) of each of the n eigenvectors.
P and D also admit n! choices for the ordering (numbering) of the eigenvectors. Inserting a permutation matrix Q, built from exchange matrices and satisfying ##Q^2=E##, as
$$A=PQ^2DQ^2P^{-1}=PQ(QDQ)(PQ)^{-1}$$
realizes the reordering.
I expect the above accounts for the full set of parameters. Here again, degeneracy should be considered next.
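As a sketch of the reordering freedom described above (my own numerical example, with an arbitrarily chosen matrix): a single exchange matrix ##Q## with ##Q^2=E## permutes the columns of ##P## and the entries of ##D## while leaving ##A=PDP^{-1}## unchanged.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1

eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Q swaps the two eigenvectors/eigenvalues; a single exchange satisfies Q @ Q = E.
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(Q @ Q, np.eye(2))

P2 = P @ Q          # columns (eigenvectors) reordered
D2 = Q @ D @ Q      # eigenvalues reordered to match

# Both orderings give back the same A.
print(np.allclose(P @ D @ np.linalg.inv(P), A))     # True
print(np.allclose(P2 @ D2 @ np.linalg.inv(P2), A))  # True
```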
 
  • #33
anuttarasammyak said:
P has n real parameters, which determine the length (with a plus or minus sign) of each of the n eigenvectors.
P has more parameters than that. For example, a 2×2 rotation matrix A has complex eigenvalues:
$$
A=
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix}
=PDP^{-1}=
\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}}
\end{pmatrix}
\begin{pmatrix}
\cos\theta+i\sin\theta & 0 \\
0 & \cos\theta-i\sin\theta
\end{pmatrix}
\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}}
\end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}}
\end{pmatrix}
\begin{pmatrix}
e^{i \theta} & 0 \\
0 & e^{-i \theta}
\end{pmatrix}
\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}}
\end{pmatrix}
$$

The eigenvectors must have complex components. ##P## has n complex parameters, which determine the length and phase of each eigenvector. These effects are compensated for by multiplying by the inverse ##P^{-1}##.
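Here is a quick numerical check of this decomposition, and of the claim that rescaling each eigenvector by an arbitrary complex length and phase is cancelled by ##P^{-1}## (the value of ##\theta## and the rescaling factors below are arbitrary choices of mine):

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The complex eigenvectors and eigenvalues quoted above.
P = np.array([[1,   1],
              [-1j, 1j]]) / np.sqrt(2)
D = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])

# Rescaling each column by a nonzero complex number (length and phase)
# is cancelled by the corresponding row of P^{-1}.
S = np.diag([2.0 * np.exp(0.3j), -0.5])
P_rescaled = P @ S

print(np.allclose(P @ D @ np.linalg.inv(P), A))                    # True
print(np.allclose(P_rescaled @ D @ np.linalg.inv(P_rescaled), A))  # True
```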
 
  • #34
mathwonk said:
Sciencemaster:..."Is there anything else I'm missing here?" I would say you are missing the basic property of a linear transformation, namely it is entirely determined by its effect on a basis. For the same reason, a linear transformation only has one matrix in the standard basis. done.

I.e.
1) The eigenvectors and the eigenvalues completely determine the linear transformation, assuming the eigenvectors contain a basis.
2) A linear transformation completely determines its (standard) matrix.

These are both true for the same reason, namely that a linear transformation is determined by (and determines) its action on a basis. Thus if the eigenvectors contain a basis, knowing them and the eigenvalues tells you what the transformation does to that eigenbasis, hence determines the transformation on everything. Now that we know the linear transformation, we also know what it does to the standard basis, which uniquely determines the (columns of the) matrix.

It has absolutely nothing to do with the existence of a diagonal matrix.
I.e. if you know the behavior of any linear map on any basis, then that map has only one standard matrix.

I apologize if this is still confusing. It confused me too. I at first thought I should use the diagonal matrix somehow.
I see. It makes more sense when I look at it like this: the columns of a matrix transformation represent where the standard basis vectors "go". If a different matrix were to send the same vectors to the same places, each of its columns would be identical to the first matrix's, and so the transformation would be identical. For example, if we have the matrix ##\begin{bmatrix}1&2\\2&1\end{bmatrix}##, no other transformation can place the standard basis vectors at ##\begin{bmatrix}1\\2\end{bmatrix}## and ##\begin{bmatrix}2\\1\end{bmatrix}##, since its columns would then have to be identical to those of this matrix. I'm sure there's a similar argument to be made with non-standard basis vectors, although it's a bit harder to visualize. It helps me to think of a transformation (with linearly independent columns) being decomposed into an inverse matrix and a matrix (i.e. ##M=B^{-1}A##), representing the initial vector being transformed into a standard basis vector, and then to wherever the original matrix would have placed it. Both of these operations are easy to imagine as one-to-one transformations via a similar argument to the one I made above.
From there, it seems trivial to use eigenvectors and eigenvalues instead of some other set of vectors. After all, due to the linearity of the transformation, you can extrapolate how one vector transforms from how others do, comparatively easily.
I imagine this next argument isn't very helpful, but I *believe* that a matrix transformation was originally meant to be an alternative representation of a system of linear equations. Does the "one-to-one-ness" have anything to do with such a system of equations having only one solution (N unknowns for N equations/rows), so long as the equations are linearly independent?
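As a small sketch of the uniqueness argument (the eigenvector/eigenvalue data below is a made-up example of my own, not from the thread): once an eigenbasis and its eigenvalues are fixed, the standard matrix they determine is unique; any matrix that sends the eigenvectors to the same images is forced, column by column, to equal it.

```python
import numpy as np

# Eigenvectors (a basis) and eigenvalues: this data pins down one linear map.
eigvecs = np.array([[1.0,  1.0],
                    [1.0, -1.0]])   # columns are the eigenvectors
eigvals = np.array([3.0, 1.0])

# Knowing what the map does to the eigenbasis tells us what it does to the
# standard basis: the columns of A are the images of e1 and e2.
P = eigvecs
D = np.diag(eigvals)
A = P @ D @ np.linalg.inv(P)
print(A)   # [[2. 1.] [1. 2.]] -- the unique standard matrix for this data

# Any matrix B that sends the eigenvectors to the same images equals A.
B = np.linalg.solve(P.T, (P @ D).T).T     # solve B @ P = P @ D for B
print(np.allclose(A, B))                  # True
```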
 
