Can one find a matrix that's 'unique' to a collection of eigenvectors?

Sciencemaster
TL;DR Summary
Is there a method to show that a collection of eigenvalues 'belong' *only* to a transformation matrix of a specific form?
If you have a collection of n (nonzero and distinct) eigenvectors, is there a way to find a general form of an n-by-n matrix that corresponds to them in such a way that 'rules out' alternative forms? For example, suppose we have the vectors ##\begin{bmatrix}c\\1\end{bmatrix}## and ##\begin{bmatrix}-c\\1\end{bmatrix}## and we use diagonalization (##A=PDP^{-1}##) to construct a 2×2 matrix that has these as its eigenvectors (with eigenvalues ##\lambda_1## and ##\lambda_2##). We would find that the corresponding matrix takes the form $$A=\begin{bmatrix}\frac{\lambda_1+\lambda_2}{2}&\frac{c(\lambda_1-\lambda_2)}{2}\\\frac{\lambda_1-\lambda_2}{2c}&\frac{\lambda_1+\lambda_2}{2}\end{bmatrix}=\begin{bmatrix}a&bc\\\frac{b}{c}&a\end{bmatrix}$$
where a, b, and c are arbitrary scalars. Does that necessarily mean that there aren't any other forms of a matrix that have the same eigenvectors? In this case, since c is a general constant and our eigenvectors can have any scalar coefficient, is this operation sufficient to ascertain that any 2×2 matrix with two eigenvectors whose directions are reflections of one another across the x-axis must have the form above? Part of the reason I'm unsure about the uniqueness of the matrix ##A## is that, as far as I know, ##P## and ##D## aren't unique to a given matrix, since the order of the eigenvalues/eigenvectors is interchangeable, although I imagine the inclusion of ##P^{-1}## counteracts any impact the ordering would have. Is there some other way to show that a collection of eigenvalues correspond *only* to a given form of matrix?
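As a sanity check of the general form above, here is a minimal SymPy sketch (my own illustration; the symbol names are just chosen for this example) verifying that the two claimed vectors really are eigenvectors of it:

```python
import sympy as sp

c, l1, l2 = sp.symbols('c lambda1 lambda2', nonzero=True)

# General form A = P D P^{-1} for eigenvectors (c, 1)^T and (-c, 1)^T
A = sp.Matrix([[(l1 + l2) / 2, c * (l1 - l2) / 2],
               [(l1 - l2) / (2 * c), (l1 + l2) / 2]])

v1 = sp.Matrix([c, 1])   # should correspond to eigenvalue lambda_1
v2 = sp.Matrix([-c, 1])  # should correspond to eigenvalue lambda_2

# Both residuals simplify to the zero vector, confirming the eigenpairs
print(sp.simplify(A * v1 - l1 * v1))  # Matrix([[0], [0]])
print(sp.simplify(A * v2 - l2 * v2))  # Matrix([[0], [0]])
```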
 
Sciencemaster said:
TL;DR Summary: Is there a method to show that a collection of eigenvalues 'belong' *only* to a transformation matrix of a specific form?

Is there some other way to show that a collection of eigenvalues correspond *only* to a given form of matrix?
In your example the two vectors are orthogonal, so ##-c^2+1=0##. Am I misunderstanding you?
##QDQ^{-1}## is a general form of a matrix which has eigenvalues ##\lambda_1,\lambda_2,\dots,\lambda_n##, where ##D## is a diagonal matrix which has these eigenvalues on the diagonal and zeros elsewhere, and ##Q## is an orthogonal matrix.
 
anuttarasammyak said:
In your example the two vectors are orthogonal, so ##-c^2+1=0##. Am I misunderstanding you?
##QDQ^{-1}## is a general form of a matrix which has eigenvalues ##\lambda_1,\lambda_2,\dots,\lambda_n##, where ##D## is a diagonal matrix which has these eigenvalues on the diagonal and zeros elsewhere, and ##Q## is an orthogonal matrix.
The example with the two vectors was meant to illustrate one scenario where my question could be applied. It doesn't need to be those specific vectors. That being said, they're actually only orthogonal if c=1. For example, if c=2, then we have vectors ##\begin{bmatrix}2\\1\end{bmatrix}## and ##\begin{bmatrix}-2\\1\end{bmatrix}##, which aren't orthogonal to one another.
The core of my question is, if ##A=PDP^{-1}## is the general form of a matrix with the corresponding eigenvectors/values, does that mean that ##A## is the *only* matrix with the eigenvalues and eigenvectors represented in ##P## and ##D##? Or is there some other way to show that some matrix ##A## is the *only* matrix with certain eigenvectors/values?
 
Sciencemaster said:
The example with the two vectors was meant to illustrate one scenario where my question could be applied. It doesn't need to be those specific vectors. That being said, they're actually only orthogonal if c=1. For example, if c=2, then we have vectors ##\begin{bmatrix}2\\1\end{bmatrix}## and ##\begin{bmatrix}-2\\1\end{bmatrix}##, which aren't orthogonal to one another.
The core of my question is, if ##A=PDP^{-1}## is the general form of a matrix with the corresponding eigenvectors/values, does that mean that ##A## is the *only* matrix with the eigenvalues and eigenvectors represented in ##P## and ##D##? Or is there some other way to show that some matrix ##A## is the *only* matrix with certain eigenvectors/values?
You can always renumber the order of the eigenvectors, i.e., apply a permutation matrix. Your question is not very precise, so I think the Jordan normal form answers your question.
 
Sciencemaster said:
The example with the two vectors was meant to illustrate one scenario where my question could be applied. It doesn't need to be those specific vectors. That being said, they're actually only orthogonal if c=1. For example, if c=2, then we have vectors ##\begin{bmatrix}2\\1\end{bmatrix}## and ##\begin{bmatrix}-2\\1\end{bmatrix}##, which aren't orthogonal to one another.
For c=2 it is impossible that both of the two vectors are eigenvectors. Perhaps I misinterpreted you when you mentioned they are eigenvectors. Eigenvectors make an orthogonal set.
 
anuttarasammyak said:
For c=2 it is impossible that both of the two vectors are eigenvectors. Perhaps I misinterpreted you when you mentioned they are eigenvectors. Eigenvectors make an orthogonal set.
Are you sure? I'm pretty sure that ##\begin{bmatrix}1&4\\1&1\end{bmatrix}## has the eigenvectors ##\begin{bmatrix}2\\1\end{bmatrix}## and ##\begin{bmatrix}-2\\1\end{bmatrix}##, which would be the c=2 case, even though those two vectors aren't orthogonal to one another.
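A small NumPy sketch (my own check, not part of the argument above) confirming these eigenpairs, and that the two vectors are not orthogonal:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])

for vec, lam in [([2.0, 1.0], 3.0), ([-2.0, 1.0], -1.0)]:
    v = np.array(vec)
    # (lam, v) is an eigenpair exactly when A @ v equals lam * v
    print(np.allclose(A @ v, lam * v))   # True, True

# The two eigenvectors are clearly not orthogonal:
print(np.dot([2.0, 1.0], [-2.0, 1.0]))  # -3.0
```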
 
My bad. I wrongly thought they were real symmetric matrices, or more generally normal matrices.
 
Sciencemaster said:
TL;DR Summary: Is there a method to show that a collection of eigenvalues 'belong' *only* to a transformation matrix of a specific form?
Although couched in the operator language of quantum mechanics rather than finite-dimensional matrices, this is possibly relevant: The Compatibility Theorem.
 
As for 2×2 matrices which have eigenvalues ##l_1## and ##l_2##, their general expression is as follows.

With parameters ##a_{12} \neq 0## and ##a_{11}##:
$$a_{22}=-a_{11}+l_1+l_2$$
$$a_{21}=\frac{1}{a_{12}}\left[a_{11}(-a_{11}+l_1+l_2)-l_1l_2\right]$$
The eigenvector for eigenvalue ##l_j##, ##j=1,2##, is
$$(a_{12},\,-a_{11}+l_j)^t$$

For ##a_{12} = 0##:
$$\{a_{11},a_{22}\}=\{l_1,l_2\}$$
and any ##a_{21}## is OK. The eigenvector
$$(a_{11}-l_j,\,a_{21})^t$$
corresponds to eigenvalue ##l_i##, where ##\{i,j\}=\{1,2\}##.
I hope this has something to do with your problem.

[EDIT]
With eigenvalues ##l_i## and eigenvectors ##(\cos t_i, \sin t_i)^t## given

$$a_{11}=\frac{l_2 \cot t_2-l_1 \cot t_1}{\cot t_2 - \cot t_1}$$

$$a_{12}=\frac{(l_1-l_2) \cot t_1 \cot t_2}{\cot t_2 - \cot t_1}$$

$$a_{21}=\frac{l_2-l_1}{\cot t_2 - \cot t_1}$$

$$a_{22}=\frac{l_1 \cot t_2 - l_2 \cot t_1}{\cot t_2 - \cot t_1}$$

If I made no mistake in the calculation, the matrix seems unique.
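A short SymPy sketch (my own check) that builds the matrix from the four expressions above and verifies the eigenvector conditions; the residuals ##Av_i-l_iv_i## should simplify to zero:

```python
import sympy as sp

l1, l2, t1, t2 = sp.symbols('l1 l2 t1 t2')
c1, c2 = sp.cot(t1), sp.cot(t2)

a11 = (l2 * c2 - l1 * c1) / (c2 - c1)
a12 = (l1 - l2) * c1 * c2 / (c2 - c1)
a21 = (l2 - l1) / (c2 - c1)
a22 = (l1 * c2 - l2 * c1) / (c2 - c1)
A = sp.Matrix([[a11, a12], [a21, a22]])

for lam, t in [(l1, t1), (l2, t2)]:
    v = sp.Matrix([sp.cos(t), sp.sin(t)])
    # Residual A v - lam v should reduce to the zero vector
    print(sp.simplify(A * v - lam * v))
```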
 
  • #10
Sciencemaster said:
The core of my question is, if ##A=PDP^{-1}## is the general form of a matrix with the corresponding eigenvectors/values,
I don't think that the diagonalization is possible in general. In the case of a normal matrix ##A##, it is.
 
  • #11
anuttarasammyak said:
I don't think that the diagonalization is possible in general. In the case of normal matrix A, it is.
What I meant by 'general' is that we use scalar parameters like a and b in place of specific numerical entries in the final matrix, eigenvalues, etc. I would imagine that's just as valid as doing the diagonalization with eigenvectors/eigenvalues given as definite numbers, since you can set the parameters to any such number.
 
  • #12
In order to catch your question exactly, I would like to ask whether your ##A=PDP^{-1}## setting includes $$\begin{bmatrix}1&4\\1&1\end{bmatrix}$$ from post #6, which is not a normal matrix, or not.
 
  • #13
anuttarasammyak said:
In order to catch your question exactly, I would like to ask whether your ##A=PDP^{-1}## setting includes $$\begin{bmatrix}1&4\\1&1\end{bmatrix}$$ from post #6, which is not a normal matrix, or not.
I think I may be miscommunicating a little. For this specific example, what I'm trying to check for is whether $$\begin{bmatrix}1&4\\1&1\end{bmatrix}=\begin{bmatrix}2&-2\\1&1\end{bmatrix}\begin{bmatrix}3&0\\0&-1\end{bmatrix}\frac{1}{4}\begin{bmatrix}1&2\\-1&2\end{bmatrix}$$ is sufficient to show that ##\begin{bmatrix}1&4\\1&1\end{bmatrix}## is the *only* matrix with eigenvalues ##3,-1## and eigenvectors ##\begin{bmatrix}2\\1\end{bmatrix},\begin{bmatrix}-2\\1\end{bmatrix}##. The stuff with the "c" vectors was meant to generalize this idea to any number. For example, $$\frac{1}{4}\begin{bmatrix}2(\lambda_1+\lambda_2)&4(\lambda_1-\lambda_2)\\(\lambda_1-\lambda_2)&2(\lambda_1+\lambda_2)\end{bmatrix}=\begin{bmatrix}2&-2\\1&1\end{bmatrix}\begin{bmatrix}\lambda_1&0\\0&\lambda_2\end{bmatrix}\frac{1}{4}\begin{bmatrix}1&2\\-1&2\end{bmatrix}$$ would be the only matrix with the same eigenvectors but with two general parameters as eigenvalues. If this isn't enough to show this, I'm trying to see if there's some method that *can*.
 
  • #14
Sciencemaster said:
For this specific example, what I'm trying to check for is whether $$\begin{bmatrix}1&4\\1&1\end{bmatrix}=\begin{bmatrix}2&-2\\1&1\end{bmatrix}\begin{bmatrix}3&0\\0&-1\end{bmatrix}\frac{1}{4}\begin{bmatrix}1&2\\-1&2\end{bmatrix}$$
In my matrix calculation LHS ##\neq## RHS = $$\begin{bmatrix}1&7/2\\1&1\end{bmatrix}$$
which has eigenvalues ##1 \pm \sqrt{\frac{7}{2}}##
 
  • #15
Sciencemaster said:
Or is there some other way to show that some matrix A is the *only* matrix with certain eigenvectors/values?
In the [EDIT] of post #9, I tried to get a general expression for a 2×2 matrix. This expression seems almost unique. Your checks/comments will be appreciated.

There, 4 parameters determine the 4 elements of the matrix. In the general case of an n×n matrix, n eigenvalues together with n eigenvectors of dimension n, whose magnitudes are arbitrary (or normalized to 1), give n + n(n-1) = n^2 parameters, which is the same as the number of matrix elements. This seems to support the uniqueness.
 
  • #16
Sciencemaster said:
The core of my question is, if ##A=PDP^{-1}## is the general form of a matrix with the corresponding eigenvectors/values, does that mean that ##A## is the *only* matrix with the eigenvalues and eigenvectors represented in ##P## and ##D##? Or is there some other way to show that some matrix ##A## is the *only* matrix with certain eigenvectors/values?

Yes, where ##D## is a Jordan normal form, i.e. the expression of the map with respect to a basis of (generalized) eigenvectors ##\{v_1, \dots, v_n\}##. Conjugation by ##P## then gives the expression of the map with respect to the standard basis.
 
  • #17
Thanks to pasmith and fresh_42, I think I finally understand your question. To paraphrase:
A linear transformation is entirely determined by what it does to any basis, hence if a linear transformation has enough eigenvectors to form a basis, then those eigenvectors and their eigenvalues do determine the linear transformation uniquely.
Since a linear transformation of k^n has a unique k-matrix, in terms of the standard basis (in the standard ordering), then any matrix which has enough eigenvectors to form a basis is indeed uniquely determined by those eigenvectors and eigenvalues.
Any such matrix, whose eigenvectors contain a basis, can also be diagonalized, but the diagonal matrix is not necessarily unique, since it depends on a choice of an ordering of the eigenbasis, or at least on an ordering of the eigenvalues they represent; e.g. there are several 3x3 diagonal matrices with 1,1,2 on the diagonal, although only one with these eigenvalues arranged in their natural order (if they are real numbers). So uniqueness seems to depend on whether the field k is ordered or not.
I hope I summarized correctly the wisdom of the previous answers.
 
  • #18
pasmith said:
Yes, where ##D## is a Jordan normal form, i.e. the expression of the map with respect to a basis of (generalized) eigenvectors ##\{v_1, \dots, v_n\}##. Conjugation by ##P## then gives the expression of the map with respect to the standard basis.
Just to check, if a matrix has an inverse, that means that its transformation is necessarily one-to-one, right? Meaning, if we transform some vector from the standard basis -> eigenvector basis -> standard basis, both transformations can only result in one possible vector.
 
  • #19
Yes.
 
  • #20
mathwonk said:
Thanks to pasmith and fresh_42, I think I finally understand your question. To paraphrase:
A linear transformation is entirely determined by what it does to any basis, hence if a linear transformation has enough eigenvectors to form a basis, then those eigenvectors and their eigenvalues do determine the linear transformation uniquely.
Since a linear transformation of k^n has a unique k-matrix, in terms of the standard basis (in the standard ordering), then any matrix which has enough eigenvectors to form a basis is indeed uniquely determined by those eigenvectors and eigenvalues.
Any such matrix, whose eigenvectors contain a basis, can also be diagonalized, but the diagonal matrix is not necessarily unique, since it depends on a choice of an ordering of the eigenbasis, or at least on an ordering of the eigenvalues they represent; e.g. there are several 3x3 diagonal matrices with 1,1,2 on the diagonal, although only one with these eigenvalues arranged in their natural order (if they are real numbers). So uniqueness seems to depend on whether the field k is ordered or not.
I hope I summarized correctly the wisdom of the previous answers.
This is very helpful. One thought I had about the diagonal matrix is this: since the eigenvectors in the matrix ##P## are aligned with the order of the eigenvalues in the diagonal matrix ##D##, changing the order of the eigenvalues in ##D## would also require rearranging the columns of ##P##, which in turn similarly changes ##P^{-1}##, effectively "canceling" out the impact of the ordering when computing ##A=PDP^{-1}##. So, while the RHS can be represented in multiple ways depending on how the eigenvalues and eigenvectors are ordered, each specific combination of ##P## and ##D## can only give one ##A## (our "original", non-diagonal matrix). Does that make sense?
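That cancellation is easy to see numerically; here is a small NumPy sketch (my own illustration, using the eigenpairs from the earlier example) in which swapping the order of the eigenpairs leaves ##A## unchanged:

```python
import numpy as np

P = np.array([[2.0, -2.0],
              [1.0,  1.0]])   # columns are the eigenvectors
D = np.diag([3.0, -1.0])      # eigenvalues in the matching order

# Swap the eigenvalue order in D *and* the column order in P
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # transposition (permutation) matrix
P2, D2 = P @ Q, Q @ D @ Q     # reordered P and D

A1 = P @ D @ np.linalg.inv(P)
A2 = P2 @ D2 @ np.linalg.inv(P2)
print(np.allclose(A1, A2))    # True: the reordering cancels out
```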

Also, what exactly do you mean by the k-matrix here?
 
  • #21
Sciencemaster said:
So, while the RHS can be represented in multiple ways depending on how the eigenvalues and eigenvectors are ordered, each specific combination of ##P## and ##D## can only give one ##A## (our "original", non-diagonal matrix). Does that make sense?
##A## and ##D## do not share eigenvalues, as posted in #14. Even so, is diagonalization by ##P## useful?
In the case that a real ##A## is a normal matrix, ##A^TA=AA^T##, ##P## is an orthogonal matrix and the diagonalization is familiar to us.
 
  • #22
k is the field of scalars in the matrix, e.g. k could be the reals. I think your analysis as to canceling out makes sense, but to me it is backwards to start from the non unique diagonal matrix and argue that the original matrix is unique. That's why I gave an argument for the uniqueness of the original matrix before discussing diagonalization. I.e. the answer to your question in your title is "yes" and I gave a direct proof that indeed knowing a basis of eigenvectors and their associated eigenvalues does uniquely determine the matrix (in the standard basis).
 
  • #23
anuttarasammyak said:
[EDIT]
With eigenvalues ##l_i## and eigenvectors ##(\cos t_i, \sin t_i)^t## given

$$a_{11}=\frac{l_2 \cot t_2-l_1 \cot t_1}{\cot t_2 - \cot t_1}$$

$$a_{12}=\frac{(l_1-l_2) \cot t_1 \cot t_2}{\cot t_2 - \cot t_1}$$

$$a_{21}=\frac{l_2-l_1}{\cot t_2 - \cot t_1}$$

$$a_{22}=\frac{l_1 \cot t_2 - l_2 \cot t_1}{\cot t_2 - \cot t_1}$$

If I made no mistake in the calculation, the matrix seems unique.

mathwonk said:
I.e. the answer to your question in your title is "yes" and I gave a direct proof that indeed knowing a basis of eigenvectors and their associated eigenvalues does uniquely determine the matrix (in the standard basis).
My 2×2 case study is an easy example to support it. Exchanging ##j=1,2## does not change the matrix elements. I suppose that in the n×n case, the matrix elements are invariant under any permutation of ##j=1,2,\dots,n##.
 
  • #24
anuttarasammyak said:
##A## and ##D## do not share eigenvalues, as posted in #14. Even so, is diagonalization by ##P## useful?
In the case that a real ##A## is a normal matrix, ##A^TA=AA^T##, ##P## is an orthogonal matrix and the diagonalization is familiar to us.
I went back and checked, and they definitely do share eigenvalues. Double-checking the math (and also doing the reverse calculation with an eigenvector calculator, just in case my algebra was wrong), the LHS *does* equal the RHS, and the diagonal matrix has the same eigenvalues as the one on the LHS.
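For reference, a quick NumPy sketch of the same product (my own check) gives the matrix on the LHS:

```python
import numpy as np

P = np.array([[2.0, -2.0],
              [1.0,  1.0]])
D = np.diag([3.0, -1.0])
P_inv = np.linalg.inv(P)   # equals (1/4) * [[1, 2], [-1, 2]]

A = P @ D @ P_inv
print(A)                                  # [[1. 4.] [1. 1.]]
print(np.allclose(A, [[1, 4], [1, 1]]))   # True
```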
 
  • #25
mathwonk said:
k is the field of scalars in the matrix, e.g. k could be the reals. I think your analysis as to canceling out makes sense, but to me it is backwards to start from the non unique diagonal matrix and argue that the original matrix is unique. That's why I gave an argument for the uniqueness of the original matrix before discussing diagonalization. I.e. the answer to your question in your title is "yes" and I gave a direct proof that indeed knowing a basis of eigenvectors and their associated eigenvalues does uniquely determine the matrix (in the standard basis).
Alright, I think I’m starting to get how this works (also, thanks to pasmith who explained it like this). In short: yes — if you’re given a set of eigenvalues and their corresponding eigenvectors, you can uniquely determine the matrix that has them.
The intuitive reason is that when you express the diagonal matrix (which contains the eigenvalues) in the eigenvector basis, and then transform it back into the standard basis using those eigenvectors, that process is a one-to-one mapping. The transformation is reversible, so each eigenvector maps to exactly one vector in the standard basis, which gives us the original matrix. Is this correct? Is there anything else I'm missing here?
If I'm not mistaken, this has the side effect that no matrix can share a set of eigenvectors and eigenvalues with any other.
However, one thought I have is: why should the ordering of the columns of the non-diagonal matrix be unique if the ordering of the diagonal matrix isn't? In the example I used originally, I used diagonalization to show that ##\begin{bmatrix}a&bc\\\frac{b}{c}&a\end{bmatrix}## is the only matrix with eigenvectors ##\begin{bmatrix}c\\1\end{bmatrix}## and ##\begin{bmatrix}-c\\1\end{bmatrix}##. Working backwards, we know that switching the rows of the matrix and using ##\begin{bmatrix}bc&a\\a&\frac{b}{c}\end{bmatrix}## wouldn't have the same eigenvectors, but if the transformation into the diagonal (eigenvector) basis can output a matrix with any ordering, why shouldn't the inverse operation do the same?
 
  • #26
Sciencemaster said:
I went back and checked, and they definitely do share eigenvalues. Double-checking the math (and also doing the reverse calculation with an eigenvector calculator, just in case my algebra was wrong), the LHS *does* equal the RHS, and the diagonal matrix has the same eigenvalues as the one on the LHS.
My bad! Thank you for your correction.
 
  • #27
Sciencemaster:..."Is there anything else I'm missing here?" I would say you are missing the basic property of a linear transformation, namely it is entirely determined by its effect on a basis. For the same reason, a linear transformation only has one matrix in the standard basis. done.

I.e.
1) The eigenvectors and the eigenvalues completely determine the linear transformation, assuming the eigenvectors contain a basis.
2) A linear transformation completely determines its (standard) matrix.

These are both for the same reason, namely that a Linear transformation is determined by (and determines) its action on a basis. Thus if the eigenvectors contain a basis, knowing them and the eigenvalues tells you what the transformation does to that eigenbasis, hence determines the transformation on everything. Now that we know the linear transformation, we also know what it does to the standard basis, which uniquely determines the (columns of the) matrix.

It has absolutely nothing to do with the existence of a diagonal matrix.
I.e. if you know the behavior of any linear map on any basis, then that map has only one standard matrix.

I apologize if this is still confusing. It confused me too. I at first thought I should use the diagonal matrix somehow.
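As a concrete sketch of this (my own illustration): if a map sends basis vectors ##b_i## (the columns of ##B##) to prescribed images ##w_i## (the columns of ##W##), then ##AB=W## forces ##A=WB^{-1}##, which is unique; taking ##w_i=\lambda_i b_i## recovers exactly the matrix of the running example.

```python
import numpy as np

# Basis vectors (here, the eigenvectors of the running example) as columns
B = np.array([[2.0, -2.0],
              [1.0,  1.0]])
# Their prescribed images: w_i = lambda_i * b_i
W = B @ np.diag([3.0, -1.0])

# A B = W has the unique solution A = W B^{-1} when the b_i form a basis
A = W @ np.linalg.inv(B)
print(A)  # [[1. 4.] [1. 1.]] -- the standard matrix is forced
```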
 
  • #28
Say matrix ##\mathbf{A}## has eigenvalues ##\lambda_i## and eigenvectors ##\mathbf{x}_i## satisfying
$$(\mathbf{A}-\lambda_i\mathbf{E})\mathbf{x}_i=0$$
Say there exists another matrix ##\mathbf{A'}## which has the same sets of eigenvalues and eigenvectors:
$$(\mathbf{A'}-\lambda_i\mathbf{E})\mathbf{x}_i=0$$
Subtracting the two equations,
$$(\mathbf{A}-\mathbf{A'})\mathbf{x_i}=0$$
Since any vector ##\mathbf{x}## can be expressed as a linear combination of ##\{\mathbf{x_i}\}##,
$$(\mathbf{A}-\mathbf{A'})\mathbf{x}=0$$
So
$$\mathbf{A}=\mathbf{A'}$$
A is unique.
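One way to see this argument in action (my own sketch, with SymPy): treat the entries of a 2×2 matrix as unknowns and impose the eigenpair conditions as linear equations; when the eigenvectors form a basis, the system has exactly one solution.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# Impose A x_i = lambda_i x_i for the two eigenpairs of the running example
eqs = []
for lam, vec in [(3, sp.Matrix([2, 1])), (-1, sp.Matrix([-2, 1]))]:
    eqs += list(A * vec - lam * vec)

sol = sp.solve(eqs, [a, b, c, d], dict=True)
print(sol)  # [{a: 1, b: 4, c: 1, d: 1}] -- a single solution, so A is unique
```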
 
  • #29
anuttarasammyak said:
A is unique.
What if the matrix ##A## has repeated eigenvalues and not all the eigenvectors are linearly independent?
 
  • #30
In post #28, what matters is that the vectors ##x_j## span. That they are eigenvectors is irrelevant. Indeed, the eigenvectors of a matrix do not always span. (E.g. a 3×3 matrix with all zeroes except 1's just above the diagonal, and a 3×3 matrix with all zeroes except 1's everywhere above the diagonal, are both nilpotent and different, but have the same eigenvectors and eigenvalues, namely nonzero multiples of ##e_1##, with eigenvalue zero.)

Here is the argument from #28 without the eigenvectors:
Assume ##A, A'## are any two matrices that agree on a spanning set ##x_j##. Then ##Ax_j = A'x_j##, so ##(A-A')x_j = 0## for all j; hence, since the ##x_j## span, ##(A-A')x = 0## for all x, and thus ##A = A'##.
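The counterexample is easy to check (my own sketch, with SymPy): both nilpotent matrices below have 0 as their only eigenvalue and multiples of ##e_1## as their only eigenvectors, yet they are different matrices.

```python
import sympy as sp

M1 = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
M2 = sp.Matrix([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenvector basis)
print(M1.eigenvects())  # [(0, 3, [Matrix([[1], [0], [0]])])]
print(M2.eigenvects())  # [(0, 3, [Matrix([[1], [0], [0]])])]
print(M1 == M2)         # False: same eigenpairs, different matrices
```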
 
  • #31
renormalize said:
What if the matrix A has repeated eigenvalues and not all the eigenvectors are linearly independent?
I admit that the cases of eigenvalue degeneracy should be considered next. I am optimistic: for example, with a degeneracy of 2, the eigenvectors span a plane, and we may choose any 2 basis vectors on that plane as we like, which still gives n eigenvectors that span.
 
  • #32
As for the diagonalization of an n×n matrix A,
$$A=PDP^{-1}$$
P has n real parameters, which determine the lengths (with sign) of the n eigenvectors.
##P## and ##D## admit n! choices of ordering (numbering) of the eigenvectors. Inserting a permutation matrix ##Q##, for which ##Q^{-1}=Q^T##, as
$$A=PQQ^{-1}DQQ^{-1}P^{-1}=(PQ)(Q^{-1}DQ)(PQ)^{-1}$$ makes this explicit.
I expect the above accounts for the full set of parameters. Here again, degeneracy should be considered next.
 
  • #33
anuttarasammyak said:
P has n real parameters, which determine the lengths (with sign) of the n eigenvectors.
P has more parameters. For example, a 2×2 rotation matrix A has complex eigenvalues
$$
A=
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix}
=PDP^{-1}=
\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}}
\end{pmatrix}

\begin{pmatrix}
\cos\theta+i\sin\theta & 0 \\
0 & \cos\theta-i\sin\theta
\end{pmatrix}

\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}}
\end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}}
\end{pmatrix}

\begin{pmatrix}
e^{i \theta} & 0 \\
0 & e^{-i \theta}
\end{pmatrix}

\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}}
\end{pmatrix}


$$

The eigenvectors must have complex components. ##P## has n complex parameters, which determine the length and phase of the eigenvectors. These effects are compensated for by multiplying by the inverse ##P^{-1}##.
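A numerical spot-check of this complex diagonalization (my own NumPy sketch, using an arbitrary test angle):

```python
import numpy as np

theta = 0.7  # arbitrary test angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Complex eigenvector matrix P and diagonal eigenvalue matrix D from above
P = np.array([[1.0, 1.0],
              [-1j, 1j]]) / np.sqrt(2)
D = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])

print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```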
 
  • #34
mathwonk said:
Sciencemaster:..."Is there anything else I'm missing here?" I would say you are missing the basic property of a linear transformation, namely it is entirely determined by its effect on a basis. For the same reason, a linear transformation only has one matrix in the standard basis. done.

I.e.
1) The eigenvectors and the eigenvalues completely determine the linear transformation, assuming the eigenvectors contain a basis.
2) A linear transformation completely determines its (standard) matrix.

These are both for the same reason, namely that a Linear transformation is determined by (and determines) its action on a basis. Thus if the eigenvectors contain a basis, knowing them and the eigenvalues tells you what the transformation does to that eigenbasis, hence determines the transformation on everything. Now that we know the linear transformation, we also know what it does to the standard basis, which uniquely determines the (columns of the) matrix.

It has absolutely nothing to do with the existence of a diagonal matrix.
I.e. if you know the behavior of any linear map on any basis, then that map has only one standard matrix.

I apologize if this is still confusing. It confused me too. I at first thought I should use the diagonal matrix somehow.
I see. It makes more sense when I look at it like this: the columns of a matrix transformation represent where the standard basis vectors "go". If a different matrix were to transform the same vectors to the same place, each column would be identical to the first case, and so the transformation would be identical. For example, if we have the matrix ##\begin{bmatrix}1&2\\2&1\end{bmatrix}##, no other transformation can place the standard basis vectors at ##\begin{bmatrix}1\\2\end{bmatrix}## and ##\begin{bmatrix}2\\1\end{bmatrix}##, lest each column be identical to this matrix. I'm sure there's a similar argument to be made with non-standard basis vectors, although it's a bit harder to visualize. It helps me to think of a transformation (with linearly independent columns) being decomposed into an inverse matrix and a matrix (i.e. ##M=B^{-1}A##), representing the initial vector being transformed into a standard basis vector, and then to wherever the original matrix would have placed it. Both of these operations are easy to imagine as one-to-one transformations via a similar argument to the one I made above.
From there, it seems trivial to use eigenvectors and values instead of some other set of vectors. After all, due to the linearity of the transformation, you can extrapolate how one vector transforms given how others do so comparatively easily.
I imagine this next argument isn't very helpful or anything, but I *believe* that a matrix transformation was originally meant to be an alternative representation of a system of linear equations. Does the "one-to-one-ness" have anything to do with such a system of equations having only one solution (N parameters for N equations/rows) so long as it's linearly independent?
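Regarding the column picture above, a tiny NumPy sketch (my own illustration) showing that the columns of a matrix are exactly the images of the standard basis vectors:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The image of each standard basis vector is the corresponding column of A
print(A @ e1, A @ e2)  # [1. 2.] [2. 1.]
```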
 