MHB Why is the eigenvector of a row reduced matrix not always the zero vector?

ognik
Hi - this follows on from my earlier post: http://mathhelpboards.com/linear-abstract-algebra-14/all-basis-go-standard-basis-16232.html. I'd just like to confirm my understanding so far ...

Theorem
Given two vector spaces $V, W$, a basis $\{\alpha_1, \dots, \alpha_n\}$ of $V$ and a set of $n$ vectors $\{\beta_1, \dots, \beta_n\} \subset W$, there exists a unique linear transformation $T : V \to W$ such that $T(\alpha_i) = \beta_i$.

Let $C_v$ be the change of basis matrix w.r.t. $V$ and $C_w$ the one w.r.t. $W$.

1) Please confirm that the two change-of-basis matrices are both w.r.t. the standard basis?

2) So, given a vector $\vec{a}$ in $\mathbb{R}^n$, we would have (I believe?) $\vec{a} = C_w [\vec{a}]_w$ and $\vec{a} = C_v [\vec{a}]_v$?

3) And so $[\vec{a}]_w = C_w^{-1} C_v [\vec{a}]_v$?

4) I have seen a method for finding $C_w^{-1} C_v$ (which transforms $[\vec{a}]_v$ to $[\vec{a}]_w$): augment $C_w$ with $C_v$, then row reduce until the LHS is the identity matrix; the RHS is then $C_w^{-1} C_v$?

5) Another question: given a transformation matrix $A$, if $D$ is the equivalent transformation w.r.t. a basis $B$, i.e. $T(\vec{x}) = A\vec{x}$ and $D = C_B^{-1} A C_B$, should we write $[T(\vec{x})]_B = D[\vec{x}]_B$, or $T([\vec{x}]_B)$?

Also, anything I am missing? Much appreciated as usual.
 
First of all, I do not understand what you mean by $C_v$ and $C_w$. A change-of-basis matrix is, in effect, "basis-less": it turns one set of numbers into a different set of numbers. Interpreting what those numbers MEAN is what we mean by "choosing a basis". In any case, it makes no sense to speak of "the change-of-basis matrix" corresponding to a vector space. For a change-of-basis matrix to mean anything at all, we need TWO bases in the SAME vector space.

The uniqueness of the linear transformation $T$ that takes $\alpha_i \mapsto \beta_i$ (when $\dim V = \dim W$) should be clear: since $A = \{\alpha_1,\dots,\alpha_n\}$ is a basis, any vector $v \in V$ is a UNIQUE linear combination:

$v = c_1\alpha_1 +\cdots + c_n\alpha_n$.

This, in turn, means that:

$T(v) = T(c_1\alpha_1 +\cdots + c_n\alpha_n) = c_1T(\alpha_1) +\cdots + c_nT(\alpha_n)$

$= c_1\beta_1 +\cdots + c_n\beta_n$, since $T$ is linear.

Since $B = \{\beta_1,\dots,\beta_n\}$ is ALSO a basis, this linear combination of the $\beta_i$ is likewise unique; that is, we have specified the value of $T$ uniquely for every $v \in V$. If two functions agree on every point of their domain, and have the same co-domain, they are the same function, so any other linear transformation that maps $\alpha_i$ to $\beta_i$ for each $i = 1,2,\dots,n$ must, in fact, BE $T$.
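As a concrete illustration, here is a minimal numerical sketch (numpy; the bases are made up for illustration): if the $\alpha_i$ are the columns of an invertible matrix $P$ and the $\beta_i$ are the columns of $Q$, the unique $T$ has standard matrix $QP^{-1}$.

```python
import numpy as np

# Toy check of the theorem (bases invented for illustration).
# Columns of P: the basis vectors alpha_i; columns of Q: the targets beta_i.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# The unique linear T with T(alpha_i) = beta_i has standard matrix Q P^{-1},
# since T must satisfy T P = Q column by column.
T = Q @ np.linalg.inv(P)

assert np.allclose(T @ P, Q)  # T sends each alpha_i to the matching beta_i
```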

Now, what I think you meant is, if $P$ is the matrix whose columns are (the coordinates of) the $\alpha_i$ in the standard basis, and $Q$ is the matrix whose columns are (the coordinates of) the $\beta_i$ likewise, then, yes:

$Q^{-1}P$ will take $[v]_A \to [v]_B$, and is the $A \to B$ change-of-basis matrix. Note that my two matrices $P$ and $Q$ each reference two bases: either $A$ or $B$, and the standard basis (I like to call this the "invisible basis", because an element of $\Bbb R^n$ has, in this basis, coordinates that are just its coordinates as a point in space).

I believe this addresses points 1-3.
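A minimal numpy sketch of points 2-3 (the bases here are invented for illustration):

```python
import numpy as np

# Columns of P: basis A in standard coordinates; columns of Q: basis B.
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])
Q = np.array([[1.0, 1.0],
              [1.0, -1.0]])

v_A = np.array([3.0, 4.0])        # coordinates of some v relative to A
v = P @ v_A                       # the same v in standard coordinates
v_B = np.linalg.inv(Q) @ P @ v_A  # apply the A -> B change of basis

assert np.allclose(Q @ v_B, v)    # reconstructing v from [v]_B recovers v
```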

Note as well that row-reducing a matrix corresponds to left-multiplying by certain invertible matrices. Using an augmented matrix is just applying the operations to two matrices simultaneously. If we start with:

$A|B$, and we apply a series of row operations equivalent (cumulatively) to the invertible matrix $P$, we have:

$PA|PB$.

If this becomes:

$I|PB$, it follows that $P = A^{-1}$ (as in my previous post), and thus $PB = A^{-1}B$, as you conjectured. For large matrices, this is often an efficient way of inverting $A$.
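Here is a sketch of that augmented-matrix method in sympy (the matrices $A$ and $B$ are made up, and `rref` stands in for the hand row-reduction):

```python
from sympy import Matrix, eye

# Row-reduce the augmented matrix [A | B]; once the left block is the
# identity, the right block is A^{-1} B.
A = Matrix([[2, 1],
            [1, 1]])
B = Matrix([[1, 0],
            [3, 5]])

reduced, _ = A.row_join(B).rref()

assert reduced[:, :2] == eye(2)       # left block reduced to I
assert reduced[:, 2:] == A.inv() * B  # right block is A^{-1} B
```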

I'm not sure I understand your question 5: for one thing, it is atypical to "tag" change-of-basis matrices, and more typical to "tag" the matrix corresponding to a linear transformation, depending on the bases used at start and finish. In general:

$[T]_C^B[v]_B = [Tv]_C$ is the convention I'm used to.

If $A$ is the matrix of $T$ *in the standard basis*, $P$ is the change-of-basis matrix from $B$-coordinates to the standard basis, and $Q$ is the change-of-basis matrix from $C$-coordinates to the standard basis, then the matrix of $T$ relative to the bases $B$ and $C$ (where the "inputs" are in $B$-coordinates and the desired "outputs" are in $C$-coordinates) is:

$[T]_C^B = Q^{-1}AP$

If $B = C$ (so we are just swapping one basis for another), then $P = Q$, and $[T]_B = P^{-1}AP$:

$P: [v]_B \to v$

$AP: [v]_B \to v \to Av$

$P^{-1}AP: [v]_B \to v \to Av \to [Av]_B$,

where here I am following your convention of letting $v$ denote itself in the standard basis.
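That chain is easy to check numerically; a minimal numpy sketch (the matrix $A$, the basis $P$, and the coordinates are all invented):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # matrix of T in the standard basis
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])       # columns: the basis B in standard coords

D = np.linalg.inv(P) @ A @ P      # matrix of T in B-coordinates

v_B = np.array([2.0, 5.0])        # B-coordinates of some vector v
v = P @ v_B                       # the same v in the standard basis

# [Av]_B computed directly agrees with D [v]_B:
assert np.allclose(D @ v_B, np.linalg.inv(P) @ (A @ v))
```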
 
This took time to process ...

1 - 3) Yes, I used $C_v = P$ etc., and my 'starting' basis for each of $V, W$ was assumed to be the standard basis. So should I have declared $C_v$ as the change of basis matrix from $V$-coordinates to the standard basis?
4) Thanks for showing the derivation.
5) I have since confirmed that $T[(\vec{x})]$ was a typo in the notes I saw it in; the $T$ should be inside the brackets.
Also, I prefer putting the vector in brackets; I hope that is not bad practice? And I notice you don't put an arrow on top of vectors?
 
It's OK to put vectors inside brackets to denote their representation in a certain basis; this is a commonly used notation, although it can be cumbersome when dealing with the actual "coordinates" themselves, which I like to write as:

$[1,2,4]_B$ to denote $b_1 + 2b_2 + 4b_3$ for the basis $B = \{b_1,b_2,b_3\}$. Fair warning: this notation is non-standard, and can confuse people.

As for writing $\vec{v}$, I generally don't bother with it; some people like to write vectors as $\mathbf{v}$. This is mostly just to distinguish vectors from scalars, which is also done by writing scalars with Greek letters and vectors with Roman ones:

$v = (\alpha_1,\alpha_2,\alpha_3) = \alpha_1e_1 + \alpha_2e_2 + \alpha_3e_3$.

I typically use letters at the START of the alphabet for scalars, and letters at the REAR for vectors. This is just NOTATION; as long as there is agreement between you and whoever you are discoursing with as to what is MEANT, the notation is just a TOOL.

My reasons for not "distinguishing" vectors from scalars come from working with Galois theory, where a lot of the time the "vectors" are just elements of a larger field, and so the "scalar multiplication" is just the field multiplication in the larger field. It is also not unusual to see the notation $rm$ for the "scalar product" of a ring element $r \in R$ and $m \in M$, where $M$ is an $R$-module (a generalization of vector spaces to rings instead of fields; in fact, vector spaces are just $F$-modules, which will make sense to you later).
 
Thanks - illuminating. I agree my extra notation is physically cumbersome, but I find it helps me process things, especially with relatively new (or long-forgotten) material.

A question from eigenvectors: I did a problem which ended with a row-reduced matrix of $\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix}$, i.e. $x_1 = 0, x_2 = 0, x_3 = 0$. I know, but don't fully grasp why, that the eigenvector is $\begin{bmatrix}0\\0\\1\end{bmatrix}$ instead of the zero vector?
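For what it's worth: eigenvectors for an eigenvalue $\lambda$ are, by definition, the nonzero solutions of $(A - \lambda I)\vec{x} = \vec{0}$, so one exists exactly when the row-reduced form of $A - \lambda I$ has a zero row (a free variable); a reduced form with no zero row forces $\vec{x} = \vec{0}$, meaning $\lambda$ was not an eigenvalue. A minimal sympy sketch (the reduced matrix below is hypothetical, chosen with $x_3$ free, which is what yields the eigenvector $(0,0,1)$):

```python
from sympy import Matrix

# Hypothetical reduced form of A - lam*I with x_3 as a free variable.
R = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 0]])

# The nullspace holds the eigenvectors: every nonzero multiple of (0, 0, 1).
print(R.nullspace())  # [Matrix([[0], [0], [1]])]
```

Note that the reduced matrix quoted in the post has no zero row, so as written it would force the zero vector only; an eigenvalue computation should leave at least one free variable.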
 