How does the change of basis theorem work in linear algebra?

bonfire09
Let B = {b1, b2} and C = {c1, c2} be bases. Then the change-of-coordinates matrix P(C to B) has as its columns the C-coordinate vectors of b1 and b2. Let

[b1]_C = <x1,x2>^{T} and [b2]_C = <y1,y2>^{T}.

Then by definition [c1 c2]<x1,x2>^{T} = b1 and [c1 c2]<y1,y2>^{T} = b2. What I don't get is how you can multiply the matrix whose columns are the basis C by the change-of-coordinates matrix P(C to B) and get back the basis B.
Can anyone help me understand how to derive the fact that you can take the basis C and the matrix P and get the basis vectors b1 and b2? My textbook says very little about it.

Here is an example of a problem relating to this idea.
There was a problem that said: find a basis {u1, u2, u3} for R^3 such that P is the change-of-coordinates matrix from {u1, u2, u3} to the basis {v1, v2, v3}. P was given, and v1, v2, v3 were given as well. I know how to do it, but I don't get how it works.
 
OK, let's work in two dimensions, because writing math text on this forum is slow for me.

Suppose there is a vector v = <x1,x2>^{T}. We are used to the standard Cartesian basis vectors
{e}_1 = <1,0> and {e}_2 = <0,1>, such that

v = <x1,x2>^{T} = x1{e}_1 + x2{e}_2

Now let's say we want to express v in a new basis, say {f}_1 = <1,1> and {f}_2 = <1,2>, such that

v = <y1,y2>^{T} = y1{f}_1 + y2{f}_2

since v is the same vector regardless of how it is represented, we can equate the two basis expressions

v = v

y1{f}_1 + y2{f}_2 = x1{e}_1 + x2{e}_2

<{f}_1,{f}_2><y1,y2>^{T} = <{e}_1,{e}_2><x1,x2>^{T}

Let <{f}_1,{f}_2> = F (2x2 matrix with the f basis as columns)
Let <{e}_1,{e}_2> = E (2x2 matrix with the e basis as columns)

Then

F<y1,y2>^{T} = E<x1,x2>^{T}

So, multiplying both sides on the left by F^{-1} (F is invertible, because its columns form a basis),

<y1,y2>^{T} = F^{-1}E<x1,x2>^{T}

Let P = F^{-1}E (here E is the identity, so P = F^{-1})

Then

<y1,y2>^{T} = P<x1,x2>^{T}
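As a quick numerical sanity check, here is a minimal sketch in Python (standard library only) using the f basis from above; the vector <3,5> is just a made-up test case:

```python
# Check of the derivation: with E the identity, P = F^{-1}E = F^{-1},
# and <y1,y2>^{T} = P <x1,x2>^{T}.

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]; assumes det != 0."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

def matvec(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# F has the f basis as columns: f1 = <1,1>, f2 = <1,2>
F = [[1, 1],
     [1, 2]]
x = [3.0, 5.0]      # coordinates of v in the standard e basis
P = inv2(F)         # E is the identity, so P = F^{-1}
y = matvec(P, x)    # coordinates of the same v in the f basis

# Check: y1*f1 + y2*f2 reproduces the original vector
recon = [y[0] * 1 + y[1] * 1,
         y[0] * 1 + y[1] * 2]
print(y, recon)     # [1.0, 2.0] [3.0, 5.0]
```

The same vector is <3,5> in the e basis and <1,2> in the f basis, and summing y1 f1 + y2 f2 gets the original components back.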

I hope this shows where the logic is coming from. If it is still not clear, let me know, and I will clarify. I will edit this better when I return home.
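The same idea answers the exercise in the first post: if P is the change-of-coordinates matrix from {u1,u2,u3} to {v1,v2,v3}, then column j of P holds the v-coordinates of u_j, so u_j = V times column j of P, i.e. U = V P. Here is a minimal sketch; the numbers for P and the v's are made up for illustration, since the actual problem data weren't quoted in the thread:

```python
# Recover the basis {u1,u2,u3} from V (columns v1,v2,v3) and P
# (change of coordinates from the u basis to the v basis) via U = V P.
# P and V below are hypothetical example values.

def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

V = [[1, 0, 0],   # columns are v1, v2, v3
     [0, 1, 0],
     [1, 0, 1]]
P = [[1, 2, 0],
     [0, 1, 0],
     [1, 0, 1]]

U = matmul(V, P)  # columns of U are the sought basis u1, u2, u3
# e.g. u1 = 1*v1 + 0*v2 + 1*v3, matching P's first column (1, 0, 1)
print(U)          # [[1, 2, 0], [0, 1, 0], [2, 2, 1]]
```

Reading off the first column of U gives u1 = <1,0,2>, which is indeed v1 + v3, as the first column of P says it should be.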
 
Thanks, this is what I was looking for. I get it.
 