Linear transformation representation with a matrix


Homework Help Overview

The discussion revolves around a linear transformation T: R2-->R2 defined by T(x1, x2) = (x1 + x2, 2x1 - x2). Participants are tasked with using a matrix A to find T(v) for the vector v = (2, 1), while also considering two bases B and B'.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants explore the relationship between the transformation T and its representation using matrices, questioning how to derive the coordinates of vector v in the basis B. There is confusion regarding the coefficients [1 -1]T and their origin in the context of the transformation.

Discussion Status

Some participants have provided insights into the coordinate representation of vectors in different bases and the process of expressing a vector as a linear combination of basis vectors. There is acknowledgment of a gap in the textbook explanation regarding the derivation of certain coefficients, leading to further clarification among participants.

Contextual Notes

Participants note that the textbook may have omitted details on how to express the vector v in terms of the basis B, which has led to confusion about the transformation process. The discussion reflects a mix of understanding and uncertainty regarding the application of linear transformations and basis changes.

patricio2626

Homework Statement



For the linear transformation T: R2-->R2 defined by T(x1, x2) = (x1 + x2, 2x1 - x2), use the matrix A to find T(v), where v = (2, 1). B = {(1, 2), (-1, 1)} and B' = {(1, 0), (0, 1)}.

Homework Equations



The rule for T is given: T(x1, x2) = (x1 + x2, 2x1 - x2)

The Attempt at a Solution



Okay, I see that T(v) is simply (2+1, 2*2-1) --> (3, 3), but this matrix business has me a bit confused. From my textbook:

Using the basis B = {(1, 2), (-1, 1)}, you find that v = (2, 1) = 1(1, 2) - 1(-1, 1), which implies [v]B = [1 -1]T.

Now, where did this seemingly 'magical' 1 and -1 come from? The matrix relative to B and B' is obviously {(3, 0), (0, -3)}, but I have no idea where this [1 -1]T came from, nor what it is. I see that this is the coordinate matrix for (3, 3) relative to B, because I see that A*v in this case gives (3, 3), but have no idea how it was derived. Perhaps there's some small gem or concept here that I'm missing?
 
patricio2626 said:

Homework Statement



For the linear transformation T: R2-->R2 defined by T(x1, x2) = (x1 + x2, 2x1 - x2), use the matrix A to find T(v), where v = (2, 1). B = {(1, 2), (-1, 1)} and B' = {(1, 0), (0, 1)}.

Homework Equations



T(v) is given, (x1+x2, 2x1-x2)

The Attempt at a Solution



Okay, I see that T(v) is simply (2+1, 2*2-1) --> (3, 3), but this matrix business has me a bit confused. From my textbook:

Using the basis B = {(1, 2), (-1, 1)}, you find that v = (2, 1) = 1(1, 2) - 1(-1, 1), which implies [v]B = [1 -1]T.

Now, where did this seemingly 'magical' 1 and -1 come from? The matrix relative to B and B' is obviously {(3, 0), (0, -3)}, but I have no idea where this [1 -1]T came from, nor what it is. I see that this is the coordinate matrix for (3, 3) relative to B, because I see that A*v in this case gives (3, 3), but have no idea how it was derived. Perhaps there's some small gem or concept here that I'm missing?
The coordinates of a vector represent the constants that multiply the vectors in a basis.
In the standard basis for ##\mathbb{R}^2##, the vector ##\begin{bmatrix} 3 \\ 2\end{bmatrix}## means ##3\begin{bmatrix} 1 \\ 0\end{bmatrix} + 2\begin{bmatrix} 0\\ 1\end{bmatrix}##.
What you show as ##[v]_B## are the coefficients that combine the vectors of basis B to produce ##\begin{bmatrix} 2 \\ 1 \end{bmatrix}##.
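The point above can be sketched numerically: finding ##[v]_B## just means solving the 2×2 linear system ##c_1 b_1 + c_2 b_2 = v## for the coefficients. A minimal sketch in Python (the helper name `coords_in_basis` is my own, not from the thread; it uses Cramer's rule for the 2×2 case):

```python
# Hypothetical helper: find the coordinates of v relative to a basis {b1, b2}
# by solving c1*b1 + c2*b2 = v with Cramer's rule (2x2 case only).
def coords_in_basis(b1, b2, v):
    det = b1[0] * b2[1] - b2[0] * b1[1]       # determinant of the matrix [b1 b2]
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / det  # Cramer's rule, first coordinate
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / det  # Cramer's rule, second coordinate
    return (c1, c2)

# v = (2, 1) in the basis B = {(1, 2), (-1, 1)}:
print(coords_in_basis((1, 2), (-1, 1), (2, 1)))  # (1.0, -1.0)
```

The result (1, -1) is exactly the ##[v]_B = [1\ {-1}]^T## the textbook stated without showing the work.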
 
Mark44 said:
The coordinates of a vector represent the constants that multiply the vectors in a basis.
In the standard basis for ##\mathbb{R}^2##, the vector ##\begin{bmatrix} 3 \\ 2\end{bmatrix}## means ##3\begin{bmatrix} 1 \\ 0\end{bmatrix} + 2\begin{bmatrix} 0\\ 1\end{bmatrix}##.
What you show as ##[v]_B## are the coefficients that combine the vectors of basis B to produce ##\begin{bmatrix} 2 \\ 1 \end{bmatrix}##.

Okay, so

v = (2, 1) = 1(1, 2) - 1(-1, 1)

because
1*1 - 1*(-1) = 2
1*2 - 1*1 = 1

So, the book simply skipped this piece of the explanation?
 
patricio2626 said:
Okay, so

v = (2, 1) = 1(1, 2) - 1(-1, 1)

because
1*1 - 1*(-1) = 2
1*2 - 1*1 = 1

So, the book simply skipped this piece of the explanation?

Yes.
 
Thanks gents, this got me past this head-scratcher, and then I had an on-the-fly tutor session yesterday and I get it now. To convert between nonstandard bases and find a transform for a vector:

-To get the matrix relative to the bases, apply T to each vector of the first nonstandard basis, say B1, and express each resulting column vector as a linear combination of the second nonstandard basis, say B2.
-To get v expressed in terms of B1, write v as a linear combination of the B1 vectors; the coefficients are [v]_B1.
-We then multiply by the transformation matrix: A[v]_B1, and the result is expressed in terms of coefficients for B2.
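The steps above, specialized to this problem (where B' happens to be the standard basis, so applying T already yields B'-coordinates), can be sketched as follows; the variable names are my own:

```python
# Sketch of the full recipe for this thread's problem:
# T(x1, x2) = (x1 + x2, 2*x1 - x2), B = {(1,2), (-1,1)}, B' = standard basis.
def T(x1, x2):
    return (x1 + x2, 2 * x1 - x2)

B = [(1, 2), (-1, 1)]  # nonstandard basis for the domain

# Columns of A: each basis vector of B pushed through T. Since B' is the
# standard basis, T(b) is already the coordinate vector relative to B'.
A = [T(*b) for b in B]  # columns: (3, 0) and (0, -3)

v_B = (1, -1)           # [v]_B found earlier: (2, 1) = 1(1, 2) - 1(-1, 1)

# A [v]_B = [T(v)]_{B'}: combine the columns of A using the B-coordinates of v.
Tv = tuple(A[0][i] * v_B[0] + A[1][i] * v_B[1] for i in range(2))
print(Tv)  # (3, 3), matching T(2, 1) computed directly
```

This makes the roles explicit: the columns of A record where T sends the B basis vectors (written in B'), and multiplying by ##[v]_B## recombines those columns to give ##[T(v)]_{B'}##.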
 
