MHB (Very) Basic Questions on Linear Transformations and Their Matrices

Math Amateur
Gold Member
MHB
Firstly, my apologies to Deveno in the event that he has already answered these questions in a previous post ...

Now ...

Suppose we have a linear transformation $$T: \mathbb{R}^3 \longrightarrow \mathbb{R}^2$$ , say ...

Suppose also that $$\mathbb{R}^3$$ has basis $$B$$ and $$\mathbb{R}^2$$ has basis $$B'$$, neither of which is the standard basis ...

Suppose further that $$T(x, y, z) = ( x + 2y - z , 3x + 5z )$$ ... ...

... ... ...

Then (if I am right) we write the matrix $$A$$ of the transformation as follows:

$$A = \begin{bmatrix} 1 & 2 & -1 \\ 3 & 0 & 5 \end{bmatrix}$$

... BUT ... some questions ...

Question 1

Is the expression for the linear transformation

$$T: \mathbb{R}^3 \longrightarrow \mathbb{R}^2$$

an expression in terms of the transformation of $$v = (x, y, z)$$ into $$w = T(v)$$ in terms of the bases $$B$$ and $$B'$$ ... ...

That is, when we input some vector $$v = ( 2, 1, -3 )$$ , say ... ... is that vector to be read as being in terms of the basis $$B$$ or in terms of the standard basis ... ...

... ... and is the output vector from applying T, namely

$$T(v) = T( 2, 1, -3 ) = ( x + 2y - z , \ 3x + 5z ) = ( 2 + 2(1) - (-3) , \ 3(2) + 5(-3) ) = ( 7, -9 )$$

in terms of the basis $$B'$$ or in terms of the standard basis?

[By the way, I think that, by convention, linear transformations from $$\mathbb{R}^n$$ to $$\mathbb{R}^m$$ are expressed as if they go from the standard basis to the standard basis ... but why they are not taken to be in the declared bases $$B$$ and $$B'$$, I am not sure ... ... ]
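The computation above can be checked numerically; here is a minimal NumPy sketch, reading every vector in standard coordinates:

```python
import numpy as np

# Matrix of T(x, y, z) = (x + 2y - z, 3x + 5z) with respect to the standard bases
A = np.array([[1, 2, -1],
              [3, 0, 5]])

v = np.array([2, 1, -3])  # coordinates in the standard basis of R^3

w = A @ v  # coordinates of T(v) in the standard basis of R^2
print(w)   # [ 7 -9]
```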
Question 2

Does the matrix of the transformation

$$A = \begin{bmatrix} 1 & 2 & -1 \\ 3 & 0 & 5 \end{bmatrix}$$

represent the transformation from $$[v]_B$$ to $$[T(v)]_{B'}$$

or

does it represent the transformation from $$[v]_{S_1}$$ to $$[T(v)]_{S_2}$$

where $$S_1$$ is the standard basis for $$\mathbb{R}^3$$

and $$S_2$$ is the standard basis for $$\mathbb{R}^2$$
Hope someone can help ...

Peter
 
A matrix representation of a linear transformation DEPENDS on the basis chosen.

In the same vein, the COORDINATES of a vector ALSO depend on the basis chosen.

By itself, the triple $(2,0,5)$ means nothing; it is just three numbers separated by commas, and enclosed in parentheses.

By convention (and *solely* by convention), elements of $F^n$ (where $F$ is any field) are usually denoted by their representation in the standard basis:

$e_1 = (1,0,\dots,0)$
$e_2 = (0,1,\dots,0)$
$\vdots$
$e_n = (0,0,\dots,1)$

so when we say $(x,y,z) \in \Bbb R^3$, for example, what we really MEAN is the linear combination:

$xe_1 + ye_2 + ze_3$.

Given an $n$-dimensional vector space, the only point unambiguously defined by an $n$-tuple is the $0$-vector, which is the same in any basis. For example, the point in $3$-space you may think of as the $x$-unit vector ($e_1$) might be what I think of as $(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2},0)$, because my $3$-space has the $xy$-plane rotated 45 degrees.
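The rotated-basis example above can be sketched numerically; the particular rotated basis below is an assumed illustration (the columns of $P$ are the basis vectors, written in standard coordinates):

```python
import numpy as np

s = np.sqrt(2) / 2

# A basis for R^3 whose xy-plane is rotated 45 degrees:
# columns are the basis vectors in standard coordinates.
P = np.array([[s,  s, 0],
              [-s, s, 0],
              [0,  0, 1]])

e1 = np.array([1.0, 0.0, 0.0])

# Coordinates of e1 in the rotated basis: solve P c = e1
c = np.linalg.solve(P, e1)
print(c)  # approximately [0.7071 0.7071 0.    ]
```

Same point, different coordinates: $(1, 0, 0)$ in the standard basis becomes $(\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2}, 0)$ in the rotated one.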

Given a linear transformation, there isn't THE matrix representation, only *A* matrix representation *relative to some choice of bases*. Indeed there is a linear isomorphism:

$\text{Hom}_F(U,V) \to \text{Mat}_{\dim(V) \times \dim(U)}(F)$

but this isomorphism isn't unique: we get a different one for each pair of bases chosen for $U$ and $V$.

Bases are a great way to turn our calculations with vectors into calculations in the underlying field, but a vector space doesn't "come" with a basis supplied (it doesn't care what coordinate system you choose).
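To make the basis-dependence concrete, here is a minimal NumPy sketch. The bases $B$ and $B'$ below are invented for illustration (not from the thread); the columns of $P$ and $Q$ are the basis vectors written in standard coordinates, and the matrix $Q^{-1} A P$ then sends $[v]_B$ to $[T(v)]_{B'}$:

```python
import numpy as np

# T in standard coordinates, as in the original post
A = np.array([[1, 2, -1],
              [3, 0, 5]])

# Hypothetical non-standard bases: columns are the basis vectors,
# written in standard coordinates.
P = np.array([[1, 1, 0],   # basis B for R^3
              [0, 1, 1],
              [0, 0, 1]])
Q = np.array([[1, 1],      # basis B' for R^2
              [0, 1]])

# Matrix of T relative to B (domain) and B' (codomain):
# it maps B-coordinates of v to B'-coordinates of T(v).
A_BBp = np.linalg.inv(Q) @ A @ P

# Check on v = (2, 1, -3), given in standard coordinates:
v = np.array([2, 1, -3])
v_B = np.linalg.solve(P, v)   # coordinates of v in basis B
Tv_Bp = A_BBp @ v_B           # coordinates of T(v) in basis B'
print(Q @ Tv_Bp)              # back to standard coordinates: [ 7. -9.]
```

The same map $T$ thus gets a different matrix for each choice of bases, even though it always sends $(2, 1, -3)$ to $(7, -9)$ as points.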
 
Deveno said:
A matrix representation of a linear transformation DEPENDS on the basis chosen. ...
Well! ... ... That was REALLY HELPFUL!

Thanks Deveno ... that has cleared a few things up for me ...

Thanks again,

Peter
 