Transforming Linear Algebra: How to Find the Matrix Representation

Discussion Overview

The discussion centers on finding the matrix representation of linear transformations between finite-dimensional vector spaces, specifically addressing two problems: determining if the image of a basis under a linear transformation is also a basis, and constructing a specific matrix given certain linear dependencies among its columns.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Homework-related

Main Points Raised

  • Some participants propose that if T is a one-to-one linear transformation from V to W, then the image of a basis of V under T, {T(v1), T(v2), ..., T(vn)}, is a basis for W, contingent on showing linear independence and spanning.
  • One participant suggests using the invertible matrix theorem to approach the problem of linear independence in the context of the transformation.
  • Another participant notes that the construction of the matrix A is incorrect due to a misunderstanding of dimensions, clarifying that the matrix presented is a 3x5 matrix rather than the required 8x5 matrix.
  • There is a discussion about the implications of linear independence, with one participant asserting that linearly independent subsets of V map to linearly independent subsets of W, provided T is injective.
  • A later reply describes a method for constructing the matrix representation of a linear transformation by applying T to each basis vector of the domain and expressing the results in terms of the basis of the codomain.

Areas of Agreement / Disagreement

Participants generally agree on the properties of linear transformations and the implications of linear independence, but there are differing approaches and some uncertainty regarding the specific construction of the matrix A and the proof of the basis property for the image of T.

Contextual Notes

Some assumptions about the properties of the linear transformation and the definitions of linear independence may not be fully articulated, and the discussion includes unresolved steps in the proof regarding the basis property.

student64
Messages
3
Reaction score
0
1. Let V and W be finite-dimensional vector spaces with dim(V) = dim(W). Let {v1, v2, ..., vn} be a basis for V. If T: V -> W is a one-to-one linear transformation, determine whether {T(v1), T(v2), ..., T(vn)} is a basis for W.

2. How do I get a matrix out of this: Let A be an 8x5 matrix with columns a1, a2, a3, a4, a5, where a1, a3, and a5 form a linearly independent set, a2 = 2*a1 + 3*a5, and a4 = a1 - a3 + 2*a5.

I have looked all over, and I have a start on each of these problems; any help would be much appreciated.

So far, for 1, I know that it is true by a theorem I found, but I am unsure how to prove it.

For 2, I made a matrix like this:

1 2 0 1 0
0 0 1 -1 0
0 3 0 2 1

I row-reduced it and concluded that the dimension of Nul A is 2, because the reduced form has 2 free variables.

If this is the wrong way to get the matrix A, how do I do it?

Thanks.
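To make the setup concrete, here is a minimal NumPy sketch of problem 2. The choice of a1, a3, a5 as standard basis vectors of R^8 is an assumption made purely for illustration; any linearly independent triple in R^8 would give the same rank and nullity.

```python
import numpy as np

# A minimal sketch of an 8x5 matrix satisfying the problem's conditions.
# Taking a1, a3, a5 to be standard basis vectors of R^8 is an assumption
# made for illustration; any linearly independent triple would do.
I8 = np.eye(8)
a1, a3, a5 = I8[:, 0], I8[:, 1], I8[:, 2]

# The dependencies stated in the problem:
a2 = 2 * a1 + 3 * a5
a4 = a1 - a3 + 2 * a5

A = np.column_stack([a1, a2, a3, a4, a5])  # shape (8, 5)

rank = np.linalg.matrix_rank(A)            # 3: the columns span <a1, a3, a5>
nullity = A.shape[1] - rank                # 2, by the rank-nullity theorem
```

The result dim Nul A = 2 agrees with the row reduction above: each of the two dependency relations contributes one free variable.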
 
Here's a hint for #1:

You will need to use the fact that T is one-to-one. You want to show that any w in W can be written as a linear combination of {T(v1), ..., T(vn)}. Well, if T is one-to-one (and hence, since dim(V) = dim(W) is finite, also onto by rank-nullity), what can you say about T^{-1}(w)? Moreover, any v (such that T(v) = w) can be written as a linear combination of v1 through vn. What happens when you evaluate T(v)?
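This hint's spanning argument can be written out explicitly. The surjectivity step is the part the hint leaves implicit: an injective linear map between finite-dimensional spaces of equal dimension is automatically onto.

```latex
% Spanning: given $w \in W$, surjectivity yields $v$ with $T(v) = w$.
% Expanding $v$ in the basis of $V$ and using linearity:
\[
  w = T(v) = T\Bigl(\sum_{i=1}^{n} c_i v_i\Bigr)
           = \sum_{i=1}^{n} c_i\, T(v_i),
\]
% so every $w \in W$ lies in the span of $T(v_1), \dots, T(v_n)$.
```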
 
student64 said:
2. How do I get a matrix out of this: Let A be an 8x5 matrix with columns a1, a2, a3, a4, a5, where a1, a3, and a5 form a linearly independent set, a2 = 2*a1 + 3*a5, and a4 = a1 - a3 + 2*a5.

For 2, I made a matrix like this:

1 2 0 1 0
0 0 1 -1 0
0 3 0 2 1

I row-reduced it and concluded that the dimension of Nul A is 2, because the reduced form has 2 free variables.

If this is the wrong way to get the matrix A, how do I do it?

Thanks.

You're constructing it right, but that's not an 8x5 matrix; it's 3x5 (sometimes I get confused about which are rows and which are columns).

For question 1, show that {T(v1), ..., T(vn)} is linearly independent; since dim(W) = n, a linearly independent set of n vectors in W is automatically a basis, and you're done.
 
How would I go about doing that?

Right now, I'm trying to find an answer using the invertible matrix theorem.
 
For 1: if T is an injective linear map from V to W, then linearly independent subsets of V always map to linearly independent subsets of W. Since you have a basis for V, it's linearly independent. All you really have to show is that the LI subset you get in W is actually a basis for W. But since dim(V) = dim(W) = n, you mapped n LI vectors to an LI subset with n vectors. Therefore, you have a basis.

If you have to, prove a lemma for the part that LI subsets of V map to LI subsets of W (you need the injectivity (1-1) of T for this part). It's pretty easy to show with a proof by contradiction if you get stuck.
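The lemma also admits a short direct proof, with no contradiction needed; injectivity enters exactly once, as ker T = {0}:

```latex
% Suppose $c_1 T(v_1) + \dots + c_n T(v_n) = 0$. By linearity,
\[
  T\Bigl(\sum_{i=1}^{n} c_i v_i\Bigr) = 0
  \;\Longrightarrow\;
  \sum_{i=1}^{n} c_i v_i \in \ker T = \{0\}
  \;\Longrightarrow\;
  \sum_{i=1}^{n} c_i v_i = 0,
\]
% and linear independence of $\{v_1, \dots, v_n\}$ forces every $c_i = 0$.
```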
 
Given a vector space U with basis \{u_1, u_2, \dots, u_n\}, a vector space V with basis \{v_1, v_2, \dots, v_n\}, and a linear transformation T from U to V, the standard way of writing T as a matrix (with respect to those bases) is to apply T to each basis vector u_1, u_2, \dots, u_n in turn. Applying T to u_i gives a vector in V, which can be written as a linear combination a_1 v_1 + a_2 v_2 + \dots + a_n v_n. Those coefficients a_1, a_2, \dots, a_n form the i-th column of the matrix, under the usual convention that the matrix acts on coordinate column vectors.
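This recipe can be checked numerically. The transformation T and the two bases below are made-up examples (assumptions, not taken from the thread); the point is that, with the column-vector convention, column i of the matrix holds the codomain-basis coordinates of T(u_i).

```python
import numpy as np

# Sketch of the recipe above. The map T and both bases are example
# choices (assumptions), not taken from the thread.
def T(x):
    """An example linear map R^2 -> R^2: T(x, y) = (x + 2y, 3y)."""
    return np.array([x[0] + 2 * x[1], 3 * x[1]])

BU = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # columns: domain basis u1, u2
BV = np.array([[1.0, 0.0],
               [1.0, 1.0]])   # columns: codomain basis v1, v2

# Column i of M holds the BV-coordinates of T(u_i):
# solve BV @ coords = T(u_i) for each domain basis vector.
M = np.column_stack([np.linalg.solve(BV, T(BU[:, i]))
                     for i in range(BU.shape[1])])
# Here M == [[1, 3], [-1, 0]].

# Sanity check: the coordinates of T(x) are M times the coordinates of x.
x = np.array([2.0, -1.0])
cx = np.linalg.solve(BU, x)    # coordinates of x in the domain basis
assert np.allclose(BV @ (M @ cx), T(x))
```

The final assertion is the defining property of the matrix representation: applying T and then taking coordinates agrees with taking coordinates and then multiplying by M.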
 
