Linear Transformation (Image, Kernel, Basis, Dimension)

  • Thread starter: says
  • #1
Mod note: Moved from Precalc section
1. Homework Statement

Given l : R3 → R3, l(x1, x2, x3) = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3, x1 + x2 + x3), find Ker(l), Im(l), their bases and dimensions.

My language in explaining my steps is a little sloppy, but I'm trying to understand the process and put it in terms that I understand.

Homework Equations


ker(l) Ax=0

The Attempt at a Solution


Step 1: Find the matrix associated to this transformation using the standard basis.

l(1,0,0) = (1,4,1)
l(0,1,0) = (2,5,1)
l(0,0,1) = (3,6,1)

Im(l) =

[1] [4] [1]
[2] , [5] , [1]
[3] [6] [1]

To find Ker(l) = Ax=0. Put Im(l) into an augmented matrix and set it =0

[1 4 1 |0]
[2 5 1 |0]
[3 6 1 |0]

Reduced row echelon form =

[1 0 -1/3 |0]
[0 1 1/3 |0]
[0 0 0 |0]

[x1]      [1]      [0]      [-1/3]
[x2] = x1 [0] + x2 [1] + x3 [ 1/3]
[x3]      [0]      [0]      [ 0  ]

ker(l) =

[1] [0] [-1/3]
[0] , [1] , [1/3]
[0] [0] [0]

Basis(l) = minimum number of vectors that span the subspace

[1] [0]
[0] , [1]
[0] [0]

Dimension = number of vectors that span the subspace = 2

I hope I've done this right.
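A quick way to double-check row reductions like this is a CAS. A small sympy sketch (an illustration of my own, with my variable names): it reduces both the matrix as set up above, which has the images of the basis vectors as rows (i.e. the transpose of the standard matrix), and the standard matrix of l, which has them as columns.

```python
from sympy import Matrix, Rational

# Matrix as set up in this post: images of e1, e2, e3 as ROWS
M_post = Matrix([[1, 4, 1],
                 [2, 5, 1],
                 [3, 6, 1]])

# Standard matrix of l: images of e1, e2, e3 as COLUMNS
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [1, 1, 1]])

print(M_post.rref()[0])  # rref of the matrix as written above
print(A.rref()[0])       # rref of the standard matrix of l
```

The two reductions come out different, which is the crux of the discussion below about which matrix to reduce.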
 

Answers and Replies

  • #2

Homework Statement


Given l : R3 → R3, l(x1, x2, x3) = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3, x1 + x2 + x3), find Ker(l), Im(l), their bases and dimensions.

My language in explaining my steps is a little sloppy, but I'm trying to understand the process and put it in terms that I understand.

Homework Equations


ker(l) Ax=0
ker(L) = {x in R3 such that Ax = 0}
says said:

The Attempt at a Solution


Step 1: Find the matrix associated to this transformation using the standard basis.

l(1,0,0) = (1,4,1)
l(0,1,0) = (2,5,1)
l(0,0,1) = (3,6,1)

Im(l) =

[1] [4] [1]
[2] , [5] , [1]
[3] [6] [1]
It doesn't look like you actually did anything here. The image of L, Image(L), is the set of all possible outputs of the transformation. IOW, Image(L) = {y in R3 such that y = Ax, for some x in R3}
says said:
To find Ker(l) = Ax=0.
No. The above says that Ker(L) = 0, which may or may not be true. See above, under your relevant equations, for the definition of Ker(L).
Put Im(l) into an augmented matrix and set it =0

[1 4 1 |0]
[2 5 1 |0]
[3 6 1 |0]
When you're finding the kernel, you don't need an augmented matrix. The rightmost column starts off with all zeroes, and can't possibly change, so there's no point in dragging it along. It doesn't hurt to have it, but it isn't necessary here (in finding the kernel).
says said:
Reduced row echelon form =

[1 0 -1/3 |0]
[0 1 1/3 |0]
[0 0 0 |0]
I get something different.
says said:
[x1]      [1]      [0]      [-1/3]
[x2] = x1 [0] + x2 [1] + x3 [ 1/3]
[x3]      [0]      [0]      [ 0  ]
This isn't how it works. Based on your last augmented matrix (which is incorrect), you have
x1 = 1/3 x3
x2 = -1/3 x3
x3 = x3

As a check on your work, if x = <1/3, -1/3, 1>^T, is Ax = 0?
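That sanity check takes two lines in numpy; a minimal sketch (the array names are mine):

```python
import numpy as np

# Standard matrix of l: images of e1, e2, e3 as columns
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [1, 1, 1]], dtype=float)

# Candidate kernel vector read off from the (incorrect) rref above
x = np.array([1/3, -1/3, 1.0])

print(A @ x)  # not the zero vector, so x is NOT in ker(l)
```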
says said:
ker(l) =

[1] [0] [-1/3]
[0] , [1] , [1/3]
[0] [0] [0]
No.
says said:
Basis(l) = minimum number of vectors that span the subspace

[1] [0]
[0] , [1]
[0] [0]
This is not a basis for the kernel. As it turns out, dim(ker(L)) = 1.
says said:
Dimension = number of vectors that span the subspace = 2

I hope I've done this right.
 
  • #3
says
I'm a bit confused on how I am supposed to find the Im(l)
 
  • #4
Ray Vickson
Mod note: Moved from Precalc section
1. Homework Statement

Given l : R3 → R3, l(x1, x2, x3) = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3, x1 + x2 + x3), find Ker(l), Im(l), their bases and dimensions.

My language in explaining my steps is a little sloppy, but I'm trying to understand the process and put it in terms that I understand.

Homework Equations


ker(l) Ax=0

The Attempt at a Solution


Step 1: Find the matrix associated to this transformation using the standard basis.

l(1,0,0) = (1,4,1)
l(0,1,0) = (2,5,1)
l(0,0,1) = (3,6,1)


I hope I've done this right.

Trying to use matrices and matrix methods is almost a waste of time in this problem. To find the kernel, you just need to determine the dimensionality of the solution space to the linear system
[tex]\begin{array}{l} x_1 + 2x_2 + 3x_3 = 0\\
4x_1 + 5x_2 + 6x_3 = 0\\
x_1 + x_2 + x_3 = 0
\end{array} [/tex]
More generally (since it also helps with other parts of the problem) you can consider the general system
[tex]\begin{array}{l} x_1 + 2x_2 + 3x_3 = r_1\\
4x_1 + 5x_2 + 6x_3 = r_2\\
x_1 + x_2 + x_3 = r_3
\end{array} [/tex]

It is just as easy to deal with these systems directly and forget about matrices. For example, in the second system the third equation implies that ##x_3 = r_3 - x_1 - x_2##, and substituting that into the first two equations yields two equations in the two unknowns ##x_1, x_2##. When ##r_1 = r_2 = r_3 = 0## you can continue the process, using the (new) first equation to solve for ##x_2 ## in terms of ##x_1##, then see what that gives you for the (new) second equation. When the ##r## are not all zero, you will get a result for the third equation that makes sense only if ##r_1, r_2, r_3## are related in some way that you can determine from the resulting formulas. Figuring out what ##r##-values are allowed will tell you the image of ##l##. Figuring out what the solution looks like when all ##r##'s are zero will tell you what is the kernel of ##l##.
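Ray Vickson's elimination argument can also be mirrored symbolically. A hedged sympy sketch (my own aside, not part of his post): the relation the r's must satisfy is exactly the condition that (r1, r2, r3) be orthogonal to a left null vector of the coefficient matrix.

```python
from sympy import Matrix, symbols

r1, r2, r3 = symbols('r1 r2 r3')
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [1, 1, 1]])

# A left null vector n (with n.T * A = 0) gives the consistency
# condition n . (r1, r2, r3) = 0 on the right-hand sides.
n = A.T.nullspace()[0]
condition = (n.T * Matrix([r1, r2, r3]))[0]
print(condition)  # a scalar multiple of r1 - r2 + 3*r3
```

The r-values for which the system is solvable form the image of l, and each of the three outputs l(e1) = (1,4,1), l(e2) = (2,5,1), l(e3) = (3,6,1) does satisfy r1 - r2 + 3r3 = 0.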
 
  • #5
says
l(x1, x2, x3) = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3, x1 + x2 + x3)

x1, x2, x3 are vectors?

l(x1) = x1 + 2x2 + 3x3
l(x2) = 4x1 + 5x2 + 6x3
l(x3) = 6x3, x1 + x2 + x3

x1 + 2x2 + 3x3 = 0
4x1 + 5x2 + 6x3 = 0
x1 + x2 + x3 = 0

x1 = x3
x2 = -2x3
x3 = free variable

reduced row echelon form:

[1 0 -1]
[0 1 2 ]
[0 0 0 ]

Dimension = 2

ker(l) =

[1] [0]
[0] , [1]
[0] [0]
 
  • #6
l(x1, x2, x3) = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3, x1 + x2 + x3)

x1, x2, x3 are vectors?
No, they are components of a vector in R3.
says said:
l(x1) = x1 + 2x2 + 3x3
l(x2) = 4x1 + 5x2 + 6x3
l(x3) = 6x3, x1 + x2 + x3
The above is not helpful.
says said:
x1 + 2x2 + 3x3 = 0
4x1 + 5x2 + 6x3 = 0
x1 + x2 + x3 = 0

x1 = x3
x2 = -2x3
x3 = free variable
Edit: This is correct. See my comment below your reduced matrix.
Also, this says that ##\begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix} = x_3\begin{bmatrix} 1 \\ -2 \\ 1\end{bmatrix}##
Is A times the last vector above equal to the zero vector? That's a sanity check you should always do in this sort of work.
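This sanity check does come out clean for the vector above; a small numpy sketch (my illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [1, 1, 1]])
v = np.array([1, -2, 1])  # candidate kernel direction from the equations above

print(A @ v)  # the zero vector, so v really is in ker(l)
```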
says said:
reduced row echelon form:

[1 0 -1]
[0 1 2 ]
[0 0 0 ]
Edit: This is correct. (My earlier comment that there was an error in the first row was a mistake on my end.)

Also, the equations you have above should come from this row-reduced matrix, so I don't understand why you wrote the equations first, and then the reduced matrix.
says said:
Dimension = 2
No. The equations you have above suggest that dim(Ker(L)) = 1, not 2.
says said:
ker(l) =

[1] [0]
[0] , [1]
[0] [0]
 
  • #7
says
I learn much faster and better if I know the process before definitions because I get really bogged down with terminology and new definitions. I think in my rush to get the process down pat I've got a bit lost with this problem.

A =
x1 + 2x2 + 3x3
4x1 + 5x2 + 6x3
x1 + x2 + x3

l (x1,x2,x3) = A (x1,x2,x3) =

[ 1 2 3 ] [x1]
[ 4 5 6 ] [x2]
[ 1 1 1 ] [x3]

[1] [2] [3]
[4] x1 + [5] x2 + [6] x3
[1] [1] [1]

Does this mean Im(l) =

[1] [2] [3]
[4] , [5] , [6]
[1] [1] [1]

It looks similar to my first answer, only with a different orientation.
 
  • #8
I learn much faster and better if I know the process before definitions because I get really bogged down with terminology and new definitions.
This is really the wrong order. How can a process make any sense if you don't understand the basic definitions behind the process?
If you don't understand the basic terms, such as ker(L), Im(L), and so on, you're just blindly applying some steps
says said:
I think in my rush to get the process down pat I've got a bit lost with this problem.
Yes. Instead of trying to use rote memorization of some process, focus on understanding what the terms mean.
That's why I corrected what you wrote about Ker(L) and Im(L) in a previous post.

This is tied into what Ray Vickson said, as well, that for this problem you don't really need to use matrices.
says said:
A =
x1 + 2x2 + 3x3
4x1 + 5x2 + 6x3
x1 + x2 + x3

l (x1,x2,x3) = A (x1,x2,x3) =

[ 1 2 3 ] [x1]
[ 4 5 6 ] [x2]
[ 1 1 1 ] [x3]

[1] [2] [3]
[4] x1 + [5] x2 + [6] x3
[1] [1] [1]

Does this mean Im(l) =

[1] [2] [3]
[4] , [5] , [6]
[1] [1] [1]
No.
says said:
It looks similar to my first answer, only with a different orientation.

I would start by finding Ker(L), which means solving the equations
x1 + 2x2 + 3x3 = 0
4x1 + 5x2 + 6x3 = 0
x1 + x2 + x3 = 0

You can do this by directly working with the equations, or you can row-reduce this matrix:
##\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 1 & 1 & 1\end{bmatrix}##
You almost had it in a previous post. Reread my post where I talk about this, and see if you can find Ker(L).
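For anyone following along, sympy can confirm both the row reduction and the kernel basis in one go (an added check of mine, not part of the original exchange):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [1, 1, 1]])

print(A.rref()[0])    # the row-reduced matrix
print(A.nullspace())  # a basis for ker(l): one vector, so dim(ker(l)) = 1
```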
 
  • #9
says
I get a reduced row echelon form matrix:

[ 1 0 -1]
[ 0 1 2]
[ 0 0 0]


so ker(l) =
[1] [0] [-1]
[0] , [1] , [2]
[0] [0] [0]
 
  • #10
I get a reduced row echelon form matrix:

[ 1 0 -1]
[ 0 1 2]
[ 0 0 0]
Yes, this is correct (and I had a sign error in my work earlier).

The equations that this matrix represents are just as you wrote before:
x1 = x3
x2 = -2x3
x3 = free variable
In other words,
##\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} 1 \\ -2 \\ 1\end{bmatrix}##
This should be a pretty good hint about Ker(L) and its size (dimension).

says said:
so ker(l) =
[1] [0] [-1]
[0] , [1] , [2]
[0] [0] [0]
 
  • #11
says
Yep!

X1=
[1]
[0]
[1]

X2=
[0]
[1]
[-2]

X3 =
[1]
[-2]
[1]
 
  • #12
says
ker(l)=

[1] [0] [1]
[0] , [1] , [-2]
[1] [-2] [1]
 
  • #13
says
Only one free variable, so only one basis vector. That is found by setting x3 = 1.

ker(l) =

[1]
[-2]
[1]

Checked it by plugging in values into original transformation and I get 0s
 
  • #14
Yep!

X1=
[1]
[0]
[1]

X2=
[0]
[1]
[-2]

X3 =
[1]
[-2]
[1]
No, I don't know what you're doing here.
ker(l)=

[1] [0] [1]
[0] , [1] , [-2]
[1] [-2] [1]
Or here, either.
Only one free variable, so only one basis vector. That is found by setting x3 = 1.

ker(l) =

[1]
[-2]
[1]

Checked it by plugging in values into original transformation and I get 0s
Yes! The kernel is the one-dimensional vector space spanned by this vector. IOW, the kernel is a line through the origin in the direction of <1, -2, 1>.

Next up is figuring out what the Im(L) is.
By definition, which I'm hoping you're starting to realize is something important, Im(L) = {y in R3, such that Ax = y, for some x in R3}.
To find Im(L) you want to find out if there is some vector x that produces an arbitrary y (denoted as <a, b, c> below).

##\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 1 & 1 & 1\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}##
You can set this up as an augmented matrix whose fourth column is <a, b, c>^T. Your textbook should have an example or two of how this works.
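An added aside: rank-nullity already tells you how big Im(L) must be, and the count is easy to verify numerically (a sketch of mine, not part of the thread):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [1, 1, 1]])

rank = np.linalg.matrix_rank(A)
print(rank)               # 2 = dim(Im(L))
print(A.shape[1] - rank)  # 1 = dim(ker(L)), by rank-nullity
```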
 
  • #15
says
My textbook doesn't even use augmented matrices -- it's easily the worst formatted piece of text I've ever read, but I digress...

The definition you gave of Im(l) is a little vague. If Ax = y, and I multiply the matrix A by (x1, x2, x3) and that equals (a, b, c), then I get the three vectors below, which was one of my original answers.

[1] [4] [1]
[2] , [5] , [1]
[3] [6] [1]
 
  • #16
My textbook doesn't even use augmented matrices -- it's easily the worst formatted piece of text I've ever read, but I digress...

The definition you gave of Im(l) is a little vague. If Ax = y, and I multiply the matrix A by (x1, x2, x3) and that equals (a, b, c), then I get the three vectors below, which was one of my original answers.
In the first post of this thread you set up an augmented matrix, so you must have at least heard about them, even if your textbook doesn't use them.

What part of the definition I gave are you uncertain about?
says said:
[1] [4] [1]
[2] , [5] , [1]
[3] [6] [1]
I don't know what you're doing here.
##\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 1 & 1 & 1\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}##

The augmented matrix looks like this:
##\begin{bmatrix} 1 & 2 & 3 & | & a \\ 4 & 5 & 6 & | & b\\ 1 & 1 & 1 & | & c\end{bmatrix}##
Row reduce this augmented matrix, and the final matrix gives you a set of equations that show which vectors ##<x_1, x_2, x_3>## have <a, b, c> as their image. Again, your textbook must have at least one example of how this works.
 
  • #17
says
I row reduced, not sure if I did it right. I got a set of equations on the RHS of the equation but I'm not entirely sure what they mean.
I got this row reduced echelon form matrix on the LHS
[1 0 -1]
[0 1 2]
[0 0 0 ]

Equations (from top to bottom) on the right hand side of this matrix are as follows

(2b-5a)/3
(-b+4a)/3
c-a-(b/3)+(4a/3)
 
  • #18
So combining what you showed, the final augmented matrix looks like this:
##\begin{bmatrix} 1 & 0 & -1 & | & -(5/3)a + (2/3)b\\ 0 & 1 & 2 & | & (4/3)a - (1/3)b\\ 0 & 0 & 0 & | & -(1/3)a + (1/3)b -c\end{bmatrix}##

Since the third row on the left side of the augmented matrix is all zeroes, the corresponding equation is
##0x_1 + 0x_2 + 0x_3 = -(1/3)a + (1/3)b -c##
That equation gives you a condition on the possible values of a, b, and c. And if you play with it a bit, it will give you a basis for Im(L).
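One way to see the condition in action: every output of Ax must satisfy -(1/3)a + (1/3)b - c = 0, and in particular the columns of A (the outputs for the standard basis inputs) should. A numpy sketch (my illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [1, 1, 1]], dtype=float)

# Each column of A is an output of the transformation, so each
# should satisfy the condition -(1/3)a + (1/3)b - c = 0.
for a, b, c in A.T:  # iterate over columns of A
    print(-a/3 + b/3 - c)  # 0 for every column
```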
 
  • #19
says
If I set it up in a similar way to what I did for solving the kernel

[x1]     [-5/3]     [ 2/3]     [0]
[x2] = a [ 4/3] + b [-1/3] + c [0]
[x3]     [ 1/3]     [-1/3]     [1]

c = -(1/3)a + (1/3)b

setting c to 1

-(1/3)6 + (1/3)21 = 1

-(5/3)6+(2/3)21 + c = -10+14= 4

(4/3)6 - (1/3)21 = 8-7 = 1

Basis for Im(l) =
[4]
[1]
[1]
 
  • #20
If I set it up in a similar way to what I did for solving the kernel

[x1]     [-5/3]     [ 2/3]     [0]
[x2] = a [ 4/3] + b [-1/3] + c [0]
[x3]     [ 1/3]     [-1/3]     [1]

c = -(1/3)a + (1/3)b

setting c to 1

-(1/3)6 + (1/3)21 = 1

-(5/3)6+(2/3)21 + c = -10+14= 4

(4/3)6 - (1/3)21 = 8-7 = 1

Basis for Im(l) =
[4]
[1]
[1]
No.

As it turns out, and you might not know this yet, a basis for Im(L) will have to have two vectors.

This equation will tell you everything you need to know about Im(L):
##0x_1 + 0x_2 + 0x_3 = -(1/3)a + (1/3)b -c##

What conditions does it place on a, b, and c? These conditions can give you an idea of the possible outputs of Ax.
 
  • #21
says
what about the Im(l)?
 
  • #22
what about the Im(l)?
Im(L) is the set of vectors y in R3 such that Ax = y, for some x in R3.

In the previous posts, y = <a, b, c>^T.

If there are no conditions on a, b, and c (and thus on y), then Im(L) is the entire space R3. But, since Ker(L) is of dimension 1, and since there is a condition on a, b, and c, dim(Im(L)) < 3.

Look at the equation I gave in my last post -- from it you can get a basis for Im(L).
 
  • #23
says
Im(l) span =

[1] [0]
[4] , [1]
[1] [1/3]
 
  • #24
says
The column space of the matrix of a linear transformation is equal to the image of the linear transformation.

So if I take the transpose of A and put it in rref, the nonzero rows will equal column vectors, which span Im(l).
 
  • #25
Im(l) span =

[1] [0]
[4] , [1]
[1] [1/3]
I got a different set of vectors, but yours are also correct.

What I did was look at that last line of the matrix, which said ##0 = -1/3 a + 1/3 b - c##
Or, a - b + 3c = 0
So
a = b - 3c
b = b
c = c
So ##\begin{bmatrix} a \\ b \\ c\end{bmatrix} = b\begin{bmatrix} 1 \\ 1 \\ 0\end{bmatrix} + c\begin{bmatrix}-3 \\ 0 \\ 1 \end{bmatrix}##

It turns out that your vectors and mine define the same plane in R3. I checked this by taking the cross product of your vectors, and the cross product of my vectors. The two normals came out parallel (scalar multiples of each other), which says that the normal to the plane of your vectors is the same direction as the normal to the plane of my vectors. Therefore... we're talking about the same plane in R3.
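That cross-product comparison is easy to reproduce; a numpy sketch (my own aside — the two normals come out as scalar multiples of each other, so their cross product vanishes):

```python
import numpy as np

# says' basis for Im(l), and the other basis proposed above
u1, u2 = np.array([1., 4., 1.]), np.array([0., 1., 1/3])
v1, v2 = np.array([1., 1., 0.]), np.array([-3., 0., 1.])

n_u = np.cross(u1, u2)  # normal to the plane spanned by u1, u2
n_v = np.cross(v1, v2)  # normal to the plane spanned by v1, v2

# Parallel normals => same plane through the origin.
print(np.cross(n_u, n_v))  # zero vector (up to rounding)
```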
 
