Row reducing the matrix of a linear operator

*FaerieLight*
I'm having difficulty understanding the concepts presented in the following question.
I'm given a matrix,
\begin{bmatrix} 2 & 4 & 1 & 2 & 6 \\ 1 & 2 & 1 & 0 & 1 \\ -1 & -2 & -2 & 3 & 6 \\ 1 & 2 & -1 & 5 & 12 \end{bmatrix}, which is the matrix representation of a linear operator from R5 to R4.
The question asks me to find a basis of the image and the kernel of the map.

Row-reducing the matrix gives
\begin{bmatrix} 1 & 2 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} as the reduced row echelon form. I'm told that columns 1, 3 and 4 are linearly independent, and hence the corresponding columns of the original matrix form a basis of the image of A. This is what I don't understand. How do I know that these columns are linearly independent? I suppose it has something to do with the fact that these columns are the only ones with a single 1 entry, while all the rest of the entries, above and below, are 0. But why should this feature necessarily mean that the columns are linearly independent?
After determining the linearly independent columns, why should the corresponding columns of the original matrix correspond to a basis of the image of A?

Also, how would I find a basis for the kernel from the row-reduced echelon form?

Thanks very much.
 
*FaerieLight* said:
I'm having difficulty understanding the concepts presented in the following question.
I'm given a matrix,
\begin{bmatrix} 2 & 4 & 1 & 2 & 6 \\ 1 & 2 & 1 & 0 & 1 \\ -1 & -2 & -2 & 3 & 6 \\ 1 & 2 & -1 & 5 & 12 \end{bmatrix}, which is the matrix representation of a linear operator from R5 to R4.
The question asks me to find a basis of the image and the kernel of the map.

Row-reducing the matrix gives
\begin{bmatrix} 1 & 2 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} as the reduced row echelon form. I'm told that columns 1, 3 and 4 are linearly independent, and hence the corresponding columns of the original matrix form a basis of the image of A. This is what I don't understand. How do I know that these columns are linearly independent? I suppose it has something to do with the fact that these columns are the only ones with a single 1 entry, while all the rest of the entries, above and below, are 0. But why should this feature necessarily mean that the columns are linearly independent?
Because that means your vectors are of the form
v_1= \begin{bmatrix}1 \\ a_1 \\ a_2 \\ ...\end{bmatrix}
v_2= \begin{bmatrix}0 \\ 1 \\ b_2 \\ ...\end{bmatrix}
v_3= \begin{bmatrix}0 \\ 0 \\ 1 \\ ...\end{bmatrix}
etc.

Now, suppose we have some linear combination \alpha_1v_1+ \alpha_2v_2+ \alpha_3v_3+ ...= 0
That gives
\begin{bmatrix}\alpha_1 \\ \alpha_1a_1+ \alpha_2 \\ \alpha_1a_2+ \alpha_2b_2+ \alpha_3\\ ...\end{bmatrix}= \begin{bmatrix} 0 \\ 0 \\ 0 \\ ...\end{bmatrix}
The first component gives \alpha_1= 0. The second component gives \alpha_1a_1+ \alpha_2= 0 and since \alpha_1= 0, \alpha_2= 0. The third component gives \alpha_1a_2+ \alpha_2b_2+ \alpha_3= 0. Since \alpha_1= 0 and \alpha_2= 0, \alpha_3= 0, etc. That is, all scalars in the linear combination are 0, which means the vectors are independent.
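With the reduced matrix above this is especially transparent: columns 1, 3 and 4 of the RREF are just the first three standard basis vectors of R4,
e_1= \begin{bmatrix}1 \\ 0 \\ 0 \\ 0\end{bmatrix},\qquad e_2= \begin{bmatrix}0 \\ 1 \\ 0 \\ 0\end{bmatrix},\qquad e_3= \begin{bmatrix}0 \\ 0 \\ 1 \\ 0\end{bmatrix}
so \alpha_1e_1+ \alpha_2e_2+ \alpha_3e_3= \begin{bmatrix}\alpha_1 \\ \alpha_2 \\ \alpha_3 \\ 0\end{bmatrix}= 0 forces \alpha_1= \alpha_2= \alpha_3= 0 at once.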

After determining the linearly independent columns, why should the corresponding columns of the original matrix correspond to a basis of the image of A?
Imagine taking A times each of the vectors <1, 0, 0, 0, 0>, <0, 1, 0, 0, 0>, <0, 0, 1, 0, 0>, <0, 0, 0, 1, 0>, and <0, 0, 0, 0, 1>, the "standard basis" for R5, in turn. You would get, of course, just the 5 columns of the matrix, so the columns of A span the image. Since row reduction does not change the dependence relations among the columns, and the first, third, and fourth columns of the row-reduced form are independent, the first, third, and fourth columns of the original matrix are independent as well, and we know they lie in the image. The other two, the second and fifth columns, are not independent of these: A maps <0, 1, 0, 0, 0> and <0, 0, 0, 0, 1> into combinations of the other three columns. That is, those three columns span the image as well as being independent, and so are a basis for the image.
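To make "does not change the dependence relations" concrete for this matrix: reading the entries of columns 2 and 5 of the reduced form gives c_2= 2c_1 and c_5= -c_1+ 2c_3+ 3c_4, where c_i is the i-th column of the original matrix, and indeed
\begin{bmatrix}4 \\ 2 \\ -2 \\ 2\end{bmatrix}= 2\begin{bmatrix}2 \\ 1 \\ -1 \\ 1\end{bmatrix},\qquad \begin{bmatrix}6 \\ 1 \\ 6 \\ 12\end{bmatrix}= -\begin{bmatrix}2 \\ 1 \\ -1 \\ 1\end{bmatrix}+ 2\begin{bmatrix}1 \\ 1 \\ -2 \\ -1\end{bmatrix}+ 3\begin{bmatrix}2 \\ 0 \\ 3 \\ 5\end{bmatrix}
so the second and fifth columns add nothing to the span of columns 1, 3 and 4.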

Also, how would I find a basis for the kernel from the row-reduced echelon form?

Thanks very much.
A vector, v, is in the kernel of A if and only if Av= 0. Imagine solving the equation by row reducing the augmented matrix. The added column is all 0s so no matter what the row operations are, the last column will remain all 0s. That is, row reducing the augmented matrix for your example will give
\left[\begin{array}{ccccc|c}1 & 2 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 & 3 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]
If we call a vector in the domain <u, v, x, y, z>, these rows say that we must have u+ 2v- z= 0, x+ 2z= 0, and y+ 3z= 0. From the second and third equations, x= -2z and y= -3z. From the first equation, z= u+ 2v, so that x= -2(u+ 2v)= -2u- 4v and y= -3(u+ 2v)= -3u- 6v. That is, we can write any vector in the kernel as <u, v, x, y, z>= <u, v, -2u- 4v, -3u- 6v, u+ 2v>= <u, 0, -2u, -3u, u>+ <0, v, -4v, -6v, 2v>= u<1, 0, -2, -3, 1>+ v<0, 1, -4, -6, 2>, which tells us that any vector in the kernel can be written as a linear combination of <1, 0, -2, -3, 1> and <0, 1, -4, -6, 2>. And those are clearly independent because of the "1, 0" and "0, 1" in their first two components.
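If you want a quick machine check of these computations, here is a short SymPy sketch (assuming SymPy is available; nullspace() parametrizes by the free variables x2 and x5, so it may return a different, but equivalent, basis of the same kernel than the u, v parametrization above):

from sympy import Matrix

# the matrix from the original post
A = Matrix([[ 2,  4,  1, 2,  6],
            [ 1,  2,  1, 0,  1],
            [-1, -2, -2, 3,  6],
            [ 1,  2, -1, 5, 12]])

R, pivots = A.rref()       # reduced row echelon form and pivot column indices
print(pivots)              # (0, 2, 3), i.e. columns 1, 3, 4 in 1-based numbering
print(A.columnspace())     # basis of the image: columns 1, 3, 4 of A
print(A.nullspace())       # basis of the kernel

# direct check that the two kernel vectors found above map to zero
print(A * Matrix([1, 0, -2, -3, 1]))   # zero vector
print(A * Matrix([0, 1, -4, -6, 2]))   # zero vector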
 
Thanks for that! It was a very helpful explanation.

I have another question. What exactly does row reducing a matrix do to it? Why is it that we can gain so much information about the matrix and the linear operator it represents from the row reduced form?

Thanks
 
Every row operation is the same as multiplying on the left by an "elementary matrix": the matrix you get by applying that row operation to the identity matrix. So if A can be row reduced to B, then there exists a sequence of elementary matrices P1, P2, P3, ..., Pn so that Pn...P3P2P1A= PA= B. Since every row operation is invertible, each Pi is invertible, and so is their product P. It follows, for example, that A is invertible if and only if B is. In particular, if B is the identity matrix, so that Pn...P3P2P1A= I, then Pn...P3P2P1 is the inverse matrix of A. Since Pn...P3P2P1= Pn...P3P2P1I, applying the same row operations that reduce A to the identity matrix to the identity matrix itself gives A^{-1}. The row-reduced form therefore carries all the information about A that is preserved by an invertible change on the left: the row space, the rank, the dependence relations among the columns, and the solution set of Ax= 0.
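For instance, the row swap R1 <-> R2 and the operation R2 -> R2 - 2R1 on a matrix with 4 rows correspond to left-multiplication by
\begin{bmatrix}0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix}1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}
respectively; each is just the identity matrix with that row operation applied to it. Multiplying together the elementary matrices for all the steps that take the 4x5 matrix in this thread to its reduced row echelon form gives the invertible P with PA = B.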
 