# Homework Help: Linear Algebra: Basis vs basis of row space vs basis of column space

1. Jul 19, 2013

### twinkerules

In my linear algebra class we previously studied how to find a basis, and I had no issues with that. Now we are studying the basis of a row space and the basis of a column space, and I'm struggling to understand the methods used in the textbook. The textbook uses different methods to find these bases and doesn't explain why, or even how to do them properly. The textbook in question is Elementary Linear Algebra by Kolman (9th ed, section 4.9). I've read the chapter at least three times and can't make sense of why they do things certain ways. I decided to attempt some problems and was doing OK until the following two. I was able to solve the first problem listed below, but I got the second one wrong at first and only succeeded on a second attempt.

1. The problem statement, all variables and given/known data

Problem number one: Find a basis for the row space of A consisting of vectors that (a) are not necessarily row vectors of A; and (b) are row vectors of A.

A = $$\begin{bmatrix} 1 & 2 & -1\\ 1 & 9 & -1\\ -3 & 8 & 3\\ -2 & 3 & 2 \end{bmatrix}$$

I was able to solve this one by putting the matrix into reduced row echelon form (rref) and taking the nonzero rows as the basis.
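(For anyone checking this procedure later: it can be verified with a short script. This is just a sketch, not from the textbook; the `rref` helper below is a hand-rolled reduction over exact fractions.)

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over exact fractions; returns (matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue                              # no pivot in this column
        M[r], M[pr] = M[pr], M[r]                 # swap a pivot row into place
        M[r] = [x / M[r][c] for x in M[r]]        # scale the pivot entry to 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:           # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, -1], [1, 9, -1], [-3, 8, 3], [-2, 3, 2]]
R, _ = rref(A)
# The nonzero rows of rref(A) form a basis of the row space: (1, 0, -1) and (0, 1, 0).
basis = [row for row in R if any(row)]
```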

Problem number two: Find a basis for the column space of A consisting of vectors that (a) are not necessarily column vectors of A; and (b) are column vectors of A.

A = $$\begin{bmatrix} 1 & -2 & 7 & 0\\ 1 & -1 & 4 & 0\\ 3 & 2 & -3 & 5\\ 2 & 1 & -1 & 3 \end{bmatrix}$$

2. Attempt at a solution.

For this one I tried doing rref but got the wrong answer. Instead, I transposed the matrix given above, did rref, and then got the correct answer by transposing the resulting rows back into columns, as shown below.

$$\begin{bmatrix} 1 & 1 & 3 & 2\\ -2 & -1 & 2 & 1\\ 7 & 4 & -3 & -1\\ 0 & 0 & 5 & 3 \end{bmatrix} \xrightarrow{\text{rref}} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & \frac{1}{5}\\ 0 & 0 & 1 & \frac{3}{5}\\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Answer: $$\begin{bmatrix} 1\\0\\0\\0 \end{bmatrix}, \begin{bmatrix} 0\\1\\0\\ \frac{1}{5} \end{bmatrix}, \begin{bmatrix} 0\\0\\1\\ \frac{3}{5} \end{bmatrix}$$

Can you please explain why the column space problem requires transposing the matrix before doing rref, and why the answer consists of the rows of the result, which then need to be transposed back into columns? I can't find a pattern or a reason for the difference between the procedures. I listed problem one above, even though I got it right, because I don't feel confident that I know why it was solved the way it was. Could you please contrast the different procedures and explain how to know which one to use? I apologize in advance for any formatting errors; I have never used TeX before today. Thank you kindly.

2. Jul 19, 2013

### Zondrina

Well first, notice what happened when you transposed the matrix: the rows became the columns and the columns became the rows.

In the first problem you were asked for a basis of the row space of $A$, so you didn't transpose anything and went straight through the row reduction; the rows stayed rows and the columns stayed columns. That is why you did not have to transpose your answer to get the correct one.

For the second problem, you DID transpose $A$, and you really wound up row reducing $A^t$; the nonzero rows you got form a basis for the row space of $A^t$, written as rows, rather than giving you $C(A)$ directly.

Using this useful property though ( I would recommend proving this one ) :

$(A^t)^t = A$

the columns of $A$ are exactly the rows of $A^t$, so by transposing those basis rows back into columns you are able to retrieve a basis for $C(A)$.
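A quick numerical check of this on the matrix from the second problem (a sketch using a small hand-rolled exact-arithmetic `rref`, not a library routine): row reduce $A^t$ and transpose the nonzero rows back into columns.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over exact fractions; returns (matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, -2, 7, 0], [1, -1, 4, 0], [3, 2, -3, 5], [2, 1, -1, 3]]
At = [list(col) for col in zip(*A)]        # transpose: rows of A^t are the columns of A
R, _ = rref(At)
# Nonzero rows of rref(A^t), read back as column vectors, give the basis of C(A):
# (1, 0, 0, 0), (0, 1, 0, 1/5), (0, 0, 1, 3/5)
basis_cols = [row for row in R if any(row)]
```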

3. Jul 19, 2013

### voko

When you do row reduction, you treat the rows as vectors, adding scaled rows to one another; so the nonzero rows of the end result are linear combinations of the original rows, span the same row space, and are manifestly independent, which is why they form a basis.
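This can be made concrete: augment $A$ with the identity before reducing, and the right block records exactly which linear combination of the original rows produced each rref row. A sketch (hand-rolled exact rref, with pivoting restricted to the columns of $A$; the names here are my own, not from the thread):

```python
from fractions import Fraction

def rref(rows, ncols=None):
    """Reduced row echelon form over exact fractions, pivoting only in the first ncols columns."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(ncols if ncols is not None else len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[1, 2, -1], [1, 9, -1], [-3, 8, 3], [-2, 3, 2]]
m, n = len(A), len(A[0])
# Row reduce [A | I]: every row operation applied to A is mirrored on I.
aug = [row + [1 if i == j else 0 for j in range(m)] for i, row in enumerate(A)]
M = rref(aug, ncols=n)
R = [row[:n] for row in M]    # rref of A
E = [row[n:] for row in M]    # E records the row operations, so E * A == R
EA = [[sum(E[i][k] * A[k][j] for k in range(m)) for j in range(n)] for i in range(m)]
assert EA == R                # each rref row is a linear combination of the original rows
```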

Now, what does that mean w.r.t. column vectors?

4. Jul 19, 2013

### twinkerules

To Zondrina:

In regard to your answer: would it be OK then, any time a problem asks for a basis of a row space, to perform only row operations to get the answer? And likewise, whenever asked for a basis of a column space, to perform only column operations? In the case of the column space, should this be equivalent to transposing the matrix and then doing only row operations? Thank you.

Last edited: Jul 19, 2013
5. Jul 19, 2013

### twinkerules

I think my problem is that I wasn't making the connection that I was doing row operations. So when I am working with a column space, I need to transpose first, so that the row operations produce rows that are still linear combinations of the original columns. If that makes sense.

6. Jul 19, 2013

### voko

It looks like you have grasped the essence of the issue. Good!

7. Jul 19, 2013

### lurflurf

^Yes, this is confusing. Some books suggest transposing the matrix, reducing it, then transposing it again, because the authors believe column operations might confuse the reader; others suggest using column operations directly. The two approaches are equivalent.

8. Jul 19, 2013

### twinkerules

Thanks everyone for the help. I feel like I have a better understanding and will give the problems a try again in my next study session.

9. Jul 19, 2013

### Zondrina

The column space of a matrix $A \in M_{m \times n}( \mathbb{F} )$ ( denoted $C(A)$ ) is the subspace of $\mathbb{F}^m$ consisting of all linear combinations of the columns of $A$.

So imagine the columns of $A$ are listed $\{ A_1, A_2, ..., A_n\}$.

Then $C(A) = span\{A_1, A_2, ..., A_n\}$.

You also know that a basis is a set of linearly independent vectors that spans the space.

Now, you can easily just take all the columns of $A$ as a spanning set for $C(A)$, but there may be a smaller set that describes the space. If you row reduce $A$ first, the pivot ( leading 1 ) positions tell you which columns are linearly independent: the columns of the *original* $A$ sitting in the pivot positions form a basis. This may reduce the number of vectors you need to describe $C(A)$, which is what you're aiming for.
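A sketch of that pivot check on the matrix from the second problem (same hand-rolled exact `rref` idea as above; the pivot columns tell you which columns of the original $A$ to keep):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over exact fractions; returns (matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, -2, 7, 0], [1, -1, 4, 0], [3, 2, -3, 5], [2, 1, -1, 3]]
_, pivots = rref(A)
# Pivots land in columns 1, 2, and 4 (0-indexed: 0, 1, 3), so the corresponding
# columns of the ORIGINAL A form the part (b) answer: (1,1,3,2), (-2,-1,2,1), (0,0,5,3).
basis = [[row[c] for row in A] for c in pivots]
```

Note that no transpose is needed here: the pivot positions of rref$(A)$ select columns of $A$ itself.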

Now there's a great theorem you can utilize here. Let $R(A)$ denote the row space of A. I suggest you prove this, but we can write :

$$R(A) = C( A^t )$$

Then all the same logic about $C(A)$ applies to $R(A)$ simply by transposing $A$.

Last edited: Jul 19, 2013