# Homework Help: Vector Subspace Proof

1. Sep 25, 2009

### Hallingrad

Hey guys, I've got another problem I could use some assistance with.

"In this problem we suppose that F is a field, A is an m × n matrix over F, and that W is a subspace of F^m.
(a) Show that U = {v ∈ F^n : Av ∈ W} is a subspace of F^n.
(b) Now suppose that m = n and A is invertible, and that B = {v_1, v_2, ..., v_k} is a basis for W. Show that {A^{-1}v_1, ..., A^{-1}v_k} is a basis for U."

For (a), the way I'm understanding it is by looking at a 3×4 matrix. With such a matrix, v is a 4×1 vector, each product Av is a 3×1 vector, and U includes only those v whose images lie in W, i.e. are independent of each other. There will be at most n products satisfying this, since the rank cannot exceed the lesser of m and n. Am I on the right track for completing the proof?

As for (b), I'm a little less clear. I'm thinking I'd multiply both sides of Av ∈ W by A^{-1}, and since W is composed of the basis (v_1 to v_k), I'd multiply each of those vectors by A^{-1} to form my basis for U.

Any suggestions or comments to my reasoning and approach to the problems would be greatly appreciated.

2. Sep 25, 2009

### Office_Shredder

Staff Emeritus
To prove something is a subspace, you need to prove it's closed under two operations. Do you know how to do this?

3. Sep 25, 2009

### Hallingrad

Yes, I'm well aware of that property. I'm just a little confused by the requisite condition placed on the subspace, and by the use of both F^m and F^n in the problem's description.

4. Sep 25, 2009

### aPhilosopher

It's very easy. Think about the properties of linear transformations.

5. Sep 25, 2009

### Hallingrad

We aren't actually covering linear transformations until next week, although the assignment is due on Monday. He always puts problems on the assignments that he doesn't cover until after they're due >_>. I thought I could solve the problem with just what I knew about inverses, vector spaces, and bases, but I guess I should do some reading about linear transformations then? Anything I should focus on?

6. Sep 25, 2009

### aPhilosopher

Well, you can essentially consider a linear transformation to be a matrix, so what you know about matrices should do the trick: you know that A(x + y) = Ax + Ay and that A(cx) = c(Ax), where c is a scalar and x, y are vectors, right? Those two properties define a linear transformation.
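For a quick numerical illustration of those two properties (the matrix, vectors, and scalar below are arbitrary choices, just for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # an m-by-n matrix, here m = 3, n = 4
x = rng.standard_normal(4)
y = rng.standard_normal(4)
c = 2.5                           # an arbitrary scalar

# A respects vector addition: A(x + y) = Ax + Ay
assert np.allclose(A @ (x + y), A @ x + A @ y)

# A respects scalar multiplication: A(cx) = c(Ax)
assert np.allclose(A @ (c * x), c * (A @ x))
```

Of course a numerical check on one example is not a proof; these identities follow from how matrix-vector multiplication is defined.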

7. Sep 25, 2009

### Hallingrad

Ah, so I take it that in (a), our matrix A maps each vector from F^n to F^m? So I'd just show that the vectors that can successfully be mapped form a subspace, by proving that v_1 + v_2 is still in the subspace of F^n (and can be mapped into F^m by A), and likewise for scalar multiplication? I feel like I'm overcomplicating things :/

8. Sep 25, 2009

### aPhilosopher

Well, you're overcomplicating things, because it's a given that since v_1 and v_2 lie in W, any linear combination of them does as well. Also, what would a vector that can't be successfully mapped look like? (Hint: it would look like a unicorn, or a teapot floating around the sun, because it doesn't exist!)

9. Sep 25, 2009

### Hallingrad

Wait, isn't W in F^m, while v_1 and v_2 are in F^n? Aren't v_1 and v_2 only in W (and hence F^m) upon being mapped by A?

10. Sep 25, 2009

### aPhilosopher

Let's get our terminology straight and keep (a) separate from (b), to keep things clear.

Let v_1 and v_2 lie in W, which is in F^n. Instead of labeling their images in F^m, let's just write them as Av_1 and Av_2. Does that make sense? We know that v_1 and v_2 lie in W, and we also know that v_1 + v_2 lies in W, along with any other linear combination of them. Plus we have the fact that A respects vector addition and scalar multiplication, as I explained in post #6. Do you think you can organize those pieces of information into a proof of part (a)?

Last edited: Sep 25, 2009
11. Sep 25, 2009

### Hallingrad

Yeah, I think I can write it as such. The only thing I'm unclear on is why v_1 and v_2 lie in W, when the v's are in F^n while W is in F^m?

12. Sep 25, 2009

### aPhilosopher

My bad. I switched m and n; I'm used to doing it the other way. W lies in F^m.

For the second one, the v_i all lie in F^m, so just write their images under A^{-1} (which lie in F^n) as A^{-1}v_i, and that should get you started.

If you want, post your proofs and I or somebody else can check them for you.

13. Sep 26, 2009

### Hallingrad

So for the first part, because the v's lie in F^n and W is in F^m, the v's don't actually lie in W, correct? Instead the transformation by A brings them into W, so we'd have to show that Av_1 and Av_2 form a subspace?

14. Sep 26, 2009

### aPhilosopher

Yeah, that's almost right. You have to show that if v_1 gets mapped into W and v_2 gets mapped into W (and so both lie in U), then any linear combination of v_1 and v_2 also gets mapped into W, because then that linear combination lies in U as well. Use the linearity properties of A as I outlined in post #6 and the subspace properties of W ;)

15. Sep 26, 2009

### Hallingrad

Here are my answers to the proofs. Could you let me know what you think?

For (a) I wrote: "A is a matrix that maps vectors from F^n to F^m. Each v lies in F^n and is transformed by A to lie in W, which is in F^m. U is a subspace of F^n, consisting of all vectors in F^n that are mapped into W by A. It is a subspace because A(v_1 + v_2) = Av_1 + Av_2, which is still in W; hence v_1 + v_2 is still in U, so U is closed under addition. To show closure under scalar multiplication, we have A(cv_1) = c(Av_1), which is still mapped into W, and hence cv_1 is still in U."
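As a concrete sanity check of this closure argument (not a proof), one can pick a specific A and W and verify numerically that U is closed under both operations. The matrix, the choice W = {w ∈ R^3 : w_3 = 0}, and the test vectors below are all invented for illustration; with this W, membership of v in U is just the single linear condition (Av)_3 = 0:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 0., 1., 1.]])   # maps F^4 -> F^3 (here F = R)

def in_W(w, tol=1e-10):
    """W = {w in R^3 : w[2] = 0}, a subspace of the codomain."""
    return abs(w[2]) < tol

def in_U(v):
    """U = {v in R^4 : Av in W}."""
    return in_W(A @ v)

# Two vectors in U: each satisfies (Av)[2] = v[0] + v[2] + v[3] = 0.
v1 = np.array([1., 0., -1., 0.])
v2 = np.array([0., 3., 1., -1.])
assert in_U(v1) and in_U(v2)

# Closure under addition and scalar multiplication.
assert in_U(v1 + v2)
assert in_U(5.0 * v1)
```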

(b) Because A maps from F^n to F^m, or more precisely from U to W, its inverse does the opposite. Because the vectors in W's basis are independent by definition, applying the inverse leaves them independent.

I'm just not sure how to show that the resulting set still spans all of U. Does it just follow from the definition?

16. Sep 26, 2009

### aPhilosopher

(a) looks good. (b) just states what you want to show without actually showing it.

Also, in (b) we have m = n, so it is less confusing to work in the single space F^n throughout.

The first step in the proof is to take an arbitrary u in U. Then Au lies in W, and so can be expressed as Au = c_1v_1 + ... + c_kv_k in terms of the basis B given in the statement of part (b).
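One way to see where that first step leads (a sketch, writing k for the size of the basis B):

```latex
% Spanning: for arbitrary u \in U we have Au \in W, so
Au = \sum_{i=1}^{k} c_i v_i
\;\Longrightarrow\;
u = A^{-1}(Au) = \sum_{i=1}^{k} c_i \, A^{-1}v_i .
% Independence: if \sum_{i=1}^{k} d_i \, A^{-1}v_i = 0, apply A to get
% \sum_{i=1}^{k} d_i v_i = 0, so every d_i = 0 since B is a basis.
```

The key point in both halves is the same: multiplying by A or A^{-1} carries a linear combination to the linear combination of the images with the same coefficients.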

A comment on part (a):

Put in words, what part (a) says is that the preimage of a subspace of the target of a linear map/matrix is a subspace of the domain. A companion statement, which is also true, is that the image of a subspace under a linear map is a subspace of the target. The reason for this is very simple: subspaces are defined in terms of vector addition and scalar multiplication, and because a linear transformation/matrix preserves these two operations, it also preserves subspaces. It would be a good exercise (very similar to (a)) to prove the companion statement I just gave you.
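A quick numerical illustration of that image statement (the matrix and spanning vectors are arbitrary choices, not from the problem): a random linear combination of u_1, u_2 in the domain is mapped into the span of Au_1, Au_2 in the target, with the same coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))          # a linear map F^4 -> F^3
u1 = rng.standard_normal(4)
u2 = rng.standard_normal(4)

# A random element of span{u1, u2} in the domain...
a, b = rng.standard_normal(2)
v = a * u1 + b * u2

# ...maps into span{A u1, A u2} in the target.
M = np.column_stack([A @ u1, A @ u2])    # columns span the image subspace
coeffs, *_ = np.linalg.lstsq(M, A @ v, rcond=None)

assert np.allclose(M @ coeffs, A @ v)    # A v lies in span{A u1, A u2}
assert np.allclose(coeffs, [a, b])       # with the same coefficients, by linearity
```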
