# Does this question make sense to anyone?

1. Jun 8, 2013

### Emspak

1. The problem statement, all variables and given/known data

Let S be a nonempty set and K a field. Let C(S,K) denote the set of all functions $f \in \mathcal{F}(S,K)$ (the functions from S to K) such that $f(s) = 0$ for all but a finite number of elements of S.

Prove that C(S,K) is a vector space.

OK, I can't make the slightest sense of this. What in the name of all that is holy does the condition f(s) = 0 have to do with C(S,K) being a vector space? Can anyone at least help me figure out what the heck I am supposed to even start with?

Last edited: Jun 8, 2013
2. Jun 8, 2013

### Dick

I don't think there's too much to prove. Function spaces are pretty generally vector spaces. I think you just want to prove that if f and g vanish for all but a finite number of elements of S, then so do f+g and k*f for k in K. Just show it's closed under addition and scalar multiplication. Though you didn't say what the scalars are supposed to be for this vector space; I'm just guessing they're the elements of K.
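To make the closure idea concrete, here is a quick sketch (my own illustration, not notation from the problem): a function S → K with finite support can be modeled as a dict that stores only the finitely many points where it is nonzero, with K played by Python floats.

```python
def add(f, g):
    """Pointwise sum of two finitely-supported functions (dicts of
    nonzero values).  The result can be nonzero only where f or g is,
    so its support sits inside the union of the two supports."""
    h = {}
    for s in set(f) | set(g):
        v = f.get(s, 0.0) + g.get(s, 0.0)
        if v != 0.0:          # drop points where the values cancel
            h[s] = v
    return h

def scale(k, f):
    """Scalar multiple; the support stays the same (or becomes empty
    when k = 0), so it stays finite."""
    return {s: k * v for s, v in f.items()} if k != 0 else {}

f = {"a": 2.0, "b": -1.0}     # f vanishes everywhere except a, b
g = {"b": 1.0, "c": 3.0}      # g vanishes everywhere except b, c

h = add(f, g)                 # support of f+g is inside {a, b, c}
print(sorted(h))              # ['a', 'c'] -- still finite; b cancelled
print(scale(2.0, f))
```

The point of the sketch is just that both operations can only shrink or merge finite supports, never produce an infinite one, which is the closure argument in miniature.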

3. Jun 8, 2013

### Emspak

Where in the world does g come from?

I can see that the zero function f(s) = 0 should be part of C(S,K). But that's an intuition more than anything else. I know that in order to be a vector space you have to be able to add and do scalar multiplication; there are several axioms in the book that define vector spaces. Well and good. But I am completely, utterly, totally baffled by this problem because I can't get a handle on what they are asking.

4. Jun 8, 2013

### Dick

Well, if f(s) is zero for all but a finite number of elements of S and ditto for g(s), then does f(s)+g(s) vanish for all but a finite number of elements of S? Like I said, they may just want an argument for closure. Vector spaces have to be closed under addition and scalar multiplication. Isn't that one of your axioms?

5. Jun 8, 2013

### Emspak

The axioms for the definition of vector space we have are:

(V is a Vector space, F is Field)

1. for all x, y in vector space V, x + y = y + x
2. for all x, y, z in V, (x + y) + z = x + (y + z)
3. there exists an element in V, denoted by 0, s.t. x + 0 = x for each x in V
4. for each element x in V there exists an element y in V s.t. x + y = 0
5. for each element x in V, 1x = x
6. for each pair of elements a, b in F and each element x in V, (ab)x = a(bx)
7. for each element a in F and each pair of elements x, y in V, a(x + y) = ax + ay
8. for each pair of elements a, b in F and each element x in V, (a + b)x = ax + bx

we did do a whole exercise of showing closure under addition in certain cases, I think, but I am not entirely sure what this has to do with anything here. Closure under addition and multiplication is required for something to be a subspace, no?

I did see a proof like this one with the problem stated as:

That one is stated like this: Let S be a nonempty set and F a field. Let C(S,F) denote the set of all functions f ∈ F(S,F) such that f(s) = 0 for all but a finite number of elements of S. Prove that C(S,F) is a subspace of F(S,F).

Proof (as stated in the book)
For f(s) = 0, the ”zero” element of F(S,F) is clearly in C (S, F).

(It is not clear to me at all, really, but I'll take it as it is for the moment)

Let f(si) = ci for ci ≠ 0, si ∈ S, i = 1, ..., n, and g(ti) = di for di ≠ 0, ti ∈ S, i = 1, ..., m,
and for all other s ∈ S let f(s) = 0, g(s) = 0.
Now, (f + g)(s) = 0 for all s ∈ S except possibly s = si, i = 1, ..., n and s = ti, i = 1, ..., m. Thus f + g is again a function from S to F that vanishes at all but a finite number of elements of S. Similarly, one can show αf ∈ C(S,F).

Again, none of this is as obvious as the text makes it out. I am looking at this and not sure if I can adapt it to what I see in front of me. I feel like this is Mandarin right now.

6. Jun 8, 2013

### Dick

That's fine. You are probably just thinking this is harder than it is. If the {si} are a finite set and the {ti} are a finite set, then their union is a finite set, and f+g can only be nonzero inside that union. I think that's really all there is to this problem. Same thing for scalar multiplication.
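The key fact here can even be checked numerically; a throwaway sketch with made-up finite supports (the specific sets are my own example, not from the problem):

```python
# Toy check: the union of two finite supports is finite, and it bounds
# where f + g can possibly be nonzero.
S1 = {1, 2, 3}    # the finitely many points where f is nonzero
S2 = {3, 4}       # the finitely many points where g is nonzero

union = S1 | S2   # f + g can only be nonzero inside this set
print(len(union))                         # 4 -- still finite
print(len(union) <= len(S1) + len(S2))    # True: |S1 ∪ S2| <= |S1| + |S2|
```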

7. Jun 8, 2013

### Emspak

so g is just an arbitrary function? It looks pulled out of nowhere.

Arrrgh. This is the problem. The proof I showed you looks like just magic right now and this is pretty frustrating.

Here's what I can sort of figure out: we have f(s) = 0 and we can say that it is in C(S,K), but I have no idea whether that's something we can take as given or not.

You say it vanishes. How? Because it is zero?

If we pick an arbitrary function g and set it = 0, then yes, (f+g) = 0. But I am not getting the connection to proving that anything is a vector space. I just don't see what one has the slightest to do with the other. Unless it is something to do with axioms 3 and 4?

8. Jun 8, 2013

### Dick

Mmm. Your list of axioms omits the ones you really want to prove, the closure ones. Here's what you really need to prove: if f and g are in C(S,K), then f+g is in C(S,K). Is that true? If k is a scalar in K, is k*f in C(S,K)? Just think about the definition of C(S,K).

9. Jun 8, 2013

### Simon Bridge

g comes from the same place as f and g(s)=0 under the same conditions that f(s)=0.
In order to prove something is a vector space, you need two arbitrary vectors in the space.

10. Jun 8, 2013

### Emspak

OK, that makes more sense. And I can just pull g out of nowhere as an arbitrary function, right? I say that g is in C(S,K) along with f, and prove they are both in C(S,K). Is that about right? Because k*f being in C(S,K) would be axiomatic it seems to me (from the list I gave you).

Now, thinking about it, if I say that f(s) = 0 for all but a certain number of elements of S, and say that we're adding it to a random, arbitrary function g, then we get f + g = g

and f becomes the 0, per axiom 3.

am i getting there?

11. Jun 9, 2013

### Simon Bridge

I'm going to leave the actual math part to Dick ... you also appear to have a conceptual issue, I'll address that:
You are not pulling g "out of nowhere", you are pulling it out of C(S,K). There are a lot of functions that would fit the definition you have, you pull two of them out, call one f and the other one g. You can call them Bazza and Trev if you like - it's just a label.

12. Jun 9, 2013

### Emspak

I think I am just tired. Most of the time you have to justify that kind of thing somehow, and that was what was confusing me, I think.

13. Jun 9, 2013

### Emspak

OK. Here's a version of the proof.

Let S be a nonempty set and K a field. Let C(S,K) denote the set of all functions f∈C(S,K) such that f(s)=0 for all but a finite number of elements of S. Prove that C(S, K) is a vector space.

Vector spaces are defined by the following axioms:
1. for all x, y in vector space V, x + y = y + x
2. for all x, y, z in V, (x + y) + z = x + (y + z)
3. there exists an element in V, denoted by 0, s.t. x + 0 = x for each x in V
4. for each element x in V there exists an element y in V s.t. x + y = 0
5. for each element x in V, 1x = x
6. for each pair of elements a, b in F and each element x in V, (ab)x = a(bx)
7. for each element a in F and each pair of elements x, y in V, a(x + y) = ax + ay
8. for each pair of elements a, b in F and each element x in V, (a + b)x = ax + bx

Proof:

Let f and g be arbitrary functions in C(S,K), and let λ1, λ2 be scalars in K.

The zero function, 0(s) = 0 for all s ∈ S, vanishes everywhere (in particular at all but a finite number of points), so it is in C(S,K), and f + 0 = f; axiom (3) is satisfied.

With the definition of addition: (f+g)(s) = f(s)+g(s) ∀s ∈ S

Since addition in K is commutative, (f+g)(s) = (g+f)(s) ∀s ∈ S, satisfying axiom (1)

Since addition in K is associative, ((f+g)+h)(s) = (f+(g+h))(s) ∀s ∈ S, satisfying axiom (2)

(λ1 + λ2)f(s) = λ1f(s) + λ2f(s) ∀s ∈ S satisfies axiom (8)

λ1(f+g)(s) = λ1f(s) + λ1g(s) ∀s ∈ S satisfies axiom (7)

(λ1λ2)f(s) = λ1(λ2f(s)) ∀s ∈ S satisfies axiom (6)

1f(s) = f(s) ∀s ∈ S, and f is in C(S,K), so axiom (5) is satisfied

(-1)f(s) = -f(s) ∀s ∈ S, and -f(s) + f(s) = 0 with -f ∈ C(S,K), satisfying axiom (4)

By the definition of the problem there is a finite set S1 ⊆ S such that f(s) ≠ 0 only for s ∈ S1, and another finite set S2 ⊆ S such that g(s) ≠ 0 only for s ∈ S2. Therefore the only place where λ1f + λ2g can be nonzero is inside S1 ∪ S2, and (S1 ∪ S2) ⊆ S.

Since S1 and S2 are both finite, their union is finite, so λ1f + λ2g vanishes at all but a finite number of elements of S and is again in C(S,K); the set is closed under addition and scalar multiplication.

Therefore C(S,K) is a vector space.

anyone willing to tell me if I messed it up again?

14. Jun 9, 2013

### Dick

I think that's pretty much it, but most of it's pretty generic. The only really important part for your problem is that if S1 and S2 are finite, then S1 ∪ S2 is finite. Right?

15. Jun 9, 2013

### Emspak

I suppose so, and I wonder if the reason for the problem (as in: why it's assigned that way) is to illustrate that point.

16. Jun 9, 2013

### Dick

Hard to say. But if the problem had been: define D(S,K) to be the set of all functions f:S->K such that f(s)=0 for all but AT MOST ONE value of s, then it would NOT be a vector space. If you see why, then you probably understand the problem well enough.

Last edited: Jun 10, 2013
17. Jun 10, 2013

### Emspak

I am not sure I understand why it isn't a vector space when f(s) = 0 for all but exactly one s, as opposed to all but a finite set of s values. I mean, I can think of a few functions that are presumably in a vector space and do just that, unless I am reading it wrong. I get why S1 ∪ S2 would be finite.

18. Jun 10, 2013

### Ray Vickson

You are reading it wrong---exactly backwards. For example, we might have f(1) ≠ 0 but f(x) = 0 for all x≠1. We might have g(2) ≠ 0 but g(x) = 0 for all x≠2. Then f+g would be nonzero at two points, so would not be a function that is nonzero at only one point.
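This counterexample is easy to see in a quick sketch (dicts of nonzero values again; the specific points and values are my own, not from the thread):

```python
# f is nonzero only at 1, g is nonzero only at 2 -- each is in the
# hypothetical set D(S, K) of functions nonzero at AT MOST one point.
f = {1: 5.0}
g = {2: 7.0}

# Pointwise sum, keeping only the nonzero values.
h = {s: f.get(s, 0.0) + g.get(s, 0.0) for s in set(f) | set(g)}
h = {s: v for s, v in h.items() if v != 0.0}

print(len(h))   # 2 -- f + g is nonzero at TWO points, so it left D(S, K):
                # the set is not closed under addition, hence not a vector space
```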

19. Jun 10, 2013

### Emspak

Ah, what you are showing me looks a lot like the delta function. There's a relation there, no?

20. Jun 10, 2013

### Dick

The problem says f(s) is only NONZERO at a finite number of points. If f is nonzero at one point and g is nonzero at one point, is f+g nonzero at at most one point?