# Understanding Vector Spaces with Functions

Is the set of all differentiable functions ƒ:ℝ→ℝ such that ƒ'(0)=0 a vector space over ℝ? I was given the answer yes by someone who is better at math than me, and he tried to explain it to me, but I don't understand. I am having difficulty conceptualizing vector spaces of functions because I can't really visualize them like a plane in 3D space. I am also wondering what the importance is of having vector spaces over a field. It seems trivial, or maybe it's just me being brainwashed by years of elementary mathematics.

fresh_42
Mentor
What is a vector space to you? What are you allowed to do in it?

What is a vector space to you? What are you allowed to do in it?
From my understanding, vector spaces are sets that follow particular rules such as commutativity, addition, associativity, scalar multiplication, having a zero vector, and identity.

fresh_42
Mentor
Yes, so you have to name the domain the scalars come from, i.e. the field. E.g. ##\{0,1\}## is a field as well, and it gives different vector spaces than e.g. ##\mathbb{R}## or ##\mathbb{C}## do. So the field is by no means "trivial".

The other part is the addition, which has to be commutative and associative, and has to have a zero element and negatives (inverse elements).
For short: it is an additive group.

Now to the functions. We can define ##f+g## as the function ##x \mapsto f(x) + g(x)## and ##c \cdot f## as the function ##x \mapsto c\cdot f(x)##, and we have a vector space, say a real one. To decide whether the given set ##S:=\{f \in C^1(\mathbb{R})\,\vert \, f'(0)=0\}## is a vector space, you have to check whether the defined operations keep you inside this set and whether it is non-empty.
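These pointwise definitions can be spot-checked numerically. A minimal Python sketch (the helper names `add`, `scale`, `deriv_at`, and `in_S` are my own, and a finite-difference check only illustrates membership in ##S##; it does not prove it):

```python
import math

def add(f, g):
    # pointwise sum: (f+g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def scale(c, f):
    # pointwise scalar multiple: (c*f)(x) = c * f(x)
    return lambda x: c * f(x)

def deriv_at(f, x, h=1e-6):
    # central-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def in_S(f, tol=1e-6):
    # numerical membership test for S = {f | f'(0) = 0}
    return abs(deriv_at(f, 0.0)) < tol

f = math.cos          # cos'(0) = -sin(0) = 0, so cos is in S
g = lambda x: x ** 2  # (x^2)'(0) = 0, so x^2 is in S

print(in_S(f), in_S(g), in_S(add(f, g)), in_S(scale(3.0, f)))  # True True True True
```

Note that ##\sin## would fail the test, since ##\sin'(0)=1##; the sum and the scalar multiple stay inside ##S##, which is exactly the closure property discussed below.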

Stephen Tashi
I am having difficulty conceptualizing vector spaces of functions because I can't really visualize them like a plane in 3D space.
Don't try to conceptualize a vector space of functions by a simple geometric picture. You might make some headway by thinking of a function as a trajectory. Think of the variable as "time". For example if ##f(t)## and ##g(t)## give the prices of two stocks at time t, someone might define a stock index that varies with time by computing ## (1/2)(f(t) + g(t))##, which embodies the idea of "multiplication by a scalar" and "adding vectors".
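That averaging idea in a few lines of Python (the price functions here are made-up linear examples, purely to make the arithmetic concrete):

```python
# hypothetical stock prices as functions of time t
f = lambda t: 100 + 2 * t
g = lambda t: 80 - t

# the index is (1/2)(f + g): a scalar multiple of a vector sum
index = lambda t: 0.5 * (f(t) + g(t))

print(index(0), index(10))  # 90.0 95.0
```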

A general pattern for thinking of functions as vectors occurs in physics and other sciences when we have a problem of finding "unknown functions" (as opposed to the case of solving equations, where the solutions are "unknown numbers"). In many important cases, we have the additional feature that if ##f_1(x)## and ##f_2(x)## are solutions to the problem then ##k_1f_1(x) + k_2f_2(x)## is also a solution, where ##k_1,k_2## are arbitrary constants. In those special cases the solutions form a vector space of functions.

I am also wondering what the importance is of having vector spaces over a field. It seems trivial

Perhaps by "trivial" you mean that you can always think of "the field" as being essentially the same thing as the real numbers. No, eventually you will encounter material where that won't work. There are topics where you must think of "the field" as the set of complex numbers and there are topics where "the field" must be a finite field, such as the integers mod 7.

##f \in C^1##
I am sorry, but I am not totally sure what this means. Back to my original question: does it mean this is a vector space, since the function must have degree ≠ 1 in order to satisfy the condition ƒ'(0) = 0? Therefore addition exists, because f+g will exist within this space, since it satisfies the condition. So does commutativity, which, correct me if I am wrong, really looks simple. And associativity also exists, because moving the parentheses around does not change anything. I am actually unsure about what I am saying now. Does the zero vector exist? Does it matter that f(x) must equal zero for all x? I thought up a few examples where it would not exist, such as f(x) = 0. Never mind: 0v(x) = 0 and 0v'(x) = 0, so the zero vector does exist. An additive inverse also exists, because -f(x) will still satisfy f'(0) = 0 and will still add to the zero vector. A scalar multiple of u will still satisfy f'(0) = 0. Distributivity also works. The multiplicative identity also works. Sorry if I left a couple out. I am still confused about how we would go about showing that these rules hold on the vector space. It all seems to work in my head, but I guess that's due to just accepting addition and multiplication as they are. Can you explain how I would go about proving each of these rules?

fresh_42
Mentor
I am sorry, but I am not totally sure what this means. Back to my original question: does it mean this is a vector space, since the function must have degree ≠ 1 in order to satisfy the condition ƒ'(0) = 0? Therefore addition exists, because f+g will exist within this space, since it satisfies the condition. So does commutativity, which, correct me if I am wrong, really looks simple. And associativity also exists, because moving the parentheses around does not change anything. I am actually unsure about what I am saying now. Does the zero vector exist? Does it matter that f(x) must equal zero for all x? I thought up a few examples where it would not exist, such as f(x) = 0. Never mind: 0v(x) = 0 and 0v'(x) = 0, so the zero vector does exist. An additive inverse also exists, because -f(x) will still satisfy f'(0) = 0 and will still add to the zero vector. A scalar multiple of u will still satisfy f'(0) = 0. Distributivity also works. The multiplicative identity also works. Sorry if I left a couple out. I am still confused about how we would go about showing that these rules hold on the vector space. It all seems to work in my head, but I guess that's due to just accepting addition and multiplication as they are. Can you explain how I would go about proving each of these rules?
##f \in C^1(\mathbb{R})## is only shorthand for "once-differentiable real function".
You said a vector space is a set. So the set here is ##S = \{f :\mathbb{R}\rightarrow \mathbb{R}\,\vert \,f \textrm{ is differentiable and } f'(0)=0\}##

Now we have to show that ##S \neq \{\}##. Since obviously ##0 \in S## (the function that maps all real numbers to zero), this isn't an issue.

Then you said that there has to be an addition (within the set) that forms a group (commutative, associative, inverse elements and ##0##).

You are right with your argumentation, although the word "degree" is misplaced here: ##x \mapsto \cos x## is also in ##S##.
Formally we would have to show that if ##f,g \in S## then ##f+g\, , \,-f\, , \,0 \in S## as well, and that they obey commutativity ##(f+g=g+f)## and associativity ##(f+(g+h)=(f+g)+h)##. Remember that ##f+g## is defined by ##f+g : x \mapsto f(x)+g(x)##, so e.g. commutativity means ##(f+g)(x)=f(x)+g(x)=g(x)+f(x)=(g+f)(x)## for all ##x \in \mathbb{R}##. It is not really difficult, as you've noticed; it is rather the duty to write it down. In books one usually reads "left to the reader", "obviously", or similar expressions. The only point is perhaps to explicitly mention ##(f+g)'(0)=f'(0)+g'(0)=0+0=0## and likewise with ##-f## and ##0##.
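For what it's worth, these pointwise identities can be spot-checked at a handful of sample points; a sketch (the helper `add` is mine, and the sample values are chosen to be exactly representable floats — finitely many checks illustrate rather than prove):

```python
f = lambda x: x ** 2
g = lambda x: x ** 3 - x
h = lambda x: 2.5 * x

def add(p, q):
    # pointwise sum of two functions
    return lambda x: p(x) + q(x)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    # commutativity: (f+g)(x) = (g+f)(x)
    assert add(f, g)(x) == add(g, f)(x)
    # associativity: (f+(g+h))(x) = ((f+g)+h)(x)
    assert add(f, add(g, h))(x) == add(add(f, g), h)(x)

print("pointwise identities hold at all sample points")
```

The checks reduce, point by point, to commutativity and associativity of addition in ##\mathbb{R}##, which is exactly the structure of the written proof.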

Next you said that scalar multiplication is allowed in ##S##. This means multiples of functions ##f \in S## by real numbers ##c##.

So the same procedure as above: ##c\cdot f : x \mapsto c\cdot f(x)## is the definition of multiplication by (real) scalars, and ##(c\cdot f)'(0)=c \cdot f'(0) = c \cdot 0 = 0## is the formal proof that ##c \cdot f \in S##.

Of course to be completely rigorous, one also has to show that ##f+g## and ##c\cdot f## are also differentiable.

Now we have shown that ##S## is actually a vector space, because it has the required properties (see your post #3).
As a remark: if we defined ##S_1 := \{f :\mathbb{R}\rightarrow \mathbb{R}\,\vert \,f \textrm{ is differentiable and } f'(0)=1\}##, then this would not be a vector space.
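This remark about ##S_1## can be made concrete with a quick numerical check (my own sketch; `deriv_at` is a finite-difference helper, not a proof device): two members of ##S_1## sum to a function with derivative ##2## at ##0##, so the sum leaves ##S_1##.

```python
def deriv_at(f, x, h=1e-6):
    # central-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x            # f'(0) = 1, so f is in S_1
g = lambda x: x + x ** 2   # g'(0) = 1, so g is in S_1
s = lambda x: f(x) + g(x)  # (f+g)'(0) = 1 + 1 = 2: the sum leaves S_1

print(deriv_at(f, 0.0), deriv_at(g, 0.0), deriv_at(s, 0.0))
```

So ##S_1## fails closure under addition (and also lacks the zero function), which is why only the condition ##f'(0)=0## gives a subspace.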

Stephen Tashi
I am sorry, but I am not totally sure what this means. Back to my original question: does it mean this is a vector space, since the function must have degree ≠ 1 in order to satisfy the condition ƒ'(0) = 0?

(As to the "degree" of a function in your example: "degree" is a concept that applies to polynomials, but the set of functions in your example also includes functions like ##\cos(x)##.)

Let's consider a simpler situation first. Consider the set ##V## consisting of all possible real-valued functions that have domain ##\mathbb{R}##.

We define addition of functions "pointwise", meaning the definition of function ##h = f+g## is that ##h(x) = f(x) + g(x)##. We define the multiplication of a scalar ##k## (a real number) times a function ##f## "pointwise", meaning the definition of the function ##h = kf## is ##h(x) = k f(x)##.

Define the "zero vector" to be the constant function ## z(x) = 0 ##.

With those definitions, the set ##V## is a vector space. You can verify that all the properties that define a vector space do work.

If we are presented with a problem that involves a more restricted set of functions (e.g. "the set of differentiable functions whose domain is ##\mathbb{R}##"), then we have to worry about whether the restrictions would cause any of the properties to fail. (E.g. Is the sum of differentiable functions also a differentiable function? Is a scalar times a differentiable function also a differentiable function? Is the constant function z(x) = 0 a differentiable function?) In typical textbook problems, if any of the properties fail, the failure usually involves an operation not being "closed" — i.e. an operation on two things of a certain kind failing to produce a thing of that same kind. Properties like associativity are usually easy to verify.

Your example has the restriction "differentiable functions" plus the restriction "functions whose derivative is 0 when evaluated at x = 0". So you ask: Is the sum of two differentiable functions, each of whose derivatives is 0 when evaluated at x = 0, also a differentiable function whose derivative is 0 when evaluated at x = 0? ... and so forth.

Formally we would have to show that if ##f,g \in S## then ##f+g\, , \,-f\, , \,0 \in S##
Do we have to show this explicitly with the field the set is in?

PeroK
Homework Helper
Gold Member
2020 Award
Is the set of all differentiable functions ƒ:ℝ→ℝ such that ƒ'(0)=0 a vector space over ℝ? I was given the answer yes by someone who is better at math than me, and he tried to explain it to me, but I don't understand. I am having difficulty conceptualizing vector spaces of functions because I can't really visualize them like a plane in 3D space. I am also wondering what the importance is of having vector spaces over a field. It seems trivial, or maybe it's just me being brainwashed by years of elementary mathematics.

Specific words don't really imply anything. Suppose, instead of calling it a "Vector" space, we call it a "Linear" space. It has all the same axioms.

First we show that vectors (3D or 2D - or nD) form a Linear Space.

Then we show that sets of functions form a Linear Space.

Now, where is the problem with "functions" being "vectors"?

fresh_42
Mentor
Formally we would have to show that if ##f,g \in S## then ##f+g\, , \,-f\, , \,0 \in S## as well ...
Do we have to show this explicitly with the field the set is in?
I don't see a field here. For ##f,g \in S## we have to show that addition (of these functions, which are regarded as vectors) is closed, i.e. happens within the assumed vector or linear space ##S##. E.g. if ##f'(0)=g'(0)= 1## were the definition of an ##S'##, then ##f+g## would still be a differentiable function, but as ##(f'+g')(0)=2## it would leave ##S'##, which isn't allowed for a vector space. The same goes for the (additive) inverse element ##-f## and the (additive) neutral element ##0##: they all have to be in ##S##, that is, differentiable with ##(f+g)'(0)=0\, , \,-f'(0)=0\, , \,0'(0)=0##.

The field comes into play, if we consider the other operation, the scalar multiplication, i.e. ##(c\cdot f)(x)=c\cdot f(x)## with a scalar ##c## from the field, e.g. ##\mathbb{R}##. Here, too, has to hold ##(c\cdot f)'(0)=0##.

Whether this has to be shown depends on where you stand and what your task is. Most of us have done this from time to time, and in presumably more complicated cases, so we simply see it without writing anything down. The most difficult part would be to show that ##f+g## and ##c\cdot f## are differentiable if ##f## and ##g## are. The rest can be seen.

For ##f,g \in S## we have to show that addition (of these functions, which are regarded as vectors) is closed, i.e. happens within the assumed vector or linear space
My biggest question right now is: wouldn't showing this require the identity function, which we have not proven yet? f'(0) = 0 and g'(0) = 0, and (f+g)'(0) = f'(0) + g'(0) — would that be distributivity? Sorry, I'm not sure. But my point is: how can we prove that addition is closed in this space without having proved the other points, and conversely, how can we prove commutativity etc. without proving addition? Sorry, I am 99% sure this is a dumb question based on my lack of knowledge.

fresh_42
Mentor
You start from what you said in post #3:
Vector spaces are sets that follow particular rules such as commutativity, addition, associativity, scalar multiplication, having a zero vector, and identity.
Formally, you first need a non-empty set that shall carry the vector space structure. (I'll take the field to be the rational numbers here, but any other works as well. I choose it in order to show that the field, as the domain for the scalars, doesn't have to be the same domain on which the functions are defined. These are two different things.)

The set here is ##S := \{f : \mathbb{R} \rightarrow \mathbb{R}\,\vert \, f \textrm{ differentiable and } f'(0)=0\}##.
1) ##S \neq \{\}##

Next we have to define two different binary operations:
2) ##+ \, : \,S \times S \rightarrow S##
3) ##\cdot \, : \, \mathbb{Q} \times S \rightarrow S##
These definitions are made by setting ##(f,g) \mapsto f+g : x \mapsto f(x)+g(x)## and ##(c,f) \mapsto c\cdot f \, : \, x \mapsto c\cdot f(x)##.
It has to be shown that both operations are defined on and into the sets as claimed, i.e. that they land in ##S## again! Being in ##S## means being differentiable with derivative zero at zero. This has to be shown.

So now we have the ingredients. Next are the properties:
4) Existence of ##0 \in S##
5) Existence of inverse ##-f \in S##
6) Associativity ##(f+g)+h = f+(g+h)##
7) Commutativity ##f+g = g+f##
8) Distributivity ##c\cdot (f+g) = c\cdot f + c\cdot g\, , \, c \in \mathbb{Q}##
9) ##c \cdot f = f \cdot c## (if you want to be very petty.)

All identities are defined by ##f_1 = f_2 \Longleftrightarrow f_1(x)=f_2(x) \; \forall \; x \in \mathbb{R}##.

So this would be a schedule for a formal proof. (And I hope I haven't forgotten something.)
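Points 4), 5) and 8) of this schedule can likewise be spot-checked numerically; a sketch (the helpers and sample values are mine, and the scalar 1.5 = 3/2 stands in for an element of ##\mathbb{Q}##):

```python
def deriv_at(f, x, h=1e-6):
    # central-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

zero = lambda x: 0.0     # 4) the zero vector: zero'(0) = 0
f = lambda x: x ** 2
g = lambda x: x ** 4
neg_f = lambda x: -f(x)  # 5) the additive inverse of f

# 4) and 5): zero and -f satisfy the defining condition of S
assert abs(deriv_at(zero, 0.0)) < 1e-9
assert abs(deriv_at(neg_f, 0.0)) < 1e-9
# f + (-f) is the zero function (checked pointwise)
assert all(f(x) + neg_f(x) == zero(x) for x in [-1.0, 0.0, 2.0])

# 8) distributivity c*(f+g) = c*f + c*g, checked pointwise for c = 3/2
c = 1.5
for x in [-1.0, 0.5, 2.0]:
    assert c * (f(x) + g(x)) == c * f(x) + c * g(x)

print("spot checks for 4), 5) and 8) passed")
```

Again, the pointwise checks reduce each axiom to arithmetic in ##\mathbb{R}##, which is what the formal proof does symbolically.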

PeroK
Homework Helper
Gold Member
2020 Award
My biggest question right now is: wouldn't showing this require the identity function, which we have not proven yet? f'(0) = 0 and g'(0) = 0, and (f+g)'(0) = f'(0) + g'(0) — would that be distributivity? Sorry, I'm not sure. But my point is: how can we prove that addition is closed in this space without having proved the other points, and conversely, how can we prove commutativity etc. without proving addition? Sorry, I am 99% sure this is a dumb question based on my lack of knowledge.

You might want to prove the following:

1) If ##f## and ##g## are functions from ##\mathbb{R}## to ##\mathbb{R}## then ##f+g## is a function from ##\mathbb{R}## to ##\mathbb{R}##.

2) If ##f## and ##g## are differentiable, then ##f+g## is differentiable.

3) If ##f'(0) = g'(0) = 0##, then ##(f+g)'(0) = 0##

And note that you do not need the Linear Space axioms. In particular, you do not need commutativity of function addition. You may need to use the commutativity of real numbers, but not of function addition.
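Claim 3) is just the sum rule evaluated at ##x=0##; written out from the limit definition of the derivative (a sketch; it assumes claim 2, so that both individual limits exist and the limit of a sum may be split):

##(f+g)'(0)=\lim_{h\to 0}\dfrac{(f+g)(h)-(f+g)(0)}{h}=\lim_{h\to 0}\left(\dfrac{f(h)-f(0)}{h}+\dfrac{g(h)-g(0)}{h}\right)=f'(0)+g'(0)=0+0=0##

The middle equality uses only the pointwise definition of ##f+g## and the limit law for sums — no vector space axiom — which is the point of the remark above.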

fresh_42
Mentor
And note that you do not need the Linear Space axioms.
Why that?

PeroK
Homework Helper
Gold Member
2020 Award
Why that?

You mean you can always show that the set of all functions is a vector space? And then you can use the vector space axioms for differentiable functions?

Yes, that's a better idea.

fresh_42
Mentor
You mean you can always show that the set of all functions is a vector space? And then you can use the vector space axioms for differentiable functions?

Yes, that's a better idea.
No, I simply didn't understand. If you want to show that a set is a vector space, why could its axioms be omitted? I mean formally, regardless of whether they are obvious or not. I thought you meant that one can weaken the requirements to consider more general constructions like left and right vector spaces.

PeroK
Homework Helper
Gold Member
2020 Award
No, I simply didn't understand. If you want to show that a set is a vector space, why could its axioms be omitted? I mean formally, regardless of whether they are obvious or not. I thought you meant that one can weaken the requirements to consider more general constructions like left and right vector spaces.

No, I thought the OP wasn't sure how you could deal with derivatives and differentiable functions without assuming, for example, that function addition is commutative.

I thought it would be interesting for him to look at, say, the definition of the derivative and see whether the linear space axioms are required (even to well-define a derivative).

But, showing the Linear Space axioms hold for all functions removes any need for that.

You might want to prove the following:

1) If ##f## and ##g## are functions from ##\mathbb{R}## to ##\mathbb{R}## then ##f+g## is a function from ##\mathbb{R}## to ##\mathbb{R}##.

2) If ##f## and ##g## are differentiable, then ##f+g## is differentiable.

3) If ##f'(0) = g'(0) = 0##, then ##(f+g)'(0) = 0##

And note that you do not need the Linear Space axioms. In particular, you do not need commutativity of function addition. You may need to use the commutativity of real numbers, but not of function addition.
How would we prove the first one? It seems really trivial; I mean, you pull inputs from the same domain and the function will always land somewhere in the range.
For the second one, do you want me to prove it with limits? Because that would take a while.
The third one kinda seems obvious from what fresh_42 said before:
##(f+g)'(0)=f'(0)+g'(0)=0+0=0##
Please correct me if I am wrong. I don't understand what you are trying to get at, nor do I understand how far one needs to go in order to prove that the rules hold on the vector space. I do understand the notion of a vector space now, but the technicalities are messing with me because I don't have a good fundamental understanding of proofs in mathematics.
Sorry if I seem kinda hopeless with my questions ._. It's driving me insane that I don't understand it.

fresh_42
Mentor
Can you tell me which point, according to my numbering in post #13, you have difficulties with? From what you quoted I can only reply:
In order for a function ##f## to be an element of ##S## (for which we want to prove it's a vector space) there are three properties to be fulfilled:

a) ##f## is a function from ##\mathbb{R}## to ##\mathbb{R}\,.##
b) ##f## is (at least) once differentiable.
c) ##f'(0)=0##

The first part of the proof has to show that we don't leave ##S## by our operations ##"+"## and ##"\cdot"##.
The second part is to show that the operations have the properties required (associativity, etc.).

The functions might not look like the arrows you normally associate with a vector. So the crucial point here is that we concentrate on the rules that make up a vector space rather than drawing arrows. This means addition and stretching, resp. compressing, must be available to our function-arrows. And this must happen within the set. It will be a vector space of infinite dimension, which makes it harder to visualize than the plane or three-dimensional space we usually think of as vector spaces. E.g. the functions ##x \mapsto x^{2n}\; (n \in \mathbb{N})## alone are already linearly independent elements of ##S##.
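The linear-independence remark can be illustrated numerically: if some combination ##a\,x^2+b\,x^4+c\,x^6## were the zero function, it would vanish at every sample point, but the evaluation matrix below is invertible, forcing ##a=b=c=0##. A sketch (the `det3` helper is my own):

```python
def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# evaluate x^2, x^4, x^6 at three sample points
xs = [1.0, 2.0, 3.0]
M = [[x ** 2, x ** 4, x ** 6] for x in xs]

print(det3(M))  # nonzero, so x^2, x^4, x^6 are linearly independent
```

The same argument with ##n## sample points works for any finite batch ##x^2, x^4, \dots, x^{2n}##, which is why the space has no finite basis.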

Can you tell me which point, according to my numbering in post #13, you have difficulties with? From what you quoted I can only reply:
In order for a function ##f## to be an element of ##S## (for which we want to prove it's a vector space) there are three properties to be fulfilled:

a) ##f## is a function from ##\mathbb{R}## to ##\mathbb{R}\,.##
b) ##f## is (at least) once differentiable.
c) ##f'(0)=0##

The first part of the proof has to show that we don't leave ##S## by our operations ##"+"## and ##"\cdot"##.
The second part is to show that the operations have the properties required (associativity, etc.).

The functions might not look like the arrows you normally associate with a vector. So the crucial point here is that we concentrate on the rules that make up a vector space rather than drawing arrows. This means addition and stretching, resp. compressing, must be available to our function-arrows. And this must happen within the set. It will be a vector space of infinite dimension, which makes it harder to visualize than the plane or three-dimensional space we usually think of as vector spaces. E.g. the functions ##x \mapsto x^{2n}\; (n \in \mathbb{N})## alone are already linearly independent elements of ##S##.
I totally agree with what you have to say in post #13. I believe I am no longer confused about what a vector space is. I was confused at first because what I saw in the book were very simple examples that you could picture in your mind and check, but I now know that, for example, the one we have been talking about has to allow scalar multiplication and addition of two functions while still maintaining the condition f'(0) = 0, which seems obvious enough to me. What is really getting at me now is how to prove each of the rules. The book gives us these rules, which make sense to me in general and for this problem, but how do I prove that each of these rules works in this problem, where it seems definite and true to me? I don't know, but:
So now we have the ingredients. Next are the properties:
4) Existence of ##0 \in S##
5) Existence of inverse ##-f \in S##
6) Associativity ##(f+g)+h = f+(g+h)##
7) Commutativity ##f+g = g+f##
8) Distributivity ##c\cdot (f+g) = c\cdot f + c\cdot g\, , \, c \in \mathbb{Q}##
9) ##c \cdot f = f \cdot c## (if you want to be very petty.)
this doesn't feel like enough for me to prove it, nor do I know how to prove it on paper.

fresh_42
Mentor
Well, post #13 contains (almost) everything that has to be shown (I forgot 8–10 in your book's numbering), and post #20 says what it means to be in ##S##, or ##V## if you will. However, you are right: the hardest part in this example can be to write down what is apparently quite obvious. That's why I said it can be seen.

Well, post #13 contains (almost) everything that has to be shown (I forgot 8–10 in your book's numbering), and post #20 says what it means to be in ##S##, or ##V## if you will. However, you are right: the hardest part in this example can be to write down what is apparently quite obvious. That's why I said it can be seen.
So for example #1, I have to prove that f+g exists in the vector space. I think it looks obvious to most people, but how do I put it in words? Do I just say: for all f, g that are elements of our set, f'(0) = 0 and g'(0) = 0, therefore (f+g)'(0) = f'(0) + g'(0) = 0 + 0 = 0? (Where do we get the tools to say that the derivative of f+g at 0 can be split into f'(0) + g'(0)? And are we allowed to write 0 + 0 = 0 because of the additive identity in the field?) Sorry, still unclear on this part.

fresh_42
Mentor
So for example #1, I have to prove that f+g exists in the vector space. I think it looks obvious to most people, but how do I put it in words? Do I just say: for all f, g that are elements of our set, f'(0) = 0 and g'(0) = 0, therefore (f+g)'(0) = f'(0) + g'(0) = 0 + 0 = 0? (Where do we get the tools to say that the derivative of f+g at 0 can be split into f'(0) + g'(0)? And are we allowed to write 0 + 0 = 0 because of the additive identity in the field?) Sorry, still unclear on this part.

All identities are defined by ##f_1 = f_2 \Longleftrightarrow f_1(x)=f_2(x) \; \forall \; x \in \mathbb{R}##.
You also need the differentiation rules
You might want to prove the following:

1) If ##f## and ##g## are functions from ##\mathbb{R}## to ##\mathbb{R}## then ##f+g## is a function from ##\mathbb{R}## to ##\mathbb{R}##.

2) If ##f## and ##g## are differentiable, then ##f+g## is differentiable.

3) If ##f'(0) = g'(0) = 0##, then ##(f+g)'(0) = 0##
because for any function ##f## to be in ##S=V## we need
a) ##f## is a function from ##\mathbb{R}## to ##\mathbb{R}\,##.
b) ##f## is (at least) once differentiable.
c) ##f'(0)=0##
These determine what it means to be in ##S=V## and what it means for an equality between functions to hold. The identity ##f'(0)=0## holds in ##\mathbb{R}##, as the function is real valued. The scalar ##c## in ##c\cdot f## can come from another field, the one which defines the vector space: ##c \in \mathbb{Q}## in my example.

You also need the differentiation rules

because for any function ##f## to be in ##S=V## we need

These determine what it means to be in ##S=V## and what it means for an equality between functions to hold. The identity ##f'(0)=0## holds in ##\mathbb{R}##, as the function is real valued. The scalar ##c## in ##c\cdot f## can come from another field, the one which defines the vector space: ##c \in \mathbb{Q}## in my example.
OK, I see what you mean. I guess it will just mean a lot of work. Thanks!!!