Is $U$ a Subspace of $F[0,1]$?

Dethrone
Is the following a subspace of $F[0,1]$?

$U=\left\{f \in F[0,1] \; | \; f(0)=f(1)\right\}$

First, it contains the zero vector: the function $f(x)=0$ for all $x$ satisfies $f(0)=f(1)$. Now I'm not sure how to prove that it is closed under addition. Here's what I have so far:

If $f_1, f_2 \in U$, then $f_1(0)+f_2(0)=f_1(1)+f_2(1)$

but I'm not sure how I can draw any conclusion from this. Since the functions aren't the same, I can't really combine them.
 
I assume you are adopting the usual operations on the space of functions, that is, $$(\alpha f)(x) = \alpha \cdot (f(x)),$$ and $$(f_1 + f_2)(x) = f_1(x) + f_2(x)$$ for $f_1, f_2, f \in F[0,1]$ and $\alpha \in \mathbb{R}$ (this also assumes the underlying field is $\mathbb{R}$). In this case, you have to show that every linear combination $$\alpha_1 f_1 + \alpha_2 f_2$$ of functions in that subset also has the desired property.

Take it slow. You know that $(\alpha_1 f_1)(0) = \alpha_1 \cdot f_1(0)$, same for the index $2$. Using the usual operations, we know that $$(\alpha_1 f_1 + \alpha_2 f_2)(0) = \alpha_1 \cdot f_1(0) + \alpha_2 f_2(0).$$ Since both functions $f_1, f_2$ are in that subset, they satisfy the conditions $f_1(0) =f_1(1)$ and $f_2(0) = f_2(1)$. Therefore, $$\alpha_1 \cdot f_1(0) + \alpha_2 \cdot f_2(0) = \alpha_1 \cdot f_1(1) + \alpha_2 \cdot f_2(1).$$ Can we combine this into the operations in the vector space? Yes, we can: $$\alpha_1 \cdot f_1(1) + \alpha_2 \cdot f_2 (1) = (\alpha_1 f_1 + \alpha_2 f_2)(1).$$ Thus $$(\alpha_1 f_1 + \alpha_2 f_2)(0) = (\alpha_1 f_1 + \alpha_2 f_2)(1).$$ What happens if the desired property is that $f(1) = 1$? Would that subset, namely, $$\{ f \in F[0,1] \; | \; f(1) = 1 \},$$ be a subspace of $F[0,1]$? :)
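For a quick numerical sanity check of the closure argument, here is a minimal Python sketch, assuming the usual pointwise operations on $F[0,1]$; the particular functions and scalars below are arbitrary illustrative choices, not part of the original discussion.

```python
# Sanity check: pick two functions with f(0) == f(1), form an arbitrary
# linear combination pointwise, and confirm it again has equal endpoint values.

def f1(x):
    return x * (1 - x)           # f1(0) = f1(1) = 0

def f2(x):
    return 3 + (x - 0.5) ** 2    # f2(0) = f2(1) = 3.25

a1, a2 = 2.0, -1.5               # arbitrary scalars

def combo(x):
    return a1 * f1(x) + a2 * f2(x)

print(combo(0.0), combo(1.0))    # -4.875 -4.875
print(combo(0.0) == combo(1.0))  # True
```

Of course this only tests one linear combination of one pair of functions; the algebraic argument above is what establishes the property for all of them.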
 
Hmm, why are we taking linear combinations? That concept is introduced in the next section, so I'm not sure if that tool is available to us yet. But can we also say this (based on your proof)? :D

Let $f_1, f_2 \in U$; then $(f_1+f_2)(0)=f_1(0)+f_2(0)=f_1(1)+f_2(1)$, so $(f_1+f_2)(0)=(f_1+f_2)(1)$. Define $g \equiv f_1+f_2$; then $g(0)=g(1)$ and so $g \in U$. Therefore $U$ is closed under addition.
Fantini said:
Would that subset, namely, $$\{ f \in F[0,1] \; | \; f(1) = 1 \},$$ be a subspace of $F[0,1]$? :)

I just did this problem! No, because the zero vector is not in that subset, since every $f$ in it must satisfy $f(1)=1$. But I don't think I completely understand this. Is it because the zero function $0(x)$ is not contained in this subset, since $0(1)=0\ne 1$?
 
Rido12 said:
Hmm, why are we taking linear combinations? That concept is introduced in the next section, so I'm not sure if that tool is available to us yet. But can we also say this (based on your proof)? :D

Let $f_1, f_2 \in U$; then $(f_1+f_2)(0)=f_1(0)+f_2(0)=f_1(1)+f_2(1)$, so $(f_1+f_2)(0)=(f_1+f_2)(1)$. Define $g \equiv f_1+f_2$; then $g(0)=g(1)$ and so $g \in U$. Therefore $U$ is closed under addition.
I thought you had that shortcut. :confused: You'll prove that a nonempty subset of a vector space is a subspace if and only if every linear combination of its elements, using the operations of the original vector space, is contained within that subset.

You can, but you don't need to define $g$. It doesn't clarify anything. Since you don't have the subspace theorem, you would have to show all 8 properties of a vector space. :( Tedious work.

Rido12 said:
I just did this problem! No, because the zero vector is not in that subset, since every $f$ in it must satisfy $f(1)=1$. But I don't think I completely understand this. Is it because the zero function $0(x)$ is not contained in this subset, since $0(1)=0\ne 1$?
Yes, you are correct. The zero function, which is the neutral element of vector addition, does not belong to this subset. Addition fails as well: if $f(1)=g(1)=1$, then $(f+g)(1)=2\ne 1$. Scalar multiplication breaks it too, since $(\alpha f)(1)=\alpha$, which equals $1$ only when $\alpha = 1$. :) Many ways to see this. Try them!
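To make the three failure modes concrete, here is a small, purely illustrative Python check for the subset $\{ f \; | \; f(1) = 1 \}$; the member functions below are arbitrary choices.

```python
# The subset {f | f(1) = 1} fails every subspace test.

def zero(x):
    return 0.0        # the zero function: zero(1) = 0, so it is not in the subset

def f(x):
    return 1.0        # constant 1, so f(1) = 1

def g(x):
    return x          # g(1) = 1

print(zero(1) == 1)   # False: no zero vector in the subset
print(f(1) + g(1))    # 2.0: (f + g)(1) = 2, so the sum leaves the subset
print(5 * f(1))       # 5.0: (5f)(1) = 5, so scalar multiples leave it too
```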
 
Hmm... I haven't learned that version of the theorem; it will be something to think about. :D The one taught to me was that a subset of a vector space is a subspace if and only if three properties are satisfied: 1) it contains the zero vector, 2) it is closed under vector addition, 3) it is closed under scalar multiplication.

But I have another confusion. They say the zero vector must be contained for 1). If the subspace consists of functions, how can you have vectors? Wait! Is it because "vectors" are defined to be elements of a vector space? Also, what do you mean by a "neutral element"? :D
 
Rido12 said:
Hmm... I haven't learned that version of the theorem; it will be something to think about. :D The one taught to me was that a subset of a vector space is a subspace if and only if three properties are satisfied: 1) it contains the zero vector, 2) it is closed under vector addition, 3) it is closed under scalar multiplication.
I have some serious communication issues. :confused: That's what I was talking about. Those last two properties can be combined into one: showing that for any $v_1, v_2$ in the subset and any scalars $\alpha_1, \alpha_2$, the combination $\alpha_1 v_1 + \alpha_2 v_2$ also belongs to the subset.

Rido12 said:
But I have another confusion. They say the zero vector must be contained for 1). If the subspace consists of functions, how can you have vectors? Wait! Is it because "vectors" are defined to be elements of a vector space? Also, what do you mean by a "neutral element"? :D
Yes, you are right! We call them vectors because they are elements of a vector space. :) Polynomials are vectors, too!

By neutral element I mean a vector $e$ such that, for all vectors $v$, we have $v+e = v$ and $v+(-v) = e$. :)
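For completeness, a tiny Python sketch of the neutral element in this function space: adding the zero function to any $f$ changes nothing pointwise. The function and sample points are arbitrary illustrative choices.

```python
# The zero function is the neutral element of pointwise addition:
# (f + zero)(x) == f(x) for every x.

def zero(x):
    return 0.0

def f(x):
    return x ** 2 + 1    # any function works here

for x in (0.0, 0.25, 1.0):               # arbitrary sample points in [0, 1]
    assert f(x) + zero(x) == f(x)
print("f + 0 agrees with f at the sampled points")
```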
 
I see! This makes a lot more sense now. Thanks! :D
 