Vector Space and Normed Vector Space

zli034
Hi all,

It has been very useful to post my questions here; it helps me push through my reading of the analysis book. This forum is a perfect place for people who are interested in knowledge and the beauty of knowledge. Here goes another question from me.

The set of all continuous functions on a closed interval [a, b] is a vector space. How does this differ from a normed vector space?

What are the vectors?

In 3D space, a vector is just a point, given by its 3 coordinates.

How do I understand the vector space of functions, and normed vector spaces?
 
Hi zli034! :smile:

The vector space of continuous functions on [a,b] (denoted by C[a,b]) doesn't really have much to do with the 3D vectors we're all used to. The 3D vectors we know have a length, an orientation, etc. At first sight, none of this makes much sense for C[a,b].

A vector space here is just a set with two operations: an addition, which is given here by the pointwise addition of functions:

(f+g)(x)=f(x)+g(x)

and a scalar multiplication, which is here

(\lambda f)(x)=\lambda f(x)

Furthermore, this addition and multiplication have some cool properties like associativity, a zero element, inverses, etc. Together, this means that C[a,b] is a vector space.

The vectors are here just the elements of C[a,b], thus the vectors are functions!
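
To make this concrete, here is a rough sketch in Python (the interval [0,1] and the sample functions are just illustrative choices) of how functions can be treated as vectors, with the two operations above defined pointwise:

import math

# Two "vectors" in C[0, 1]: ordinary continuous functions.
f = math.sin
g = lambda x: x**2

# Vector addition: (f + g)(x) = f(x) + g(x), defined pointwise.
def add(u, v):
    return lambda x: u(x) + v(x)

# Scalar multiplication: (lam * f)(x) = lam * f(x), also pointwise.
def scale(lam, u):
    return lambda x: lam * u(x)

h = add(f, scale(2.0, g))   # the "vector" sin(x) + 2*x^2
print(h(0.5))               # evaluate the new function at a point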

In 3D space, every vector was determined by 3 coordinates; this is not possible here. Of course, we could determine every function f by an infinite number of coordinates, but this is not so handy.

A vector space equipped with a norm is called a normed vector space. The usual norm on C[a,b] is

\|f\|_\infty=\sup_{z\in [a,b]}{|f(z)|}

This is a norm on C[a,b], and it makes C[a,b] into a normed vector space. Other norms are possible, though, for example

\|f\|_p=\sqrt[p]{\int_a^b{|f(z)|^pdz}}

is another popular norm which makes C[a,b] into a normed vector space.

As you see, in 3D space we essentially always work with the one standard (Euclidean) norm, but on C[a,b] this is not true anymore. There are many genuinely different norms on C[a,b], and all have their uses...
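
To make the two norms a bit more tangible, here is a rough numerical sketch in Python; the grid approximation of the supremum and of the integral, the choice p = 2, and the sample function sin(2*pi*x) on [0,1] are all just illustrative:

import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 10001)    # fine grid on [a, b]
f = np.sin(2 * np.pi * x)       # a sample element of C[a, b]
dx = x[1] - x[0]

# Sup norm: ||f||_inf = sup of |f(z)| over [a, b], approximated on the grid.
sup_norm = np.max(np.abs(f))

# p-norm: ||f||_p = (integral of |f(z)|^p over [a, b])^(1/p), via a Riemann sum.
p = 2
p_norm = (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

print(sup_norm)   # about 1
print(p_norm)     # about sqrt(1/2), roughly 0.707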

I hope this answers your questions a bit?
 
Great explanation :approve:

So the norm is a means to compare the vectors among themselves (like the unit vectors in 3D space)?
 
meldraft said:
Great explanation :approve:

So the norm is a means to compare the vectors among themselves (like the unit vectors in 3D space)?

That's one way you can see it, yes. The norm is actually the "length" of a vector, or equivalently, the distance of the vector to 0. Of course, what does it mean for a function to have a length? :smile:
 
Of course, what does it mean for a function to have a length?

The way I understand it, we basically measure the deviation of each function in the vector space from a known function, and assign a nonnegative number to it (zero if it's the same function). Isn't that the reason that we have different possible norms?
 
meldraft said:
The way I understand it, we basically measure the deviation of each function in the vector space from a known function, and assign a nonnegative number to it (zero if it's the same function). Isn't that the reason that we have different possible norms?

Hmm, what you describe is actually a metric, but this is a good way of seeing things. The norm measures the deviation of the function from the zero function. And this can be done in many possible ways; it's just that there are several interesting ways of measuring a deviation, and thus there are many possible metrics/norms.

Of course, the norm itself isn't really that important, i.e. exactly which number gets assigned to a function isn't important. It's the open sets that are important, and notions such as Cauchy sequences...
 
So basically, a Banach space is a normed vector space that is complete (every Cauchy sequence in the space converges to an element of the space).
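
As a rough illustration of such a Cauchy sequence (the particular choice, the partial sums of the exponential series on [0,1] with the sup norm, is just an example), here is a quick Python sketch; the successive sup-norm distances shrink factorially fast, so the sequence is Cauchy, and the limit e^x is again in C[0,1]:

import numpy as np
from math import factorial

x = np.linspace(0.0, 1.0, 1001)

def partial_sum(n):
    # n-th partial sum of the series for e^x, viewed as an element of C[0, 1]
    return sum(x ** k / factorial(k) for k in range(n + 1))

# Sup-norm distance between successive partial sums: shrinks like 1/(n+1)!
for n in range(1, 8):
    print(n, np.max(np.abs(partial_sum(n + 1) - partial_sum(n))))

# The limit, exp(x), is again a continuous function on [0, 1]:
print(np.max(np.abs(partial_sum(10) - np.exp(x))))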

I don't yet have a grasp, though, of why exactly a Banach space is so important (from an actual physical standpoint), although I know that it enables the extension of several theories to infinite dimensions (and of course Hilbert spaces and quantum mechanics).
 
That's actually an excellent question, meldraft! :smile: Unfortunately, I'm merely a mathematician who knows nothing about physics, but I do think I can give you a satisfactory answer.

Firstly, completeness of a normed space isn't really essential. If we wanted to, we could formulate all of our mathematical and physical theories in incomplete spaces. OK, the formulation would be a lot more complicated, but we could still do it. Why is that true? Well, every normed space (every metric space, even) has a unique completion. That is, it is possible to adjoin a few elements to an incomplete space in order to make it complete. Thus, it is possible to work in an incomplete space and occasionally refer to its completion whenever we need it.

That said, the only reason we work with \mathbb{R} is that the space is complete. Let me elaborate. Say we want to make a measurement of some kind. For example, say we want to measure people's heights. Then possible outcomes will be 1.90m, 1.85m, 1.77m, etc. As you see, all the outcomes will be rational numbers. So for measurement purposes, we could work in \mathbb{Q} most of the time.

Why do we consider those measurements in \mathbb{R}, then? Well, for the simple reason that we want to do calculus on the measurements. For example, we want to find a curve that best fits our measurements, and we want to find the integral of that curve. But the integral is defined as a limit, and when working in \mathbb{Q} we have no reason why this limit should even exist! So, in order to make things work, we need to be able to talk about limits. Not that we're really interested in the exact value of the limit (we will approximate that value anyway), but to make things easy, we want to know that the limit exists. And this is where completeness comes in.

Approximating solutions of equations is another thing we need completeness for. Say we have the equation x=\cos(x)+\sin(x). One method to approximate the solution is to form the function

F:\mathbb{R}\rightarrow\mathbb{R}:x\rightarrow \cos(x)+\sin(x)

Applying this function repeatedly to an arbitrary starting value produces a sequence that converges to the solution. So F(F(F(F(0)))) will already be a reasonable approximation of the solution. Of course, the approximation is enough for all practical purposes, but to make the theory easier, we need to know that the limit of this sequence exists.
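
For instance, here is a quick sketch of that iteration in Python (the starting value 0 and the 50 iterations are arbitrary choices; a handful of iterations already gives a decent approximation):

import math

# Fixed-point iteration for F(x) = cos(x) + sin(x);
# its fixed point is exactly the solution of x = cos(x) + sin(x).
x = 0.0
for _ in range(50):
    x = math.cos(x) + math.sin(x)

print(x)                              # roughly 1.2587
print(math.cos(x) + math.sin(x) - x)  # residual, essentially 0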

Another example involves differential equations. In practice, we will always solve differential equations on a computer and find approximations to their solutions. But to make our theory simple, we need to know that the differential equations have solutions at all. And this is exactly where completeness comes in! Given a differential equation, we can always transform it into an integral equation, for example:

f(x)=g(x)+\int_0^x{f(t)dt}

How do we show that this has a solution? Well, we form the function

F:\mathcal{C}[a,b]\rightarrow \mathcal{C}[a,b]:f\rightarrow g+\int f

Iterating this function starting from the zero function (that is, computing F(F(F(F(F(F(0))))))) gives a sequence that converges to the desired solution. But to know that this limit even exists, we need to know that the space C[a,b] is complete!
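
Here is a rough discretized version of that iteration in Python; the choice g(x) = 1 on [0,1] is purely illustrative (the exact solution of the integral equation is then f(x) = e^x), and the grid and trapezoid rule are just one convenient way to approximate the integral:

import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
g = np.ones_like(x)   # illustrative choice g(x) = 1, so the exact solution is e^x

def cumulative_integral(values):
    # trapezoid-rule approximation of t -> integral from 0 to t of the sampled function
    out = np.zeros_like(values)
    out[1:] = np.cumsum(0.5 * (values[1:] + values[:-1]) * dx)
    return out

# Picard iteration: f_{n+1} = g + integral_0^x f_n, starting from the zero function
f = np.zeros_like(x)
for _ in range(30):
    f = g + cumulative_integral(f)

print(np.max(np.abs(f - np.exp(x))))  # sup-norm error against the exact solution e^x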

To summarize, I feel that completeness is actually a technical requirement to make the theory easier. We can do without it, but the theory will become much uglier!
 
Micromass: Are you using the contraction mapping principle on R to guarantee that the iterations converge to a solution?
 
Bacle said:
Micromass: Are you using the contraction mapping principle on R to guarantee that the iterations converge to a solution?

I should use that, yes (although there are other iteration theorems out there). However, I did not mention it explicitly. The contraction mapping theorem does use completeness, anyway.
 
Wow, that was quick; it felt like IMing!

Sorry, Micro, I did not get your answer; you mean one should use it, but I am not sure if it actually is what you used (together, of course, with the completeness of the reals). Can you give some suggestion for showing that the maps are contraction maps? Not necessarily a full argument, but at least an idea.

Thanks.
 
Well, it won't work for every such operator, but consider, on the interval [0,1] say,

T(f)=\int_0^x{\frac{f(t)}{2}dt}

Then, for every x in [0,1],

|T(f)(x)|=\left|\int_0^x{\frac{f(t)}{2}dt}\right|\leq \frac{x}{2}\|f\|_\infty\leq \frac{1}{2}\|f\|_\infty

so \|T(f)\|_\infty\leq \frac{1}{2}\|f\|_\infty. Since T is linear, the same bound applied to f-g gives \|T(f)-T(g)\|_\infty\leq \frac{1}{2}\|f-g\|_\infty, which means that T is a contraction. There are results similar to this for more general operators...
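
A quick numerical check of that bound in Python, on [0,1] with two arbitrary sample functions (this only illustrates the inequality on a grid; it is not a proof):

import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def T(values):
    # T(f)(x) = integral from 0 to x of f(t)/2 dt, via a cumulative trapezoid rule
    out = np.zeros_like(values)
    out[1:] = np.cumsum(0.5 * (values[1:] + values[:-1]) * dx) / 2.0
    return out

f = np.sin(5 * x)
g = np.cos(3 * x)

lhs = np.max(np.abs(T(f) - T(g)))   # ||T(f) - T(g)||_inf
rhs = 0.5 * np.max(np.abs(f - g))   # (1/2) * ||f - g||_inf

print(lhs, rhs, lhs <= rhs)         # the bound holds, so T is a contraction (constant 1/2)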
 