What is the Definition of Limit in Real-Valued Spaces?

  • Thread starter: sammycaps
  • Tags: Definition, Limit
sammycaps
So, it seems that in a real-valued setting, the limit and the derivative of a real-valued function are defined only if the domain is an open subset of Euclidean space. I'm a little confused as to why this is the case, and why we can't just define a limit and derivative on any subset of Euclidean space with a limit point (well, I know that the limit can be defined on anything with a limit point, but I'm more unsure about the derivative). The way it was explained to me was that you need to be locally "similar" to a vector space so that we can add and subtract points to obtain the "linear approximation". Is this the right way to think about it?
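For concreteness, the usual single-variable statement of the derivative is

$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h},$$

and for this two-sided limit to make sense, every sufficiently small $h$, positive or negative, must keep $a+h$ inside the domain; an open domain guarantees exactly that.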

I know manifolds enter this discussion at some point.

If anyone has a reference that would be helpful as well.
 
Limits and derivatives can be defined on more general sets. For example, I think we can easily see how to define differentiability on a closed interval.

The problem is not so much with the definition as with the theorems. Many useful theorems about derivatives require you to work on an open set; on sets that are not open, the proofs can fail.
The first example that comes to mind is the following: if a function $f$ attains a local minimum at $a$, then $f'(a) = 0$. This is perfectly valid when $f$ is defined on some open set $(a-\varepsilon, a+\varepsilon)$, but it fails for functions like $f : [a, a+1] \rightarrow \mathbb{R} : x \mapsto x$.
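Spelled out: at the left endpoint the difference quotient only ranges over $h > 0$, so

$$f'(a) = \lim_{h \to 0^{+}} \frac{f(a+h) - f(a)}{h} = \lim_{h \to 0^{+}} \frac{h}{h} = 1 \neq 0,$$

even though $f$ attains its minimum on $[a, a+1]$ at $a$.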

So I guess the definition of the derivative on such sets is not really a problem; it's just that the resulting concept turns out to be far less useful.

Limits, however, are usually defined on general sets (with a limit point) and studied on such sets.
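For reference, the standard formulation on an arbitrary set (this is essentially how Rudin's Principles of Mathematical Analysis states it): if $D \subseteq \mathbb{R}$, $c$ is a limit point of $D$, and $f : D \rightarrow \mathbb{R}$, then

$$\lim_{x \to c} f(x) = L \iff \forall \varepsilon > 0\ \exists \delta > 0 : \left(x \in D,\ 0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon\right),$$

which requires only that $D$ contain points arbitrarily close to $c$, not that $c$ be an interior point.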
 

Hm, ok that makes sense.

I still feel, though (at least from discussions with people better at math than I am), that the lack of local linearity is an issue on more general subsets of Euclidean space.
 
If your set is open, then every point is an interior point and can be enclosed in an ε-disc. This ensures that it can be approached from all directions, as the definition requires.

If you are working with a closed set as your domain, your boundary points are still approached in every direction available, since, as far as your function is concerned, there are no inputs outside your domain. Thus, it *can* make sense to call a function differentiable on a closed set, and the linear approximation would also extend to these points.
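One way to make that precise (a sketch of the restricted "linear approximation" formulation, not the only convention in use) is to call $f : D \rightarrow \mathbb{R}$ differentiable at $a \in D$ if there is a number $f'(a)$ with

$$f(a+h) = f(a) + f'(a)\,h + o(h) \quad \text{as } h \to 0 \text{ with } a + h \in D,$$

so the approximation is only tested along increments that stay in $D$. On a closed interval this reduces to the familiar one-sided derivative at the endpoints.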
 