Is the Subset A Closed in C([0,1]) with the Given Metric?

SUMMARY

The subset A = {f ∈ C([0,1]); ∫_0^1 f(x) dx = 0} is closed in the metric space C([0,1]) with the metric ρ(f,g) = (∫_0^1 |f(x) - g(x)|² dx)^{1/2}. The proof demonstrates that the complement of A is open, using the continuity of the integral map f ↦ ∫_0^1 f(x) dx together with the Cauchy-Schwarz inequality, an instance of Jensen's inequality, which supplies the key bound |∫_0^1 (f(x) − g(x)) dx| ≤ ρ(f,g).

PREREQUISITES
  • Understanding of metric spaces, specifically C([0,1])
  • Familiarity with the integral metric ρ(f,g) = (∫_0^1 |f(x) - g(x)|² dx)^{1/2}
  • Knowledge of continuity in the context of functional analysis
  • Proficiency in applying inequalities such as Cauchy-Schwarz and Jensen's inequality
NEXT STEPS
  • Study the properties of C([0,1]) as a complete metric space
  • Learn about the continuity of linear functionals in functional analysis
  • Explore the applications of Jensen's inequality in various mathematical contexts
  • Investigate the relationship between different metrics on function spaces, particularly the supremum metric
USEFUL FOR

Mathematicians, particularly those specializing in functional analysis, students studying metric spaces, and anyone interested in the properties of continuous functions on closed intervals.

r4nd0m
This seems to be a very easy exercise, but I am completely stuck:
Prove that in C([0,1]) with the metric
\rho(f,g) = (\int_0^1|f(x)-g(x)|^2 dx)^{1/2}

a subset
A = \{f \in C([0,1]); \int_0^1 f(x) dx = 0\} is closed.

I tried to show that the complement of A is open - it could easily be done if the metric were \rho(f,g) = \sup_{x \in [0,1]}|f(x)-g(x)| - but with the integral metric it's not that easy.

Am I missing something?
Thanks for any help.
 
I would attempt to show that the map f \mapsto \int_0^1 f(x)\, dx is continuous.

And you need / not \ in your closing tex tags.
 
Consider if:
\left| \int_0^1 g(x)dx \right| = \epsilon
then you might be able to show that
N_{\epsilon}(g) \cap A = \emptyset
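To spell this hint out (granting the Cauchy-Schwarz bound \left| \int_0^1 h(x)\, dx \right| \le \left( \int_0^1 h(x)^2\, dx \right)^{1/2}): for any f \in A,
\left| \int_0^1 g(x)\, dx \right| = \left| \int_0^1 \left( g(x) - f(x) \right) dx \right| \le \rho(f,g)
so \rho(f,g) \ge \epsilon for every f \in A, and the \epsilon-ball around g indeed misses A.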
 
Well, I tried both, but the problem is that I am still missing some inequality that I could use.

I mean - if I want to show continuity, for example - I have to show that:
\forall \varepsilon > 0 \quad \exists \delta > 0 \quad \forall g \in C([0,1]) : \rho(f,g) < \delta \implies \left| \int^1_0 (f(x)-g(x))\, dx \right| < \varepsilon.

But what then?
\left| \int^1_0 (f(x) - g(x))\, dx \right| \leq \int^1_0 |f(x) - g(x)|\, dx
but I am missing some other inequality that would let me compare this with \rho(f,g).
 
Well, there is another inequality lying around, so use it: that last integral is also \leq \rho(f,g).
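Spelled out, the hinted inequality is Cauchy-Schwarz applied to |f - g| and the constant function 1:
\int_0^1 |f(x)-g(x)| \cdot 1\, dx \le \left( \int_0^1 |f(x)-g(x)|^2\, dx \right)^{1/2} \left( \int_0^1 1^2\, dx \right)^{1/2} = \rho(f,g)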
 
Of course - a slightly modified Cauchy-Schwarz inequality is the key.
I hate algebraic tricks.
Thanks for the help
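As a quick numerical sanity check (illustrative only, not part of the proof; the sample functions below are arbitrary choices), the inequality |\int_0^1 (f(x)-g(x))\, dx| \le \rho(f,g) can be tested directly:

```python
import math

def riemann(h, n=100_000):
    """Midpoint Riemann sum of h over [0, 1]."""
    return sum(h((i + 0.5) / n) for i in range(n)) / n

def rho(f, g):
    """The integral metric rho(f, g) = (int_0^1 |f - g|^2 dx)^(1/2)."""
    return math.sqrt(riemann(lambda x: (f(x) - g(x)) ** 2))

f = lambda x: math.sin(2 * math.pi * x)  # lies in A: its integral over [0, 1] is 0
g = lambda x: x ** 2                     # an arbitrary continuous function

lhs = abs(riemann(lambda x: f(x) - g(x)))  # |int_0^1 (f - g) dx|
rhs = rho(f, g)
print(lhs <= rhs)  # True: the Cauchy-Schwarz bound holds
```

Here lhs is about |0 - 1/3| = 1/3 while rhs is about 1.0, consistent with the bound.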
 
It's definitely not an algebraic trick. It is an application of Jensen's inequality from analysis. You might know it from probability theory, since there it states that the variance of a random variable is nonnegative, i.e.

E(X^2) \ge E(X)^2

where E is the expectation operator and X an r.v.
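Specialized to this problem, take U uniform on [0,1] (so Lebesgue measure on [0,1] plays the role of the probability measure) and X = f(U) - g(U); the inequality then reads
\left( \int_0^1 (f(x)-g(x))\, dx \right)^2 \le \int_0^1 (f(x)-g(x))^2\, dx = \rho(f,g)^2
which is exactly the bound needed for continuity of the integral map.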
 
How did you prove it for the supremum case? The same argument carries over here, since the bound |\int_0^1 (f(x)-g(x))\, dx| \le \rho(f,g) mirrors the supremum bound, and the given metric is always at most the supremum metric.
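To substantiate the comparison between the two metrics: for any f, g \in C([0,1]),
\rho(f,g)^2 = \int_0^1 |f(x)-g(x)|^2\, dx \le \sup_{x \in [0,1]} |f(x)-g(x)|^2
so \rho(f,g) \le \sup_{x \in [0,1]} |f(x)-g(x)|.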
 
