Continuity of Integrals in L^1 Spaces

  • Thread starter: Oxymoron
  • Tags: Continuity
Question:

Prove that if f \in L^1(\mathbb{R},\mathcal{B},m) and a \in \mathbb{R} is fixed, then F(x) := \int_{[a,x]}f\mbox{d}m is continuous, where \mathcal{B} is the Borel \sigma-algebra and m is Lebesgue measure.
 
I was hoping to use the following sequential characterisation of continuity:

F is continuous at x if for every sequence x_n such that

x_n \rightarrow x

we have

F(x_n) \rightarrow F(x)

Does this sound like the right approach?
 
Maybe. Can you show that \int_a^b + \int_b^c = \int_a^c, and then that \int_{x_n}^{x} f\mbox{d}m \rightarrow 0 as x_n \rightarrow x?
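
In symbols, that suggestion amounts to a sketch like this (assuming a \leq x \leq x_n; the other configurations are handled the same way):

F(x_n) - F(x) = \int_{[a,x_n]}f\mbox{d}m - \int_{[a,x]}f\mbox{d}m = \int_{(x,x_n]}f\mbox{d}m

|F(x_n) - F(x)| \leq \int_{\mathbb{R}}|f|\,\chi_{(x,x_n]}\mbox{d}m \rightarrow 0

since |f|\chi_{(x,x_n]} \leq |f| \in L^1 and \chi_{(x,x_n]} \rightarrow 0 pointwise, so the dominated convergence theorem applies to the error term.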
 
I'm not sure that would help. But what do I know!? :rolleyes:

I was thinking that to show that F(x) is continuous I would do something like this:

1) Fix a \in \mathbb{R} and let x_n be a sequence that converges to x as n approaches infinity. You know, all the regular proof set-up.

2) Use the Dominated Convergence Theorem to show that the sequence F(x_n) converges, and finally get something like

3) \int_{[a,x]}f\mbox{d}m = \lim_{n\rightarrow\infty}\int_{[a,x]}f_n\mbox{d}m

Hence showing that

F(x_n) \rightarrow F(x) as n \rightarrow \infty

So basically I think using the D.C.T. is essential here. What does everyone think of this method?
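
For reference, the version of the D.C.T. I have in mind (a standard statement, sketched here): if the g_n are measurable, g_n \rightarrow g pointwise almost everywhere, and |g_n| \leq h almost everywhere for some h \in L^1, then

\lim_{n\rightarrow\infty}\int g_n\mbox{d}m = \int g\mbox{d}m.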
 
I'm guessing the f_n are functions that are equal to f everywhere except on (x_n,x), where they are 0. So you need to show that the integral of f over [a,x_n] is equal to the integral of f_n over [a,x], and that the integrals of the f_n converge to the integral of their limit, f, which you can do using the dominated convergence theorem, bounding |f_n| above by |f|. Sounds good. My suggestion was just to show that the error (the integral of f over [x_n,x]) goes to zero as x_n goes to x.
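
In symbols, a sketch of that reading (taking x_n \leq x; the other direction is analogous): with f_n := f\,\chi_{\mathbb{R}\setminus(x_n,x)} we get

\int_{[a,x]}f_n\mbox{d}m = \int_{[a,x_n]}f\mbox{d}m = F(x_n)

(the endpoint x contributes nothing since m(\{x\}) = 0), and |f_n| \leq |f| \in L^1 with f_n \rightarrow f pointwise (every point eventually lies outside (x_n,x)), so the dominated convergence theorem gives

F(x_n) = \int_{[a,x]}f_n\mbox{d}m \rightarrow \int_{[a,x]}f\mbox{d}m = F(x).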
 