If A is dense in [0,1] and f(x) = 0, x in A, prove ∫fdx = 0.

Thread starter: Eclair_de_XII
Tags: Real analysis

Homework Statement


"A set ##A\subset [0,1]## is dense in ##[0,1]## iff every open interval that intersects ##[0,1]## contains ##x\in A##. Suppose ##f:[0,1]\rightarrow ℝ## is integrable and ##f(x) = 0,x\in A## with ##A## dense in ##[0,1]##. Show that ##\int_{0}^{1}f(x)dx=0##."

Homework Equations


Let ##P=\{x_0,x_1,\dots,x_n\}## be a partition of ##[0,1]##, with ##0=x_0<x_1<\dots<x_n=1##.

##M_i=\sup\{f(x):x\in[x_{i-1},x_i]\}##
##m_i=\inf\{f(x):x\in[x_{i-1},x_i]\}##

##U(P,f)=\sum_{i=1}^nM_i(x_i-x_{i-1})##
##L(P,f)=\sum_{i=1}^nm_i(x_i-x_{i-1})##

If ##f## is integrable, ##\int_{0}^{1} f\,dx=\sup\{L(P,f):P \text{ a partition of } [0,1]\}=\inf\{U(P,f):P \text{ a partition of } [0,1]\}##.
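
For reference, two standard Darboux facts that the argument below leans on (textbook material, stated here for completeness, not part of the original post): for any two partitions ##P## and ##Q## of ##[0,1]##,

##L(P,f)\leq U(Q,f),##

so ##\sup\{L(P,f)\}\leq \inf\{U(P,f)\}## always holds, and ##f## is Riemann-integrable exactly when the two agree. Equivalently, ##f## is Riemann-integrable iff for every ##\epsilon>0## there is a partition ##P## with ##U(P,f)-L(P,f)<\epsilon##.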

The Attempt at a Solution


I haven't actually attempted a solution yet; everything that follows is scratchwork...

Basically, I think I have to use the fact that ##A## being dense in ##[0,1]## gives the following implication:

##\int_{x\in A}f(x)\,dx=0 \;\Rightarrow\; \int_{0}^{1} f(x)\,dx=0##,

and since ##f(x)=0## for all ##x\in A##, we have ##\int_{x\in A} f(x)\,dx=0##.

So my plan is to construct a sequence of open intervals that intersect ##[0,1]##, as follows:

##Q_i=(a_i,b_i)## where ##a_i=b_{i-1}##.

Then I define the initial open interval as ##Q_0=(a_0,b_0)##, with ##a_0\leq 0## and ##b_0 > 0##, and the final open interval as ##Q_n=(a_n,b_n)##, where ##b_n\geq 1##.

Then I construct a partition from those open intervals with ##(x_{i-1},x_i)=(a_i,b_i)##, with ##x_0=0## and ##x_n=1##. From it I would form a lower and an upper sum, which, by the conditions of the problem, must be equal to each other and equal to zero (I still need to figure this part out).
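
One minimal way to make this construction precise (a sketch under the extra assumption, not stated above, that the right endpoints increase: ##a_0\leq 0<b_0<b_1<\dots<b_{n-1}<1\leq b_n##): take as partition points

##P=\{0,\,b_0,\,b_1,\dots,b_{n-1},\,1\}.##

Then ##(0,b_0)\subseteq Q_0##, ##(b_{i-1},b_i)=Q_i## for ##1\leq i\leq n-1##, and ##(b_{n-1},1)\subseteq Q_n##, so every open subinterval of the partition lies inside one of the chosen open intervals.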

This is my current idea of how to handle the problem. Can anyone tell me if anything I have written is off? Thanks.
 
The claim is not true with the usual definition of integrable, which is Lebesgue-integrable. The indicator function that is 1 on the irrational numbers and zero elsewhere is Lebesgue-integrable and has integral 1 on ##[0,1]##, even though the set on which it is zero (the rational numbers in ##[0,1]##) is dense.

The claim may be true if we restrict ourselves to Riemann integration, as the above indicator function is not Riemann-integrable.
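
To spell out why this example separates the two notions (standard facts, added here for clarity; the symbol ##g## is just shorthand for the function described above): write ##g## for the indicator of the irrationals in ##[0,1]##. The rationals have Lebesgue measure zero, so

##\int_{[0,1]} g\,d\lambda = \lambda([0,1]\setminus\mathbb{Q}) = 1.##

On the other hand, both the rationals and the irrationals are dense, so on every subinterval ##[x_{i-1},x_i]## we have ##\inf g = 0## and ##\sup g = 1##; hence ##L(P,g)=0## and ##U(P,g)=1## for every partition ##P##, and ##g## is not Riemann-integrable.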

From your plan outline, it looks like you are working with Riemann integrals. Some element of the proof will have to exclude cases like the irrational indicator function on the grounds that they are not Riemann-integrable. Also, the proof will need to use the density of ##A##, a fact that the above plan does not reference.

I think the proof can be easier than your plan.
First prove that the 'sup' bit is zero, using the density premise. This is the only bit where you need to refer to partitions.
Then complete the proof using the premise that ##f## is integrable. What does that premise tell us about the relation between the 'sup' bit and the 'inf' bit?
 
andrewkirk said:
The claim may be true if we restrict ourselves to Riemann integration, as the above indicator function is not Riemann-integrable.

Sorry; I should have mentioned that this class is working with Riemann integration, and that ##f## is Riemann-integrable.

andrewkirk said:
First prove that the 'sup' bit is zero, using the density premise. This is the only bit where you need to refer to partitions.

My guess:

Let ##P=\{x_0,x_1,\dots,x_n\}## be any partition of ##[0,1]##.

Each open interval ##(x_{i-1},x_i)## intersects ##[0,1]##, so, because ##A## is dense in ##[0,1]##, it contains some point ##t_i\in A##, and ##f(t_i)=0##.

Then, for every ##i##, ##m_i=\inf\{f(x):x\in[x_{i-1},x_i]\}\leq f(t_i)=0## and ##M_i=\sup\{f(x):x\in[x_{i-1},x_i]\}\geq f(t_i)=0## (no sign assumption on ##f## is needed).

So ##L(P,f)\leq 0\leq U(P,f)## for every partition ##P##, hence ##\sup\{L(P,f)\}\leq 0\leq \inf\{U(P,f)\}##.

Since ##f## is Riemann-integrable, ##\sup\{L(P,f)\}=\inf\{U(P,f)\}=\int_{0}^{1} f(x)\,dx##, and this common value must be ##0##.
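
The whole argument can be recorded in one line (just a restatement of the steps above; density gives the first inequality, integrability closes the gap):

##L(P,f)\leq 0\leq U(P,f)\ \text{ for every } P \;\Rightarrow\; \sup\{L(P,f)\}\leq 0\leq \inf\{U(P,f)\} \;\Rightarrow\; \int_{0}^{1} f(x)\,dx=0.##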
 