Is the integral of zero always zero?

In summary, the "integral of zero problem" arises from a misconception about the indefinite integral. The indefinite integral of zero is not a single constant but a set of functions: the C in F(x) + C stands for the whole set of functions with zero derivative, and on a disconnected domain a function can have zero derivative without being constant. Under the rules for combining these sets, 0·(x + C) simplifies to 0 + C rather than to 0, so the integral of zero cannot be collapsed to the single function 0 and must be written with the constant term.
  • #1
Theodore260
The "Integral of Zero Problem"

I know that this seems like such a trivial problem, but take a look at this:

[itex] \frac{df}{dx} = 0, [/itex]

[itex] f(x) = \int{0}dx, [/itex]

[itex] f(x) = 0\int{}dx, [/itex]

[itex] f(x) = 0(x + C), [/itex]

[itex] f(x) = 0x + 0C, [/itex]

[itex] f(x) = 0, [/itex]

where any general constant multiplied by zero simplifies to zero.

So, is the integral of zero a general constant, or another zero?

What is flawed with the above equations? I can't seem to find any flaws in these equations.

Thanks.
 
  • #2
I think the issue is that you're doing a division by zero when you're factoring zero out of the indefinite integral.

Recall that: [tex]\int { dx } =\int { 1dx }[/tex]

Here is what's wrong:

[tex]f(x)=\int 0 \, dx\\ f(x)=0\int 1 \, dx[/tex]
and we know that [itex]\frac{0}{0} \neq 1[/itex].
 
  • #3
The constant of integration (C) is added to the indefinite integral. ∫0dx = 0 + C.
 
  • #4
It is a little more subtle than this.
You have
[tex]
\frac{df}{dx} = 0
[/tex]

When you integrate both sides you get
[tex]
\int \frac{df}{dx} \, dx = \int 0 \, dx + c
[/tex]

which gives
[tex]
f = \int 0 \, dx + c
[/tex]

When you first integrate each side, the constant appears; only then do you worry about simplifying the integral of zero.
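On an interval, where the only functions with zero derivative are the constants, this chain of reasoning lands where you would expect (a small worked instance, under that assumption about the domain):
[tex]\frac{df}{dx} = 0 \quad\Rightarrow\quad f(x) = c \quad\text{for a single constant}~c.[/tex]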
 
  • #5
I don't think the previous replies really answered the question. The reason is that the answer is fairly complicated and not usually dealt with in calculus books. So my answer will be fairly high-level and I might use things that will not be immediately understandable to the typical calculus student. I do encourage you to ask questions about this in order to understand it.

First, a short answer. The short answer is that [itex]\int f(x)dx[/itex] is not what you think it is, in particular: it is not a function, but it is a collection of functions. Also, the [itex]C[/itex] in [itex]F(x)+C[/itex] is not a constant (unlike what many calculus books claim). And lastly, it is not true that [itex]0\cdot (F(x) + C) = 0[/itex].

In the following, a function [itex]f:D\rightarrow \mathbb{R}[/itex] will always have [itex]D[/itex] open and will always be differentiable.

Let us first define an equivalence relation on the set of all differentiable functions. So, let [itex]f:D\rightarrow \mathbb{R}[/itex] and [itex]g:D^\prime\rightarrow \mathbb{R}[/itex] be differentiable functions. We define [itex]f[/itex] and [itex]g[/itex] to be equivalent and we denote [itex]f\sim g[/itex] if [itex]D=D^\prime[/itex] and if [itex]f^\prime(x)=g^\prime(x)[/itex] for all [itex]x\in D[/itex].

Now we can define equivalence classes. Given a function [itex]f:D\rightarrow \mathbb{R}[/itex], we define the set
[tex][f] = \{g:D^\prime \rightarrow \mathbb{R}~\vert~f\sim g\}[/tex]
So, ##[f]## is the set of all functions (with the same domain as ##f##) whose derivative equals ##f^\prime##.

In particular, we can take the function ##0:D\rightarrow \mathbb{R}:x\rightarrow 0##, and we can look at ##[0]##, the set of all functions whose derivative is ##0##.

Misconception: The set of all functions whose derivative is ##0## consists exactly of the constant functions
This is not true: the validity of this statement depends crucially on the domain ##D##. If ##D=\mathbb{R}## or ##D=(a,b)##, then it is indeed true that a function ##f:D\rightarrow \mathbb{R}## has ##f^\prime =0## if and only if ##f(x)=c## for some constant ##c## and all ##x##. But for other sets ##D##, this fails.
For example, take ##D=\mathbb{R}\setminus \{0\}##, then the function
##f(x)=\left\{\begin{array}{cc} 2 & \text{if}~x<0\\ 3 & \text{if}~x>0 \end{array}\right.##
satisfies ##f^\prime(x)=0## for all ##x\in D##. And the function ##f## is of course not constant!
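In fact, on this domain a function has zero derivative exactly when it is constant on each piece of the domain, with possibly different constants on the two pieces. Spelling that out,
[tex][0] = \left\{f:\mathbb{R}\setminus\{0\}\rightarrow\mathbb{R}~\Big\vert~f(x)=\left\{\begin{array}{cc} c_1 & \text{if}~x<0\\ c_2 & \text{if}~x>0 \end{array}\right.~\text{for some}~c_1,c_2\in\mathbb{R}\right\}.[/tex]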

Let's fix a domain ##D## for the sequel (so all functions in the future will have the same domain). We can define an addition and multiplication on the equivalence classes.
So if ##f,g:D\rightarrow \mathbb{R}##, then we define ##[f]+[g] := [f+g]## and ##[f]\cdot [g]=[f\cdot g]##.
Also, if ##\alpha\in \mathbb{R}##, then we define ##\alpha [f] = [\alpha f]##.

Now, let ##0:D\rightarrow \mathbb{R}##. We define ##C## to be ##C:= [0]##. So ##C## is the set of all functions whose derivative is ##0##. Now, it turns out that for any ##f:D\rightarrow \mathbb{R}##,
[tex][f] = \{f+g ~\vert~g\in C\}[/tex]
and that ##[f] = [g]## if and only if ##f-g\in C##.
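Just to make this concrete, take ##D=\mathbb{R}## and ##f(x)=x^2##. Then
[tex][x^2]=\{x^2+c~\vert~c\in\mathbb{R}\},\qquad\text{and}\qquad [x^2]=[x^2+7]~\text{since}~(x^2+7)-x^2=7\in C.[/tex]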

Now we are going to commit an abuse of notation, but one which is standard in mathematics. We are going to write ##[f] = f + C##. I wish to stress that ##f+C## is defined exactly as ##[f]## and is not formally "adding a function and a constant".

The operations can be recast in the new notation as
[tex](f+C) + (g+C) = [f] + [g] = [f+g] = (f+g) + C[/tex]
[tex](f+C)\cdot (g+C) = [f]\cdot [g] = [f\cdot g] = (f\cdot g) + C[/tex]
[tex]\alpha (f+ C) = \alpha [f] = [\alpha f] = \alpha f + C[/tex]

So in particular, we see that [itex]0(f+C) = 0 + C[/itex]. Writing something like [itex]0(f+C) = 0[/itex] makes no sense at all since the LHS is a set of functions and the RHS is just one function.

Now we can define the indefinite integral. Given a function [itex]f:D\rightarrow \mathbb{R}[/itex], we define
[tex]\int f(x) dx := [F]~\text{where}~ F:D\rightarrow \mathbb{R}~ \text{is a function such that}~ F^\prime(x) = f(x)~ \text{for all} ~ x\in D[/tex]
So we see that ##\int f(x)dx## is not a function, but rather a set of functions!

With our abuse of notation, we write
[tex]\int f(x)dx = F + C.[/tex]
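To see why the domain matters here too, take ##D=\mathbb{R}\setminus\{0\}## again. Then
[tex]\int -\frac{1}{x^2}\, dx = \frac{1}{x} + C,[/tex]
where ##C=[0]## now contains every function that is constant for ##x<0## and constant for ##x>0##, not only the constant functions.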

Now, we have the rule
[tex]\int \alpha f(x)dx = \alpha \int f(x)dx[/tex]
But the multiplication in the RHS is not just multiplying a function by a constant, rather it is the multiplication that we have defined as ##\alpha (f+C ) =\alpha f + C##.
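As a quick sanity check of this rule with a nonzero scalar, on ##D=\mathbb{R}##:
[tex]3\int 2x\, dx = 3\,(x^2+C) = 3x^2 + C = \int 6x\, dx.[/tex]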

To get back to your example, we indeed have that
[tex]\int 0dx = 0\int 1dx = 0\cdot (x + C)[/tex]

But this is not equal to ##0##. We have to follow our multiplication rule:

[tex]\int 0dx = 0\int 1dx = 0\cdot (x + C) = 0\cdot x + C = 0 + C[/tex]
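Written out entirely in the bracket notation, each step is an application of the rules above:
[tex]\int 0\, dx = 0\int 1\, dx = 0\cdot [x] = [0\cdot x] = [0] = 0 + C.[/tex]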

And as we see, there are no contradictions here!
 
  • #6
@micromass: You seem to use a lot of unfamiliar notation. Is there any book that you would recommend that explains these mathematical notations? For example: I don't understand this notation: [itex]f:D\rightarrow \mathbb{R}[/itex].

In addition, it seems that I was wrong about the division by zero (since you factor the zero out in your post). Can you please explain why post #2 is incorrect?
 
  • #7
InvalidID said:
@micromass: You seem to use a lot of unfamiliar notation. Is there any book that you would recommend that explains these mathematical notations? For example: I don't understand this notation: [itex]f:D\rightarrow \mathbb{R}[/itex].

It depends on which notations you don't understand. All the notations I used are usually introduced in books which talk about set theory or proofs. However, the spirit of my post is close to abstract algebra. I guess a book like "Introduction to set theory" by Hrbacek and Jech explains all the notations. A book like "How to prove it" by Velleman explains everything too. But it is probably just better to ask it here.

The notation [itex]f:D\rightarrow \mathbb{R}[/itex] just says that ##f## is a function with domain ##D## and codomain ##\mathbb{R}##. So this means that ##f(x)## is defined only for those ##x\in D##.

For example, the function ##f(x)=\frac{1}{x}## is not defined for ##x=0##. So the domain of this function is ##D=\mathbb{R}\setminus \{0\}## or smaller.

It is very standard in mathematics to define functions ##f:D\rightarrow \mathbb{R}## and ##g:D^\prime \rightarrow \mathbb{R}## as equal only if ##D=D^\prime## and ##f(x)=g(x)## for all ##x\in D##. This means in particular that the functions
[tex]f:[0,1]\rightarrow \mathbb{R}:x\rightarrow x[/tex]
and
[tex]g:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x[/tex]
are not equal. Indeed, ##f(x)## is only defined for ##x\in [0,1]## and ##g(x)## is defined for ##x\in \mathbb{R}##.

In addition, it seems that I was wrong about the division by zero (since you factor the zero out in your post). Can you please explain why post #2 is incorrect?

To be honest, I don't understand your post #2. Why do those two equations yield [itex]\frac{0}{0}=1[/itex]??
 
  • #8
micromass said:
To be honest, I don't understand your post #2. Why do those two equations yield [itex]\frac{0}{0}=1[/itex]??

I was trying to say that factoring anything involves a division, as shown:[tex]x^2+x\\ =x\left(\frac{x^2}{x}+\frac{x}{x}\right)\\ =x(x+1)[/tex]So when he factors 0, he needs to perform a division by zero: [tex]f(x)=\int 0\, dx\\ f(x)=0\int \frac{0}{0}\, dx\\ f(x)=0\int 1\, dx[/tex]
 
  • #9
InvalidID said:
I was trying to say that factoring anything involves a division, as shown:[tex]x^2+x\\ =x\left(\frac{x^2}{x}+\frac{x}{x}\right)\\ =x(x+1)[/tex]So when he factors 0, he needs to perform a division by zero: [tex]f(x)=\int 0\, dx\\ f(x)=0\int \frac{0}{0}\, dx\\ f(x)=0\int 1\, dx[/tex]

Factoring is closely related to division but it's not the same at all.

The following equality is true

[tex]0\cdot 1 = 0[/tex]

but it doesn't imply in any way that [itex]\frac{0}{0}=1[/itex].

I know we can often recast a multiplication into a division. For example [itex]2\cdot 3 = 6[/itex] can be written as [itex]3=\frac{6}{2}[/itex]. But this can only be done with numbers that are nonzero. As soon as you get something like [itex]2\cdot 0 = 0[/itex], we cannot recast this into a division by [itex]0[/itex]!
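Indeed, if we were allowed to recast it, we could do it twice and reach a contradiction:
[tex]0\cdot 1 = 0 = 0\cdot 2\quad\Rightarrow\quad 1 = \frac{0}{0} = 2,[/tex]
which is exactly why ##\frac{0}{0}## is left undefined.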

In any case, the OP did not write

[tex]0 = 0 \cdot \frac{0}{0} = 0\cdot 1[/tex]

The middle expression makes no sense, since [itex]\frac{0}{0}[/itex] is undefined. What the OP did was immediately write [itex]0=0\cdot 1[/itex], which is perfectly valid.
 
  • #10
micromass said:
What the OP did was immediately write [itex]0=0\cdot 1[/itex], which is perfectly valid.

Is there any particular reason why the OP wrote [itex]0=0\cdot 1[/itex] instead of [itex]0=0\cdot 2[/itex]?

I guess even if he had used [itex]0=0\cdot 2[/itex], his conclusion would still have been the same. But if I used 2 instead of 1, that would still be right, right?
 
  • #11
InvalidID said:
I was trying to say that factoring anything involves a division, as shown:[tex]x^2+x\\ =x\left(\frac{x^2}{x}+\frac{x}{x}\right)\\ =x(x+1)[/tex]

Um, no.
##x^2+x = x(x+1)##
is true. But
##x^2+x =x\left(\frac{x^2}{x}+\frac{x}{x}\right)##
holds as an equality of functions only when the domains of both sides exclude ##x=0##: at ##x=0## the left-hand side equals ##0##, while the right-hand side is undefined. So if you start with the domain ℝ (which is what you assumed, since you did not specify one), the equality fails.
 
  • #12
micromass said:
For example, the function ##f(x)=\frac{1}{x}## is not defined for ##x=0##. So the domain of this function is ##D=\mathbb{R}\setminus \{0\}## or smaller.

Why smaller? Isn't it just not defined for x=0?

micromass said:
It is very standard in mathematics to define functions ##f:D\rightarrow \mathbb{R}## and ##g:D^\prime \rightarrow \mathbb{R}## as equal only if ##D=D^\prime## and ##f(x)=g(x)## for all ##x\in D##. This means in particular that the functions
[tex]f:[0,1]\rightarrow \mathbb{R}:x\rightarrow x[/tex]
and
[tex]g:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x[/tex]
are not equal. Indeed, ##f(x)## is only defined for ##x\in [0,1]## and ##g(x)## is defined for ##x\in \mathbb{R}##.

You lost me at:
[tex]f:[0,1]\rightarrow \mathbb{R}:x\rightarrow x[/tex]
and
[tex]g:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x[/tex]

So in this case, f is a function that has a domain between 0 and 1 that includes all real numbers, right?

Why does g have [itex]\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x[/itex]? Isn't [itex]\mathbb{R}=\mathbb{R}[/itex] and [itex]x=x[/itex] so isn't that kind of redundant? Also, what does the second colon mean?
 
  • #13
InvalidID said:
Why smaller? Isn't it just not defined for x=0?

The domain of a function is always a set. Specifically, in the case of 1/x, the domain must be a subset of ##\mathbb{R}\setminus\{0\}##. It might be just the positive reals, but we don't know that.


InvalidID said:
So in this case, f is a function that has a domain between 0 and 1 that includes all real numbers, right?

Why does g have [itex]\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x[/itex]? Isn't [itex]\mathbb{R}=\mathbb{R}[/itex] and [itex]x=x[/itex] so isn't that kind of redundant? Also, what does the second colon mean?

We have
##g: \mathbb{R} \to \mathbb{R}##
##g(x) = x##
in words: the identity map from ℝ to ℝ.
The first ℝ is the domain, the second is the co-domain. They refer to different things.

Also ##x \mapsto x## is very different to ##x = x##.
 
  • #14
pwsnafu said:
Also ##x \mapsto x## is very different to ##x = x##.

[itex]x \mapsto x[/itex] is just [itex]f(x)=x[/itex], right? So why is it different from [itex]x = x[/itex]? I guess it's because the output (what the function outputs) equals the input, not that the input equals the input?
 
  • #15
InvalidID said:
[itex]x \mapsto x[/itex] is just [itex]f(x)=x[/itex], right? So why is it different from [itex]x = x[/itex]?

Wait, do you believe that ##f(x) = x## is the same thing as ##x = x##?
 
  • #16
pwsnafu said:
Wait, do you believe that ##f(x) = x## is the same thing as ##x = x##?

$$f(x)=x\quad \quad \quad (1)\\ f(x)=x\quad \quad \quad (2)$$
Substituting (1) into (2):
$$x=x$$

Now I'm embarrassed that I've made a stupid mistake!
 

1. What is the "Integral of Zero Problem"?

The "Integral of Zero Problem" refers to the mathematical concept of finding the integral of a function that is equal to zero over a given interval. This problem arises when the integrand (the function being integrated) is equal to zero at all points within the interval.

2. Why is the "Integral of Zero Problem" important?

The "Integral of Zero Problem" is important because it allows us to calculate the area under the curve of a function that has values of zero. This is useful in various fields such as physics, engineering, and economics where the value of a function may be zero at certain points.

3. How do you solve the "Integral of Zero Problem"?

To solve the "Integral of Zero Problem", we can use the fundamental theorem of calculus, which states that a definite integral can be calculated by finding an antiderivative (or indefinite integral) of the function and evaluating it at the upper and lower bounds of the interval. In the case of the integral of zero, an antiderivative is simply a constant, so evaluating it at the two bounds and subtracting gives zero.
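For example, on any interval ##[a,b]##, using the constant antiderivative ##F(x)=C_0##:
[tex]\int_a^b 0\, dx = F(b) - F(a) = C_0 - C_0 = 0.[/tex]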

4. Can the "Integral of Zero Problem" have multiple solutions?

Yes, in the sense that the antiderivative is not unique: any constant (and, on a domain with more than one piece, any locally constant function) can be added to it. However, every choice of antiderivative gives the same value, zero, for the integral over the given interval.
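For instance, ##F_1(x)=5## and ##F_2(x)=-2## are both antiderivatives of the zero function, and both give
[tex]F_1(b)-F_1(a) = F_2(b)-F_2(a) = 0[/tex]
over any interval ##[a,b]##.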

5. How is the "Integral of Zero Problem" related to the area under a curve?

The "Integral of Zero Problem" is directly related to the area under a curve. This is because, as mentioned earlier, the integral of a function is equal to the area under the curve of the function. In the case of the integral of zero, the function has a value of zero at all points, so the integral will simply be the area of the x-axis over the given interval.
