Is the integral of zero always zero?

  • Thread starter: Theodore260
  • Tags: Integral, Zero
AI Thread Summary
The integral of zero is not simply zero; rather, it represents a set of functions that differ by a constant. When integrating zero, the result is expressed as f(x) = C, where C is a constant of integration. The discussion highlights a common misconception that the integral of zero leads to a single zero value, ignoring the constant term. Additionally, the conversation touches on the importance of defining functions within specific domains, as this affects the interpretation of derivatives and integrals. Ultimately, the integral of zero is a more nuanced concept than it initially appears, involving a collection of constant functions rather than a singular value.
Theodore260
The "Integral of Zero Problem"

I know that this seems like such a trivial problem, but take a look at this:

\frac{df}{dx} = 0,

f(x) = \int{0}dx,

f(x) = 0\int{}dx,

f(x) = 0(x + C),

f(x) = 0x + 0C,

f(x) = 0,

where any constant multiplied by zero simplifies to zero.

So, is the integral of zero a general constant, or another zero?

What is flawed in the above equations? I can't seem to find any mistake in them.

Thanks.
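As a quick numerical sanity check (my own sketch, not part of the original argument): every constant function satisfies df/dx = 0, so the solution of this equation cannot be a single function. The sample points and step size below are arbitrary choices.

```python
# Sketch: every constant function solves df/dx = 0, so "the"
# antiderivative of 0 cannot be one single function.

def central_diff(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

candidates = [lambda x: 0, lambda x: 5, lambda x: -3.2]  # all constant

for f in candidates:
    for x0 in (-2.0, 0.0, 7.5):
        # the difference quotient of a constant is exactly 0
        assert central_diff(f, x0) == 0.0
```

Since several different functions pass the same test, the equation df/dx = 0 pins down a family of functions, not one.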
 
I think the issue is that you're doing a division by zero when you're factoring zero out of the indefinite integral.

Recall that \int dx = \int 1\, dx

Here is what's wrong:

f(x)=\int 0\, dx
f(x)=0\int 1\, dx
and we know that \frac{0}{0} \neq 1.
 
The constant of integration (C) is added to the indefinite integral: ∫0 dx = 0 + C.
 
It is a little more subtle than this.
You have
\frac{df}{dx} = 0

When you integrate both sides you get
\int \frac{df}{dx} \, dx = \int 0 \, dx + c

which gives
f = \int 0 \, dx + c

When you integrate each side, the constant appears first; only then do you worry about simplifying the integral of zero.
 
I don't think the previous replies really answered the question. The reason is that the answer is fairly complicated and is something not usually dealt with in calculus books. So my answer will be fairly high-level, and I might use things that will not be immediately understandable to the typical calculus student. I do encourage you to ask questions about this in order to understand it.

First, a short answer. The short answer is that \int f(x)dx is not what you think it is, in particular: it is not a function, but it is a collection of functions. Also, the C in F(x)+C is not a constant (unlike what many calculus books claim). And lastly, it is not true that 0\cdot (F(x) + C) = 0.

In the following, a function f:D\rightarrow \mathbb{R} will always have D open and will always be differentiable.

Let us first define an equivalence relation on the set of all differentiable functions. So, let f:D\rightarrow \mathbb{R} and g:D^\prime\rightarrow \mathbb{R} be differentiable functions. We define f and g to be equivalent and we denote f\sim g if D=D^\prime and if f^\prime(x)=g^\prime(x) for all x\in D.

Now we can define equivalence classes. Given a function f:D\rightarrow \mathbb{R}, we define the set
[f] = \{g:D^\prime \rightarrow \mathbb{R}~\vert~f\sim g\}
So, ##[f]## is the set of all functions (on the same domain) whose derivative agrees with the derivative of ##f##.

In particular, we can take the function ##0:D\rightarrow \mathbb{R}:x\rightarrow 0##, and we can look at ##[0]##, the set of all functions whose derivative is ##0##.

Misconception: The set of all functions whose derivative is ##0## are exactly the constant functions
This is not true. The validity of this statement depends crucially on the domain ##D## of ##0##. If ##D=\mathbb{R}## or if ##D=(a,b)##, then it is indeed true that a function ##f:D\rightarrow \mathbb{R}## has ##f^\prime =0## if and only if ##f(x)=C## for all ##x##. But for other sets ##D##, this fails.
For example, take ##D=\mathbb{R}\setminus \{0\}##, then the function
##f(x)=\left\{\begin{array}{cc} 2 & \text{if}~x<0\\ 3 & \text{if}~x>0 \end{array}\right.##
satisfies ##f^\prime(x)=0## for all ##x\in D##. And the function ##f## is of course not constant!
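This counterexample can be probed numerically. A sketch (the sample points are my own arbitrary choices, kept away from the excluded point 0):

```python
# Sketch of the counterexample on D = R \ {0}: f is locally constant,
# so its derivative is 0 at every point of D, yet f is not constant.

def f(x):
    assert x != 0, "0 is not in the domain D"
    return 2 if x < 0 else 3

def central_diff(g, x, h=1e-6):
    """Central finite-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

for x0 in (-5.0, -0.5, 0.5, 5.0):
    assert central_diff(f, x0) == 0.0   # f' = 0 everywhere we sample on D

assert f(-1.0) != f(1.0)                # ...but f is not constant
```

The gap at 0 is exactly what lets the two branches carry different constants.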

Let's fix a domain ##D## for the sequel (so all functions in the future will have the same domain). We can define an addition and multiplication on the equivalence classes.
So if ##f,g:D\rightarrow \mathbb{R}##, then we define ##[f]+[g] := [f+g]## and ##[f]\cdot [g]=[f\cdot g]##.
Also, if ##\alpha\in \mathbb{R}##, then we define ##\alpha [f] = [\alpha f]##.

Now, let ##0:D\rightarrow \mathbb{R}##. We define ##C## to be ##C:= [0]##. So ##C## is the set of all functions whose derivative is ##0##. Now, it turns out that for any ##f:D\rightarrow \mathbb{R}##, that
[f] = \{f+g ~\vert~g\in C\}
and that ##[f] = [g]## if and only if ##f-g\in C##.
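One way to make these classes concrete is to restrict to polynomials on ##D=\mathbb{R}##, where two functions are in the same class exactly when they differ by a constant. The coefficient-list encoding below is my own illustrative choice, not from the post:

```python
# Sketch: polynomials on R as coefficient lists [a0, a1, a2, ...].
# Two polynomials are in the same class [f] iff their derivatives are
# equal, i.e. iff they differ only in the constant term a0.

def derivative(coeffs):
    """Coefficients of p' given coefficients of p."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def same_class(f, g):
    """[f] == [g] iff f' == g' (pad with zeros before comparing)."""
    df, dg = derivative(f), derivative(g)
    n = max(len(df), len(dg))
    pad = lambda c: c + [0] * (n - len(c))
    return pad(df) == pad(dg)

x2_plus_7 = [7, 0, 1]    # x^2 + 7
x2_minus_1 = [-1, 0, 1]  # x^2 - 1
assert same_class(x2_plus_7, x2_minus_1)   # differ by a constant: same class
assert not same_class([0, 1], [0, 2])      # x vs 2x: different classes
```

Here the class ##C=[0]## is precisely the set of coefficient lists whose only nonzero entry is the constant term.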

Now we are going to commit an abuse of notation, but one which is standard in mathematics. We are going to write ##[f] = f + C##. I wish to stress that ##f+C## is defined exactly as ##[f]## and is not formally "adding a function and a constant".

The operations can be recast in the new notation as
(f+C) + (g+C) = [f] + [g] = [f+g] = (f+g) + C
(f+C)\cdot (g+C) = [f]\cdot [g] = [f\cdot g] = (f\cdot g) + C
\alpha (f+ C) = \alpha [f] = [\alpha f] = \alpha f + C

So in particular, we see that 0(f+C) = 0 + C. Writing something like 0(f+C) = 0 makes no sense at all since the LHS is a set of functions and the RHS is just one function.

Now we can define the indefinite integral. Given a function f:D\rightarrow \mathbb{R}. We define
\int f(x) dx := [F]~\text{where}~ F:D\rightarrow \mathbb{R}~ \text{is a function such that}~ F^\prime(x) = f(x)~ \text{for all} ~ x\in D
So we see that ##\int f(x)dx## is not a function, but rather a set of functions!

With our abuse of notation, we write
\int f(x)dx = F + C.

Now, we have the rule
\int \alpha f(x)dx = \alpha \int f(x)dx
But the multiplication in the RHS is not just multiplying a function by a constant, rather it is the multiplication that we have defined as ##\alpha (f+C ) =\alpha f + C##.

To get back to your example, we indeed have that
\int 0dx = 0\int 1dx = 0\cdot (x + C)

But this is not equal to ##0##. We have to follow our multiplication rule:

\int 0dx = 0\int 1dx = 0\cdot (x + C) = 0\cdot x + C = 0 + C

And as we see, there are no contradictions here!
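The scalar rule ##\alpha (f+C) = \alpha f + C## can also be checked in a small model. In the sketch below (my own encoding, for illustration only), a class ##f+C## of polynomials is represented by the non-constant coefficients of ##f##, since the constant term is absorbed into ##C##:

```python
# Sketch of the scalar rule  a*(f + C) = (a*f) + C  for polynomials:
# the class f + C is encoded by the non-constant coefficients of f.

def poly_class(coeffs):
    """Class of the polynomial [a0, a1, ...]: drop a0, trim trailing zeros."""
    cls = coeffs[1:]
    while cls and cls[-1] == 0:
        cls.pop()
    return cls

def scale_class(alpha, cls):
    """alpha * (f + C) := (alpha * f) + C."""
    return poly_class([0] + [alpha * c for c in cls])

x_plus_C = poly_class([0, 1])                       # the class  x + C
assert scale_class(0, x_plus_C) == poly_class([0])  # 0*(x + C) = 0 + C
assert scale_class(2, x_plus_C) == poly_class([5, 2])  # 2*(x + C) = 2x + C
```

Multiplying the class of x by 0 lands on the class of the zero function, i.e. the set of all constants, rather than on the single function 0.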
 
@micromass: You seem to use a lot of unfamiliar notation. Is there any book that you would recommend that explains these mathematical notations? For example: I don't understand this notation: f:D\rightarrow \mathbb{R}.

In addition, it seems that I was wrong about the division by zero (because you seem to do it in your post). Can you please explain why post #2 is incorrect?
 
InvalidID said:
@micromass: You seem to use a lot of unfamiliar notation. Is there any book that you would recommend that explains these mathematical notations? For example: I don't understand this notation: f:D\rightarrow \mathbb{R}.

It depends on which notations you don't understand. All the notations I used are usually introduced in books which talk about set theory or proofs. However, the spirit of my post is close to abstract algebra. I guess a book like "Introduction to set theory" by Hrbacek and Jech explains all the notations. A book like "How to prove it" by Velleman explains everything too. But it is probably just better to ask it here.

The notation f:D\rightarrow \mathbb{R} just says that ##f## is a function with domain ##D## and codomain ##\mathbb{R}##. So this means that ##f(x)## is defined only for those ##x\in D##.

For example, the function ##f(x)=\frac{1}{x}## is not defined for ##x=0##. So the domain of this function is ##D=\mathbb{R}\setminus \{0\}## or smaller.

It is very standard in mathematics to define functions ##f:D\rightarrow \mathbb{R}## and ##g:D^\prime \rightarrow \mathbb{R}## as equal only if ##D=D^\prime## and ##f(x)=g(x)## for all ##x\in D##. This means in particular that the functions
f:[0,1]\rightarrow \mathbb{R}:x\rightarrow x
and
g:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x
are not equal. Indeed, ##f(x)## is only defined for ##x\in [0,1]## and ##g(x)## is defined for ##x\in \mathbb{R}##.
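This equality convention can be modeled directly: treat a function as a (domain, graph) pair and compare both parts. In the sketch below, small finite sets stand in for ##[0,1]## and ##\mathbb{R}##, which is my own simplification:

```python
# Sketch: a "function" as (domain, graph), where the graph records the
# value at every point of the domain. Equality compares domains too.
# Finite domains stand in for [0,1] and R, purely for illustration.

def make_function(domain, rule):
    return (frozenset(domain), frozenset((x, rule(x)) for x in domain))

D_small = [0.0, 0.5, 1.0]           # stand-in for [0, 1]
D_big = [-2.0, 0.0, 0.5, 1.0, 3.0]  # stand-in for R

f = make_function(D_small, lambda x: x)  # f: [0,1] -> R, x |-> x
g = make_function(D_big, lambda x: x)    # g: R -> R,    x |-> x

assert f != g   # same rule, different domains: not equal as functions
assert f == make_function(D_small, lambda x: x)  # same domain and rule
```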

In addition, it seems that I was wrong about the division by zero (because you seem to do it in your post). Can you please explain why post #2 is incorrect?

To be honest, I don't understand your post #2. Why do those two equations yield \frac{0}{0}=1??
 
micromass said:
To be honest, I don't understand your post #2. Why do those two equations yield \frac{0}{0}=1??

I was trying to say that factoring anything involves a division, as shown:
x^2+x = x\left(\frac{x^2}{x} + \frac{x}{x}\right) = x(x+1)
So when he factors out 0, he needs to perform a division by zero:
f(x)=\int 0\, dx
f(x)=0\int \frac{0}{0}\, dx
f(x)=0\int 1\, dx
 
InvalidID said:
I was trying to say that factoring anything involves a division, as shown:
x^2+x = x\left(\frac{x^2}{x} + \frac{x}{x}\right) = x(x+1)
So when he factors out 0, he needs to perform a division by zero:
f(x)=\int 0\, dx
f(x)=0\int \frac{0}{0}\, dx
f(x)=0\int 1\, dx

Factoring is closely related to division but it's not the same at all.

The following equality is true

0\cdot 1 = 0

but it doesn't imply in any way that \frac{0}{0}=1.

I know we can often recast a multiplication into a division. For example 2\cdot 3 = 6 can be written as 3=\frac{6}{2}. But this can only be done with numbers that are nonzero. As soon as you get something like 2\cdot 0 = 0, we cannot recast this into a division by 0!

In any case, the OP did not do

0 = 0 \cdot \frac{0}{0} = 0\cdot 1

The middle term makes no sense, since \frac{0}{0} is undefined. What the OP did was immediately write 0=0\cdot 1, which is perfectly valid.
 
  • #10
micromass said:
What the OP did was immediately write 0=0\cdot 1, which is perfectly valid.

Is there any particular reason why the OP wrote 0=0\cdot 1 instead of 0=0\cdot 2?

I guess even if he had used 0=0\cdot 2, his conclusion would still be the same. But if I used 2 instead of 1, that would still be right, right?
 
  • #11
InvalidID said:
I was trying to say that factoring anything involves a division, as shown:
x^2+x = x\left(\frac{x^2}{x} + \frac{x}{x}\right) = x(x+1)

Um, no.
##x^2+x = x(x+1)##
is true. But
##x^2+x =x(\frac{x^2}{x}+\frac{x}{x})##
holds as an equality of functions only when the domains of both sides exclude x=0. So if you start with the domain ℝ (which is what you assume without qualification), the equality fails.
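This failure at x = 0 can be seen by evaluating both expressions there. A sketch:

```python
# Sketch: x^2 + x = x(x+1) holds at x = 0, but the "factored via
# division" form x*(x**2/x + x/x) is simply undefined there.

x = 0
assert x**2 + x == x * (x + 1) == 0   # both sides defined and equal at 0

try:
    x * (x**2 / x + x / x)            # involves 0/0
    reached = True
except ZeroDivisionError:
    reached = False
assert not reached                     # the division form is undefined at x = 0
```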
 
  • #12
micromass said:
For example, the function ##f(x)=\frac{1}{x}## is not defined for ##x=0##. So the domain of this function is ##D=\mathbb{R}\setminus \{0\}## or smaller.

Why smaller? Isn't it just not defined for x=0?

micromass said:
It is very standard in mathematics to define functions ##f:D\rightarrow \mathbb{R}## and ##g:D^\prime \rightarrow \mathbb{R}## as equal only if ##D=D^\prime## and ##f(x)=g(x)## for all ##x\in D##. This means in particular that the functions
f:[0,1]\rightarrow \mathbb{R}:x\rightarrow x
and
g:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x
are not equal. Indeed, ##f(x)## is only defined for ##x\in [0,1]## and ##g(x)## is defined for ##x\in \mathbb{R}##.

You lost me at:
f:[0,1]\rightarrow \mathbb{R}:x\rightarrow x
and
g:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x

So in this case, f is a function that has a domain between 0 and 1 that includes all real numbers, right?

Why does g have \mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x? Isn't \mathbb{R}=\mathbb{R} and x=x so isn't that kind of redundant? Also, what does the second colon mean?
 
  • #13
InvalidID said:
Why smaller? Isn't it just not defined for x=0?

The domain of a function is always a set. Specifically, in the case of 1/x, the domain must be a subset of ##\mathbb{R}\setminus\{0\}##. It might be just the positives, but we don't know that.


So in this case, f is a function that has a domain between 0 and 1 that includes all real numbers, right?

Why does g have \mathbb{R}\rightarrow \mathbb{R}:x\rightarrow x? Isn't \mathbb{R}=\mathbb{R} and x=x so isn't that kind of redundant? Also, what does the second colon mean?

We have
##g: \mathbb{R} \to \mathbb{R}##
##g(x) = x##
in words: the identity map from ℝ to ℝ.
The first ℝ is the domain, the second is the co-domain. They refer to different things.

Also ##x \mapsto x## is very different to ##x = x##.
 
  • #14
pwsnafu said:
Also ##x \mapsto x## is very different to ##x = x##.

x \mapsto x is just f(x)=x, right? So why is it different from x = x? I guess it's because the output (what the function outputs) equals the input, not that the input equals the input?
 
  • #15
InvalidID said:
x \mapsto x is just f(x)=x, right? So why is it different from x = x?

Wait, do you believe that ##f(x) = x## is the same thing as ##x = x##?
 
  • #16
pwsnafu said:
Wait, do you believe that ##f(x) = x## is the same thing as ##x = x##?

$$f(x)=x\quad \quad \quad (1)\\ f(x)=x\quad \quad \quad (2)$$
Substitute (1) into (2):
$$x=x$$

Now I'm embarrassed that I've made a stupid mistake!
 