What Is the Correct Evaluation of the Indefinite Integral of Zero?

PFuser1232
Argument A

##∫ 0 dx = 0x + C = C##

Argument B

##∫ 0 dx = ∫ (0)(1) dx = 0 ∫ 1 dx = 0(x+C) = 0##

Discuss.
 
The "Discuss" part makes me think this is a homework problem.
 
Mark44 said:
The "Discuss" part makes me think this is a homework problem.

It's not, I just wanted to know which argument is right and why.
 
Argument B is silly, as it factors 0 into 0 times 1, and then moves the 0 outside the integral.
 
Mark44 said:
Argument B is silly, as it factors 0 into 0 times 1, and then moves the 0 outside the integral.

What about the rule ##∫a f(x) dx = a ∫ f(x) dx##?
 
The indefinite integral does not calculate a function. It calculates an equivalence class of functions. All constant functions are in the same equivalence class.

Further, ##\int 0 \, dx## being constant is wrong. If for example the domain is ##\mathbb{R}\setminus\{0\}## then you would have two constants, not necessarily equal.
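A concrete illustration of this (a simpler version of the example elaborated in post #10 below): on ##\mathbb{R}\setminus\{0\}##, the function
$$h(x) = \begin{cases} 1 & x > 0 \\ -1 & x < 0 \end{cases}$$
satisfies ##h'(x) = 0## at every point of its domain, so it is an antiderivative of 0, yet it does not equal any single constant ##C## on the whole domain.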
 
MohammedRady97 said:
Argument A

##∫ 0 dx = 0x + C = C##
There's no need to write 0x in the above.
##\int 0 dx = C##
MohammedRady97 said:
Argument B

##∫ 0 dx = ∫ (0)(1) dx = 0 ∫ 1 dx = 0(x+C) = 0##
##\int 0 dx = 0\int dx = 0x + C = C##
You don't get rid of the arbitrary constant as you showed above.

Since d/dx(C) = 0, any antiderivative of 0 is C, for some arbitrary constant C.
MohammedRady97 said:
Discuss.
 
pwsnafu said:
Further, ##\int 0 \, dx## being constant is wrong. If for example the domain is ##\mathbb{R}\setminus\{0\}## then you would have two constants, not necessarily equal.
Can you elaborate on this? I'm not following you. How does the domain enter into this problem, given that we're working with indefinite integrals?
 
MohammedRady97 said:
Argument A

##∫ 0 dx = 0x + C = C##

Argument B

##∫ 0 dx = ∫ (0)(1) dx = 0 ∫ 1 dx = 0(x+C) = 0##

Discuss.

For what it's worth, I think this is an interesting question. What do you think? A or B?

Hint: one answer is clearly wrong, but it's not so easy to explain exactly why.
 
  • #10
Mark44 said:
Can you elaborate on this? I'm not following you. How does the domain enter into this problem, given that we're working with indefinite integrals?

This is not easy to see when you have ##f(x) = 0##, so the example I give my students is
##f : \mathbb{R}\setminus\{0\} \rightarrow \mathbb{R}## where ##f(x) = -x^{-2}## and
##g: (0, \infty) \rightarrow \mathbb{R}## where ##g(x) = -x^{-2}##.
Then the function
##h(x) = x^{-1} + 1## for ##x > 0## and ##h(x) = x^{-1} - 1## for ##x < 0## is an antiderivative of f. But it isn't expressible as ##F(x) + C## where C is an "arbitrary constant". The "constant" changes as you pass over the "hole". On the other hand, h isn't an antiderivative of g (because the domains don't match), but the restriction of h to ##(0,\infty)## is an antiderivative of g and expressible in the form ##G(x) + C##.

The difference lies in the number of connected components of the domain. The former has two so there are two constants of integration, one for x<0 and another for x>0. The latter only has one connected component, and there is one constant of integration. In general ##\int f(x) \, dx = F(x) + H(x)## where ##F' = f## and ##H## is constant on individual connected components of the domain of f.*

Writing ##f(x) = 0## is ambiguous. It most likely has a domain of all of the reals, but it could be any subset of R. The OP did not specify the domain, hence he cannot conclude that the constant of integration is always a single constant value.

*Edit: I'm ignoring functions like the Cantor function here.
 
  • #11
pwsnafu said:
The indefinite integral does not calculate a function. It calculates an equivalence class of functions. All constant functions are in the same equivalence class.

Further, ##\int 0 \, dx## being constant is wrong. If for example the domain is ##\mathbb{R}\setminus\{0\}## then you would have two constants, not necessarily equal.

So the indefinite integral is basically a set of functions?
A set of antiderivatives, more specifically.
 
  • #12
PeroK said:
For what it's worth, I think this is an interesting question. What do you think? A or B?

Hint: one answer is clearly wrong, but it's not so easy to explain exactly why.

My gut tells me it's A.
 
  • #13
MohammedRady97 said:
My gut tells me it's A.

Your head should tell you that as well. In general, you have to be careful with the constant of integration. It's an informal way of specifying an equivalence class of functions (as pointed out above). So, that's where the multiplication by 0 breaks down. If you do things more formally, you still have an equivalence class after multiplication by 0 - so you don't get rid of the constant of integration.

And, yes, an indefinite integral is actually a set of functions.
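In symbols, one common way to make this precise (not the only convention, but the one used in this thread) is
$$\int f(x) \, dx \;=\; \bigl\{\, F : F'(x) = f(x) \text{ for all } x \,\bigr\},$$
with the constant of integration ##C## acting as shorthand for ranging over that set.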
 
  • #14
PeroK said:
Your head should tell you that as well. In general, you have to be careful with the constant of integration. It's an informal way of specifying an equivalence class of functions (as pointed out above). So, that's where the multiplication by 0 breaks down. If you do things more formally, you still have an equivalence class after multiplication by 0 - so you don't get rid of the constant of integration.

And, yes, an indefinite integral is actually a set of functions.

And how exactly do I do things more formally?
Does ##∫a f(x) dx = a ∫ f(x) dx## only apply for nonzero a?
 
  • #15
MohammedRady97 said:
And how exactly do I do things more formally?
Does ##∫a f(x) dx = a ∫ f(x) dx## only apply for nonzero a?
No. You work with equivalence classes of functions, not with a constant of integration.

An analogy would be ##0 \cdot x \ (\mathrm{mod}\ n) = 0 \ (\mathrm{mod}\ n) \neq 0##.
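Spelled out, the point of the analogy is that ##0 \pmod n## names a residue class rather than the integer 0:
$$0 \cdot x \equiv 0 \pmod{n}, \qquad [0]_n = \{\, kn : k \in \mathbb{Z} \,\} \neq \{0\},$$
just as multiplying an indefinite integral by 0 still leaves a whole class of functions (the constants), not the single function 0.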
 
  • #16
PeroK said:
No. You work with equivalence classes of functions, not with a constant of integration.

An analogy would be ##0 \cdot x \ (\mathrm{mod}\ n) = 0 \ (\mathrm{mod}\ n) \neq 0##.

So what you're saying is I should treat the indefinite integral as a set of functions ##\{F(x) + C \mid C \in \mathbb{R}\}##, but not as a single function?
 
  • #17
MohammedRady97 said:
So what you're saying is I should treat the indefinite integral as a set of functions ##\{F(x) + C \mid C \in \mathbb{R}\}##, but not as a single function?
Yes.

If a function f has two distinct antiderivatives F1 and F2 (IOW, F1' = f and F2' = f), then F1(x) - F2(x) ≡ C for some constant C.

An example in the same vein is this:
$$\int \sin(x)\cos(x)\,dx$$
One student uses substitution to evaluate this integral, using u = sin(x), so du = cos(x)dx.
Then the integral becomes ##\int u\,du = \frac{1}{2}u^2 = \frac{1}{2}\sin^2(x)##.

Another student also uses substitution, but with u = cos(x), du = -sin(x)dx
With this substitution, the integral becomes ##-\int u\,du = -\frac{1}{2}u^2 = -\frac{1}{2}\cos^2(x)##.

(Notice that both students omitted the constant of integration.)

Here we have two distinct antiderivatives for the same integrand. As it turns out, the two antiderivatives differ by a constant: ##\frac{1}{2}\sin^2(x) = -\frac{1}{2}\cos^2(x) + \frac{1}{2}##, independent of the value of x.
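As a quick check, using ##\sin^2(x) + \cos^2(x) = 1##:
$$\tfrac{1}{2}\sin^2(x) = \tfrac{1}{2}\bigl(1 - \cos^2(x)\bigr) = -\tfrac{1}{2}\cos^2(x) + \tfrac{1}{2}.$$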
 
  • #18
MohammedRady97 said:
What about the rule ##∫a f(x) dx = a ∫ f(x) dx##?

As PeroK mentioned, such rules are not about symbols representing unique functions. The symbol ##\int f(x) \, dx## does not represent a unique function. Such rules don't work if you interpret the antiderivative symbol to be a unique function.

For example, suppose we say that ##\int x \, dx## represents the unique function ##x^2/2 + 1##. The rule you quoted then says ##\int 2x \, dx## is the unique function ##2(x^2/2 + 1) = x^2 + 2##. This would contradict any calculation that said ##\int 2x \, dx = x^2## or ##\int 2x \, dx = x^2 + 5##, etc.
 
  • #19
For indefinite integrals, the arbitrary added constant is just there. All you are doing in B is omitting it.
 
  • #20
Mark44 said:
Yes.

If a function f has two distinct antiderivatives F1 and F2 (IOW, F1' = f and F2' = f), then F1(x) - F2(x) ≡ C for some constant C.

An example in the same vein is this:
$$\int \sin(x)\cos(x)\,dx$$
One student uses substitution to evaluate this integral, using u = sin(x), so du = cos(x)dx.
Then the integral becomes ##\int u\,du = \frac{1}{2}u^2 = \frac{1}{2}\sin^2(x)##.

Another student also uses substitution, but with u = cos(x), du = -sin(x)dx
With this substitution, the integral becomes ##-\int u\,du = -\frac{1}{2}u^2 = -\frac{1}{2}\cos^2(x)##.

(Notice that both students omitted the constant of integration.)

Here we have two distinct antiderivatives for the same integrand. As it turns out, the two antiderivatives differ by a constant: ##\frac{1}{2}\sin^2(x) = -\frac{1}{2}\cos^2(x) + \frac{1}{2}##, independent of the value of x.

I have always tackled such problems by using two different constants of integration for the two antiderivatives, then relating them according to the two results.
 
  • #21
Stephen Tashi said:
As PeroK mentioned, such rules are not about symbols representing unique functions. The symbol ##\int f(x) \, dx## does not represent a unique function. Such rules don't work if you interpret the antiderivative symbol to be a unique function.

For example, suppose we say that ##\int x \, dx## represents the unique function ##x^2/2 + 1##. The rule you quoted then says ##\int 2x \, dx## is the unique function ##2(x^2/2 + 1) = x^2 + 2##. This would contradict any calculation that said ##\int 2x \, dx = x^2## or ##\int 2x \, dx = x^2 + 5##, etc.

So technically, if we're doing a rigorous treatment of indefinite integrals, would the statement ##\int \frac{1}{x} \, dx = \ln|x| + C## (for example) be wrong because it would imply that the indefinite integral is a single unique function? Should we draw a line between the often interchangeable terms "Indefinite Integral" and "Antiderivative" by expressing Indefinite Integrals as sets of all possible Antiderivatives? Or are they synonyms?
 
  • #22
MohammedRady97 said:
it would imply that the indefinite integral is a single unique function?
It doesn't specify a unique function unless you interpret C as symbolizing a unique number.

Should we draw a line between the often interchangeable terms "Indefinite Integral" and "Antiderivative" by expressing Indefinite Integrals as sets of all possible Antiderivatives? Or are they synonyms?
There are areas of mathematics where terminology is imprecise. I think most people treat "indefinite integral" as a synonym for "antiderivative". Many people say "the" indefinite integral or "the" antiderivative even though they know the functions are only unique "up to a constant". If we want to study the symbolic calculations of calculus in a rigorous way, as a manipulation of strings (the way a symbolic algebra computer program would do them), then we'd have to use more precise terminology. As I recall, a mathematician named Ritt did such work. We can investigate what terminology he used.
 
  • #23
I just skimmed through the answers and didn't see a comment like this. I'm sorry if I missed it and am just repeating what someone else said.

You can safely think of an antiderivative as exactly what the name says: the inverse of a derivative.

d/dx 5 = 0.

So, the antiderivative of 0 dx has to be the same shape as 5, namely, a constant.

Edit:

MohammedRady97 said:
So what you're saying is I should treat the indefinite integral as a set of functions ##\{F(x) + C \mid C \in \mathbb{R}\}##, but not as a single function?

I don't think there is any reason to restrict C to the reals. C could just as easily be 5+3i, because d/dx (5+3i)=0
 
  • #24
Nick O said:
You can safely think of an antiderivative as exactly what the name says: the inverse of a derivative.

d/dx 5 = 0.

So, the antiderivative of 0 dx has to be the same shape as 5, namely, a constant.

The derivative is not injective. It does not have an inverse; it has right inverses.

I don't think there is any reason to restrict C to the reals. C could just as easily be 5+3i, because d/dx (5+3i)=0

You restrict to the co-domain of the function you are integrating. If the OP chose R, you restrict the constant to R.
 
  • #25
pwsnafu said:
The derivative is not injective. It does not have an inverse. It has a right inverses.
You restrict to the co-domain of the function you are integrating. If the OP chose R, you restrict the constant to R.

Agreed.
But I am not questioning the fact that an antiderivative of 0 is just a constant; I was questioning the validity of the constant multiple rule for indefinite integrals.
 
  • #26
Stephen Tashi said:
It doesn't specify a unique function unless you interpret C as symbolizing a unique number.

There are areas of mathematics where terminology is imprecise. I think most people treat "indefinite integral" as a synonym for "antiderivative". Many people say "the" indefinite integral or "the" antiderivative even though they know the functions are only unique "up to a constant". If we want to study the symbolic calculations of calculus in a rigorous way, as a manipulation of strings (the way a symbolic algebra computer program would do them), then we'd have to use more precise terminology. As I recall, a mathematician named Ritt did such work. We can investigate what terminology he used.

So should I interpret the indefinite integral of a function as a "set", or as a function with a "variable constant of integration"?
 
  • #27
MohammedRady97 said:
So should I interpret the indefinite integral of a function as a "set", or as a function with a "variable constant of integration"?
?
 
  • #28
MohammedRady97 said:
So should I interpret the indefinite integral of a function as a "set", or as a function with a "variable constant of integration"?

What distinction are you making between a "set" and "a function with a variable constant of integration"? From an abstract point of view, an expression for a function with a "variable constant" defines a set of functions.

How you should think about the symbols in the rules of integration depends on your purposes. If you are trying to develop computer programs (like Mathematica) to do symbolic calculations then you may have to get into very technical jargon. Is that what you are trying to do?
 
  • #29
Stephen Tashi said:
As PeroK mentioned, such rules are not about symbols representing unique functions. The symbol ##\int f(x) \, dx## does not represent a unique function. Such rules don't work if you interpret the antiderivative symbol to be a unique function.

For example, suppose we say that ##\int x \, dx## represents the unique function ##x^2/2 + 1##. The rule you quoted then says ##\int 2x \, dx## is the unique function ##2(x^2/2 + 1) = x^2 + 2##. This would contradict any calculation that said ##\int 2x \, dx = x^2## or ##\int 2x \, dx = x^2 + 5##, etc.

Well, I could say ##\int x dx = \frac{x^2}{2} + C## where ##C## can take on any value. We can then find ##\int 2x dx## as follows:
##\int 2x dx = 2\int x dx = 2[\frac{x^2}{2} + C] = x^2 + 2C##
Now, ##2C## is a constant just as well as ##C##; we might as well call them ##C_1## and ##C_2##; it doesn't matter. What matters is that they're both "constants" of integration (and yes, they can take on many values). There's no problem with this example. 2 is not a problem; 0 is. The product of 0 and anything is 0, so how can we still keep the constant C (or variable) without violating this rule?
 
  • #30
As has been mentioned, ##\int f## is not a single function but an equivalence class of functions. To make this explicit, I'll denote this set as ##[\int f]##. If a function F is an element of ##[\int f]##, we mean that ##F' = f##.

When we say that ##\int cf = c \int f##, we mean that the set ##[\int cf]## and the set ##c[\int f]## are equal. The first set is easy enough to understand: if F is in the set ##[\int cf]##, then ##F'(x) = (cf)(x)## for all x. But what does ##c[\int f]##, a scalar multiplying a set, mean? Your example posits if ##F \in c[\int f]##, then ##F = c\cdot G## where ##G## is some element of ##[\int f]##. It follows then that ##c=0## implies ##F=0##. This leads to the contradiction you noted, so it's not the right interpretation if we want the notation to make sense. If instead we say that if F is an element of ##c[\int f]##, we mean that ##F'(x) = c\cdot f(x)## for all x, there's no problem. If ##c=0##, we have ##F' = 0##, which implies F is constant. Moreover, since ##(cf)(x) = c\cdot f(x)##, it's clear the two sets are equal for any value of c.

If you're familiar with solving differential equations, you can look at it this way as well: Differentiating ##F = \int cf## or ##F = c \int f## yields the same differential equation ##F' = cf##. The general solution to this differential equation consists of a homogeneous part, which is the solution to ##F' = 0##, plus a particular solution ##F_p## where ##F_p' = cf##. If ##c=0##, you lose the particular solution, but you're still left with the homogeneous solution, which is the constant of integration. In other words, the constant ##c## doesn't affect the constant of integration.
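Written out in the bracket notation from the first paragraph (and on a connected domain, per the earlier posts), the resolution of the original question is
$$0 \cdot \Bigl[\int f\Bigr] \;=\; \{\, F : F' = 0 \cdot f \,\} \;=\; \{\, F : F' = 0 \,\} \;=\; \{\, x \mapsto C : C \in \mathbb{R} \,\} \;=\; \Bigl[\int 0 \, dx\Bigr],$$
so multiplying by 0 collapses the particular solution but leaves the whole class of constant functions intact; it does not produce the single function 0.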
 
  • #31
The antiderivative of 0 is not 0x, is it?
If you differentiate 1 it's 0, so if you reverse that process, solution A has to be 1+C and not C on its own.
And for Argument B, wouldn't you destroy the original sense of C, being a constant that you add because it's been removed in the process of differentiation?
So Argument B would be something like 0*x + C = 0 + C = C.
And since in Argument A there is a constant 1 that you add to C, you could remove the 1 as well, since it is included inside C?
That would be my point of view as a 12th grader...
 
  • #32
The undetermined constant C after integration usually has to be found from some equation, in accordance with some boundary condition, constraint, or other condition. It can often be found to be zero that way. You might be confusing multiplication and addition between your two paragraphs.
 
  • #33
I am not quite at the education level to give a rigorous answer based on Riemann integrals. But I do see that case B is invalid because you are not using linearity to pull a constant out of the integral. In this case 0 is not being treated as a constant but as a function. I'm sure we agree you can't pull the function that is being integrated over out of the integral.

Another viewpoint that might explain why the indefinite integral of 0 is C: you can think of the output of an indefinite integral as the sum of the rates of change of an unknown family of functions, which in this case is all of the constant-valued functions, all differing by a constant. A definite integral is the difference of that function evaluated at two points, which in turn is the difference in total rates of change, where C(b) - C(a) = C - C = 0. Technically the constant of integration is always there; we just omit it in the definite case because it is subtracted out.

Thinking in terms of functions being defined by sums of their rates of change (derivatives or total differentials), as opposed to the traditional Riemann picture of area under curves, helped me to understand what is happening a little better. It also gives better intuition for why Taylor series work (the function must be analytic, though).

Sorry for the long-winded post, and I apologize also for not using LaTeX. I will get there, though.
 
  • #34
Tim77 said:
I am not quite at the education level to give a rigorous answer based on Riemann integrals. But I do see that case B is invalid because you are not using linearity to pull a constant out of the integral. In this case 0 is not being treated as a constant but as a function. I'm sure we agree you can't pull the function that is being integrated over out of the integral.
I, for one, don't agree! If the function is a constant function, f(x) = a, where a is a fixed number, then ##\int a \, dx = a \int dx = ax + C##.
 
  • #35
HallsofIvy said:
I, for one, don't agree! If the function is a constant function, f(x) = a, where a is a fixed number, then ##\int a \, dx = a \int dx = ax + C##
In this case using the wrong procedure leads to the correct answer, but you technically can't pull the function out of the integral. You could pull the constant 1 out, though. Your example only works in this case because the function is constant and not equal to zero. In general, though, pulling out the object treated as the function leads to false answers:
##\int e^x \, dx = e^x \int 1 \, dx = xe^x + Ce^x##??
##\int x \, dx = x \int 1 \, dx = x^2 + Cx##??

The form f(x) dx is required for integration to have meaning; you can't separate it, even if f(x) = a and especially if f(x) = 0.
 
  • #36
Tim77 said:
In this case using the wrong procedure leads to the correct answer but you technically can't pull the function out of the integral.

This is false. You can pull constant functions out.
Let ##r \in \mathbb{R}## then define ##f_r (x) = r## and so
##\int f_r(x) \, dx = \int f_r(x) \cdot f_1(x) \, dx = \int r \cdot f_1(x) \, dx = r \int f_1(x) \, dx = f_r(x) \int f_1(x) \, dx##
QED.

You can't do it with non-constant functions, but this thread is not about non-constants and everything you have written about non-constant functions is irrelevant to this thread.

The form f(x) dx is required for integration to have meaning; you can't separate it, even if f(x) = a and especially if f(x) = 0

This is only true for differential forms. For other areas of analysis the dx is nothing more than notation. And even in differential forms ##\int \omega## is a common notation.
 
  • #37
pwsnafu said:
This is false. You can pull constant functions out.
Let ##r \in \mathbb{R}## then define ##f_r (x) = r## and so
##\int f_r(x) \, dx = \int f_r(x) \cdot f_1(x) \, dx = \int r \cdot f_1(x) \, dx = r \int f_1(x) \, dx = f_r(x) \int f_1(x) \, dx##
QED.

You can't do it with non-constant functions, but this thread is not about non-constants and everything you have written about non-constant functions is irrelevant to this thread.
This is only true for differential forms. For other areas of analysis the dx is nothing more than notation. And even in differential forms ##\int \omega## is a common notation.

Yes, you can hold functions "constant" and pull them out of the integral; it is the key to evaluating multiple integrals, but you still have to integrate a function with respect to the variable of integration. And yes, this is very connected to differential forms; they are key to analysis. That aside, an integral requires a function as its argument.
##\int \omega## is actually shorthand for a general form of line integrals or Stokes' theorem.

The omega encapsulates a lot of information, but in order to actually evaluate the integral it has to be a differential form. I will concede that in the second case of the OP's original post it is valid to pull out the zero, but then you are no longer integrating zero with respect to x; you are integrating 1 or k or whatever function plays the role of f(x) in the integral. In case 1 you have ##\int 0 \, dx = C##; in case 2 you have ##0 \int 1 \, dx = 0##; essentially those are two different integrals. I stand corrected if you can show how you evaluate integrals of the form ##\int \omega##, or how about ##\int 3##, without using a differential form and integrating with respect to some parameter. Or how about this: ##\int \emptyset \, dx##?

 
  • #38
Tim77 said:
Yes, you can hold functions "constant" and pull them out of the integral; it is the key to evaluating multiple integrals, but you still have to integrate a function with respect to the variable of integration.

What does this have to do with anything?

And yes, this is very connected to differential forms; they are key to analysis.

Maybe in multivariable differential geometry, but certainly not in integration of one variable which is what we are doing here.

That aside, an integral requires a function as its argument.

Yes we know that. Why are you doing a brain-spew?

I will concede that in the second case of the OP's original post it is valid to pull out the zero, but then you are no longer integrating zero with respect to x; you are integrating 1 or k or whatever function plays the role of f(x) in the integral. In case 1 you have ##\int 0 \, dx = C##; in case 2 you have ##0 \int 1 \, dx = 0##; essentially those are two different integrals.

The co-domain of the indefinite integral is a quotient space. The two integrals in the OP are in the same equivalence class. This has been pointed out in this thread over and over. It doesn't matter if they are "essentially" different or not.

I stand corrected if you can show how you evaluate integrals of the form ##\int \omega##, or how about ##\int 3##, without using a differential form and integrating with respect to some parameter.

You clearly don't understand what I'm saying, do you? In one dimensional calculus, there is no inherent reason to write down dx for an indefinite integral. We do it for historical reasons. If I write ##f: \mathbb{R}\to\mathbb{R}## and ##\int f## then it is clear I am interested in the anti-derivative of some one dimensional function f. I haven't declared a variable for f, but it is clear that I am integrating with respect to it because that is the only choice. There is no need to write down the variable of a one dimensional constant function. Further, in one dimensional calculus, the "dx" is not part of a 1-form. It is part of the integral. That is, the indefinite integral is the map
##f \mapsto [\int f(x) \, dx]##
and not
##f(x) \, dx \mapsto [\int f(x) \, dx].##

In the multivariate case we need to worry about which variable to integrate with regard to, so differential forms are useful. You are arguing that differential forms are necessary in multivariable calculus therefore they are necessary in one-variable calculus. That is not true.

And I haven't even started to talk about the product measure where you see notation like:
"Let ##(X_1, \Sigma_1, \mu_1)## and ##(X_2, \Sigma_2, \mu_2)## be measure spaces and ##f: X_1\times X_2 \to\mathbb{R}##. Then ##\int_{X_1} f \, d\mu_1##..."
Some authors like to write down ##\int_{X_1} f \, d\mu_1(x_1)##; others drop the ##(x_1)## because it is obvious that ##\mu_1## is used to integrate with respect to ##x_1##.
 
  • #39
pwsnafu said:
What does this have to do with anything?
Maybe in multivariable differential geometry, but certainly not in integration of one variable which is what we are doing here.
Yes we know that. Why are you doing a brain-spew?
The co-domain of the indefinite integral is a quotient space. The two integrals in the OP are in the same equivalence class. This has been pointed out in this thread over and over. It doesn't matter if they are "essentially" different or not.
You clearly don't understand what I'm saying, do you? In one dimensional calculus, there is no inherent reason to write down dx for an indefinite integral. We do it for historical reasons. If I write ##f: \mathbb{R}\to\mathbb{R}## and ##\int f## then it is clear I am interested in the anti-derivative of some one dimensional function f. I haven't declared a variable for f, but it is clear that I am integrating with respect to it because that is the only choice. There is no need to write down the variable of a one dimensional constant function. Further, in one dimensional calculus, the "dx" is not part of a 1-form. It is part of the integral. That is, the indefinite integral is the map
##f \mapsto [\int f(x) \, dx]##
and not
##f(x) \, dx \mapsto [\int f(x) \, dx].##

In the multivariate case we need to worry about which variable to integrate with regard to, so differential forms are useful. You are arguing that differential forms are necessary in multivariable calculus therefore they are necessary in one-variable calculus. That is not true.

"Sheesh sorry about the "brain spew" You speak like multivariate calculus is some how a different subject when it is a generalization of "single variable calculus to higher dimensions is it not?

The first paragraph of the article you posted states:
"In the mathematical fields of differential geometry and tensor calculus, differential forms are an approach to multivariable calculus that is independent of coordinates. Differential forms provide a unified approach to defining integrands over curves, surfaces, volumes, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics.
For instance, the expression f(x) dx from one-variable calculus is called a 1-form."

By any means, it seems you are conflating the use of notation with actually evaluating the integral. Arguments about differential forms aside, my point was that whichever "constant" you pull out, you still have to integrate the function over a domain. If you pull out the zero, you reduce your class from the general to the trivial case. Oh well, this is my last try to get my point across. I expect you'll pipe in with the last word and some more insults, ad hominem, etc. I feel sorry for your "students" if you are as condescending to them.
 
  • #40
Tim77 said:
I expect you'll pipe in with the last word and some more insults, ad hominem, etc. I feel sorry for your "students" if you are as condescending to them.

1. If I offended you, I apologize. I do admit I was too rash in the post.
2. If you feel you were insulted, use the Report button.
 
  • #41
To return to the main topic, vela said it well. The meaning of the rule depends on how you interpret the notation of "a constant times a set". One interpretation is that it denotes a second set consisting of all things that can be formed by multiplying the constant by things in the first set. However, there is no convention that says this is the correct interpretation in all contexts.
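In set-builder form, the two readings of "a constant times a set" are
$$c \cdot \Bigl[\int f\Bigr] = \bigl\{\, c\,F : F \in [\textstyle\int f] \,\bigr\} \qquad\text{versus}\qquad c \cdot \Bigl[\int f\Bigr] = \bigl\{\, G : G' = c f \,\bigr\},$$
and only the second reading keeps ##\int c f(x) \, dx = c \int f(x) \, dx## valid when ##c = 0##.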
 
  • #42
Tim77, are you denying the existence of "constant functions"? That is, are you saying that "f(x) = 1" for all x is NOT a "function"?
 
  • #43
Just an idea: maybe 0*C will not equal zero. Because the integrals of different functions will have different constants and this constant has no reason to be bounded, couldn't that constant be infinity, such that 0*C = A (after taking some limit)? I don't know, it's just an idea.
 
  • #44
Tim77 said:
"Sheesh sorry about the "brain spew" You speak like multivariate calculus is some how a different subject when it is a generalization of "single variable calculus to higher dimensions is it not?
Certainly multivariate calculus is a generalization of single variable calculus, but going there takes this thread well off topic.
Tim77 said:
The first paragraph of the article you posted states:
"In the mathematical fields of differential geometry and tensor calculus, differential forms are an approach to multivariable calculus that is independent of coordinates. Differential forms provide a unified approach to defining integrands over curves, surfaces, volumes, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics.
For instance, the expression f(x) dx from one-variable calculus is called a 1-form."

By any means, it seems you are conflating the use of notation with actually evaluating the integral. Arguments about differential forms aside, my point was that whichever "constant" you pull out, you still have to integrate the function over a domain. If you pull out the zero, you reduce your class from the general to the trivial case. Oh well, this is my last try to get my point across. I expect you'll pipe in with the last word and some more insults, ad hominem, etc.
What insults? What ad hominem arguments? The only things I could see that come remotely close are when pwsnafu said, "Why are you doing a brain-spew?" and "You clearly don't understand what I'm saying, do you?"
Your post #37 would fit the description of "brain spew" IMO. And while his second question comes off as somewhat condescending, I fail to see anything ad hominem about it.
Tim77 said:
I feel sorry for your "students" if you are as condescending to them.

The OP's question has been asked and answered, so I am closing this thread.
 
