TL;DR: The sense in which the term is used is often unclear to me from the context.
I've encountered a few different definitions of "indefinite integral," denoted ##\int f(x) \, dx##:
- any particular antiderivative ##F:\mathbb{R} \to \mathbb{R}, F'(x) = f(x)##
- the set of all antiderivatives ##\{F:\mathbb{R} \to \mathbb{R} \mid F'(x) = f(x)\}##
- a "canonical" antiderivative
- any expression of the form ##\int_a^x f(t) \, dt##, where ##a## is in the domain of ##f## and ##f## is continuous
$$\int \left( af + bg \right) \, dx = a \int f \, dx + b \int g \, dx$$
If I interpret this statement in terms of particular antiderivatives, then it's true for any choice of the antiderivatives ##\int \left( af + bg \right) \, dx##, ##\int f \, dx##, ##\int g \, dx##, but only up to an additive constant, which the notation doesn't mention.
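The "up to a constant" reading can be made precise with a short derivation. If ##F' = f## and ##G' = g##, then by linearity of differentiation
$$\frac{d}{dx}\left( aF + bG \right) = aF' + bG' = af + bg,$$
so ##aF + bG## is one antiderivative of ##af + bg##, and any other antiderivative differs from it by an additive constant (on an interval).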
If I interpret this statement in terms of sets of antiderivatives, then it's also true, but it uses addition and multiplication in ways that were never defined, since they mix scalars with equivalence classes of functions that have the same derivative.
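One way to make the set reading precise (my own suggestion, not something I've seen spelled out) is to define the operations elementwise:
$$a\,S + b\,T = \{\, aF + bG : F \in S,\ G \in T \,\}.$$
With ##S = \{F + C : C \in \mathbb{R}\}## and ##T = \{G + C : C \in \mathbb{R}\}##, both sides of the linearity statement equal ##\{\, aF + bG + C : C \in \mathbb{R} \,\}## whenever ##a## and ##b## are not both zero. In the degenerate case ##a = b = 0## the equality fails: the left side is the set of all constant functions, while the right side collapses to ##\{0\}##.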
If I interpret this statement in terms of "canonical antiderivatives", then I'm not sure whether it's even right or even wrong, because "canonical antiderivative" isn't fully defined or even shown to exist. I can imagine defining one by constructing a partial inverse of the differentiation operator ##\frac{d}{dx}##, in the same way we define a partial inverse of the square function. We can make ##\frac{d}{dx}## surjective by restricting the codomain to functions that have antiderivatives. But to make ##\frac{d}{dx}## injective, we have to restrict the domain such that it contains exactly one function from every equivalence class of functions that have the same derivative.

Isn't that an invocation of the axiom of choice, which is often not assumed in mathematical discourse? Even if I assume such selections exist, it still isn't clear how to specify one. If I just forget about trying to specify one, though, I think it's easy to prove that all of these "partial inverses" do indeed have the property of linearity. (Several of the computer algebra systems I've used implicitly define indefinite integration this way, because they output only a single function when you integrate.)
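To illustrate the CAS behavior mentioned above: SymPy's `integrate`, for instance, returns a single representative antiderivative with no "+ C". For simple integrands, linearity then holds exactly for the representatives it picks (a sketch, not a guarantee SymPy makes in general, since it may return representatives that differ by a constant):

```python
from sympy import symbols, integrate, cos

x = symbols('x')

# SymPy returns one representative antiderivative, with no "+ C":
F = integrate(x**2, x)   # x**3/3, not x**3/3 + C
print(F)

# For these simple integrands, linearity holds exactly for the
# representatives SymPy happens to pick:
lhs = integrate(2*x**2 + 3*cos(x), x)
rhs = 2*integrate(x**2, x) + 3*integrate(cos(x), x)
assert lhs == rhs
```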
If I interpret the statement in terms of definite integrals with a variable upper bound, then I guess it's just a weaker statement than the one about antiderivatives: by the Fundamental Theorem of Calculus, ##\int_a^x f(t) \, dt## is the particular antiderivative of ##f## that vanishes at ##a##, so the functions that can be expressed in this way are just a subset of the set of all antiderivatives of continuous functions.
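One nice feature of the variable-upper-bound reading is that linearity holds exactly rather than merely up to a constant, because both sides vanish at ##x = a##. A quick numerical sanity check (a sketch; `definite` is my own helper based on the composite trapezoidal rule, not a library function):

```python
import math

def definite(f, a, x, n=10_000):
    """Approximate the definite integral of f from a to x
    with the composite trapezoidal rule."""
    h = (x - a) / n
    s = 0.5 * (f(a) + f(x)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

a, x = 0.0, 1.3
lhs = definite(lambda t: 2 * math.sin(t) + 3 * math.exp(t), a, x)
rhs = 2 * definite(math.sin, a, x) + 3 * definite(math.exp, a, x)
assert abs(lhs - rhs) < 1e-9  # equal up to floating-point rounding
```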
So what is usually meant, and what is the best way to think about it when solving calculus problems?
Edit: wording