ObsessiveMathsFreak said:
I couldn't find my old notes, but I believe this was the way I was introduced to the concept of definite integration. I wish I could better draw pictures, but http://img337.imageshack.us/img337/4536/fundamentalhi1.png .
Let the operation \int f(x) dx be that which finds the family of antiderivatives to f(x), i.e. \int f(x) dx = F(x) + C \Rightarrow \frac{dF(x)}{dx} = f(x), F(x) is the "principal" antiderivative and C is an arbitrary constant.
OK. We want to find the area under the curve of f(x) between the points a and b, with a<b. Denote the function that gives the area between the points a and an arbitrary point x, as A_a(x).
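The quoted post presumably continues along the standard lines (this sketch is my reconstruction, not part of the quote):

```latex
% A thin strip of width h under f(x) has area approximately f(x) h, so
A_a(x+h) - A_a(x) \approx f(x)\,h
\;\Longrightarrow\;
\frac{dA_a(x)}{dx} = f(x)
\;\Longrightarrow\;
A_a(x) = F(x) + C .
% The condition A_a(a) = 0 fixes C = -F(a), giving the area from a to b as
A_a(b) = F(b) - F(a).
```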
Hi again. Thanks for your input. I did read through it all and it is very beneficial to me to have this kind of discussion with mathematicians (as opposed to staying within the circle of non-mathematical physicists).
I guess the question is: what starting point does one choose for the definition of the integral? As you know, I am used to seeing it defined as a Riemann sum (and then proving the interpretation as the area under the curve, or proving the fundamental theorem of calculus, starting from that).
You have a different starting point, but I am a bit confused by this post: first you define integration as the operator giving the antiderivative, and then you seem to *define* it as the operator that gives the area under the curve. I know that one can derive each from the other, but I am not clear about which one you see as the true starting point.
I thought that the operation "integration gives the antiderivative" was your starting point.
My problem with this is that, it seems to me, it is less general than the definition as a Riemann sum. I mean, many integrals can be expressed as infinite sums written down directly from the Riemann sum approach, but have no closed-form expression for the antiderivative. So if one definition (the Riemann sum) works all the time and the other does not, I would think that the first should be used as the fundamental definition.
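A concrete example of this point: e^{-x^2} has no elementary antiderivative, yet the Riemann sum definition still assigns a value to its definite integral. A minimal sketch (the function choice and the comparison against the known erf value are just for illustration):

```python
import math

def riemann_midpoint(f, a, b, n):
    """Midpoint Riemann sum: sum of f(midpoint_i) * dx over n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# e^{-x^2} has no elementary antiderivative, but the Riemann sum
# converges to the definite integral anyway.
approx = riemann_midpoint(lambda x: math.exp(-x * x), 0.0, 1.0, 10_000)

# For comparison: the integral equals (sqrt(pi)/2) * erf(1),
# a value only expressible through the special function erf.
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(approx, exact)
```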
Of course, as others have pointed out, in *practice* one does not use the summation definition to calculate most integrals. I agree with this, but the fact that one usually uses the antiderivative to evaluate integrals does not mean that it is necessarily the more fundamental definition.
The way I think about this is a bit similar to the rule for differentiating, say, x^n. I think of the definition of the derivative as the usual limit, as \Delta x \to 0, of (f(x + \Delta x) - f(x))/\Delta x.
Now, of course, if I differentiate 40x^7 + 6x^18 - x^31, I do NOT apply the limit definition; I use the usual rule for powers of x.
So when it comes to doing explicit calculations, the limit definition is almost never used. But it is still the fundamental definition; the fact that the derivative of x^n is n x^(n-1) is just a consequence.
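This relationship is easy to check numerically: the difference quotient from the limit definition approaches the power-rule answer as the step shrinks. A small sketch (the sample point x = 1.3 is an arbitrary choice):

```python
# Difference quotient (f(x+h) - f(x)) / h for f(x) = x^7 at x = 1.3,
# compared against the power-rule answer 7 * x^6.
f = lambda x: x ** 7
x0 = 1.3
power_rule = 7 * x0 ** 6

for h in (1e-2, 1e-4, 1e-6):
    dq = (f(x0 + h) - f(x0)) / h
    # The gap |dq - power_rule| shrinks roughly linearly with h.
    print(h, dq, abs(dq - power_rule))
```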
Similarly, the fact that the integral can be shown to correspond to the antiderivative is something that I see as *following* (in a simple way) from the definition in terms of a Riemann sum. In practice I of course find the antiderivative when I evaluate simple (= doable in terms of elementary functions) integrals, but in my mind it remains something that can be proven from the Riemann sum definition, a useful "shortcut" (like bringing down the exponent and decreasing it by 1 in the case of the derivative of x^n). But I realize from this thread that I am thinking very differently from the way you do.
Now, it seems to me that mathematicians prefer to *define* integration as giving the antiderivative and then to see the Riemann sum as something secondary (and maybe not even necessary).
Hurkyl has even started to show me how *derivatives* could be defined in terms of axioms (such as the chain rule, linearity, etc.) without introducing the definition as a limit.
My mental block with all this is twofold.
First, there are many things about differentiation and integration that are fairly easy to understand using the Riemann sum approach (or the limit approach, for derivatives) that are not at all obvious without them. For example, it's not clear to me how to get from "integration = finding the antiderivative" alone to the area-under-the-curve view, among many other things. I am not saying that it's impossible to obtain all the results I know by proceeding that way, but it's not clear to me, and it seems that more and more axioms may need to be added to cover everything (as in proving that the derivative of ln(x) is 1/x).
The second problem is that, considering integration for example, I simply do not see at all (even in principle) how to apply the more formal approach of integration = finding the antiderivative to even the simplest type of physical application. For example, as I have mentioned: finding the E field produced by an infinite line of charge, starting from the knowledge of the E field produced by a single point charge.
If someone could show me how to do this without *starting* from a Riemann sum, I would be grateful. However, it seems to me that it is impossible to do without starting from a Riemann sum.
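To make the line-of-charge setup concrete, here is the Riemann sum written out numerically. The constants k = lambda = d = 1 and the finite half-length L are illustrative choices, not part of the physics:

```python
# Riemann-sum setup for the E field a distance d from a long line of charge:
# chop the line into segments dz, treat each as a point charge lambda*dz, and
# sum the perpendicular field components k*lambda*dz * d / (d^2 + z^2)^(3/2).
k, lam, d = 1.0, 1.0, 1.0
L, n = 1000.0, 200_000           # half-length; large enough to mimic "infinite"
dz = 2 * L / n

E = sum(k * lam * dz * d / (d * d + z * z) ** 1.5
        for z in ((i + 0.5) * dz - L for i in range(n)))

exact = 2 * k * lam / d          # closed-form result for an infinite line
print(E, exact)
```

The sum converges to 2k\lambda/d as the segments shrink and L grows, which is exactly what the antiderivative route gives after the fact.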
I think that anybody having done even introductory-level calculus-based physics would agree that the Riemann sum (and the idea of very, very small "pieces", which I have called infinitesimals and for which I have been ridiculed) is the only way to think in the context of any physical application.
I would be curious about how a mathematician would go about *setting up the integral* representing, say, the total mass of a sphere with some mass density \rho(r). How do mathematicians show how to do this calculation without starting from a Riemann sum: thinking in terms of an "infinitesimal" volume element, small enough that one can approximate the mass density within it as constant (which is what I call an infinitesimal volume element), and then summing over all the volume elements, i.e. doing a Riemann sum? How do mathematicians do the calculation otherwise?
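For concreteness, the shell-by-shell Riemann sum for the sphere's mass looks like this. The density profile \rho(r) = \rho_0 (1 - r/R) is just an illustrative choice:

```python
import math

# Mass of a sphere of radius R with radial density rho(r), set up as a
# Riemann sum over thin shells of thickness dr (shell volume ~ 4*pi*r^2*dr,
# with rho treated as constant across each thin shell).
def sphere_mass(rho, R, n):
    dr = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                        # midpoint radius of the shell
        total += rho(r) * 4 * math.pi * r * r * dr  # "infinitesimal" shell mass
    return total

rho0, R = 1.0, 1.0
M = sphere_mass(lambda r: rho0 * (1 - r / R), R, 100_000)

# The antiderivative route gives the same answer in closed form:
# integral of rho0*(1 - r/R)*4*pi*r^2 dr from 0 to R = pi*rho0*R^3/3.
exact = math.pi * rho0 * R ** 3 / 3
print(M, exact)
```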
So in the end the questions I have are:
A) Is it possible to simply define differentiation as the usual limit and integrals as Riemann sums? Is there any problem with that?
B) Is it then just a matter of taste to instead define integration as an operation that gives the antiderivative? If so, how does one define the antiderivative for integrals that do not lead to expressions writable in closed form (without, of course, getting into circular reasoning)? Can someone show me a general procedure that would define the antiderivative in such a case without involving a Riemann sum?
And can one also work out everything about derivatives without using the limit definition?
C) In actual (physical) applications, such as finding the E field of continuous charge distributions, is there any alternative to the Riemann sum approach?
Thanks again for the very stimulating exchanges...
Patrick