Maybe this helps
To Mike1's original question:
What I always have in mind when I think of definite integrals is this (someone has already told me that I am wrong, but I don't care).
I define the integral sign to mean
∫_a^b f(x) = f(a) + f(a+dx) + f(a+2dx) + ... + f(b-2dx) + f(b-dx) + f(b)
in direct analogy with the ordinary summation sign.
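(Spelled out, the analogy I mean: the ordinary summation sign gives g(0) + g(1) + g(2) + ... + g(N) when you sum g(n) from n = 0 to N, a step of exactly 1 between terms; my integral sign is the same recipe with that step shrunk to dx.)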
(INTERMEZZO (skip it if you like):
The ordinary summation sign uses a discrete index n, with n=0,1,2,...
The integral sign uses an index x, which is not a discrete index but a continuous one. How do we generalize from a discrete index to a continuous index? Well, the best we can do is to invent a "dx" which is supersmall by definition, and this way we can interpret my previous definition of the integral sign almost "as if" it "slides" through the continuum. When you think about it, this is kind of weird; didn't Cantor prove that there are more real numbers than natural numbers? Anyway, I am not smart enough for that stuff. END OF INTERMEZZO)
The "dx" here is Leibniz's and Newton's infinitesimal.
Now, you may think that this is useless, because you never see
∫_a^b f(x)
without the "dx"; that is, you always see
∫_a^b f(x) dx
Okay, let's do that then.
∫_a^b f(x) dx = f(a)dx + f(a+dx)dx + f(a+2dx)dx + ... + f(b-2dx)dx + f(b-dx)dx + f(b)dx
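(If you want to see this sum do something, here is a quick check in Python with a small but finite dx standing in for the infinitesimal; a computer can't add infinitely many terms, and the integrand x^2 with limits 0 and 1 is just my own example, where the exact answer is 1/3.)

# Quick check of the sum f(a)dx + f(a+dx)dx + ... + f(b)dx with a small
# but finite dx standing in for the infinitesimal.
def f(x):
    return x ** 2              # my example integrand; exact integral on [0, 1] is 1/3

a, b = 0.0, 1.0
n = 10_000
dx = (b - a) / n

total = 0.0
for i in range(n + 1):         # x runs over a, a+dx, a+2dx, ..., b
    total += f(a + i * dx) * dx

print(total)                   # about 0.33338, close to the exact 1/3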
With a genuinely infinitesimal dx, though, that is an infinite number of terms to add; it looks hopeless. But hey, Leibniz found a trick. What if for some F(x) we somehow know that
dF(x)/dx=(F(x+dx)-F(x))/dx=f(x)
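(A concrete example of such a pair, just to have something definite in mind: take F(x)=x^2, then (F(x+dx)-F(x))/dx = (x^2 + 2x dx + dx^2 - x^2)/dx = 2x + dx, which is just 2x up to an infinitesimal, so this F works for f(x)=2x.)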
You might wonder how that could possibly simplify things, but I am going to do it anyway: replacing f(x) by (F(x+dx)-F(x))/dx we get
∫_a^b f(x) dx = [(F(a+dx)-F(a))/dx]dx + [(F(a+2dx)-F(a+dx))/dx]dx + [(F(a+3dx)-F(a+2dx))/dx]dx + ... + [(F(b-dx)-F(b-2dx))/dx]dx + [(F(b)-F(b-dx))/dx]dx + [(F(b+dx)-F(b))/dx]dx
Now, because the "dx" in
∫_a^b f(x) dx
is chosen by me (and if you are smart you choose the same) to be exactly the same as the "dx" in
(F(x+dx)-F(x))/dx
the two "dx's" cancel each other out, so that
∫_a^b f(x) dx = F(a+dx) - F(a) + F(a+2dx) - F(a+dx) + F(a+3dx) - F(a+2dx) + ... + F(b-dx) - F(b-2dx) + F(b) - F(b-dx) + F(b+dx) - F(b)
The F(a+dx) term at the beginning can be found again three positions further along, only now with a minus sign, so the two add up to zero. The same goes for most of the terms, and, as you can check for yourself, only two terms are left:
∫_a^b f(x) dx = F(b+dx) - F(a)
A miracle has happened right in front of your eyes! No need to add together an infinite number of terms; all we have to do is know the antiderivative of f(x).
You might be bothered by the fact that instead of F(b)-F(a) my derivation actually gives F(b+dx)-F(a), but the difference, F(b+dx)-F(b) = f(b)dx, is only infinitesimal.
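You can also watch the telescoping happen on a computer, again with a finite dx standing in for the infinitesimal; the F(x) = x^3/3 below is just my own example of an antiderivative:

# Watch the telescoping happen with a finite dx: the sum of the differences
# F(x+dx) - F(x) over the grid x = a, a+dx, ..., b collapses to F(b+dx) - F(a).
def F(x):
    return x ** 3 / 3          # my example antiderivative, so f(x) = x**2

a, b = 0.0, 1.0
n = 10_000
dx = (b - a) / n

telescoped = sum(F(a + (i + 1) * dx) - F(a + i * dx) for i in range(n + 1))

print(telescoped)              # equals F(b+dx) - F(a) up to rounding
print(F(b + dx) - F(a))        # about 0.33343
print(F(b) - F(a))             # exactly 1/3; the gap is about f(b)*dx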
I warned you: somebody "authoritative" already told me that my derivation is wrong; somehow, in a way I do not understand, it has to do with the mathematical rigor of me using the infinitesimal "dx". But I do know that the great, great Riemann used his brains to crack this one, so yeah, probably I am oversimplifying things too much, like always, but when I have kids someday, this is the way I am going to explain it to them.
So what do you think of my definition? I have a lot more stupid stuff. For instance, I define the Dirac delta function to be
dirac(x-a) = 1/dx  if x = a
             0     if x ≠ a
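In the same finite-dx spirit, you can check on a computer that this thing does its job: ∫ dirac(x-a) g(x) dx should pick out g(a). The g(x) = cos(x) and a = 0.5 below are just my own example:

# Finite-dx stand-in for the delta "function": 1/dx at the one grid point
# where x = a, 0 everywhere else, so that sum(dirac(x-a) * g(x) * dx) ~ g(a).
import math

dx = 1e-4
a = 0.5                        # my example location of the spike

def dirac(y):
    return 1.0 / dx if abs(y) < dx / 2 else 0.0

def g(x):
    return math.cos(x)         # my example test function

total = sum(dirac(i * dx - a) * g(i * dx) * dx for i in range(10_001))  # grid on [0, 1]

print(total)                   # about 0.8776
print(g(a))                    # cos(0.5) = 0.8775..., so the delta picked out g(a)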