Any proof for the definition of the definite integral

Summary:
The discussion revolves around the definition and proof of the definite integral, particularly in relation to Riemann sums and anti-differentiation. Participants clarify that definitions do not require proof, but the validity of the definite integral's definition can be explored through calculus courses. They emphasize that while continuous functions are integrable, not all functions meet this criterion, and understanding the relationship between area under the curve and the definite integral is crucial. The conversation also touches on Simpson's rule, noting that it approximates areas under curves using parabolas and that the fundamental theorem of calculus provides a straightforward proof linking integrals and anti-derivatives. Overall, the thread highlights the importance of foundational calculus concepts in grasping the nature of integrals.
  • #31
The tone of my responses generally reflects the tone I perceive in a thread, which may or may not accurately reflect the intended tone of the posters. :smile:


But the thing about the number of infinitesimals inside a finite interval being uncountable, well, for now I do not get that.

The basic idea is that for each point x in the interval, there will be at least one term infinitely close to x. Since there are uncountably many real numbers in any interval, your sum must have uncountably many terms.


The way you are pressing on and on about rigour will not help the first time student at all.

Well, one should know the right way to do things; while it is important to get an intuitive grasp of the concepts in question, it is also important to learn how to translate intuition into rigor, and to learn how to apply rigor when intuition fails, or worse, misleads.

"A little learning is a dangerous thing"

When learning any concept (not just in mathematics), it is just as important to learn how things go wrong as it is to learn how things go right.


Anyways, the main point I'm trying to make is that your presentation of the integral is not what it "is"; it's what it "essentially is".
 
  • #32
On the one hand I am tired of having to sit in this stupid internet café, but on the other hand, I cannot let it go, and your remark about non-associativity got me thinking. I think I might now understand better what one means when one says that dx is not a real number.

In some maths book I once saw a sum where, if one put the terms in a different order, the outcome would also be different. But as far as I remember, there are at least two requirements for that to happen:

1) The sum needs to add up an infinite number of terms
2) The terms cannot all have the same sign; some of them must be positive, some of them negative

Now, the terms inside this specific summation (sorry I can't produce an example, but I think you know what I am talking about) are all real numbers. And addition of real numbers, by definition, always commutes (a+b = b+a) and associates ((a+b)+c = a+(b+c)). But it turns out that even for real numbers, rearranging can change the outcome when you have an infinite number of them. BUT ONLY IN THE CONTEXT OF AN INFINITE NUMBER OF TERMS. One could be difficult and demand that real numbers always behave that way, and that therefore the terms in the summation are not real numbers. I would never use those words to describe this phenomenon, because it confuses me. I would just keep calling those terms real numbers.

Now, infinitesimals used in integrals always appear in the context of an ''infinite number of terms'', because you need an infinite number of infinitesimals to do integrals. Integrals have that context built into them. So because of that infinity game you play, you cannot guarantee that rearranging is harmless. One could say that the infinitesimals are not behaving like real numbers, or put it even more strongly and say that dx's are not real numbers, which is just a way of giving words to a mathematical idea. To me, one expresses oneself better by saying that, YES, the dx's are real numbers, BUT dx's used inside integrals always come with an infinite number of terms, SO whatever holds for real numbers when one has infinitely many of them, namely the breakdown of rearrangement, must also hold for the infinitesimals in the context of integrals.

If one insists that real numbers always associate, then the integral is a case where one cannot call the dx's real numbers.

My definition, which sort of gets the spirit of integration, says that

\int_a^b f(x)\,dx = [F(a+dx) - F(a)] + [F(a+2dx) - F(a+dx)] + \dots + [F(b+dx) - F(b)]

We see that:
1) There are an infinite number of terms to be added
2) The terms do not all have the same sign
For me to get to the statement

\int_a^b f(x)\,dx equals F(b+dx) - F(a)

I would have to reshuffle the terms before I can cancel most of them with each other. But that is the thing, I am not allowed to ''just'' reshuffle them.
Anyways, am I right that saying ''dx's are not real numbers'' is bad terminology? Surely one must acknowledge that bad terminology and lousy notation can hold back advances in understanding.

But if what I am saying is true, then the fact that (dx/dy)(dy/dz)(dz/dx) equals -1 and not +1 for F(x,y,z) = 0 cannot be attributed to the other fact that, in your terminology, dx's are not real numbers, because in differentiation you do not need an infinite number of infinitesimals to get results; you just need a few of those dx's.
So the flipping of the sign must be because of something else. Could the reason perhaps be what I remarked earlier about partial differentiation, that (dx/dy) implicitly keeps z constant and (dz/dy) implicitly keeps x constant, and that because of the two different situations when performing the differentiations one cannot simply cancel the two dx's?
Or did I not understand all the reasons for people to call dx not a real number and are those reasons that I do not yet comprehend responsible for this peculiar flipping of the sign?
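As a side note, the sign flip can at least be checked numerically. The sketch below is my own illustration, not from the thread: it uses a hypothetical linear constraint F(x, y, z) = x + 2y + 3z = 0, solves for each variable in terms of the other two, and estimates the three partial derivatives by central differences:

```python
# Numerical check of the triple product rule for a constraint F(x, y, z) = 0.
# Hypothetical example surface: F(x, y, z) = x + 2*y + 3*z, chosen so each
# variable is a linear function of the other two.

def dx_dy_at_const_z(y, z, h=1e-6):
    # x(y, z) = -(2*y + 3*z); central difference in y with z held fixed
    x = lambda y_, z_: -(2 * y_ + 3 * z_)
    return (x(y + h, z) - x(y - h, z)) / (2 * h)

def dy_dz_at_const_x(z, x, h=1e-6):
    # y(z, x) = -(x + 3*z) / 2; central difference in z with x held fixed
    y = lambda z_, x_: -(x_ + 3 * z_) / 2
    return (y(z + h, x) - y(z - h, x)) / (2 * h)

def dz_dx_at_const_y(x, y, h=1e-6):
    # z(x, y) = -(x + 2*y) / 3; central difference in x with y held fixed
    z = lambda x_, y_: -(x_ + 2 * y_) / 3
    return (z(x + h, y) - z(x - h, y)) / (2 * h)

product = (dx_dy_at_const_z(1.0, 1.0)
           * dy_dz_at_const_x(1.0, 1.0)
           * dz_dx_at_const_y(1.0, 1.0))
print(product)  # close to -1, not +1
```

Each factor here is negative (-2, -3/2, -1/3), which is how the product comes out at -1 even though naively cancelling the dx's would suggest +1: each derivative is taken with a different variable held constant.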

By the way, what were Newton and Leibniz actually thinking when they did not know of non-standard analysis?

Do not interpret any undertone in my writing as ill-wishing towards you or anybody. Read it as me hating my own failure to understand. I am grateful for any replies.
 
  • #33
The reason you shouldn't call dx, infinitesimals or whatever, real numbers is that they aren't real numbers. The same way that 2x3 matrices aren't real numbers, the same way that the functions on the unit disc aren't real numbers. They aren't. Nothing to do with the sum being an infinite one. And you mean, seeing as you think notation is important, that addition on the real numbers is commutative and associative, not that the "real numbers associate" (back-formed verbs are annoying at the best of times).

The symbol \int F(x) dx does not mean "add up anything with dx's in it" at all; it denotes the integral of F, in whatever sense of integral we are using -- usually Riemann, where it involves upper and lower sums, partitions, and such.

Have you noticed how lots of your statements begin 'if what I think were true...'? Well, as a general rule it isn't true, no matter how nice it seems in your opinion. The dx's in ordinary analysis denote a limiting process from some \delta x as it tends to zero. As we've said, you've got what the concept of integration is getting at, but it isn't how the integral really works.
 
  • #34
The easiest example of nonassociativity is:

(1 + -1) + (1 + -1) + ... = 0
1 + (-1 + 1) + (-1 + 1) + ... = 1
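The two groupings can be imitated with finitely many groups; this tiny sketch (an illustration, not a proof) just sums the grouped terms directly:

```python
# Grandi's series 1 - 1 + 1 - 1 + ... : grouping the same terms two
# different ways gives two different convergent series of groups.
n_pairs = 1000

# (1 - 1) + (1 - 1) + ... : every group is 0
grouping_a = sum((1 - 1) for _ in range(n_pairs))

# 1 + (-1 + 1) + (-1 + 1) + ... : a leading 1, then groups of 0
grouping_b = 1 + sum((-1 + 1) for _ in range(n_pairs))

print(grouping_a, grouping_b)  # 0 1
```

Every partial sum of the first grouping is 0 and every partial sum of the second is 1, so the two grouped series converge to different values even though the underlying terms are identical.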



Anyways, the trick to dealing analytically with infinity and infinitesimals is to use limits; they're what makes (standard) analysis tick. Since you can't (correctly) talk about that infinite sum you like, the trick is to stick to finite cases and take the limit as the finite cases "approach" the infinite case.

For instance, it is true that:

\int_a^b f(x) \, dx = \lim_{n \rightarrow \infty} \sum_{i=0}^{n-1} f\left(a + i \frac{b-a}{n}\right) \frac{b-a}{n}

(fine print: this is only true if f is a bounded, Riemann-integrable function)
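As a numerical sketch of such a limit, here is a plain left-endpoint Riemann sum for the example f(x) = x^2 on [0, 1] (my choice of example, not from the thread), whose exact integral is 1/3:

```python
# Left-endpoint Riemann sums for f(x) = x**2 on [0, 1]; the exact
# integral is 1/3, and the sums approach it as n grows.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

approx = riemann_sum(lambda x: x ** 2, 0.0, 1.0, 100_000)
print(approx)  # close to 1/3
```

For this f the error of the left-endpoint sum shrinks like 1/(2n), so increasing n makes the finite case "approach" the infinite one in exactly the sense of the limit above.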
 
  • #35
Of course you are all more right than I am. I have just tried to be honest with myself; I did not want to deceive myself into believing that I really understood it all, like ''Look at me successfully integrating sin(x), so I must understand''. I tried to formulate my own thoughts on the subject into words as clearly as possible, which is difficult. Sometimes you have to be more aggressive to learn something and keep asking why this and why that. Sometimes it is easier to fake understanding, which can be very easy: just repeat the language you've heard over and over again. Anyway, my mind is sort of a blank right now; I do not know how to respond. I've learned something, I guess, though it certainly did not satisfy my mind.
 
  • #36
It is difficult; it took a while before mathematicians came up with limits to allow them to rigorously deal with these things... and limits do seem awfully obtuse at first. But, the more you use them, the more sense they make.

And mathematicians like things to behave nicely too, so we define special classes of things that do behave nicely. For instance, an infinite series is "absolutely convergent" iff it's commutative and associative, i.e. its sum survives rearranging and regrouping. E.g.

1 + -1/2 + 1/4 + -1/8 + 1/16 + -1/32 + ...

is an absolutely convergent series, because no matter how you rearrange and group these terms, you still get a sum of 2/3. However,

1 + -1 + 1 + -1 + 1 + -1 + ...

is not absolutely convergent, because it fails to be commutative and associative.

And it turns out that there is a simple criterion for a series to be in this class (and this criterion is used as the definition): a series is absolutely convergent iff the series converges when you replace each term with its absolute value.


To relate this to what you said earlier, it turns out a (convergent) series is not absolutely convergent iff the sum of its positive terms and the sum of its negative terms are both divergent.
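Both phenomena can be seen numerically. The sketch below (my addition, using the standard alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..., which is convergent but not absolutely convergent) compares the natural ordering with a rearrangement that takes two positive terms per negative term; that rearrangement is known to converge to (3/2) ln 2 instead of ln 2:

```python
import math

# The alternating harmonic series converges to ln 2, but the harmonic
# series diverges, so it is NOT absolutely convergent: rearranging it
# can change the sum.
N = 200_000

# Natural order: +1 - 1/2 + 1/3 - 1/4 + ...
natural = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

# Rearranged: two positive (odd-denominator) terms, then one negative
# (even-denominator) term, repeating.
rearranged = 0.0
pos, neg = 1, 2  # next odd and even denominators to use
for _ in range(N // 3):
    rearranged += 1 / pos + 1 / (pos + 2) - 1 / neg
    pos += 4
    neg += 2

print(natural, rearranged)  # roughly 0.693... and 1.039...
```

The same experiment on the absolutely convergent series 1 - 1/2 + 1/4 - 1/8 + ... would give 2/3 in every ordering, which is exactly what membership in the "nice" class guarantees.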
 
  • #37
Hey, this thread kind of died. Why? Can nobody think of any more questions? Or is everything clear now to everybody? Maybe it just confused too much.

Anyway, here is one thought I would like to share. I first encountered quantum mechanics in a book where the author derived the Schrödinger equation using the notion of particles as wave packets, and then afterwards cleverly discarded the derivation and kept the equation, arguing that the notion of a wave packet may be useful for developing some intuition, but that in the end the idea cannot be taken too seriously, and that everything was in fact far more general and abstract. Was the author doing first-time students of quantum mechanics an incredible favour, or was he doing some irreparable damage to them?

ydnef
 
  • #38
I've read the whole thread. I took AP Calc BC, so I have a year of calc under my belt (I got an A, so I guess that means I understood it). However, this is a question I'm still asking: WHY does the antiderivative of a function give the area under the curve? I think this question is different from the one originally proposed -- i.e. my question is different from why an infinite Riemann sum converges to an antiderivative. Or maybe I missed something... It's kinda late at night and I'm tired -- can anyone help? =p
 
  • #39
You have missed something - it is the fundamental theorem of calculus in action and has been explained in this thread; I think I did it twice.
 
  • #40
For developing some intuitive feel, I surely do recommend my own reflections on the matter, but be careful because this punk-kid of an ydnef has been abusing a lot of terminology and notation, which even confuses ydnef when he is not careful.
 
  • #41
That would be your system where you add up a countable number of (non-real) infinitesimals and get 1? Yep, that's the way forward...
 
  • #42
Yes, you told me already that professional mathematicians have reserved the word infinitesimal for some other mathematical idea, and so forth. Got that. So okay, let's not call my dx's infinitesimals anymore, and think of every dx that appears as actually meaning ''limit dx -> 0''. Think of it as part of me being too lazy and part of me wanting to keep the notation as tidy as possible.

I happen to think that in some practical cases it is useful to think of integration as some sort of ''adding up of infinitely many terms''. Likewise, in some cases I like to think of matter as being made out of point particles, though it does not take a genius to see that particles cannot really be points; they are more complex, and in some sense nobody really knows the true nature of matter yet. But if you want to explain to a person, for instance, how television works, thinking of electrons as point particles is good enough. I know there is a flaw in that, but as long as it is not fatal, it is not too bad, I guess.
 
  • #43
You could of course just learn what the definition of integration is, and understand it properly, which might, just might, be considered the best way of doing it, seeing as it is the limit of finite sums approximating the area under the curve...
 
  • #44
You know what I've been thinking? That guy who invented/discovered non-standard analysis STOLE the term 'infinitesimal', not me. I mean, the term has been in existence ever since the time of Leibniz and Newton, and whoever applied calculus between the time of its birth and somewhere around 1960 must already have had some sort of understanding connected to that word 'infinitesimal'. Hence, perhaps, all my confusion.
 
  • #45
I am trying to discuss the answer to the question in post #38: why is the FTC true? I.e., why does the antiderivative (of the height) give the area under the curve?

An equivalent question is why is the derivative of the area equal to the height of the curve?

The easiest way for me to understand anything is in a simple example. So take a constant function y = f(x) = C, for all x between a and b.

Then the area function A(x) = the area under the "curve" y = C between a and x, which is height times base = C times (x-a) = C(x-a) = Cx - Ca. So the derivative of this area function is C = the height! So it is true in this case.


Now the next simplest case is a piecewise constant function, say y = C for x between a and r, and y = D for x between r and b, with a < r < b. Let f(r) = D, say (it does not matter).

Then the area function A(x) = C(x-a) for x between a and r, and equals

A(x) = C(r-a) + D(x-r), for x between r and b.

Thus the derivative of A exists except at r, and the derivative of A for x between a and r is C = the height, and the derivative of A for x between r and b is D = height.

And A is continuous. So here we are allowing as an "antiderivative" a function which is continuous everywhere, and differentiable where the original function is continuous, and has derivative equal to the height when the height is continuous. At those points where the original function is not continuous, we take the antiderivative to be whatever makes it continuous.

Note however that we have: total area = C(r-a) + D(b-r) = A(b) - A(a), the difference of the values of A at the endpoints a and b.


Now this continues to be true for all piecewise constant functions.

Moreover this property is preserved under uniform limits, so since every continuous function is a uniform limit of piecewise constant functions, and the antiderivatives also converge uniformly, it is still true for continuous functions.

I.e. if f is any continuous function, and if A is an antiderivative of f, then the area under f equals A(b) - A(a). Or equivalently, if A(x) is the area function from a to x, then A'(x) = f(x).

Does this help? I realize it isn't a complete proof, but sometimes partial explanations help more than full ones.
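The piecewise constant step of this argument can be checked with concrete numbers; the sketch below (my addition, with a = 0, r = 1, b = 3, C = 2, D = 5 chosen arbitrarily) builds the continuous area function A and confirms that the total area equals A(b) - A(a):

```python
# Piecewise constant case: f = C on [a, r] and f = D on [r, b].
# The continuous area function A has slope C before r and slope D after,
# and A(b) - A(a) equals the total area C*(r-a) + D*(b-r) exactly.
a, r, b = 0.0, 1.0, 3.0
C, D = 2.0, 5.0

def A(x):
    # continuous "antiderivative" of the step function
    if x <= r:
        return C * (x - a)
    return C * (r - a) + D * (x - r)

total_area = C * (r - a) + D * (b - r)
print(total_area, A(b) - A(a))  # both 12.0
```

Since every continuous function is a uniform limit of such step functions, passing this identity through the limit is exactly the route to the FTC sketched above.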
 
  • #46
I found the following interactive proof for the FTC:

http://archives.math.utk.edu/visual.calculus/4/ftc.9/

But for some reason, after looking through the whole proof, I still don't know why the anti-derivative of an integrable function over a certain range gives you the area under the graph. They said "Let A(x) = \int_a^x f(t) dt, where A(x) is the area from a to x." But why is it possible for the area under the graph to be expressed as the anti-derivative in the first place?
 
  • #47
You are missing the point.

A(x) = \int_a^x f(t) dt is defined as "the area bounded by y = f(t) above (assumed to be positive), y = 0 below, t = a on the left, and t = x on the right", not as an "anti-derivative".

The proof you cite then uses the basic properties of area (if A and B are disjoint sets, then the area of A ∪ B is area(A) + area(B)) to show that \lim_{h \to 0} (A(x+h) - A(x))/h IS f(x), and so A itself IS an anti-derivative.
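That difference quotient can be sketched numerically, with the area itself computed by a crude Riemann sum (f(x) = sin(x) and the evaluation point x = 1 are my arbitrary choices, not from the proof):

```python
import math

# If area(f, a, x) is the area under f from a to x, then the difference
# quotient (area(x+h) - area(x)) / h should approach f(x) as h -> 0.
def area(f, a, x, n=100_000):
    # crude left-endpoint Riemann sum for the area from a to x
    dx = (x - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = math.sin
x, h = 1.0, 1e-4
quotient = (area(f, 0.0, x + h) - area(f, 0.0, x)) / h
print(quotient, f(x))  # both close to sin(1)
```

The extra sliver of area between x and x+h is roughly a rectangle of height f(x) and width h, which is why dividing by h recovers the height of the curve.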
 
  • #48
mr ethereal, have you read my post #45 above? I have tried to lay out the proof of FTC as simply as possible. I.e. it is true for piecewise constant functions, and hence also for their uniform limits, hence for all continuous functions. What do you think of that?
 
  • #49
Actually, I don't follow your proof at all. I haven't studied maths formally yet, so I have difficulty understanding what a "piecewise constant function" is etc. Apparently a quick visit to mathworld.wolfram.com didn't help at all. Sorry.

EDIT: I think I just found out what those terms meant. I also happened to find this on the internet which seems similar to yours:

http://www.ma.hw.ac.uk/~robertw/F11UB3/slides1.pdf
 
  • #50
Proofs For Volume

The method I know of is based on revolving a function around an axis, which is accomplished using integration in one of two ways. The first is based on the area of a circle vs. the area of a cylinder (disks vs. cylindrical shells). Often either can be used when set up appropriately.

I always pick the area-of-a-circle (disk) method whenever possible, because I can see the function best that way.

Sphere:

Start from the equation of a circle, y^2 + x^2 = R^2, and solve for y.

Substitute that function y = f(x) into \pi f(x)^2, the area of a circle of radius f(x).

Revolving the function around the x-axis and summing up all the disk areas results in a volume: take the definite integral of \pi f(x)^2 from -R to R over dx (or, by symmetry, twice the integral from 0 to R).

The answer should be V = (4/3) \pi R^3.

2) Do the same for a line segment, except revolve it to get the volume of a cone.

3) The volume of a cylinder (y = a constant) is the simplest.
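The disk method just described can be sketched numerically; this sums disk areas \pi y^2 dx across the sphere (midpoint rule, with R = 2 chosen arbitrarily) and compares with (4/3) \pi R^3:

```python
import math

# Disk method sketch: revolve y = sqrt(R**2 - x**2) around the x-axis
# and sum the cross-sectional disk areas pi * y**2 over dx from -R to R.
def volume_of_revolution(y, a, b, n=100_000):
    dx = (b - a) / n
    # midpoint rule on the disk areas pi * y(x)**2
    return sum(math.pi * y(a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx

R = 2.0
vol = volume_of_revolution(lambda x: math.sqrt(R * R - x * x), -R, R)
print(vol, (4 / 3) * math.pi * R ** 3)  # both close to 33.51
```

Replacing the lambda with a line through the origin (a cone) or a constant (a cylinder) reproduces the other two volumes mentioned above.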
 
  • #51
Earlier someone wanted to prove the volumes of different shapes -- sphere, cone, etc. The above is for that (post #15 or #16).
 
  • #52
This thread was four years old!
 
