B Is dx Negative in Non-Standard Analysis?

etotheipi
This is more of a "housekeeping" question, though I haven't studied much in the way of infinitesimals so apologies in advance for my lack of rigour!

As far as I'm aware, an infinitesimal can be thought of as a small change in some quantity. Changes can be either positive or negative, so consequently it also seems reasonable for ##dx## to potentially represent a negative change. Of course, there is no ambiguity since we always consider one infinitesimal in conjunction with another (e.g. ##dy=-3 dx##), so the signs "cancel appropriately".

In thermodynamics, for instance, it's common to use infinitesimals like ##dU## and ##dV## (I'm not going to worry about the problems with đQ/dQ etc, since that's a different story!), and evidently ##dU## and ##dV## can take both positive and negative values.

Thank you.
 
Last edited by a moderator:
##dx## can carry a sign, so there is also a ##-dx##. But the term itself is an abbreviation for various things, depending on the context. Your question is a bit as if you had asked whether ##x \longmapsto x^2## can be negative. It allows a sign. If you want to know why and where then we will have to discuss the context.

The context you described is an infinitesimal change, and this allows a sign depending on the orientation of your coordinates. Change itself is something absolute, like speed; velocity, however, has a direction.
 
  • Like
Likes sysprog and etotheipi
etotheipi said:
This is more of a "housekeeping" question (i.e. it's not particularly interesting), though I haven't studied much in the way of infinitesimals so apologies in advance for my lack of rigour!
Without rigor (i.e. a precise definition of infinitesimal) it's impossible to give a mathematical answer to your question. As far as a hazy intuitive notion of an infinitesimal goes, I'd say yes, an infinitesimal can be imagined as a small positive or negative change.

Does an answer to your question have any consequences? What consequence would it have if you always imagined an infinitesimal to be positive?
 
  • Like
Likes sysprog
fresh_42 said:
##dx## can carry a sign, so there is also a ##-dx##. But the term itself is an abbreviation for various things, depending on the context. Your question is a bit as if you had asked whether ##x \longmapsto x^2## can be negative. It allows a sign. If you want to know why and where then we will have to discuss the context.

If I were to completely and utterly abuse convention/mathematical rigour, but just to convey the meaning: when you say that for ##dx## we also have a ##-dx##, does that mean, to give an example, $$dx = -0.00001 \implies -dx = 0.00001\,?$$
Stephen Tashi said:
Does an answer to your question have any consequences? What consequence would it have if you always imagined an infinitesimal to be positive?

Indeed, it makes no difference. It's just out of interest!
 
  • Like
Likes sysprog
etotheipi said:
If I were to completely and utterly abuse convention, but just to convey the meaning: when you say that for ##dx## we also have a ##-dx##, does that mean, to give an example, $$dx = -0.00001 \implies -dx = 0.00001\,?$$

You're giving an example where an infinitesimal is a variable representing an ordinary real number. Yes, such variables can represent negative real numbers.
 
  • Like
Likes etotheipi
etotheipi said:
... and utterly abuse convention ...
Indeed. ##dx## is a linear transformation, so the better wording would be
$$
dx=L_{−0.00001} \Longrightarrow -dx=L_{0.00001}
$$
where ##L_c## denotes the left multiplication by a constant ##c##: ##L_c\, : \,x \longmapsto cx\,.##
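
A loose computational reading of this (purely illustrative, with made-up names; it treats ##L_c## as nothing more than the map ##x \longmapsto cx##):

```python
# Illustrative sketch only: model L_c, "left multiplication by the constant c",
# as a higher-order function returning the linear map x -> c*x.
def L(c):
    return lambda x: c * x

dx = L(-0.00001)       # the linear map x -> -0.00001 * x
neg_dx = L(0.00001)    # the linear map x ->  0.00001 * x

x = 3.0
print(dx(x), neg_dx(x))       # approximately -3e-05 and 3e-05
print(dx(x) == -neg_dx(x))    # True: negating the constant negates the map
```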
 
  • Like
  • Love
Likes Ssnow, Klystron and etotheipi
A non-zero quantity can be positive in some cases and negative in others -- my teacher told me that the infinitesimal was 'treated as' zero.
 
  • Informative
Likes etotheipi
Awesome, thanks for the speedy replies!

I'm finding it a little tricky to formalise infinitesimals since I'm quite used to thinking of them as normal algebraic quantities (i.e. "multiply by ##dx##", etc.) from the Physics side of things. So I get a little nervy about stuff like this!

I'll try and get hold of a copy of Spivak to iron things out a little.
 
  • Like
Likes sysprog
sysprog said:
Non-zero can be positive in some cases and negative in others -- my teacher told me that the infinitesimal was 'treated as' zero.
This is not really a good idea. Strictly speaking, it is a differential form, which is a linear transformation. Zero is a differential form, but not the other way around. Furthermore, treating it as zero will lead to an entire zoo of misunderstandings and false conclusions.
 
  • Like
Likes sysprog
  • #10
etotheipi said:
Awesome, thanks for the speedy replies!

I'm finding it a little tricky to formalise infinitesimals since I'm quite used to thinking of them as normal algebraic quantities (i.e. "multiply by ##dx##", etc.) from the Physics side of things. So I get a little nervy about stuff like this!

I'll try and get hold of a copy of Spivak to iron things out a little.
Have a look at the 10 point list at the beginning of
https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/
and the word slope wasn't even on the list. ##dx## is also used in integrals. If you want to read about the entire jungle, see
https://www.physicsforums.com/insights/the-pantheon-of-derivatives-i/
 
  • Like
Likes etotheipi
  • #11
fresh_42 said:
This is not really a good idea. Strictly speaking, it is a differential form, which is a linear transformation. Zero is a differential form, but not the other way around. Furthermore, treating it as zero will lead to an entire zoo of misunderstandings and false conclusions.
I like what you said, @fresh_42 -- my teacher, who was teaching single-variable calculus to a bunch of kids, wanted to explain to inquiring minds like mine what to do about the infinitesimal.
 
  • #12
Call me simple-minded but what if your definite integral is taken from 1 to -2 ? Does this not imply a negative increment?
 
  • Like
Likes FactChecker, PeroK and etotheipi
  • #13
Stephen Tashi said:
Does an answer to your question have any consequences?

Actually, I've just thought of one example where it might matter, namely integration. In the following, $$\int_{a}^{b} f(x) dx$$ isn't it required that ##dx## be greater than zero? It's like when we set up the Riemann sum as $$\sum_{k=1}^{n} f({x_k}^{*}) \Delta x$$ then ##\Delta x## represents a length.

Edit, or as @hutchphd mentioned, if the limits are reversed ##\Delta x## is indeed now negative!
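
A rough numerical sketch of that reading (purely illustrative; the left-endpoint sum below simply takes ##\Delta x = (b-a)/n##, which becomes negative when the limits are reversed):

```python
# Illustrative only: a left-endpoint Riemann sum where Delta x = (b - a)/n,
# so reversing the limits makes every increment negative and flips the sign.
def riemann_sum(f, a, b, n=100_000):
    dx = (b - a) / n                      # negative if b < a
    return sum(f(a + k * dx) for k in range(n)) * dx

f = lambda x: x**2
print(riemann_sum(f, 0.0, 1.0))           # ~  0.333...  (about 1/3)
print(riemann_sum(f, 1.0, 0.0))           # ~ -0.333...  (each Delta x < 0)
```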
 
  • #14
sysprog said:
I like what you said, @fresh_42 -- my teacher, who was teaching single-variable calculus to a bunch of kids, wanted to explain to inquiring minds like mine, what to do about the infinitesimal.
Yes, it is indeed context sensitive. It certainly makes no sense to talk about differential forms or sections or tangent bundles and so on at school. So "a little change" is often enough to know, but a little change is not necessarily no change. I like the comparison with a car: if you are going too fast for a curve, then ##dx## is the direction in which you fly off, and it is definitely not zero!
 
  • Like
Likes Klystron and sysprog
  • #15
hutchphd said:
Call me simple-minded but what if your definite integral is taken from 1 to -2 ? Does this not imply a negative increment?
That's the chicken-and-egg question: is it ##d(-x)## or is it ##-dx##?
 
  • #16
no calling hutch simple
hutchphd said:
Call me simple-minded but what if your definite integral is taken from 1 to -2 ? Does this not imply a negative increment?
I think that no-one should be calling you simple-minded -- not even you -- the main thing I see as simple about your mind is that it exhibits a simple preference for what you see as the truth.
 
  • Like
Likes hutchphd
  • #17
Why is the whole question not similarly inutile?
 
  • #18
etotheipi said:
Actually, I've just thought of one example where it might matter, namely integration. In the following, $$\int_{a}^{b} f(x) dx$$ isn't it required that ##dx## be greater than zero? It's like when we set up the Riemann sum as $$\sum_{k=1}^{n} f({x_k}^{*}) \Delta x$$ then ##\Delta x## represents a length.

Edit, or as @hutchphd mentioned, if the limits are reversed ##\Delta x## is indeed now negative!
Yes. It is a positive length in this picture. Or, to be exact, the sum of infinitely many infinitely small positive lengths. You see, infinite times infinite makes no sense, and here is where the comparison to lengths comes to an end. The picture goes wrong and you have to dive into the details. It is only a heuristic to see it as a length. If we resolve the heuristic, we get limits. And if we resolve the limits, we get differential forms. And so on ...
 
  • Like
Likes etotheipi
  • #19
Well, in apparently rejecting "infinite times infinite", you appear to my not-as-good-at-math-as-you person to be rejecting the idea that the cardinality of the power set of the reals is strictly greater than that of the reals, so I think that maybe I'm wrong.
 
Last edited:
  • #20
hutchphd said:
Call me simple-minded but what if your definite integral is taken from 1 to -2 ? Does this not imply a negative increment?
fresh_42 said:
That's the chicken-and-egg question: is it ##d(-x)## or is it ##-dx##?

I'm still a little confused about this point. If we let ##a<b##, then for the two formulations $$\int_{a}^{b} f(x) dx$$ and $$\int_{b}^{a} f(x) dx$$##dx>0## in the first example whilst ##dx<0## in the second example. How do we rationalise this? I might be thinking too "naively" but how does the integral "know" whether the limits are in increasing or decreasing order?
 
  • #21
etotheipi said:
I'm still a little confused about this point. If we let ##a<b##, then for the two formulations $$\int_{a}^{b} f(x) dx$$ and $$\int_{b}^{a} f(x) dx$$##dx>0## in the first example whilst ##dx<0## in the second example. How do we rationalise this? I might be thinking too "naively" but how does the integral "know" whether the limits are in increasing or decreasing order?

In the development of the Riemann integral, for example, ##[a, b]## will be an interval with ##a < b##. Then, by definition:
$$\int_b^a f(x)dx = -\int_a^b f(x) dx$$
 
  • Like
Likes sysprog and etotheipi
  • #22
The integral knows because the lower and upper limits make the difference. The underlying sign comes from the orientation of the ##x##-axis. So the coordinate has a direction and the integral respects orientation, or better: differential forms respect orientation. That is why the volume form ##dx\wedge dy \wedge dz## has an orientation, too.
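
A small numerical illustration of the orientation point (a sketch only, using the standard fact that the value of ##dx\wedge dy\wedge dz## on an ordered triple of vectors is a determinant):

```python
# Illustrative sketch: the volume form dx ∧ dy ∧ dz evaluated on three vectors
# is the determinant of the matrix built from them; swapping two vectors
# reverses the orientation and flips the sign.
import numpy as np

e1, e2, e3 = np.eye(3)
print(np.linalg.det(np.array([e1, e2, e3])))   # +1.0, standard orientation
print(np.linalg.det(np.array([e2, e1, e3])))   # -1.0, orientation reversed
```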
 
Last edited:
  • Like
Likes sysprog and etotheipi
  • #23
Okay, I think I have a better understanding. I suppose we could say ##dx## and by extension the differential form ##f(x)dx## is positive/negative depending on the direction of integration, so that the integral respects orientation. I'll need to do a bit more study to get a better hold of this!

Thanks for all your help!
 
  • #24
etotheipi said:
Okay, I think I have a better understanding. I suppose we could say ##dx## and by extension the differential form ##f(x)dx## is positive/negative depending on the direction of integration, so that the integral respects orientation. I'll need to do a bit more study to get a better hold of this!

Thanks for all your help!
If you want to get from ##x = a## to ##x = b##, with ##a < b##, then the simplest way is to have ##dx## positive. In the finite sum, if you had some negative ##\Delta x_n## terms, these would be cancelled out by the additional positive ##\Delta x## terms you would then need.

PS I've never taken ##dx## in an integral too literally.
 
  • Love
Likes etotheipi
  • #25
etotheipi said:
Okay, I think I have a better understanding. I suppose we could say ##dx## and by extension the differential form ##f(x)dx## is positive/negative depending on the direction of integration, so that the integral respects orientation. I'll need to do a bit more study to get a better hold of this!

Thanks for all your help!
If you have a look at the series I quoted above, you will get an impression of how complex it gets the more you study it. Infinitesimal lengths are what it all started with when Newton and Leibniz found the concept. But that was 300 years ago, and many more have since recognised how strong the concept is and developed it further in several directions. Before you get lost, keep in mind: it is always a directional derivative, a tangent.
 
  • Like
Likes sysprog and etotheipi
  • #26
fresh_42 said:
If you have a look at the series I quoted above, you will get an impression of how complex it gets the more you study it. Infinitesimal lengths are what it all started with when Newton and Leibniz found the concept. But that was 300 years ago, and many more have since recognised how strong the concept is and developed it further in several directions. Before you get lost, keep in mind: it is always a directional derivative, a tangent.

I often find it interesting to try and look at little things that are often taken for granted, but always end up regretting it because it inevitably gets way too complicated and just becomes a self-esteem killer!

Perhaps in a few years I'll be able to understand a little more. Luckily for now, "small change" seems to do the trick.

Also, on a completely unrelated note, TIL that Newton formulated calculus whilst Cambridge was closed due to the Bubonic plague... so the benchmark for the next few weeks has been set! Anyone up to the challenge?
 
  • Like
Likes sysprog, PeroK and fresh_42
  • #27
etotheipi said:
Perhaps in a few years I'll be able to understand a little more. Luckily for now, "small change" seems to do the trick.
That's why I like the historical perspective! Mathematicians usually tried to solve a problem, often a physical one, or a class of problems. We tend to forget this view and learn things as they are today, but they usually didn't start that way. There have been major steps: Newton and Leibniz with the infinitesimal quantities, Graßmann with the orientation, Riemann with the coordinate-free description. And in between there were a lot of efforts from great mathematicians like Cauchy, Legendre, Lagrange, Bernoulli, Gauß, and many more. It was a long way from those infinitesimals to pullbacks and sections. Dieudonné wrote a great book about the history of mathematical achievements from (roughly) 1700 to 1900. Unfortunately, I haven't seen an English translation.
 
  • Like
Likes Klystron, sysprog and etotheipi
  • #28
I'm not sure that the ##dx## used in Riemann integration is an infinitesimal number in the literal sense of being an element of a non-standard, uncountable model of the reals. It is, instead, a real number that goes to zero. If it were an infinitesimal and the function were real-valued, the value of the integral would itself be an infinitesimal and not a real number. Sorry if you, OP, meant the use of 'infinitesimal' informally, but you need to be careful because it also has a precise technical meaning.
 
  • #29
WWGD said:
I'm not sure that the ##dx## used in Riemann integration is an infinitesimal number in the literal sense of being an element of a non-standard, uncountable model of the reals. It is, instead, a real number that goes to zero.

Perhaps you didn't phrase this precisely, but a real number is either zero or it's not. It can't "go to zero". In any case, ##dx## in the integral is not a real number.
 
  • Like
Likes sysprog
  • #30
PeroK said:
Perhaps you didn't phrase this precisely, but a real number is either zero or it's not. It can't "go to zero". In any case, ##dx## in the integral is not a real number.
Edit: well, not the ##dx## itself but the values ##dx## assumes. Yes, by the Archimedean principle a real number cannot be indefinitely small without being 0, but in doing Riemann integration, AFAIK, we are only requiring that the partition width ##dx \leq \|P\|## goes to 0. I don't see how we're requiring that ##dx## be less than _every_ (rather than any) real, which would force the Archimedean principle to kick in, or else allowing ##dx## to take non-standard real values. If we used non-standard values, I believe by closure properties the integral itself would be non-standard-valued. I see ##dx## as a difference ##dx := x_{i+1}-x_i## of real values, so how would it assume anything other than real values? We don't expect it to be smaller than _every_ real at once, only that it can be made smaller than _any_ given real, unless I am making some false assumption. If ##dx## does not assume real values, then what kind of values does it assume? It is a width, and are widths, or physical measurements in general, expressed as anything other than (standard) real numbers?
 
  • #31
By definition:
$$\lim_{\Delta x \to 0} \frac{f\left(x + \Delta x\right) - f(x)}{\left(x + \Delta x\right) - x} = \lim_{\Delta x \to 0} \frac{f\left(x + \Delta x\right) - f(x)}{\Delta x } = \frac{d\left(f(x)\right)}{dx}$$

So if ##dx = \lim_{\Delta x \to 0} \left(x + \Delta x\right) - x##, I don't see why it couldn't be arbitrarily chosen to be negative. It wouldn't change anything to the final result in a derivative or an integral as ##d\left(f(x)\right)## will change sign accordingly.
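
A quick numerical reading of the claim that the final result doesn't change (illustrative only): the difference quotient of ##f(x)=x^2## at ##x=3## approaches the same value ##6## whether ##\Delta x## tends to ##0## through positive or negative values, because the numerator changes sign along with ##\Delta x##.

```python
# Illustrative sketch: the difference quotient tends to 6 from either side.
f = lambda x: x**2
x = 3.0
for h in (1e-1, 1e-4, -1e-1, -1e-4):
    print(h, (f(x + h) - f(x)) / h)
# h > 0 gives roughly 6.1 and 6.0001; h < 0 gives roughly 5.9 and 5.9999.
```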
 
  • Like
Likes sysprog
  • #32
jack action said:
So if ##dx = \lim_{\Delta x \to 0} \left(x + \Delta x\right) - x##, I don't see why it couldn't be arbitrarily chosen to be negative.

##\lim_{\Delta x \to 0} \left(x + \Delta x\right) - x = 0##
 
  • Like
Likes sysprog
  • #33
WWGD said:
Edit: well, not the ##dx## itself but the values ##dx## assumes.

##dx## is not a number and doesn't assume any values. ##x## is assumed here to be a real variable, so it and ##f(x)## assume real values. But ##dx## is a notational device to indicate, along with the ##\int## symbol, integration with respect to the variable ##x##.

You can say, for example, let ##x = 1##, then ##f(1)## is well-defined, but ##d1## or ##dx \big | _{x = 1}## has no meaning.
 
  • Like
  • Informative
Likes jbriggs444, sysprog and etotheipi
  • #34
PeroK said:
##dx## is not a number and doesn't assume any values. ##x## is assumed here to be a real variable, so it and ##f(x)## assume real values. But ##dx## is a notational device to indicate, along with the ##\int## symbol, integration with respect to the variable ##x##.

You can say, for example, let ##x = 1##, then ##f(1)## is well-defined, but ##d1## or ##dx \big | _{x = 1}## has no meaning.

You say ##dx## is a notational device, does this mean we just give it meaning when dealing with differentials? For instance, for probability density functions I like to think of it this way:

(The increment in cumulative probability) = (the probability per unit increment of ##x##) multiplied by (the increment of ##x##), namely ##dF = f(x) dx##, and then we just insert an integral sign with bounds to turn this from an equation of differentials into a full statement, i.e. ##\int_{P_{1}}^{P_{2}} dF = \int_{a}^{b} f(x) dx##.

So whilst I used to think of ##\int ... dx## as a single unit with some stuff in the middle, I now sort of think of it as two separate units, ##[\int][f(x)dx]##, in line with the concept of a sum.
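
For what it's worth, here is a rough numerical reading of that picture (illustrative only, using the exponential density ##f(x)=e^{-x}## with CDF ##F(x)=1-e^{-x}## as an example): summing ##f(x)\,\Delta x## over small increments approximates the increment ##F(b)-F(a)## in cumulative probability.

```python
# Illustrative sketch: dF = f(x) dx read numerically for f(x) = exp(-x).
import math

f = lambda x: math.exp(-x)
a, b, n = 0.5, 2.0, 100_000
dx = (b - a) / n
increment = sum(f(a + k * dx) for k in range(n)) * dx      # ~ F(b) - F(a)
exact = (1 - math.exp(-b)) - (1 - math.exp(-a))
print(increment, exact)                                     # both roughly 0.4712
```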
 
  • #35
etotheipi said:
You say ##dx## is a notational device, does this mean we just give it meaning when dealing with differentials? For instance, for probability density functions I like to think of it this way:

(The increment in cumulative probability) = (the probability per unit increment of ##x##) multiplied by (the increment of ##x##), namely ##dF = f(x) dx##, and then we just insert an integral sign with bounds to turn this from an equation of differentials into a full statement, i.e. ##\int_{P_{1}}^{P_{2}} dF = \int_{a}^{b} f(x) dx##.

So whilst I used to think of ##\int ... dx## as a single unit with some stuff in the middle, I now sort of think of it as two separate units, ##[\int][f(x)dx]##, in line with the concept of a sum.

This question comes up quite often I think. On the one hand, the theory of calculus, both differential and integral, is independent of the notation used. There is no theorem that depends on an interpretation of ##dx##. That said, the relationship between integration and differentiation and hence the relationship between ##dx## in an integral and the differential ##dx## allows some neat shorthand notation - especially for applied maths and physics. For example, integration by substitution is actually:
$$\int_a^b f(u(x))u'(x)dx = \int_{u(a)}^{u(b)} f(u)du$$
And, if you sit down and prove this, then it does not rely on cancelling ##dx## as in:
$$\int_a^b f(u)\frac{du}{dx}dx = \int_{u(a)}^{u(b)} f(u)du$$
Simply cancelling the ##dx## here is not a proof! In real analysis (pure mathematics) it must be proved otherwise.
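
As a numerical check (which, as said, is not a proof), take ##f(u)=\cos u## and ##u(x)=x^2## on ##[0,1]##, so that both sides should equal ##\sin 1##:

```python
# Illustrative check of the substitution formula with f(u) = cos(u), u(x) = x^2:
#   ∫_0^1 cos(x^2) * 2x dx  =  ∫_0^1 cos(u) du  =  sin(1)
import math

def midpoint_sum(g, a, b, n=200_000):
    dx = (b - a) / n
    return sum(g(a + (k + 0.5) * dx) for k in range(n)) * dx

lhs = midpoint_sum(lambda x: math.cos(x**2) * 2 * x, 0.0, 1.0)
rhs = midpoint_sum(math.cos, 0.0, 1.0)
print(lhs, rhs, math.sin(1.0))    # all approximately 0.8414709...
```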
 
  • Like
Likes etotheipi
  • #36
@etotheipi

If in doubt, always look for the tangents. They are hidden somewhere when it comes to differentiation.

[Attached image: the curve ##y=\tfrac{1}{5}x^2## with its tangent line (green) at ##x_0=3##]

It is the quotient ##\dfrac{\Delta f(x)}{\Delta x}##, i.e. the slope of the hypotenuse of the triangle, which must be considered, not just one leg, whether as ##\Delta x## or as ##dx##. The limiting process applies simultaneously to both legs: the difference of the function values and the length of the ##x##-interval. The quotient does the trick!

If it stands alone, it abbreviates something else and things are more complex, namely a differential form. This is the function that attaches another function to each point: ##x \longmapsto L_x## (see post #6). My picture used ##y=\frac{1}{5}x^2## and ##x_0=3##. So ##dx## attaches the function ##\tilde{x} \longmapsto \frac{2}{5} \tilde{x}##, which has at ##x_0=3## the value ##\frac{6}{5}##. Here we changed the origin from the curve space ##(0,0)## to the origin of the tangent space (the green line) ##(3,\frac{9}{5})##, which becomes our new origin if we talk about the tangent space as a vector space. Hence the tangent at ##x_0=3## is ##f'(\tilde{x})=(\frac{2}{5}\cdot 3) \tilde{x}##, which is a linear function in the coordinate system of the tangent. In the old coordinates it is ##f'(x)=\frac{6}{5}x - \frac{9}{5}##.

This is one of the things which adds more confusion and requires one to distinguish the curve from the tangents. Every single tangent is a line, i.e. a one-dimensional vector space: different points ##x_0##, different tangent spaces. At school it is all in one coordinate system, whereas physicists have to distinguish the ##(x,y)## space above from all the possible green lines, e.g. the one I drew in the picture with ##(\tilde{x},f'(\tilde{x}))## coordinates. That's why a tangent should always be considered as the pair ##(x_0, L_{x_0})##: the point of evaluation and the direction (slope) it points to. This distinction is basically the secret behind all other perspectives under which differentiation can be seen.
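
A small computational sketch of the example in this post (illustrative only; the function names are made up): ##y=\tfrac15 x^2## at ##x_0=3##, where the differential attaches the linear map of slope ##\tfrac65## and the tangent line in the old coordinates is ##y=\tfrac65 x-\tfrac95##.

```python
# Illustrative sketch of the example: y = x**2 / 5 at x0 = 3.
def y(x):
    return x**2 / 5

def dy_at(x0):
    slope = 2 * x0 / 5                    # derivative of x**2/5 at x0
    return lambda t: slope * t            # the linear map attached at x0

x0 = 3.0
Lx0 = dy_at(x0)
print(Lx0(1.0))                           # 1.2 = 6/5, the slope at x0
tangent = lambda x: y(x0) + Lx0(x - x0)   # tangent line in the old coordinates
print(tangent(3.0), tangent(4.0))         # 1.8 (= 9/5) and 3.0 (= 6/5*4 - 9/5)
```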
 
  • Like
  • Love
Likes sysprog and etotheipi
  • #37
PeroK said:
And, if you sit down and prove this, then it does not rely on cancelling ##dx## as in:
$$\int_a^b f(u)\frac{du}{dx}dx = \int_{u(a)}^{u(b)} f(u)du$$
Simply cancelling the ##dx## here is not a proof! In real analysis (pure mathematics) it must be proved otherwise.

That's helpful, thank you. My "rule" is that we're "allowed" to effectively cancel infinitesimals but not operators, as in $$\frac{dy}{dx} dx = dy$$ whilst I'd need to change the following $$\frac{d}{dx} (\frac{dy}{dx}) dx = \frac{d(\frac{dy}{dx})}{dx} dx = d(\frac{dy}{dx})$$ The difference isn't too noticeable in the below example, but it seems to be important in things like operator equations.

But I think like you say it's more a case of taking advantage of the notation.
 
  • Skeptical
Likes sysprog
  • #38
etotheipi said:
That's helpful, thank you. My "rule" is that we're "allowed" to effectively cancel infinitesimals but not operators, as in $$\frac{dy}{dx} dx = dy$$ whilst I'd need to change the following $$\frac{d}{dx} (\frac{dy}{dx}) dx = \frac{d(\frac{dy}{dx})}{dx} dx = d(\frac{dy}{dx})$$ The difference isn't too noticeable in the below example, but it seems to be important in things like operator equations.

But I think like you say it's more a case of taking advantage of the notation.
It's not precisely a "below example"; it's above the text that refers to it; but it's nevertheless a good example; and, at least in my view, a little bit of notational abuse can sometimes be rather good. :wink:
 
Last edited:
  • Like
Likes etotheipi
  • #39
PeroK said:
##dx## is not a number and doesn't assume any values. ##x## is assumed here to be a real variable, so it and ##f(x)## assume real values. But ##dx## is a notational device to indicate, along with the ##\int## symbol, integration with respect to the variable ##x##.

You can say, for example, let ##x = 1##, then ##f(1)## is well-defined, but ##d1## or ##dx \big | _{x = 1}## has no meaning.
I meant that when you integrate, it does take numerical values. The simplest case: integrate ##1\,dx## from 0 to 1. The answer will be ##1\cdot(1-0)=1##. ##dx## is a measure of the width of an interval. When we do a Riemann integral, we're doing an infinite sum of terms ##f(x_j)\,dx_j## with ##dx_j := x_{j+1}-x_j##, so ##dx_j## is the length of a sub-interval. Sure, with infinite Riemann sums we do not consider each one, but you may use a partition into finitely many rectangles and assign a length to each. You may then say ##dx_j := x_{j+1}-x_j = 0.5##, etc. So it is not just a place-holder, though maybe you said it in a different sense. So you can say that ##dx_j##, or ##dx## on the ##j##-th interval, assumes the value ##x_{j+1}-x_j##, a real number.
 
  • #40
WWGD said:
I meant that when you integrate, it does take numerical values. The simplest case: integrate ##1\,dx## from 0 to 1. The answer will be ##1\cdot(1-0)=1##. ##dx## is a measure of the width of an interval. When we do a Riemann integral, we're doing an infinite sum of terms ##f(x_j)\,dx_j## with ##dx_j := x_{j+1}-x_j##, so ##dx_j## is the length of a sub-interval. Sure, with infinite Riemann sums we do not consider each one, but you may use a partition into finitely many rectangles and assign a length to each. You may then say ##dx_j := x_{j+1}-x_j = 0.5##, etc. So it is not just a place-holder, though maybe you said it in a different sense.
There are no sub-intervals in an integral and it is not an infinite sum. It's the limit of a sequence of finite sums. An infinite sum is something of the form:
$$\sum_{n= 1}^{\infty} a_n$$
If the integral were an infinite sum, it would be defined as such, with the appropriate widths ##dx_j## specified! There are no ##dx_j## in an integral. There is only the symbol ##dx##, which is neither a number nor an interval.
 
  • Like
Likes sysprog
  • #41
PeroK said:
There are no sub-intervals in an integral and it is not an infinite sum. It's the limit of a sequence of finite sums. An infinite sum is something of the form:
$$\sum_{n= 1}^{\infty} a_n$$
If the integral were an infinite sum, it would be defined as such, with the appropriate widths ##dx_j## specified! There are no ##dx_j## in an integral. There is only the symbol ##dx##, which is neither a number nor an interval.
You do have an infinite sum where the terms are of the form ##f(x_j)\,dx_j##, and you specify additional conditions by quantifying over all sums where ##dx_j## goes to 0.
Well, yes, the limit of a sum, not necessarily an infinite sum. The width ##dx_j## is a variable, and you do not specify it for infinitely many values, but it does assume values. You may partition ##[0,1]## into ##[0,1/2]## and ##[1/2,1]##. Then ##dx## on the interval ##[0,1/2]## is ##1/2-0=1/2## and ##dx## on ##[1/2,1]## is ##1-1/2=1/2##. So you do assign actual numerical values. Of course, in the limit you quantify over all intervals, over all sums as the width goes to zero, but these are actual widths. But maybe it is a semantic thing and we are saying the same thing in different ways.
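
A tiny sketch of that point (illustrative only): for a concrete finite partition the widths are ordinary real numbers, and the finite Riemann sum uses them directly.

```python
# Illustrative sketch: widths of the partition [0, 1/2], [1/2, 1] and the
# corresponding left-endpoint Riemann sum for f(x) = x**2.
points = [0.0, 0.5, 1.0]
widths = [points[j + 1] - points[j] for j in range(len(points) - 1)]
print(widths)                                             # [0.5, 0.5]

f = lambda x: x**2
finite_sum = sum(f(points[j]) * widths[j] for j in range(len(widths)))
print(finite_sum)                                         # 0.125
```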
 
  • #42
WWGD said:
But maybe it is a semantic thing and we are saying the same thing in different ways.

Perhaps, but just as example. In the integral
$$\int_0^1 x^2 dx$$
What is the value of the width(s) ##dx_j## that you would use?
 
  • #43
The Riemann integral, when it converges to ##R##, is the limit of a net of partitions ordered by inclusion, mapped into the reals. You want that if Partition 1 is contained in Partition 2, both Riemann sums lie in an ##\epsilon##-neighborhood of ##R##. You assign a Riemann sum to each partition so that, by net convergence, every sub-partition is eventually in any ##\epsilon##-neighborhood of ##R##. I don't know if I explained it well, but I think the net-convergence issue is a bit unwieldy.
 
  • Like
Likes sysprog
  • #44
Guess my post was confusing. My point is that the convergence of the Riemann integral is not your standard convergence but instead convergence as a net.
 
  • Like
Likes sysprog
  • #45
WWGD said:
Guess my post was confusing. My point is that the convergence of the Riemann integral is not your standard convergence but instead convergence as a net.
How about looking at it as a state space?
 
  • #46
I've actually got some notes on doing the following integral from first principles, by calculating the limits of the upper and lower sums for a set of regular partitions:
$$\int_a^b x^2 dx$$
Note that the definite integral (unlike ##dx##) is a real number!

We take ##P_n## as the partition of ##[a, b]## into ##n## equal sub-intervals of width ##\frac{b -a}{n}##. Note that each partition has sub-intervals, but the integral itself does not.

Assuming ##0 \le a < b##, the minimum value of ##x^2## on each sub-interval is at the lower end and the maximum at the upper end. This gives us the lower and upper sums as:
$$L_n = \sum_{k = 0}^{n-1} \left(a + \frac{k(b-a)}{n}\right)^2\left(\frac{b-a}{n}\right) \\
= (b-a)\left[a^2 + (b-a)^2\frac{(n-1)(2n-1)}{6n^2} + \frac{a(b-a)(n-1)}{n}\right] \\
U_n = \sum_{k = 1}^{n} \left(a + \frac{k(b-a)}{n}\right)^2\left(\frac{b-a}{n}\right) \\
= (b-a)\left[a^2 + (b-a)^2\frac{(n+1)(2n+1)}{6n^2} + \frac{a(b-a)(n+1)}{n}\right]$$
Each of the ##L_n## must be an under-estimate of the integral and each of the ##U_n## must be an over-estimate. If they both converge to the same number, then that number is the integral.

Now, we have:
$$\lim_{n \rightarrow \infty} L_n = \frac 1 3 (b^3 - a^3) \\
\lim_{n \rightarrow \infty} U_n = \frac 1 3 (b^3 - a^3)$$
The definite integral, therefore, is well-defined and we have:
$$\int_a^b x^2 dx = \frac 1 3 (b^3 - a^3)$$
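
A numerical companion to this derivation (illustrative only, and assuming ##0 \le a < b## as above): the lower and upper sums squeeze ##\tfrac13(b^3-a^3)## as ##n## grows.

```python
# Illustrative sketch: lower and upper sums for the integral of x^2 over [a, b]
# with 0 <= a < b, converging to (b^3 - a^3)/3.
def lower_upper(a, b, n):
    w = (b - a) / n
    L = sum((a + k * w)**2 for k in range(n)) * w          # k = 0 .. n-1
    U = sum((a + k * w)**2 for k in range(1, n + 1)) * w   # k = 1 .. n
    return L, U

a, b = 1.0, 2.0
for n in (10, 100, 10_000):
    print(n, lower_upper(a, b, n), (b**3 - a**3) / 3)
# Both sums converge to 7/3 ≈ 2.3333... as n increases.
```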
 
  • Like
Likes etotheipi and sysprog
  • #47
Yes! This reminded me of something that happened to me. I was trying to determine the work of gravity (or ##F_{spring}##, I forgot) after having fixed a positive direction.
When I considered the position to be increasing, I found the correct answer, but then, when considering the case where the position was decreasing, I got a wrong one. I then caught that in ##\vec F\cdot d\vec s=|F|\,|ds|\cos\theta## the ##ds## must be negative, so ##|ds|=-ds##.
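
A numerical sketch of that bookkeeping (with made-up numbers, purely illustrative): take the force to be constant gravity ##F=-mg## with "up" as positive, and let the height drop from ##2\,\mathrm m## to ##0\,\mathrm m##, so every increment ##\Delta h## is negative and the work still comes out positive, ##W=mg(h_1-h_2)##.

```python
# Illustrative sketch: work done by constant gravity while the height decreases,
# computed with explicitly negative increments Delta h.
m, g = 1.0, 9.8
h1, h2, n = 2.0, 0.0, 100_000
dh = (h2 - h1) / n                          # negative increment
W = sum(-m * g * dh for _ in range(n))      # sum of F * Delta h with F = -m*g
print(W, m * g * (h1 - h2))                 # both approximately 19.6
```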
 
  • Like
Likes etotheipi
  • #48
PeroK said:
Perhaps you didn't phrase this precisely, but a real number is either zero or it's not. It can't "go to zero". In any case, ##dx## in the integral is not a real number.
Hm, hasn't Abraham Robinson formalised infinitesimals?
 
  • #49
archaic said:
Hm, hasn't Abraham Robinson formalised infinitesimals?
In post #46 I showed how the integral of ##x^2## could be done from first principles using "standard" real analysis. I invite you to do the same using non-standard analysis, where ##dx## is an infinitesimal.

The following proposition is a cornerstone of real analysis. Let ##x \in \mathbb R## with ##x \ne 0##.

1) Either ##x > 0## or ##x < 0##, but not both.

2) If ##x > 0##, then ##\exists \ y \in \mathbb R, \ s.t. \ 0 < y < x##.

If this proposition fails for ##dx##, then ##dx \notin \mathbb R##.
 
  • Like
Likes sysprog and archaic
  • #50
PeroK said:
If this proposition fails for ##dx##, then ##dx \notin \mathbb R##.
Right, REAL analysis.
PeroK said:
I invite you to do the same using non-standard analysis, where ##dx## is an infinitesimal.
No experience with non-standard analysis 🤷‍♂️.
 