Is dx Negative in Non-Standard Analysis?

  • Thread starter: etotheipi
  • Tags: dx, negative

Summary
In non-standard analysis, infinitesimals like dx can represent both positive and negative changes, depending on the context and orientation of coordinates. The discussion highlights that while dx can carry a sign, its interpretation is context-sensitive, particularly in integration where the limits determine its positivity or negativity. The integral's behavior respects the orientation of the x-axis, making the sign of dx crucial for understanding the direction of integration. The conversation emphasizes the importance of rigor in defining infinitesimals, as informal interpretations can lead to misunderstandings. Overall, the understanding of dx as a differential form requires careful consideration of its application in mathematical contexts.
  • #31
By definition:
$$\lim_{\Delta x \to 0} \frac{f\left(x + \Delta x\right) - f(x)}{\left(x + \Delta x\right) - x} = \lim_{\Delta x \to 0} \frac{f\left(x + \Delta x\right) - f(x)}{\Delta x } = \frac{d\left(f(x)\right)}{dx}$$

So if ##dx = \lim_{\Delta x \to 0} \left(x + \Delta x\right) - x##, I don't see why it couldn't be arbitrarily chosen to be negative. It wouldn't change anything in the final result of a derivative or an integral, as ##d\left(f(x)\right)## would change sign accordingly.
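
As a minimal numerical sketch (in Python, taking ##f(x) = x^2## at ##x = 2## purely as an example), the difference quotient approaches the same derivative whether ##\Delta x## shrinks to zero through positive or through negative values:

```python
# Difference quotient for f(x) = x^2 at x = 2 (true derivative: 4),
# with Delta x approaching 0 from above and from below.
def f(x):
    return x**2

x = 2.0
for n in range(1, 7):
    dx_pos = 10.0**(-n)    # positive increment
    dx_neg = -10.0**(-n)   # negative increment
    q_pos = (f(x + dx_pos) - f(x)) / dx_pos
    q_neg = (f(x + dx_neg) - f(x)) / dx_neg
    print(f"dx = ±1e-{n}: quotient(+) = {q_pos:.6f}, quotient(-) = {q_neg:.6f}")
```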
 
  • Like
Likes sysprog
  • #32
jack action said:
So if ##dx = \lim_{\Delta x \to 0} \left(x + \Delta x\right) - x##, I don't see why it couldn't be arbitrarily chosen to be negative.

##\lim_{\Delta x \to 0} \left(x + \Delta x\right) - x = 0##
 
  • Like
Likes sysprog
  • #33
WWGD said:
Edit: Well, not the dx itself, but the values dx assumes.

##dx## is not a number and doesn't assume any values. ##x## is assumed here to be a real variable, so it and ##f(x)## assume real values. But ##dx## is a notational device to indicate, along with the ##\int## symbol, integration with respect to the variable ##x##.

You can say, for example, let ##x = 1##, then ##f(1)## is well-defined, but ##d1## or ##dx \big | _{x = 1}## has no meaning.
 
  • Like
  • Informative
Likes jbriggs444, sysprog and etotheipi
  • #34
PeroK said:
##dx## is not a number and doesn't assume any values. ##x## is assumed here to be a real variable, so it and ##f(x)## assume real values. But ##dx## is a notational device to indicate, along with the ##\int## symbol, integration with respect to the variable ##x##.

You can say, for example, let ##x = 1##, then ##f(1)## is well-defined, but ##d1## or ##dx \big | _{x = 1}## has no meaning.

You say ##dx## is a notational device; does this mean we just give it meaning when dealing with differentials? For instance, for probability density functions I like to think of it this way:

(The increment in cumulative probability) = (the probability per unit increment of ##x##) multiplied by (the increment of ##x##), namely ##dF = f(x) dx##, and then we just insert an integral sign with bounds to turn this from an equation of differentials into a full statement, i.e. ##\int_{P_{1}}^{P_{2}} dF = \int_{a}^{b} f(x) dx##.

So whilst I used to think of ##\int ... dx## as a single unit with some stuff in the middle, I now sort of think of it as two separate units, ##[\int][f(x)dx]##, in line with the concept of a sum.
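
As a concrete check of ##\int_{a}^{b} f(x) dx = F(b) - F(a)## (a sketch, assuming the exponential density ##f(x) = e^{-x}## with CDF ##F(x) = 1 - e^{-x}##, and adding up ##f(x)\,dx## over small increments):

```python
import math

# Exponential density and its CDF, used purely as an example.
f = lambda x: math.exp(-x)        # probability per unit increment of x
F = lambda x: 1.0 - math.exp(-x)  # cumulative probability

a, b, n = 0.5, 2.0, 100_000
dx = (b - a) / n
# Sum up f(x) dx over the small increments (left Riemann sum).
integral = sum(f(a + k * dx) * dx for k in range(n))

print(integral)     # ≈ 0.4712
print(F(b) - F(a))  # the increment in cumulative probability: same value
```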
 
  • #35
etotheipi said:
You say ##dx## is a notational device; does this mean we just give it meaning when dealing with differentials? For instance, for probability density functions I like to think of it this way:

(The increment in cumulative probability) = (the probability per unit increment of ##x##) multiplied by (the increment of ##x##), namely ##dF = f(x) dx##, and then we just insert an integral sign with bounds to turn this from an equation of differentials into a full statement, i.e. ##\int_{P_{1}}^{P_{2}} dF = \int_{a}^{b} f(x) dx##.

So whilst I used to think of ##\int ... dx## as a single unit with some stuff in the middle, I now sort of think of it as two separate units, ##[\int][f(x)dx]##, in line with the concept of a sum.

This question comes up quite often I think. On the one hand, the theory of calculus, both differential and integral, is independent of the notation used. There is no theorem that depends on an interpretation of ##dx##. That said, the relationship between integration and differentiation and hence the relationship between ##dx## in an integral and the differential ##dx## allows some neat shorthand notation - especially for applied maths and physics. For example, integration by substitution is actually:
$$\int_a^b f(u(x))u'(x)dx = \int_{u(a)}^{u(b)} f(u)du$$
And, if you sit down and prove this, then it does not rely on cancelling ##dx## as in:
$$\int_a^b f(u)\frac{du}{dx}dx = \int_{u(a)}^{u(b)} f(u)du$$
Simply cancelling the ##dx## here is not a proof! In real analysis (pure mathematics) it must be proved by other means.
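
As a numerical illustration (a sketch, choosing ##f(u) = \sin u## and ##u(x) = x^2## arbitrarily), both sides of the substitution formula can be approximated by Riemann sums and agree:

```python
import math

f = lambda u: math.sin(u)   # integrand in u
u = lambda x: x**2          # substitution
du = lambda x: 2 * x        # u'(x)

def riemann(g, a, b, n=200_000):
    """Left Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + k * h) for k in range(n)) * h

a, b = 0.0, 2.0
lhs = riemann(lambda x: f(u(x)) * du(x), a, b)  # integral of f(u(x)) u'(x) dx over [a, b]
rhs = riemann(f, u(a), u(b))                    # integral of f(u) du over [u(a), u(b)]

print(lhs, rhs)  # both ≈ 1 - cos(4) ≈ 1.6536
```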
 
  • Like
Likes etotheipi
  • #36
@etotheipi

If in doubt, always look for the tangents. They are hidden somewhere when it comes to differentiation.

[Figure: graph of ##y=\frac{1}{5}x^2## with its tangent line (green) at ##x_0=3##.]


It is the quotient ##\dfrac{\Delta f(x)}{\Delta x}##, i.e. the slope of the hypotenuse of the triangle, which must be considered, not just one leg (cathetus) on its own, whether written as ##\Delta x## or as ##dx##. The limiting process shrinks both legs simultaneously: the differences of function values and the lengths of the ##x##-intervals. The quotient does the trick!

If ##dx## stands alone, it abbreviates something else and things are more complex, namely a differential form. This is the function that attaches another function to each point: ##x \longmapsto L_x## (see post #6). My picture used ##y=\frac{1}{5}x^2## and ##x_0=3##. So ##dx## attaches the function ##\tilde{x} \longmapsto \frac{2}{5} \tilde{x}##, which at ##x_0=3## has the value ##\frac{6}{5}##.

Here we moved the origin of the curve space, ##(0,0)##, to the origin of the tangent space (the green line), ##(3,\frac{9}{5})##, which becomes our new origin if we treat the tangent space as a vector space. Hence the tangent at ##x_0=3## is ##f'(\tilde{x})=(\frac{2}{5}\cdot 3)\,\tilde{x}##, which is a linear function in the coordinate system of the tangent. In the old coordinates it is ##f'(x)=\frac{6}{5}x - \frac{9}{5}##. This is one of the things which adds confusion and requires us to distinguish the curve from its tangents.

Every single tangent is a line, i.e. a one-dimensional vector space: different points ##x_0##, different tangent spaces. At school it is all in one coordinate system, whereas physicists have to distinguish the ##(x,y)## space above from all the possible green lines, e.g. the one I drew in the picture with ##(\tilde{x},f'(\tilde{x}))## coordinates. That's why a tangent should always be considered as the pair ##(x_0, L_{x_0})##: the point of evaluation and the direction (slope) it points in. This distinction is basically the secret behind all the other perspectives under which differentiation can be seen.
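
A minimal sketch of the same picture in Python, using ##f(x)=\frac{1}{5}x^2## and ##x_0=3## as above: the differential at ##x_0## acts as the linear map ##h \longmapsto f'(x_0)\,h## on the tangent space, and it approximates the actual change of ##f##:

```python
f = lambda x: x**2 / 5
fprime = lambda x: 2 * x / 5

x0 = 3.0
slope = fprime(x0)       # 6/5, the value attached at x0

# The differential at x0: a linear map on the tangent space (coordinate h).
L = lambda h: slope * h

# The same tangent in the old (x, y) coordinates: y = (6/5) x - 9/5.
tangent = lambda x: f(x0) + slope * (x - x0)

for h in (1.0, 0.1, 0.01):
    actual = f(x0 + h) - f(x0)  # true change of f
    linear = L(h)               # change predicted by the differential
    print(f"h = {h}: Δf = {actual:.6f}, L(h) = {linear:.6f}, tangent value = {tangent(x0 + h):.6f}")
```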
 
  • Like
  • Love
Likes sysprog and etotheipi
  • #37
PeroK said:
And, if you sit down and prove this, then it does not rely on cancelling ##dx## as in:
$$\int_a^b f(u)\frac{du}{dx}dx = \int_{u(a)}^{u(b)} f(u)du$$
Simply cancelling the ##dx## here is not a proof! In real analysis (pure mathematics) it must be proved otherwise.

That's helpful, thank you. My "rule" is that we're "allowed" to effectively cancel infinitesimals but not operators, as in $$\frac{dy}{dx} dx = dy$$ whilst I'd need to change the following $$\frac{d}{dx} (\frac{dy}{dx}) dx = \frac{d(\frac{dy}{dx})}{dx} dx = d(\frac{dy}{dx})$$ The difference isn't too noticeable in the below example, but it seems to be important in things like operator equations.

But I think like you say it's more a case of taking advantage of the notation.
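
A rough numerical check of the second identity (a sketch, taking ##y = \sin x##, so ##\frac{dy}{dx} = \cos x## and ##\frac{d}{dx}(\frac{dy}{dx}) = -\sin x##): the change in the slope over a small step is approximately ##\frac{d}{dx}(\frac{dy}{dx})\,dx##.

```python
import math

dydx = lambda x: math.cos(x)     # dy/dx for y = sin x
d2ydx2 = lambda x: -math.sin(x)  # d/dx (dy/dx)

x = 1.0
for dx in (0.1, 0.01, 0.001):
    change_in_slope = dydx(x + dx) - dydx(x)  # d(dy/dx)
    predicted = d2ydx2(x) * dx                # (d/dx)(dy/dx) * dx
    print(f"dx = {dx}: d(dy/dx) = {change_in_slope:.6f}, prediction = {predicted:.6f}")
```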
 
  • Skeptical
Likes sysprog
  • #38
etotheipi said:
That's helpful, thank you. My "rule" is that we're "allowed" to effectively cancel infinitesimals but not operators, as in $$\frac{dy}{dx} dx = dy$$ whilst I'd need to change the following $$\frac{d}{dx} (\frac{dy}{dx}) dx = \frac{d(\frac{dy}{dx})}{dx} dx = d(\frac{dy}{dx})$$ The difference isn't too noticeable in the below example, but it seems to be important in things like operator equations.

But I think like you say it's more a case of taking advantage of the notation.
It's not precisely a "below example"; it's above the text that refers to it; but it's nevertheless a good example; and, at least in my view, a little bit of notational abuse can sometimes be rather good. :wink:
 
  • Like
Likes etotheipi
  • #39
PeroK said:
##dx## is not a number and doesn't assume any values. ##x## is assumed here to be a real variable, so it and ##f(x)## assume real values. But ##dx## is a notational device to indicate, along with the ##\int## symbol, integration with respect to the variable ##x##.

You can say, for example, let ##x = 1##, then ##f(1)## is well-defined, but ##d1## or ##dx \big | _{x = 1}## has no meaning.
I meant that when you integrate it does take numerical values. The simplest case: integrate ##1\,dx## from 0 to 1. The answer will be ##1\cdot(1-0)=1##. ##dx## is a measure of the width of an interval. When we do a Riemann integral, we're doing an infinite sum of terms ##f(x_j)\,dx_j## with ##dx_j := x_{j+1}-x_j##, so ##dx_j## is the measure of the length of an interval. Sure, with infinite Riemann sums we do not consider each one, but you may use a partition into finitely many rectangles and assign a length to each. You may then say ##dx_j := x_{j+1}-x_j = 0.5##, etc. So it is not just a place-holder, though maybe you said it in a different sense. So you can say ##dx_j##, or ##dx## at the ##j##-th interval, assumes the value ##x_{j+1}-x_j##, a real number.
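
For instance (a quick Python sketch of the finite-partition picture described here), a Riemann sum for ##\int_0^1 1\,dx## over the partition ##\{0, \frac12, 1\}## uses the explicit widths ##x_{j+1}-x_j##:

```python
# Partition of [0, 1] and the widths x_{j+1} - x_j of its sub-intervals.
points = [0.0, 0.5, 1.0]
widths = [points[j + 1] - points[j] for j in range(len(points) - 1)]  # [0.5, 0.5]

f = lambda x: 1.0  # integrand
# Riemann sum: f at the left endpoint of each sub-interval times that sub-interval's width.
riemann_sum = sum(f(points[j]) * widths[j] for j in range(len(widths)))

print(widths)       # [0.5, 0.5]
print(riemann_sum)  # 1.0 -- matches 1*(1 - 0) = 1
```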
 
  • #40
WWGD said:
I meant that when you integrate it does take numerical values. The simplest case: integrate ##1\,dx## from 0 to 1. The answer will be ##1\cdot(1-0)=1##. ##dx## is a measure of the width of an interval. When we do a Riemann integral, we're doing an infinite sum of terms ##f(x_j)\,dx_j## with ##dx_j := x_{j+1}-x_j##, so ##dx_j## is the measure of the length of an interval. Sure, with infinite Riemann sums we do not consider each one, but you may use a partition into finitely many rectangles and assign a length to each. You may then say ##dx_j := x_{j+1}-x_j = 0.5##, etc. So it is not just a place-holder, though maybe you said it in a different sense.
There are no sub-intervals in an integral and it is not an infinite sum. It's the limit of a sequence of finite sums. An infinite sum is something of the form:
$$\sum_{n= 1}^{\infty} a_n$$
If the integral were an infinite sum, it would be defined as such, with the appropriate widths ##dx_j## specified! There are no ##dx_j## in an integral. There is only the symbol ##dx##, which is neither a number nor an interval.
 
  • Like
Likes sysprog
  • #41
PeroK said:
There are no sub-intervals in an integral and it is not an infinite sum. It's the limit of a sequence of finite sums. An infinite sum is something of the form:
$$\sum_{n= 1}^{\infty} a_n$$
If the integral were an infinite sum, it would be defined as such, with the appropriate widths ##dx_j## specified! There are no ##dx_j## in an integral. There is only the symbol ##dx##, which is neither a number nor an interval.
You do have an infinite sum whose terms are of the form ##f(x_j)\,dx_j##, and you specify additional conditions by quantifying over all such sums as the ##dx_j## go to 0.
Well, yes, the limit of sums, not necessarily an infinite sum. The width ##dx_j## is a variable, and you do not specify it for infinitely many values, but it does assume values. You may partition ##[0,1]## into ##[0,\tfrac12]## and ##[\tfrac12,1]##. Then ##dx## on ##[0,\tfrac12]## equals ##\tfrac12-0=\tfrac12## and ##dx## on ##[\tfrac12,1]## equals ##1-\tfrac12=\tfrac12##. So you do assign actual numerical values. Of course, in the limit you quantify over all partitions, over all sums as the widths go to zero, but these are actual widths. But maybe it is a semantic thing and we are saying the same thing in different ways.
 
  • #42
WWGD said:
But maybe it is a semantic thing and we are saying the same thing in different ways.

Perhaps, but just as an example. In the integral
$$\int_0^1 x^2 dx$$
What is the value of the width(s) ##dx_j## that you would use?
 
  • #43
The Riemann integral, when it converges to a value ##R##, is the limit of a net: the partitions are ordered by inclusion (refinement), and to each partition you assign a Riemann sum in the reals. You want that if Partition 1 is a subset of Partition 2, both of their sums lie in an ##\epsilon##-neighborhood of ##R##; by net convergence, the sum over every sufficiently fine refinement is eventually in any ##\epsilon##-neighborhood of ##R##. I don't know if I explained it well, but I think the net-convergence issue is a bit unwieldy.
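
As a rough illustration of one chain in that net (a sketch for ##\int_0^1 x^2\,dx##, repeatedly bisecting a partition of ##[0,1]##): once a partition is fine enough, further refinements keep the Riemann sum inside any given ##\epsilon##-neighborhood of ##\frac13##.

```python
f = lambda x: x**2

def riemann_sum(points):
    """Left-endpoint Riemann sum over a partition given as a sorted list of points."""
    return sum(f(points[j]) * (points[j + 1] - points[j]) for j in range(len(points) - 1))

def bisect(points):
    """Refine a partition by inserting the midpoint of every sub-interval."""
    refined = []
    for j in range(len(points) - 1):
        refined += [points[j], (points[j] + points[j + 1]) / 2]
    return refined + [points[-1]]

partition = [0.0, 1.0]
for _ in range(12):
    partition = bisect(partition)
    # number of sub-intervals and distance of the sum from 1/3
    print(len(partition) - 1, abs(riemann_sum(partition) - 1 / 3))
```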
 
  • Like
Likes sysprog
  • #44
Guess my post was confusing. My point is that the convergence of the Riemann integral is not your standard convergence but instead convergence as a net.
 
  • Like
Likes sysprog
  • #45
WWGD said:
Guess my post was confusing. My point is that the convergence of the Riemann integral is not your standard convergence but instead convergence as a net.
How about looking at it as a state space?
 
  • #46
I've actually got some notes on doing the following integral from first principles, by calculating the limits of the upper and lower sums for a set of regular partitions:
$$\int_a^b x^2 dx$$
Note that the definite integral (unlike ##dx##) is a real number!

We take ##P_n## as the partition of ##[a, b]## into ##n## equal sub-intervals of width ##\frac{b -a}{n}##. Note that each partition has sub-intervals, but the integral itself does not.

Assuming ##0 \le a < b##, the minimum value of ##x^2## on each sub-interval is at the lower end and the maximum at the upper end. This gives us the lower and upper sums as:
$$L_n = \sum_{k = 0}^{n-1} \left(a + \frac{k(b-a)}{n}\right)^2\left(\frac{b-a}{n}\right) \\
= (b-a)\left[a^2 + (b-a)^2\frac{(n-1)(2n-1)}{6n^2} + \frac{a(b-a)(n-1)}{n}\right] \\
U_n = \sum_{k = 1}^{n} \left(a + \frac{k(b-a)}{n}\right)^2\left(\frac{b-a}{n}\right) \\
= (b-a)\left[a^2 + (b-a)^2\frac{(n+1)(2n+1)}{6n^2} + \frac{a(b-a)(n+1)}{n}\right]$$
Each of the ##L_n## must be an under-estimate of the integral and each of the ##U_n## must be an over-estimate. If they both converge to the same number, then that number is the integral.

Now, we have:
$$\lim_{n \rightarrow \infty} L_n = \frac 1 3 (b^3 - a^3) \\
\lim_{n \rightarrow \infty} U_n = \frac 1 3 (b^3 - a^3)$$
The definite integral, therefore, is well-defined and we have:
$$\int_a^b x^2 dx = \frac 1 3 (b^3 - a^3)$$
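
The same calculation can be checked numerically (a sketch, using ##a = 1##, ##b = 2## as sample limits): the lower and upper sums squeeze ##\frac13(b^3 - a^3) = \frac73##.

```python
def lower_upper_sums(a, b, n):
    """Lower and upper Riemann sums of x^2 over [a, b] with n equal sub-intervals
    (assuming 0 <= a < b, so x^2 is increasing there)."""
    width = (b - a) / n
    lower = sum((a + k * width) ** 2 for k in range(n)) * width         # left endpoints
    upper = sum((a + k * width) ** 2 for k in range(1, n + 1)) * width  # right endpoints
    return lower, upper

a, b = 1.0, 2.0
exact = (b**3 - a**3) / 3  # 7/3
for n in (10, 100, 1000, 10_000):
    low, up = lower_upper_sums(a, b, n)
    print(f"n = {n}: L_n = {low:.6f} <= {exact:.6f} <= U_n = {up:.6f}")
```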
 
  • Like
Likes etotheipi and sysprog
  • #47
Yes! This reminded me of something that happened to me. I was trying to determine the work done by gravity (or by ##F_{spring}##, I forget) after having fixed a positive direction.
When I considered the position to be increasing, I found the correct answer, but then, when considering the case where the position was decreasing, I got a wrong one. I then caught that in ##\vec F\cdot d\vec s=|\vec F|\,|ds|\cos\theta## the ##ds## must be negative, so ##|ds|=-ds##.
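
A small numerical sketch of that sign issue (assuming, for illustration, a constant force ##F = -mg## with ##m = 1\,\mathrm{kg}##, ##g = 9.8\,\mathrm{m/s^2}##, and the positive direction pointing up): integrating with signed increments ##ds## handles a decreasing position automatically.

```python
m, g = 1.0, 9.8
F = -m * g  # constant gravitational force; the positive direction is "up"

def work(s_start, s_end, n=10_000):
    """W = integral of F ds as a Riemann sum with signed increments ds."""
    ds = (s_end - s_start) / n  # negative if the position decreases
    return sum(F * ds for _ in range(n))

print(work(0.0, 2.0))  # raising the object: W = -19.6 J (gravity does negative work)
print(work(2.0, 0.0))  # lowering it: W = +19.6 J; the signed ds gets the sign right
```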
 
  • Like
Likes etotheipi
  • #48
PeroK said:
Perhaps you didn't phrase this precisely, but a real number is either zero or it's not. It can't "go to zero". In any case, ##dx## in the integral is not a real number.
Hm, hasn't Abraham Robinson formalised infinitesimals?
 
  • #49
archaic said:
Hm, hasn't Abraham Robinson formalised infinitesimals?
In post #46 I showed how the integral of ##x^2## could be done from first principles using "standard" real analysis. I invite you to do the same using non-standard analysis, where ##dx## is an infinitesimal.

The following proposition is a cornerstone of real analysis. Let ##x \in \mathbb R## with ##x \ne 0##.

1) Either ##x > 0## or ##x < 0##, but not both.

2) If ##x > 0##, then ##\exists \ y \in \mathbb R, \ s.t. \ 0 < y < x##.

If this proposition fails for ##dx##, then ##dx \notin \mathbb R##.
 
  • Like
Likes sysprog and archaic
  • #50
PeroK said:
If this proposition fails for ##dx##, then ##dx \notin \mathbb R##.
Right, REAL analysis.
PeroK said:
I invite you to do the same using non-standard analysis, where ##dx## is an infinitesimal.
No experience with non-standard analysis 🤷‍♂️.
 
