What Happens to dx in Integration?

The discussion centers on the conceptual understanding of the differential "dx" in integration, particularly in the context of the indefinite integral ∫x dx. Participants clarify that "dx" is not merely an infinitesimal number but serves as an instruction for the variable of integration, indicating how to interpret the integral notation. The conversation also touches on the importance of including the constant of integration in indefinite integrals and the relationship between integration and Riemann sums. Additionally, there is a debate about the historical and modern interpretations of "dx," with some arguing for its significance beyond mere notation. Overall, the thread emphasizes the need for a deeper understanding of integration concepts and the role of "dx" in calculus.
  • #31
zinq said:
As some may not be aware, the "Riemann integral" in mathematics is a very specific definition of the integral of a function. As such it is not subject to redefinition with infinitesimals or with anything else.

The same object can be defined in more than one way, as long as the definitions are equivalent.

zinq said:
The surreal numbers are extremely convenient for doing analysis, and much has been written about how to go about this. Here is one of the first things that popped up via Google: Analysis on Surreal Numbers.

No, this is incorrect. It's true that it's possible to do analysis using the surreals, but they do not have convenient properties for that purpose. One problem with the surreals is that they don't have the transfer principle. Therefore when you want to generalize objects from the reals to the surreals, you have to do work on a case-by-case basis that wouldn't be necessary with NSA. For instance, the definition of exponentiation is highly nontrivial for the surreals, and was not worked out until fairly recently. There are a lot of cases where proving the existence of things in the surreals is much, much harder than it is with NSA. There are actually some links between the surreals and NSA; see here, for example. But those links require some very high-powered math even to describe. Basically NSA is the unique system that's big enough to do all of analysis, but not so big as to be unwieldy like the surreals.

zinq said:
It is a famous fallacy to invent something that someone else did not say and rebut it, sometimes called the "straw man" fallacy.
If you think I've done that, feel free to point out where.
 
  • #32
Surreal numbers were invented around 1972 and, as far as I know, first appeared in print in 1974 in Donald Knuth's book. Martin Kruskal then defined exponentiation for the surreals in the mid-1970s — I attended a lecture he gave on this. An equivalent definition appeared in Harry Gonshor's book on surreal analysis published in 1986.

The transfer principle occurs in nonstandard analysis, but not in standard analysis, where it is not needed.
 
  • #33
zinq said:
Surreal numbers were invented around 1972 and, as far as I know, first appeared in print in 1974 in Donald Knuth's book. Martin Kruskal then defined exponentiation for the surreals in the mid-1970s — I attended a lecture he gave on this. An equivalent definition appeared in Harry Gonshor's book on surreal analysis published in 1986.
Yes, I'm describing 1986 as relatively recent. I guess I'm just old.
 
  • #34
bcrowell said:
No, this is incorrect. It's true that it's possible to do analysis using the surreals, but they do not have convenient properties for that purpose. One problem with the surreals is that they don't have the transfer principle.

Your own link later on says that the surreals DO have the transfer principle. See http://www.ohio.edu/people/ehrlich/Unification.pdf
 
  • #35
bcrowell said:
You seem to be making some assumptions about what a number is, how it relates to the real world, and the role of the real number system. If you examine these assumptions carefully, they don't hold.

The real numbers are not "really" "real" in the sense of relating perfectly to the "real" world. For example, the real number system has a distinction between rational and irrational numbers, but such a distinction can never be empirically verified for any real-world measurement: there is no way, even in principle, to determine whether the mass of a hydrogen atom is a rational number when expressed in units of kilograms.

So when you ask "where do we find them," the answer is that we don't, and in fact this applies even to the real number system, and, arguably, the integers. (If you enthusiastically assert this philosophical position for the integers, you are what's known as an ultrafinitist.)

You ask "what is an example of one?" It is possible to make up number systems in which there are concrete examples of infinitesimals. A pretty rich system of this type is the Levi-Civita numbers. Here is a calculator I wrote that allows you to play with the Levi-Civita numbers: http://lightandmatter.com/calc/inf/ . However, when you do calculus it is never necessary or desirable to define a specific, concrete infinitesimal. This is really not so different from the real numbers. The vast majority of real numbers can never be defined, because there are uncountably many real numbers, but only countably many definitions.
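
The flavor of the Levi-Civita numbers can be sketched in a few lines of code. The class below is a toy illustration of the idea, not the calculator linked above: it stores a number as a finite sum of rational powers of a positive infinitesimal eps and compares two numbers by the leading term of their difference.

```python
from fractions import Fraction

class LC:
    """Toy Levi-Civita number: a finite sum of coefficients times eps**q,
    with q rational and eps a positive infinitesimal."""
    def __init__(self, terms):
        # drop zero coefficients; keys are exponents, values are coefficients
        self.terms = {Fraction(q): c for q, c in terms.items() if c != 0}

    def __add__(self, other):
        out = dict(self.terms)
        for q, c in other.terms.items():
            out[q] = out.get(q, 0) + c
        return LC(out)

    def __mul__(self, other):
        out = {}
        for q1, c1 in self.terms.items():
            for q2, c2 in other.terms.items():
                out[q1 + q2] = out.get(q1 + q2, 0) + c1 * c2
        return LC(out)

    def __lt__(self, other):
        # self < other iff the leading (smallest-exponent) term of
        # (self - other) has a negative coefficient
        diff = (self + LC({q: -c for q, c in other.terms.items()})).terms
        if not diff:
            return False
        return diff[min(diff)] < 0

eps = LC({1: 1})          # the basic infinitesimal
one = LC({0: 1})
print((eps * eps) < eps)  # True: eps^2 is a smaller infinitesimal than eps
print(eps < one)          # True: eps is smaller than any positive real
```

The ordering illustrates the point about concreteness: eps, eps squared, and so on are perfectly definite objects here, yet nothing in calculus requires singling one out.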

I think you're missing the point of zinq's post. I guess he wants to know why the system of infinitesimals described in Keisler is consistent. This is an incredibly tough question. You would first need to construct the reals rigorously, which is not easy. Then you would have to apply the axiom of choice to define the hyperreals adequately (which has the consequence that the hyperreal number system admits no explicit construction). And then you'll want to prove the transfer principle, which is even more horrible.

Doing analysis the standard way is simply way easier since you'll only have to construct the reals and you can go from there. Sure, NSA is easy once you sweep all the annoying issues under the rug.
 
  • #36
micromass said:
I think you're missing the point of zinq's post. I guess he wants to know why the system of infinitesimals described in Keisler is consistent. This is an incredibly tough question. You would first need to construct the reals rigorously, which is not easy. Then you would have to apply the axiom of choice to define the hyperreals adequately (which has the consequence that the hyperreal number system admits no explicit construction). And then you'll want to prove the transfer principle, which is even more horrible.
I didn't see anything in zinq's post about consistency. Nor do I think that students learning freshman calculus need to see a proof of the consistency of the hyperreals, nor do I think they need to see an explicit construction of the hyperreals. (We don't explicitly construct the reals for them, either -- that usually waits until upper-division analysis, which they won't take unless they're math majors.)

micromass said:
Doing analysis the standard way is simply way easier since you'll only have to construct the reals and you can go from there.
Manipulating and interpreting infinitesimals is part of a common set of practices in science and engineering that has existed since the days of Leibniz and Newton, and has continued uninterrupted until today. Anyone who takes freshman calculus without learning this set of common practices is missing a significant part of what it means to be mathematically literate as a scientist or engineer. It's not optional. Since it's not optional, we really have two choices. (1) We can give them half-baked, wrong explanations of these issues, or pretend that the issues don't exist. (2) We can explain enough to make them competent in the relevant common practices.

micromass said:
Sure, NSA is easy once you sweep all the annoying issues under the rug.
It doesn't make much sense to refer to this as sweeping issues "under the rug." Freshmen learn calculus without ever seeing an explicit construction of the reals, and without ever having the consistency of the reals addressed. That doesn't mean that we're sweeping issues about the reals under the rug. These issues simply aren't relevant in freshman calculus. All of these concerns about consistency arise only because there was a false belief ca. 1880-1960 that infinitesimals were somehow inherently inconsistent. That's similar to the historical belief that noneuclidean geometry was inconsistent.
 
  • #37
It's not exactly that I'm worried about the consistency of nonstandard analysis. Rather, I'm very comfortable with the real numbers: first the integers, then the rationals, then the reals, as the completion of the rationals either through Cauchy sequences or via Dedekind cuts.

I remember having an epiphany when I was first introduced to the concept of the limit of a sequence or of a function. Nailing down the epsilon-delta definition precisely was a pinnacle of modern thought.
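
For readers who want it spelled out, the definition in question: ##\lim_{x\to a} f(x) = L## means that for every ##\epsilon > 0## there is a ##\delta > 0## such that ##0 < |x - a| < \delta## implies ##|f(x) - L| < \epsilon##. The derivative and the Riemann integral are then both defined in terms of such limits, with no infinitesimals anywhere.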

And with that, I am perfectly happy with the definitions of derivative and integral, and don't see the need for further messing-around with the fundamentals.

I'm a lot less comfortable with nonstandard analysis, which posits that there exists an infinitesimal without any further ado or description thereof.

I guess this is a matter of taste. But I feel very strongly that the nonstandard analysis route, even though it uses infinitesimals, sheds no light whatsoever on them.
 
  • #38
The Riemann integral as I learned it is the least upper bound of lower Riemann sums (and also the greatest lower bound of upper Riemann sums). There is no notion of infinitesimal in this definition.
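
The definition just stated can be illustrated numerically: as the partition is refined, the lower sums increase and the upper sums decrease, squeezing the integral between them. A minimal Python sketch (the inf and sup on each subinterval are approximated by sampling, which happens to be exact at the endpoints for monotone functions):

```python
def darboux_sums(f, a, b, n):
    """Lower and upper Darboux (Riemann) sums of f on [a, b] over n equal
    subintervals, estimating each subinterval's inf and sup by sampling."""
    h = (b - a) / n
    lower = upper = 0.0
    for i in range(n):
        # sample 11 points of the i-th subinterval, endpoints included
        xs = [a + i * h + j * h / 10 for j in range(11)]
        vals = [f(x) for x in xs]
        lower += min(vals) * h
        upper += max(vals) * h
    return lower, upper

# f(x) = x on [0, 1]: both sums converge to 1/2 as the partition is refined
for n in (10, 100, 1000):
    lo, up = darboux_sums(lambda x: x, 0.0, 1.0, n)
    print(n, lo, up)
```

With n = 10 the sums are 0.45 and 0.55; refining the partition tightens the squeeze around 1/2, with no infinitesimal anywhere in sight.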

The Lebesgue integral is the least upper bound of "lower sums" of measurable functions that take on finitely many values. Again, no idea of infinitesimal is used.

Expressions such as dx have been replaced with the idea of differential forms. Line integrals are integrals of 1-forms, area integrals of 2-forms, volume integrals of 3-forms, and so forth.

This approach to expressions such as dx has become standard usage today in many areas of mathematics and physics: differential topology, differential geometry, partial differential equations, complex manifolds, algebraic topology, the theory of Lie groups, classical mechanics, general relativity, string theory, and gauge theory, to name a few.

Even in 18th- and 19th-century mathematics the idea of infinitesimals may have already begun to slide into disuse. For instance, in Struik's book on classical differential geometry, expressions such as du are taken as placeholders for the derivative of an arbitrary parameterization of the function u. This is very close to the idea of a differential form.

At some point, researchers realized that tangent vectors live not in the manifold itself but in a companion space with its own properties: the tangent bundle. Along with the tangent bundle came tensor bundles. The cotangent bundle is the bundle of dual vectors to the tangent bundle, and it was realized that the differential df of a function f is a section of the cotangent bundle. Differentials were seen as smoothly (or continuously) varying linear maps on the fibers of the tangent bundle. Their integrals are taken over arbitrary curves by evaluating them on the tangent vectors to the curves. So instead of being the infinitesimal quantities of classical calculus, differentials are fields of dual tangent vectors. Remarkably, such a field does not need to be of the form dx or df for a function f, but can be any field of dual vectors. So a differential such as dx was seen to be an instance of a more general object, called a 1-form. This is a powerful and indispensably useful idea.

An expression like ##\int_a^b f(x)\,dx## means to integrate the 1-form ##f(x)\,dx## over the interval ##[a,b]##. The expression ##\int_c f(z)\,dz## means to integrate the complex 1-form ##f(z)\,dz## over the curve ##c##.
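
The recipe described above — integrate a 1-form over a curve by evaluating it on the curve's tangent vectors — can be sketched numerically. This toy code (midpoint rule; the example parameterizations are my own choices, not from the post) pulls the form back along a parameterization c(t) and sums f(c(t)) times c'(t) times the step:

```python
import cmath

def integrate_form(f, c, dc, t0, t1, n=10000):
    """Integrate the 1-form f dz over a curve: pull back along the
    parameterization c(t) and sum f(c(t)) * c'(t) * dt (midpoint rule)."""
    h = (t1 - t0) / n
    total = 0
    for k in range(n):
        t = t0 + (k + 0.5) * h
        total += f(c(t)) * dc(t) * h   # the form evaluated on the tangent vector
    return total

# Real case: the 1-form x dx over [0, 1], with c(t) = t, gives 1/2
print(integrate_form(lambda x: x, lambda t: t, lambda t: 1.0, 0.0, 1.0))

# Complex case: the 1-form dz/z over the unit circle gives 2*pi*i
val = integrate_form(lambda z: 1 / z,
                     lambda t: cmath.exp(1j * t),
                     lambda t: 1j * cmath.exp(1j * t),
                     0.0, 2 * cmath.pi)
print(val)
```

Note that nothing infinitesimal appears: the form is an honest linear functional on tangent vectors, and the integral is an ordinary limit of sums.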

Tensor bundles provide a natural setting for calculus on manifolds. The higher-dimensional analogues of 1-forms are n-forms, and these are integrated over n-dimensional domains. Differential forms have been greatly generalized and can take values not only in the base field but in vector spaces or even other vector bundles. For instance, the connection 1-form of a connection on a principal bundle takes values in the Lie algebra of the structure group.

None of this powerful machinery uses Leibniz's infinitesimals but instead uses tensors.

It should be pointed out that the tangent bundle (and the cotangent bundle) is not a mere formalism. It is a smooth manifold in its own right and contains much information about the topology of the underlying manifold. It isn't just a way of talking or a notational convenience. For instance, the sum of the indices of a vector field with isolated zeros is a combinatorial invariant of the underlying manifold, its Euler characteristic.
 
