Abstractions of integration and differentiation

In summary: the thread concerns generalizing integration and differentiation from single-valued functions of a single variable to vector-valued functions of several variables, in the context of real analysis. The first question is whether partial differentiation, definite integration, and indefinite integration of scalar-valued functions of several variables differ from their single-variable counterparts; it is determined that they do not, other than in notation. For vector-valued functions, differentiation abstracts to the Jacobian matrix, while integration abstracts to the line integral, with the multiple integral as an intermediate abstraction. The limits of integration are best viewed as expressing the boundary of the region of integration, itself a submanifold of the ambient manifold.
  • #1
V0ODO0CH1LD
I have a few questions about the generalizations of concepts like integration and differentiation of single-valued functions of a single variable to vector-valued functions of several variables. All in the context of real analysis.

Beginning with scalar-valued functions of several variables (i.e. functions of the form ##f:\mathbb{R}^n\rightarrow\mathbb{R}##), my first question is about partial differentiation and definite and indefinite integration of these functions with respect to one variable. Am I right in assuming that in this case there are no differences between these concepts and the ones defined in single-variable calculus (other than notation)? If I were programming a computer to perform these "operations" I wouldn't really have to teach it anything new, right? Just regard the function as a scalar-valued function of a single variable and treat the other "letters" you find as you would a constant like ##3## or ##5##.
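That intuition can be checked directly with a computer algebra system. A minimal sketch (the function ##f(x,y) = x^2 y + 3x## is my own illustrative choice, not from the thread):

```python
# Partial differentiation and integration of a scalar field f(x, y)
# treat the "other" variable y exactly like the constant 3.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + 3*x

df_dx = sp.diff(f, x)      # y is held fixed, just like the 3
print(df_dx)               # 2*x*y + 3

F = sp.integrate(f, x)     # indefinite integral in x; y again a constant
print(F)                   # x**3*y/3 + 3*x**2/2
```

Nothing new is happening here compared with single-variable calculus; sympy applies the same power rule it would apply to `3*x**2 + 3*x`.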

The actual abstractions of differentiation, definite and indefinite integration of single-valued functions of a single variable to vector-valued functions of several variables have to do with Jacobian matrices and some abstraction of multiple integrals, right?

So the question is: if for functions of the form ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m## derivatives abstract to Jacobian matrices, what do integrals abstract to?
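For concreteness, here is what the Jacobian abstraction looks like computationally. A sketch with an arbitrary illustrative map ##f:\mathbb{R}^2\rightarrow\mathbb{R}^3## (the specific function is my own choice):

```python
# The Jacobian of f: R^2 -> R^3 collects every partial derivative
# into a single 3x2 matrix: one row per component, one column per variable.
import sympy as sp

u, v = sp.symbols('u v')
f = sp.Matrix([u*v, u + v, u**2])        # f: R^2 -> R^3
J = f.jacobian(sp.Matrix([u, v]))

print(J)   # Matrix([[v, u], [1, 1], [2*u, 0]])
```

Each row of `J` is the gradient of one component function, which is why the scalar case (one row) and the single-variable case (one entry) both fall out as special cases.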

I guess that one intermediary abstraction of (at least) definite integration for scalar-valued functions of several variables (##f:\mathbb{R}^n\rightarrow\mathbb{R}##) is the multiple integral. What is the abstraction of indefinite integrals in this case? What about further abstractions to functions of the form ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m##?
 
  • #2
Here's the way I've come to understand the vector generalization of the derivative...
A function [itex]f:\mathbb{R}^n \to \mathbb{R}^m[/itex] has a derivative [itex]f'[/itex], the operator-valued function that defines at each point of the domain a linear operator mapping differentials in the domain to differentials in the range:

[tex] \mathbf{dY} = f'(\mathbf{X})\left[\mathbf{dX}\right][/tex]

The derivative at [itex]\mathbf{X}[/itex] acts as a linear operator on the differential [itex] \mathbf{dX}[/itex] mapping it to the differential [itex]\mathbf{dY}[/itex].
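This "derivative as linear operator" picture can be tested numerically. A sketch with an illustrative map (the function and point are my own choices): the Jacobian at [itex]\mathbf{X}[/itex], applied to a small displacement [itex]\mathbf{dX}[/itex], should reproduce the actual change in [itex]f[/itex] to first order.

```python
# dY = f'(X)[dX]: the Jacobian at X is a linear map sending a small
# displacement dX to the induced displacement dY in the range.
import numpy as np

def f(p):
    x, y = p
    return np.array([x * y, x + y])      # f: R^2 -> R^2

def jacobian_at(p):                      # f'(X), computed by hand
    x, y = p
    return np.array([[y, x],
                     [1.0, 1.0]])

X = np.array([2.0, 3.0])
dX = np.array([1e-6, -2e-6])

dY_linear = jacobian_at(X) @ dX          # the linear-operator prediction
dY_actual = f(X + dX) - f(X)             # the true change in f

print(np.allclose(dY_linear, dY_actual, atol=1e-10))  # True
```

The agreement is only to first order in `dX`, which is exactly the content of differentiability: the error term vanishes faster than the displacement.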

In one variable, linear operators are so simple that you miss the fact that they are linear operators: they are just scalar multipliers (the slope maps a "run" to a "rise").

Now integration (the definite integral) is dual to differentiation in that an integral is a "chain" mapping a differential (form) to a number, and that mapping is linear. View the derivative as telling how differentials get mapped once we know how coordinates get mapped. We can then combine integration with this change of variables to get various theorems about integration.


[itex] \int \mathbf{dY} = \int f'(\mathbf{X})[\mathbf{dX}] [/itex]

Note that in the textbooks this [itex] f'[/itex] operator acting linearly on a vector [itex]\mathbf{dX}[/itex] can take different forms: a dot product, a cross product, or a scalar multiplier, depending on the dimensions of the domain and range.


The principal one is Stokes' formula (which reduces to the fundamental theorem of calculus in one variable) in its full generality, but that involves exterior differentials (wedge products and such), which is high-end tensor calculus. Get a book on differential geometry to start to see this.

As for antiderivatives (and the indefinite integral concept): since higher-order derivatives are operators, the inversion is more involved. There are just too many ways to map a set of linear-operator-valued functions to a single operator to speak of "the antiderivative" of a given operator-valued function. You have to append a lot more than an arbitrary constant here.

I personally think we should get rid of the indefinite integral notation altogether and speak of finding antiderivatives as such, using [itex]\int[/itex] only for definite integrals.

Now with definite integrals, recall that in one variable you integrate along a set of values on the real number line, whereas in higher dimensions you can integrate over arbitrary regions (suitably well behaved, i.e. not too fractal in shape). Then, using vector and tensor forms, you can also define integrals on lower-dimensional regions (e.g. boundaries: lines, surfaces, hypersurfaces, ...).
 
  • #3
I am still a bit confused.. So let me ask some other questions.

I realized that part of my trouble with the abstraction of integration is that there seems to be an ambiguity regarding the significance of the limits of integration. From one perspective, the two points used to define the limits of integration in single-variable calculus are just that: two points (two elements of the domain). But they can also be viewed as specifying a subset of the reals. So the line integral and the multiple integral are the same thing in single-variable calculus; however, they differ for scalar-valued functions of several variables, for instance.

I guess from what you already said, if we choose to abstract the derivative to the gradient then the natural abstraction of the integral should be the line integral, since
[tex] \int_\vec{a}^\vec{b}\nabla{}f\cdot{}d\vec{x}=f(\vec{b})-f(\vec{a}). [/tex]
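The gradient theorem quoted above can be checked numerically. A sketch with an illustrative scalar field and a straight-line path (both my own choices): the line integral of [itex]\nabla f[/itex] from [itex]\vec{a}[/itex] to [itex]\vec{b}[/itex] should equal [itex]f(\vec{b}) - f(\vec{a})[/itex].

```python
# Numerical check of the gradient theorem for f(x, y) = x^2 * y.
import numpy as np

def f(p):
    x, y = p
    return x**2 * y

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])

# Straight-line path r(t) = a + t*(b - a), t in [0, 1];
# midpoint Riemann sum of grad f . dr along the path.
N = 100_000
t = (np.arange(N) + 0.5) / N
pts = a + np.outer(t, b - a)                          # points on the path
grad = np.column_stack([2 * pts[:, 0] * pts[:, 1],    # df/dx = 2xy
                        pts[:, 0] ** 2])              # df/dy = x^2
line_integral = (grad @ (b - a)).sum() / N            # dr = (b - a) dt

print(abs(line_integral - (f(b) - f(a))) < 1e-8)      # True
```

Because [itex]\nabla f[/itex] is a gradient field, any other path from [itex]\vec{a}[/itex] to [itex]\vec{b}[/itex] would give the same value; that path independence is the content of the theorem.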

But then does that mean that the derivative also abstracts to something that is the dual of the multiple integral? What would that be?

Also, what sort of derivative is the line integral for scalar fields the dual of?
 
  • #4
The thing to remember is that the two point limits of integration are specifically the boundary of the interval over which you integrate. In higher dimensions you again will integrate over a region with a (possibly empty) boundary.

To see the generalization of the FTC, which is the generalized Stokes formula, note that integrating some form of differential of a function over a region equates to integrating the function itself, in some way, over the boundary. This boundary integration trivializes to the sum of two values for the one-dimensional integral (the sum of f(b) and -f(a), with the sign change due to the opposite "outward" directions of those two boundary points).

The key to proving these Stokes-type theorems, including the FTC, is to chop the region of integration into smaller and smaller pieces until you see:
a.) the cancellations of adjacent but oppositely oriented boundaries of pieces, and
b.) the pieces become small enough that their boundary evaluation becomes some form of difference quotient defining a derivative or differential.

These are the two "logical dynamics" that occur dually and which will generalize.
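In one dimension the cancellation in (a) is just a telescoping sum, which a few lines of code make concrete (the function and interval below are my own illustrative choices):

```python
# Chop [a, b] into pieces; each piece contributes f(right) - f(left)
# at its boundary. Interior contributions cancel in adjacent pairs,
# and the telescoping sum leaves exactly f(b) - f(a).
import numpy as np

def f(x):
    return np.sin(x) + x**2

a, b = 0.0, 2.0
cuts = np.linspace(a, b, 1001)                       # 1000 small pieces

boundary_sum = np.sum(f(cuts[1:]) - f(cuts[:-1]))    # telescopes
print(np.isclose(boundary_sum, f(b) - f(a)))         # True
```

In higher dimensions the same cancellation happens along the shared faces of adjacent cells, which is the mechanism behind the Stokes-type theorems mentioned above.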

So you don't want to generalize to, e.g., two vector limits, as the "twoness" of the limits is unique to the two-point boundary (0-sphere) of the 1-ball (interval). Once you move up to two dimensions, the boundary is a continuous curve. Here's a better way to set up limits of integration.

Imagine all definite integrals are carried out over the entirety of the space: in one dimension you always integrate over the whole real line, in two dimensions over the whole plane, etc.

You would then define the characteristic function of a set as the function mapping elements of that set to 1 and everything else to 0. You can now describe, say, the one-dimensional integral:
[tex]\int_2^5 x^2 dx[/tex]
as the integral:
[tex] \int _{\mathbb R} x^2 \cdot \Theta_{\left[2,5\right]}(x) dx[/tex]
where [itex]\Theta[/itex] is the characteristic function for the domain of integration (the interval [2,5]).
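The one-dimensional example above is easy to check numerically. A sketch (using a wide finite interval as a stand-in for "all of [itex]\mathbb{R}[/itex]", since the integrand has compact support):

```python
# Integrate x^2 over a wide interval, weighted by the characteristic
# function of [2, 5]; the result should match the ordinary integral
# of x^2 from 2 to 5, which is (5^3 - 2^3)/3 = 39.
import numpy as np

def theta(x, lo=2.0, hi=5.0):
    """Characteristic function of [lo, hi]: 1 inside, 0 outside."""
    return ((x >= lo) & (x <= hi)).astype(float)

x = np.linspace(-10.0, 10.0, 2_000_001)   # fine grid on a wide domain
dx = x[1] - x[0]
integral = np.sum(x**2 * theta(x) * dx)   # Riemann sum

print(abs(integral - 39.0) < 1e-2)        # True
```

The characteristic function carries all the information about the region; the grid and the integrand never need to know where the limits are.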


This is a better notion of what you generalize in higher dimensions, something like:
[tex]\int_{\mathbb R^n} F(\mathbf{x})\cdot \Theta_S(\mathbf{x}) dx^n[/tex]
where S here is some suitably well behaved set in the n dimensional space.

And it is in this setting that one may then consider differential identities, differentials and derivatives of characteristic functions (which we must generalize to distributions rather than functions, as with the Dirac delta "function").

You will find then that the derivative and integral concepts are neither inverses nor dual to each other but are rather intermixed but distinct mathematical constructs. There is no sensible way to preserve the coincidental duality/inversion of 1 var calculus as you generalize.
 

1. What is integration and differentiation?

Integration and differentiation are two fundamental concepts in calculus that are used to find the relationships between functions and their rates of change. Integration deals with finding the area under a curve, while differentiation deals with finding the slope of a curve.

2. What are the applications of integration and differentiation?

The applications of integration and differentiation are numerous and can be found in various fields such as physics, engineering, economics, and more. Some examples include calculating velocity and acceleration, finding the optimal solution to a problem, and determining the growth rate of a population.

3. How are integration and differentiation related?

Integration and differentiation are inverse operations, meaning that they undo each other: differentiating the integral of a function recovers the original function, and integrating the derivative of a function recovers it up to a constant. This relationship is made precise by the fundamental theorem of calculus.

4. What are some common techniques for integration and differentiation?

Some common techniques for integration include using the power rule, substitution, and integration by parts. For differentiation, common techniques include using the product rule, quotient rule, and chain rule.

5. How can I improve my understanding of integration and differentiation?

To improve your understanding of integration and differentiation, it is important to practice solving various problems and to have a solid understanding of the basic concepts. You can also seek out additional resources such as textbooks, online tutorials, and practice quizzes to further enhance your knowledge.
