
Abstractions of integration and differentiation

  1. Jul 2, 2014 #1
    I have a few questions about the generalizations of concepts like integration and differentiation of single-valued functions of a single variable to vector-valued functions of several variables. All in the context of real analysis.

    Beginning with scalar-valued functions of several variables (i.e. functions of the form ##f:\mathbb{R}^n\rightarrow\mathbb{R}##), my first question is about partial differentiation and definite and indefinite integration of these functions with respect to one variable. Am I right in assuming that in this case there are no differences between these concepts and the ones defined in single-variable calculus (other than notation)? If I were programming a computer to perform these "operations" I wouldn't really have to teach it anything new, right? Just regard the function as a scalar-valued function of a single variable and treat the other "letters" you find as you would a constant like ##3## or ##5##.
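A quick numerical sketch of this point (the sample field ##f## and helper `partial_x` are made up for illustration): a partial derivative really is just a one-variable derivative with the other arguments frozen.

```python
import math

def f(x, y):
    # Sample scalar field: f(x, y) = x^2 * y + sin(y)
    return x**2 * y + math.sin(y)

def partial_x(f, x, y, h=1e-6):
    # Central difference in x alone; y is held fixed,
    # exactly as if it were a constant like 3 or 5
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# Analytically, d/dx (x^2 * y + sin y) = 2*x*y, which is 12 at (3, 2)
print(partial_x(f, 3.0, 2.0))
```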

    The actual abstractions of differentiation, definite and indefinite integration of single-valued functions of a single variable to vector-valued functions of several variables have to do with Jacobian matrices and some abstraction of multiple integrals, right?

    So the question is: if for functions of the form ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m## derivatives abstract to Jacobian matrices, what do integrals abstract to?

    I guess that one intermediary abstraction of (at least) definite integration for scalar-valued functions of several variables (##f:\mathbb{R}^n\rightarrow\mathbb{R}##) is the multiple integral. What is the abstraction of indefinite integrals in this case? What about further abstractions to functions of the form ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m##?
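To make the Jacobian abstraction concrete, here is a numerical sketch (the map `F` and helper `jacobian` are hypothetical, chosen only for illustration): for ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m## the derivative at a point is the ##m\times n## matrix of partial derivatives.

```python
def F(v):
    # Hypothetical map F(x, y) = (x*y, x + y^2), so F: R^2 -> R^2
    x, y = v
    return [x * y, x + y**2]

def jacobian(F, v, h=1e-6):
    # Finite-difference Jacobian: J[i][j] = dF_i / dx_j
    n = len(v)
    m = len(F(v))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        vp = list(v); vp[j] += h
        vm = list(v); vm[j] -= h
        Fp, Fm = F(vp), F(vm)
        for i in range(m):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * h)
    return J

# Exact Jacobian of this F is [[y, x], [1, 2y]], i.e. [[3, 2], [1, 6]] at (2, 3)
print(jacobian(F, [2.0, 3.0]))
```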
  3. Jul 2, 2014 #2



    Here's the way I've come to understand the vector generalization of the derivative...
    A function [itex]f:\mathbb{R}^n \to \mathbb{R}^m[/itex] has a derivative [itex]f'[/itex], an operator-valued function which assigns to each point of the domain a linear operator mapping differentials in the domain to differentials in the range:

    [tex] \mathbf{dY} = f'(\mathbf{X})\left[\mathbf{dX}\right][/tex]

    The derivative at [itex]\mathbf{X}[/itex] acts as a linear operator on the differential [itex] \mathbf{dX}[/itex] mapping it to the differential [itex]\mathbf{dY}[/itex].

    In one variable, linear operators are so simple that it is easy to miss the fact that they are linear operators: they are scalar multipliers (the slope maps a "run" to a "rise").
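This "derivative as linear operator" picture can be checked numerically (a sketch; the map `F` and its hand-computed Jacobian `J` are made up for illustration): applying [itex]f'(\mathbf{X})[/itex] to a small [itex]\mathbf{dX}[/itex] should reproduce the actual change [itex]\mathbf{dY}[/itex] to first order.

```python
def F(v):
    # Hypothetical map F(x, y) = (x*y, x + y^2)
    x, y = v
    return [x * y, x + y**2]

def J(v):
    # Exact Jacobian of F at (x, y), computed by hand
    x, y = v
    return [[y, x], [1.0, 2 * y]]

def apply_operator(M, dX):
    # The linear operator in action: dY = f'(X)[dX]
    return [sum(M[i][j] * dX[j] for j in range(len(dX))) for i in range(len(M))]

X = [2.0, 3.0]
dX = [1e-4, -2e-4]
dY_lin = apply_operator(J(X), dX)
dY_true = [F([X[0] + dX[0], X[1] + dX[1]])[i] - F(X)[i] for i in range(2)]
# dY_lin agrees with dY_true up to terms quadratic in dX
print(dY_lin, dY_true)
```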

    Now integration (the definite integral) is dual to differentiation in that an integral is a "chain" mapping a differential (form) to a number, and that mapping is linear. View the derivative as telling how differentials get mapped once we know how coordinates get mapped. We can then combine integration with this change of variables to get various theorems about integration.

    [itex] \int \mathbf{dY} = \int f'(\mathbf{X})[\mathbf{dX}] [/itex]

    Note that in the textbooks this [itex] f'[/itex] operator acting linearly on a vector [itex]\mathbf{dX}[/itex] can take different forms (a dot product, a cross product, or a scalar multiplier) depending on the dimensions of the domain and range.

    The principal one, in its full generality, is Stokes' formula (which reduces to the FTC in one variable), but that involves exterior derivatives (wedge products and such), i.e. fairly advanced tensor calculus. Get a book on differential geometry to start to see this.

    As for antiderivatives (the indefinite-integral concept): since higher-order derivatives are operators, the inversion is more involved. There are simply too many ways to map a set of linear-operator-valued functions to a single operator to speak of "the antiderivative" of a given operator-valued function. You would have to append far more than the arbitrary constant here.

    I personally think we should get rid of the indefinite integral notation altogether and speak of finding antiderivatives as such, using [itex]\int[/itex] only for definite integrals.

    Now with definite integrals, recall that in one variable you integrate along a set of values on the real number line, whereas in higher dimensions you can integrate over arbitrary regions (suitably well behaved, i.e. not too fractal in shape). Then using vector and tensor forms you can also define integrals on lower-dimensional regions (e.g. boundaries: lines, surfaces, hypersurfaces, ...).
  4. Jul 4, 2014 #3
    I am still a bit confused, so let me ask some other questions.

    I realized that part of my trouble with the abstraction of integration is that there seems to be an ambiguity regarding the significance of the limits of integration. From one perspective, the two points used as the limits of integration in single-variable calculus are just that: two points (two elements of the domain). But they can also be viewed as specifying a subset of the reals. So the line integral and the multiple integral are the same thing in single-variable calculus, yet they differ for scalar-valued functions of several variables, for instance.

    I guess from what you already said, if we choose to abstract the derivative to the gradient then the natural abstraction of the integral should be the line integral, since
    [tex] \int_\vec{a}^\vec{b}\nabla{}f\cdot{}d\vec{x}=f(\vec{b})-f(\vec{a}). [/tex]
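The gradient theorem quoted above can be verified numerically (a sketch; the field ##f## and the straight-line path are chosen arbitrarily for illustration):

```python
def f(x, y):
    # Sample potential: f(x, y) = x^2 * y
    return x**2 * y

def grad_f(x, y):
    # Its gradient: (2xy, x^2)
    return (2 * x * y, x**2)

def line_integral(grad, a, b, n=10000):
    # Integrate grad . dr along the straight path r(t) = a + t(b - a),
    # t in [0, 1], using the midpoint rule
    dx, dy = b[0] - a[0], b[1] - a[1]
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        gx, gy = grad(a[0] + t * dx, a[1] + t * dy)
        total += (gx * dx + gy * dy) / n
    return total

a, b = (0.0, 0.0), (1.0, 2.0)
# Should match f(b) - f(a) = 1^2 * 2 - 0 = 2, independent of the path
print(line_integral(grad_f, a, b))
```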

    But then does that mean that the derivative also abstracts to something that is the dual of the multiple integral? What would that be?

    Also, what sort of derivative is the line integral for scalar fields the dual of?
  5. Jul 16, 2014 #4



    The thing to remember is that the two point limits of integration are specifically the boundary of the interval over which you integrate. In higher dimensions you again will integrate over a region with a (possibly empty) boundary.

    To see the generalization of the FTC, which is the generalized Stokes formula, note that integrating some form of differential of a function over a region equates to integrating the function itself, in some way, over the boundary. This boundary integration trivializes to the sum of two values for the one-dimensional integral (the sum of f(b) and -f(a), with the sign change due to the opposite "outward" directions at those two boundary points).

    The key to proving these Stokes-type theorems, including the FTC, is to chop the region of integration into smaller and smaller pieces until you see:
    a.) the cancellations of adjacent but oppositely oriented boundaries of pieces, and
    b.) the pieces become small enough that their boundary evaluation becomes some form of difference quotient defining a derivative or differential.

    These are the two "logical dynamics" that occur dually and which will generalize.
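The one-dimensional case of dynamic (a.) is just a telescoping sum, which is easy to see in a short sketch (the function `f` here is arbitrary):

```python
import math

def f(x):
    return math.sin(x)

a, b, n = 0.0, 1.0, 1000
xs = [a + (b - a) * k / n for k in range(n + 1)]

# Each piece [x_k, x_{k+1}] contributes its boundary evaluation
# f(x_{k+1}) - f(x_k); interior boundaries appear once with each sign...
telescoped = sum(f(xs[k + 1]) - f(xs[k]) for k in range(n))

# ...and cancel pairwise, leaving only the outer boundary terms f(b) - f(a)
print(telescoped, f(b) - f(a))
```

Dynamic (b.) is the observation that each small contribution f(x_{k+1}) - f(x_k) is approximately f'(x_k) dx, which is what turns the boundary sum into the integral of the derivative.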

    So you don't want to generalize to, e.g., two vector limits, since the "two-ness" of the limits is peculiar to the two-point boundary (a 0-sphere) of the 1-ball (an interval). Once you move up to two dimensions, the boundary is a continuous curve. Here's a better way to set up limits of integration.

    Imagine all definite integrals are carried out over the entirety of the space: in one dimension you always integrate over the whole real line, in two dimensions over the whole plane, etc.

    You would then define the characteristic function of a set as the function mapping elements of that set to 1 and everything else to 0. You can then describe, say, the one-dimensional integral:
    [tex]\int_2^5 x^2 dx[/tex]
    as the integral:
    [tex] \int _{\mathbb R} x^2 \cdot \Theta_{\left[2,5\right]}(x) dx[/tex]
    where [itex]\Theta[/itex] is the characteristic function for the domain of integration (the interval [2,5]).

    This is a better notion of what to generalize in higher dimensions. Something like:
    [tex]\int_{\mathbb R^n} F(\mathbf{x})\cdot \Theta_S(\mathbf{x}) dx^n[/tex]
    where S here is some suitably well behaved set in the n dimensional space.
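The one-dimensional example above can be checked numerically (a sketch; the helper names `indicator` and `integrate` are made up): multiplying by the characteristic function and integrating over a larger interval reproduces the ordinary definite integral.

```python
def indicator(lo, hi):
    # Characteristic function Theta of the interval [lo, hi]
    return lambda x: 1.0 if lo <= x <= hi else 0.0

def integrate(g, lo, hi, n=200000):
    # Midpoint rule over a region containing the support of g
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

theta = indicator(2.0, 5.0)
# Integrate x^2 * Theta_[2,5](x) over the larger interval [-10, 10];
# the exact value of the original integral is (5^3 - 2^3) / 3 = 39
print(integrate(lambda x: x**2 * theta(x), -10.0, 10.0))
```

The midpoint rule only resolves the jumps of the indicator to within one grid cell, so the result matches 39 up to a small discretization error.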

    And it is in this setting that one may then consider differential identities, differentials and derivatives of characteristic functions (which we must generalize to distributions rather than functions, as with the Dirac delta "function").

    You will find then that the derivative and integral concepts are neither inverses nor dual to each other but are rather intermixed but distinct mathematical constructs. There is no sensible way to preserve the coincidental duality/inversion of 1 var calculus as you generalize.