How do the different concepts of integration fit together?

V0ODO0CH1LD
I'm making this new post in the general math section since I don't know what field of math this question belongs to anymore.

So the picture I currently have of how integration and differentiation abstract from single-variable calculus to multivariable calculus is that the derivative of a function ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m## abstracts to the Jacobian matrix ##J_f##, and the integral of that derivative abstracts to the line integral
$$\int_a^b J_f\cdot dx$$
where ##a,b\in\mathbb{R}^n## and ##dx=\langle dx_1,dx_2,\ldots,dx_n\rangle##. This picture makes sense to me because
$$\int_a^b J_f\cdot dx=f(b)-f(a)$$
where ##f(a)=\langle f_1(a),f_2(a),\ldots,f_m(a)\rangle##. This is all in the case that ##f## is a function of the form
$$f(x_1,\ldots,x_n)=\langle f_1(x_1,\ldots,x_n),\ldots,f_m(x_1,\ldots,x_n)\rangle.$$
Also in this picture, ##J_f## is a linear operator at each point, but letting the point vary also makes it a function ##J_f:\mathbb{R}^n\rightarrow\mathbb{R}^{nm}##. However, not every function ##F:\mathbb{R}^n\rightarrow\mathbb{R}^{nm}## is the Jacobian of some function ##f:\mathbb{R}^n\rightarrow\mathbb{R}^m##. But provided a trajectory from the point ##a## to the point ##b## (both in ##\mathbb{R}^n##), the function ##F## can still be integrated the same way as above, although the fundamental theorem of calculus won't apply (i.e. the value of the integral depends on the trajectory).
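To sanity-check that picture, here is a minimal numerical sketch. The particular ##f## is just a hypothetical choice for illustration: it approximates ##\int_a^b J_f\cdot dx## along a straight segment with the midpoint rule and compares the result with ##f(b)-f(a)##:

```python
import numpy as np

# Hypothetical example function f : R^2 -> R^2 (any smooth f would do):
# f(x, y) = (x*y, x + y^2)
def f(p):
    x, y = p
    return np.array([x * y, x + y ** 2])

def jacobian_f(p):
    x, y = p
    # Row i holds the gradient of f_i
    return np.array([[y,   x],
                     [1.0, 2.0 * y]])

def line_integral(J, a, b, steps=10000):
    """Midpoint-rule approximation of the integral of J(x) . dx
    along the straight segment x(t) = a + t*(b - a), t in [0, 1]."""
    ts = (np.arange(steps) + 0.5) / steps
    total = np.zeros(len(a))
    for t in ts:
        total += J(a + t * (b - a)) @ (b - a) / steps
    return total

a = np.array([0.0, 0.0])
b = np.array([1.0, 2.0])
print(line_integral(jacobian_f, a, b))  # matches f(b) - f(a) = [2, 5]
print(f(b) - f(a))
```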

If this picture is correct (is it?) then my first question is: where does the multiple integral fit into it? I realize that in single-variable calculus the line integral and the definite integral are basically the same thing, right? But since in vector calculus two points no longer bound a region of the domain, these two concepts start to differ. Also, I know there are connections between line integrals and multiple integrals (via things like Stokes' theorem), but what I am wondering is how to think of them independently, so that I can then think of these "bridges" between the concepts. Unless these "bridges" are the only way to get there :)
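For concreteness, here is the kind of path dependence I mean, sketched numerically with a standard field that is not the Jacobian of anything (the field and the two paths are just illustrative choices):

```python
import numpy as np

# F(x, y) = (-y, x) is NOT the Jacobian (gradient) of any f : R^2 -> R,
# so its line integral depends on the trajectory taken from a to b.
def F(p):
    x, y = p
    return np.array([-y, x])

def path_integral(F, path, steps=10000):
    """Midpoint-rule approximation of the integral of F . dx along
    a parametrized path : [0, 1] -> R^2 (central-difference tangent)."""
    h = 1.0 / steps
    ts = (np.arange(steps) + 0.5) * h
    total = 0.0
    for t in ts:
        tangent = (path(t + h / 2) - path(t - h / 2)) / h
        total += F(path(t)) @ tangent * h
    return total

# Two trajectories from (1, 0) to (-1, 0): the upper and lower unit semicircles
upper = lambda t: np.array([np.cos(np.pi * t), np.sin(np.pi * t)])
lower = lambda t: np.array([np.cos(np.pi * t), -np.sin(np.pi * t)])

print(path_integral(F, upper))  # approximately +pi
print(path_integral(F, lower))  # approximately -pi
```

Same endpoints, different trajectories, different answers, so no fundamental-theorem shortcut is available for this ##F##.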

The other thing I was trying to fit in this picture is the concept of the surface integral.

Thanks!
 
The line integral you described is not a complete definition of integration, just as integrating along an axis was not when you first learned it.

You are not so much learning different ideas of integration but different parts of the same idea.
As you advance you fit it all together.

Integration is a process rather than a thing, a tool that has a number of uses.
 
How does the line integral not represent a complete definition? What is missing from that picture? What is this definition of integrals that I should be looking into to understand how all of this fits together?

In the meantime, let me ask a more specific question. If we abstract the domain of our functions from single-variable calculus to ##\mathbb{R}^n##, I can see how the Jacobian matrix and the vector differential ##dx=\langle dx_1,dx_2,\ldots,dx_n\rangle## get along, but how does (for instance) the area differential ##dA## fit in with all this? It is definitely not an element of the domain. Is it a subset of the domain, or an element of some other set? And if we abstract ##dx## from single-variable calculus to ##dA##, and the integral to multiple integrals, does differentiation still abstract to the Jacobian matrix? What is the picture here?
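For concreteness, the way I currently picture ##dA## numerically is as the area of one small grid cell. A rough midpoint-rule sketch (the integrand ##g## is just a hypothetical example):

```python
import numpy as np

# Sketch of dA as the area of one small grid cell: approximate the double
# integral of g(x, y) = x*y over [0, 1] x [0, 2] by summing g at cell
# midpoints times the cell area dA = dx * dy.
def double_integral(g, x0, x1, y0, y1, n=400):
    xs = x0 + (np.arange(n) + 0.5) * (x1 - x0) / n
    ys = y0 + (np.arange(n) + 0.5) * (y1 - y0) / n
    dA = ((x1 - x0) / n) * ((y1 - y0) / n)
    X, Y = np.meshgrid(xs, ys)
    return np.sum(g(X, Y)) * dA

g = lambda x, y: x * y
print(double_integral(g, 0, 1, 0, 2))  # exact value is 1
```

Here ##dA## is tied to a region of the domain being chopped up, not to a trajectory through it, which is exactly why I can't see how it lines up with the ##dx## in the line integral.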
 