Why Do Some Functions Lack Algebraic Integrals Despite Being Continuous?

  • Thread starter: Lyuokdea
  • Tags: Integration
Lyuokdea
I'm just curious about the theory of integration, and why, given continuity and differentiability, all functions have fairly easy and very calculable derivatives, while there are many functions that do not have an algebraic integral. Why is it that our math works perfectly and via set rules one way, but not the other? Is it something similar to all numbers having a square, but not all numbers having an integer square root? Obviously, that's not a mathematically sound way of saying it, nor are the two concepts really related, but I hope you understand where I'm going with that. Have there been any proofs as to why integration fails in certain circumstances? I understand that there are some functions, such as

\int_a^b e^{x^2} dx

that have no integral, but as they do have some calculable area under them, shouldn't they have an integral of some form? Is it just that we lack the math to express the form? I know there is a theorem that there can be no integral for the above function, but is that just for our current concept of mathematics? Otherwise, how can a function without an integral really exist?

Edit: Thanks graphic7
~Lyuokdea
 
Lyuokdea said:
Edit: LaTeX did a pretty bad job with that equation; it should be int(e^x^2)dx

~Lyuokdea

\int_a^b e^{x^2} dx
 
There's a big difference between "this function doesn't have an integral" and "this function doesn't have an elementary integral".


The fact is, it's a lot easier for a given function to be integrable than it is for it to be differentiable -- any piecewise continuous function is integrable! Every differentiable function is integrable, but not vice versa.
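
(To make that distinction concrete, here is a small Python sketch using the sympy library; the library choice and the interval [0, 1] are illustrative assumptions, not anything from the thread. The antiderivative of e^{x^2} exists, it just isn't elementary, and the area is still easy to compute numerically.)

Code:
import sympy as sp

x = sp.symbols('x')

# The antiderivative exists, but it is not elementary; sympy expresses it
# using the special function erfi.
print(sp.integrate(sp.exp(x**2), x))                 # sqrt(pi)*erfi(x)/2

# The definite integral (the area) is still perfectly computable, e.g. on [0, 1]:
print(sp.Integral(sp.exp(x**2), (x, 0, 1)).evalf())  # about 1.4626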
 
what makes differentiation so easy is the chain rule. most complicated functions are just compositions of other functions, and with the chain rule (together with the product rule), differentiating these is a straightforward procedure. integration, on the other hand, involves finding the function that when differentiated gives the original function, and is in general much more difficult.

the only analogy i can think of at the moment is that differentiating is like taking a picture and cutting it into shapes to make a puzzle, while integrating is like putting the puzzle back together. the first is easy, the second is usually harder. if you just start with a random collection of puzzle pieces, chances are they won't form a nice, pretty picture (an elementary function). but any picture can be made into a puzzle. it's not great, but you get the idea.
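
(As an illustration of that asymmetry, here is a short Python sketch with sympy; the particular function is an arbitrary example, not one from the thread. Differentiation of a composition is mechanical, while the corresponding indefinite integral typically comes back unevaluated because no elementary antiderivative is found.)

Code:
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x**2) * sp.exp(sp.cos(x))   # an arbitrary composition of elementary pieces

# Differentiating is a purely mechanical application of the chain and product rules:
print(sp.diff(f, x))

# Integrating the very same expression: in practice sympy hands the
# integral back unevaluated, since it finds no elementary antiderivative.
print(sp.integrate(f, x))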
 
Another important point is that the derivative of f(x) is defined by a formula (specifically, \lim_{h \rightarrow 0}\frac{f(x+h)-f(x)}{h}), while the anti-derivative of f(x) is defined only as "the function that has f(x) as its derivative".

This "asymmetry" is generally true of "direct" and "inverse" problems: given the formula y = (x^3 - 3x + 1)^4 x + 1, the "direct" problem "if x = 2, what is y?" is easy, while the "inverse" problem "if y = 2, what is x?" is much harder.
 
that's true, although i could say that integration is defined by the formula:

\int_a^b f(x) dx = \lim_{\Delta x \rightarrow 0} \sum_{n=0}^{\frac{b-a}{\Delta x}} f(a+n \Delta x) \cdot \Delta x

however, it's probably easier to invert the derivative than to use this.
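
(For what it's worth, that sum translates directly into a few lines of Python; the function e^{x^2}, the interval [0, 1], and the step size are just example choices. A plain left Riemann sum gives the area even though no elementary antiderivative exists.)

Code:
from math import exp

def riemann_sum(f, a, b, dx=1e-4):
    """Left Riemann sum: add up f(a + n*dx) * dx across [a, b], as in the formula above."""
    total, n = 0.0, 0
    while a + n * dx < b:
        total += f(a + n * dx) * dx
        n += 1
    return total

# Area under exp(x^2) on [0, 1]:
print(riemann_sum(lambda t: exp(t * t), 0.0, 1.0))   # about 1.4626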
 
