Why are integrals well defined?

  • Thread starter: jamesbob
I'm having trouble explaining why integrals are well defined. For instance:

\int_{0}^{\infty} \frac{1}{(x + 16)^{\frac{5}{4}}}dx.

Here do I say something like:

The integral behaves at zero, and at \infty, (x + 16)^{\frac{5}{4}} > x^{\frac{5}{4}}, therefore the integral diverges.
 
Your sentence is essentially meaningless:
"The integral behaves at zero" :confused:

Besides, your conclusion is wrong.
 
It depends on what you mean by "well-defined". If you mean "is a real number" then look at what happens when you apply the definition of the improper integral to what you have, and show that you really do get a real number.
 
Actually, the value of your integral equals a most special prime.
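Applying the previous poster's advice directly: here is a short numerical sketch (an editorial illustration, not part of the thread) that uses the antiderivative F(x) = -4(x + 16)^{-1/4} of the integrand to show the limit in the definition of the improper integral really does exist and is a real number:

```python
# Sketch: check that the improper integral of (x + 16)^(-5/4) over [0, inf)
# converges, using its antiderivative F(x) = -4 * (x + 16)^(-1/4).
# The integral is lim_{b -> inf} F(b) - F(0) = 0 - (-4 / 16^(1/4)) = 2.

def F(x):
    """Antiderivative of (x + 16)^(-5/4)."""
    return -4.0 * (x + 16.0) ** -0.25

# As b grows, F(b) -> 0, so F(b) - F(0) approaches the limit value 2.
for b in (1e2, 1e6, 1e12):
    print(b, F(b) - F(0.0))

improper_integral = 0.0 - F(0.0)  # the limit itself
print(improper_integral)  # 2.0
```

The printed values approach 2, which is indeed the prime hinted at above (the only even one).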
 
jamesbob said:
I'm having trouble explaining why integrals are well defined. For instance:

\int_{0}^{\infty} \frac{1}{(x + 16)^{\frac{5}{4}}}dx.

Here do I say something like:

The integral behaves at zero, and at \infty, (x + 16)^{\frac{5}{4}} > x^{\frac{5}{4}}, therefore the integral diverges.
No, it does not make sense. This is an improper integral. As you know:
\int_{a}^{b} f(x) \, dx = F(b) - F(a), so what if b is not finite, i.e. b tends to infinity? We have:
\int_{a}^{\infty} f(x) \, dx = \lim_{b \rightarrow \infty} \int_{a}^{b} f(x) \, dx = \lim_{b \rightarrow \infty} F(b) - F(a)
-----------
Say you want to evaluate:
\int_{1}^{\infty} \frac{dx}{x^2}
We have:
\int_{1}^{\infty} \frac{dx}{x^2} = \lim_{b \rightarrow \infty} \left( -\frac{1}{b} \right) + \frac{1}{1} = 1.
Can you get this? :)
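This example can also be checked numerically; below is a minimal sketch (the `midpoint_integral` helper is illustrative, not from the thread) that approximates the integral of 1/x^2 over [1, b] with a midpoint Riemann sum and watches it approach 1 as b grows:

```python
# Sketch: midpoint Riemann sum for the integral of f over [a, b].
def midpoint_integral(f, a, b, n=200_000):
    """Approximate the integral of f on [a, b] with n midpoint rectangles."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 1.0 / (x * x)

# The exact value on [1, b] is 1 - 1/b, which tends to 1 as b -> infinity.
for b in (10.0, 100.0, 1000.0):
    print(b, midpoint_integral(f, 1.0, b))
```

The sums tend to 1, matching the limit computed from the antiderivative -1/x above.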
 
Yeah, thanks. It's just the way my silly university asks questions and explains things that confused me. Thanks for the help :smile:
 