I had a question regarding convergent and divergent integrals. I want to know the "exact" definition of an improper integral that converges. Wikipedia states that

For a while, I took that as a valid answer and claimed that any integral that has a finite answer must be convergent. However, I ran into a problem.
If you solve [tex]\int_0^2 \frac{dx}{1-x}[/tex], it turns out to diverge on both pieces (0 to 1, and 1 to 2). My textbook, which has never failed me so far, states that it diverges: its solution evaluates only one of the two pieces, which of course goes to infinity, so the whole integral diverges. However, if you evaluate both pieces with the same limit variable, you're left with [tex]\ln\left|\dfrac{1-b}{1-b}\right|[/tex], which is 0 as [tex]b\rightarrow 1[/tex]. Clearly, this combined limit exists.
The problem lies in the fact that this integral "diverges" twice, at the same rate but in opposite directions. Evaluating the two pieces together logically leads to 0, because they're opposites of each other.
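Here's a quick numerical sketch of what I mean (my own check, not from the textbook; F is just a name I picked for the antiderivative [tex]-\ln|1-x|[/tex]): each piece blows up as the window around 1 shrinks, but the symmetric sum stays 0.

```python
import math

def F(x):
    # an antiderivative of 1/(1 - x)
    return -math.log(abs(1 - x))

for eps in (1e-2, 1e-4, 1e-8):
    left = F(1 - eps) - F(0)    # integral from 0 to 1 - eps
    right = F(2) - F(1 + eps)   # integral from 1 + eps to 2
    print(f"eps={eps:g}: left={left:.4f}, right={right:.4f}, sum={left + right:.4f}")
```

The left piece tends to [tex]+\infty[/tex] and the right piece to [tex]-\infty[/tex]; only their symmetric combination is 0.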

What do you guys think: does this converge or diverge?

Maybe I can do it differently, though. Could I write the integral as the limit as [tex]b[/tex] approaches 1 of [tex]\int_0^{b^2} \frac{dx}{1-x} + \int_b^2 \frac{dx}{1-x}[/tex]?

Very interesting idea. That leads to [tex]\ln\left|\frac{1-b}{1-b^2}\right|[/tex], which is [tex]-\ln 2[/tex] by L'Hôpital or by simply canceling the factor [tex]1-b[/tex]. That seems to make no sense at all, even though the arithmetic looks right. Changing the exponent [tex]n[/tex] in just one of [tex]\int_0^{b^n}[/tex] or [tex]\int_{b^n}^2[/tex] changes the answer, so I would have to guess that doing so is not allowed. However, I'm only a high school student, so I know next to nothing compared to you guys.
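That guess can be checked numerically (my own sketch; split_value is an invented name that just evaluates the antiderivative [tex]-\ln|1-x|[/tex] formally at the endpoints): with exponent [tex]n[/tex] on the inner limit, the value tends to [tex]-\ln n[/tex], so every choice of approach rate gives a different answer.

```python
import math

def F(x):
    # an antiderivative of 1/(1 - x)
    return -math.log(abs(1 - x))

def split_value(b, n):
    # formal value of int_0^{b^n} dx/(1-x) + int_b^2 dx/(1-x)
    return (F(b ** n) - F(0)) + (F(2) - F(b))

b = 1 - 1e-9
for n in (1, 2, 3):
    print(n, split_value(b, n), -math.log(n))
```

Since the answer depends on [tex]n[/tex], no single number can be assigned to the integral this way.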

The problem is that [tex]\infty - \infty[/tex] is not necessarily 0.

Your problem could be reworded as the following integral:

[tex]
\int_{-1}^{1} \frac{1}{x} dx
[/tex]

Intuitively, both sides grow at the same rate but with opposite sign, so the integral should be zero. This is where mathematical rigor > intuition. As it turns out, the integral is undefined (one complex-analysis interpretation gives [tex]i \pi[/tex]).
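You can see the same rate-dependence numerically for this integral (my own sketch; cut is an invented helper using the antiderivative [tex]\ln|x|[/tex]): removing a symmetric window [tex](-\epsilon, \epsilon)[/tex] around 0 gives 0, while a lopsided window [tex](-\epsilon, 2\epsilon)[/tex] gives [tex]-\ln 2[/tex], so no single value can honestly be assigned.

```python
import math

def cut(a, c):
    # int_{-1}^{-a} dx/x + int_{c}^{1} dx/x, via the antiderivative ln|x|
    return (math.log(a) - math.log(1.0)) + (math.log(1.0) - math.log(c))

eps = 1e-9
print(cut(eps, eps))      # symmetric window
print(cut(eps, 2 * eps))  # lopsided window
```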

You have to be very careful to see if the integral actually converges or not.

This is how I would do the math. I'm not saying that [tex]\infty - \infty[/tex] is 0, but by taking the same [tex]b[/tex] on both sides and using the difference of logs, you get an answer of 0:
[tex]\lim_{b\rightarrow 0^+} \left( \int_{-1}^{-b} \dfrac{dx}{x} + \int_b^1 \dfrac{dx}{x} \right)[/tex]

You can't combine them like that, though; the definition requires taking the limit of each piece separately. The combined limit does indeed exist, and it's 0, as you pointed out, but that's not what the improper integral is defined to be.

The improper integral is said to exist only if both limits exist separately, and in that case its value is their sum. So when I say that it's the sum of their limits rather than the limit of their sum, it's a matter of definition.
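Written out (just restating the standard definitions):

[tex]\int_{-1}^{1} \frac{dx}{x} \text{ exists } \iff \lim_{a\rightarrow 0^-} \int_{-1}^{a} \frac{dx}{x} \text{ and } \lim_{b\rightarrow 0^+} \int_{b}^{1} \frac{dx}{x} \text{ both exist}[/tex]

Neither limit exists here, so the integral diverges. The symmetric combination that does equal 0 has its own name, the Cauchy principal value:

[tex]\mathrm{P.V.} \int_{-1}^{1} \frac{dx}{x} = \lim_{\epsilon\rightarrow 0^+} \left( \int_{-1}^{-\epsilon} \frac{dx}{x} + \int_{\epsilon}^{1} \frac{dx}{x} \right) = 0[/tex]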

Okay, so the symmetric limit is indeed 0, but the integral diverges, because if any piece of it diverges, the whole integral is divergent. But HallsofIvy's definition of the improper integral makes it [tex]\infty - \infty[/tex] and not 0?

Okay! Thanks a lot, guys! I really appreciate it. I guess I got stuck just because of the whole "limit on each integral separately" idea, since my teacher never did that when we split up integrals.
This may be over my head, and you guys have already helped me out enough, but how do you show that it's [tex]i\pi[/tex]? I'm really interested. Does that involve contour integrals or something?
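Here's a sketch of the usual contour argument (only a sketch; the sign depends on conventions). Treat it as a complex integral and detour around the pole at 0 along a small semicircle of radius [tex]\epsilon[/tex]. The straight pieces give the principal value, which is 0, and on the semicircle [tex]z = \epsilon e^{i\theta}[/tex], so going below the pole ([tex]\theta[/tex] from [tex]-\pi[/tex] to [tex]0[/tex]):

[tex]\int_{C_\epsilon} \frac{dz}{z} = \int_{-\pi}^{0} \frac{i \epsilon e^{i\theta}}{\epsilon e^{i\theta}} \, d\theta = i\pi[/tex]

Detouring above the pole gives [tex]-i\pi[/tex] instead, which is another way of seeing that the real integral itself has no well-defined value.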