How is this theory used? Hmmm...
1. Computers cannot really find exponentials and logarithms, because the underlying circuitry can only add, subtract, multiply and divide. So every button on the calculator has to be expressed in terms of those four basic operations. The most boneheaded way to accomplish that (other than lookup tables) is to use Taylor series expansions of the functions, so instead of computing e^x you compute 1 + x + \frac{x^2}{2} + \cdots. Since you cannot sum an infinite number of terms, you must truncate the series at some order. The problem is round-off error: each term introduces round-off error, so the trick is to get an accurate representation at low order. This is where economization (and other tricks) come into play. (There is a little code sketch of this right after the list.)
2. One of the most common ways to solve differential equations is by a series solution (for example, perturbation theory in quantum mechanics). But again you must truncate the solution at some finite order, and, as in (1), summing the truncated series at a given value of the independent variable introduces round-off error.
3. Converting series that diverge into convergent series.
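To make point (1) concrete, here is a minimal Python sketch of the truncated-Taylor-series idea. (The function name exp_taylor and the truncation orders are my own choices for illustration; real math libraries use far cleverer schemes.)

```python
import math

# Truncated Taylor series for e^x, built from nothing but the
# four basic operations (each term comes from the previous one).
def exp_taylor(x, n_terms):
    term, total = 1.0, 1.0        # the k = 0 term
    for k in range(1, n_terms):
        term *= x / k             # turns x^(k-1)/(k-1)! into x^k/k!
        total += term
    return total

for n in (4, 8, 16):
    approx = exp_taylor(1.0, n)
    print(f"{n:2d} terms: {approx:.12f}   error: {abs(approx - math.e):.1e}")
```

Notice how many terms it takes to reach full double precision; economization is about getting comparable accuracy with fewer terms, which also means less accumulated round-off.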
Let me give you an example: If you place hydrogen in a magnetic field, you can write out a perturbation (series) expansion of the energy as a function of the magnetic field strength. So you get something like (making up the numbers)
E = 0.2 B + 0.004 B^2 + 0.00043 B^3 + ...
If you tell me the field strength B, I will tell you the energy of the atom. But for sufficiently large values of B, this series diverges. (Adding more terms just makes the computed energy larger and larger.)
But a clever researcher would realize that you can re-cast the series into a new series that CONVERGES for large field strengths. One such method is Padé summation, which replaces the series expansion polynomial with a RATIO of polynomials. (This introduces a denominator that can mimic the singularity structure of the physical system.) If that doesn't work, you can economize the Padé approximant, producing economized rational approximants. There are hundreds of techniques, and most physicists are not even aware of them. (Oodles of good research gets tossed every year because physicists do not realize that they can re-sum divergent series into convergent series.)
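Since economization keeps coming up, here is what it looks like in code. This is only a sketch under simplifying assumptions: I am economizing a plain Taylor polynomial rather than a Padé approximant (to keep it short), on the interval [-1, 1], using numpy's Chebyshev helpers. The idea is to re-expand the polynomial in Chebyshev polynomials, drop the highest-order terms (whose coefficients are tiny), and convert back.

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C, polynomial as P

# Degree-6 Taylor coefficients of e^x, lowest order first.
taylor6 = [1.0 / math.factorial(k) for k in range(7)]

# Economize on [-1, 1]: power basis -> Chebyshev basis,
# drop the degree-5 and degree-6 terms, convert back.
econ4 = C.cheb2poly(C.poly2cheb(taylor6)[:5])

xs = np.linspace(-1.0, 1.0, 1001)
err_taylor4 = np.max(np.abs(P.polyval(xs, taylor6[:5]) - np.exp(xs)))
err_econ4 = np.max(np.abs(P.polyval(xs, econ4) - np.exp(xs)))
print(f"plain degree-4 Taylor:  max error {err_taylor4:.1e}")
print(f"economized degree-4:    max error {err_econ4:.1e}")
```

Both polynomials have degree 4, but the economized one spreads its error across the interval instead of piling it up at the endpoints, and its worst-case error comes out roughly an order of magnitude smaller.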
Let me give you an example of the re-summing idea.
Suppose your quantum system produced a series representation of the energy that looked like this:
E = 1 + B + B^2 + B^3 + ...
Now, it is obvious that such a solution blows up for any value of B equal to or greater than 1. Wanna' know why? Factor a B out of everything after the leading 1 on the RHS:
E = 1 + B(1 + B + B^2 + ...),
which means E = 1 + BE. Solve for E and you get E = \frac{1}{1 - B}.
Oh ho! There is a singularity at B = 1, so the series will diverge for any B equal to or greater than 1.
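Here is a quick numerical sanity check (a throwaway Python sketch, nothing more):

```python
# Partial sums of 1 + B + B^2 + ... versus the closed form 1/(1 - B).
def partial_sum(B, n_terms):
    return sum(B**k for k in range(n_terms))

for B in (0.5, 0.9, 1.1):
    print(f"B = {B}: 10 terms -> {partial_sum(B, 10):10.3f}, "
          f"50 terms -> {partial_sum(B, 50):12.4g}, "
          f"1/(1-B) -> {1.0 / (1.0 - B):8.3f}")
```

For B = 0.5 and B = 0.9 the partial sums settle down to \frac{1}{1-B}; for B = 1.1 they run away, even though \frac{1}{1-B} itself is a perfectly finite -10 there. That finite value is exactly the kind of answer a re-summation can recover.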
So E = 1 + B + B^2 + B^3 + ... is bad. What can we do? Well, our series sucks because it does not contain a denominator, which is needed to simulate the pole at B = 1. So let's instead create a new series representation that DOES have a denominator. The most general form would be something like
E = \frac{a + bB + cB^2 + ...}{1 + dB + eB^2 + ...}.
To find the coefficients, we simply match this ratio with the original series at B = 0. And then we match the derivative of this ratio with the derivative of the original series at B = 0. And then we match the second derivative... and so on. If you do that, you will find that a = 1, d = -1, and the rest are 0. In other words, we get
E = \frac{1}{1-B}
This Padé approximant is actually equal to the original function, because our original function happened to be a ratio of polynomials. But you can apply the method to any function, such as e^x.
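If you want to play with the coefficient matching yourself, here is a minimal Python sketch. (The function pade and its argument names are my own; it leans on numpy for the small linear solve. Matching derivatives at B = 0 is the same thing as matching Taylor coefficients, which is why the code only ever touches the coefficient list.)

```python
import numpy as np

# [L/M] Pade approximant: find p(x)/q(x) with deg p = L, deg q = M,
# q(0) = 1, whose expansion matches c[0] + c[1] x + ... through order L+M.
def pade(c, L, M):
    c = np.asarray(c, dtype=float)
    # Matching orders L+1 .. L+M gives M linear equations for q[1..M]:
    #   sum_{j=1}^{M} q[j] * c[k-j] = -c[k],  k = L+1, ..., L+M
    A = np.array([[c[L + 1 + i - j] if L + 1 + i - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for i in range(M)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Matching orders 0 .. L then fixes the numerator directly.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q  # coefficients, lowest order first

# The toy series 1 + B + B^2 + ...: recovers 1/(1 - B) exactly.
print(pade([1.0, 1.0, 1.0], L=1, M=1))   # p = [1, 0], q = [1, -1]

# e^x: the [1/1] approximant is the classic (1 + x/2)/(1 - x/2).
print(pade([1.0, 1.0, 0.5], L=1, M=1))   # p = [1, 0.5], q = [1, -0.5]
```

Run it at higher L and M (with more series coefficients) and you have the basic tool for re-summing a divergent perturbation series like the one above.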
So if your physics professor tells you that you can only apply perturbation theory for small values of the magnetic field, then he is wrong. (Yes, at some point the field gets so strong that any series representation will diverge, but you can push the envelope much further than textbooks allow.)
By the way, I'm writing a book on this topic. Interesting?