
Series, what are they for?

  1. Apr 18, 2013 #1
    Hi guys,

    I am getting to the end of my calc 2 class and we're hitting infinite series. I was just curious: what are they used for? It may sound stupid, but my calc class focuses on calculating them, not on their use. So I was curious what their applications are (especially since I'm planning on taking calc 3, where they might come up? I have no clue :P)

    Thanks again!
  3. Apr 18, 2013 #2


    Science Advisor

    Hey MarcL.

    Series are just general kinds of functions that arise and finding ways to classify them and see when they make sense (i.e. when they converge) helps us know the behavior of general classes of series.

    It's like when you have an integral and you want to know when it exists. We have certain theorems about when you can calculate an integral given some f(x) if it obeys certain properties. Likewise, we have theorems about when we can find the derivative of a function and whether or not it exists.

    Series are the same: we have all these theorems that tell us if they converge and for what values and we can also have theorems on things like upper and lower limits, and these can be useful in calculations or estimating values.

    So in summary, try to think of them as a general class of objects that mathematicians study, categorize, and analyze so that applied mathematicians can use the results, usually for calculation and estimation work. It's just like how the whole theory of integration and differentiation is used by applied people to calculate things without having to derive everything from scratch, so they can focus on what they do best (applied mathematics).
  4. Apr 18, 2013 #3
    Ah, so it's just to make it easier to calculate real-life situations (well, wherever a sequence applies)?
  5. Apr 18, 2013 #4
    Series are really useful in real life. Here are two practical examples:

    In economics, let's say I lend $x to a bank and because of a reserve requirement, the bank must keep a fraction, r, of the money. This bank can then lend x(1-r) dollars. If it lends to another bank with the same reserve requirement, the second bank can lend out x(1-r)^2 dollars. Repeating this, the maximum amount of money that can ever be lent out is [itex] \sum_{n=0}^\infty x(1-r)^n [/itex]. The same logic applies to consumers who might decide to save some money (instead of being forced to by a reserve requirement), so one could conceivably predict how much economic activity would be caused by a stimulus, for example.

    You will learn about (or already have learned about?) Taylor series. These infinite series give approximations for functions that might otherwise be hard to deal with. Suppose you wanted to solve the motion of a pendulum. By finding the net torque on it, you get the equation [itex] \frac{d^2\theta}{dt^2}+\frac{g}{l}\sin\theta=0 [/itex] where θ is the angle the pendulum makes with the vertical. This is a complicated differential equation, but the Taylor series for sine tells us that [itex] \sin\theta=\theta+\mathcal{O}(\theta^3) [/itex] so the approximation [itex] \sin\theta \approx \theta [/itex] is valid for small θ. This approximation makes the differential equation (relatively) easy to solve.
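    A quick numerical check (a Python illustration, not part of the physics) shows just how good the small-angle approximation is:

```python
import math

# The error of sin(theta) ~ theta shrinks like theta^3 / 6,
# so the approximation is excellent for small angles.
for theta in (0.5, 0.1, 0.05):
    err = abs(math.sin(theta) - theta)
    print(f"theta={theta}: error={err:.2e}")
```

    At θ = 0.05 rad (about 3 degrees) the error is already around 2e-5, which is why the linearized pendulum equation works so well for small swings.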
    Last edited: Apr 18, 2013
  6. Apr 18, 2013 #5


    Science Advisor

    What usually happens in mathematics is that the pure mathematicians look at a class of structures and study them (usually by creating more abstraction bit by bit) and then over time the understanding and classification of said structures matures.

    Then some time in the future, the applied people run into a situation where they end up with that same structure and need to analyze it, so they look at the stuff that has been done and find a pure mathematician that has worked out a bunch of results and they use them.

    It doesn't always happen this way (for example, an applied mathematician may have to develop some of the theory themselves if it doesn't exist), and the gap nowadays is becoming shorter and shorter (applied people are using results created less than ten years ago, whereas it used to be that applied people used math created, say, 50 years ago).

    As the math becomes more abstract it usually gets harder and harder, but as the problems on the applied scene become more demanding, you get a situation where the line between the pure and applied sides is becoming very blurred.
  7. Apr 18, 2013 #6


    Science Advisor

    The single most important use of series is...computations. Textbook and exam questions have a tendency to be "rigged" so they have "nice" answers. Real-world problems usually don't. But you still want to study them, so you need to be able to compute them. Series have a number of benefits over iterative methods, in that we can (most of the time) manipulate them to obtain faster convergence. For example, the Taylor series of the logarithm (which converges slowly)
    ##\log(x) = \sum_{k=1}^\infty \frac{(-1)^{k+1}(x-1)^k}{k}##
    can be manipulated to obtain
    ##\log(x) = 2\sum_{k=0}^\infty \frac{1}{2k+1} \left(\frac{x-1}{x+1}\right)^{2k+1}##
    which converges significantly faster, and for every x > 0 rather than just 0 < x ≤ 2.
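    To see the difference numerically, here's a small Python sketch (my own illustration, not from the thread) comparing partial sums of the two series against math.log:

```python
import math

def log_taylor(x, terms):
    # Taylor series: log(x) = sum_{k>=1} (-1)^(k+1) (x-1)^k / k, for 0 < x <= 2
    return sum((-1) ** (k + 1) * (x - 1) ** k / k for k in range(1, terms + 1))

def log_atanh(x, terms):
    # Rearranged series: log(x) = 2 * sum_{k>=0} y^(2k+1) / (2k+1), y = (x-1)/(x+1)
    y = (x - 1) / (x + 1)
    return 2 * sum(y ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

x = 1.5
err_slow = abs(log_taylor(x, 10) - math.log(x))
err_fast = abs(log_atanh(x, 10) - math.log(x))
print(err_slow)  # around 1e-5 after 10 terms
print(err_fast)  # near machine precision after 10 terms
```

    With the same 10 terms, the second series is already accurate to roughly machine precision while the first is only good to a few decimal places.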

    Also, a number of special functions are defined as "the solution of such-and-such differential equation". In order to calculate them (for example, when graphing), we normally assume the function is expressible as a Taylor series or a Fourier series. Finding the coefficients then allows us to study the function by computing.
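    As a toy version of that idea (my own Python illustration, with y'' + y = 0 chosen as the example equation), plugging a power series into the ODE forces a recurrence on the coefficients, and summing the resulting series reproduces the solution:

```python
import math

# Power-series solution of y'' + y = 0 with y(0)=0, y'(0)=1 (which is sin x).
# Substituting y = sum a_n x^n gives the recurrence a_{n+2} = -a_n / ((n+1)(n+2)).
a = [0.0, 1.0]                      # a_0 = y(0), a_1 = y'(0)
for n in range(30):
    a.append(-a[n] / ((n + 1) * (n + 2)))

x = 1.0
y = sum(c * x ** k for k, c in enumerate(a))
print(y, math.sin(1.0))  # the two values agree
```

    The ODE never gets "solved" symbolically here; the series coefficients alone are enough to compute the function to high accuracy.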
  8. Apr 18, 2013 #7
    Alright, thanks for all the answers, it actually makes a lot of sense. It is hard to visualize though for somebody who has just been introduced to the idea of series (which is already hard to grasp in itself ^^).

    Thanks a lot !! :)
  9. Apr 19, 2013 #8


    Staff Emeritus
    Science Advisor
    Homework Helper

    Why is the concept of a series hard to grasp? A series is just a sum.
  10. Apr 19, 2013 #9
    I don't know, it seems very broad to me. It's not the concept of a sum that bothers me; it's the definition I was given, that we use partial sums to check whether it goes to infinity. And I'm not even sure that's what my definition means. Fact is (and I think it's like that for almost everybody), I hate applying methods/formulas to concepts I don't understand. I can solve the textbook problems, but the concept of it is just... "fuzzy".
  11. Apr 19, 2013 #10



    Consider a calculator or a computer that knows how to add, subtract, multiply, and maybe divide numbers. How does such a calculator compute the sine or cosine or tangent or arctangent or exponential or logarithm of a number, if all it basically knows is how to add, subtract, multiply, and divide (and store and retrieve numbers in memory)?
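    One answer is a truncated Taylor series. Here's a Python sketch (my own illustration) computing sine with nothing but those four arithmetic operations:

```python
import math

def my_sin(x, terms=12):
    # sin(x) = x - x^3/3! + x^5/5! - ...  built with only +, -, *, /
    term = x        # current term, starting with x^1 / 1!
    total = 0.0
    for k in range(terms):
        total += term
        # next term = previous term * (-x^2) / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(my_sin(1.0), math.sin(1.0))  # the two values agree
```

    Real calculators and math libraries use more refined schemes, but the basic principle is the same: reduce a transcendental function to a finite amount of arithmetic.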
  12. Apr 19, 2013 #11
    It's *not* just a sum. It's conceptually very sum-like, though.

    As you have probably noticed, you cannot sum infinitely many things one at a time. It takes too long (because it takes literally forever).

    The best you can do is approximate.

    You will notice that for *certain* series, they 'tend towards' some ultimate value. What you seem to be interested in is how to formalize the notion of 'tend towards'.

    This is the concept of a limit, and it's the central notion in calculus. There are many different kinds of limits, but they all follow the same informal pattern:

    "My boss asked me to approximate this with a certain degree of accuracy. How can I make sure I achieve that accuracy?"

    In slightly less formal terms:

    "If ε is the amount of error I'm allowed in my calculation, how can I guarantee that I always stay within that tolerance?"

    In the case of a series, "how" you guarantee this is a question of how many terms of the series you have to add together. That "how many" is simply an integer (often called N).

    To be slightly more precise: once we have added *at least* N terms, our error will be less than ε (regardless of how many terms total we add; the important thing is that we need it to work for at LEAST N terms).

    So our final definition might look like this:

    Let S be a series and let S_n be the partial sum of the first n terms of S. Then we call a number L the limit of S iff for every positive real number ε, there exists an integer N such that for all integers n > N, we have |S_n - L| < ε.

    For instance, suppose S is the series 1/2 + 1/4 + 1/8 + ..., so that S_n = 1/2 + 1/4 + 1/8 + ... + 1/(2^n).

    We might play around with our calculator and guess the limit is 1. We then have to *prove it* (because conjectures need proofs!)

    Proof that lim S_n = 1.

    Let ε be an arbitrary positive real. We need to find an integer N such that, for all integers n greater than N, 1 - S_n < ε.

    With some clever arithmetic (or maybe simple induction), you can show that 1 - S_n = 1/(2^n). So our problem changes to finding that N such that for all n > N, 1/(2^n) < ε. Try rearranging that formula to 1 < ε * 2^n.

    It should be clear that for a fixed ε, we can always choose an N big enough to make this statement hold, and if we make n even bigger it will continue to hold (meaning, it holds for all n > N). Thus, we have found our N.

    Finding a satisfactory N was our goal, and so our proof is over. We proudly claim lim (1/2 + 1/4 + 1/8 + 1/16 + ...) = 1.
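    Here's that ε-N game played out numerically in Python (my own sketch of the argument above, using the closed form 1 - S_n = 1/2^n):

```python
def partial_sum(n):
    # S_n = 1/2 + 1/4 + ... + 1/2^n
    return sum(1 / 2 ** k for k in range(1, n + 1))

def find_N(eps):
    # Smallest N such that 1 - S_n < eps for all n > N,
    # using the identity 1 - S_n = 1/2^n.
    N = 0
    while 1 / 2 ** (N + 1) >= eps:
        N += 1
    return N

eps = 0.001
N = find_N(eps)
print(N, 1 - partial_sum(N + 1))  # the error at n = N+1 is below eps
```

    For ε = 0.001 this gives N = 9: after ten or more terms the partial sum is within 0.001 of the limit 1, and it stays there forever after.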

    Some good facts about limits:

    Limits do not always exist. The series 1 -1 + 1 - 1 + 1 - 1 + ... is an example of a series with no limit. We call these series "divergent".

    If a series has a limit, then that limit is unique. (A big theme in math is existence and uniqueness: not only is there AT LEAST one solution, there is AT MOST one as well.)

    Doing finite sums is an algorithmic process -- anyone who knows how to add can get the right answer. However, deciding whether an infinite sum converges is, in general, an undecidable problem. You can't write a computer program (or hire a mathematician) that can always figure out correctly whether or not a series has a limit. In many simple cases the problem is decidable (which is why Wolfram Alpha is so useful), but in general, if you think you have a process for deciding convergence, I could come up with a series for which your program would not give the correct answer (it would get stuck in an infinite loop or give the wrong answer).

  13. Apr 20, 2013 #12


    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    It's not clear to me what definition you're talking about.

    If you have the infinite series ##\displaystyle \sum_{n=1}^\infty a_n##, we say it converges if the sequence of partial sums
    $$\begin{align*}
    s_1 &= a_1 \\
    s_2 &= a_1 + a_2 \\
    s_3 &= a_1 + a_2 + a_3 \\
    &\;\;\vdots
    \end{align*}$$
    converges. There's no ambiguity in forming the partial sums: you know how to add a finite number of terms. And once you have that sequence, you know (in principle) when it converges or doesn't. So now you have an unambiguous way to interpret exactly what ##\displaystyle \sum_{n=1}^\infty a_n## means.

    So why do you have to be so careful? Why can't you just say you're adding a bunch of stuff up? It's because, as usual, when infinity is involved, your intuition will fail you. Your knowledge of how addition works, based on adding up a finite number of terms, doesn't always carry over. For example, you know that it doesn't matter in what order you add a finite collection of numbers: 2+3+4 is the same as 4+3+2 and 3+2+4, etc. With infinite series, however, that's not the case. For example, take ##\displaystyle \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}##. Using the definition of convergence above, this series converges to ln 2, but if you rearrange the order of the terms, you can get a different result, including a series which doesn't converge. The page http://en.wikipedia.org/wiki/Riemann_series_theorem#Examples has some examples of this.
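    Here's a Python sketch (my own illustration of this rearrangement phenomenon) showing the standard order approaching ln 2 while a rearranged order approaches a different value, (3/2) ln 2:

```python
import math

# Standard order: 1 - 1/2 + 1/3 - 1/4 + ...  -> ln 2
standard = sum((-1) ** (n + 1) / n for n in range(1, 100001))

# Rearranged: two positive terms, then one negative, repeatedly:
# 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...  -> (3/2) ln 2  (a classic example)
rearranged = 0.0
pos, neg = 1, 2   # next odd denominator, next even denominator
for _ in range(100000):
    rearranged += 1 / pos + 1 / (pos + 2) - 1 / neg
    pos += 4
    neg += 2

print(standard)    # close to ln 2 ~ 0.6931
print(rearranged)  # close to 1.5 * ln 2 ~ 1.0397
```

    Exactly the same terms appear in both sums; only the order differs, yet the limits differ.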
  14. Apr 21, 2013 #13
    I think defining the sum of an infinite series using the associated sequence of partial sums is pretty ingenious. The limit of a sequence is already defined (as the limit of the sequence function), which I think makes everything pretty neat.

    You can think of the sequence of partial sums as a sequence representation of the series itself, whose elements are the step-by-step additions of the terms of the series. In calculus, a sequence has infinitely many elements, and certain sequences converge to a finite number as the argument of the sequence function (say n, a positive integer) approaches infinity. If that sequence is a sequence of partial sums, then taking the limit as n approaches infinity really is like adding infinitely many terms, since the nth element of the sequence of partial sums is the sum of the first n terms of the series.

    Not every infinite series is convergent, though.