
Radius of convergence without complex numbers

  1. Jul 28, 2012 #1
    Pretend that you are explaining the following to someone who knows nothing about complex numbers, within a universe where complex numbers have not been invented.

    In examining the function
    [tex]f(x) = \frac{1}{1 + x^2}[/tex]

    we can derive the series expansion
    [tex]\sum_{n=0}^\infty (-1)^n x^{2n}[/tex]

    We note that the ratio test (which does not involve complex numbers) indicates that the series necessarily diverges if
    [tex]\left| -x^2 \right| > 1[/tex]

    or [itex]x > 1[/itex].
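    As a quick numerical illustration (my own sketch, not part of the original post), the partial sums settle down when |x| < 1 and blow up when |x| > 1:

    ```python
    # Partial sums of sum_{n=0}^{N-1} (-1)^n x^(2n), the expansion of 1/(1+x^2)
    def partial_sum(x, terms):
        return sum((-1) ** n * x ** (2 * n) for n in range(terms))

    inside = partial_sum(0.5, 50)    # |x| < 1: settles at 1/(1 + 0.25) = 0.8
    outside = partial_sum(1.5, 50)   # |x| > 1: the terms (2.25)^n explode
    print(inside)                # ~0.8
    print(abs(outside) > 1e6)    # True
    ```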

    However, returning to the function f(x), we see that the point x = 1 is not at all special. How is the specialness of x = 1 explained without the notion of a complex number?

    Perhaps we are misleading the reader by our choice of a series expansion about x = 0. Had we done a series expansion about x = 1, then we would have found that there is something special about the points [itex]x = 1 \pm \sqrt{2}[/itex]. But the question remains, how can this be anticipated a priori, without complex numbers?
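    In fact the [itex]\sqrt{2}[/itex] can be detected entirely within the reals: substituting x = 1 + t turns 1/(1 + x^2) into 1/(t^2 + 2t + 2), whose Taylor coefficients satisfy a purely real recurrence, and the root test then reads off the radius. A sketch in Python (my own illustration, not from the thread):

    ```python
    # Taylor coefficients a_n of 1/(1+x^2) about x = 1: substituting x = 1 + t
    # gives 1/(t^2 + 2t + 2), so (t^2 + 2t + 2) * sum(a_n t^n) = 1 and the
    # coefficients satisfy a purely real recurrence:
    #   a_0 = 1/2,  a_1 = -1/2,  a_n = -(2*a[n-1] + a[n-2]) / 2   for n >= 2
    N = 400
    a = [0.5, -0.5]
    for n in range(2, N):
        a.append(-(2 * a[-1] + a[-2]) / 2)

    # Root test: radius = 1 / limsup |a_n|^(1/n).  Approximate the limsup by
    # the largest |a_n|^(1/n) over a tail of the sequence.
    limsup = max(abs(a[n]) ** (1.0 / n) for n in range(300, N) if a[n] != 0)
    radius = 1.0 / limsup
    print(radius)   # ~1.414, i.e. sqrt(2)
    ```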

    * Note that in the language of complex numbers, this is explained simply by the fact that the distance from the origin to the nearest singularities, x = ±i, is 1. But this involves the notion of the Argand plane.
    Last edited: Jul 28, 2012
  3. Jul 28, 2012 #2

    Well, in fact the series diverges if [itex]x < -1\ \text{or}\ x > 1[/itex].

    I don't think you can explain this in a satisfactory way without complex numbers, and I can't understand why anyone without a knowledge of complex numbers would be interested in this stuff, unless it is a first- or second-year mathematics student who's taking some basic real analysis and hasn't yet studied complex analysis. If this is the case, one can simply say "wait until you study complex analysis to fully understand why the radius of convergence of this thing cannot be enlarged any more."

  4. Jul 28, 2012 #3
    Well, this is a mathematics forum, so presumably, we can pretend we're interested in answering the question for the question's sake... :smile:

    In any case, I think the question is fairly clear. The reason I ask is because I'm designing a lecture course and part of this requires me to explain the necessity of complex numbers. However, that being said, I'm not sure that complex numbers are even necessary. After all, complex methods, in many cases, are simply algebraic and geometric simplifications. For example, it's easy to represent a 90 degree rotation by multiplication by [itex]i[/itex] rather than through less elegant vector and matrix operations. However, everything in the physical world involves real numbers, so I'm expecting that for any question that requires a real answer, there exists a method (perhaps only approximate) that is restricted to real numbers.

    I don't like this answer because complex numbers are an imaginary construct. They're simply algebraic and geometric relations. So there must be a way to remain in real space and explain series convergence.

    Here is another question: in the below figure, you have two functions that are infinitely differentiable. The one in blue has infinite radius of convergence for a series expansion about x = 0, while the one in red has a radius of convergence of only 1. Is there an easy way to see this without complex numbers?

    I am sorry if I'm not explaining it well; I admit that it's still fuzzy in my head.

    Attached Files: (graph of the two functions)

    Last edited: Jul 28, 2012
  5. Jul 28, 2012 #4
    I honestly don't think so. This forum is, if I'm not wrong, designed to try to address questions about mathematics in a rather wide meaning of the word, but it has to be about mathematics.

    It's weird, and if I may add a little worrying, to read the above from someone who is about to design a lecture course. The importance of complex numbers cannot possibly be overstated, and it's a fact that more than half of physics would fall apart without them (yes: electricity, relativity, gases, mechanics, optics... all of these use pretty heavy complex machinery). To state that "everything in the physical world involves real numbers" is a huge oversimplification in the best of cases, and simply confusing, misleading, or even completely inaccurate in the worst of cases.
    As imaginary as the numbers [itex]-4, 1.3, 7, 32[/itex]. ALL numbers are a mental abstraction of something. In some cases we can easily grasp a rather "simple" abstraction, as in saying "5 is the number of fingers on my hand", and in others it may well be pretty hard, but they are all abstractions, and none, imo, is more imaginary or real than any other. This is one more example of a long-used poor name for something, and here we have the result: people get a false idea.
    Yes, there's a way to explain convergence of real series, but I can't see any way whatsoever to explain why in some cases the radius of convergence can be widened and in others not, nor what's "wrong" with the function [itex]\frac{1}{1+x^2}[/itex], while staying all the time within the realm of the real numbers.

    I can't see any functions in colors, but I'm afraid the answer will be closely related to the previous one.

    Ps. Ok, I already saw the little graph attached to your previous message. I can't say anything about that until I see an analytical expression for the functions, of course.
    Last edited: Jul 28, 2012
  6. Jul 28, 2012 #5
    Um. Thanks. I work with complex numbers every day (conformal mapping, contour integration, complex Fourier transforms, boundary integral methods, steepest descent approximations, etc.) ... so yes, I'm aware of their applicability, particularly as it relates to problems in mechanics.

    It doesn't mean that complex variables are necessary.

    Well of course it does. Any computation you use in designing a bridge, an airplane, in explaining the weather, will use real numbers. Now you may have to go through the complex numbers, but eventually the result is real.

    It's like the solution of a cubic equation (which resulted in the birth of complex numbers, to an extent). Any cubic polynomial has at least one root, but the algebraic representation of the cubic roots may require formally manipulating negative square roots...however, once all the algebra is done, the solution is real.

    Another example is Schrodinger's equation. Yes, it may be a complex differential equation, but eventually the only result we care about is real (namely, the amplitude and wavenumber of the wave function). You could have just as well decoupled the real and imaginary components of such equations. This is particularly evidenced in any numerical computation, where the computer treats imaginary numbers effectively as real vectors.
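    To make the "decouple the real and imaginary components" point concrete, here is a toy sketch of my own (not from the post): the complex ODE z' = iz, whose solution traces cos t + i sin t, is literally the same computation as the real system x' = -y, y' = x.

    ```python
    # The complex ODE z'(t) = i z(t), z(0) = 1 (solution e^{it}) decoupled
    # into the real system x' = -y, y' = x.  Forward-Euler both ways with
    # identical steps: the real pair tracks the complex state exactly.
    dt, steps = 0.001, 1000

    z = complex(1.0, 0.0)
    x, y = 1.0, 0.0
    for _ in range(steps):
        z = z + dt * (1j * z)
        x, y = x + dt * (-y), y + dt * x

    print(abs(z.real - x), abs(z.imag - y))   # both essentially zero
    print(x, y)   # near cos(1), sin(1), up to Euler error
    ```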

    (Of course, I should add, this is more of a "I can't think of any physical problem that involves something more than real numbers", but perhaps I'm shortsighted).

    Can you give me a concrete example of a physics problem that would fall apart if I removed the complex plane?
    Last edited: Jul 28, 2012
  7. Jul 28, 2012 #6
    So just because the result is real implies that complex numbers are unnecessary?? That's a weird form of reasoning.
    You could also say that all results are rational numbers. Indeed, every measurement we make will be of the form 0.1432, which is rational. We can even work with the approximation 3.1415 for pi; nobody is going to notice the difference.
    So, are real numbers unnecessary then?? Should we just do all math with rational numbers???
  8. Jul 28, 2012 #7
    Yes and no.

    Your example isn't valid, because we need our number system to be dense. It's important that pi is irrational, otherwise all our (real) results would be off. I'm not talking about numerical precision. If you could theoretically compute to arbitrary precision, then it's relevant that the area of a unit circle is pi and not 3.14. So your sensationalist example is a bit of a strawman.

    My statement (a view you'll find in books on the development and philosophy of mathematics) is simply that, because complex numbers are effectively a notational convenience applied to 2d geometric transformations, they're unnecessary. Basically, my understanding was that there is a real analogue for every complex result, in the same way that a complex number is nothing more than a vector in R2 with special geometric properties.

    Some years back, I remember a chapter of such a book devoted to this issue, but unfortunately, the title eludes me.

    Perhaps we should start from a concrete example. Can someone answer this question? If "half of physics falls apart" then surely one can easily come up with an example in which it's indisputable that complex numbers are necessary.
  9. Jul 28, 2012 #8
    Sure, you can do all that. But complex numbers are isomorphic to the rotation matrices. So you just used the complex numbers anyway, you just didn't call them that and you made it annoying for yourself.

    Again, this is isomorphic to the complex number system.
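    That isomorphism is easy to exhibit directly; a small sketch (my addition, not from the post): a + bi maps to the scaled rotation matrix [[a, -b], [b, a]], and complex multiplication becomes 2x2 matrix multiplication.

    ```python
    # a + bi  <->  [[a, -b], [b, a]]: under this map, complex multiplication
    # is exactly 2x2 matrix multiplication (a scaled rotation matrix).
    def to_matrix(z):
        return [[z.real, -z.imag], [z.imag, z.real]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    z, w = complex(1, 2), complex(3, -1)
    lhs = to_matrix(z * w)                    # multiply as complex, then map
    rhs = matmul(to_matrix(z), to_matrix(w))  # map first, then multiply
    print(lhs == rhs)   # True
    ```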
  10. Jul 28, 2012 #9
    Perfect. We're in agreement.

    So how do you explain the notion of radius of convergence as the distance from the point of expansion to the nearest 'singularity' without the notion of complex functions and the complex plane (for example, in terms of geometry in R^2)?

    For example, the expansion of 1/(1+x^2) about x = 1 should have radius of convergence |x - 1| < sqrt(2), right? How can this be seen entirely in terms of real numbers?
  11. Jul 28, 2012 #10
    The rationals are dense.

    No, it's not a strawman. You said something like "the complex numbers are unnecessary because all results are real". Well, I say: "the real numbers are not needed because all results are rational".
    The things you describe are tools to get to the eventual result.

    Second, I can describe all results that use real numbers with rational numbers. And I really mean, every single result. It will be long and tedious though.

    In the same way, there is a rational analogue for every real result.

    The reals are just another human construct. It's not that the universe actually prefers real numbers. It's just something that humans invented. The complex numbers are exactly the same thing.
  12. Jul 28, 2012 #11
    Oops. I think I understand. So for something like the circumference of a circle, pi would be expressed as the limit of some rational sequence, correct? I'm happy with that. You're right and it was a good point.

    Okay, so I'd like to understand the notion of radius of convergence sticking with the real numbers. Nothing wrong with that question, right?
  13. Jul 28, 2012 #12



    For rsq_a:

    Why don't you just look at convergence in terms of the norm of R^2? This is typically how you deal with convergence criteria in any multi-dimensional space, whether it's 2-, 3-, 4- or even infinite-dimensional spaces of orthogonal components (i.e. Cartesian-like geometry).

    Once you get a constraint on the norm with regards to convergence, then you can look at the geometric interpretation of said norm.
  14. Jul 28, 2012 #13
    Yes, thank you for helping to get it back on topic.

    I did think of it this way: consider the series expansions around x = 0 of
    [tex]f(x) = \frac{1}{1+x^2} \quad \text{and} \quad g(x) = \frac{1}{1-x^2}[/tex]

    Although these two functions have differing values, their rate of convergence is identical (via the ratio test). It seems that what we need is to seek a 2d analogue of this, i.e. we need a function in [itex]R^2[/itex] that when expanded about (x,y) = (0,0) has a similar rate of convergence.

    Maybe at this point, it requires the 'leap' that I can use
    [tex]h(x,y) = \frac{1}{1 - (x,y)^2}[/tex]

    But uh-oh, what does it mean to square a vector (x,y)? We define this analogously to complex arithmetic (I guess this requires a leap). Once I've done this, then everything follows from the usual series expansions.

    I think this would work, and as you said, the key is to impose a restriction of an identical rate of convergence (but different series values). This allows us to develop the geometry which ultimately connects (1,0) and (-1,0) with the unit circle.
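    For what it's worth, here is that "leap" made explicit (my own sketch, not from the post): define the product on R² directly and sum the geometric series for h term by term; it converges exactly when the Euclidean norm of (x, y) is less than 1.

    ```python
    # The "leap" spelled out: an R^2 product defined with no reference to i,
    #   (a, b) * (c, d) = (a*c - b*d, a*d + b*c),
    # and the geometric series (1,0) + v + v^2 + ... for h = 1/((1,0) - v).
    def mul(u, v):
        return (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])

    def geometric_sum(v, terms):
        total, power = (0.0, 0.0), (1.0, 0.0)   # power starts at v^0 = (1, 0)
        for _ in range(terms):
            total = (total[0] + power[0], total[1] + power[1])
            power = mul(power, v)
        return total

    v = (0.3, 0.4)              # Euclidean norm 0.5 < 1: the series converges
    s = geometric_sum(v, 100)
    # Sanity check: s * ((1,0) - v) should recover (1, 0)
    print(mul(s, (1.0 - v[0], -v[1])))   # ~(1.0, 0.0)
    ```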
    Last edited: Jul 28, 2012
  15. Jul 28, 2012 #14



    Well, the unit circle is just the set of vectors in R^2 with length 1.

    Personally I think the best way to do this is to get a series expansion for the function in question and then just use the convergence results for when the addition of scalars converges (the scalars will relate to the norms).
  16. Jul 29, 2012 #15
    What does it mean to square a vector? It means for a vector [itex]s[/itex], the square is [itex]s^2 = s \cdot s[/itex]. Easy enough.

    To be honest, while complex analysis is a rich and storied subject, I think from a modern perspective it's a mistake to emphasize it as a separate discipline from vector analysis of the 2d Euclidean plane. All the usual results of complex analysis can be understood in terms of vector space theorems--Cauchy-Riemann condition, Cauchy integral theorem, all of it.

    Case in point: if you have a vector field [itex]F(r)[/itex], then the Cauchy-Riemann condition is equivalent to saying [itex]\nabla \cdot F = \nabla \times F = 0[/itex]. This also makes the generalization of analytic functions to higher dimensional spaces obvious.
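    A numerical check of that reading (my own sketch; note the caveat that it is the conjugate, or "Pólya", field of an analytic function whose divergence and curl both vanish):

    ```python
    # Numerical check: for f(z) = z^2 we have u = x^2 - y^2, v = 2xy, and it
    # is the conjugate ("Polya") field F = (u, -v) whose divergence and curl
    # both vanish -- the vector-field form of the Cauchy-Riemann equations.
    def F(x, y):
        return (x * x - y * y, -2.0 * x * y)

    def div_and_curl(x, y, h=1e-5):
        dFx_dx = (F(x + h, y)[0] - F(x - h, y)[0]) / (2 * h)
        dFx_dy = (F(x, y + h)[0] - F(x, y - h)[0]) / (2 * h)
        dFy_dx = (F(x + h, y)[1] - F(x - h, y)[1]) / (2 * h)
        dFy_dy = (F(x, y + h)[1] - F(x, y - h)[1]) / (2 * h)
        return dFx_dx + dFy_dy, dFy_dx - dFx_dy   # divergence, scalar curl

    print(div_and_curl(0.7, -1.3))   # both ~0
    ```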
  17. Jul 29, 2012 #16



    Huh? You have a cross product. How is it obvious in higher dimensions?
  18. Jul 29, 2012 #17
  19. Jul 29, 2012 #18
    You use a wedge product instead. Heh, my effort not to introduce that ended up making my statement slightly wrong.
  20. Jul 29, 2012 #19



    I was hoping you were going to bring up this guy :biggrin:
  21. Nov 23, 2012 #20
    I'm sorry this is a few-month old thread, but I just came back today.

    You keep on saying "many calculations done in designing stem from complex numbers", but that's beside the point. Just because you went from X to Y through the complex plane does not mean that it's the only way.

    A very simple example is the roots of the cubic formula. It's well known that this is where the algebraic necessity of complex numbers first arose (cf. Cardano). The solution of the depressed cubic [itex]x^3 = 3px + 2q[/itex] has the root
    [tex]x = \left( q + \sqrt{q^2 - p^3}\right)^{1/3} + \left(q - \sqrt{q^2 - p^3}\right)^{1/3}[/tex]

    which was shown around the 1500s. As we all know, a cubic must have at least one real root. When [itex]q^2 - p^3 < 0[/itex], however, the formula requires you to work with quantities such as [itex]\sqrt{-1}[/itex], even though the final answer (the sum of the two terms on the RHS) must eventually be real. So the introduction of the complex number (at the time) was nothing more than a convenient tool used to recover a real root.

    Later (still in the 1500s), François Viète showed that one could write a real root as
    [tex]x = 2\sqrt{p} \cos \left[ \frac{1}{3} \cos^{-1} \left( \frac{q}{p\sqrt{p}}\right)\right],[/tex]

    through an entirely real process (that is, one in which [itex]\sqrt{-1}[/itex] is never encountered). This is an important point: had we refused to introduce [itex]\sqrt{-1}[/itex], we would still have managed to recover the correct answer another way (an entirely real way).
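    To make the Viète route concrete, here is a small sketch of my own (keeping the convention [itex]x^3 = 3px + 2q[/itex] and using the real trigonometric form [itex]x = 2\sqrt{p}\cos\left[\tfrac{1}{3}\cos^{-1}(q/p\sqrt{p})\right][/itex], which applies in the irreducible case [itex]q^2 < p^3[/itex]):

    ```python
    import math

    # Viete's trigonometric root for x^3 = 3p x + 2q when q^2 < p^3
    # (the "casus irreducibilis", where Cardano's radicals force sqrt(-1)).
    # Everything here stays strictly within the real numbers.
    def viete_root(p, q):
        assert q * q < p ** 3, "need q^2 < p^3 for the irreducible case"
        return 2.0 * math.sqrt(p) * math.cos(math.acos(q / p ** 1.5) / 3.0)

    p, q = 1.0, 0.5          # q^2 = 0.25 < 1 = p^3
    x = viete_root(p, q)     # x = 2 cos(pi/9) ~ 1.8794
    print(x ** 3 - 3 * p * x - 2 * q)   # residual ~ 0
    ```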

    I would be happy to see an example for which you can prove, or for which it's universally acknowledged, that the complex number is an essential (and irreplaceable) part.

    Okay, let me give you an example. You want to explain the roots of [itex]x^2 + c[/itex] to a 5-year-old child. So you draw a graph of [itex]f(x) = x^2 + c[/itex] and show that when c is less than zero, there are two intersections with the x-axis, and when c is greater than zero, there are none. "Oh, wait a minute," you say, "that's not actually right. When c is greater than zero, there are actually two intersections, but on an imaginary line."

    What do you think the child would say?

    Now of course, the Fundamental Theorem of Algebra is great. But that's a different issue. If you like, you can repose [itex]z^2 + 1 = 0[/itex] as the search for points in [itex]R^2[/itex] that, upon two rotations from the positive real axis, land on the negative real axis. However, this is a geometrical interpretation. In other words, it is because it is so delightfully useful to use the shorthand of [itex]iz[/itex] for a rotation by 90 degrees that complex variables are such a useful subject.

    Similarly, the Fundamental Theorem of Algebra, which allows you to factor a polynomial of degree n into n factors, is just another nice trick to preserve a certain structure.

    You also bring up Schrodinger's equation. I haven't gone through the derivation of the equation, but presumably, the complex number that appears there is somehow related to the representation of cosines and sines using the complex exponential? In that case, it's like integrating
    [tex]\int \sin x \, dx = \Im \int e^{ix} \, dx = \Im\, \frac{1}{i} e^{ix} = -\cos x.[/tex]

    Just because you used a convenient short-hand for the representation of real quantities and went through the complex plane to get there doesn't mean you couldn't have gotten the answer another way. (I have to admit though that I only looked at the Schrodinger equation very briefly as a high school student, so I remember the complex waves representation as being important)

    Anyways, that being said, I feel like I'm being labeled some kind of crackpot. Plus...

    Yay. I love it when people patronize me. I don't understand because I don't use advanced maths or I only know a limited set of applications. I'm not really willing to make this into a penis waving contest about who knows more applications.

    I'll simply refer you to Tristan Needham's Visual Complex Analysis, which does talk about the algebraic and geometrical necessity of complex numbers. In particular, Needham's point is that while they aren't necessary, they are damn useful because of their geometrical significance. Effectively, the rule [itex](a, b) \cdot (c, d) = (ac - bd, bc + ad)[/itex], which characterizes complex numbers, is also the rule you would need if you wanted to describe two Euclidean shapes as being similar. Because Euclidean geometry is so important to us, this explains the ubiquity of complex numbers. However, that doesn't mean that they are essential.

    On a more personal note, this search on my part was inspired when I once gave a talk on the use of complex variable techniques to solve a certain problem in the rupturing of thin films (in particular, the technique involved conformal mapping, Fourier transforms, and WKB expansions). A professor in Engineering then said to me after the talk, "I really dislike complex numbers. Could you have derived the same result without going into the complex plane?"

    I didn't know the answer to that question. The analysis I presented made it seem like complex numbers were necessary, but at the core of it, we emerged with a real-valued answer. We began with a real PDE, worked through the complex plane, and ended up with a real result. In light of this, I would think that there is a way to derive the same answer while remaining entirely in the real plane. It would be sort of like duplicating François Viète's work on the cubic formula, which used cosines and sines. Somehow, whenever you feel compelled to use complex numbers, you should be able to work through a geometrical argument instead.

    If you, however, have a very clear problem where the complex variables techniques cannot be replaced by an entirely real approach, I'd be happy to see it.
    Last edited: Nov 23, 2012