Alternative Bound on a Double Geometric Series

bolbteppa
If ##|a_{mn} x_0^m y_0^n| \leq M## for all ##m, n##, then a double power series ##f(x,y) = \sum a_{mn} x^m y^n## can be 'bounded' by a dominant function of the form ##\phi(x,y) = \tfrac{M}{(1-\tfrac{x}{x_0})(1-\tfrac{y}{y_0})}##, derived from a geometric series argument: the coefficient of ##x^m y^n## in ##\phi## is ##\tfrac{M}{x_0^m y_0^n} \geq |a_{mn}|##. This is useful when proving that analytic solutions exist to ##y' = f(x,y)## in the case that ##f## is analytic.

A more useful dominant function when proving existence for an integrable Pfaffian of the form ##dz = f_1\,dx_1 + f_2\,dx_2## is ##\psi(x,y) = \tfrac{M}{1-(\tfrac{x}{x_0}+\tfrac{y}{y_0})}##. The coefficient of ##x^m y^n## in the Taylor expansion of ##\psi## equals the coefficient of ##x^m y^n## in ##M(\tfrac{x}{x_0}+\tfrac{y}{y_0})^{m+n}##, namely ##\binom{m+n}{m}\tfrac{M}{x_0^m y_0^n}##, and since ##\binom{m+n}{m} \geq 1## it is at least the corresponding coefficient of ##\phi##.
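Both coefficient claims are easy to verify symbolically. Here is a minimal sympy sketch; the sample values for ##M, x_0, y_0## and the `taylor_coeff` helper are my own, not from the book:

```python
import sympy as sp
from sympy import Rational, binomial, factorial

x, y = sp.symbols('x y')

def taylor_coeff(expr, m, n):
    """Coefficient of x^m y^n in the Taylor expansion of expr about (0, 0)."""
    return sp.diff(expr, x, m, y, n).subs({x: 0, y: 0}) / (factorial(m) * factorial(n))

# Hypothetical sample values; any positive M, x0, y0 behave the same way.
M, x0, y0 = 7, 2, 3
phi = M / ((1 - x/x0) * (1 - y/y0))
psi = M / (1 - (x/x0 + y/y0))

for m in range(4):
    for n in range(4):
        c_phi = taylor_coeff(phi, m, n)
        c_psi = taylor_coeff(psi, m, n)
        # phi's coefficient is exactly M / (x0^m y0^n) ...
        assert c_phi == Rational(M, x0**m * y0**n)
        # ... while psi's carries an extra binomial factor C(m+n, m) >= 1,
        # so psi dominates phi term by term.
        assert c_psi == binomial(m + n, m) * c_phi
        assert c_psi >= c_phi
```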

My question is: how in the world does one gain any intuition for all of this? I can understand, derive and use ##\phi(x,y) = \tfrac{M}{(1-\tfrac{x}{x_0})(1-\tfrac{y}{y_0})}## perfectly well; however, motivating, deriving and using the alternative function ##\psi(x,y) = \tfrac{M}{1-(\tfrac{x}{x_0}+\tfrac{y}{y_0})}## feels too arbitrary to me. Is there a nice way to see the choice of this second dominant function as obvious? Page 397 if needed.
 
The first thing that struck me is that ##\big(1 - \dfrac{x}{x_0}\big)\big(1 - \dfrac{y}{y_0}\big)## differs by only one term, ##\dfrac{xy}{x_0 y_0}##, from ##1 - \big(\dfrac{x}{x_0} + \dfrac{y}{y_0}\big)##. So your ##\phi## and ##\psi## are not that different.
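That one-term difference is quick to confirm symbolically; a small sympy check (the symbol and variable names are my own):

```python
import sympy as sp

x, y, x0, y0 = sp.symbols('x y x_0 y_0')

prod_form = (1 - x/x0) * (1 - y/y0)   # denominator of phi
sum_form = 1 - (x/x0 + y/y0)          # denominator of psi

# Expanding shows the two denominators differ by exactly the cross term.
cross = x*y / (x0*y0)
assert sp.simplify(prod_form - sum_form - cross) == 0
```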

I would guess that someone needed the second formulation, hoped it would work, and then found a convincing argument that it is okay (as per your notes on the Taylor series).

A lot of mathematics looks unintuitive in this way when it is presented as a fait accompli. The book would be more helpful if it first showed you an example of why the second bound is useful in some cases, and then guided you through an explanation of why you can do it that way. Depending on the signs of ##x, y, x_0, y_0##, the missing term ##\dfrac{xy}{x_0 y_0}## could be either positive or negative, so you need some non-trivial explanation of why the second bound works.

Maybe there is a more intuitive explanation that I don't see. Maybe not.
 