
Power Series solution to dy/dx = x + y

  1. Mar 17, 2016 #1
    This is from an example in Thomas's Calculus (Classic Edition). The task is to find a solution to ##\frac{dy}{dx}=x+y## with the initial condition ##y=1## at ##x=0##. He uses what he calls successive approximations.
    $$y_1 = 1$$
    $$\frac{dy_2}{dx}=y_1+x$$
    $$\frac{dy_3}{dx}=y_2+x$$
    ...
    $$\frac{dy_{n+1}}{dx}=y_n+x$$
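
    Carrying out the first two steps explicitly, taking each ##y_{n+1}(0)=1## (which I believe is what the book intends), gives
    $$\frac{dy_2}{dx}=1+x \;\Rightarrow\; y_2=1+x+\frac{x^2}{2}, \qquad \frac{dy_3}{dx}=y_2+x=1+2x+\frac{x^2}{2} \;\Rightarrow\; y_3=1+x+x^2+\frac{x^3}{6}.$$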

    I can easily follow the process, but I'm not seeing why I should consider each subsequent expression to provide a better approximation of ##y##. Is there an easy way to explain this?
     
  3. Mar 17, 2016 #2
    Hi Odius:

    I suggest you start by writing down the specific approximations ##y_n(x)## for several n = 1, 2, 3, ..., and see if you can spot a pattern.

    Hope this helps.

    Regards,
    Buzz
     
  4. Mar 17, 2016 #3
    I guess I don't follow. I have a generic form for the nth term, ##y_n(x)=1+x+2 \sum _{k=2}^{n-1} \frac{x^k}{k!}+\frac{x^n}{n!}##, but I'm not sure why I should assume it is a better approximation than its predecessor.

    Sure, I can carry out the process to produce an infinite series and convince myself that it solves the equation, but I still don't see the motivation that would get me started if I didn't already know the answer. I guess I shall draw graphs of the individual terms of the partial sums or something like that.
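
    (For what it's worth, letting ##n\to\infty## in that general form and using the exponential series ##e^x=\sum_k \frac{x^k}{k!}##, which the book hasn't formally introduced yet, I get
    $$y=1+x+2\sum_{k=2}^{\infty}\frac{x^k}{k!}=2e^x-x-1,$$
    which does satisfy ##y'=x+y## and ##y(0)=1##. Each ##y_n## agrees with this through the ##x^{n-1}## term, so the leftover error starts at order ##x^n##.)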
     
  5. Mar 17, 2016 #4
    Hi Odius:

    Take the difference ##y_{n+1}(x) - y_n(x)##. ##y(x)##, as the limit of the ##y_n(x)##, is then the infinite sum of these differences. You can then find a representation of the error for ##y_n(x)## which can show that the error ##\to 0## as ##n \to \infty##. (I am guessing you will find all the terms in the infinite sum are positive. If this is wrong, there are some variations to deal with that situation.)
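
    For example (my arithmetic, taking ##y_n(0)=1## for every approximation), the first few differences are
    $$y_2-y_1=\frac{x}{1!}+\frac{x^2}{2!},\qquad y_3-y_2=\frac{x^2}{2!}+\frac{x^3}{3!},\qquad y_4-y_3=\frac{x^3}{3!}+\frac{x^4}{4!},$$
    which suggests ##y_{n+1}-y_n=\frac{x^{n}}{n!}+\frac{x^{n+1}}{(n+1)!}##. Each difference ##\to 0## for fixed x, and ##y_1## plus the telescoping sum of the differences rebuilds the full series.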

    Regards,
    Buzz
     
  6. Mar 17, 2016 #5
    I'm not sure I'm parsing this correctly. Is this what you intended? ##y_{n+1}-y_n(x)y(x)##. And do you mean for me to leave ##y(x)## as a symbolic, yet-to-be-determined function?

    Darn! Library is closing. Will pick this up later.
     
  7. Mar 17, 2016 #6

    Charles Link

    Homework Helper

    I think I have something that works. It's slightly different from your textbook's approach, but you might like it. Because of the initial conditions, the first two terms of a power series in ## y ## will be ## y=1+x ##, and we can write ## y ## as ## y=1+x+A_2 x^2 +A_3 x^3 +A_4 x^4 +... ## . Plug ## y ## into both sides of the equation. (When you plug into the right side you get ## 1+2x+A_2 x^2+A_3 x^3+... ##.) Set like powers of x equal on each side and you can successively solve for the coefficients ## A_2, A_3, ... ##
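
    For instance (my working, not from the text), matching like powers gives
    $$2A_2=2,\qquad 3A_3=A_2,\qquad 4A_4=A_3,\;\ldots$$
    so ##A_2=1##, ##A_3=\frac{1}{3}##, ##A_4=\frac{1}{12}##, and in general ##(k+1)A_{k+1}=A_k##, which gives ##A_k=\frac{2}{k!}## for ##k\ge 2##.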
     
  8. Mar 18, 2016 #7
    Hi Odius:

    Sorry about the confusion. The "." dot is a period, not a multiplication sign.
    Take the difference ##y_{n+1}(x) - y_n(x)##.
    ##y(x)##, as the limit of the ##y_n(x)##, is then the infinite sum of these differences.

    Regards,
    Buzz
     
  9. Mar 19, 2016 #8

    Ssnow

    Gold Member

    Just an observation: for the series ##y=a_{0}+a_{1}x+a_{2}x^{2}+\cdots## to be a solution of ##y'=x+y##, the initial condition forces ##a_{0}=1##, so ##y=1+a_{1}x+a_{2}x^{2}+\cdots##. Substituting this into ##y'=x+y##, you obtain relations for the coefficients: ## a_{1}+2a_{2}x+3a_{3}x^{2}+\cdots =x+1+a_{1}x+a_{2}x^{2}+\cdots##, so ##a_{1}=1##, ##a_{2}=1##, ##a_{3}=\frac{1}{3}##, ...

    This procedure is the same thing as the successive approximation in your problem. You start with the zeroth-order approximation ##y_{1}=1##; then, integrating and setting ##y_{2}(0)=1##, you find the first-order approximation ##y_{2}=1+x+\frac{x^{2}}{2}##, which is exact through first order; continuing, you obtain ##y_{3}=1+x+x^{2}+\frac{x^{3}}{6}##, and so on. In the limit ##n\rightarrow +\infty## you find the same solution as before.
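
    If it helps to watch the pattern emerge, here is a quick symbolic sketch of that iteration (my own script, using Python's sympy, not anything from the textbook; it uses the integrated form ##y_{n+1}(x)=1+\int_0^x (t+y_n(t))\,dt##, which I believe is equivalent to the book's scheme):
[code]
import sympy as sp

x, t = sp.symbols('x t')

# Successive approximations for y' = x + y, y(0) = 1,
# in integrated form: y_{n+1}(x) = 1 + integral_0^x (t + y_n(t)) dt
y = sp.Integer(1)  # y_1 = 1
for n in range(2, 7):
    y = 1 + sp.integrate(t + y.subs(x, t), (t, 0, x))
    print(f"y_{n} =", sp.expand(y))

# Series of the exact solution 2*e^x - x - 1, for comparison
print(sp.series(2*sp.exp(x) - x - 1, x, 0, 7))
[/code]
    The printed iterates match ##y_2##, ##y_3##, ... above and agree with the series of ##2e^x-x-1## to more and more terms.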
     
  10. Mar 19, 2016 #9

    FactChecker

    Science Advisor
    Gold Member

    I think that if you take those equations one at a time, substitute the preceding equation, and integrate with respect to x, then you will have a sequence of polynomials of higher and higher powers of x. Those will be truncated Taylor series. So they get closer to the full Taylor series and become better approximations of the solution.
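
    As a concrete check (my numbers): the exact solution ##2e^x-x-1## gives ##y(1)=2e-2\approx 3.4366##, while the successive approximations give ##y_2(1)=2.5##, ##y_3(1)\approx 3.167##, ##y_4(1)=3.375##, and ##y_5(1)\approx 3.425##, each closer than the one before.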
     
  11. Mar 19, 2016 #10

    Charles Link

    Homework Helper

    I thought a more systematic procedure would be to write the equation as ## y=dy/dx -x ## and let ## y_{n+1}=dy_{n}/dx-x ##, but scratch that: it doesn't work because it doesn't get you the next power of x. Normally the Thomas calculus text is quite good, but I think this is not one of his better solutions; this iterative approach looks somewhat clumsy. It isn't obvious what to do with the x term if you try an integral solution with an iteration: do you integrate the x, etc.? The method I used in post #6 seems more logical for this one.
     
    Last edited: Mar 19, 2016
  12. Mar 19, 2016 #11

    FactChecker

    Science Advisor
    Gold Member

    My main point was to answer his question of why the book's explanation is described as giving a series of better approximations. I think it is because each step gives more terms of an expansion of the solution.

    P.S. I don't like to fight the assigned textbook's explanation; that always seems to cause more confusion. So I usually try to explain what the textbook is saying.
     
  13. Mar 22, 2016 #12
    But I'm not "allowed" to think in terms of Taylor series. It hasn't been introduced yet.
     
  14. Mar 22, 2016 #13

    pasmith

    Homework Helper

    Often when you start a subject, you will be shown methods which work, but you will not be able to understand the proofs until much later.

    Ultimately the reason why this method works is that the map [itex]T : C^\infty([0,a]) \to C^\infty([0,a])[/itex] where [tex]T(y) : x \mapsto 1 + \tfrac12 x^2 + \int_0^x y(t)\,dt,[/tex] or an iterate thereof, can be shown to be a contraction with respect to a suitable metric on [itex]C^\infty([0,a])[/itex]. It follows by the contraction mapping theorem that any sequence of the form [itex]y_{n+1} = T(y_n)[/itex] tends to a unique fixed point, which in this case is the solution of the differential equation [itex]y' = x + y[/itex] subject to [itex]y(0) = 1[/itex].
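
    A rough sketch of why such a map contracts, taking the sup metric on ##[0,a]## with ##a<1## as one possible choice (my assumption; other metrics work too): for two candidate functions ##y## and ##z##,
    [tex]|T(y)(x)-T(z)(x)| = \left|\int_0^x \bigl(y(t)-z(t)\bigr)\,dt\right| \le a\,\sup_{[0,a]}|y-z|,[/tex]
    so ##\sup|T(y)-T(z)|\le a\,\sup|y-z|##, and each application of ##T## shrinks the distance between iterates by at least a factor of ##a##. (For ##a\ge 1## one works with an iterate of ##T## instead, as mentioned above.)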
     
    Last edited: Mar 22, 2016