Power Series solution to dy/dx=x+y


Discussion Overview

The discussion revolves around finding a power series solution to the differential equation ##\frac{dy}{dx}=x+y## with the initial condition ##x=0; y=1##. Participants explore the method of successive approximations and the reasoning behind why each subsequent approximation might be considered better than the previous one.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants suggest writing down specific approximations for several terms to identify patterns in the series.
  • One participant expresses uncertainty about why a generic form for the nth term should be considered a better approximation than its predecessor.
  • Another participant proposes examining the difference between successive approximations to represent the error and show that it approaches zero as n increases.
  • Some participants discuss the relationship between the series coefficients and the initial conditions, noting that the series must satisfy the differential equation.
  • One participant mentions that the method of successive approximations leads to truncated Taylor series that become better approximations of the solution.
  • Another participant reflects on the challenges of understanding the textbook's explanation and suggests that the iterative nature of the method provides more terms in the expansion of the solution.
  • Some participants express concerns about the textbook's clarity and the difficulty of reconciling the method with their current understanding of series.
  • One participant introduces a more systematic approach involving a contraction mapping theorem to justify the convergence of the approximations to the solution.

Areas of Agreement / Disagreement

Participants express differing views on the clarity and effectiveness of the textbook's explanation of successive approximations. There is no consensus on the best approach to understanding why the approximations improve, with multiple perspectives on the reasoning behind the method.

Contextual Notes

Some participants note limitations in their understanding of Taylor series, which have not yet been introduced in their studies, affecting their ability to fully grasp the method of successive approximations.

Odious Suspect
This is from an example in Thomas's Classical Edition. The task is to find a solution to ##\frac{dy}{dx}=x+y## with the initial condition ##x=0; y=1##. He uses what he calls successive approximations.
$$y_1 = 1$$
$$\frac{dy_2}{dx}=y_1+x$$
$$\frac{dy_3}{dx}=y_2+x$$
...
$$\frac{dy_{n+1}}{dx}=y_n+x$$

I can easily follow the process, but I'm not seeing why I should consider each subsequent expression to provide a better approximation of ##y##. Is there an easy way to explain this?
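[Editorial sketch, not from the thread; the helper names are mine.] One concrete way to see the improvement is to carry out the iteration with exact rational coefficients and compare each iterate at ##x=1## against the closed-form solution ##y = 2e^x - x - 1## (substitution confirms it: ##y' = 2e^x - 1 = x + y## and ##y(0)=1##):

```python
import math
from fractions import Fraction as F

def next_iterate(y):
    """Given y_n as a coefficient list [c0, c1, ...], return y_{n+1},
    i.e. the solution of dy_{n+1}/dx = y_n + x with y_{n+1}(0) = 1."""
    rhs = list(y) + [F(0)] * (2 - len(y))   # pad so an x term exists
    rhs[1] += 1                             # add the x of the right-hand side
    # integrate term by term; the constant of integration is fixed by y(0) = 1
    return [F(1)] + [c / (k + 1) for k, c in enumerate(rhs)]

def evaluate(coeffs, x):
    return sum(float(c) * x**k for k, c in enumerate(coeffs))

exact = 2 * math.exp(1) - 1 - 1             # closed-form solution at x = 1
y = [F(1)]                                  # y_1 = 1
for n in range(2, 8):
    y = next_iterate(y)
    print(n, abs(evaluate(y, 1.0) - exact))  # error shrinks at every step
```

Each pass through `next_iterate` integrates the previous polynomial, so the printed errors decrease monotonically, which is the numerical content of "each approximation is better."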
 
Odious Suspect said:
Is there an easy way to explain this?
Hi Odius:

I suggest you start by writing down the specific approximations ##y_n(x)## for ##n = 1, 2, 3, \dots## and see if you can spot a pattern.

Hope this helps.

Regards,
Buzz
 
I guess I don't follow. I have a generic form for the nth term, ##y_n(x)=1+x+2 \sum _{k=2}^{n-1} \frac{x^k}{k!}+\frac{x^n}{n!}##, but I'm not sure why I should assume it is a better approximation than its predecessor.

Sure, I can carry out the process to produce an infinite series and convince myself that it solves the equation, but I still don't see the motivation that would get me started if I didn't already know the answer. I guess I shall draw graphs of the individual terms of the partial sums or something like that.
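[Editorial aside.] The conjectured pattern can in fact be confirmed by induction: assuming ##y_n=1+x+2\sum_{k=2}^{n-1}\frac{x^k}{k!}+\frac{x^n}{n!}## and integrating the recursion ##\frac{dy_{n+1}}{dx}=y_n+x## with ##y_{n+1}(0)=1## gives

$$y_{n+1}(x)=1+\int_0^x\bigl(y_n(t)+t\bigr)\,dt=1+x+x^2+2\sum_{k=3}^{n}\frac{x^k}{k!}+\frac{x^{n+1}}{(n+1)!}=1+x+2\sum_{k=2}^{n}\frac{x^k}{k!}+\frac{x^{n+1}}{(n+1)!},$$

which is the same form with ##n## replaced by ##n+1## (note ##x^2 = 2\,\frac{x^2}{2!}##), so the pattern persists at every step.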
 
Hi Odius:

Take the difference ##y_{n+1}(x) - y_n(x)##. ##y(x)##, as the limit of ##y_n(x)##, is then the infinite sum of these differences. You can then find a representation of the error for ##y_n(x)## which can show that the error ##\to 0## as ##n \to \infty##. (I am guessing you will find all the terms in the infinite sum are positive. If this is wrong, there are some variations to deal with that situation.)

Regards,
Buzz
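[Editorial aside, working this suggestion out for the iterates in question.] With the pattern ##y_n=1+x+2\sum_{k=2}^{n-1}\frac{x^k}{k!}+\frac{x^n}{n!}##, the successive differences are

$$y_{n+1}(x)-y_n(x)=\frac{x^n}{n!}+\frac{x^{n+1}}{(n+1)!},$$

so on any interval ##0\le x\le a## each difference is positive and bounded by ##\frac{a^n}{n!}+\frac{a^{n+1}}{(n+1)!}\to 0##, and the sum of the differences converges absolutely. That is exactly the sense in which each ##y_{n+1}## improves on ##y_n##.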
 
Buzz Bloom said:
Hi Odius:

Take the difference ##y_{n+1}(x) - y_n(x)##. ##y(x)##, as the limit of ##y_n(x)##, is then the infinite sum of these differences. You can then find a representation of the error for ##y_n(x)## which can show that the error ##\to 0## as ##n \to \infty##. (I am guessing you will find all the terms in the infinite sum are positive. If this is wrong, there are some variations to deal with that situation.)

Regards,
Buzz
I'm not sure I'm parsing this correctly. Is this what you intended? ##y_{n+1}-y_n(x)y(x)##. And do you mean for me to leave ##y(x)## as a symbolic, yet to be determined function?

Darn! Library is closing. Will pick this up later.
 
I think I have something that works. It's slightly different from your textbook's, but you might like it. Because of the initial conditions, the first two terms of a power series for ##y## will be ##1+x##, and we can write ##y## as ##y=1+x+A_2 x^2 +A_3 x^3 +A_4 x^4 +\dots##. Plug ##y## into both sides of the equation. (When you plug into the right side you get ##1+2x+A_2 x^2+A_3 x^3+\dots##.) Set like powers of x equal on each side and you can successively solve for the coefficients ##A_2, A_3, \dots##
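[Editorial sketch of this coefficient matching with exact arithmetic; the name `A` for the coefficient table is mine.] Equating powers of x in ##y'=x+y## with ##y=1+x+A_2x^2+A_3x^3+\cdots## gives ##2A_2=2## and ##(k+1)A_{k+1}=A_k## for ##k\ge 2##:

```python
from fractions import Fraction as F
from math import factorial

# Coefficient matching for y = 1 + x + A2*x^2 + A3*x^3 + ...:
# equating powers of x in y' = x + y gives 2*A2 = 2 and
# (k+1)*A_{k+1} = A_k for k >= 2.
A = {2: F(2, 2)}
for k in range(2, 8):
    A[k + 1] = A[k] / (k + 1)

for k in sorted(A):
    print(k, A[k])
    # every coefficient comes out as 2/k!, the Taylor coefficient of 2e^x - x - 1
    assert A[k] == F(2, factorial(k))
```

The recurrence makes the closed form transparent: once ##A_2=1=2/2!##, dividing by ##k+1## at each step yields ##A_k=2/k!## for all ##k\ge 2##.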
 
Odious Suspect said:
I'm not sure I'm parsing this correctly. Is this what you intended? ##y_{n+1}-y_n(x)y(x)##.
Hi Odius:

Sorry about the confusion. The "." dot is a period, not a multiplication sign.
Take the difference ##y_{n+1}(x) - y_n(x)##.
##y(x)##, as the limit of ##y_n(x)##, is then the infinite sum of these differences.

Regards,
Buzz
 
Only an observation: for the series ##y=a_{0}+a_{1}x+a_{2}x^{2}+\cdots## to be a solution of ##y'=x+y##, it must have ##a_{0}=1## (by the initial condition), so ##y=1+a_{1}x+a_{2}x^{2}+\cdots##. Substituting into ##y'=x+y##, you obtain relations for the coefficients: ##a_{1}+2a_{2}x+3a_{3}x^{2}+\cdots = x+1+a_{1}x+a_{2}x^{2}+\cdots##, so ##a_{1}=1##, ##a_{2}=1##, ##a_{3}=\frac{1}{3}##, ...

This procedure is the same as the successive-approximation scheme in your problem. You start with the zeroth-order approximation ##y_{1}=1##; then, passing to first order, integrating and setting ##y_{2}(0)=1## gives ##y_{2}=1+x+\frac{x^{2}}{2}##, which is exact through first order. Proceeding, you obtain ##y_{3}=1+x+x^{2}+\frac{x^{3}}{6}##, and so on. In the limit ##n\rightarrow +\infty## you find the same solution as before.
 
Odious Suspect said:
This is from an example in Thomas's Classical Edition. The task is to find a solution to ##\frac{dy}{dx}=x+y## with the initial condition ##x=0; y=1##. He uses what he calls successive approximations.
$$y_1 = 1$$
$$\frac{dy_2}{dx}=y_1+x$$
$$\frac{dy_3}{dx}=y_2+x$$
...
$$\frac{dy_{n+1}}{dx}=y_n+x$$

I can easily follow the process, but I'm not seeing why I should consider each subsequent expression to provide a better approximation of ##y##. Is there an easy way to explain this?
I think that if you take those equations one at a time, substitute in the preceding equation, and integrate with respect to x, then you will have a sequence of polynomials of higher and higher powers of x. Those will be truncated Taylor series, so they get closer to the full Taylor series and become better approximations of the solution.
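[Editorial check of the "truncated Taylor series" point; helper names are mine.] The exact solution is ##y=2e^x-x-1##, with Taylor coefficients ##1, 1, 2/2!, 2/3!, \dots## about 0, and the nth iterate matches them through degree ##n-1## (its ##x^n## coefficient is only half the true one):

```python
from fractions import Fraction as F
from math import factorial

def taylor_coeff(k):
    # Taylor coefficients of the exact solution y = 2e^x - x - 1 about x = 0
    return F(1) if k <= 1 else F(2, factorial(k))

def iterate_coeff(n, k):
    # Coefficients of y_n from the pattern the iteration produces:
    # y_n = 1 + x + 2*sum_{k=2}^{n-1} x^k/k! + x^n/n!
    if k <= 1:
        return F(1)
    if k < n:
        return F(2, factorial(k))
    return F(1, factorial(n)) if k == n else F(0)

n = 5
print([iterate_coeff(n, k) == taylor_coeff(k) for k in range(n + 1)])
# → [True, True, True, True, True, False]: agreement through x^4,
#   mismatch only in the top x^5 term
```

So each iteration locks in one more Taylor coefficient of the solution, which is why the iterates keep improving.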
 
  • #10
FactChecker said:
I think that if you take those equations one at a time, substitute in the preceding equation, and integrate with respect to x, then you will have a sequence of polynomials of higher and higher powers of x. Those will be truncated Taylor series, so they get closer to the full Taylor series and become better approximations of the solution.
I think a more systematic procedure would be to write the equation as ##y=dy/dx-x## and let ##y_{n+1}=dy_{n}/dx-x##. Normally the Thomas calculus text is quite good, but I think this is not one of his better solutions. Scratch that (the ##y=dy/dx-x##): it doesn't work, because it doesn't get you the next power of x. This is one iterative solution that looked somewhat clumsy. The method I used in post #6 seems more logical for this one. It isn't obvious what to do with the x term if you try to do an integral solution with an iteration: do you integrate the x, etc.?
 
  • #11
Charles Link said:
I think a more systematic procedure would be to write the equation as ##y=dy/dx-x## and let ##y_{n+1}=dy_{n}/dx-x##. Normally the Thomas calculus text is quite good, but I think this is not one of his better solutions. Scratch that (the ##y=dy/dx-x##): it doesn't work, because it doesn't get you the next power of x. This is one iterative solution that looked somewhat clumsy. The method I used in post #6 seems more logical for this one. It isn't obvious what to do with the x term if you try to do an integral solution with an iteration: do you integrate the x, etc.?
My main point was to answer his question of why the book explanation is described as giving a series of better approximations. I think it is because it is giving more terms in an expansion of the solution.

P.S. I don't like to fight the assigned textbook explanations. It always seems to cause more confusion. So I usually try to explain what the textbook is saying.
 
Likes: SammyS and Charles Link
  • #12
FactChecker said:
I think that if you take those equations one at a time, substitute in the preceding equation, and integrate with respect to x, then you will have a sequence of polynomials of higher and higher powers of x. Those will be truncated Taylor series, so they get closer to the full Taylor series and become better approximations of the solution.

But I'm not "allowed" to think in terms of Taylor series. It hasn't been introduced yet.
 
  • #13
Odious Suspect said:
But I'm not "allowed" to think in terms of Taylor series. It hasn't been introduced yet.

Often when you start a subject, you will be shown methods which work, but you will not be able to understand the proofs until much later.

Ultimately, the reason why this method works is that the map ##T : C^\infty([0,a]) \to C^\infty([0,a])## given by ##T(y) : x \mapsto 1 + \tfrac12 x^2 + \int_0^x y(t)\,dt##, or an iterate thereof, can be shown to be a contraction with respect to a suitable metric on ##C^\infty([0,a])##. It follows from the contraction mapping theorem that any sequence of the form ##y_{n+1} = T(y_n)## tends to a unique fixed point, which in this case is the solution of the differential equation ##y' = x + y## subject to ##y(0) = 1##.
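[Editorial sketch of that fixed-point view; the grid size and the interval ##[0,1]## are arbitrary choices of mine.] Iterating ##T## numerically with a trapezoidal cumulative integral shows the sup-norm gap between consecutive iterates shrinking rapidly, and the iterates approaching ##2e^x - x - 1##:

```python
import math

# Iterate the map T(y)(x) = 1 + x^2/2 + ∫_0^x y(t) dt numerically on [0, 1].
N = 1000
h = 1.0 / N
xs = [i * h for i in range(N + 1)]

def T(y):
    # Trapezoidal cumulative integral of y, plus the fixed part 1 + x^2/2
    out = [1.0]
    acc = 0.0
    for i in range(1, N + 1):
        acc += 0.5 * (y[i - 1] + y[i]) * h
        out.append(1.0 + xs[i] ** 2 / 2 + acc)
    return out

y = [1.0] * (N + 1)                     # start from the constant function y_1 = 1
for n in range(8):
    new = T(y)
    gap = max(abs(a - b) for a, b in zip(new, y))
    print(n + 1, gap)                   # sup-norm gaps between consecutive iterates
    y = new

exact = [2 * math.exp(x) - x - 1 for x in xs]
print(max(abs(a - b) for a, b in zip(y, exact)))
```

The printed gaps shrink roughly like ##1/n!##, the factorial decay that makes (an iterate of) ##T## a contraction in the first place.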
 
Likes: FactChecker
