Ordinary differential equations. Series method.

In summary: when the expansion point x=0 is a singular point of the differential equation, the solution need not have a Taylor series there, so a plain power series ##\sum a_mx^m## can fail. The Method of Frobenius instead uses the more general ansatz ##\sum a_mx^{m+k}##; the allowed values of k are found from the indicial equation before the remaining coefficients are worked out.
  • #1
LagrangeEuler
Question:
Why equations
[tex]x(1-x)\frac{d^2y}{dx^2}+[\gamma-(\alpha+\beta+1)x]\frac{dy}{dx}-\alpha \beta y(x)=0[/tex]
should be solved by choosing
##y(x)=\sum^{\infty}_{m=0}a_mx^{m+k}##
and not
##y(x)=\sum^{\infty}_{m=0}a_mx^{m}##?
How to know when we need to choose one of the forms.
Also when I sum over ##m##, then ##\sum^{\infty}_{m=0}a_mx^{m+k}=y(x,k)##. Right?
 
  • #2
LagrangeEuler said:
Why should the equation be solved by choosing ##y(x)=\sum^{\infty}_{m=0}a_mx^{m+k}## and not ##y(x)=\sum^{\infty}_{m=0}a_mx^{m}##?

Ultimately, you are looking for series solutions. A series need not begin at the ##x^0## term, so you allow it to begin at ##x^k##.

You could begin by substituting into the equation the series
##y(x)=\sum^{\infty}_{m=0}a_mx^{m}##
But then, if the solution does begin with the ##x^k## term, your first set of coefficients
##a_0, a_1, ..., a_{k-1}##
would all work out to be zero. Worse, if ##k## turns out not to be a non-negative integer, no plain power series can represent the solution at all. It is thus less work, and more general, to allow for this by starting your series at
##x^k##
You can then solve, among other things, for the value of ##k##.

Technically, the notation you suggest,
##\sum^{\infty}_{m=0}a_mx^{m+k}=y(x,k)##
is correct, but redundant.
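As a side note, the step of solving for ##k## can be sketched symbolically. The following is a minimal check with sympy (variable names are my own), substituting just the leading term ##a_0x^k## of the Frobenius ansatz into the hypergeometric equation from post #1 and solving for the allowed exponents:

```python
import sympy as sp

x, k, a0 = sp.symbols('x k a0')
alpha, beta, gamma = sp.symbols('alpha beta gamma')

# Leading term of the Frobenius ansatz y = sum a_m x^(m+k): keep only a0 x^k.
y = a0 * x**k
ode = (x*(1 - x)*sp.diff(y, x, 2)
       + (gamma - (alpha + beta + 1)*x)*sp.diff(y, x)
       - alpha*beta*y)

# The lowest power produced is x^(k-1); multiplying by x^(1-k) and setting
# x = 0 isolates its coefficient, which is the indicial equation.
indicial = sp.expand(ode * x**(1 - k)).subs(x, 0)
roots = sp.solve(sp.Eq(indicial, 0), k)
print(roots)  # the two exponents k = 0 and k = 1 - gamma
```

Starting the series from ##m=0## with a plain Taylor ansatz would only ever find the ##k=0## branch; the second exponent ##1-\gamma## is exactly what the extra ##x^k## factor buys you.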
 
  • #3
Ok. Fair enough. I will try to solve this tomorrow with both methods. Could you just tell me how you know, just from looking at the equation, that the first few coefficients will be zero? Or do you always use ## \sum^{\infty}_{m=0}a_mx^{m+k}##?
 
  • #4
LagrangeEuler said:
Could you just tell me how you know that the first few coefficients will be zero? Or do you always use ## \sum^{\infty}_{m=0}a_mx^{m+k}##?

Always use ## \sum^{\infty}_{m=0}a_mx^{m+k}##
 
  • #5
LagrangeEuler said:
Why should the equation be solved by choosing ##y(x)=\sum^{\infty}_{m=0}a_mx^{m+k}## and not ##y(x)=\sum^{\infty}_{m=0}a_mx^{m}##?
See http://mathworld.wolfram.com/FrobeniusMethod.html

x=0 is a regular singular point of the differential equation. That's why you have to use the first form (a Frobenius series) and not a plain old Taylor series.
 
  • #6
The method you are inquiring about is the Method of Frobenius. You have to use the Method of Frobenius whenever you are expanding about a singular point of the differential equation.
Singular points fall into two categories: regular singular points and irregular singular points (the method is only guaranteed to work at a regular singular point). I suggest reading that section in your DE book. The point is that the solution need not have a Taylor series expansion at a singular point. You could instead center a Taylor series at some ordinary point, but that gets really messy, so we employ the Method of Frobenius.

There is also a way of getting the values of ##k## before you even start massaging the problem: solve the indicial equation.
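For the equation in this thread, that up-front step can be sketched. Keeping only the leading term ##a_0x^k## of the ansatz, the lowest power ##x^{k-1}## receives contributions only from the ##x\frac{d^2y}{dx^2}## and ##\gamma\frac{dy}{dx}## terms, so its coefficient must vanish:

[tex]a_0\left[k(k-1)+\gamma k\right]=0 \quad\Longrightarrow\quad k=0 \ \ \text{or} \ \ k=1-\gamma.[/tex]

These are the exponents at which the two independent Frobenius solutions can start (when ##\gamma## is not an integer, both branches go through).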
 

1. What are ordinary differential equations (ODEs)?

ODEs are mathematical equations that describe the relationship between a function and its derivatives. They are used to model a wide range of physical phenomena and are an important tool in scientific research.

2. What is the series method for solving ODEs?

The series method is a technique used to solve ODEs by representing the solution as a series expansion. This involves breaking down the solution into a sum of simpler functions, typically polynomials, and then finding the coefficients that satisfy the original equation.
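As a toy illustration (not from the thread above): substituting ##y=\sum a_mx^m## into the simple equation ##y'=y## and matching powers of ##x## gives the recurrence ##a_{m+1}=a_m/(m+1)##, and a short script can sum the resulting series:

```python
# Power-series solution of the toy ODE y' = y with y(0) = 1.
# Matching coefficients of x^m gives the recurrence a_{m+1} = a_m / (m + 1).
def series_solution(x, n_terms=20):
    a = 1.0          # a_0 from the initial condition y(0) = 1
    total = 0.0
    for m in range(n_terms):
        total += a * x**m
        a = a / (m + 1)   # recurrence from substituting the series into y' = y
    return total

print(series_solution(1.0))  # approx 2.718281828..., the series sums to e^x
```

Here the recurrence reproduces ##a_m = 1/m!##, so the partial sums converge rapidly to ##e^x##, which is the known exact solution.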

3. When should the series method be used to solve ODEs?

The series method is most useful for linear ODEs with variable coefficients, where closed-form solutions are rare; it also underlies perturbation expansions for equations containing a small parameter. It is the natural choice when the solution is known to be analytic near the expansion point, so that a convergent power series exists.

4. What are the advantages of using the series method for ODEs?

The series method allows for a systematic approach to solving ODEs, breaking down the problem into smaller, more manageable steps. It also provides a way to find approximate solutions to ODEs that cannot be solved exactly. Additionally, the series method can provide insights into the behavior of the solution near specific points or for specific parameter values.

5. Are there any limitations to using the series method for ODEs?

One limitation of the series method is that it can become computationally intensive for higher order ODEs, as the number of terms in the series expansion grows with the order of the ODE. Additionally, the series method may not always provide a convergent solution, in which case alternative methods must be used.
