Power series when to use Frobenius method

John777
Hi, I'm new to the forum and need some help regarding my calc class. Any help you could provide would be greatly appreciated.

When doing a power series solution, when should I use the Frobenius method and when should I use the simple power series method? The simple method seems a little faster, but I know there is a certain type of problem where you must use Frobenius.

Frobenius being y = \sum_{n=0}^\infty A_n x^{n+s}

Regular method being y = \sum_{n=0}^\infty A_n x^n
 
These 2 are equivalent
 
kof9595995 said:
These 2 are equivalent

Don't take this the wrong way as I'm just trying to learn, but why do they teach both methods? There is no difference between them?
 
John777 said:
Don't take this the wrong way as I'm just trying to learn, but why do they teach both methods? There is no difference between them?

There is a difference between them, but for differential equations without a singularity at some value of x the difference disappears because you will be forced to conclude s = 0.

When you have a differential equation with a singularity at some value of x, you will find a non-trivial value of s when you do a power series around the singular point.

i.e., if you have a singularity at a point x = c, you would plug in a series

y = \sum_{n=0}^\infty A_n(x-c)^{n+s}

and you would get s = some non-zero number. If there were no singularity at x = c, you would find s = 0.
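As a concrete illustration (my own example, not from this thread): the Euler equation

2x^2 y'' + 3x y' - y = 0

has a regular singular point at x = 0. Substituting y = \sum_{n=0}^\infty A_n(x)^{n+s} and collecting the lowest power of x gives the indicial equation

2s(s-1) + 3s - 1 = 0, i.e. (2s - 1)(s + 1) = 0,

so s = 1/2 or s = -1. Since neither exponent is zero, an ordinary power series ansatz (which forces s = 0) would miss both solutions, and you need the extra x^s factor.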
 
Can you explain exactly what correction the x^s factor contributes? I don't see why the Frobenius method improves on the failing ordinary power series method.
 
You use the "Frobenius" method when the point about which you are expanding (the "x_0" in \sum a_n(x-x_0)^n) is a "regular singular point". That means that the leading coefficient has a singularity there, but not "too bad" a singularity: essentially that it acts like (x - x_0)^{-n} for nth order equations but no worse. Every DE text I have seen explains all that.
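As a sanity check of the indicial-equation idea (a sketch using SymPy, not part of the original thread; the order nu = 1/2 is just an example value), one can derive the exponents s for Bessel's equation symbolically:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
s = sp.symbols("s")
nu = sp.Rational(1, 2)  # example order; any value of nu works the same way

# Bessel's equation x^2 y'' + x y' + (x^2 - nu^2) y = 0 has a regular
# singular point at x = 0.  Substitute the leading Frobenius term
# y = x^s; the x^2 * y piece only contributes at higher order, so the
# remaining terms, divided by x^s, give the indicial equation.
y = x**s
lowest_order = x**2 * y.diff(x, 2) + x * y.diff(x) - nu**2 * y
indicial = sp.expand(sp.simplify(lowest_order / x**s))
roots = sp.solve(indicial, s)
print(indicial, roots)  # the exponents are s = +/- nu
```

With nu = 1/2 the indicial equation comes out as s^2 - 1/4 = 0, giving s = ±1/2, neither of which is zero, so the plain power series ansatz fails here.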
 