Help understanding Taylor expansion etc.

SUMMARY

This discussion focuses on the understanding and application of Taylor series expansions in mathematics, particularly in solving differential equations using power series. The student expresses confusion regarding the benefits of representing functions as Taylor series, especially concerning their convergence and utility outside their radius of convergence. Key insights include the practical use of Taylor series for approximating functions like sine and logarithm, as well as their significance in defining functions for complex variables and matrices. The conversation highlights the importance of centering series expansions around specific points, particularly in the context of non-entire functions.

PREREQUISITES
  • Understanding of differential equations and power series
  • Familiarity with Taylor series and their convergence properties
  • Basic knowledge of complex numbers and matrices
  • Experience with mathematical functions and their approximations
NEXT STEPS
  • Study the convergence criteria for Taylor series and power series
  • Explore the applications of Taylor series in numerical methods for approximating functions
  • Learn about the significance of singular points in differential equations
  • Investigate the relationship between Taylor series and complex functions, particularly in linear algebra
USEFUL FOR

Students of mathematics, particularly those studying calculus and differential equations, as well as anyone interested in the applications of Taylor series in numerical analysis and complex variables.

Levis2
Well, I am a 16-year-old student who is really interested in math. I do a lot of studying on my own, because I am a bit bored with the math we do in school.

Right now I am reading about solving differential equations with power series. I can do this, and I do understand the recurrence relations, generalizing a term for the coefficients, etc. I also understand that some solutions to a differential equation are so complicated that they can't be written using ordinary function notation and elementary functions.

There's one thing I don't understand though: Taylor series expansions, at least not to a satisfactory degree. I haven't found a good text on WHY it is beneficial to write a function as a Taylor series. I also haven't found any explanation of why the series is centered around a particular point, and I don't understand why the series is any good, since, at least in my understanding, it only has a useful meaning for x-values inside its interval of convergence, while the original function is defined for many more x's. Take, for example, the function 1/(1-x). It is defined wherever 1-x is different from zero, i.e. for all x different from 1. But expanding it into the sum of x^n as n → infinity only gives values (as far as I've understood) for x between -1 and 1. I am a bit puzzled :) This seems like a degradation from its original state.
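The convergence behavior described here is easy to check numerically. A quick Python sketch (added as an illustration, not part of the original post) compares the partial sums of the geometric series with 1/(1-x):

```python
def geom_partial(x, terms):
    """Partial sum of the geometric series 1 + x + x^2 + ...,
    which converges to 1/(1-x) only for |x| < 1."""
    return sum(x ** n for n in range(terms))

# Inside the radius of convergence the partial sums settle down to 1/(1-x):
print(geom_partial(0.5, 50), 1 / (1 - 0.5))

# Outside it they blow up, even though 1/(1-2) = -1 is perfectly well defined:
print(geom_partial(2.0, 50))
```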

Can you point me to some literature that can give me a proper understanding of these series, or maybe explain it to me? I would be very grateful if you did. I really want to get a good understanding of this.
 
From the point of view of applied math, Taylor series are extremely important! You probably know what the sine function is, and you can probably calculate it for some values.
E.g. \sin(\pi/2)=1, \sin(\pi/4)=\sqrt{2}/2,...

However, if I give you a more difficult question, such as: calculate \sin(1°)=\sin(\pi/180), then you probably would have no clue how to calculate it. Taylor series make it a lot easier for you: you just take the power series expansion of sin, and if you take the first 20 terms of the series, you'll get a very good approximation of \sin(1°).

This is in fact how your calculator calculates sines, cosines, logarithms, etc.

So, when you obtain a power series as an answer to an ODE, that is quite good: you can approximate the function as closely as you want, and that is most likely everything you want to do!
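This idea can be sketched in a few lines of Python (an illustration added here; the name sin_taylor is my own):

```python
import math

def sin_taylor(x, terms=10):
    """Approximate sin(x) with the first `terms` terms of its Maclaurin series:
    sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = math.pi / 180  # 1 degree in radians
print(sin_taylor(x, 5))  # already agrees with math.sin(x) to machine precision
print(math.sin(x))
```

Because x is small here, even five terms are far more than enough; the error of the truncated series is bounded by the first omitted term, x^11/11!.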
 
Well, for a simple function like 1/(1+x) the series might seem less informative, but what about ln(1+x)? It is very hard to find values of ln(1+x) by hand.

But if I write down the series for 1/(1+x), = 1-x+x^2-x^3+x^4-+...

and integrate both sides, I get ln(1+x) = x-x^2/2 + x^3/3 -+...

and this is easy to approximate, at least for -1 < x ≤ 1. E.g.

ln(2) = ln(1+1) = 1 - 1/2 + 1/3 - 1/4 +- ...

Similarly 1/(1+x^2) = 1 - x^2 + x^4 - x^6 +- ..., so integrating gives

arctan(x) = x - x^3/3 + x^5/5 -+ ...

so arctan(1) = pi/4 = 1 - 1/3 + 1/5 - 1/7 +- ...
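Both alternating series can be summed numerically; a short Python sketch (my own illustration, not from the thread) shows how slowly they converge at x = 1:

```python
import math

def ln1p_series(x, terms):
    # ln(1+x) = x - x^2/2 + x^3/3 - ...   (converges for -1 < x <= 1)
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

def arctan_series(x, terms):
    # arctan(x) = x - x^3/3 + x^5/5 - ...
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

# At x = 1 both series converge, but slowly (the error shrinks like 1/terms):
print(ln1p_series(1.0, 10000), math.log(2))
print(4 * arctan_series(1.0, 10000), math.pi)
```

For an alternating series the truncation error is bounded by the first omitted term, which is why thousands of terms are needed here for only a few digits.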

and as you already know, series are even more useful for expressing functions you have no other way to write down, such as the ones that occur as solutions of ODEs.

The classical book on series is Konrad Knopp's work, Theory and Application of Infinite Series.
 
Another important application of Taylor series is defining functions for variables other than simply real numbers.

For example, we know how to multiply and add complex numbers or matrices but what would e^{a+ bi} mean? What would e^{A} mean for A a matrix?

Now, we know that
e^{x}= \sum_{n=0}^\infty \frac{x^n}{n!}
So we can say that e^{a+ bi}= e^ae^{bi}
and then say that
e^{bi}= \sum_{n=0}^\infty \frac{(bi)^n}{n!}
We know that i^2= -1, i^3= -i, and i^4= 1, and we can extend that to any power of i. All odd powers of i involve "i" again, all even powers do not, so we can separate those, letting n= 2k in the first sum and n= 2k+1 in the second (using k as the index so it is not confused with the imaginary unit i):
e^{bi}= \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!}b^{2k}+ i\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}b^{2k+1}

And if we recognize that first sum as the Taylor series for cos(x) and the second as the Taylor series for sin(x) we can write, as a result of using Taylor's series,
e^{a+ bi}= e^a(cos(b)+ i sin(b))

For A a diagonal matrix, say
A= \begin{bmatrix}a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3\end{bmatrix}
it is easy to show that
A^n= \begin{bmatrix}a_1^n & 0 & 0 \\ 0 & a_2^n & 0 \\ 0 & 0 & a_3^n\end{bmatrix}

and then, using the Taylor series for e^x above,
e^A= \begin{bmatrix}e^{a_1} & 0 & 0 \\ 0 & e^{a_2} & 0 \\ 0 & 0 & e^{a_3}\end{bmatrix}

That's one reason why "diagonal matrices" and ways of diagonalizing matrices are so important in applications of Linear Algebra.
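The diagonal case is easy to check numerically. This Python sketch (added for illustration) sums the Taylor series entry by entry and compares the result with exponentiating each diagonal entry directly:

```python
import math

def exp_diag(diag):
    """e^A for A = diag(a_1, ..., a_n): exponentiate each diagonal entry."""
    return [math.exp(a) for a in diag]

def exp_diag_series(diag, terms=30):
    """Sum the Taylor series e^A = I + A + A^2/2! + ... entry-wise;
    since A is diagonal, each entry of e^A is just sum of a^k / k!."""
    return [sum(a ** k / math.factorial(k) for k in range(terms)) for a in diag]

A = [1.0, 2.0, 3.0]
print(exp_diag(A))
print(exp_diag_series(A))  # the truncated series matches to high accuracy
```

Representing the diagonal matrix as a plain list of its diagonal entries keeps the sketch self-contained; a general matrix exponential would need matrix powers, which is exactly why diagonalizing first is so convenient.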
 
HallsofIvy said:
Another important application of Taylor series is defining functions for variables other than simply real numbers. […]

This was very informative! Thank you! :) I have never seen that derivation of Euler's formula before. Since I started this thread I've read a bit more about the topic, and I understand it much better now. It is only the non-entire functions (the natural log, arctan, etc.) whose series converge only on a limited interval of x; the series for most of the other familiar functions converge for all x. Now I understand that this must be why you want to "center" your series expansion around a certain point: for the natural logarithm you must center the expansion around a point of interest, since the series won't converge for all x. This must also be the reason for centering the series around x0 when x0 is a singular point of a differential equation! It's all much clearer to me now :P

Thanks a lot
 
I didn't really understand the matrix stuff though, since I have never seen or read about linear algebra before :)
 
