All right:
1.
Say you've got a differential equation to solve:
x''(t)+x(t)=0, x(0)=1, x'(0)=0.
Now, from the differential equation you can easily find the higher-order derivatives at t=0:
a) From diff. equation: x''(0) = -x(0) = -1
Differentiate the differential equation:
b) x'''(t) + x'(t) = 0 -> x'''(0) = -x'(0) = 0
c) x''''(t) + x''(t) = 0 -> x''''(0) = -x''(0) = 1
and so on..
Hence you have, in Taylor form: x(t) = Sum over n: (-1)^n * t^(2n)/(2n)! = cos(t),
(which you probably knew already)
You can see from this approach that, assuming your solution has a Taylor series in the vicinity of the initial point, you can solve, at least in principle, any differential equation
you're given!
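To make point 1 concrete, here is a minimal Python sketch (my own illustration; the names taylor_coeffs and x_approx are made up for it). Plugging the series sum a_n*t^n into x'' + x = 0 gives the recurrence (n+2)(n+1)*a[n+2] + a[n] = 0, and the code builds the coefficients from that and compares the truncated sum with cos(t):

import math

# Coefficients a_n of x(t) = sum a_n t^n solving x'' + x = 0 with
# x(0) = 1, x'(0) = 0.  Substituting the series into the ODE gives
# the recurrence (n+2)(n+1)*a[n+2] + a[n] = 0.
def taylor_coeffs(n_terms):
    a = [0.0] * n_terms
    a[0], a[1] = 1.0, 0.0               # the initial conditions
    for n in range(n_terms - 2):
        a[n + 2] = -a[n] / ((n + 2) * (n + 1))
    return a

def x_approx(t, n_terms=20):
    return sum(c * t**k for k, c in enumerate(taylor_coeffs(n_terms)))

print(x_approx(1.0), math.cos(1.0))     # both ~0.5403023058681398

With 20 terms the truncated series matches cos(1.0) essentially to machine precision, which is the "trivial" solution procedure of point 1 in action.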
The trouble, however, is that convergence of a Taylor series can be very slow, and the assumption that a full, infinite Taylor series solution exists can fail (not every solution is analytic).
Therefore:
2. The power of truncated Taylor approximations is greatly enhanced if you can somehow "bound" the error, within some region, between the value of your true function at a point and the value predicted by the truncated Taylor series.
Since it often happens that you are able to find such bounds (even if you don't know what your original function is!), Taylor series approximations can be put to good use.
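For cos(t), every derivative is again plus/minus sin or cos, so all derivatives are bounded by 1 in absolute value, and the Lagrange form of the remainder then gives |R_N(t)| <= |t|^(N+1)/(N+1)!. Here is a small Python check of that bound (again just my own sketch; cos_taylor is a made-up name):

import math

# Degree-N Taylor polynomial of cos about 0, plus the Lagrange bound
# |R_N(t)| <= |t|^(N+1) / (N+1)!, which uses only the fact that all
# derivatives of cos are bounded by 1 in absolute value.
def cos_taylor(t, N):
    return sum((-1)**n * t**(2*n) / math.factorial(2*n)
               for n in range(N // 2 + 1))

t, N = 0.5, 6
error = abs(cos_taylor(t, N) - math.cos(t))
bound = abs(t)**(N + 1) / math.factorial(N + 1)
print(error, bound, error <= bound)   # error ~1e-7, bound ~1.6e-6, True

That is exactly the point of part 2: the bound was computed without evaluating the true function at all, only from a general bound on its derivatives.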
3. A good example (in conjunction with asymptotic analysis):
Consider the equation for the motion of a pendulum:
When you derive it, you get something like: A*sin(v) + v'' = 0,
where v is an angle to the vertical, and A is some physical parameter.
Now what do we do?
We simply make use of the first-order Taylor approximation sin(v) ~ v for small v, and get:
A*v + v'' = 0, which has simple harmonic motion as its solutions v(t).
The original differential equation is a lot harder to solve.
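To see how good the small-angle replacement actually is, here is a quick numerical comparison (my own sketch; A, the initial angle v0, the time step, and the name simulate are made-up illustrative choices). It integrates both the full equation v'' = -A*sin(v) and the linearized v'' = -A*v with the same simple fixed-step scheme:

import math

A, v0, dt, steps = 9.81, 0.1, 0.001, 2000   # illustrative values

# Semi-implicit Euler for v'' = rhs(v), starting at angle v0, at rest.
def simulate(rhs):
    v, w = v0, 0.0
    for _ in range(steps):
        w += rhs(v) * dt
        v += w * dt
    return v

nonlinear = simulate(lambda v: -A * math.sin(v))      # full pendulum
linear    = simulate(lambda v: -A * v)                # small-angle version
shm       = v0 * math.cos(math.sqrt(A) * steps * dt)  # exact SHM solution

print(nonlinear, linear, shm)   # close for small v0

For v0 = 0.1 rad the three values agree closely; crank v0 up toward 1 rad and the sin(v) ~ v replacement visibly breaks down, which is the usual caveat with this kind of asymptotic simplification.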