Explanation of Taylor expansions needed

Stalafin
I have a question about Taylor expanding functions. In both of the cases below I can't get my head around why things are the way they are; I just don't see how one would perform Taylor expansions like that.

The first:
The starting point of a symmetry operation is the following expansion:
f(r+a) = f(r) + \left(a\cdot\frac{\partial}{\partial r}\right) f(r) + \frac{1}{2} \left(a\cdot\frac{\partial}{\partial r}\right)^2 f(r) + \frac{1}{3!} \left(a\cdot\frac{\partial}{\partial r}\right)^3 f(r) + \ldots
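Here \left(a\cdot\frac{\partial}{\partial r}\right) is, as I read it, the directional-derivative operator along a; written out in components (taking r = (x, y, z) and a = (a_x, a_y, a_z) for concreteness):

\left(a\cdot\frac{\partial}{\partial r}\right) f(r) = a_x\frac{\partial f}{\partial x} + a_y\frac{\partial f}{\partial y} + a_z\frac{\partial f}{\partial z},
\qquad
\left(a\cdot\frac{\partial}{\partial r}\right)^2 f(r) = \sum_{i,j} a_i a_j \frac{\partial^2 f}{\partial r_i\,\partial r_j}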

The second:
This one comes from Landau's book, first chapter, fourth section:
Given L = L(v^{\prime 2}), where \boldsymbol{v}^\prime = \boldsymbol{v} + \boldsymbol{\varepsilon} and \boldsymbol{v}^2 = v^2, so that L(v^{\prime 2}) = L(v^2 + 2\boldsymbol{v}\cdot\boldsymbol{\varepsilon} + \boldsymbol{\varepsilon}^2): why does expanding in powers of \boldsymbol{\varepsilon} and neglecting terms above first order lead to
L(v^{\prime 2}) = L(v^2) + \frac{\partial L}{\partial v^2}\, 2\boldsymbol{v}\cdot \boldsymbol{\varepsilon}\;?


Some insight into why this is would be greatly appreciated. :-)
 
Okay, the first and the second one are equivalent if I set r=v^2 and a=2\boldsymbol{v}\cdot\boldsymbol{\varepsilon} and disregard terms of order \boldsymbol{\varepsilon}^2 (spelled out below). I still do not quite see how to get the first equation done properly.
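Keeping only terms up to first order in \boldsymbol{\varepsilon}, that identification reads:

L(v^{\prime 2}) = L\left(v^2 + 2\boldsymbol{v}\cdot\boldsymbol{\varepsilon} + \boldsymbol{\varepsilon}^2\right) = L(v^2) + \left(2\boldsymbol{v}\cdot\boldsymbol{\varepsilon} + \boldsymbol{\varepsilon}^2\right)\frac{\partial L}{\partial v^2} + \ldots \approx L(v^2) + \frac{\partial L}{\partial v^2}\, 2\boldsymbol{v}\cdot\boldsymbol{\varepsilon}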

For a normal Taylor expansion (up to first order), we have:
f(x) = f(c) + (x-c) \frac{\operatorname{d}}{\operatorname{d}x}f(c) + \ldots

If I now identify x=r+a and c=r, I recover most of the terms I need, with the exception of the derivative, which comes out as \frac{\operatorname{d}}{\operatorname{d}(r+a)}f(r).

Suggestions?
 
Hi Stalafin! :smile:

Stalafin said:
The first: The starting point of a symmetry operation is the following expansion:
f(r+a) = f(r) + \left(a\cdot\frac{\partial}{\partial r}\right) f(r) + \frac{1}{2} \left(a\cdot\frac{\partial}{\partial r}\right)^2 f(r) + \frac{1}{3!} \left(a\cdot\frac{\partial}{\partial r}\right)^3 f(r) + \ldots

This is an abuse of notation, and one that I don't like at all. Basically, this equation has two kinds of r's: an r that represents a value and an r that represents a variable.

Rewriting your equation gives us
f(r+a) = f(r) + a\cdot\frac{\partial}{\partial r}f(r) + \frac{1}{2} a^2\cdot\frac{\partial^2}{\partial r^2}f(r) + \frac{1}{3!} a^3\cdot\frac{\partial^3}{\partial r^3} f(r) + \ldots

Because the notation \frac{\partial}{\partial r} sucks here (r is used both as a value and as a variable), we will replace it with a better one:

f(r+a) = f(r) + af^\prime(r) + \frac{1}{2} a^2f^{\prime\prime}(r) + \frac{1}{3!} a^3f^{\prime\prime\prime}(r) + \ldots

But this is exactly the Taylor series! So the original formula was correct (but abusive).
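As a quick numerical sanity check of this one-variable formula, here is a minimal Python sketch (the test function f(r) = sin(r) and the values of r and a are just arbitrary choices):

import math

def taylor3(f, d1, d2, d3, r, a):
    # Truncation of f(r + a) after the third-order term, as in the series above
    return f(r) + a * d1(r) + a**2 / 2.0 * d2(r) + a**3 / 6.0 * d3(r)

# Arbitrary test case: f(r) = sin(r), expanded about r = 1.0 with step a = 0.1
f, d1, d2, d3 = math.sin, math.cos, lambda r: -math.sin(r), lambda r: -math.cos(r)
r, a = 1.0, 0.1

exact = f(r + a)
approx = taylor3(f, d1, d2, d3, r, a)
print(exact, approx, abs(exact - approx))  # mismatch is of order a^4 (a few 1e-6 here)

Shrinking a by a factor of 10 should shrink the mismatch by roughly 10^4, since the first neglected term is of order a^4.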
 