MHB The Strange Behaviour of Numbers Close to Unity

AI Thread Summary
The discussion centers on the mathematical behavior of functions near unity, particularly in the context of thermal expansion coefficients in materials. The Taylor series is highlighted as a method for approximating functions, demonstrating how small changes in variables lead to linear approximations. Key examples include the approximations for functions like √(1+x) and (1-x)³, emphasizing the significance of first-order approximations. The conversation also touches on the importance of significant figures in scientific measurements, indicating that higher-order terms become negligible when measurements are limited to a certain precision. Overall, the thread explores the practical applications of these mathematical concepts in scientific contexts.
Perplexed
I have been looking at material properties such as the thermal expansion of metals, which usually involves very small coefficients. The general equation of linear thermal expansion is usually written
[math]L_\theta = L_0 ( 1 + \alpha \theta)[/math]
where L is the length and theta is the temperature change. The coefficient alpha is usually pretty small, about 11×10⁻⁶ per °C for steel, so one ends up with a lot of numbers like 1.000011.
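To see the size of the effect, here is a quick numerical sketch (the steel value above is from the post; the one-metre bar and 50 °C rise are just illustrative numbers):

```python
# Thermal expansion: L_theta = L_0 * (1 + alpha * theta)
alpha = 11e-6      # linear expansion coefficient of steel, per degree C
L0 = 1.0           # initial length in metres (illustrative)
theta = 50.0       # temperature rise in degrees C (illustrative)

L = L0 * (1 + alpha * theta)
print(L)           # a factor very close to unity, about 1.00055
```

Even a sizeable temperature change multiplies the length by a number barely distinguishable from 1.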

This is where I seem to have entered a strange world, where
[math]\sqrt{1 + x} \rightarrow 1 + x/2[/math]
[math]\dfrac{1}{\sqrt{1 - x}} \rightarrow 1 + x/2[/math]
[math](1 - x)^3 \rightarrow 1 - 3x[/math]
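A quick numerical check of all three (a sketch; the value of $x$ is just an illustrative small number, of the same order as $\alpha\theta$ for steel):

```python
import math

x = 1e-5  # a small perturbation, comparable to alpha * theta above

# sqrt(1 + x) versus its first-order approximation 1 + x/2
print(math.sqrt(1 + x), 1 + x / 2)

# 1/sqrt(1 - x) versus 1 + x/2 (same first-order term)
print(1 / math.sqrt(1 - x), 1 + x / 2)

# (1 - x)**3 versus 1 - 3x
print((1 - x) ** 3, 1 - 3 * x)
```

In each case the two printed values agree to about ten decimal places; the mismatch is of order $x^2$.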

Is there a name for this area of maths, and somewhere I can look up more about it?

Thanks for any help.

Perplexed
 
The Taylor series is one topic where (infinite) polynomials are used to approximate functions. For example,
\[
(1+x)^{1/2}=1+\frac{x}{2}+R_1(x)
\]
where $R_1(x)$ is called the remainder and is infinitely small compared to $x$ when $x$ is small. More precisely,
\[
(1+x)^{\alpha }=1+\alpha x+{\frac {\alpha (\alpha -1)}{2!}}x^{2}+\cdots+
\frac{\alpha\cdot\ldots\cdot(\alpha-n+1)}{n!}x^n+R_n(x)
\]
where $R_n(x)$ is infinitely small compared to $x^n$ when $x$ tends to $0$.
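As a sketch of how the partial sums behave, the generalized binomial coefficients can be built iteratively (the function name and the test values are just for illustration):

```python
def binomial_partial_sum(alpha, x, n):
    """Partial sum 1 + alpha*x + ... up to the x**n term of (1 + x)**alpha."""
    total, coeff = 1.0, 1.0
    for k in range(1, n + 1):
        # generalized binomial coefficient alpha*(alpha-1)*...*(alpha-k+1)/k!
        coeff *= (alpha - (k - 1)) / k
        total += coeff * x ** k
    return total

x = 0.01
exact = (1 + x) ** 0.5
for n in (1, 2, 3):
    # the error shrinks roughly like x**(n+1), as the remainder estimate says
    print(n, abs(exact - binomial_partial_sum(0.5, x, n)))
```

Each extra term shrinks the error by roughly a factor of $x$, matching the statement that $R_n(x)$ is small compared with $x^n$.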
 
"Linear approximation". Any function $f$ having a derivative at $x = a$ can be approximated by the tangent line $y = f'(a)(x - a) + f(a)$. The error is proportional to $(x - a)^2$ and to $f''(a)$.

For example, if $f(x) = \sqrt{1 + x} = (1 + x)^{1/2}$ then $f'(x) = (1/2)(1 + x)^{-1/2}$, so with $x = 0$, $f(0) = \sqrt{1 + 0} = 1$ and $f'(0) = (1/2)/\sqrt{1 + 0} = 1/2$. So $y = f(x)$ is approximated, around $x = 0$, by $y = (1/2)x + 1$, or $1 + x/2$.

If $f(x) = \frac{1}{\sqrt{1 + x}} = (1 + x)^{-1/2}$ then $f'(x) = -(1/2)(1 + x)^{-3/2}$, so $f(0) = \frac{1}{\sqrt{1 + 0}} = 1$ and $f'(0) = -(1/2)(1 + 0)^{-3/2} = -1/2$. So $y = f(x)$ is approximated, around $x = 0$, by $y = -(1/2)x + 1$, or $1 - x/2$. Notice the negative sign: what you have is NOT correct.

If $f(x) = (1 - x)^3$ then $f'(x) = 3(1 - x)^2(-1) = -3(1 - x)^2$. $f(0) = (1 - 0)^3 = 1$ and $f'(0) = -3(1 - 0)^2 = -3$. So $y = f(x)$ is approximated by $-3x + 1$, or $1 - 3x$.

You could also do the last one by actually multiplying it out: $(1 - x)^3 = 1 - 3x + 3x^2 - x^3$. If $x$ is small enough (i.e. close enough to $0$) that higher powers of $x$ can be ignored in the approximation, $y = 1 - 3x$.

Again, these are all first-order, or linear, approximations to the functions, not exact values.

(You can get the Taylor polynomial and series that Evgeny.Makarov refers to by extending these same ideas to higher powers.)
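The tangent-line recipe can be sketched generically; in this sketch the derivative $f'(a)$ is estimated with a central difference rather than computed symbolically (the helper name and step size are illustrative choices, not part of the posts above):

```python
def linear_approx(f, a, x, h=1e-6):
    """Approximate f(x) by the tangent line at a: f(a) + f'(a)*(x - a).
    f'(a) is estimated with a central difference, accurate to about h**2."""
    fprime = (f(a + h) - f(a - h)) / (2 * h)
    return f(a) + fprime * (x - a)

x = 0.001
print(linear_approx(lambda t: (1 + t) ** 0.5, 0.0, x), 1 + x / 2)  # both near 1.0005
print(linear_approx(lambda t: (1 - t) ** 3, 0.0, x), 1 - 3 * x)    # both near 0.997
```

The same helper reproduces each of the first-order formulas discussed above, since it is just $f(a) + f'(a)(x - a)$ in code.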
 
HallsofIvy said:
If $f(x) = \frac{1}{\sqrt{1 + x}} = (1 + x)^{-1/2}$ then $f'(x) = -(1/2)(1 + x)^{-3/2}$

so that $f(0) = \frac{1}{\sqrt{1 + 0}} = 1$ and then $f'(0) = -(1/2)(1 + 0)^{-3/2} = -1/2$.

So $y = f(x)$ is approximated, around $x = 0$, by $y = -(1/2)x + 1$ or $1 - x/2$. Notice the negative sign: what you have is NOT correct.
Thank you for your reply, it is very helpful.

Just to clear things up so that someone else looking at this doesn't get confused: in my second approximation I had $f(x) = \frac{1}{\sqrt{1 - x}}$ rather than the $f(x) = \frac{1}{\sqrt{1 + x}}$ that you started with — notice the "-" rather than "+" inside the square root. It was the simple change of sign in passing to the reciprocal that first intrigued me about this one, and your explanation makes the reason why it works clear.

Less Perplexed now
 
Allow me to make another observation regarding this:

Scientific measurements are often given in "significant figures", the reasoning being, we can only take measurements up to a certain degree of accuracy.

So, suppose our input data can only give 6 decimal places.

If we expect we can model a function (and for many functions this is true) by:

$f(x) = a_0 + a_1x + a_2x^2 +\cdots$

And that the coefficients $a_k$ either stay about the same size or, even better, decrease, then if we measure $x$ to 6 decimal places, the correction term involving $x^2$ is around 12 decimal places — in other words, much, much smaller than our standard of accuracy allows.
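A concrete illustration of this, rounding to 6 decimal places (the values are chosen purely for illustration):

```python
x = 1e-6
first_order = 1 + x            # 1.000001
second_order = 1 + x + x ** 2  # the x**2 correction is around 1e-12

# Rounded to 6 decimal places, the two are indistinguishable:
print(round(first_order, 6) == round(second_order, 6))  # True
```

At 6-figure precision the quadratic correction simply cannot be seen, so the linear approximation is exact for all practical purposes.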

For certain classes of "well-behaved" functions, there are means to estimate (or "bound") the size of the error, which in turn lets us know "how many terms to go out".

For small enough $x$, this kind of reasoning lets us use the approximation:

$\sin(x) \approx x$

often used in simplifying the wave equations that govern oscillators; and if more accuracy is needed, the approximation:

$\sin(x) \approx x - \dfrac{x^3}{6}$ is pretty darn good.
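Checking both small-angle approximations numerically (a sketch; the sample angles are arbitrary):

```python
import math

for x in (0.1, 0.01):
    exact = math.sin(x)
    # error of sin(x) ~ x, then error of sin(x) ~ x - x**3/6
    print(x, abs(exact - x), abs(exact - (x - x ** 3 / 6)))
```

For $x = 0.1$ the linear approximation is off by about $2\times 10^{-4}$, while including the cubic term brings the error down near $10^{-7}$ — the next neglected term is $x^5/120$.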
 