Understanding Function Expansion: A Non-Mathematician's Question

Niles

Homework Statement


Hi

I have sometimes seen a function ##f## being written as
$$f \approx f_0 + \varepsilon f_1(x) + \varepsilon^2 f_2(x) + \ldots$$
where ##f_0## is an equilibrium value and all higher-order terms are non-equilibrium values (not derivatives!). The assumption has always been that ##\varepsilon \ll 1##.

My question is: Mathematically, I guess we are expanding the function ##f## around its equilibrium value. But when I look at the expression for a Taylor expansion, I can't make this fit with anything.

Are we non-mathematicians even allowed to write the function like this?
 
Niles said:

Homework Statement


Hi

I have sometimes seen a function ##f## being written as
$$f \approx f_0 + \varepsilon f_1(x) + \varepsilon^2 f_2(x) + \ldots$$
where ##f_0## is an equilibrium value and all higher-order terms are non-equilibrium values (not derivatives!). The assumption has always been that ##\varepsilon \ll 1##.

My question is: Mathematically, I guess we are expanding the function ##f## around its equilibrium value. But when I look at the expression for a Taylor expansion, I can't make this fit with anything.

Are we non-mathematicians even allowed to write the function like this?
Thank you for using the (much prettier looking) ##\varepsilon## for your epsilons. :-p

Well, we math-folk sometimes have a dislike for approximations, and what you have written doesn't specify a domain or codomain (so strictly speaking we wouldn't call it a function), but I see no other reason why you couldn't write it that way.

If your ##\{f_i\}## follow some sort of pattern, we can probably evaluate the limit $$f(x)=\lim_{n\rightarrow+\infty}\sum_{0\leq i\leq n}\varepsilon^if_i(x).$$
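For instance, here is a minimal sympy sketch of what evaluating that limit could look like, using a made-up pattern ##f_i(x) = x^i## (purely for illustration, not something from your problem):

Python:
import sympy as sp

x, eps = sp.symbols('x epsilon')
i, n = sp.symbols('i n', integer=True, nonnegative=True)

# Made-up pattern f_i(x) = x**i, so eps**i * f_i(x) = (eps*x)**i.
term = (eps * x)**i

partial_sum = sp.Sum(term, (i, 0, n)).doit()      # finite truncation
full_sum = sp.Sum(term, (i, 0, sp.oo)).doit()     # the limit n -> oo

print(sp.simplify(partial_sum))   # geometric partial sum in eps*x
print(full_sum)                   # 1/(1 - eps*x) when |eps*x| < 1

Whether such a closed form exists depends entirely on the pattern, of course.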

Can you give us more information? I'm interested in understanding what you're doing here.
 
Niles said:

Homework Statement


Hi

I have sometimes seen a function ##f## being written as
$$f \approx f_0 + \varepsilon f_1(x) + \varepsilon^2 f_2(x) + \ldots$$
where ##f_0## is an equilibrium value and all higher-order terms are non-equilibrium values (not derivatives!). The assumption has always been that ##\varepsilon \ll 1##.

My question is: Mathematically, I guess we are expanding the function ##f## around its equilibrium value. But when I look at the expression for a Taylor expansion, I can't make this fit with anything.

That looks like an asymptotic expansion, not a Taylor series. In such expansions we usually don't care whether
$$\lim_{N \to \infty} \sum_{n=0}^N \varepsilon^n f_n(x)$$
even exists; what we're interested in is whether ##\sum_{n=0}^N \varepsilon^n f_n(x)## for some finite ##N## is a sufficiently good approximation to some ##F(x,\varepsilon)## when ##|\varepsilon| \ll 1##.

Generally the idea is to exploit a small parameter to turn a problem we can't solve analytically for ##F(x,\varepsilon)## into a sequence of problems we can solve for the ##f_n##.
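As a toy illustration (my own example, not your actual problem): for the root of ##x^2 + \varepsilon x = 1## near ##x = 1##, collecting powers of ##\varepsilon## gives ##x \approx 1 - \tfrac{\varepsilon}{2} + \tfrac{\varepsilon^2}{8}##, and the truncated sum is already an excellent approximation for small ##\varepsilon## even though we never ask about convergence of the full series:

Python:
import math

def exact_root(eps):
    # Positive root of x**2 + eps*x - 1 = 0.
    return (-eps + math.sqrt(eps**2 + 4.0)) / 2.0

def truncated_series(eps, order):
    # Coefficients x0, x1, x2 found by collecting powers of eps.
    coeffs = [1.0, -0.5, 0.125]
    return sum(c * eps**k for k, c in enumerate(coeffs[:order + 1]))

for eps in (0.1, 0.01):
    exact = exact_root(eps)
    for order in (1, 2):
        err = abs(exact - truncated_series(eps, order))
        print(f"eps={eps:5.2f}  N={order}  error={err:.1e}")

The error at fixed ##N## is ##O(\varepsilon^{N+1})##, which is the sense of "sufficiently good approximation" above.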
 
Thanks for the help so far, both of you.

pasmith said:
Generally the idea is to exploit a small parameter to turn a problem we can't solve analytically for ##F(x,\varepsilon)## into a sequence of problems we can solve for the ##f_n##.

That is exactly how it is used in my case (a Chapman-Enskog expansion). But doesn't this require that the various terms ##f_n## are somewhat independent, so we can solve for each order independently?
 
Suppose you have a function ##f(\varepsilon,x)## of the two variables ##\varepsilon## and ##x##, and you expand it in a Taylor series in ##\varepsilon## about ##\varepsilon=0##. Then you get:

$$f(\varepsilon,x)=f(0,x)+\varepsilon\left(\frac{\partial f(\varepsilon,x)}{\partial \varepsilon}\right)_{\varepsilon=0}+\frac{\varepsilon^2}{2}\left(\frac{\partial^2 f(\varepsilon,x)}{\partial \varepsilon^2}\right)_{\varepsilon=0}+\ldots$$
Then you can identify the functions in your summation with the partial derivative terms in this series.
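Here is a quick sympy check of that identification on an arbitrary made-up ##f(\varepsilon,x)##: the coefficient of ##\varepsilon^n## in the expansion equals ##\frac{1}{n!}\left(\frac{\partial^n f}{\partial \varepsilon^n}\right)_{\varepsilon=0}##.

Python:
import sympy as sp

x, eps = sp.symbols('x epsilon')
f = sp.exp(-x) / (1 + eps * sp.sin(x))        # arbitrary illustrative f(eps, x)

expansion = sp.series(f, eps, 0, 3).removeO()  # f0 + eps*f1 + eps**2*f2

for n in range(3):
    f_n = sp.diff(f, eps, n).subs(eps, 0) / sp.factorial(n)
    coeff = expansion.coeff(eps, n)
    print(n, sp.simplify(f_n - coeff))         # prints 0 at every order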
 
Niles said:
Thanks for the help so far, both of you.

That is exactly how it is used in my case (a Chapman-Enskog expansion). But doesn't this require that the various terms ##f_n## are somewhat independent, so we can solve for each order independently?

The problem for ##f_n## should depend only on ##f_0, \dots, f_{n-1}##, so that one can work forward from ##f_0##.
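As a sketch of that forward structure, using the same toy equation ##f^2 + \varepsilon f = 1## as in the earlier sketch (again my own example, not the Chapman-Enskog problem): substituting ##f = f_0 + \varepsilon f_1 + \varepsilon^2 f_2## and collecting powers of ##\varepsilon##, each order is linear in the newest unknown and involves only the lower-order terms already found.

Python:
import sympy as sp

eps = sp.symbols('epsilon')
f0, f1, f2 = sp.symbols('f0 f1 f2')

f = f0 + eps * f1 + eps**2 * f2
equation = f**2 + eps * f - 1                 # toy model: f**2 + eps*f = 1

solution = {f0: sp.Integer(1)}                # equilibrium branch of f0**2 = 1
for n, unknown in ((1, f1), (2, f2)):
    # The O(eps**n) equation involves only the already-known lower orders
    # and is linear in the new unknown f_n.
    order_n = equation.subs(solution).expand().coeff(eps, n)
    solution[unknown] = sp.solve(order_n, unknown)[0]

print(solution)   # {f0: 1, f1: -1/2, f2: 1/8}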
 