Laplace transform of a Taylor series expansion

AI Thread Summary
The discussion focuses on the application of Taylor series expansions to derive early-time and late-time behaviors of the creep compliance function J(s) for human red blood cells. The expressions obtained for J(t) at early times include terms involving t raised to powers related to the parameter a, while the late-time behavior shows a different dependence on t. Participants highlight the lack of a closed-form inverse Laplace transform for J(s) and the challenge in expanding J(s) directly using traditional methods. The use of geometric series expansions is identified as a key technique that simplifies the derivation of these expressions. The conversation concludes with an acknowledgment of the importance of this method, which was not explicitly mentioned in the original paper.
Mapes
I'm reading a paper on tissue cell rheology ("Viscoelasticity of the human red blood cell") that models the creep compliance of the cell (in the s-domain) as

J(s) = \frac{1}{As+Bs^{a+1}}

where 0\leq a\leq 1. Since there's no closed-form inverse Laplace transform for this expression, they explore early-time (t\rightarrow 0) and late-time (t\rightarrow \infty) behavior by using a Taylor series expansion around s\rightarrow \infty and s\rightarrow 0, respectively. This is said to yield

J(t)\approx \frac{t^a}{B\Gamma(a+1)}-\frac{At^{2a}}{B^2\Gamma(2a+1)}+\frac{A^2t^{3a}}{B^3\Gamma(3a+1)}

for the early-time behavior and

J(t)\approx \frac{1}{A}-\frac{Bt^{-a}}{A^2\Gamma(1-a)}

for the late-time behavior. However, I just can't see how these expressions arise. I know that the Laplace transform of t^a is

L[t^a]=\frac{\Gamma(a+1)}{s^{a+1}}

and so presumably

L\left[\frac{t^a}{\Gamma(a+1)}\right]=\frac{1}{s^{a+1}}\mathrm{,}\quad L\left[\frac{t^{-a}}{\Gamma(1-a)}\right]=\frac{1}{s^{1-a}}

but I can't figure out where these terms would appear in a Taylor series expansion. When I try to expand J(s) in the manner of

f(x+\Delta x)\approx f(x) + f^\prime(x)\Delta x +\frac{1}{2}f^{\prime\prime}(x)(\Delta x)^2

I get zero or infinity for each term. Unfortunately, Mathematica is no help in investigating an expansion around s\rightarrow\infty or s\rightarrow 0; it just returns the original expression. Perhaps I'm making a silly error, or perhaps the paper skipped an important enabling or simplifying step. Any thoughts?
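(For what it's worth, the transform pair quoted above is easy to spot-check numerically. This is just my own sanity-check sketch, assuming SciPy is available; the values a = 0.5 and s = 2 are arbitrary illustrative choices, and the code simply evaluates the defining integral of the Laplace transform.)

```python
from math import exp, gamma

from scipy.integrate import quad

# Sanity check of L[t^a] = Gamma(a+1)/s^(a+1) with illustrative values
# (my choice, not from the paper): a = 0.5, s = 2.
a, s = 0.5, 2.0

# Evaluate the defining integral of the Laplace transform directly.
numeric, _ = quad(lambda t: t**a * exp(-s * t), 0, float("inf"))
closed_form = gamma(a + 1) / s**(a + 1)

print(abs(numeric - closed_form))  # agrees to quadrature tolerance
```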
 
Both series expansions below are geometric series: \frac{1}{1-x}=\sum_{k=0}^{\infty}x^k\mbox{ for }|x|<1.
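(A quick SymPy check of that identity, my own sketch: here x stands in for the expansion variable, i.e. {\scriptstyle \frac{B}{A}}s^{a} or {\scriptstyle \frac{A}{B}}s^{-a} below, and the series is alternating because the expansions use -x.)

```python
import sympy as sp

# x plays the role of (B/A)*s**a (or (A/B)*s**(-a)) in the expansions below.
x = sp.symbols('x')

# Expand 1/(1 + x) about x = 0 and drop the order term.
geo = sp.series(1/(1 + x), x, 0, 4).removeO()

print(geo)  # equals 1 - x + x**2 - x**3, the alternating geometric series
```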

For \left| {\scriptstyle \frac{B}{A}}s^{a} \right| < 1, we have

J(s) = \frac{1}{As+Bs^{a+1}} = \frac{1}{As}\cdot\frac{1}{1+{\scriptstyle \frac{B}{A}}s^{a}} = \frac{1}{As}\sum_{k=0}^{\infty}\left(-1\right)^k \left(\frac{B}{A}\right)^k s^{ak}=\frac{1}{A}\sum_{k=0}^{\infty}\left(-1\right)^k \left(\frac{B}{A}\right)^k s^{ak-1}

J(s) = \frac{1}{As}-\frac{B}{A^2s^{1-a}}+\frac{B^2}{A^3s^{1-2a}}-\cdots

hence

J(t) = \frac{1}{A}u(t)-\frac{Bt^{-a}}{A^2\Gamma (1-a)}+\frac{B^2t^{-2a}}{A^3\Gamma (1-2a)}-\cdots

where u(t) is the unit step function...
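(This late-time series can also be checked against a numerical inverse Laplace transform. The sketch below is mine, assumes mpmath is available, and uses the illustrative values A = B = 1, a = 0.5, for which the t^{-2a} term drops out since \Gamma(1-2a)=\Gamma(0) diverges.)

```python
import mpmath as mp

# Illustrative parameters (not from the paper). Note Gamma(1-2a) = Gamma(0)
# diverges for a = 0.5, so the t**(-2a) term of the series vanishes.
A, B, a = 1.0, 1.0, 0.5
J = lambda s: 1 / (A * s + B * s**(a + 1))

t = 50.0  # a "late" time
numeric = mp.invertlaplace(J, t, method='talbot')  # numerical inversion
two_terms = 1/A - B * t**(-a) / (A**2 * mp.gamma(1 - a))

print(abs(numeric - two_terms))  # small: next nonzero term is O(t**(-3*a))
```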

And for \left| {\scriptstyle \frac{A}{B}} s^{-a} \right| < 1, we have

J(s) = \frac{1}{As+Bs^{a+1}} = \frac{1}{Bs^{a+1}}\cdot\frac{1}{1+{\scriptstyle \frac{A}{B}}s^{-a}} =\frac{1}{Bs^{a+1}}\sum_{k=0}^{\infty}\left(-1\right)^k \left(\frac{A}{B}\right)^k s^{-ak}=\frac{1}{B}\sum_{k=0}^{\infty}\left(-1\right)^k \left(\frac{A}{B}\right)^k s^{-ak-a-1}


J(s) = \frac{1}{Bs^{a+1}}-\frac{A}{B^2s^{2a+1}}+\frac{A^2}{B^3s^{3a+1}}-\cdots​

hence

J(t) = \frac{t^{a}}{B\Gamma (a+1)}-\frac{At^{2a}}{B^2\Gamma (2a+1)}+\frac{A^2t^{3a}}{B^3\Gamma (3a+1)}-\cdots​
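(The early-time series can be spot-checked the same way. Again a sketch of my own, assuming mpmath and the illustrative values A = B = 1, a = 0.5: compare the three-term series against a numerical Talbot inversion of J(s) at small t.)

```python
import mpmath as mp

# Illustrative parameters (my choice, not the paper's).
A, B, a = 1.0, 1.0, 0.5
J = lambda s: 1 / (A * s + B * s**(a + 1))

t = 0.01  # an "early" time
numeric = mp.invertlaplace(J, t, method='talbot')  # numerical inversion
three_terms = (t**a / (B * mp.gamma(a + 1))
               - A * t**(2*a) / (B**2 * mp.gamma(2*a + 1))
               + A**2 * t**(3*a) / (B**3 * mp.gamma(3*a + 1)))

print(abs(numeric - three_terms))  # truncation error is O(t**(4*a))
```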
 
Thank you benorin, that makes things perfectly clear. I've only seen the geometric series expansion once or twice before in my field and wouldn't have thought to use it. It would have been nice if the paper had mentioned that they used this technique.

Thanks again!
 