Proving a basis is a basis for a homogeneous linear differential equation with constant (complex) coefficients

psie
TL;DR Summary
I'm working on an exercise to prove a certain theorem in my linear algebra book. I'm stuck on a computation needed to verify linear independence.
I struggle with a certain computation. Consider a homogeneous linear differential equation with constant (complex) coefficients. Such an equation is associated with a polynomial ##p(t)##, which we can write $$p(t)=(t-c_1)^{n_1} (t-c_2)^{n_2}\cdots (t-c_k)^{n_k},$$where ##n_1,n_2,\ldots,n_k## are positive integers and ##c_1,c_2,\ldots,c_k## are distinct complex numbers. A basis for the solution space is $$S=\{e^{c_1t},te^{c_1t},\ldots,t^{n_1-1}e^{c_1t},\ldots,e^{c_kt},te^{c_kt},\ldots,t^{n_k-1}e^{c_kt}\}.$$ I'm in the process of proving that ##S## is indeed a basis for the null space of ##p(\mathsf D)##, where ##\mathsf D## is the differential operator. I want to proceed by induction on ##k## to prove linear independence. I can prove the base case but am stuck on the induction step. Suppose ##S## is linearly independent for all ##k<m## and suppose $$\sum_{i=1}^m\sum_{j=0}^{n_i-1}b_{ij}t^je^{c_it}=0.\tag1$$We want to show ##b_{ij}=0## for all ##i,j##. Now I've received a hint to apply ##(\mathsf D-c_m\mathsf I)^{n_m}## to ##(1)##, i.e. $$(\mathsf D-c_m\mathsf I)^{n_m}\left( \sum_{i=1}^m\sum_{j=0}^{n_i-1}b_{ij}t^je^{c_it}\right)=0.$$Any tips and tricks on how to proceed? I struggle with how to simplify the above.
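A sketch of where the hint leads, assuming I'm reading it correctly (the single-factor computation below is standard; the rest is how I'd assemble the argument): by the product rule, $$(\mathsf D-c_m\mathsf I)\left(t^je^{c_it}\right)=jt^{j-1}e^{c_it}+(c_i-c_m)t^je^{c_it}.$$ For ##i=m## the second term vanishes, so each application of ##\mathsf D-c_m\mathsf I## lowers the power of ##t## by one, and ##n_m## applications annihilate every term with ##i=m##. For ##i<m## we have ##c_i-c_m\ne0##, so the degree in ##t## is preserved, and the image of the whole sum is again a linear combination of the functions ##t^je^{c_it}## with ##i<m## only. The induction hypothesis then forces those transformed coefficients to vanish; since the coefficient of ##t^{n_i-1}e^{c_it}## in the image is ##b_{i,n_i-1}(c_i-c_m)^{n_m}\ne0## times the original, working down the degrees gives ##b_{ij}=0## for all ##i<m##. The original relation then reduces to ##\sum_j b_{mj}t^je^{c_mt}=0##, which is the base case.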
 
This might be a silly question, but what you have is a sum of ##\sum n_i## terms, all of the form ##t^ke^{ct}.## Every combination ##b_1t^ke^{ct}+b_2t^me^{dt}=0## yields ##b_1=b_2=0## by substituting ##t\in \{0,1\},## except in the case ##c=d## and ##k=m.##

The induction step proceeds accordingly.
 
The point here is that ##(\mathsf D - c_m\mathsf I)^{n_m}## is the zero map on the subspace ##E_m## spanned by ##\{ t^ke^{c_mt} : k = 0, 1, \dots, n_m - 1 \}.## This is so because $$(\mathsf D - c_m\mathsf I)(t^ke^{c_mt}) = kt^{k-1}e^{c_mt},$$ and thus ##\mathsf D - c_m\mathsf I## maps everything in ##E_m## to zero in at most ##n_m## iterations.
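As an aside (my own check, not from the thread): both behaviours of ##\mathsf D - c\mathsf I## can be verified symbolically with sympy, namely that ##(\mathsf D-c\mathsf I)^{n}## annihilates ##t^ke^{ct}## for ##k<n##, while a term with a different exponent ##d\ne c## survives with its polynomial degree intact. The function name `apply_op` and the sample values below are just for illustration.

```python
# Symbolic check of the nilpotency argument, using sympy.
import sympy as sp

t = sp.symbols('t')

def apply_op(f, c, n):
    """Apply (D - c*I) to the expression f, n times."""
    for _ in range(n):
        f = sp.expand(sp.diff(f, t) - c * f)
    return f

c, d = 2, 5   # sample distinct exponents
n = 3         # sample multiplicity

# Terms t^k e^{c t} with k < n are annihilated by (D - c I)^n.
assert all(sp.simplify(apply_op(t**k * sp.exp(c*t), c, n)) == 0
           for k in range(n))

# A term with a different exponent d survives, with the same degree in t.
g = sp.simplify(apply_op(t**2 * sp.exp(d*t), c, n))
assert sp.degree(sp.simplify(g / sp.exp(d*t)), t) == 2
```

The nonzero leading factor in the surviving term is ##(d-c)^{n}##, which is what makes the reduction to fewer exponents work in the induction step.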
 