trivial solution method for constant coeff case
I have noticed the following trivial method for solving special constant coefficient ODEs, but cannot find it in the usual ODE books. (i had better look in courant though, as everything else is in there.)
given a cc ode of form (D-a)(D-b)y = f, where f is itself a solution of some cc ode, i.e. where f is annihilated by some polynomial P(D) in D: if the polynomial P does not have (D-a) or (D-b) as a factor (which case is easily done separately), then when expressed as a polynomial in (D-a), P has a non-zero constant term, hence can be solved as follows: (D-a)Q(D) = c where c is not zero. Hence we get that (D-a)^(-1) = (1/c)Q(D) on the functions annihilated by P. Thus y = (1/c)Q(D)f. Repeat for (D-b).

example: This is not exactly the same type but is an easy example. To solve (D^2 + D + 1)y = f, where f is a polynomial of degree 2, just take y = (1-D)f. I.e. mod D^3, (D^2+D+1)^(-1) = 1-D.

This is just the usual principle that the inverse of an invertible matrix with given minimal polynomial can be computed by solving the minimal polynomial for the non-zero constant term, then dividing out the variable, and dividing by the constant term. e.g. if T^2 + T - 2 = 0, then T^2 + T = 2, so T[T+1] = 2, so T^(-1) = (1/2)(T+1).

Surely a solution method as obvious as this was standard hundreds of years ago, but i have not found it in any of over 15 books I have consulted. In the example above, if the RHS of the equation is a polynomial of degree < n, then D^n annihilates it, so the inverse of (1-D) is 1+D+D^2+...+D^(n-1). This can be modified to give the inverse of any (D-a), hence of any polynomial in D. On this forum there is surely someone who has seen this method. If so please give me a reference. :confused: This is obviously related to the annihilator method, or undetermined coefficients method, but obviates the need to solve for any constants, as they are provided automatically.
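As a quick sanity check of the degree-2 example above (a sketch I added, using sympy; not part of the original post): y = (1 - D)f should solve (D^2 + D + 1)y = f, since (D^2 + D + 1)(1 - D) = 1 - D^3 and D^3 kills any degree-2 polynomial.

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**2 - 5*x + 7            # an arbitrary degree-2 polynomial
D = lambda g: sp.diff(g, x)     # the differentiation operator D

y = f - D(f)                    # y = (1 - D) f
lhs = D(D(y)) + D(y) + y        # apply (D^2 + D + 1) to y
assert sp.simplify(lhs - f) == 0
```

No constants had to be solved for: the inverse operator 1 - D produced the particular solution directly.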
more details:
method 1: suppose the RHS of our ode is a polynomial of degree < n. Then D^n = 0 on that function. So if the LHS factors with factors such as (D-a), then since

(1-D)(1+D+D^2+D^3+...+D^(n-1)) = 1-D^n,

also

(1-D/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^(n-1)) = 1-[D/a]^n,

hence

a(1-D/a)(1+D/a+...+[D/a]^(n-1)) = (a-D)(1+D/a+...+[D/a]^(n-1)) = a(1-[D/a]^n).

Hence (D-a)(-1/a)(1+D/a+[D/a]^2+...+[D/a]^(n-1)) = 1-[D/a]^n. Thus if we want to solve (D-a)y = f where f is a polynomial of degree < n, then taking y = (-1/a)(1+D/a+[D/a]^2+...+[D/a]^(n-1))f gives us (D-a)y = (1-[D/a]^n)f = f, since (D/a)^n f = 0. Repeat for each factor (D-a) of the differential operator. I.e. this method inverts any operator which is a product of operators of form D-a, i.e. any linear diff op with constant coefficients.
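A minimal sketch of method 1 (my own illustration, using sympy): invert (D - a) on a cubic f via the truncated geometric series y = -(1/a)(1 + D/a + ... + (D/a)^(n-1))f, keeping a symbolic.

```python
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 - 2*x + 1              # degree 3, so D^4 f = 0 and we take n = 4
n = 4
D = lambda g: sp.diff(g, x)

# y = -(1/a) * (1 + D/a + (D/a)^2 + ... + (D/a)^(n-1)) f,
# where (D/a)^k f is just the k-th derivative divided by a^k
y = sum(sp.diff(f, x, k) / a**k for k in range(n))
y = -y / a

# (D - a) y should give back f exactly, for every nonzero a
assert sp.simplify(D(y) - a*y - f) == 0
```

The same loop inverts each linear factor of a constant coefficient operator in turn, as the post describes.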
method 2: If the RHS of our ode is say sin x, hence is annihilated by D^2 + 1, then we solve for the constant term, getting 1 = -D^2.
Thus if the LHS has factor (D-a), we rewrite the annihilating relation using D = (D-a)+a: 1 = -D^2 = -[(D-a)+a]^2 = -[(D-a)^2 + 2a(D-a) + a^2]. Then we put all constants on the same side, getting: 1+a^2 = -[(D-a)^2 + 2a(D-a)] = -(D-a)[(D-a) + 2a]. Thus we have, acting on sin x, that (D-a)^(-1) = -[1/(1+a^2)][(D-a)+2a] = -[1/(1+a^2)][D+a]. So to solve (D-a)y = sin x, we set y = -[1/(1+a^2)][D+a] sin x = -(cos x + a sin x)/(1+a^2).

As everyone knows, this is the usual fact that if T is an invertible linear transformation with minimal polynomial X^2+1, then the inverse of T is precisely -T. I.e. every invertible linear map T on a finite dimensional space has a minimal annihilating polynomial with non-zero constant term, hence T^(-1) is also a polynomial in T, obtained by putting the constant term of the minimal polynomial on the other side and factoring out T from what is left. Pardon me for belaboring this, but I was not very clear before, and having given experts the chance to cite this method elsewhere, i am now trying to explain it for novices.
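A quick symbolic check of method 2 (a sketch of mine, not from the thread): on span{sin x, cos x} we have D^2 = -1, so (D - a)(D + a) = -(1 + a^2), giving (D - a)^(-1) = -(D + a)/(1 + a^2) there.

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.sin(x)
D = lambda g: sp.diff(g, x)

# (D - a)^(-1) acting on span{sin x, cos x} is -(1/(1 + a^2))(D + a)
y = -(D(f) + a*f) / (1 + a**2)

# verify (D - a) y = sin x for symbolic a
assert sp.simplify(D(y) - a*y - f) == 0
```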
When I saw it yesterday, it didn't seem familiar.
When I looked again today, it seemed vaguely familiar. Of course, I did see it just yesterday, so read into it what you will. 
one of the fun parts of this method is that it provides a right inverse which is not a left inverse to the given operator. and it is only a "local inverse", in that it depends on the nature of the desired output, as a right inverse should.
I.e. we have a differential operator L that maps all smooth functions to other smooth functions. But this "right inverse" of L, which is designed to solve Ly = f, is defined so as to be a right inverse of L only on the space annihilated by some polynomial P(D) that annihilates f. I.e. the "inverse" operator M, depends on f, and has the property that L(M(f)) = f, but L(M(g)) may well not equal g for any g which is not annihilated by the given polynomial P(D) which annihilates f. I.e. y = M(f) solves Ly = f, but y = M(g) may well not solve Ly = g for most other g; and since L is not injective, we cannot have M(L(y)) = y for all y. 
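This right-inverse-but-not-left-inverse behavior is easy to exhibit concretely (my own sketch, using sympy), with L = D - 1 and M the inverse constructed on ker(D^2 + 1):

```python
import sympy as sp

x = sp.symbols('x')
D = lambda g: sp.diff(g, x)
L = lambda g: D(g) - g                 # the operator L = D - 1

# M: right inverse of L on ker(D^2 + 1), namely -(1/2)(D + 1)
M = lambda g: -(D(g) + g) / 2

f = sp.sin(x)
assert sp.simplify(L(M(f)) - f) == 0   # L(M(f)) = f on span{sin, cos}

g = sp.exp(x)                          # but L(g) = 0, so M(L(g)) = 0 != g
assert sp.simplify(M(L(g))) == 0
```

Since e^x lies in the kernel of L, no operator M can recover it from L(g) = 0, which is exactly why M can only be a right inverse.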
for a theorem in linear algebra which inspired one of these methods, see Herstein, Topics in Algebra, 1964, page 220, Theorem 6B, "a linear transformation of a finite dimensional space is invertible if and only if the constant term of its minimal polynomial is non zero".
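A concrete numeric illustration of Herstein's criterion (my own example, using numpy), with a matrix whose minimal polynomial t^2 + t - 2 is the one used earlier in the thread:

```python
import numpy as np

# T satisfies T^2 + T - 2I = 0 (minimal polynomial t^2 + t - 2, constant
# term -2 != 0), so T(T + I) = 2I and hence T^(-1) = (1/2)(T + I).
T = np.array([[0.0, 2.0], [1.0, -1.0]])
I = np.eye(2)

assert np.allclose(T @ T + T - 2*I, 0)   # check the annihilating polynomial
T_inv = (T + I) / 2
assert np.allclose(T @ T_inv, I)         # it really is the inverse
```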


in my book that is called the annihilator method, but they do not carry it further to give a formula for the inverse operator. they merely say to write down the general homogeneous solution from memory, plug it in and solve for the relevant coefficients. this is wasteful and requires more knowledge. the present method gives a specific expression for the solution, without knowing any general solution formulas, and without solving for any constants.
perhaps this method is so old it has been forgotten, as I still have not found it in any current ode books, except of course it is a direct application of the previously cited inversion principle from finite dimensional linear algebra in herstein. but even linear algebra books such as herstein, in which the technique appears, do not seem to apply it to differential operators. i bet it could be in hoffman and kunze though. that is a good linear algebra book, probably with applications to differential operators.
i still have not found this exact method anywhere, including loomis and sternberg, and courant vol 1, and dieudonne vol 1, but the idea is the same as the annihilator or undetermined coefficient method explained almost everywhere.
the theoretical discussion in loomis (he wrote the chapter on differential equations in LS) does help understand the method, as follows: as in herstein, cited above, a linear endomorphism L of a finite dimensional vector space V is invertible if and only if it satisfies a polynomial with non-zero constant term, e.g. its minimal polynomial. to find the inverse of L: if P(L) = L^n + ... + a_1 L + a_0 = 0 but a_0 is not zero, solve for -a_0 = L[a_1 + a_2 L + ... + L^(n-1)]. Then divide both sides by -a_0, and get L^(-1) = -(1/a_0)[a_1 + a_2 L + ... + L^(n-1)].

we apply this as follows to a linear nth order differential operator L acting on the infinite dimensional space C(n) = functions on some interval I with n continuous derivatives, mapping C(n) to C = continuous functions on I. By the usual theory, L is a surjective linear map C(n) -> C with n dimensional kernel or null space. Moreover if t = 0 is a point of the interval I, then the subspace of C(n) where y(0) = y'(0) = ... = y^(n-1)(0) = 0 has codimension n, and is a direct sum complement to the solution space {y : L(y) = 0}.

Now consider the non-homogeneous equation Ly = f, where f is annihilated by some polynomial in D, i.e. f is itself a solution of some homogeneous constant coefficient operator M. Now suppose L has constant coefficients and factors as a product of factors (D-a), where L and M have no factors in common. Then the homogeneous solution subspace V = {y : My = 0} to which f belongs meets the solution space {y : Ly = 0} only in the function 0. Hence the operator L induces an isomorphism of the finite dimensional space V to itself. In particular, there is a solution y of the equation Ly = f in the space V to which f itself belongs. This is the key idea behind the annihilator or undetermined coefficients method. But in fact it is easy to write down an explicit inverse for L on this space, by the "herstein" method above. I.e.
If M is the annihilator of f, where the polynomials L(D) and M(D) have no common linear factors, then we invert L on the space {M = 0} as follows: Factor L into linear factors and invert each factor separately, one at a time. If (D-a) is a linear factor of L, then write D = (D-a)+a in the polynomial M(D), and expand as a polynomial in (D-a). By the theory above (since D-a is invertible on the space {M = 0}), the constant term will be non-zero. Thus we can solve explicitly for (D-a)^(-1) as a polynomial in (D-a), hence in D. Doing this for each factor in L gives us an explicit polynomial Q(D) in D which is inverse to L on the space {M = 0}. Since f belongs to this space, y = Q(D)(f) solves the non-homogeneous equation Ly = f. :biggrin:
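The re-expansion step can be mechanized. Below is a sketch of mine (the helper name `invert_factor` is my own, not from the thread) that, given an annihilator P of f and a linear factor (D - a), rewrites P in powers of s = D - a, isolates the constant term c = P(a), and applies the resulting inverse to f:

```python
import sympy as sp

x, t = sp.symbols('x t')
D = lambda g: sp.diff(g, x)

def invert_factor(P, a, f):
    """Apply (D - a)^(-1) to f, assuming P(D) annihilates f and P(a) != 0.
    Writes P(t) = Q(s) with s = t - a, so Q(s) = s*R(s) + c on ker P,
    whence (D - a) R(D - a) f = -c f and (D - a)^(-1) f = -R(D - a) f / c."""
    s = sp.symbols('s')
    Q = sp.expand(P.subs(t, s + a))            # P as a polynomial in s = D - a
    c = Q.subs(s, 0)                           # constant term = P(a), nonzero
    R = sp.cancel((Q - c) / s)                 # Q(s) = s*R(s) + c
    coeffs = sp.Poly(R, s).all_coeffs()[::-1]  # coefficients of R, low degree first
    result, g = 0, f
    for ck in coeffs:
        result += ck * g
        g = D(g) - a*g                         # apply (D - a) once more
    return -result / c

# example: (D - 1) y = x e^(3x), whose RHS is annihilated by P = (t - 3)^2
y = invert_factor((t - 3)**2, 1, x*sp.exp(3*x))
assert sp.simplify(D(y) - y - x*sp.exp(3*x)) == 0
```

For a full operator L one would chain such calls, one per linear factor, as the post says.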
a quicker way to write down the inverse in the case that f is a polynomial of degree m, hence is annihilated by the polynomial D^(m+1), is to use the fact that we know the inverse of (1-D), modulo the polynomial D^(m+1), is the truncated geometric series 1+D+D^2+...+D^m.
this easily allows us also to invert (D-a) = -a(1 - [D/a]) by a similar truncated geometric series.

Remark: When I tried this on some problems in the book, I originally abandoned it as useless, since ordinary undetermined coefficients was quicker. But I soon found that was because the book had chosen those problems to suit its own method. I.e. on some problems this method is quicker, and no method is best for all problems. I suggest for example that to solve (D^n + D^(n-1) + ... + D + 1)y = f, where f is a polynomial of degree n, no method is quicker than this, since y = (1-D)f is a solution.

second remark, to young persons: notice that I, an oft published PhD in mathematics, am delighted to have noticed this simple application of a well traveled idea, while many neophytes here are unsatisfied unless they can submit some crackpot solution of the Riemann hypothesis. To understand this phenomenon, read Don Quixote, or merely look at the famous illustrations by the great French caricaturist (of the Don going out in the morning on his quest, and then coming back afterwards). :cool:
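The claim that y = (1 - D)f solves (D^n + ... + D + 1)y = f for a degree-n polynomial f follows from the telescoping product (1 + D + ... + D^n)(1 - D) = 1 - D^(n+1). A small check of mine in sympy, for n = 3:

```python
import sympy as sp

x = sp.symbols('x')
n = 3
f = x**3 + 4*x - 2                # degree n = 3, so D^(n+1) f = 0
D = lambda g: sp.diff(g, x)

y = f - D(f)                      # y = (1 - D) f
# apply (D^3 + D^2 + D + 1) to y; the product telescopes to 1 - D^4
lhs = sum(sp.diff(y, x, k) for k in range(n + 1))
assert sp.simplify(lhs - f) == 0
```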
please forgive me for repeating this again, but i like to go over and over an idea until i understand it in the simplest way possible for me.
the point here is to find an inverse for an invertible operator acting on a finite dimensional space, by finding a polynomial that annihilates it. The first step in solving Ly = f this way is to identify the correct finite dimensional space, by looking for a polynomial annihilator of f, say P(D). This of course is only possible if f is a product of exponentials, polynomials and sines and cosines. E.g. if f = x^n e^(bx) then (D-b)^(n+1) works, and if f = sin or cos, then D^2 + 1 works.

Now that P is found, the appropriate finite dimensional space is V = ker(P), but with this method there is no need even to know exactly what functions make up this space, unlike in the annihilator method. The only thing one needs to know is what polynomial P annihilates f. Then, assuming that the polynomial operator L(D) has no linear factors in common with P(D), it follows that L is invertible on V. Hence to invert L on V, we only need to find an appropriate polynomial annihilating L on V. Since L(D) is a polynomial in D, it suffices to find an annihilator of each linear factor (D-a) of L separately. But since we have a polynomial P(D) which equals zero on V, we can find a polynomial vanishing on any linear operator of form D-a by re-expanding P(D) as P((D-a)+a). The fact that D-a is not a factor of P(D) guarantees that the resulting polynomial in (D-a) will have non-zero constant term [if I am not wrong]. This is the key point.

Although all polynomials L(D) do theoretically factor into complex linear factors, the method is easier when the factors are simple integer factors. So to sum up: the annihilator P(D) of f defines a space V = ker P(D), on which P annihilates D, hence re-expands to yield a polynomial annihilating any operator of form D-a. This allows one to invert on V all polynomials L(D) which are relatively prime to P(D). If L does have linear factors in common with P, it is easy to invert those factors by hand first, and eliminate them. E.g.
to solve (D-a)y = x^k e^(ax), one simply integrates the polynomial factor, i.e. y = {x^(k+1)/[k+1]} e^(ax). this is no harder than solving Dy = x^k by y = x^(k+1)/[k+1].

here is a worked example: to solve (D-a)y = x e^(bx), the annihilator of the RHS is (D-b)^2, which re-expands as ((D-a) + (a-b))^2 = (D-a)^2 + 2(a-b)(D-a) + (a-b)^2, where the constant term (a-b)^2 is non-zero if and only if a differs from b. Then (a-b)^2 = -(D-a)^2 - 2(a-b)(D-a) = -[(D-a) + 2(a-b)](D-a), so (D-a)^(-1) = -[1/(a-b)^2][(D-a) + 2(a-b)] = [1/(a-b)^2][(2b-a) - D].

E.g. to solve (D-1)y = x e^(3x), we have (D-3)^2 = 0 = ((D-1) - 2)^2 = (D-1)^2 - 4(D-1) + 4, so 4 = 4(D-1) - (D-1)^2 = (D-1)[4 - (D-1)] = (D-1)[5 - D]. So (D-1)^(-1) = (1/4)[5 - D]. Since a = 1, b = 3, this agrees with the general formula [1/(a-b)^2][(2b-a) - D] above. Hence y = (1/4)[5 - D](x e^(3x)) = (1/4)[5x e^(3x) - 3x e^(3x) - e^(3x)] = (1/4)[2x e^(3x) - e^(3x)].

Checking: (D-1)y = (D-1)(1/4)[2x e^(3x) - e^(3x)] = (1/4)(D-1)[2x e^(3x) - e^(3x)] = (1/4)[2e^(3x) + 6x e^(3x) - 3e^(3x) - 2x e^(3x) + e^(3x)] = (1/4)[4x e^(3x)] = x e^(3x), as desired.
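The worked example's final answer, y = (1/4)[2x e^(3x) - e^(3x)], can be verified in one line (a sympy check I added):

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(3*x)
y = sp.Rational(1, 4) * (2*x - 1) * sp.exp(3*x)   # (1/4)[2x e^(3x) - e^(3x)]

# (D - 1) y should equal x e^(3x)
assert sp.simplify(sp.diff(y, x) - y - f) == 0
```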
ok wow.. that all looks familiar.. and i just took DiffEq this summer.. and i got an A in the 8 week course.. but it's like stuck at the back of my brain.. what you're saying looks good.. but there's gotta be a reason that it isn't published anywhere... there might be a style of cc ode that it doesn't work for....?

i think i have proved conclusively that it always works.
i have also tried to make the point that no method is always best, and that this one is clearly better than others for some problems, just as some others are clearly better than this one on other problems.:smile: 
oh i know.. i'm just saying there's gotta be some exception.. otherwise like you said they'd have figured it out before.

Thinking about it again, I'm less convinced I haven't seen it before.
Argh, if only I could remember where I saw this method! I had thought it was in my DiffEq book, but if it is, it's been eluding me. Now, my DiffEq book itself is eluding me, so I can't even go back to look for it again. :frown: 
perhaps hurkyl, you saw it here:
I have at last found one of the methods I discovered of formally inverting constant coefficient linear differential operators, the one using the formal power series for 1/(1-D) = 1 + D + D^2 + .... The classic book by tenenbaum and pollard devotes 25 pages to it, pages 268-292. It is very clearly explained there from a user's point of view, rather than a theorist's. There is also an enhancement of the method, using partial fractions expansions to express the product of the inverses of different operators (D-a), (D-b), ... as a sum of constant multiples of the individual inverses.

This book has been highly recommended, probably in these pages, and that is no doubt where I learned of it, and why I purchased it. It is one of those old, unhurried, scholarly, thorough books that are no longer being written or adopted. Almost every conceivable basic method seems to be covered in a careful and useful way, and many good problems are given, some with solutions. It is also cheap, being a Dover paperback. There are also many applications, from cave paintings to engineering problems.

[Some of the discussion there is not quite precise theoretically. E.g. the definition of a normalized particular solution at the bottom of page 269, as one not involving any terms from the homogeneous solution, does not make sense except with the usual tacit assumption that one is using standard familiar functions as a basis to express everything. I.e. these statements are "basis dependent" but no basis has been specified except tacitly. A (different) basis-free definition of a normalized particular solution y is one with y(0) = y'(0) = 0, but this is actually less convenient than the imprecise one used there. The question involves choosing a complement to a given subspace.
A natural basis-free complement to the space of homogeneous solutions of (D-r)(D-s)y = 0 is the space of y's where the initial value vector (y(0), y'(0)) of y is zero, but the usual basis 1, x, x^2, ..., x^n for the space P_n(x) of polynomials of degree at most n provides a more convenient complement to the space spanned by e^(rx), e^(sx), for use in solving (D-r)(D-s)y = P_n(x).]

So the basic principle that any good idea that uses only classical concepts has already been thought of by the classical workers is verified again. However, I did not find there the other incarnation I noticed of this method, via expressing the minimal polynomial m(D) of the RHS in terms of (D-a) and solving for the constant term, to invert D-a on the space spanned by the derivatives of the RHS. Since the general historical principle above is usually valid, it too may yet be found somewhere. :smile:
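The partial fractions enhancement mentioned above rests on the algebraic identity 1/((t-a)(t-b)) = [1/(a-b)][1/(t-a) - 1/(t-b)]; applied formally with t = D, it expresses the inverse of a product of distinct linear factors as a sum of constant multiples of the individual inverses. A quick symbolic check of the identity (a sketch of mine, using sympy):

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
lhs = 1 / ((t - a) * (t - b))
rhs = (1/(t - a) - 1/(t - b)) / (a - b)   # sum of simple poles
assert sp.simplify(lhs - rhs) == 0
```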