Using differential operators to solve differential equations

Biker
I don't really understand how their inverses work.

For example, in solving second-order linear non-homogeneous differential equations, the particular solution is found by
## y_p = \frac{p(x)}{f(D)} ##
and they continue by expanding using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it? How can you just take the reciprocal of it?

Is there any reference that discusses this?
 
A differential operator doesn't have a unique inverse. Both ##\int_{0}^{x}( \dots )dt## and ##\int_{1}^{x}(\dots )dt## are inverses of ##\frac{d}{dx}##.
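If sympy is available, this non-uniqueness can be checked directly (a minimal sketch; the choice of ##\cos## as the test function is arbitrary):

```python
# Two different antiderivative operators are both right-inverses of d/dx,
# even though they produce results differing by a constant.
import sympy as sp

x, t = sp.symbols('x t')
f = sp.cos(t)

F0 = sp.integrate(f, (t, 0, x))   # integral from 0 to x of f(t) dt
F1 = sp.integrate(f, (t, 1, x))   # integral from 1 to x of f(t) dt

# Differentiating either one recovers f(x), so both invert d/dx.
print(sp.diff(F0, x))             # cos(x)
print(sp.diff(F1, x))             # cos(x)
print(sp.simplify(F0 - F1))       # sin(1), a constant
```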
 
hilbert2 said:
A differential operator doesn't have a unique inverse. Both ##\int_{0}^{x}( \dots )dt## and ##\int_{1}^{x}(\dots )dt## are inverses of ##\frac{d}{dx}##.
How does this relate to my question?
"And they continue by expanding using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it?"
[Attached image: Untitled.png]
 

Biker said:
And they continue by expanding using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it? How can you just take the reciprocal of it?
I'm not aware of an example involving reciprocals, other than the one you gave, which is basically matrix exponentiation and the historical origin of all this. But to get an idea of how to treat an operator as a variable, you could look at $$H=2x \cdot \frac{d}{dx} \, , \, X=x^2\cdot \frac{d}{dx}\, , \, Y=- \frac{d}{dx}$$ (here) and apply them to smooth functions in ##x##. These form a basis of a vector space, so linear combinations are allowed. They are also a basis of a Lie algebra, so Lie multiplication is allowed. Finally, as a Lie algebra, here of the Lie group ##SL(2)##, there is a correspondence between the Lie group and its tangent spaces, the Lie algebra. This correspondence works in the direction ##\text{ tangents, aka Lie algebra } \longrightarrow \text{ regular transformations, aka Lie group }## via the exponential function, in which case the operators become the variables of the exponential function: your example. See e.g. https://www.physicsforums.com/insights/representations-precision-important/.
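As a small concrete check of the Lie-algebra structure mentioned above (a sketch assuming sympy; ##[X, Y] = H## is one of the standard ##\mathfrak{sl}(2)## relations for these operators):

```python
# Verify the commutation relation [X, Y] = H for the differential operators
# H = 2x d/dx, X = x^2 d/dx, Y = -d/dx, applied to a generic smooth f(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
g = f(x)

H = lambda u: 2*x*sp.diff(u, x)
X = lambda u: x**2*sp.diff(u, x)
Y = lambda u: -sp.diff(u, x)

commutator_XY = sp.expand(X(Y(g)) - Y(X(g)))
print(sp.simplify(commutator_XY - H(g)))   # 0, i.e. [X, Y] = H
```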
 
Biker said:
I don't really understand how their inverses work.

For example, in solving second-order linear non-homogeneous differential equations, the particular solution is found by
## y_p = \frac{p(x)}{f(D)} ##
and they continue by expanding using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it? How can you just take the reciprocal of it?

Is there any reference that discusses this?

You can't treat an operator as a variable, and it doesn't have a value. But you can develop a mathematics of such operators. Keep in mind that underneath, an expression like Df(x) is not multiplication of D by f(x); it doesn't behave like multiplication. For instance, D meaning d/dx does not commute with x. Thus if you have Dx as an operator, meaning that it really appears before some f(x) in an expression like ##d[x f(x)]/dx##, then by the product rule ##Dx = xD + 1##.
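The non-commutation ##Dx = xD + 1## can be verified symbolically (a minimal sketch, assuming sympy):

```python
# Apply both sides of Dx = xD + 1 to a generic function f(x) and compare.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

Dx_f  = sp.diff(x*f, x)              # D(x f) = f + x f'  by the product rule
xD1_f = x*sp.diff(f, x) + f          # (xD + 1) f = x f' + f

print(sp.simplify(Dx_f - xD1_f))     # 0: the two operators agree
```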

I think that the "Maclaurin series" ##e^D = 1 + D + (D^2/2!) + (D^3/3!) + ...## is actually a definition of the notation ##e^D##. It turns out to be quite a useful operator, but that expression was not "derived" by applying Taylor's Theorem to an operator.
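As an illustration of why that definition is useful (not a derivation): on a polynomial the series terminates, and ##e^D## acts as the unit shift ##f(x) \mapsto f(x+1)##. A sketch assuming sympy, with an arbitrary cubic:

```python
# For a degree-3 polynomial, 1 + D + D^2/2! + D^3/3! exhausts the series;
# the result equals f(x + 1), i.e. e^D is the shift operator here.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2*x + 5

expD_f = sum(sp.diff(f, x, k) / sp.factorial(k) for k in range(4))
print(sp.expand(expD_f))                 # x**3 + 3*x**2 + x + 4
print(sp.expand(f.subs(x, x + 1)))       # same polynomial: f(x + 1)
```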

As for something like ##1/D##, I think, as your title line implies, it's just notation for the inverse of ##D##. In ordinary multiplication, we define c = a/b as the value such that cb = a. Thus we would say ##g(x) = (1/D)f(x)## if and only if ##f = Dg##. So ##1/D## is an antiderivative. That's why @hilbert2 's response was an answer to your question.

And so 1/f(D) means the inverse of applying the operator f(D). I don't know under what circumstances that can actually be expanded as a Maclaurin series, but certainly it would take a proof to establish that's a valid thing to do, always keeping in mind that we are doing differential and integral operations here, not multiplication and division.
 
Thank you all for the responses. Sorry for the late reply.

Fresh_42, that is somewhat advanced mathematics for me. I still haven't studied most of what you mentioned.

RPinPA, exactly. I understood the method of undetermined coefficients; it is absolutely reasonable to me. I can also work with the D operator and have proved multiple theorems such as the exponential shift, etc.
However I couldn't for example make sense of why this is true:
[Attached image: Untitled.png]


How do I establish a proof that this is a valid thing to do?
 

Actually, I might have proved it.
Assume
##(1-P(D)) y = x^n##
and assume ##y = Q(D) x^n## with
##Q(D) = 1+P(D) + P(D)^2 + \dots##
Then it is easy to verify that
##(1-P(D)) (1+P(D) + P(D)^2 + \dots ) x^n##
equals ##x^n##, and thus ##y = Q(D) x^n## is a solution.

We have rules for the D operator covering linearity, sums, and products, but we don't have any rules for the inverse of the D operator, so it can be defined as above. Although we used the Maclaurin series to get the expression for Q(D), this only works because the operator has the algebraic properties mentioned above.
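The argument above can be checked concretely, e.g. with ##P(D) = D## and ##p(x) = x^2## (a sketch assuming sympy; the choice of polynomial is arbitrary):

```python
# With P(D) = D and p(x) = x^2, the series 1 + D + D^2 + ... terminates on a
# polynomial; applying (1 - D) to the result recovers x^2.
import sympy as sp

x = sp.symbols('x')
p = x**2

# y = Q(D) p = (1 + D + D^2 + ...) p -- only finitely many terms are nonzero
y = sum(sp.diff(p, x, k) for k in range(3))
check = sp.expand(y - sp.diff(y, x))    # (1 - D) y

print(y)        # x**2 + 2*x + 2
print(check)    # x**2
```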

Which is really interesting; I don't know how I didn't see this from the beginning :).
 
Maybe a historical point of view sheds some light on it:

“This was obviously not the first idea of a “transition from the finite to the infinite”. Without going back to the ideas of the seventeenth century concerning the transition from calculus of finite differences to the differential calculus, D. Bernoulli had built up the theory of the vibrating string on such a procedure. In 1836, Sturm noted in his work on the differential equation ##y''−q(x)y+λy=0## that his ideas were suggested to him by analogous considerations of a system of equations of differences. From about 1880 onward, the need for a new analysis began to be felt from different sides, in which instead of the usual functions “functions with infinitely many variables” were in consideration … Even in classical analysis one finds attempts to account for an “operator calculus”, such as the definitions of fractional-order derivatives by Leibniz and Riemann, or the relation ##γ(−a)=e^{aD}## which links translations and derivatives, although it was basically used by Lagrange as an expression for the Taylor series.” (J. Dieudonné)

... and others have had similar problems, as it seems:

“And, however unbelievable this may seem to us, it took quite a long time until it has been clear to mathematicians, that what the algebraists write as ##(I–λT)^{-1}## for a matrix ##T##, is essentially the same as the analysts represent by ##I+λT+λ^2T^2+ \ldots ## for a linear operator ##T\,##“. (J. Dieudonné)
 
fresh_42 said:
Maybe a historical point of view sheds some light on it: …
Pretty nice, thank you Fresh.
 