Using Differential Operators to Solve Differential Equations

In summary, the conversation discusses inverse operators in the context of solving second-order linear non-homogeneous differential equations. The particular solution is written by applying ##\frac{1}{f(D)}## to ##p(x)## and then expanding it using a Maclaurin series. The question is how an operator can be treated as a variable, assigned a value of zero, or inverted, and whether there are references discussing this. The responses note that an operator cannot be treated as a variable and has no value, but a calculus of such operators can be developed, and that the Maclaurin-type expansion of an inverse operator must be justified by the operator's algebraic properties rather than by Taylor's theorem.
  • #1
Biker
I don't really understand how their inverses work.

For example, in solving 2nd order linear non-homogeneous differential equations.
The particular solution is found by
## y_{pi} = \frac{p(x)}{f(D)} ##
They then continue by expanding it using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it? How can you just take the reciprocal of it?

Is there any reference that discusses this?
 
  • #2
A differential operator doesn't have a unique inverse. Both ##\int_{0}^{x}( \dots )dt## and ##\int_{1}^{x}(\dots )dt## are inverses of ##\frac{d}{dx}##.
 
  • #3
hilbert2 said:
A differential operator doesn't have a unique inverse. Both ##\int_{0}^{x}( \dots )dt## and ##\int_{1}^{x}(\dots )dt## are inverses of ##\frac{d}{dx}##.
How does this relate to my question?
"And they continue by expanding using maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it?"
[Attachment: Untitled.png]
 

  • #4
Biker said:
They then continue by expanding it using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it? How can you just take the reciprocal of it?
I'm not aware of an example about reciprocals, other than the one you gave, which is basically matrix exponentiation, and the historical origin of all this. But to get an idea of how to treat an operator as a variable, you could look at $$H=2x \cdot \frac{d}{dx} \, , \, X=x^2\cdot \frac{d}{dx}\, , \, Y=- \frac{d}{dx}$$ (here) and apply them to smooth functions in ##x##. Now these are a basis of a vector space, so linear combinations are allowed. They are also a basis of a Lie algebra, so Lie multiplication is allowed. Finally, as a Lie algebra, here of the Lie group ##SL(2)##, there is a correspondence between the Lie group and its tangent spaces, the Lie algebra. This correspondence works in the direction ##\text{ Tangents, aka Lie algebra } \longrightarrow \text{ regular transformations, aka Lie group }## via the exponential function, in which case the operators become the variables of the exponential function: your example. See e.g. https://www.physicsforums.com/insights/representations-precision-important/.
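A quick SymPy check, assuming SymPy is available, that these three operators close under the commutator, which is the Lie multiplication mentioned above:

Python:
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# The three operators above, written as maps acting on a smooth function f(x)
H = lambda g: 2*x*sp.diff(g, x)
X = lambda g: x**2*sp.diff(g, x)
Y = lambda g: -sp.diff(g, x)

def comm(A, B, g):
    # commutator [A, B] applied to g
    return sp.simplify(A(B(g)) - B(A(g)))

# sl(2) relations: [H, X] = 2X, [H, Y] = -2Y, [X, Y] = H
print(sp.simplify(comm(H, X, f) - 2*X(f)))   # prints 0
print(sp.simplify(comm(H, Y, f) + 2*Y(f)))   # prints 0
print(sp.simplify(comm(X, Y, f) - H(f)))     # prints 0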
 
  • #5
Biker said:
I don't really understand how their inverses work.

For example, in solving 2nd order linear non-homogeneous differential equations.
The particular solution is found by
## y_{pi} = \frac{p(x)}{f(D)} ##
They then continue by expanding it using a Maclaurin series. How do you treat an operator as a variable? How could you possibly assign a value of zero to it? How can you just take the reciprocal of it?

Is there any reference that discusses this?

You can't treat an operator as a variable, and it doesn't have a value. But you can develop a mathematics of such operators. You need to keep in mind that underneath, an expression like Df(x) is not multiplication of D by f(x), and it doesn't behave like multiplication. For instance, D, meaning d/dx, does not commute with x. Thus, if you have Dx as an operator, meaning that it really appears before some f(x) in an expression like ##d[x f(x)]/dx##, then by the product rule ##Dx = xD + 1##.
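A quick SymPy sketch, assuming SymPy is available, makes the non-commutativity concrete:

Python:
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# (Dx) f means d/dx [x f(x)]; the product rule turns it into (xD + 1) f
lhs = sp.diff(x * f, x)          # (Dx) f
rhs = x * sp.diff(f, x) + f      # (xD + 1) f
print(sp.simplify(lhs - rhs))    # prints 0, so Dx = xD + 1 as operators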

I think that the "Maclaurin series" ##e^D = 1 + D + (D^2/2!) + (D^3/3!) + ...## is actually a definition of the notation ##e^D##. It turns out to be quite a useful operator, but that expression was not "derived" by applying Taylor's Theorem to an operator.
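On polynomials the series terminates, and ##e^D## then acts as a translation by 1, which is the link between translations and derivatives quoted further down in the thread. A minimal SymPy sketch, assuming SymPy is available:

Python:
import sympy as sp

x = sp.symbols('x')
p = x**3 - 2*x + 5   # any polynomial; only finitely many derivatives are nonzero

# (1 + D + D^2/2! + D^3/3! + ...) applied to p, truncated where derivatives vanish
shifted = sum(sp.diff(p, x, k) / sp.factorial(k) for k in range(int(sp.degree(p, x)) + 1))
print(sp.expand(shifted - p.subs(x, x + 1)))   # prints 0, i.e. e^D p(x) = p(x + 1)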

As for something like ##1/D##, I think as your title line implies it's just a notation for the inverse of D. In ordinary multiplication, we define c = a/b as the value such that cb = a. Thus we would say g(x) = f(x)/D if and only if f = Dg. So (1/D) is an antiderivative. That's why @hilbert2's response was an answer to your question.

And so 1/f(D) means the inverse of applying the operator f(D). I don't know under what circumstances that can actually be expanded as a Maclaurin series, but certainly it would take a proof to establish that's a valid thing to do, always keeping in mind that we are doing differential and integral operations here, not multiplication and division.
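One case where the expansion can be justified directly is a polynomial right-hand side, because high enough derivatives of a polynomial vanish and the series applied to it is a finite sum. A minimal SymPy sketch, assuming SymPy is available, for ##y'' - y = x^2##, i.e. ##(D^2 - 1)y = x^2##:

Python:
import sympy as sp

x = sp.symbols('x')
rhs = x**2

# Formally 1/(D^2 - 1) = -(1 + D^2 + D^4 + ...); applied to x^2 only two terms survive
y_p = -sum(sp.diff(rhs, x, 2*k) for k in range(2))    # gives -(x^2 + 2)
print(sp.simplify(sp.diff(y_p, x, 2) - y_p - rhs))    # prints 0, so y_p solves y'' - y = x^2

So the geometric-series expansion of ##\frac{1}{D^2 - 1}## gives the particular solution ##y_p = -x^2 - 2##.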
 
  • #6
Thank you all for the responses. Sorry for the late reply.

fresh_42, that is kinda advanced mathematics for me. I still haven't taken most of what you have mentioned.

RPinPA, exactly. I understood the method of undetermined coefficients; that is absolutely reasonable to me. I can also work with the D operator and have proved multiple theorems like the exponential shift, etc.
However, I couldn't, for example, make sense of why this is true:
[Attachment: Untitled.png]


How do I establish a proof that this is a valid thing to do?
 

  • #7
Actually, I might have proved it.
Assume:
##(1-P(D))\, y = x^n##
Assume ##y = Q(D)\, x^n## with ##Q(D) = 1 + P(D) + P(D)^2 + \dots##
Then
##(1-P(D))(1 + P(D) + P(D)^2 + \dots)\, x^n##
It is easy to verify that this is equal to ##x^n##, and thus ##y = Q(D)\, x^n##. (This works provided every term of ##P(D)## contains at least one factor of ##D##: then ##P(D)^k x^n = 0## for ##k > n##, so the series applied to ##x^n## is actually a finite sum.)

We have rules for the D operator covering linearity, sums and products, but we don't have any rules for the inverse of the D operator, so it can be defined as above. Although we used a Maclaurin series to get the expression for ##Q(D)##, this only works because the operator has the algebraic properties mentioned above.
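A minimal SymPy sketch of this, assuming SymPy is available and taking ##P(D) = D + D^2## as an arbitrary concrete choice with no constant term:

Python:
import sympy as sp

x = sp.symbols('x')
n = 3
target = x**n

def P(expr):
    # the concrete choice P(D) = D + D^2, applied to an expression
    return sp.diff(expr, x) + sp.diff(expr, x, 2)

# Q(D) x^n = (1 + P(D) + P(D)^2 + ...) x^n; each application of P(D) lowers
# the degree, so the series terminates after finitely many terms
term, y = target, sp.Integer(0)
while term != 0:
    y += term
    term = P(term)

print(sp.expand(y - P(y) - target))   # prints 0, i.e. (1 - P(D)) y = x^n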

Which is really interesting; I don't know how I didn't see this from the beginning :).
 
  • #8
Maybe a historical point of view sheds some light on it:

“This was obviously not the first idea of a “transition from the finite to the infinite”. Without going back to the ideas of the seventeenth century concerning the transition from calculus of finite differences to the differential calculus, D. Bernoulli had built up the theory of the vibrating string on such a procedure. In 1836, Sturm noted in his work on the differential equation ##y'' - q(x)y + \lambda y = 0## that his ideas were suggested to him by analogous considerations of a system of equations of differences. From about 1880 onward, the need for a new analysis began to be felt from different sides, in which instead of the usual functions “functions with infinitely many variables” were in consideration … Even in classical analysis one finds attempts to account for an “operator calculus”, such as the definitions of fractional-order derivatives by Leibniz and Riemann, or the relation ##\gamma(-a) = e^{aD}## which links translations and derivatives, although it was basically used by Lagrange as an expression for the Taylor series.” (J. Dieudonné)

... and others have had similar problems, as it seems:

“And, however unbelievable this may seem to us, it took quite a long time until it has been clear to mathematicians, that what the algebraists write as ##(I - \lambda T)^{-1}## for a matrix ##T##, is essentially the same as the analysts represent by ##I + \lambda T + \lambda^2 T^2 + \ldots## for a linear operator ##T##.” (J. Dieudonné)
 
  • #9
fresh_42 said:
Maybe a historical point of view sheds some light on it: [...]
Pretty nice, Thank you Fresh.
 

1. What are differential operators?

Differential operators are operators that represent differentiation, the process of finding the rate of change of a function with respect to its input variables. The simplest example is ##D = \frac{d}{dx}##.

2. How are differential operators used to solve differential equations?

Writing a linear differential equation in terms of the operator ##D## turns it into the algebraic-looking form ##f(D)\,y = p(x)##, which is easier to manipulate. The unknown function is then isolated by formally inverting ##f(D)##.
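For example, ##y'' + 3y' + 2y = e^x## becomes ##(D^2 + 3D + 2)\,y = e^x##, and a particular solution follows formally from ##y_p = \frac{e^x}{D^2 + 3D + 2} = \frac{e^x}{1^2 + 3\cdot 1 + 2} = \tfrac{1}{6}e^x##, using the rule that ##f(D)## acting on ##e^{ax}## multiplies it by ##f(a)##.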

3. What is the difference between ordinary and partial differential operators?

Ordinary differential operators act on functions of one variable, while partial differential operators act on functions of multiple variables. This means that partial differential equations involve partial derivatives of an unknown function of several variables.

4. Are there any limitations to using differential operators to solve differential equations?

While differential operators are powerful tools for solving differential equations, the inverse-operator method described here is essentially limited to linear equations with constant coefficients. Non-linear equations require other solution techniques.

5. Can differential operators be used for any type of differential equation?

Differential operators are most commonly used for first and second order differential equations. However, they can also be used for higher order equations, as long as the equations are linear with constant coefficients.
