
The Differential operator and its limitations

  1. Nov 11, 2014 #1


    Gold Member

    So I had a quiz today, and one of the questions was pretty easy, pretty straight forward.

    Show that ##t^2e^{9t}## is a solution of ##(D-9)^3y = 0##

    Foil it out, plug in y, and you're done.

    Well I tried doing something else, that (at least in my mind) should have worked, but it didn't.

    I said "Oh, well I'll just rewrite y as ##(y^{1/3})^3##, distribute the cube through, and solve ##D[y^{1/3}] - 9y^{1/3} = 0##."

    It definitely does not work. What's even more interesting is that, upon distributing the ##y^{1/3}##, you get two different sets of solutions depending on whether you solve the equation above or foil out ##(D[y^{1/3}] - 9y^{1/3})^3##. The given function definitely works when you foil out your "auxiliary equation" with the differential operator, but you get two different non-zero functions depending on how you choose to evaluate it. I applied the chain rule, so that's not the issue. I would write it out, but it gets kind of messy. Is there a reason why I can't do this? We're treating this diff. eq. as a polynomial, so why can't I do polynomial manipulations to it?

    Chain rule as such ##D[y^{1/3}] = \frac{1}{3}y^{-\frac{2}{3}}y'##
    Since D is a linear operator, can't I do this? Obviously not, as it doesn't work... but why?
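    [Editor's note: a minimal sympy sketch, not from the original post, checking both the quiz claim and the proposed "cube root" equation:]

    ```python
    import sympy as sp

    t = sp.symbols('t')
    y = t**2 * sp.exp(9*t)

    # Apply (D - 9) three times: each pass maps f to f' - 9f.
    result = y
    for _ in range(3):
        result = sp.diff(result, t) - 9*result
    print(sp.simplify(result))  # 0, so y solves (D-9)^3 y = 0

    # The proposed equation D[y^(1/3)] - 9 y^(1/3) = 0 does NOT hold:
    u = y**sp.Rational(1, 3)
    print(sp.simplify(sp.diff(u, t) - 9*u))  # nonzero for general t
    ```

    The first check confirms the intended quiz answer; the second shows the factor-out-a-cube-root shortcut fails.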
  3. Nov 11, 2014 #2


    Science Advisor
    Homework Helper

    D^3 doesn't act the same way as y^3. It is doing differentiation three times, which is a little different from multiplying three times.

    E.g., D does not commute with multiplication: take a function f, multiply it by y, and differentiate.

    You get, let's see: D(yf) = (Dy)f + y(Df). But in the other order you get yDf = well, just yDf. So you are missing the term (Dy)f.

    So you can't move multiplications by y around inside of differentiations by D, unless you obey their rules.

    This answer is just off the top of my head, and may be imprecise or flawed, but I think it's the reason for your confusion. You treated multiplication as parallel to differentiation just because both were written with the same notation, i.e. exponentiation.
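    [Editor's note: the missing term is easy to exhibit with sympy on generic symbolic functions; a small sketch, not from the original post:]

    ```python
    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')(t)
    f = sp.Function('f')(t)

    d_of_product = sp.diff(y*f, t)    # D(y f) = (Dy) f + y (Df)
    y_times_df = y * sp.diff(f, t)    # y Df
    missing = sp.expand(d_of_product - y_times_df)
    print(missing)  # the extra (Dy) f term
    ```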
  4. Nov 11, 2014 #3
    Let's look at a similar problem: let's say I have a function f(x) and I want to evaluate f(f(f(x))). In essence, your argument is that I should be able to evaluate f(f(f(x))) by evaluating f(x) and cubing it. But this in general does not work, except for very special functions f, or very special values of x for a particular function.

    It doesn't even work for most linear functions. Let's try f(x) = 1 + x. Then f(f(x)) = 2 + x and f(f(f(x))) = 3 + x.

    The expression [itex](D-9)^3 y[/itex] really means [itex](D-9)\left[(D-9)\left[ (D-9)y\right]\right][/itex]
    which is similar to the case f(f(f(x))), but instead of a function we have a differential operator.
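    [Editor's note: a tiny sketch, not from the original post, of the iteration-vs-cubing distinction using the f(x) = 1 + x example above:]

    ```python
    def f(x):
        return 1 + x

    x = 5
    iterated = f(f(f(x)))   # composition: 3 + x
    cubed = f(x)**3         # cubing: (1 + x)^3
    print(iterated, cubed)  # 8 216 -- very different things
    ```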
  5. Nov 11, 2014 #4


    Gold Member

    I do see what you're saying, and that makes sense. I actually feel a little better about it now, but the question is still: why is it reasonable to treat the operators as functions/variables when you do things like write an auxiliary equation? I don't see how we can treat D as a variable and manipulate it as such, i.e. factoring, but not in other ways, such as multiplication. I hadn't thought about the commutativity, but I definitely see that as an issue. It still seems a bit sketchy to me in that regard.

    I guess, possibly, it could be a result of botched notation. We write D^2 and mean D(D(y)), but that is only applicable for certain operators. I would call sin an operator, and when we write sin^2(x), we don't mean sin(sin(x)), we actually mean (sin(x))^2.

    One thing that doesn't quite agree:
    When you do this, it's not that you're missing a term; you have an extra term that you don't want.
    ##(D-9)^3y=0 \Rightarrow D[y^{1/3}]-9y^{1/3}=0##
    ##\frac{1}{3}y^{-2/3}y' - 9y^{1/3} = 0##
    There are two problems with that final expression: the 1/3, and the ##2te^{9t}## term coming from y'. If those went away, it would equal 0 for all t. So it's not that I'm missing fD[y], it's that I have fD[y] and it needs to go away, and take that 1/3 with it.
  6. Nov 12, 2014 #5
    Do you think it's reasonable to do this:

    D(y^3) = [D(y)]^3


    Move a derivative inside a power?

    Looks like that's what you did.

    I think of D as if it were a matrix. Functions are a kind of vector because you can add them together and multiply them by scalars. If you don't buy this, then think in the other direction. Isn't a vector just a kind of function? Take the vector (2, 4, -1). You can reinterpret it as a function by setting f(1) = 2, f(2) = 4, f(3) = -1.

    Derivatives are linear (i.e. D(f+g) = Df + Dg and D(cf) = cD(f) for a constant c). How do linear things act on vectors? Like multiplying by a matrix. So, if you are familiar with matrix algebra, everything is there. Non-commutativity and all.

    In fact, if you take differences instead of derivatives and apply them to discrete functions, you get an approximation where you can actually use a matrix explicitly. So if you think that way, your D is approximately a matrix.
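    [Editor's note: a small numpy sketch, not from the original post, of this finite-difference picture: D becomes a difference matrix and "multiply by y(t) = t" becomes a diagonal matrix, and the two visibly fail to commute:]

    ```python
    import numpy as np

    n, h = 6, 0.1
    t = np.arange(n) * h

    # Forward-difference matrix: (Df)_i ~ (f_{i+1} - f_i) / h
    D = (np.eye(n, k=1) - np.eye(n)) / h
    # Multiplication by t is a diagonal matrix
    Y = np.diag(t)

    commutator = D @ Y - Y @ D
    print(np.allclose(commutator, 0))  # False: D and Y do not commute
    ```

    The nonzero commutator is the discrete shadow of the identity [D, t] = 1 from the product rule.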
  7. Nov 12, 2014 #6


    Science Advisor
    Homework Helper

    D does commute with itself and with multiplication by constants, so these manipulations are quite useful for solving linear constant-coefficient equations, whose operators have the form of polynomials in D with constant coefficients, e.g. (D-2)(D-3) = D^2 - 5D + 6.

    You can also use it more generally but you just have to follow the rules.
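    [Editor's note: a quick sympy check, not from the original post, that the factored and expanded constant-coefficient forms really do agree:]

    ```python
    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')(t)

    inner = sp.diff(y, t) - 3*y              # (D-3)y
    factored = sp.diff(inner, t) - 2*inner   # (D-2)[(D-3)y]
    expanded = sp.diff(y, t, 2) - 5*sp.diff(y, t) + 6*y
    print(sp.simplify(factored - expanded))  # 0: the two forms agree
    ```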
  8. Nov 12, 2014 #7


    Homework Helper

    A real linear constant-coefficient differential operator is a member of the polynomial ring [itex]\mathbb{R}[D][/itex]. Such polynomials have a unique factorisation in terms of the linear polynomials [itex]\{ (D - a) : a \in \mathbb{R}\}[/itex] and the irreducible quadratic polynomials [itex]\{((D - a)^2 + b^2) : b \neq 0\}[/itex]. We can find the kernels of these operators fairly easily: bases are [itex]\{e^{ax}\}[/itex] and [itex]\{e^{ax}\cos(bx), e^{ax}\sin(bx)\}[/itex] respectively. From these we can find the kernels of [itex](D - a)^n[/itex], etc., and hence the kernel of any [itex]L \in \mathbb{R}[D][/itex]. (Finding a basis for the kernel of [itex]L[/itex] is equivalent to finding the general solution of [itex]Ly = 0[/itex].)
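    [Editor's note: a sketch, not from the original post, checking these kernel bases with sympy's dsolve; the values a = 2 and b = 3 are arbitrary choices for illustration:]

    ```python
    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Kernel of (D - 2): spanned by e^{2x}
    sol1 = sp.dsolve(y(x).diff(x) - 2*y(x), y(x))
    print(sol1)

    # Kernel of (D^2 + 9): spanned by cos(3x) and sin(3x)
    sol2 = sp.dsolve(y(x).diff(x, 2) + 9*y(x), y(x))
    print(sol2)
    ```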

    A real linear variable-coefficient differential operator is a member of the non-commutative polynomial ring [itex]F[D][/itex] where [itex]F[/itex] is an appropriate ring of differentiable real-valued functions and for [itex]p \in F[/itex], [itex]Dp = pD + p'[/itex]. It's not obvious that such polynomials have unique factorisations in terms of low-order polynomials, or how to obtain them if they do. Thus the operator notation is less useful here.
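    [Editor's note: the commutation rule [itex]Dp = pD + p'[/itex] is just the product rule in operator clothing; a sympy sketch, not from the original post:]

    ```python
    import sympy as sp

    x = sp.symbols('x')
    p = sp.Function('p')(x)
    y = sp.Function('y')(x)

    lhs = sp.diff(p*y, x)                    # (D p) applied to y: differentiate after multiplying by p
    rhs = p*sp.diff(y, x) + sp.diff(p, x)*y  # (p D + p') applied to y
    print(sp.simplify(lhs - rhs))  # 0: the rule Dp = pD + p' holds
    ```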
  9. Nov 12, 2014 #8


    Gold Member

    That's definitely not what I did. I didn't swap exponents. I simply rewrote y as ##(y^{1/3})^3## to perform basic algebraic manipulation on it. In factored notation, there is no exponent on D. The 3 didn't go away, both sides are equal, and one side is 0. So either D-9 = 0, y = 0, or both (algebraically).

    Now that I'm thinking about the commutability of D, it makes a lot more sense. In D-9 there are only D's and constants. So if I had something like ##D^2y-18Dy+81y## I could factor this; the only thing is that we are botching this notation. We're being inconsistent. We are operating under the assumption (understanding?) that ##D^2y## really means ##D[D[y]]##, whereas ##81y## really means ##81\times y##.
    I can see how viewing D as a matrix can help with this (somewhat), at least with the commutability issue. If I had something like ##yD^2y-y18Dy+y81y## you can only factor out one of the y's, and you have to pick the y's either on the left or on the right of each term, because they have different meanings (at least the ones attached to the D's). Looking at D as a matrix of operators (analogous to ##\nabla##), the left y acts as a scalar multiple, whereas the right y acts as an argument. The problem is that this breaks down when you think about dependence. D[y] is definitely dependent on y, so y is not really a constant with respect to D, but in the notation above, the left-hand y would be multiplied through and act as a scalar. I'm not sure, but I think my issue may be that we're using "shorthand" notation and applying manipulations that are more obviously legitimate in long notation (D[y] vs. Dy), whereas other operations are more obviously illegitimate in longhand.

    Another question regarding dependence:
    D is a linear operator. With the exception of functions like ##y=ce^{kt}##, D[y] is non-linearly dependent on y. Maybe I'm looking too far into the phrase "linear operator", and it might just be an English thing, but this seems inconsistent (without looking at the rigorous mathematical definition of linear operator).
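    [Editor's note: a sketch, not from the original post, separating the two senses of "linear" here: D(af + bg) = aDf + bDg always holds, but D[y] need not be a constant multiple of y:]

    ```python
    import sympy as sp

    t, a, b = sp.symbols('t a b')
    f = sp.Function('f')(t)
    g = sp.Function('g')(t)

    # Linearity of the operator (a, b constant in t):
    lin = sp.diff(a*f + b*g, t) - (a*sp.diff(f, t) + b*sp.diff(g, t))
    print(sp.simplify(lin))  # 0: D is linear in its operand

    # But D[y] is not proportional to y in general:
    y = t**3
    print(sp.simplify(sp.diff(y, t) / y))  # 3/t, not a constant
    ```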

    This is actually all sitting better with me now. Thanks everyone who's responded so far.
    Last edited: Nov 12, 2014
  10. Nov 12, 2014 #9
    Oh, I see. So mathwonk was right: it's the non-commutativity that's the problem. D^3y^3 is not equal to DyDyDy = (Dy)^3 because you would have to commute everything to be able to do that.
  11. Nov 12, 2014 #10


    Gold Member

    Yea I'm thinking so. I think I just need to get more familiar with how these work. Or at least forget all this notation, and always write it explicitly.
  12. Nov 12, 2014 #11
    You have to think of multiplication by y as another matrix if you think of D as a matrix. D maps functions to functions and so does multiplication by y. So if you write yDy^3 that's like the matrix expression ABv, where A is the multiplication by y operator, B is the differential operator D, and v is the function f(y) = y^3, interpreted as a vector. In matrix notation, you usually don't bother with stuff like B(v) to multiply the vector v by the matrix B, and it's sort of clear that you don't need to. It's a little ambiguous because you can also express it as ABAAAw, where w is the constant function equal to 1 everywhere and everything else is as before. But the same is true of matrix algebra. You could just think of AAAv as another vector w, and then you're back to ABv, as before.
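    [Editor's note: a numerical sketch, not from the original post, of the ABv remark on a discrete grid: A = multiply-by-y, B = a finite-difference stand-in for D, v = samples of y^3, w = the constant function 1:]

    ```python
    import numpy as np

    n, h = 8, 0.1
    y = np.arange(1, n + 1) * h           # samples of y(t) = t on a grid

    A = np.diag(y)                        # multiplication-by-y operator
    B = (np.eye(n, k=1) - np.eye(n)) / h  # forward-difference stand-in for D
    v = y**3                              # the function y^3 as a vector
    w = np.ones(n)                        # the constant function 1

    # AAAw = diag(y)^3 . 1 = y^3 = v, so ABv and ABAAAw agree exactly.
    print(np.allclose(A @ B @ v, A @ B @ (A @ A @ (A @ w))))  # True
    ```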
  13. Nov 12, 2014 #12


    Gold Member

    I think, more so than looking at D as a matrix, I'm looking at D as a matrix of operators, and looking at those operators acting on the argument (right-hand) y. yDy might be legit, but it's not any more useful than writing
    ## y\left [ \begin{array}{c}
    D_1[y] \\
    D_2[y] \\
    \vdots \\
    D_n[y] \\
    \end{array} \right ]##
  14. Nov 12, 2014 #13
    I don't know where all those other D's would be coming from. I thought there was only one D. An operator itself is "like" a matrix. More properly, it's a linear transformation. If it were acting on a finite dimensional space, it would be represented by a matrix. It's more useful to write something that's more compact.
  15. Nov 12, 2014 #14


    Gold Member

    There is only one D. I was just generalizing. Replace those D's with L's and replace yDy with yLy if you wish.
    It may be more useful for you to write something more compact; for others, it may be more useful to write something more descriptive. Look at Schrödinger's equation. There is a generalized "compact" form, but you can't do anything with it. There's the expanded form, which actually allows you to make calculations, and to me that is more useful than compactness for the sake of compactness.
  16. Nov 12, 2014 #15
    You can generalize if you want, but I don't know that it makes it easier to understand. It's usually easier to understand the simpler case.

    Actually, I prefer to write it both ways. The more compact way reminds me of the concept, which then would help me to remember the one you use for calculations.
  18. Nov 19, 2014 #17

    Stephen Tashi

    Science Advisor

    Why isn't some system of "operator algebras" part of the standard mathematical curriculum? Is it too specialized? Or too trivial to merit recognition as an independent topic? (I'm curious about this, not lobbying for it.)

    Based on the days when I went to university libraries instead of using the internet, there are many dusty engineering-type books about "operator algebras". They all take different approaches. I don't think any system of operator algebra made it into the general curriculum of engineering mathematics. Operator algebras are a topic in pure mathematics, but, as far as I know, that knowledge isn't part of the general graduate-level curriculum. (Of course, the algebraic manipulation of differential operators, Laplace transforms, etc. is taught informally in differential equations classes. I wouldn't call that a formal treatment of operator algebra.)