
Linear transformation of a 2nd order PDE

  1. Oct 13, 2008 #1
    First off I am NOT asking you to solve this for me. I'm just trying to understand the concept behind this problem.

    Let L be a linear transformation defined by
    L[p] = (x^2+2)p'' + (x-1)p' - 4p

    I have not seen linear transformations in this format. Usually I see something like L(x) = x_1 b_1 + x_2 b_2 + ..., and they usually give me at least a basis. There's something that I am not seeing here, and I would appreciate a nudge in the right direction.

    I've looked through several books and have not found a linear transformation of this type. If you know of an online resource or a book that covers this, please do let me know.

    thanks
     
  3. Oct 13, 2008 #2

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    Surely, any introductory textbook on linear algebra gives the definition of a linear transformation! Part of the point of linear algebra is learning how to deal with such things directly, rather than having to resolve them into a big mess of minutiae.

    But, if you really do insist on working relative to a basis, then pick one! And once you have one, you can compute the action of L on each of your basis vectors, thus giving you its coordinate representation as the matrix you seek (what you wrote is essentially equivalent to writing a matrix).
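
    In symbols, writing [v]_B for the coordinate column of a vector v in your chosen basis B = {e_1, ..., e_n}, the usual recipe builds the matrix column by column:
    [tex]M = \left[\begin{array}{cccc}[L(e_1)]_B & [L(e_2)]_B & \cdots & [L(e_n)]_B\end{array}\right][/tex]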


    Incidentally, what vector space are you working with? The above gets a little trickier if it's countably infinite dimensional (e.g. vector space of real polynomials of any degree), and a whole lot trickier if it's uncountably infinite dimensional (e.g. the vector space of all smooth, real functions).
     
  4. Oct 14, 2008 #3

    HallsofIvy

    Staff Emeritus
    Science Advisor

    What is p? It looks to me like it is a function of some kind. As you learned in calculus, (ap + bq)' = ap' + bq' where a and b are constants and p and q are functions. That is: the derivative IS a linear transformation on functions.
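
    Spelled out for the L in the original post, with a and b constants and p and q twice-differentiable functions, linearity of the derivative (and of multiplication by a fixed function) gives
    [tex]L[ap+bq]=(x^2+2)(ap+bq)''+(x-1)(ap+bq)'-4(ap+bq)=a\,L[p]+b\,L[q][/tex]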

    You can add functions and you can multiply functions by a number so the set of all functions, the set of all differentiable functions, and the set of all infinitely differentiable functions are vector spaces. In particular, the derivative and second derivative of infinitely differentiable functions are again infinitely differentiable functions so they are linear transformations on that vector space.

    Of course, those are all infinite dimensional vector spaces. If you restrict to the space of polynomials, then you can use 1, x, x^2, ... as a basis. Other functions you can write in terms of infinite series of powers (Taylor series) or of sines and cosines (Fourier series) and use those as a basis.
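
    As a small concrete case: on P2 with basis {1, x, x^2}, the derivative sends 1 to 0, x to 1, and x^2 to 2x, so in that basis d/dx is represented by the matrix
    [tex]\left[\begin{array}{ccc}0 & 1 & 0\\0 & 0 & 2\\0 & 0 & 0\end{array}\right][/tex]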
     
  5. Oct 14, 2008 #4
    They don't define what p is. I assume that we need to solve for a p by finding the homogeneous solution.

    Thanks for the tips guys. I think this might point me in the right direction
     
  6. Oct 14, 2008 #5
    The vector space is P2; L maps P2 -> P2.
     
  7. Oct 14, 2008 #6
    Perhaps as an *infinite* basis. Bases are typically required to be finite.

    I was watching some videos on quantum mechanics a few weeks ago, and I kept looking at what the professor was doing with a crick in my neck. He kept talking about those engineering abominations known as "Dirac deltas" (the set of which forms something very much like a basis on a function space), and the entire time I kept thinking about how the axiom of choice had come back to haunt me! I kept thinking, "well, any function in the space is the sum of a finite number of functions you can't solve for." It made me uneasy, to say the least!
     
  8. Oct 14, 2008 #7

    HallsofIvy

    Staff Emeritus
    Science Advisor

    Only if you are requiring finite dimensional vector spaces! And the space of all polynomials is NOT finite dimensional.

    Perhaps you are thinking of the distinction between a "Hamel basis" and a "Schauder basis".

    Given any vector space, there exists a (Hamel) basis: that is, a set of vectors such that any vector in the space can be written as a finite sum of the basis vectors. That doesn't require the number of vectors in the basis to be finite: just that all but a finite number of the coefficients be 0.

    A "Schauder basis" allows infinite sums (and so, of course, requires that the vector space be given a topology). Fourier series and Taylor series are examples.

    The example I gave, the set of all polynomials in x with the infinite basis {1, x, x^2, ...}, is a basis in the first sense, since any given polynomial has a highest power and so requires only a finite number of those powers of x. But since there is no upper bound on the highest power of a polynomial, and any finite set of polynomials has a highest power among them, the basis must contain an infinite number of vectors.

    I don't understand what you are saying here.
     
  9. Oct 14, 2008 #8
    I think I figured it out

    the basis for P2 (with L: P2 -> P2) is of course {1, x, x^2}

    Then you would simply take each basis vector and plug it into the linear transformation

    for example, L(x) = 0 (since the second derivative of x is 0) + (x-1) - 4x
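
    Writing that out and collecting terms:
    [tex]L(x)=(x^2+2)\cdot 0+(x-1)\cdot 1-4x=-3x-1[/tex]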

    and from that point on you could find the matrix, and bases for the image and the null space, by plugging the basis vectors of P2 in for p (p=1, p=x, p=x^2)

    Tell me if I am completely off the mark.
     
  10. Oct 15, 2008 #9
    Strictly speaking, L[p] is not a linear transformation in general; it is only a method of computation that resembles a linear transformation.

    It is possible to regard p, p', p'' as infinite series, but I think it is more natural to regard this notation as a somewhat loose usage.
     
  11. Oct 16, 2008 #10

    HallsofIvy

    Staff Emeritus
    Science Advisor

    No, dknight235 did, eventually, tell us that p is from P2, the vector space of quadratic polynomials a + bx + cx^2. L(p) is a linear transformation on that space. It is not an invertible linear transformation: its kernel is a one dimensional subspace of P2.

    With L[p] = (x^2+2)p'' + (x-1)p' - 4p, we get L(a + bx + cx^2) = (x^2+2)(2c) + (x-1)(2cx + b) - 4(a + bx + cx^2) = (2c + 2c - 4c)x^2 + (-2c + b - 4b)x + (4c - b - 4a) = (-2c - 3b)x + (4c - b - 4a). The null space (kernel) is the set of a + bx + cx^2 such that -2c - 3b = 0 and 4c - b - 4a = 0.
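
    Solving those two equations, say with c as the free parameter, gives b = -2c/3 and a = 7c/6, so the kernel is indeed one dimensional:
    [tex]\ker L = \text{span}\left\{6x^2- 4x+ 7\right\}[/tex]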

    Yes, if you take 1, x, x^2 as a basis, then L(1) = -4, L(x) = -1 - 3x, and L(x^2) = (x^2 + 2)(2) + (x-1)(2x) - 4(x^2) = 4 - 2x.

    The matrix representation is
    [tex]\left[\begin{array}{ccc}-4 & -1 & 4 \\0 & -3 & -2\\ 0 & 0 & 0\end{array}\right][/tex]
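
    If you want to double-check that arithmetic with a computer algebra system, here is a rough sketch using sympy (the function and variable names are just my own choices):
    [code]
    # Rough check of the matrix of L on P2 in the basis {1, x, x^2}.
    from sympy import symbols, diff, expand, Matrix, S

    x = symbols('x')

    def L(p):
        # L[p] = (x^2 + 2) p'' + (x - 1) p' - 4 p
        return expand((x**2 + 2)*diff(p, x, 2) + (x - 1)*diff(p, x) - 4*p)

    basis = [S(1), x, x**2]   # basis of P2

    # Entry k of each inner list is the coefficient of x^k in L(e),
    # i.e. the coordinates of L(e) in {1, x, x^2}; transposing makes
    # those coordinate vectors the columns of the matrix, as above.
    rows = [[L(e).coeff(x, k) for k in range(3)] for e in basis]
    M = Matrix(rows).T
    print(M)   # Matrix([[-4, -1, 4], [0, -3, -2], [0, 0, 0]])
    [/code]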

    I'm not sure what Jang Jin Hong means. Perhaps it is a translation problem.
     
  12. Oct 16, 2008 #11
    Math is so damn subtle sometimes! I am going to have to go back to my real analysis book and go over the definition of a Hamel basis again.
     
  13. Oct 16, 2008 #12
    My English is bad, so I can only write broken English.

    In the case of finite algebraic polynomials, that kind of derivative format causes no problems, but that kind of method seems to be dangerous in some cases.

    Every derivative can be defined as a limit, and we can define a function P which itself contains a limit, so dP/dx contains two limits. In real analysis, limits do not always commute. In the linear transform above, you differentiate the individual terms first and then sum, but in some cases that procedure is not valid.

    I read about that in the following real analysis textbook:
    David Bressoud, A Radical Approach to Real Analysis, chapter 5
     
    Last edited: Oct 16, 2008
  14. Oct 31, 2009 #13
    It is a perfect solution to the problem; it saved my life.

    thanks
     
  15. Oct 31, 2009 #14
    The solution is quite interesting, and I am facing the same problem now in my course!
    Where did you find this question?
     
  16. Oct 31, 2009 #15

    HallsofIvy

    Staff Emeritus
    Science Advisor

    Yes, limits do not always commute, and neither do differential operators in general. But then simple linear transformations (multiplication by matrices, for example) do not commute either. I see no problem here.
     