# Linear transformation of a 2nd order pde

1. Oct 13, 2008

### dknight237

First off I am NOT asking you to solve this for me. I'm just trying to understand the concept behind this problem.

Let L be a linear transformation defined by
L[p] = (x^2 + 2)p'' + (x - 1)p' - 4p

I have not seen linear transformations in this format. Usually I see something like L(x) = x_1 b_1 + x_2 b_2 + ..., and they usually give me a basis at least. There's something I am not seeing here, and I would appreciate a nudge in the right direction.

I've looked through several books and have not found a linear transformation of this type. If you know of an online resource or a book that covers this, please do let me know.

thanks

2. Oct 13, 2008

### Hurkyl

Staff Emeritus
Surely, any introductory textbook on linear algebra gives the definition of a linear transformation! Part of the point of linear algebra is learning how to deal with such things directly, rather than having to resolve them into a big mess of minutiae.

But, if you really do insist on working relative to a basis, then pick one! And once you have one, you can compute the action of L on each of your basis vectors, thus giving you its coordinate representation as the matrix you seek (what you wrote is essentially equivalent to writing a matrix).

Incidentally, what vector space are you working with? The above gets a little trickier if it's countably infinite dimensional (e.g. vector space of real polynomials of any degree), and a whole lot trickier if it's uncountably infinite dimensional (e.g. the vector space of all smooth, real functions).

3. Oct 14, 2008

### HallsofIvy

What is p? It looks to me like it is a function of some kind. As you learned in calculus, (ap + bq)' = ap' + bq', where a and b are constants and p and q are functions. That is: the derivative IS a linear transformation of a function.

You can add functions and you can multiply functions by a number, so the set of all functions, the set of all differentiable functions, and the set of all infinitely differentiable functions are vector spaces. In particular, the first and second derivatives of infinitely differentiable functions are again infinitely differentiable functions, so differentiation defines a linear transformation on that vector space.

Of course those are all infinite dimensional vector spaces. If you restrict to the space of polynomials, then you can use 1, x, x^2, ... as a basis. Other functions you can write as infinite series of powers (Taylor series) or of sines and cosines (Fourier series) and use those as a basis.
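[Editor's note: to make the "linear map as a matrix" idea concrete, here is a small sketch of my own (not from the thread), using the sympy library, that builds the matrix of the derivative operator relative to the basis {1, x, x^2} of the quadratic polynomials.]

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2]  # basis of the quadratic polynomials

# Column j of the matrix holds the coordinates of d/dx applied to
# the j-th basis vector, expressed in that same basis.
cols = []
for b in basis:
    db = sp.diff(b, x)
    cols.append([db.coeff(x, k) for k in range(3)])  # (1, x, x^2) coords

D = sp.Matrix(3, 3, lambda i, j: cols[j][i])
print(D)  # d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x
```

The same recipe (apply the map to each basis vector, record the coordinates as a column) works for any linear map once a basis is fixed, including the L in this thread.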

4. Oct 14, 2008

### dknight237

They don't define what p is. I assume that we need to solve for a p by solving for the homogeneous solution.

Thanks for the tips guys. I think this might point me in the right direction

5. Oct 14, 2008

### dknight237

The vector space is given by L: P2 -> P2.

6. Oct 14, 2008

### Tac-Tics

Perhaps as an *infinite* basis. Bases are typically required to be finite.

I was watching some videos on quantum mechanics a few weeks ago, and I kept looking at what the professor was doing with a crick in my neck. He kept talking about those engineering abominations known as "Dirac deltas" (the set of which forms something very much like a basis on a function space), and the entire time I kept thinking about how the axiom of choice had come back to haunt me! I kept thinking, "well, any function in the space is the sum of a finite number of functions you can't solve for." It made me uneasy, to say the least!

7. Oct 14, 2008

### HallsofIvy

Only if you are requiring finite dimensional vector spaces! And the space of all polynomials is NOT finite dimensional.

Perhaps you are thinking of the distinction between a "Hamel basis" and a "Schauder basis".

Given any vector space, there exists a Hamel basis: that is, a set of vectors such that any vector in the space can be written as a finite sum of the basis vectors. That doesn't require the number of vectors in the basis to be finite: just that all but a finite number of the coefficients be 0.

A "Schauder basis" allows infinite sums (and so, of course, requires that the vector space be given a topology). Fourier series and Taylor series are examples.

The example I gave, the set of all polynomials in x with the infinite basis {1, x, x^2, ...}, is a "basis" in the first sense, since any given polynomial has a highest power and so requires only a finite number of those powers of x. But since there is no upper bound on the highest power of a polynomial, and any finite set of polynomials has a highest power, the basis must contain an infinite number of vectors.

I don't understand what you are saying here.

8. Oct 14, 2008

### dknight237

I think I figured it out

The basis of P2 (for L: P2 -> P2) is of course {1, x, x^2}.

Then you would simply plug each basis vector into the linear transformation.

For example, L(x) = 0 (since the second derivative of x is 0) + (x - 1)(1) - 4x = -1 - 3x.

From that point on you could find the matrix, and bases of polynomials for the image and the null space, by plugging the basis of P2 in for p (p = 1, p = x, p = x^2).

Tell me if I am completely off the mark.

9. Oct 15, 2008

### Jang Jin Hong

Strictly speaking, L[p] is not a linear transformation in general; it is only a method of computation that resembles one.

It is possible to regard p, p', p'' as infinite series, but I think it is more natural to regard this notation as a somewhat loose usage.

10. Oct 16, 2008

### HallsofIvy

No, dknight237 did, eventually, tell us that p is from P2, the vector space of quadratic polynomials a + bx + cx^2. L(p) is a linear transformation on that space. It is not an invertible linear transformation: its kernel is a one-dimensional subspace of P2.

With $L[p] = (x^2 + 2)p'' + (x - 1)p' - 4p$, we get
$$L(a + bx + cx^2) = (x^2 + 2)(2c) + (x - 1)(2cx + b) - 4(a + bx + cx^2)$$
$$= (2c + 2c - 4c)x^2 + (-2c + b - 4b)x + (4c - b - 4a) = (-2c - 3b)x + (4c - b - 4a).$$
The null space (kernel) is the set of $a + bx + cx^2$ such that $-2c - 3b = 0$ and $4c - b - 4a = 0$.
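[Editor's note: as a sanity check of my own, not part of the original post, one can pick a polynomial satisfying those two kernel conditions and verify that L sends it to zero. Choosing c = 3 forces b = -2 and a = 7/2:]

```python
import sympy as sp

x = sp.symbols('x')

def L(p):
    # L[p] = (x^2 + 2) p'' + (x - 1) p' - 4 p
    return sp.expand((x**2 + 2) * sp.diff(p, x, 2)
                     + (x - 1) * sp.diff(p, x) - 4 * p)

# Kernel conditions: -2c - 3b = 0 and 4c - b - 4a = 0.
# With c = 3 we get b = -2 and a = 7/2.
p = sp.Rational(7, 2) - 2*x + 3*x**2
print(L(p))  # → 0
```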

Yes, if you take 1, x, x^2 as a basis, then L(1) = -4, L(x) = -1 - 3x, and L(x^2) = (x^2 + 2)(2) + (x - 1)(2x) - 4x^2 = 4 - 2x.

The matrix representation is
$$\left[\begin{array}{ccc}-4 & -1 & 4 \\0 & -3 & -2\\ 0 & 0 & 0\end{array}\right]$$
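[Editor's note: a quick way to check this matrix, in a sketch of my own using sympy, is to apply L to a general quadratic and compare its coefficient vector with the matrix-vector product:]

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
p = a + b*x + c*x**2

# L[p] = (x^2 + 2) p'' + (x - 1) p' - 4 p
Lp = sp.expand((x**2 + 2) * sp.diff(p, x, 2)
               + (x - 1) * sp.diff(p, x) - 4 * p)

M = sp.Matrix([[-4, -1,  4],
               [ 0, -3, -2],
               [ 0,  0,  0]])
v = sp.Matrix([a, b, c])   # coordinates of p in the basis {1, x, x^2}
Mv = M * v                 # should be the coordinates of L(p)

# Compare coefficient by coefficient: constant, x, x^2.
for k in range(3):
    assert sp.simplify(Lp.coeff(x, k) - Mv[k]) == 0
print("matrix agrees with L on a general quadratic")
```

The last row of zeros is what makes the kernel nontrivial: no input produces an x^2 term in the output.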

I'm not sure what Jang Jin Hong means. Perhaps it is a translation problem.

11. Oct 16, 2008

### Tac-Tics

Math is so damn subtle sometimes! I am going to have to go back to my real analysis book and go over the definition of a Hamel basis again.

12. Oct 16, 2008

### Jang Jin Hong

My English is bad; I can only write broken English.

In the case of finite algebraic polynomials, that kind of derivative format poses no problems.

But that kind of method seems to be dangerous in some cases.

Every derivative can be defined via a limit, and we can define a function P that itself contains a limit, so dP/dx contains two limits. In real analysis, limits do not always commute. In the linear transformation above, one differentiates the individual terms first and then sums, but in some cases that procedure is not valid.

David Bressoud, A Radical Approach to Real Analysis, chapter 5

Last edited: Oct 16, 2008
13. Oct 31, 2009

### sas2031

This is a perfect solution to the problem; it saved my life.

thanks

14. Oct 31, 2009

### sas2031

The solution is quite interesting, and I am facing the same problem now in my course!
Where did you find this question?

15. Oct 31, 2009

### HallsofIvy

Yes, limits do not always commute and derivatives do not commute. But then simple linear transformations (like multiplication by matrices) do not commute either. I see no problem here.