Linear transformation of a 2nd order pde

In summary, the concept of a linear transformation is introduced, and it is shown that a derivative is a linear transformation of a function. It is also shown that a basis is a set of vectors such that any vector in the space can be written as a finite linear combination of basis vectors.
  • #1
dknight237
First off I am NOT asking you to solve this for me. I'm just trying to understand the concept behind this problem.

Let L be a linear transformation defined by
L[p] = (x^2 + 2)p'' + (x - 1)p' - 4p

I have not seen linear transformations in this format. Usually I see something like L(x) = x_1 b_1 + x_2 b_2 + ..., and they usually give me a basis at least. There's something that I am not seeing here, and I would appreciate a nudge in the right direction.

I've looked through several books and have not found a linear transformation of this type. If you know of an online resource or a book that covers this, please let me know.

thanks
 
  • #2
Surely, any introductory textbook on linear algebra gives the definition of a linear transformation! Part of the point of linear algebra is learning how to deal with such things directly, rather than having to resolve everything into a big mess of minutiae.

But, if you really do insist on working relative to a basis, then pick one! And once you have one, you can compute the action of L on each of your basis vectors, thus giving you its coordinate representation as the matrix you seek (what you wrote is essentially equivalent to writing a matrix).


Incidentally, what vector space are you working with? The above gets a little trickier if it's countably infinite dimensional (e.g. vector space of real polynomials of any degree), and a whole lot trickier if it's uncountably infinite dimensional (e.g. the vector space of all smooth, real functions).
 
  • #3
dknight237 said:
First off I am NOT asking you to solve this for me. I'm just trying to understand the concept behind this problem.

Let L be a linear transformation defined by
L[p] = (x^2 + 2)p'' + (x - 1)p' - 4p

I have not seen linear transformations in this format. Usually I see something like L(x) = x_1 b_1 + x_2 b_2 + ..., and they usually give me a basis at least. There's something that I am not seeing here, and I would appreciate a nudge in the right direction.

I've looked through several books and have not found a linear transformation of this type. If you know of an online resource or a book that covers this, please let me know.

thanks
What is p? It looks to me like it is a function of some kind. As you learned in calculus, (ap + bq)' = ap' + bq', where a and b are constants and p and q are functions. That is: the derivative IS a linear transformation of a function.

You can add functions and you can multiply functions by a number, so the set of all functions, the set of all differentiable functions, and the set of all infinitely differentiable functions are vector spaces. In particular, the derivative and second derivative of an infinitely differentiable function are again infinitely differentiable functions, so differentiation is a linear transformation on that vector space.

Of course, those are all infinite dimensional vector spaces. If you restrict to the space of polynomials, then you can use 1, x, x^2, ... as a basis. Other functions you can write as infinite series of powers (Taylor series) or of sines and cosines (Fourier series) and use those as a basis.
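The linearity rule quoted above, (ap + bq)' = ap' + bq', can be checked symbolically. A minimal sketch using sympy (my choice of tool, not something from the thread), with p and q picked arbitrarily:

```python
# Check that differentiation is linear: (a*p + b*q)' = a*p' + b*q'
# for symbolic constants a, b and sample functions p, q.
import sympy as sp

x, a, b = sp.symbols('x a b')
p = x**3 - 2*x          # arbitrary sample functions, chosen for illustration
q = sp.sin(x) + x**2

lhs = sp.diff(a*p + b*q, x)
rhs = a*sp.diff(p, x) + b*sp.diff(q, x)
print(sp.simplify(lhs - rhs))  # 0, confirming linearity for this pair
```

Any other choice of p and q (within the differentiable functions) gives the same result, which is what makes d/dx a linear transformation rather than a coincidence of these examples.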
 
  • #4
They don't define what p is. I assume that we need to solve for a p by finding the homogeneous solution.

Thanks for the tips guys. I think this might point me in the right direction
 
  • #5
The vector space is P2; that is, L: P2 -> P2.
 
  • #6
HallsofIvy said:
Of course those are all infinite dimensional vector spaces. If you restrict to the space of polynomials, then you can use 1, x, x^2, ... as a basis.

Perhaps as an *infinite* basis. Bases are typically required to be finite.

I was watching some videos on quantum mechanics a few weeks ago, and I kept looking at what the professor was doing with a crick in my neck. He kept talking about those engineering abominations known as "Dirac deltas" (the set of which forms something very much like a basis on a function space), and the entire time, I kept thinking about how the axiom of choice had come back to haunt me! I kept thinking, "well, any function in the space is the sum of a finite number of functions you can't solve for." It made me uneasy, to say the least!
 
  • #7
Tac-Tics said:
Perhaps as an *infinite* basis. Bases are typically required to be finite.
Only if you are requiring finite dimensional vector spaces! And the space of all polynomials is NOT finite dimensional.

Perhaps you are thinking of the distinction between a "Hamel basis" and a "Schauder basis".

Given any vector space, there exists a Hamel basis: that is, a set of vectors such that any vector in the space can be written as a finite linear combination of basis vectors. That doesn't require the number of vectors in the basis to be finite, just that all but a finite number of the coefficients be 0.

A "Schauder basis" allows infinite sums (and so, of course, requires that the vector space be given a topology). Fourier series and Taylor series are examples.

The example I gave, the set of all polynomials in x with the infinite basis {1, x, x^2, ...}, is a basis in the first sense, since any given polynomial has a highest power and so requires only a finite number of those powers of x. But since there is no upper bound on the degree of a polynomial, and any finite set of polynomials has a highest power, the basis must contain an infinite number of vectors.

I was watching some videos on quantum mechanics a few weeks ago, and I kept looking at what the professor was doing with a crick in my neck. He kept talking about those engineering abominations known as "Dirac deltas" (the set of which forms something very much like a basis on a function space), and the entire time, I kept thinking about how the axiom of choice had come back to haunt me! I kept thinking, "well, any function in the space is the sum of a finite number of functions you can't solve for." It made me uneasy, to say the least!
I don't understand what you are saying here.
 
  • #8
I think I figured it out

The basis for L: P2 -> P2 is of course {1, x, x^2}.

Then you would simply plug each basis vector into the linear transformation.

For example, L(x) = (x^2 + 2)(0) + (x - 1)(1) - 4x = -3x - 1, since the second derivative of x is 0.

And from that point on you could find the matrix, and bases for the image and the null space, by plugging the basis of P2 in for p (p = 1, p = x, p = x^2).

Tell me if I am completely off the mark.
 
  • #9
Strictly speaking, L[p] is not a linear transformation in general; it is only a method of computation that resembles one.

It is possible to regard p, p', p'' as infinite series, but I think it is more natural to regard this notation as a somewhat loose usage.
 
  • #10
No, dknight237 did, eventually, tell us that p is from P2, the vector space of quadratic polynomials a + bx + cx^2. L(p) is a linear transformation on that space. It is not an invertible linear transformation: its kernel is a one dimensional subspace of P2.

With L[p] = (x^2 + 2)p'' + (x - 1)p' - 4p, we get L(a + bx + cx^2) = (x^2 + 2)(2c) + (x - 1)(2cx + b) - 4(a + bx + cx^2) = (2c + 2c - 4c)x^2 + (-2c + b - 4b)x + (4c - b - 4a) = (-2c - 3b)x + (4c - b - 4a). The null space (kernel) is the set of all a + bx + cx^2 such that -2c - 3b = 0 and 4c - b - 4a = 0.

Yes, if you take 1, x, x^2 as a basis, then L(1) = -4, L(x) = -1 - 3x, and L(x^2) = (x^2 + 2)(2) + (x - 1)(2x) - 4x^2 = 4 - 2x.

The matrix representation is
[tex]\left[\begin{array}{ccc}-4 & -1 & 4 \\0 & -3 & -2\\ 0 & 0 & 0\end{array}\right][/tex]
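This computation can be double-checked symbolically. A minimal sketch using sympy (my choice of tool, not part of the thread), building the matrix column by column from L(1), L(x), L(x^2):

```python
# Recompute the matrix of L[p] = (x^2+2)p'' + (x-1)p' - 4p
# on P2 in the basis {1, x, x^2}.
import sympy as sp

x = sp.symbols('x')

def L(p):
    # The operator from the thread.
    return sp.expand((x**2 + 2)*sp.diff(p, x, 2) + (x - 1)*sp.diff(p, x) - 4*p)

basis = [sp.Integer(1), x, x**2]
# Column j holds the coordinates of L(basis[j]) in {1, x, x^2}:
# entry (i, j) is the coefficient of x^i in L(basis[j]).
M = sp.Matrix(3, 3, lambda i, j: L(basis[j]).coeff(x, i))
print(M)                   # Matrix([[-4, -1, 4], [0, -3, -2], [0, 0, 0]])
print(len(M.nullspace()))  # 1, confirming the kernel is one dimensional
```

The zero bottom row reflects the fact that L drops the degree: no quadratic term survives, so L is not invertible, as noted above.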

I'm not sure what Jang Jin Hong means. Perhaps it is a translation problem.
 
  • #11
HallsofIvy said:
a set of vectors such that any vector in the space can be written as a finite linear combination of basis vectors - but that doesn't require the number of vectors in the basis to be finite

Math is so damn subtle sometimes! I am going to have to go back to my real analysis book and go over the definition of a Hamel basis again.
 
  • #12
My English is bad; please excuse the broken English.

In the case of finite algebraic polynomials, that kind of derivative formula poses no problems, but the method can be dangerous in some cases.

Every derivative is defined as a limit, and we can define a function P that itself contains a limit, so dP/dx involves two limits. In real analysis, limits do not always commute. In the linear transformation above, you differentiate the individual terms first and then sum, but in some cases that procedure is invalid (differentiating an infinite series term by term need not give the derivative of its sum).

I read about this in the following real analysis textbook:
David Bressoud, A Radical Approach to Real Analysis, chapter 5
 
  • #13
This is a perfect solution to the problem; it saved my life.

Thanks!
 
  • #14
The solution is quite interesting, and I am facing the same problem now in my course!
Where did you find this question?
 
  • #15
Jang Jin Hong said:
My English is bad; please excuse the broken English.

In the case of finite algebraic polynomials, that kind of derivative formula poses no problems, but the method can be dangerous in some cases.

Every derivative is defined as a limit, and we can define a function P that itself contains a limit, so dP/dx involves two limits. In real analysis, limits do not always commute. In the linear transformation above, you differentiate the individual terms first and then sum, but in some cases that procedure is invalid.

I read about this in the following real analysis textbook:
David Bressoud, A Radical Approach to Real Analysis, chapter 5
Yes, limits do not always commute and derivatives do not commute. But then simple linear transformations (like multiplication by matrices) do not commute either. I see no problem here.
 

1. What is a linear transformation of a 2nd order pde?

A linear transformation of a 2nd order pde (partial differential equation) is a mathematical process that involves changing the variables in the equation in order to simplify or solve it. This transformation can help to make the equation more manageable or to find a solution that is easier to work with.

2. How is a linear transformation different from other transformations?

A linear transformation is different from other transformations because it involves changing the variables in a linear manner. This means that the transformation can be represented using a matrix, and the resulting equation will still be in the form of a pde. Other transformations, such as nonlinear or nonhomogeneous transformations, may result in a different type of equation.

3. What are some common examples of linear transformations of 2nd order pdes?

Some common examples of linear transformations of 2nd order pdes include change of variables and separation of variables. These transformations can be used to simplify the equation or to find a solution that is easier to work with.

4. Can a linear transformation change the type of equation?

No, a linear transformation cannot change the type of equation. In other words, if the original equation is a 2nd order pde, the resulting equation after transformation will also be a 2nd order pde. However, the transformation can make the equation easier to solve or to handle.

5. How can a linear transformation be applied in real-world problems?

A linear transformation can be applied in real-world problems to simplify or solve pdes that represent physical phenomena. For example, in heat transfer problems, a linear transformation can be used to change the variables and make the equation easier to solve for the temperature distribution in a given system. This can then be applied to design more efficient heating or cooling systems.
