# Solving Linear Second Order PDE with Mixed Derivative Term?

In summary, the conversation discusses the possibility of solving a PDE with real coefficients A, B, C, D, and known initial values. The equation is parabolic, meaning B^2-AC=0, and the goal is to find u_{xy} and u_{yy} from the function u. The conversation also mentions the use of a linear change of variables and a similarity transformation matrix S to find the general solution and apply boundary conditions. However, the process for obtaining u_{xy} is not fully explained.
meatpuppet
I've tried and failed to search for this on the forum, so apologies if this has been answered many times before.

Given a variable $$u$$ which is a function of $$x$$ and $$y$$:
$$u = u\left(x,y\right)\\$$

is it possible to solve the PDE:

$$Au_{xx} + 2Bu_{xy} + Cu_{yy} = D\\$$

The knowns are:

The real coefficients:
$$A,~B,~C,~D$$

the initial values
$$u(x_0,y_0)$$
$$u_x(x_0,y_0)$$
$$u_y(x_0,y_0)$$
$$u_{xx}(x_0,y_0)$$

and the values of the following along the line $$y=y_0$$:

$$u(x,y_0)$$
$$u_x(x,y_0)$$
$$u_{xx}(x,y_0)$$

The coefficients $$A,~B,~C$$ are such that the equation is parabolic, i.e. $$B^2 - AC = 0$$.

The quantities I am trying to obtain are $$u_{xy},u_{yy}$$, but these can be back derived from the function $$u$$ if it can be obtained.

I haven't followed through with all the boundary conditions, but by the looks of it, it is possible.
Let's first consider the homogeneous equation,
$$a u_{xx}(x, y) + 2 b u_{x y}(x, y) + c u_{yy}(x, y) = 0$$
Since there are only second-order derivatives, it is easy to see that anything linear in x and y, in general
$$u(x, y) = c_1 + c_2 x + c_3 y$$
will solve the homogeneous equation for arbitrary constants $$c_1, c_2, c_3$$.

For the particular solution, try something quadratic, i.e.
$$u(x, y) = d_1 x^2 + d_2 y^2,$$
and straightforward algebra determines $$d_1, d_2$$ in terms of a, c, and D (substituting gives $$2a d_1 + 2c d_2 = D$$, since the mixed derivative of this ansatz vanishes).

Given the available conditions, assuming the solution:

$$u = c_1 + c_2x + c_3y + c_4xy + c_5x^2 + c_6y^2$$

It is not possible to find either $$c_4$$ or $$c_6$$ from the given data.
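To see the underdetermination concretely: substituting the quadratic ansatz makes every second derivative a constant, so the PDE collapses to a single linear constraint on $$c_4, c_5, c_6$$, leaving two of them free. A minimal sketch in Python (the coefficient values A=1, B=2, C=4, D=10 are an arbitrary parabolic illustration, not the original problem's):

```python
# Substituting u = c1 + c2*x + c3*y + c4*x*y + c5*x**2 + c6*y**2 into
# A*u_xx + 2*B*u_xy + C*u_yy = D: the second derivatives are the constants
# u_xx = 2*c5, u_xy = c4, u_yy = 2*c6, so the PDE reduces to the single
# linear constraint 2*A*c5 + 2*B*c4 + 2*C*c6 = D.

def residual(A, B, C, D, c4, c5, c6):
    """PDE residual of the quadratic ansatz (a constant, independent of x, y)."""
    u_xx, u_xy, u_yy = 2 * c5, c4, 2 * c6
    return A * u_xx + 2 * B * u_xy + C * u_yy - D

# Illustrative parabolic coefficients (B**2 - A*C == 0) and source term:
A, B, C, D = 1.0, 2.0, 4.0, 10.0

# Two different (c4, c6) pairs; in each case c5 is chosen to satisfy the constraint.
solutions = []
for c4, c6 in [(0.0, 0.0), (1.0, 0.5)]:
    c5 = (D - 2 * B * c4 - 2 * C * c6) / (2 * A)
    solutions.append((c4, c5, c6))
    assert abs(residual(A, B, C, D, c4, c5, c6)) < 1e-12

# Both triples satisfy the PDE, so the equation alone cannot pin down c4 and c6.
```

Both coefficient triples solve the same PDE, which is exactly why extra boundary data is needed to determine $$c_4$$ and $$c_6$$.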

You can also find a linear change of variables to "diagonalize" the 2nd order differential operator: $$\mathbf{D}u = (\partial_x, \partial_y) \left( \begin{array}{c c} a & b\\ b & c\end{array}\right)\left( \begin{array}{c} \partial_x \\ \partial_y \end{array}\right) \cdot u$$

In vector form:
$$\mathbf{D}u = \nabla_{(x,y)} M \nabla_{(x,y)}^T u$$
Find the similarity transformation matrix S such that:
$$S^{-1}MS$$ is diagonal.

Then:
$$\mathbf{d}u = \nabla_{(x,y)} u\left(\begin{array}{c}dx \\ dy \end{array}\right) = \nabla_{(x,y)} u\cdot \mathbf{d\vec{x}}$$
$$= \nabla_{(x',y')}u\cdot \mathbf{d\vec{x}}'$$
Hence the corresponding change of variable is:
$$\nabla_{(x',y')}=\nabla_{(x,y)}S^{-1}$$
$$\mathbf{\vec{x}}' = S\mathbf{\vec{x}}$$

(The T superscript stands for transpose. The gradient operator is written as a row vector and the coordinate vector and coordinate differential are written as column vectors.)

To find S you simply find the eigen-values and eigen-vectors of the matrix M. (S is the matrix with columns corresponding to your column vector eigen-basis. And I may have S and S inverse swapped above so double check.)

Once you find your set of general solutions you can transform back to original coordinates and apply your boundary conditions.

Now given your equation is parabolic, this means the matrix M is degenerate (but also symmetric, hence normal, so the spectral theorem applies), so it will have one eigen-value of 0 (and the other non-zero) and thus your diagonalized form will be:
$$u_{x'x'} = 0$$

This transformation method may be a bit more "power tool" than you need for a specific problem, but you'll find it helps you understand what's going on, i.e. the parabolic condition means the 2nd order differential operator D is projecting out one dimension, and so you really only have an ODE in one variable (within the two dimensions (x,y)).

You can "shortcut" the above diagonalization process by applying your 2nd order differential operator to f(px + qy) and finding p and q such that you get either
$$\mathbf{D}f(px+qy) = 0$$
or
$$\mathbf{D}f(px+qy) = k f''(px+qy), \quad k \ne 0$$

You'll get:
$$\mathbf{D}f(px+qy) = (ap^2 + 2bpq + cq^2 )f'' (px+qy)$$
For the null solution you have a free variable:
$$y' = p_0 x + q_0 y$$
and the non-null solution:
$$x' = p_1 x + q_1 y$$

$$u(x,y) = (c_0 + c_1 x') f(y')$$
with f an arbitrary function.
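The shortcut amounts to reading off the quadratic symbol $$ap^2 + 2bpq + cq^2$$ of the operator. A minimal sketch, assuming the illustrative parabolic coefficients a=1, b=2, c=4 (not the thread's actual ω-dependent ones):

```python
def symbol(a, b, c, p, q):
    """Coefficient of f'' when the operator D is applied to f(p*x + q*y)."""
    return a * p * p + 2 * b * p * q + c * q * q

# Illustrative parabolic coefficients (b**2 - a*c == 0):
a, b, c = 1.0, 2.0, 4.0

# Null direction: a*p**2 + 2*b*p*q + c*q**2 = 0 with q = 1 gives the
# (repeated, since parabolic) root p = -b/a.
p0, q0 = -b / a, 1.0
assert symbol(a, b, c, p0, q0) == 0.0     # D f(-2x + y) = 0

# Any direction not parallel to (p0, q0) serves as the non-null coordinate:
p1, q1 = 1.0, 2.0
k = symbol(a, b, c, p1, q1)               # D f(x + 2y) = k * f''(x + 2y)
assert k == 25.0
```

The repeated root is the hallmark of the parabolic case: there is only one null direction, which becomes the "free" variable $$y'$$.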

jambaugh said:
You can also find a linear change of variables to "diagonalize" the 2nd order differential operator:
$$\mathbf{D}u = (\partial_x, \partial_y) \left( \begin{array}{c c} a & b\\ b & c\end{array}\right)\left( \begin{array}{c} \partial_x \\ \partial_y \end{array}\right) \cdot u$$
[...] Now given your equation is parabolic, this means the matrix M is degenerate, so it will have one eigen-value of 0 (and the other non-zero) and thus your diagonalized form will be:
$$u_{x'x'} = 0$$
I follow everything up to this point, S in my case being

$$S=\left[ \begin{array}{c c} -\cot\omega & \tan\omega\\ 1 & 1\end{array}\right]$$

However, thereafter I'm lost. Or rather, I don't understand how to apply the knowledge of S to the system to be able, in the end, to derive $$u_{xy}$$ (or $$u_{yy}$$).

meatpuppet said:
I follow everything up to this point, S in my case being $$S=\left[ \begin{array}{c c} -\cot\omega & \tan\omega\\ 1 & 1\end{array}\right]$$ [...] I don't understand how to apply the knowledge of S to the system to be able, in the end, to derive $$u_{xy}$$ (or $$u_{yy}$$).
What are your a, b and c coefficients?

If this is your form for S then you should find that in the new coordinates:
$$x' = -x\cot(\omega) + y\tan(\omega)$$
$$y' = x+y$$

the differential equation should be much simpler. (Unless we have S and S transpose, or S and S inverse mixed up. Give me a, b, and c and I can double check.)

I made one error here. The transformed differential operator should be:
$$S M S^T$$
hmmm... this may complicate the diagonalization process... we're diagonalizing a rank-2 symmetric tensor, not a rank (1,1) tensor (= an operator).

I think I may have goofed here. Let me work through a concrete example and post it in a bit.

Ok, so in my system:

$$\sin^2 \omega u_{xx} + 2 \sin \omega \cos \omega u_{xy} + \cos^2 \omega u_{yy} = D$$

This gives:

$$M = \left[ \begin{array}{c c} \sin^2\omega & \sin\omega\cos\omega\\ \sin\omega\cos\omega & \cos^2\omega\end{array}\right]$$

which has:

$$S = \left[ \begin{array}{c c} -\cot\omega & \tan\omega\\ 1 & 1\end{array}\right]$$

Yes, one qualifier: you need to use an orthogonal similarity transformation (which is possible given the coefficient matrix is symmetric). Then you have:
$$S^{-1} = S^T$$
This is possible because the eigen-vectors for the coefficient matrix will be orthogonal. You need only normalize them (which makes the math a bit hairy but not too bad).

Example:
Suppose you have:
$$a=1, b=2, c=4$$
$$u_{xx} + 4u_{xy} + 4u_{yy} = 0$$
$$M = \left(\begin{array}{cc} 1 & 2\\ 2 & 4\end{array}\right)$$
The eigen-values are 0 and 5.
The corresponding normalized eigen-vectors are:
$$\frac{1}{\sqrt{5}}\left(\begin{array}{c} 2 \\ -1 \end{array}\right),\frac{1}{\sqrt{5}}\left(\begin{array}{c} 1 \\ 2 \end{array}\right)$$
hence we have:
$$S = \frac{1}{\sqrt{5}}\left(\begin{array}{c c} 2 & 1 \\ -1 & 2 \end{array}\right)$$
and $$S^T = S^{-1}$$.
The similarity transformation is:
$$S^{-1}MS = S^TMS = \frac{1}{5}\left(\begin{array}{c c} 2 & -1 \\ 1 & 2 \end{array}\right)\left(\begin{array}{c c} 1 & 2 \\ 2 & 4 \end{array}\right)\left(\begin{array}{c c} 2 & 1 \\ -1 & 2 \end{array}\right)=\left(\begin{array}{c c} 0 & 0 \\ 0 & 5 \end{array}\right) = M'$$
(M' is the diagonalized form)
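This hand computation is easy to double-check numerically; a sketch using NumPy's symmetric eigensolver `numpy.linalg.eigh` (its eigenvector signs may differ from the hand choice above, but the diagonalization comes out the same):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# eigh returns eigenvalues in ascending order and orthonormal eigenvectors
# as columns, so S is orthogonal and S^{-1} = S^T automatically.
eigvals, S = np.linalg.eigh(M)

assert np.allclose(eigvals, [0.0, 5.0])               # degenerate: one zero
assert np.allclose(S.T @ S, np.eye(2))                # orthogonality
assert np.allclose(S.T @ M @ S, np.diag([0.0, 5.0]))  # M' = diag(0, 5)
```

Using `eigh` rather than the general `eig` is the natural choice here, since it exploits the symmetry of M and guarantees an orthonormal eigenbasis.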

Now I always have to go through the following to keep straight which way to transform:
Writing the gradient as a row vector:
$$\nabla = (\partial_x, \partial_y)$$
The differential equation is:
$$\nabla M\nabla^T u = 0$$
or:
$$\nabla S S^{-1} M S S^{-1} \nabla^T u = \nabla S M' S^T \nabla^T u$$
So...
$$\nabla S = \nabla',\quad S^{T}\nabla^T = {\nabla'}^T$$
and since the coordinates transform dually to the partial derivatives:
$$S^{-1} \vec{x} = \vec{x}'$$
So:
$$\frac{1}{\sqrt{5}}\left(\begin{array}{c c} 2 & -1 \\ 1 & 2 \end{array}\right)\left(\begin{array}{c} x\\ y\end{array}\right) = \left(\begin{array}{c} x'\\ y'\end{array}\right)$$
or:
$$x' =\frac{ 2x - y}{\sqrt{5}}$$
$$y' = \frac{x + 2y}{\sqrt{5}}$$
and the differential equation becomes in that form:
$$5\cdot u_{y'y'} = 0$$
with solution:
$$u(x,y) =(C_1 + C_2\cdot(x+2y) ) \cdot f(2x-y)$$
for arbitrary constants C1, C2, and function f.
(Note I absorbed the sqrt(5) factors into the arbitrary constants and function. As a check, $$f(2x-y)$$ does solve the original equation, since $$1\cdot 2^2 + 4\cdot(2)(-1) + 4\cdot(-1)^2 = 4 - 8 + 4 = 0$$.)
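A finite-difference check of this general form (f = sin and arbitrary constants are test choices; the null argument is 2x − y because the symbol $$p^2 + 4pq + 4q^2 = (p+2q)^2$$ vanishes at (p, q) = (2, −1)):

```python
import math

C1, C2 = 0.7, -1.3   # arbitrary test constants

def u(x, y):
    # u = (C1 + C2*(x + 2*y)) * f(2*x - y) with f = sin;
    # 2*x - y is the null argument since p**2 + 4*p*q + 4*q**2 = (p + 2*q)**2
    # vanishes at (p, q) = (2, -1).
    return (C1 + C2 * (x + 2 * y)) * math.sin(2 * x - y)

def residual(x, y, h=1e-4):
    """Central finite-difference residual of u_xx + 4*u_xy + 4*u_yy."""
    u_xx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    u_yy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    u_xy = (u(x + h, y + h) - u(x + h, y - h)
            - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)
    return u_xx + 4 * u_xy + 4 * u_yy

for x, y in [(0.0, 0.0), (0.5, -0.2), (1.1, 0.8)]:
    assert abs(residual(x, y)) < 1e-4
```

The residual vanishes (up to discretization error) at every sampled point, confirming the transformed solution against the original PDE.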

meatpuppet said:
Ok, so in my system:
$$\sin^2 \omega\, u_{xx} + 2 \sin \omega \cos \omega\, u_{xy} + \cos^2 \omega\, u_{yy} = D$$
[...] which has:
$$S = \left[ \begin{array}{c c} -\cot\omega & \tan\omega\\ 1 & 1\end{array}\right]$$

This one's even easier as when you normalize your eigen-vectors (then define S) you'll see that S is simply a rotation matrix with rotation angle omega. (Suggest you multiply first eigen-vector by -1 as well.)

You'll get
$$S = \left[ \begin{array}{c c} \cot(\omega)/\csc(\omega) & \tan(\omega)/\sec(\omega) \\ -1/\csc(\omega) & 1/\sec(\omega)\end{array}\right] = \left[ \begin{array}{c c} \cos(\omega) & \sin(\omega) \\ -\sin(\omega) & \cos(\omega)\end{array}\right]$$

(I suggest you multiply out $$S^{-1}MS$$ to be sure you have it correct.)

My apologies for not saying earlier that you must work with normalized eigen-vectors so your similarity transformation is an orthogonal transformation. It's that business about M being a form and not itself an operator per se. When you work in the context of orthogonal transformations there is a unique identification of vectors (differential forms) and dual vectors (gradient operators), and similarly bilinear forms and operators can be identified.

This is also why I chose to define the gradient op as a row vector and the coordinate vectors as column vectors. It helps prevent the need to both invert and transpose a given matrix.

See if you can finish the problem now.
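A quick numerical sanity check of this rotation, taking an arbitrary test angle ω = 0.6 and hand-rolled 2×2 matrix products (pure Python, no libraries):

```python
import math

w = 0.6                                # arbitrary test angle omega
s, c = math.sin(w), math.cos(w)

# Coefficient matrix of sin^2(w) u_xx + 2 sin(w)cos(w) u_xy + cos^2(w) u_yy:
M = [[s * s, s * c],
     [s * c, c * c]]

# Rotation matrix built from the normalized eigenvectors:
S = [[c, s],
     [-s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# S^{-1} M S = S^T M S should be diag(0, 1): eigenvalue 0 for the null
# direction, and eigenvalue 1 since trace(M) = sin^2 + cos^2 = 1.
M_diag = matmul(matmul(transpose(S), M), S)
for i in range(2):
    for j in range(2):
        expected = 1.0 if (i, j) == (1, 1) else 0.0
        assert abs(M_diag[i][j] - expected) < 1e-12
```

The nonzero eigenvalue being exactly 1 reflects that the operator is a pure second directional derivative along the rotated axis.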

jambaugh said:
This one's even easier as when you normalize your eigen-vectors (then define S) you'll see that S is simply a rotation matrix with rotation angle omega. [...] See if you can finish the problem now.

I think I need to take a book out from the library and refresh myself on PDEs etc. As far as $$S$$ being a rotation matrix, that makes sense, as the equation is the second derivative of the field in the direction which is omega degrees from the y-axis in the x-y plane.

The motivation is that I need to somehow resolve out the second derivative in the y direction ($$u_{yy}$$) or the mixed derivative ($$u_{xy}$$), as these quantities aren't explicitly given (essentially I have data along two intersecting lines, one parallel to the x direction, the other at omega degrees to the y direction).

Hopefully consulting a book will make things a bit clearer; as it stands I don't really understand how to go on from the form $$(C_1 + C_2y^\prime)f(x^\prime)$$. Pardon the general incompetence, but it's been a long time since I looked at this sort of thing.

meatpuppet said:
[...] as it stands I don't really understand how to go on from the form $$(C_1 + C_2y^\prime)f(x^\prime)$$. [...]

BTW I wrote this in product form but should have written it in the form:
$$u = f(x') + y' g(x')$$
where f and g are independent functions.
$$x' = \cos(\omega) x - \sin(\omega) y$$
$$y' = \sin(\omega) x + \cos(\omega) y$$

that is to say:
$$u = f(cx - sy) + (sx+cy)g(cx-sy)$$
(using $$c = \cos\omega$$, $$s = \sin\omega$$)
I was thinking in terms of separable equations but there really isn't a 2nd equation at all.

You can then find $$u_x, u_y, u_{xx}, u_{xy}, u_{yy}$$ directly from the general form by directly differentiating:
e.g.
$$u_x = c f'(cx-sy) + sg(cx-sy) + c(sx+cy)g'(cx-sy)$$
(here primes are derivatives)
and so on.
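Sign slips are easy in these direct differentiations, so a finite-difference spot check of the $$u_x$$ formula can help (ω = 0.6, f = sin, and g = cos are arbitrary test choices):

```python
import math

w = 0.6                                   # arbitrary test angle omega
s, c = math.sin(w), math.cos(w)

f, fp = math.sin, math.cos                # test f and its derivative f'
g = math.cos
gp = lambda t: -math.sin(t)               # g' for g = cos

def u(x, y):
    # u = f(c*x - s*y) + (s*x + c*y) * g(c*x - s*y)
    t = c * x - s * y
    return f(t) + (s * x + c * y) * g(t)

def u_x_formula(x, y):
    # u_x = c f'(cx - sy) + s g(cx - sy) + c (sx + cy) g'(cx - sy)
    t = c * x - s * y
    return c * fp(t) + s * g(t) + c * (s * x + c * y) * gp(t)

x, y, h = 0.8, -0.3, 1e-6
u_x_fd = (u(x + h, y) - u(x - h, y)) / (2 * h)   # central difference
assert abs(u_x_fd - u_x_formula(x, y)) < 1e-8
```

The same pattern (swap in any smooth test functions for f and g) checks the $$u_y$$, $$u_{xx}$$, $$u_{xy}$$, and $$u_{yy}$$ formulas before matching them to boundary data.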

You can then match each form to the corresponding boundary data and solve the resulting system of ODEs for f and g.

You might also want to go ahead and confirm that the original PDE is solved (and thus that I haven't gotten the transformation inverted here).

(Since we're using primes for derivatives it might be convenient to indicate transformed variables with a tilde ~ instead of a prime).

I think you'll find it most effective to work in the transformed variables and invoke the corresponding transformation on the boundary conditions.

You have:
$$\tilde{x} = c x - s y$$
$$\tilde{y} = s x + c y$$
and
$$x = c \tilde{x} + s \tilde{y}$$
$$y = -s \tilde{x} + c \tilde{y}$$

(basically $$\omega \leftrightarrow -\omega$$).
Good Luck with it.

## 1. What is a linear second order PDE with mixed derivative term?

A linear second order partial differential equation (PDE) with mixed derivative term is a type of PDE that involves second order derivatives, including a mixed derivative, with respect to two or more variables. It can be written in the form:

$$a(x,y)u_{xx} + b(x,y)u_{xy} + c(x,y)u_{yy} = f(x,y,u,u_x,u_y)$$

where $$u_x$$ and $$u_y$$ represent the first order partial derivatives of u with respect to x and y, $$u_{xx}$$ and $$u_{yy}$$ represent the second order partial derivatives with respect to x and y, respectively, and $$u_{xy}$$ is the mixed second order derivative.

## 2. What is the difference between a linear and a nonlinear PDE?

The main difference between a linear and a nonlinear PDE is that a linear PDE has terms that are only linear in the dependent variable and its derivatives, while a nonlinear PDE has terms that are nonlinear in the dependent variable and its derivatives. This means that a linear PDE can be solved using known methods, while a nonlinear PDE often requires more advanced techniques.

## 3. What are the most commonly used methods for solving linear second order PDEs with mixed derivative term?

The most commonly used methods for solving linear second order PDEs with mixed derivative term are the method of characteristics and separation of variables. The method of characteristics involves transforming the PDE into a system of ordinary differential equations, while separation of variables involves finding a solution in the form of a product of functions of each variable.
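As a minimal illustration of the method of characteristics for the second order part (writing the mixed coefficient as 2b, as in the thread above, the characteristic slopes solve $$a\,dy^2 - 2b\,dx\,dy + c\,dx^2 = 0$$, i.e. $$dy/dx = (b \pm \sqrt{b^2 - ac})/a$$; coefficient values are illustrative):

```python
import math

def characteristic_slopes(a, b, c):
    """Real characteristic slopes dy/dx = (b +/- sqrt(b**2 - a*c)) / a of
    a*u_xx + 2*b*u_xy + c*u_yy = 0; empty tuple if elliptic."""
    disc = b * b - a * c
    if disc < 0:
        return ()                 # elliptic: no real characteristics
    root = math.sqrt(disc)
    return ((b - root) / a, (b + root) / a)

# Parabolic example from the discussion above: u_xx + 4*u_xy + 4*u_yy = 0
slopes = characteristic_slopes(1.0, 2.0, 4.0)
assert slopes == (2.0, 2.0)       # one repeated family: y - 2*x = const
```

A repeated slope (parabolic case) means there is a single family of characteristic lines, along which the PDE reduces to ordinary differential equations.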

## 4. Can a linear second order PDE with mixed derivative term have more than one solution?

Yes, a linear second order PDE with mixed derivative term can have more than one solution. This is because the solution to a PDE is not unique and can be affected by the choice of initial or boundary conditions. Therefore, it is important to specify the necessary initial or boundary conditions in order to obtain a unique solution.

## 5. Are there any real-life applications of solving linear second order PDEs with mixed derivative term?

Yes, there are many real-life applications of solving linear second order PDEs with mixed derivative term. Some examples include problems in fluid dynamics, heat transfer, and electromagnetism. These equations are used to model and understand various physical phenomena in engineering, physics, and other fields.
