Understanding the change of variables for PDEs

Isaac0427
TL;DR Summary
I have been trying to put together an understanding of change of variables for PDEs, and I am wondering if I am on the right track.
I've been trying to get change of variables in PDEs down (I don't particularly like my textbook or professor's approach to it), and I want to ask here if I am getting this right. Let ##\vec{x}=(x_1,x_2,...,x_n)^T## and ##\partial_\vec{x}=(\partial_{x_1},\partial_{x_2},...,\partial_{x_n})^T##. I believe that a second-order linear PDE with constant coefficients (in n dimensions) can be written in the form
$$\left( \partial_\vec{x}^TA\partial_\vec{x}+B\partial_\vec{x}+C \right) u=f(\vec{x})$$
where ##A## is an ##n\times n## matrix, ##B## is an ##n##-dimensional row vector, and ##C## is a scalar. Now, we can change coordinates to ##\vec{y}=P\vec{x}## where ##P## is a constant ##n\times n## matrix. By the chain rule, this means ##\partial_\vec{x}=P^T\partial_\vec{y}##. Now, though for a given PDE the choice of ##A## is not unique, there is a unique symmetric choice of ##A##. Since this symmetric ##A## is orthogonally diagonalizable (spectral theorem), we can choose ##P## to be the orthogonal matrix whose rows are eigenvectors of ##A##, so that ##PAP^T## is diagonal and ##P^{-1}=P^T##. So, changing variables gives us
$$\left( \partial_\vec{y}^TPAP^T\partial_\vec{y}+BP^T\partial_\vec{y}+C \right) u=f(P^T\vec{y}).$$

This gets rid of the cross terms in the second derivatives, which makes these equations much easier to solve. I did check the ##A=0## case against problems done with a different method, and the answers agreed (there choosing ##P## such that ##BP^T=(1,0)##).
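As a quick sanity check of the diagonalization step (my own sketch, not from the thread — the matrix ##A## below is a made-up 2-D example): for a symmetric ##A##, taking ##P## with the orthonormal eigenvectors of ##A## as rows makes ##PAP^T## diagonal, which is exactly what kills the cross-derivative term.

```python
# Sketch: rotating coordinates with the eigenvectors of a symmetric A
# removes the cross term.  A below is an arbitrary example, e.g. the
# symmetric coefficient matrix of  2*u_xx + 2*u_xy + 2*u_yy
# (half of the cross coefficient in each off-diagonal slot).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is for symmetric matrices; columns of V are orthonormal
# eigenvectors, so A = V @ diag(w) @ V.T.
w, V = np.linalg.eigh(A)

# P with eigenvectors as rows, i.e. P = V.T, gives P @ A @ P.T = diag(w).
P = V.T
D = P @ A @ P.T

print(np.round(D, 10))        # diagonal: cross term gone
print(np.round(P @ P.T, 10))  # identity: P is orthogonal
```

For this ##A## the eigenvalues are 1 and 3, so both are positive and the transformed operator is elliptic.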

If everything above is correct, this leaves me with two more questions:

1. I know that when ##B=\vec{0}^T## and ##C=0##, in the 2-D case, we are left with either the wave equation or Laplace's equation (the heat/diffusion equation also needs a first-order term, i.e. a nonzero ##B##). How does one handle solving the equation when ##B## and/or ##C## are nonzero?

2. Is there a way to generalize this for non-constant coefficients?

Thank you very much in advance!
 
Changing variables in a PDE is nothing more than the chain rule and the other differentiation rules from calculus:
if ##y^i=f^i(x)##, then ##\partial_{x^i}=\frac{\partial f^s}{\partial x^i}\Big|_{x\mapsto y}\partial_{y^s}## (summing over ##s##).
In fact, that is all.
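This general formula can be verified symbolically (my own sketch — the nonlinear map and test function below are made-up examples): differentiate ##u(f(x))## directly in ##x## and compare against the Jacobian-weighted ##y##-derivatives.

```python
# Sketch: checking  d/dx^i = (df^s/dx^i) d/dy^s  for a nonlinear
# change of variables, on a concrete test function u.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# A made-up nonlinear map y = f(x).
f1 = x1**2 + x2
f2 = x1 * x2

# Concrete test function u(y1, y2).
u = sp.sin(y1) * sp.exp(y2)

# Left side: differentiate u(f(x)) directly in x.
u_x = u.subs({y1: f1, y2: f2})
lhs = sp.Matrix([sp.diff(u_x, x1), sp.diff(u_x, x2)])

# Right side: J[i, s] = df^s/dx^i, contracted with the y-derivatives
# of u and evaluated at y = f(x).
J = sp.Matrix([[sp.diff(f1, x1), sp.diff(f2, x1)],
               [sp.diff(f1, x2), sp.diff(f2, x2)]])
u_y = sp.Matrix([sp.diff(u, y1), sp.diff(u, y2)])
rhs = (J * u_y).subs({y1: f1, y2: f2})

print((lhs - rhs).applyfunc(sp.simplify))  # zero vector
```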
 
Last edited:
wrobel said:
Changing variables in a PDE is nothing more than the chain rule and the other differentiation rules from calculus:
if ##y^i=f^i(x)##, then ##\partial_{x^i}=\frac{\partial f^s}{\partial x^i}\Big|_{x\mapsto y}\partial_{y^s}## (summing over ##s##).
In fact, that is all.
Right, but I'm trying to figure this out with matrices/rotating coordinates to diagonalize ##A##. My understanding of what you wrote is that if ##\vec{y}=P\vec{x}## (with ##P## a constant ##n\times n## matrix), then ##\partial_{x^i}=P_{si}\partial_{y^s}##, i.e. ##\partial_\vec{x}=P^T\partial_\vec{y}##. Is that correct? And if so, I believe the rest of the first part of my post holds, and the other two questions still stand.
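For the constant-matrix case specifically, the chain rule gives ##\partial_\vec{x}=P^T\partial_\vec{y}## (note the transpose), which can be checked on a concrete example (my own sketch — ##P## and ##u## below are arbitrary choices):

```python
# Sketch: for constant P with y = P x, the chain rule gives
# d/dx = P^T d/dy, verified on a concrete u and a concrete P.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
P = sp.Matrix([[1, 2],
               [3, 5]])          # arbitrary invertible example

yvec = P * sp.Matrix([x1, x2])   # y = P x

u = sp.cos(y1) + y1 * y2**2      # concrete test function
u_x = u.subs({y1: yvec[0], y2: yvec[1]})

# Left side: differentiate u(P x) directly in x.
lhs = sp.Matrix([sp.diff(u_x, x1), sp.diff(u_x, x2)])

# Right side: P^T applied to the y-gradient, evaluated at y = P x.
u_y = sp.Matrix([sp.diff(u, y1), sp.diff(u, y2)])
rhs = (P.T * u_y).subs({y1: yvec[0], y2: yvec[1]})

print((lhs - rhs).applyfunc(sp.simplify))  # zero vector
```

When ##P## is orthogonal, ##P^T=P^{-1}##, which is what makes the transformed equation come out so cleanly.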
 