Elliptic, Parabolic, and Hyperbolic PDEs

Thread starter: docnet
Tags: Hyperbolic, PDEs
docnet
Homework Statement
Discover the classification of linear second-order PDEs yourself.
Relevant Equations
Please see below
[Screenshot of the problem statement]


(1) ok.

(2) We start with ##\sigma(ξ) = a_{11} ξ_1^2 +2a_{12}ξ_1ξ_2 +a_{22}ξ_2^2##
and we replace every ##ξ_iξ_j## with ##\partial_i\partial_ju##,
giving ##a_{11}\partial_x^2u+2a_{12}\partial_x\partial_yu+a_{22}\partial_y^2u##.
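As a small sanity check of this correspondence (my own sympy sketch, not part of the problem): applying the operator to ##e^{ξ_1 x + ξ_2 y}## should multiply it by the symbol ##\sigma(ξ)##.
[CODE=python]
# Sketch: applying a11*d_xx + 2*a12*d_xy + a22*d_yy to exp(xi1*x + xi2*y)
# returns sigma(xi) times the same exponential.
import sympy as sp

x, y, xi1, xi2 = sp.symbols('x y xi1 xi2')
a11, a12, a22 = sp.symbols('a11 a12 a22')

f = sp.exp(xi1*x + xi2*y)
Lf = a11*f.diff(x, 2) + 2*a12*f.diff(x, y) + a22*f.diff(y, 2)

# recover the symbol: a11*xi1**2 + 2*a12*xi1*xi2 + a22*xi2**2
print(sp.expand(sp.simplify(Lf / f)))
[/CODE]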

(3) The given equation is the following.
##\sigma(ξ) = ξ^t A ξ ##
We take the product:
##\sigma(ξ) = \begin{pmatrix} ξ_1^t & ξ_2^t \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} ξ_1 \\ ξ_2 \end{pmatrix} ##
giving
##\sigma(ξ) = a_{11}ξ_1^{t+1} +2a_{12}ξ_1ξ_2+a_{22}ξ_2^{t+1}##
which is different from the symbol given in part (2)
##\sigma(ξ) = a_{11} ξ_1^2 +2a_{12}ξ_1ξ_2 +a_{22}ξ_2^2##
What is the ##t## used for? Is this not a typo?

(4) The trace is the sum of the eigenvalues, ##a_{11}+a_{22}##. Knowing the trace does not tell us anything about the eigenvalues. The determinant is given by ##\det(A) = a_{11}a_{22}-(a_{12})^2##, which does not tell us ##a_{11}, a_{22}##. We have two equations and three unknowns, so we cannot use the trace and determinant for characterization.

(5) We could use matrix diagonalization (a change of coordinates) to express ##A## as some matrix whose diagonal entries are ##a_{11}, a_{22}##. Doing this would cancel out the cross term ##2a_{12}ξ_1ξ_2##, so
(a) ##\sigma(η) = \sigma(ξ) - 2a_{12}ξ_1ξ_2##
(b) ##\sigma(η) = \sigma(ξ) - a_{11} ξ_1^2-2a_{12}ξ_1ξ_2##
(c) ##\sigma(η) = \sigma(ξ) - 2a_{11} ξ_1^2-2a_{12}ξ_1ξ_2##
I definitely need to think more about this problem, but I would value any suggestions.

(6) The highest-order terms of the symbol control the qualitative behavior of the solutions of PDEs, so we could use the symbol of every second-order PDE, ignoring the lower-order terms, as a close approximation, and manipulate the symbol to fit one of the three cases by a change of coordinates.

(7) If we have non-constant coefficients, could we use the same procedure as before, except the eigenvalues will be functions rather than numbers?

(8) For higher dimensions, could we define elliptic, parabolic, and hyperbolic PDEs in just two variables and keep the other variables undisturbed?
 
Delta2
3) I am not sure, but I think the symbol t is not an exponent; it denotes the transpose of the vector ##\xi##. So if ##\xi=\begin{pmatrix}\xi_1 & \xi_2\end{pmatrix}## is the vector (a single row), its transpose is ##\xi^T=\begin{pmatrix}\xi_1\\ \xi_2\end{pmatrix}##, or the other way around: if we have a single column, its transpose is a single row.
8) I think the problem setter means that we have to incorporate all the derivatives ##\partial_i\partial_j## where ##i,j## now range from ##1## to ##n##. I think the matrix A will be ##n\times n## instead of ##2\times 2## and ##\xi=\begin{pmatrix}\xi_1&\xi_2&...&\xi_n\end{pmatrix}##.
 
pasmith
4) The eigenvalues ##\sigma## are found by solving ##\det (A - \sigma I) = 0##, which for a ##2\times 2## matrix is ##(a_{11} - \sigma)(a_{22} - \sigma) - a_{12}a_{21} = 0##. Expand this. What is the coefficient of ##\sigma##? What is the constant term? What conditions must these coefficients satisfy for the roots to be of the same sign, or of opposite sign, or for there to be a zero root?
 
Delta2 said:
3) I am not sure, but I think the symbol t is not an exponent; it denotes the transpose of the vector ##\xi##. So if ##\xi=\begin{pmatrix}\xi_1 & \xi_2\end{pmatrix}## is the vector (a single row), its transpose is ##\xi^T=\begin{pmatrix}\xi_1\\ \xi_2\end{pmatrix}##, or the other way around: if we have a single column, its transpose is a single row.
8) I think the problem setter means that we have to incorporate all the derivatives ##\partial_i\partial_j## where ##i,j## now range from ##1## to ##n##. I think the matrix A will be ##n\times n## instead of ##2\times 2## and ##\xi=\begin{pmatrix}\xi_1&\xi_2&...&\xi_n\end{pmatrix}##.
Of course, how silly of me to assume ##t## is an exponent. Realizing this mistake, the correct solution is
##\sigma(ξ) = \begin{pmatrix} ξ_1 & ξ_2\end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} ξ_1 \\ ξ_2 \end{pmatrix} = a_{11} ξ_1^2 +2a_{12}ξ_1ξ_2 +a_{22}ξ_2^2##

For (8) I was thinking that too. It seems like I will need to learn how to characterize an ##n×n## matrix as parabolic, elliptic, or hyperbolic when ##n>2##. I am asking for more time to work this out.

pasmith said:
4) The eigenvalues ##\sigma## are found by solving ##\det (A - \sigma I) = 0##, which for a ##2\times 2## matrix is ##(a_{11} - \sigma)(a_{22} - \sigma) - a_{12}a_{21} = 0##. Expand this. What is the coefficient of ##\sigma##? What is the constant term? What conditions must these coefficients satisfy for the roots to be of the same sign, or of opposite sign, or for there to be a zero root?
Of course, I mistakenly thought the eigenvalues of a symmetric matrix are its diagonal entries.
To compute the eigenvalues ##\lambda## (to avoid getting confused with the other ##\sigma##s) we solve the characteristic polynomial of ##A##.
##(a_{11} - \lambda)(a_{22} - \lambda) - (a_{12})^2 = 0##
##a_{11}a_{22}-(a_{12})^2-(a_{11}+a_{22})\lambda+\lambda^2=0##
the coefficient of ##\lambda## is ##-(a_{11}+a_{22})## and the constant term is ##a_{11}a_{22}-(a_{12})^2##.
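A quick sympy check of this expansion (my own sketch, not required by the problem): the coefficient of ##\lambda## is minus the trace and the constant term is the determinant.
[CODE=python]
# Sketch: the characteristic polynomial of the symmetric matrix
# A = [[a11, a12], [a12, a22]] is lambda^2 - tr(A)*lambda + det(A).
import sympy as sp

a11, a12, a22, lam = sp.symbols('a11 a12 a22 lambda')
A = sp.Matrix([[a11, a12], [a12, a22]])

p = A.charpoly(lam).as_expr()
print(p)  # characteristic polynomial: lambda**2 - (a11 + a22)*lambda + (a11*a22 - a12**2)
print(sp.expand(p - (lam**2 - A.trace()*lam + A.det())))  # 0
[/CODE]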

The roots are equal when the discriminant is 0
##(a_{11}+a_{22})^2 -4(a_{11}a_{22}-(a_{12})^2)=0##.
##(a_{11}+a_{22})^2-4a_{11}a_{22}+4(a_{12})^2=0##
##(a_{11})^2+2a_{11}a_{22}+(a_{22})^2-4a_{11}a_{22}+4(a_{12})^2=0##
##(a_{11})^2-2a_{11}a_{22}+(a_{22})^2+4(a_{12})^2=0##
##(a_{11}-a_{22})^2+4(a_{12})^2=0##
which is an expression that seems unrelated to the determinant of ##A## and its trace. This does not help us.

By Sylvester's criterion, the eigenvalues have the same sign if the matrix is positive definite or negative definite.
For positive definiteness, the leading principal minors must be positive:
##a_{11}>0##
##a_{11}a_{22}-(a_{12})^2>0##
And for negative definiteness:
##a_{11}<0##
##a_{11}a_{22}-(a_{12})^2>0##

The eigenvalues have opposite signs if ##A## is indefinite. This is the case when an even leading principal minor is negative:
##a_{11}a_{22}-(a_{12})^2<0##

There are no real roots when the discriminant is less than 0:
##(a_{11}+a_{22})^2 -4(a_{11}a_{22}-(a_{12})^2)<0##
##(a_{11}-a_{22})^2+4(a_{12})^2<0##

I think for this problem we assume the determinant and trace of ##A## are known, and not its entries. So this may be unnecessary. But I could be wrong.
 
Oh, I just caught my mistake. If the determinant of ##A## is negative, the eigenvalues have opposite signs and the PDE is hyperbolic. If the square of the trace is less than 4 times the determinant, there are no real roots. If the determinant is positive, the eigenvalues have the same sign and our PDE is elliptic. Thank you @pasmith
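A quick numerical check of this rule on the three textbook examples (my own sketch; the coefficient matrices below are the standard principal parts, not something given in the problem). One note: for a real symmetric ##A## the discriminant equals ##(a_{11}-a_{22})^2+4(a_{12})^2 \geq 0##, so the eigenvalues are always real.
[CODE=python]
# Sketch: classify the standard examples by the sign of det(A), where A holds the
# second-order (principal) coefficients of the equation.
import numpy as np

examples = {
    "Laplace u_xx + u_yy = 0": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "heat    u_t = u_xx     ": np.array([[0.0, 0.0], [0.0, 1.0]]),   # variables (t, x); only u_xx is second order
    "wave    u_tt = u_xx    ": np.array([[1.0, 0.0], [0.0, -1.0]]),  # variables (t, x)
}

for name, A in examples.items():
    det = np.linalg.det(A)
    if det > 0:
        kind = "elliptic (eigenvalues of the same sign)"
    elif det < 0:
        kind = "hyperbolic (eigenvalues of opposite signs)"
    else:
        kind = "parabolic (a zero eigenvalue)"
    print(f"{name}: det = {det:+.0f} -> {kind}")
[/CODE]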
 
So what about the parabolic case, when one eigenvalue is 0? The professor said he would grade us on effort and give 100% credit if we try, but I am hoping to solve most of this problem because it is valuable learning.
 
For (5) I found the following information made available by UML. It turns out a change of variables is a more involved process than I initially thought.

http://faculty.uml.edu/spennell/Teaching/PDE/classification.pdf
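Here is a minimal sympy sketch of the rotation step in those notes, for example constant coefficients of my own choosing (assuming an orthogonal change of variables ##ξ = Pη##): in the rotated variables the cross term disappears and the symbol becomes ##\lambda_1 η_1^2 + \lambda_2 η_2^2##.
[CODE=python]
# Sketch: rotate coordinates with the orthonormal eigenvectors of A.
# With xi = P*eta, the symbol xi^T A xi becomes eta^T (P^T A P) eta = eta^T D eta,
# which has no cross term.
import sympy as sp

A = sp.Matrix([[2, 1], [1, 2]])               # example coefficients a11=2, a12=1, a22=2
P, D = A.diagonalize(normalize=True)          # columns of P are orthonormal eigenvectors

eta1, eta2 = sp.symbols('eta1 eta2')
eta = sp.Matrix([eta1, eta2])

sigma_new = sp.expand((eta.T * P.T * A * P * eta)[0])
print(D)            # eigenvalues of A on the diagonal, here 1 and 3
print(sigma_new)    # e.g. eta1**2 + 3*eta2**2, with no eta1*eta2 cross term
[/CODE]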

I cannot even begin to answer question (8) until I have figured out (5) and (6), so maybe this problem will not be finished. It seems like a lot to learn for a first-week PDE class assignment.
 
For 8) I think (a rough numerical version of this follows the list):
  • if all the eigenvalues of the ##n\times n## matrix are of the same sign (some of which might be zero) then it is elliptic
  • if all are zero (except one) then it is parabolic
  • if some are positive and some are negative (while some might be zero) then it is hyperbolic
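A rough numerical version of this test (my own sketch; I follow the usual textbook convention for the degenerate cases, which is slightly stricter than the wording above, and the tolerance and example matrices are arbitrary choices):
[CODE=python]
# Sketch: classify an n x n symmetric coefficient matrix by the signs of its
# eigenvalues: all nonzero and of one sign -> elliptic; mixed signs -> hyperbolic;
# otherwise (a zero eigenvalue, the rest of one sign) -> parabolic.
import numpy as np

def classify(A, tol=1e-10):
    w = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix are real
    pos = int(np.sum(w > tol))
    neg = int(np.sum(w < -tol))
    zero = len(w) - pos - neg
    if pos > 0 and neg > 0:
        return "hyperbolic"
    if zero == 0:
        return "elliptic"
    return "parabolic"

print(classify(np.eye(3)))                     # elliptic   (3D Laplace)
print(classify(np.diag([1.0, -1.0, -1.0])))    # hyperbolic (wave operator in 2+1 dimensions)
print(classify(np.diag([0.0, 1.0, 1.0])))      # parabolic  (heat operator in 2+1 dimensions)
[/CODE]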
 