Exploring Nonlinear Solutions for PDEs of Functions f:\mathbb{R}^2\to\mathbb{R}

  • Context: Graduate 
  • Thread starter: jostpuur

Discussion Overview

The discussion revolves around the exploration of functions \( f:\mathbb{R}^2\to\mathbb{R} \) that satisfy the partial differential equation (PDE) \[ (\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1. \] Participants investigate both linear and potential nonlinear solutions, the implications of the PDE, and the challenges in proving the existence of nonlinear solutions.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant notes that the only obvious solutions are linear functions of the form \( f(x_1,x_2) = x_1\cos(\theta) + x_2\sin(\theta) \), suggesting a potential lack of nonlinear solutions.
  • Another participant proposes that locally, the function resembles an inclined plane with a gradient magnitude of 1, leading to the conclusion that all paths must be straight lines, implying only affine solutions exist.
  • A different approach is introduced involving a mapping \( [0,T]\to\mathbb{R}^2 \) and the relationship between the gradient and the path taken, suggesting equality in certain conditions.
  • One participant raises a question about proving that a curve must be a straight line based on the integral of the velocity vector.
  • Another participant introduces a specific function \( f(x_1,x_2) = \sqrt{x_1^2 + x_2^2} \) as a potential solution, noting its non-differentiability at the origin.
  • Further discussion includes the exploration of defining \( f \) on restricted domains and the implications for extending functions beyond those domains.
  • Participants discuss the uniqueness of extensions of functions defined on small sets and provide examples of piecewise functions that satisfy the PDE.
  • One participant mentions a related PDE involving the Jacobian matrix and proposes a function that satisfies it, indicating a broader interest in the implications of such PDEs.
  • Another participant suggests that hyperbolic functions provide additional solutions to the original PDE.

Areas of Agreement / Disagreement

Participants express differing views on the existence of nonlinear solutions, with some arguing that only affine solutions exist while others propose specific nonlinear functions. The discussion remains unresolved regarding the generality of solutions and the implications of the PDE.

Contextual Notes

Limitations include the dependence on the differentiability of functions at certain points and the challenges in extending functions defined on restricted domains to larger domains. The discussion also highlights the complexity of proving certain properties related to the PDE.

jostpuur
I'm interested to know as much as possible about functions [itex]f:\mathbb{R}^2\to\mathbb{R}[/itex] that satisfy the PDE

[tex] (\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1.[/tex]

The only obvious solutions are

[tex] f(x_1,x_2) = x_1\cos(\theta) + x_2\sin(\theta),[/tex]

but this is a linear function with respect to the variables [itex]x_1,x_2[/itex].

I was thinking that nonlinear solutions must exist too, but it seems extremely difficult to learn anything about them.

Having thought about it more, I'm also considering the possibility that nonlinear solutions don't exist (except for affine solutions, which are essentially linear). But if they don't exist, how could such a claim be proven?
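Not part of the thread, but the linear ansatz is easy to check numerically. The sketch below (function names are mine) verifies with central finite differences that the family satisfies the PDE:

```python
import math

def f(x1, x2, theta=0.7):
    # the linear ansatz f = x1*cos(theta) + x2*sin(theta)
    return x1 * math.cos(theta) + x2 * math.sin(theta)

def pde_residual(g, x1, x2, h=1e-6):
    # (d1 g)^2 + (d2 g)^2 - 1, via central finite differences
    d1 = (g(x1 + h, x2) - g(x1 - h, x2)) / (2 * h)
    d2 = (g(x1, x2 + h) - g(x1, x2 - h)) / (2 * h)
    return d1 ** 2 + d2 ** 2 - 1.0

print(pde_residual(f, 0.3, -1.2))  # ≈ 0 for every theta
```

The residual vanishes for any theta, since cos²θ + sin²θ = 1.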
 
Well, adding a constant is always possible, but not really interesting.

Here is an argument which I think can be made more formal to get a proof:
Locally, the function always looks like an inclined plane whose gradient has magnitude 1. This is just what the equation tells us. Pick an arbitrary point in the (x1,x2)-plane. If we make a path starting there and following the gradient, where can we get? After a length of d, the value of f has increased by d. We must then be at a distance of d from the starting point - otherwise there would be a shorter path, which would require a gradient of magnitude larger than 1. Therefore, all those paths are straight lines. They cannot intersect, so they all have to be parallel, and you get your plane as the only solution.
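The gradient-following step of this argument can be illustrated numerically. Using the radial example f(x1,x2) = sqrt(x1² + x2²) (which appears later in the thread and has a gradient of magnitude 1 away from the origin), a simple Euler integration of the gradient flow shows that f increases by exactly the length of the path traveled:

```python
import math

def grad_f(x, y):
    # gradient of f(x, y) = sqrt(x^2 + y^2), valid away from the origin
    r = math.hypot(x, y)
    return x / r, y / r

# follow the gradient from (1, 2) with small Euler steps
x, y = 1.0, 2.0
f0 = math.hypot(x, y)
path_len = 0.0
h = 1e-4
for _ in range(10000):
    gx, gy = grad_f(x, y)
    x += h * gx
    y += h * gy
    path_len += h  # the gradient has unit length, so each step has length h

# f increased by exactly the path length (up to rounding)
print(math.hypot(x, y) - f0, path_len)
```

Here each step is exactly radial, so the increase of f matches the path length to machine precision; for a general unit-gradient f the agreement holds up to the Euler discretization error.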
 
So we define a mapping [itex][0,T]\to\mathbb{R}^2[/itex], [itex]t\mapsto\varphi(t)[/itex] so that

[tex] \dot{\varphi}(t)\cdot \nabla f(\varphi(t)) = \|\dot{\varphi}(t)\|[/tex]

Then

[tex] f(\varphi(T))-f(\varphi(0)) = \int\limits_0^T D_t f(\varphi(t)) dt = \int\limits_0^T \|\dot{\varphi}(t)\| dt \geq \|\varphi(T) - \varphi(0)\|[/tex]

Now you claim that we will have "[itex]=[/itex]" instead of "[itex]\geq[/itex]" in the last inequality?

otherwise there would be a shorter path

We could define

[tex] \psi(t) = \varphi(0) + \frac{t}{T}\big(\varphi(T)-\varphi(0)\big)[/tex]

[tex] \dot{\psi}(t) = \frac{1}{T}\big(\varphi(T) - \varphi(0)\big)[/tex]

[tex] f(\psi(T))-f(\psi(0)) = \int\limits_0^T \frac{1}{T}\big(\varphi(T)-\varphi(0)\big)\cdot\nabla f(\psi(t))dt[/tex]

Since [itex]\|\nabla f\| = 1[/itex] everywhere, the Cauchy-Schwarz inequality gives

[tex] \implies\quad |f(\psi(T))-f(\psi(0))| \leq \|\varphi(T)-\varphi(0)\|[/tex]

and since [itex]\psi(0)=\varphi(0)[/itex] and [itex]\psi(T)=\varphi(T)[/itex], the left side equals [itex]|f(\varphi(T))-f(\varphi(0))|[/itex].

I see, there will be equality!
 
I have fallen for very simple things now...

So

[tex] \int\limits_0^T \|\dot{\varphi}(t)\|dt = \|\varphi(T)-\varphi(0)\|[/tex]

implies that the curve is a straight line? How do you prove that nicely?
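The inequality in question just says that arc length is at least the chord length, with equality only for straight segments. Not from the thread, but a quick numerical sketch (approximating arc length by a polyline) shows the inequality is strict for a curved path:

```python
import math

def arc_length(phi, T, n=10000):
    # polyline approximation of the arc length of t -> phi(t) on [0, T]
    total = 0.0
    prev = phi(0.0)
    for i in range(1, n + 1):
        cur = phi(i * T / n)
        total += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        prev = cur
    return total

phi = lambda t: (math.cos(t), math.sin(t))  # a quarter circle
T = math.pi / 2
chord = math.hypot(phi(T)[0] - phi(0)[0], phi(T)[1] - phi(0)[1])
print(arc_length(phi, T), chord)  # pi/2 > sqrt(2): strict inequality
```

For the quarter circle the arc length is π/2 ≈ 1.5708 while the chord is √2 ≈ 1.4142, so equality forces the curve to have no "detour" at any scale.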
 
Well the straight line question is a different problem, which probably has a solution not related to PDEs. So the original problem is mostly solved. A peculiar result! Only affine solutions...

I'd be slightly interested to know what happens if I define

[tex] f(x_1,0) = \sqrt{1 + x_1^2}[/tex]

and then demand

[tex] (\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1.[/tex]

How far can the function be extended from the line? What kind of problems eventually prevent the extension to the whole plane?
 
Hey guys!

[tex] f(x_1,x_2) = \sqrt{x_1^2 + x_2^2}[/tex]

is a solution to the original PDE! Not very affine, I would say :wink:

The trick is that this is not differentiable at [itex](x_1,x_2)=(0,0)[/itex], where mfb's lines would intersect.
 
But then the equation is not satisfied everywhere in R^2 ;).
 
I couldn't know in advance what the theory would turn out to be. Now it would seem more reasonable to study functions [itex]f:\Omega\to\mathbb{R}[/itex] where [itex]\Omega\subset\mathbb{R}^2[/itex].

Here's another example:

[tex] \Omega = \;]-2,+2[\;\times\; ]-1,+1[[/tex]

[tex] \Omega_{-1} = \{x\in\Omega\;|\; -2<x_1<0,\quad 1-|x_1|<x_2\}[/tex]
[tex] \Omega_0 = \{x\in\Omega\;|\; x_2\leq 1 - |x_1|\}[/tex]
[tex] \Omega_{+1} = \{x\in\Omega\;|\; 0<x_1<+2,\quad 1-|x_1| < x_2\}[/tex]

[tex] f(x_1,x_2)=\left\{\begin{array}{ll}
\sqrt{(x_1+2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{-1} \\
-\sqrt{x_1^2 + (x_2-1)^2} + \sqrt{2},\quad & x\in\Omega_0 \\
\sqrt{(x_1-2)^2+ (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{+1}\\
\end{array}\right.[/tex]

The idea of this example reveals that if [itex]f[/itex] is known in some small set, the extension to a larger domain isn't necessarily unique.
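Not in the thread, but the piecewise example can be spot-checked numerically: at interior points of each region (the sample points below are my own choice, kept away from the seams), a finite-difference gradient should have magnitude 1.

```python
import math

SQRT2 = math.sqrt(2)

def f(x1, x2):
    # the piecewise example: shifted distance functions to fixed points
    if x2 <= 1 - abs(x1):                 # Omega_0
        return -math.hypot(x1, x2 - 1) + SQRT2
    elif x1 < 0:                          # Omega_{-1}
        return math.hypot(x1 + 2, x2 + 1) - SQRT2
    else:                                 # Omega_{+1}
        return math.hypot(x1 - 2, x2 + 1) - SQRT2

def residual(x1, x2, h=1e-6):
    # (d1 f)^2 + (d2 f)^2 - 1 via central finite differences
    d1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    d2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return d1 ** 2 + d2 ** 2 - 1.0

# one interior point from each region
for p in [(-1.5, 0.9), (0.0, -0.5), (1.5, 0.9)]:
    print(p, residual(*p))  # each ≈ 0
```

Each piece is a distance function to a fixed point (shifted by a constant), so its gradient is a unit vector everywhere except at that point, which lies outside the corresponding region.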
 
Nice function.

$$f(x_1,x_2)=\left\{\begin{array}{ll}
\sqrt{(x_1+2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{-1} \\
-\sqrt{(x_1-2)^2 + (x_2-3)^2} + 3\sqrt{2},\quad & x\in\Omega_0 \cup \Omega_{+1} \\
\end{array}\right.$$
I guess this will work, too (the critical point is not at the edge of Ω). And many other similar functions.
 
  • #10
This PDE is interesting because it could be related to the topics of this thread: Determine the function from a simple condition on its Jacobian matrix.

If [itex]\Omega[/itex] consists of those points [itex](x_0,x_1)[/itex] where [itex]x_0>0[/itex] and [itex]|x_1| < x_0[/itex], then the function [itex]f:\Omega\to\mathbb{R}[/itex]

[tex] f(x_0,x_1) = \sqrt{x_0^2 - x_1^2}[/tex]

satisfies the PDE

[tex] (\partial_0 f)^2 - (\partial_1 f)^2 = 1.[/tex]

Eventually I didn't figure out how this would imply anything for the isometry discussion, but anyway, this is distantly interesting at least.
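Not from the thread, but this hyperbolic analogue is just as easy to verify numerically with finite differences (the helper names are mine):

```python
import math

def f(x0, x1):
    # f(x0, x1) = sqrt(x0^2 - x1^2), defined on the cone x0 > |x1|
    return math.sqrt(x0 ** 2 - x1 ** 2)

def residual(x0, x1, h=1e-6):
    # (d0 f)^2 - (d1 f)^2 - 1 via central finite differences
    d0 = (f(x0 + h, x1) - f(x0 - h, x1)) / (2 * h)
    d1 = (f(x0, x1 + h) - f(x0, x1 - h)) / (2 * h)
    return d0 ** 2 - d1 ** 2 - 1.0

print(residual(2.0, 0.5))  # ≈ 0
```

Symbolically, ∂₀f = x₀/f and ∂₁f = −x₁/f, so (∂₀f)² − (∂₁f)² = (x₀² − x₁²)/f² = 1.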
 
  • #11
sinh and cosh give another set of solutions.
[itex]\pm\sqrt{c+1}\,x_0 \pm \sqrt{c}\,x_1[/itex] for arbitrary [itex]c\geq 0[/itex] works, too.
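The two remarks fit together: substituting c = sinh²θ turns the linear family into f = x₀ cosh θ + x₁ sinh θ, and the hyperbolic identity cosh²θ − sinh²θ = 1 is exactly the PDE from the previous post for this family. A trivial check:

```python
import math

def f(x0, x1, theta=0.8):
    # linear family f = x0*cosh(theta) + x1*sinh(theta);
    # then (d0 f)^2 - (d1 f)^2 = cosh(theta)^2 - sinh(theta)^2 = 1
    return x0 * math.cosh(theta) + x1 * math.sinh(theta)

theta = 0.8
print(math.cosh(theta) ** 2 - math.sinh(theta) ** 2)  # ≈ 1
```

This mirrors the cos/sin parametrization of the linear solutions of the original PDE, with the unit circle replaced by a unit hyperbola.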
 
