Exploring Nonlinear Solutions for PDEs of Functions f:\mathbb{R}^2\to\mathbb{R}

  • Thread starter: jostpuur
I'm interested to know as much as possible about functions f:\mathbb{R}^2\to\mathbb{R} that satisfy the PDE

$$(\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1.$$

The only obvious solutions are

$$f(x_1,x_2) = x_1\cos(\theta) + x_2\sin(\theta),$$

but this is a linear function with respect to the variables x_1,x_2.

I was thinking that nonlinear solutions must exist too, but it seems extremely difficult to learn anything about them.

Having thought about it more, I'm also considering the possibility that nonlinear solutions don't exist (beyond the affine solutions, which are linear up to an additive constant). But if they don't exist, how could such a claim be proven?
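As a small sanity check, here is a minimal sympy sketch (my own illustration; the symbol names are arbitrary) confirming that the affine family above really does satisfy the equation:

```python
import sympy as sp

x1, x2, theta, c = sp.symbols('x1 x2 theta c', real=True)

# Affine candidate solution: a tilted plane plus an arbitrary constant.
f = x1*sp.cos(theta) + x2*sp.sin(theta) + c

# Left-hand side of the PDE: (d1 f)^2 + (d2 f)^2.
lhs = sp.diff(f, x1)**2 + sp.diff(f, x2)**2

print(sp.simplify(lhs))   # -> 1, for every theta and c
```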
 
Well, adding a constant is always possible, but not really interesting.

Here is an argument; I think it can be made more formal to get a proof:
Locally, the function always looks like an inclined plane where the magnitude of the gradient is 1. This is just what the equation tells us. Pick an arbitrary point in the (x1,x2)-plane. If we make a path starting there and following the gradient, where can we get? After a length of d, the value of f has increased by d. We have to be at a distance of exactly d from the starting point; otherwise there would be a shorter path between the endpoints, along which the average slope of f would exceed 1. Therefore, all those paths are straight lines. They cannot intersect, so they all have to be parallel, and you get your plane as the only solution.
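To make the claim concrete, here is a small numerical sketch (my own illustration; the direction theta and the step sizes are arbitrary choices) that follows the gradient of a plane solution and confirms both statements: after arc length d the value of f has increased by d, and the endpoint lies at distance d from the start.

```python
import numpy as np

theta = 0.7                                        # arbitrary direction for the plane solution
grad = np.array([np.cos(theta), np.sin(theta)])    # grad f is constant and has length 1 here

def f(x):
    # Plane solution f(x1, x2) = x1*cos(theta) + x2*sin(theta).
    return x @ grad

# Follow the gradient from an arbitrary starting point in small steps.
x = np.array([1.3, -0.4])
start = x.copy()
step, n_steps = 1e-3, 5000                         # total arc length d = step * n_steps
for _ in range(n_steps):
    x = x + step * grad                            # |grad| = 1, so each step has length `step`

d = step * n_steps
print(f(x) - f(start))                             # ~ d: f increased by the arc length
print(np.linalg.norm(x - start))                   # ~ d: endpoint is at distance d
```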
 
So we define a mapping [0,T]\to\mathbb{R}^2, t\mapsto\varphi(t) so that

$$\dot{\varphi}(t)\cdot \nabla f(\varphi(t)) = \|\dot{\varphi}(t)\|$$

Then

$$f(\varphi(T))-f(\varphi(0)) = \int\limits_0^T D_t f(\varphi(t))\, dt = \int\limits_0^T \|\dot{\varphi}(t)\|\, dt \geq \|\varphi(T) - \varphi(0)\|$$

Now you claim that we will have "=" instead of "\geq" in the last inequality?

mfb said: "otherwise there would be a shorter path"

We could define

$$\psi(t) = \varphi(0) + \frac{t}{T}\big(\varphi(T)-\varphi(0)\big)$$

$$\dot{\psi}(t) = \frac{1}{T}\big(\varphi(T) - \varphi(0)\big)$$

$$f(\psi(T))-f(\psi(0)) = \int\limits_0^T \frac{1}{T}\big(\varphi(T)-\varphi(0)\big)\cdot\nabla f(\psi(t))\, dt$$

$$\implies\quad |f(\varphi(T))-f(\varphi(0))| = |f(\psi(T))-f(\psi(0))| \leq \|\varphi(T)-\varphi(0)\|,$$

using the Cauchy–Schwarz inequality with \|\nabla f\| = 1, and the fact that \psi has the same endpoints as \varphi.

I see, there will be equality!
 
I've been tripping over very simple things here...

So

$$\int\limits_0^T \|\dot{\varphi}(t)\|\, dt = \|\varphi(T)-\varphi(0)\|$$

implies that the curve is a straight line? How do you prove that nicely?
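For what it's worth, here is one standard way to see it (my own sketch, assuming \varphi is continuously differentiable and \varphi(T)\neq\varphi(0)): put e = \big(\varphi(T)-\varphi(0)\big)/\|\varphi(T)-\varphi(0)\|. Then

$$\int\limits_0^T \big(\|\dot{\varphi}(t)\| - e\cdot\dot{\varphi}(t)\big)\, dt = \int\limits_0^T \|\dot{\varphi}(t)\|\, dt - e\cdot\big(\varphi(T)-\varphi(0)\big) = 0,$$

and by the Cauchy–Schwarz inequality the integrand is nonnegative, so it vanishes identically. Hence \dot{\varphi}(t) = \|\dot{\varphi}(t)\|\, e for all t, i.e. the velocity always points in the fixed direction e, and the curve is a straight segment.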
 
Well, the straight-line question is a different problem, which probably has a solution not related to PDEs. So the original problem is mostly solved. A peculiar result! Only affine solutions...

I'd be slightly interested to know what happens if I define

$$f(x_1,0) = \sqrt{1 + x_1^2}$$

and then demand

$$(\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1.$$

How far can the function be extended from the line? What kind of problems eventually prevent the extension to the whole plane?
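One way to explore this numerically (my own sketch; the starting points, step sizes and the choice of the branch with \partial_2 f > 0 are arbitrary) is the method of characteristics: for this equation the characteristics are straight rays in the direction of \nabla f, f grows at unit rate along them, and, roughly speaking, the classical extension can only run into trouble where rays coming from different points of the line begin to cross.

```python
import numpy as np

def ray(s, t):
    """Characteristic ray for |grad f| = 1 starting at (s, 0).  The data
    f(x1, 0) = sqrt(1 + x1^2) fixes d1 f = s/sqrt(1+s^2) on the axis, and the
    branch with d2 f > 0 gives d2 f = 1/sqrt(1+s^2).  Returns the point reached
    after arc length t and the value of f there (f grows at unit rate)."""
    g = np.sqrt(1.0 + s**2)
    direction = np.array([s / g, 1.0 / g])        # = grad f on the axis, unit length
    point = np.array([s, 0.0]) + t * direction
    return point, g + t

# Spot check: for t > 0 (upper half-plane) the rays from nearby starting points
# keep separating, so no crossing appears there; for t < 0 they focus, which is
# where the extension should eventually get into trouble.
for t in (0.5, 2.0, -0.5, -0.9):
    p_a, _ = ray(0.00, t)
    p_b, _ = ray(0.05, t)
    print(t, np.linalg.norm(p_b - p_a))
```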
 
Hey guys!

$$f(x_1,x_2) = \sqrt{x_1^2 + x_2^2}$$

is a solution to the original PDE! Not very affine, I would say :wink:

The trick is that this is not differentiable at (x_1,x_2)=(0,0), which is where mfb's lines would intersect.
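A quick symbolic check (my own sketch with sympy) that the cone satisfies the equation away from the origin:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# The cone: differentiable everywhere except at the origin.
f = sp.sqrt(x1**2 + x2**2)

lhs = sp.diff(f, x1)**2 + sp.diff(f, x2)**2
print(sp.simplify(lhs))   # -> 1, valid wherever x1^2 + x2^2 != 0
```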
 
But then the equation is not satisfied everywhere in R^2 ;).
 
I couldn't know in advance what the theory would turn out to be. Now it would seem more reasonable to study functions f:\Omega\to\mathbb{R} where \Omega\subset\mathbb{R}^2.

Here's another example:

$$\Omega = \;]-2,+2[\;\times\; ]-1,+1[$$

$$\Omega_{-1} = \{x\in\Omega\;|\; -2<x_1<0,\quad 1-|x_1|<x_2\}$$
$$\Omega_0 = \{x\in\Omega\;|\; x_2\leq 1 - |x_1|\}$$
$$\Omega_{+1} = \{x\in\Omega\;|\; 0<x_1<+2,\quad 1-|x_1| < x_2\}$$

$$f(x_1,x_2)=\left\{\begin{array}{ll}
\sqrt{(x_1+2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{-1} \\
-\sqrt{x_1^2 + (x_2-1)^2} + \sqrt{2},\quad & x\in\Omega_0 \\
\sqrt{(x_1-2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{+1}
\end{array}\right.$$

The idea of this example reveals that even if f is known on some small set, the extension to a larger domain isn't necessarily unique.
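A symbolic spot check of this example (my own sketch with sympy; the sample points on the interface are arbitrary): each piece has a gradient of unit length away from its own cone vertex, and the pieces agree along the interface, for instance along x_2 = 1 + x_1, the edge between \Omega_{-1} and \Omega_0.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

pieces = [
    sp.sqrt((x1 + 2)**2 + (x2 + 1)**2) - sp.sqrt(2),   # on Omega_{-1}
    -sp.sqrt(x1**2 + (x2 - 1)**2) + sp.sqrt(2),        # on Omega_0
    sp.sqrt((x1 - 2)**2 + (x2 + 1)**2) - sp.sqrt(2),   # on Omega_{+1}
]

# Each piece satisfies (d1 f)^2 + (d2 f)^2 = 1 away from its own cone vertex.
for f in pieces:
    print(sp.simplify(sp.diff(f, x1)**2 + sp.diff(f, x2)**2))   # -> 1

# Spot-check continuity across the edge between Omega_{-1} and Omega_0,
# i.e. along x2 = 1 + x1 with -2 < x1 < 0.
for val in (-1.5, -1.0, -0.3):
    print((pieces[0] - pieces[1]).subs({x1: val, x2: 1 + val}))  # -> 0 (up to rounding)
```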
 
Nice function.

$$f(x_1,x_2)=\left\{\begin{array}{ll}
\sqrt{(x_1+2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{-1} \\
-\sqrt{(x_1-2)^2 + (x_2-3)^2} + 3\sqrt{2},\quad & x\in\Omega_0 \cup \Omega_{+1} \\
\end{array}\right.$$
I guess this will work, too (the critical point is not at the edge of Ω). And many other similar functions.
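The same kind of spot check as above (again my own sketch) applies here: the new piece has a unit-length gradient away from its vertex at (2,3), which lies outside the closure of \Omega, and it agrees with the first piece along the edge x_2 = 1 + x_1.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

piece_m1 = sp.sqrt((x1 + 2)**2 + (x2 + 1)**2) - sp.sqrt(2)        # on Omega_{-1}
piece_new = -sp.sqrt((x1 - 2)**2 + (x2 - 3)**2) + 3*sp.sqrt(2)    # on Omega_0 u Omega_{+1}

# Unit gradient away from the vertex (2, 3), which is not in Omega.
print(sp.simplify(sp.diff(piece_new, x1)**2 + sp.diff(piece_new, x2)**2))   # -> 1

# The two pieces match along the edge x2 = 1 + x1, -2 < x1 < 0.
for val in (-1.5, -1.0, -0.3):
    print((piece_m1 - piece_new).subs({x1: val, x2: 1 + val}))   # -> 0 (up to rounding)
```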
 
  • #10
This PDE is interesting because it could be related to the topics of another thread, "Determine the function from a simple condition on its Jacobian matrix".

If \Omega consists of those points (x_0,x_1) where x_0>0 and |x_1| < x_0, then the function f:\Omega\to\mathbb{R}

$$f(x_0,x_1) = \sqrt{x_0^2 - x_1^2}$$

satisfies a PDE

$$(\partial_0 f)^2 - (\partial_1 f)^2 = 1.$$

In the end I didn't figure out how this would imply anything about the isometry discussion, but anyway, it is at least distantly interesting.
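A quick symbolic check (my own sketch with sympy) of this hyperbolic analogue on the wedge x_0 > |x_1|:

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1', real=True)

f = sp.sqrt(x0**2 - x1**2)   # defined on the wedge x0 > |x1|

lhs = sp.diff(f, x0)**2 - sp.diff(f, x1)**2
print(sp.simplify(lhs))      # -> 1, wherever x0^2 - x1^2 > 0
```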
 
  • #11
sinh and cosh give another set of solutions, f(x_0,x_1) = x_0\cosh(\theta) + x_1\sinh(\theta), in analogy with the cos/sin solutions of the original equation.
\pm\sqrt{c+1}\; x_0 \pm \sqrt{c}\; x_1 for arbitrary c \geq 0 works, too.
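A final symbolic check (again my own sketch) that both families satisfy (\partial_0 f)^2 - (\partial_1 f)^2 = 1:

```python
import sympy as sp

x0, x1, theta = sp.symbols('x0 x1 theta', real=True)
c = sp.Symbol('c', nonnegative=True)

candidates = [
    x0*sp.cosh(theta) + x1*sp.sinh(theta),   # cosh/sinh family
    sp.sqrt(c + 1)*x0 + sp.sqrt(c)*x1,       # one sign choice; the others are analogous
]

for f in candidates:
    lhs = sp.diff(f, x0)**2 - sp.diff(f, x1)**2
    print(sp.simplify(lhs))                  # -> 1 in both cases
```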
 