# Is there a solution to this simple 1st order PDE?

#### docnet

Homework Statement
Relevant Equations
This isn't homework, but I was just wondering whether the following PDE has an analytic solution.

$$\partial_x u(t,x)=u(t,x)$$

where ##x\in R^n## and ##\partial_x## implies a derivative with respect to the spatial variables.

If ##\ u = f(t)e^x\ ## then ##\quad \partial_x u(t,x)=u(t,x)##
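This ansatz can be verified symbolically; a quick SymPy sketch (one spatial dimension assumed), with ##f## left as an arbitrary function of ##t##:

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')        # arbitrary function of t
u = f(t) * sp.exp(x)        # the ansatz u = f(t) e^x

# du/dx = u holds for any f, since f(t) is constant in x
assert sp.simplify(sp.diff(u, x) - u) == 0
```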

If ##\ u = f(t)e^x\ ## then ##\quad \partial_x u(t,x)=u(t,x)##
Thank you BvU for preventing my headache. Just so that I'm understanding this correctly, did you define ##f(t)## to be an arbitrary function defined for ##t\in(-\infty,\infty)##, and you defined ##u(t,x)## for all of ##R^n##?

Thank you BvU for preventing my headache. Just so that I'm understanding this correctly, did you define ##f(t)## to be an arbitrary function defined for ##t\in(-\infty,\infty)##, and you defined ##u(t,x)## for all of ##R^n##?
##u(t, x)## would be defined on some subset of ##\mathbb R^2##, not ##\mathbb R^n##

##u(t, x)## would be defined on some subset of ##\mathbb R^2##, not ##\mathbb R^n##

what about the case when ##x\in R^n## so ##u(t,x)\Rightarrow u(t,x_1,x_2,...x_n)##?
Can ##u(t,x)## be a short-hand way of writing the latter case?

It can be, but I'm not sure exactly what ##\partial_x## would mean in that case.

If it's the partial derivative with respect to each variable consecutively, then ##e^{\sum x_i}## still works.

It can be, but I'm not sure exactly what ##\partial_x## would mean in that case.

If it's the partial derivative with respect to each variable consecutively, then ##e^{\sum x_i}## still works.
Thank you. May I ask one more question?

Is the solution ##e^{\sum x_i}## to the PDE ##\partial_xu(t,x)=u(t,x)## derived by a theorem?

We know the solution ##v=Ce^t## to the ODE problem ##v'=v## is found by separating the variables ##v## and ##t## and integrating. Is the PDE case analogous, or much more complicated?
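For the ODE case, SymPy's `dsolve` carries out exactly that separation; a quick sketch:

```python
import sympy as sp

t = sp.symbols('t')
v = sp.Function('v')

# v' = v: dsolve separates dv/v = dt and integrates, giving v = C1 e^t
sol = sp.dsolve(sp.Eq(v(t).diff(t), v(t)), v(t))
print(sol)

# the returned solution satisfies v' = v
assert sp.simplify(sol.rhs.diff(t) - sol.rhs) == 0
```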

I was mostly just copying the idea of the solution from the two-dimensional case.

In general, PDEs are much harder to solve than ODEs are.

what about the case when ##x\in R^n## so ##u(t,x)\Rightarrow u(t,x_1,x_2,...x_n)##?
Can ##u(t,x)## be a short-hand way of writing the latter case?
I suppose it could, but that would have to be given information. Since you didn't show any such information, it's really a stretch to make that assumption. And as Office_Shredder mentioned, it would be meaningless or at least ambiguous to talk about the partial with respect to x.

Aren't you just bamboozling yourself with notation? Isn't this just ##\dfrac {du}{dx}=u## ? For which the solution is ##u=Ae^{x}## - just the familiar exponential function or curve of ##u## against ##x##. With ##A## an arbitrary constant, determined by the 'initial' value of ##u## at any particular ##x##. Then this curve may be magnified or contracted as any extraneous variable, call it ##t##, changes any which way. Think of this as changing the initial condition in any which way and represent it: ##u=A(t)e^{x}## .

I suppose it could, but that would have to be given information. Since you didn't show any such information, it's really a stretch to make that assumption. And as Office_Shredder mentioned, it would be meaningless or at least ambiguous to talk about the partial with respect to x.

I'm sorry about that. Not making excuses, but my PDE professor likes to use the short-hand ##x \in R^n## to mean ##x=(x_1,x_2,...,x_n)##; ##u(t,x)## means ##u(t,x_1,x_2,...,x_n)##, and ##\partial_x## means partial derivatives with respect to each spatial variable consecutively. His notations have caused confusion for me and other classmates before, but we got familiar with them over time. The professor likes to think in abstract terms and use abstract notations whenever possible. I realize his notations are not usual.

Aren't you just bamboozling yourself with notation? Isn't this just ##\dfrac {du}{dx}=u## ? For which the solution is ##u=Ae^{x}## - just the familiar exponential function or curve of ##u## against ##x##. With ##A## an arbitrary constant, determined by the 'initial' value of ##u## at any particular ##x##. Then this curve may be magnified or contracted as any extraneous variable, call it ##t##, changes any which way. Think of this as changing the initial condition in any which way and represent it: ##u=A(t)e^{x}## .

Wow, I think you make a great point. Suppose ##x\in R^n## where ##n>1##; then how will your description change to account for the extra spatial variables? Thanks, I will chew on this overnight.

We can make the change of variables ##x_1=u+v## and ##x_2=u-v##. Ignore t and pretend we're doing it in two dimensions.

Then ##\partial_{x_1}=\partial_u+\partial_v## and ##\partial_{x_2}=\partial_u-\partial_v##, so ##\partial_x =\partial^2_u+\partial^2_v##. This means once we have any particular solution, we can add homogeneous solutions, which are harmonic functions. There are a lot of them; for example, the real and imaginary parts of any complex differentiable function are both harmonic.

Thank you BvU for preventing my headache. Just so that I'm understanding this correctly, did you define ##f(t)## to be an arbitrary function defined for ##t\in(-\infty,\infty)##, and you defined ##u(t,x)## for all of ##R^n##?
where ##x\in R^n## and ##\partial_x## implies a derivative with respect to the spatial variables.
I humbly concede that I totally overlooked the possible vector character of ##x## (and possibly also the vector character of ##u## ?) because I am so used (spoiled rotten?) to a clear distinction in notation: either ##\bf x, \ \bf u## or ##\vec x, \ \vec u## -- used consistently -- denote vectors; ## \ \ x, \ u## denote scalars. Even then, ##\ \displaystyle {\partial\over\partial\vec x} ## is to be avoided in favor of ##\ \nabla\ ## or even ##\ \vec\nabla\ ##

##\nabla\cdot\vec u\ ## is unambiguously a scalar, and ##\nabla u\ ## is unambiguously a vector.
The Jacobian ##\nabla \vec u\ ## is a matrix and personally I prefer ##\ \vec\nabla\vec u\ ##
(but I still have to find a way to make it look reasonable : ##\vec \nabla \vec{\vphantom{\nabla}u}## is ugly too )

@Office_Shredder saves my skin in #6 and you explain how this can pop up in #11. I sympathize but can't help and can't force PDEprof to share my preferences.

What remains in my perception is that PDEprof does do harm using ##\partial _x## this way: acting on a scalar it yields a vector that doesn't look like one and, by the same token: acting on a vector that looks like a scalar it yields a matrix that looks like a scalar (or a vector) too.

And now I must chew on #12 because I don't understand it ...


We can make the change of variables ##x_1=u+v## and ##x_2=u-v##. Ignore t and pretend we're doing it in two dimensions.

Then ##\partial_{x_1}=\partial_u+\partial_v## and ##\partial_{x_2}=\partial_u-\partial_v##, so ##\partial_x =\partial^2_u+\partial^2_v##. This means once we have any particular solution, we can add homogeneous solutions, which are harmonic functions. There are a lot of them; for example, the real and imaginary parts of any complex differentiable function are both harmonic.
Man I don't understand this at all...Can you flesh it out for the slow kids? Sorry

It can be, but I'm not sure exactly what ##\partial_x## would mean in that case.

If it's the partial derivative with respect to each variable consecutively, then ##e^{\sum x_i}## still works.
In which case I would want to see an index: ##\ \partial_{x_i} \ ##
$$\partial_{x_i} u(t,x)=u(t,x)$$
meaning
$${\partial u(t,\vec x)\over \partial x_i}=u(t,\vec x)$$

Or does "derivative with respect to each spatial variable consecutively" mean $$\frac{\partial^n}{\partial x_1 \cdots \partial x_n}$$ which makes $$\frac{\partial ^n u}{\partial x_1 \cdots \partial x_n} = u$$ make sense for scalar or vector ##u##. This can be solved by separation of variables.
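Under this mixed-partial reading, the separable ansatz ##u = e^{x_1+x_2+x_3}## can be checked with SymPy (##n=3## as an example):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')   # n = 3 spatial variables
u = sp.exp(x1 + x2 + x3)              # separable ansatz X1(x1) X2(x2) X3(x3), each Xi = e^{x_i}

# the mixed partial d^3 u / (dx1 dx2 dx3) returns u
assert sp.simplify(sp.diff(u, x1, x2, x3) - u) == 0
```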

Man I don't understand this at all...Can you flesh it out for the slow kids? Sorry

That's good you don't, because it was full of mistakes.

Ok, consider the change of variables
##u=x_1+x_2##, ##v=x_1-x_2## (not the change of variables I said, I flipped it)

Then by the chain rule, ##\frac{\partial f}{\partial x_1}= \frac{\partial f}{\partial u} \frac{\partial u}{\partial x_1} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x_1}##

The partial derivatives of ##u## and ##v## are just 1 of course.

Which I will rewrite in the notation of the original post as ##\partial_{x_1} = (\partial_u+\partial_v)##

Similarly, ##\partial_{x_2} = (\partial_u-\partial_v)##

Notice that these partial derivative operators compose like multiplication: ##\partial_x \partial_y## is the partial derivative with respect to ##y##, then with respect to ##x## (and these commute).

So ##\partial_{x_1} \partial_{x_2} = (\partial_u+\partial_v)(\partial_u-\partial_v)##.

Expanding the right hand side gives ##\partial^2_u+\partial_v\partial_u - \partial_u \partial_v - \partial^2_v##. The middle two terms cancel, leaving us with ##\partial^2_u - \partial^2_v,## (not the plus sign I promised)
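The operator identity can be sanity-checked with SymPy by applying both sides to an arbitrary test function ##f(u,v)##:

```python
import sympy as sp

uu, vv = sp.symbols('u v')
f = sp.Function('f')(uu, vv)   # arbitrary test function f(u, v)

# apply (d_u + d_v) after (d_u - d_v) to f
inner = sp.diff(f, uu) - sp.diff(f, vv)
lhs = sp.diff(inner, uu) + sp.diff(inner, vv)

# compare with d_u^2 f - d_v^2 f: the mixed-partial cross terms cancel
rhs = sp.diff(f, uu, 2) - sp.diff(f, vv, 2)
assert sp.simplify(lhs - rhs) == 0
```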

Then I made a mistake, since I forgot ##u## showed up on the right-hand side. If you had ##\partial_x u = f(x)## and you had some solution ##u_f##, then given any solution of ##\partial_x u =0##, say ##u_0##, ##u_f+u_0## is a solution to ##\partial_x u =f(x)##. But this differential equation isn't in that form, so that's not a useful observation.

My apologies for the mistakes.