# Generic question on boundary conditions

#### RedX

A partial differential equation requires boundary conditions. Consider a 2-dimensional problem, where the variables are 'x' and 'y'. The boundary is the line x=0 and you are given all sorts of information about the function on that line.

If you are given just the values of the function on the line x=0, then isn't the solution determined uniquely by analytic continuation, so that the differential equation doesn't even matter?

Also, for hyperbolic equations, the boundary conditions together with the differential equation give you the solution in a region bounded by the characteristics and the boundary. However, once you have the solution in this region, don't you also have the solution everywhere, by analytic continuation? For example, you just extend the solution into the new region by a power series about a point in your region?


#### hamster143

I don't think it's enough to have values on the line x=0 to do analytic continuation. You need values along a closed curve, values in a disk, or the value and all derivatives at one point.

Besides, you're making the assumption that the solution is holomorphic when treated as the function of x+iy.

#### RedX

> I don't think it's enough to have values on the line x=0 to do analytic continuation. You need values along a closed curve, values in a disk, or the value and all derivatives at one point.
>
> Besides, you're making the assumption that the solution is holomorphic when treated as the function of x+iy.

I think you only need a line to do analytic continuation. It's the basis for the Schwarz reflection principle.

Disregarding complex functions, you only need the values of a real-analytic function f(x) on a small interval, say $$[0,\epsilon]$$, and then the entire function is determined on $$(-\infty,\infty)$$, without the help of any differential equation. Having the function on that interval gives you all of its derivatives there, so you can make a Taylor series expansion about any point in the interval, and from that you have the whole function for all x-values.
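This Taylor-expansion argument can be illustrated numerically. A minimal sketch (my own illustration, not from the thread), taking f = exp as the example function: its derivatives at a point inside a tiny interval are enough to rebuild it far away.

```python
import math

# Sketch: a real-analytic function known only near x0 determines its
# values everywhere, because all derivatives at x0 are fixed by the
# values on the tiny interval.  Here f = exp, whose n-th derivative at
# x0 is exp(x0), and we rebuild f(3) from local data at x0 = 0.05.

x0 = 0.05                                   # a point inside [0, epsilon]
derivs = [math.exp(x0) for _ in range(60)]  # d^n f / dx^n at x0, n = 0..59

def taylor(x, x0, derivs):
    """Evaluate the truncated Taylor series of f about x0 at the point x."""
    return sum(d * (x - x0) ** n / math.factorial(n)
               for n, d in enumerate(derivs))

# The series rebuilt from purely local data matches f(3) = exp(3):
print(abs(taylor(3.0, x0, derivs) - math.exp(3.0)))  # negligibly small
```

Of course exp is entire; for a function with singularities the continuation would have to proceed by repeated re-expansion, which is exactly where the later discussion in this thread becomes relevant.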

I did some reading about hyperbolic equations, and I think the answer is that solutions to this type of 2nd-order partial differential equation need not be differentiable at certain points. So you can try to continue the solution from the information carried by only part of the boundary, but once your power series runs into a non-differentiable point, the jig is up. So you need to specify the boundary conditions on an entire curve, and not just a small piece of it, because the non-differentiabilities that are allowed can stop your continuation.

An example is the wave equation. If for the moment you take the speed to be +1 to the right, so that the solution is f(x-t), then specifying the function on the boundary t=0, 0<x<1 means giving the value of f(x) for 0<x<1. Once you have this, you have the solution f(x-t) in the region 0<x-t<1. However, you are free to specify f(x) for 1<=x<2, and it need not join differentiably with the f(x) already defined at x=1 (for example, a square wave that is vertical at x=1). So the general result for hyperbolic equations is that there can be discontinuities in the derivative when you hop to another characteristic (in this case, hopping from the region 0<x-t<1 to the region 1<=x-t<2).
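A small sketch of this example (my own illustration): a right-moving square pulse u(x,t)=f(x-t) whose jump at x=1 travels along the characteristic x-t=1, so the data on 0<x<1 at t=0 say nothing about the solution on the far side of that characteristic.

```python
def f(s):
    # square pulse: value 1 on [0, 1), 0 elsewhere -- jump at s = 1
    return 1.0 if 0.0 <= s < 1.0 else 0.0

def u(x, t):
    # right-moving solution of the wave equation, u(x, t) = f(x - t)
    return f(x - t)

# u is constant along each characteristic x - t = const:
assert u(0.5, 0.0) == u(2.5, 2.0) == 1.0

# the discontinuity sits on the characteristic x - t = 1 at every time t:
t = 3.0
print(u(1.0 + t - 1e-9, t), u(1.0 + t + 1e-9, t))  # 1.0 just left, 0.0 just right
```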

I hope this is right.

#### elibj123

Notice the difference between the one-dimensional case and the two (or higher) dimensional case. Remember that analyticity is defined within a neighborhood. While $$[0,\epsilon]$$ is a neighborhood with respect to the domain of f(x), a curve is not a neighborhood with respect to the domain of a PDE solution u(x,y).

You can also see that the values on the curve can't even determine the gradient of u, because at each point you can obtain only the directional derivative of u given by $$\vec{\nabla}u\cdot\hat{T}$$, where T is the unit tangent vector. Knowing u on the curve gives you only one equation for $$\partial_{x}u,\partial_{y}u$$, whereas you need two, so you don't have enough information to analytically continue the function from its boundary values alone; this is where the information given by the PDE itself comes in.

You would be right if you had said that the value of u on some neighborhood (in this case, a two-dimensional set) determines u on the entire plane (given that we want an analytic u(x,y)).

I hope I was clear enough.

P.S. Keep in mind that complex-analytic functions f(z) are nothing but solutions of a system of PDEs: the Cauchy-Riemann equations.
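To make the P.S. concrete, a small numerical check (my own sketch, not from the thread): for f(z) = z², the real and imaginary parts satisfy the Cauchy-Riemann system u_x = v_y, u_y = -v_x, verified here by central differences.

```python
# Real and imaginary parts of the holomorphic function f(z) = z**2:
def u(x, y): return (complex(x, y) ** 2).real   # u = x^2 - y^2
def v(x, y): return (complex(x, y) ** 2).imag   # v = 2xy

h = 1e-6
def d(g, x, y, wrt):
    """Central-difference partial derivative of g with respect to x or y."""
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

# Cauchy-Riemann: u_x = v_y and u_y = -v_x at arbitrary sample points
for (x0, y0) in [(0.3, -1.2), (2.0, 0.7)]:
    assert abs(d(u, x0, y0, 'x') - d(v, x0, y0, 'y')) < 1e-6
    assert abs(d(u, x0, y0, 'y') + d(v, x0, y0, 'x')) < 1e-6
```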

#### RedX

Analytic continuation can be done from a line. For example, f(z)=1/(1-z) is the analytic continuation into the complex z-plane (except for the pole at z=1) of the power series f(x)=1+x+x^2+... on the real interval (-1,1).
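Numerically (my own sketch): the partial sums of that series, which one only "knows" on the real interval, converge to 1/(1-z) at complex points inside the unit disc as well.

```python
def partial_sum(z, n):
    # first n terms of the power series 1 + z + z^2 + ...
    return sum(z ** k for k in range(n))

z = 0.3 + 0.4j                 # |z| = 0.5, inside the disc of convergence
exact = 1.0 / (1.0 - z)        # the analytic continuation 1/(1 - z)
print(abs(partial_sum(z, 60) - exact))  # essentially zero
```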

It's true that the curve can't determine the gradient of u on the curve, only the directional derivative. But if you also specify the derivative of u normal to the curve, along with the values of u on the curve, then together with a 2nd-order differential equation you should be able to determine all the partial derivatives along the curve, and from these you can build a Taylor expansion to extend beyond the curve. But then the question arises: why do you need the values of u and the normal derivative of u along all of the curve? Why not those things on just a small piece of it?

#### elibj123

1) As to the first paragraph, re-read the P.S. You are actually just solving a set of PDE's with given boundary values on a curve.

2) You can't assume the gradient is normal to the curve. An arbitrary boundary curve may or may not be a level curve of the function, so the relation between the gradient and the tangent is arbitrary.
I'm going to try to show more precisely that we don't have sufficient information.
Suppose we are working with a real function known on [a,b], and we want to analytically continue it to R. Then, e.g., as we approach b from x<b, we know the left first derivative, second derivative, third derivative and so on, so we can infinitesimally determine f(x) at $$x=b+\epsilon$$, and by iteratively integrating (using Taylor's theorem) we continue f to the entire axis, so it is analytic.

Now we look at a function u(x,y) which is given only on a curve $$\gamma$$ with a parametrization x(t), y(t), so we have a function I(t) with u(x(t),y(t))=I(t).
Suppose I want to analytically continue u to a neighborhood of (x(0),y(0)).
I differentiate I(t) and find that the directional derivative of u in the direction T=(T1,T2)=(x'(0),y'(0)) (a known vector; assume it is normalized) is I'(0) (a known quantity). Now recall the formula for the directional derivative:

$$\frac{\partial u}{\partial\hat{n}}=\vec{\nabla}u\cdot\hat{n}$$

Then you obtain the equation
$$T_{1}\partial_{x}u+T_{2}\partial_{y}u=I'(0)$$

So far so good, and we are about to integrate our information to get u in a ball around (x(0),y(0)). But wait. In the one-dimensional case we had only one direction to go in, so we needed only one number to determine the derivative at that point (f'(b)). Now we want to go not in one direction, but in infinitely many of them. Luckily for us, the derivative of u in any direction depends on only two numbers, the components of the gradient. OK, let's go back. WAIT! We have only one equation for the two quantities we need. We don't have sufficient information.
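The counting argument can be checked with two lines of arithmetic (a sketch with made-up numbers): any multiple of the normal N=(-T2,T1) can be added to a candidate gradient without disturbing the single equation, so the curve data leave the gradient undetermined.

```python
import math

T = (1 / math.sqrt(2), 1 / math.sqrt(2))  # unit tangent at the point (made up)
Ip = 3.0                                  # I'(0), the known directional derivative

grad_a = (Ip * T[0], Ip * T[1])           # one gradient with T . grad = I'(0)
N = (-T[1], T[0])                         # unit normal, orthogonal to T
grad_b = (grad_a[0] + 5.0 * N[0],
          grad_a[1] + 5.0 * N[1])         # a different gradient, same equation

for g in (grad_a, grad_b):
    # both satisfy T1*ux + T2*uy = I'(0), yet they differ:
    assert abs(T[0] * g[0] + T[1] * g[1] - Ip) < 1e-12
assert grad_a != grad_b
```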

To summarize my point: to continue a function from a given point you need its values in a small neighborhood around that point. That way you can obtain the gradient (in a neighborhood you have infinitely many equations to determine it), then continue the function infinitesimally in any desired direction, then iterate the process.

#### RedX

I never thought about it that way, that analytic continuation is possible because of a differential equation (the Cauchy-Riemann conditions), but that's precisely what allows the two variables (x,y) to be treated just like one (z=x+iy), so that a curve suffices where a disc in $$R^2$$ would otherwise be needed: the presence of that differential equation allows this. Good catch.

I misspoke when I said the gradient is normal to the boundary curve. I meant the normal derivative along the boundary curve. If the boundary curve coincides with a level curve of the function, then the gradient is normal to the boundary curve and the normal derivative equals its magnitude. But I understand you now.

A hyperbolic equation can be put into the form:

$$\frac{\partial^2f}{\partial \nu \partial \eta}=\Psi$$

where $$\Psi$$ is a function of the first derivatives, the coordinates, and f itself. Since the equation does not involve $$\frac{\partial^2f}{\partial \nu^2}$$, that derivative is unconstrained and can become infinite, so $$\frac{\partial f}{\partial \nu}$$ can have a discontinuity as you cross a characteristic $$\nu=\text{const}$$. So that's why you need infinitely many derivatives (determined with the help of the PDE) in all directions along an entire curve, and not just a portion of it: analyticity is lost, and a Taylor series about a single point on the curve will fail when it runs into the discontinuity.
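For the 1-D wave equation $$f_{tt}=f_{xx}$$ this canonical form can be made explicit (a quick sketch of the standard change of variables): with the characteristic coordinates $$\nu=x-t$$ and $$\eta=x+t$$, the chain rule gives $$\partial_{x}=\partial_{\nu}+\partial_{\eta}$$ and $$\partial_{t}=-\partial_{\nu}+\partial_{\eta}$$, so

$$f_{tt}-f_{xx}=-4\,\frac{\partial^2f}{\partial \nu \partial \eta}=0$$

i.e. $$\Psi=0$$ in this case, and the general solution is $$f=F(\nu)+G(\eta)$$ with F and G arbitrary. F may have a kink, and that kink travels along a characteristic $$\nu=\text{const}$$, exactly the square-wave situation from earlier in the thread.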
