# Generic question on boundary conditions

• RedX
In summary, a partial differential equation requires boundary conditions, typically given along a line in a 2-dimensional problem. However, having only the values of the function on that line does not determine the solution uniquely through analytic continuation. For hyperbolic equations, the boundary conditions together with the differential equation determine the solution in a region bounded by characteristics, but not necessarily everywhere. This is because the solution may have non-differentiable points past which it cannot be continued from the boundary values alone. Specifying the boundary conditions along an entire curve is therefore necessary. In the case of complex functions, analyticity is defined within a neighborhood, so specifying values on a curve is not enough; one also needs to specify the normal derivative along the curve.
RedX
A partial differential equation requires boundary conditions. Consider a 2-dimensional problem, where the variables are 'x' and 'y'. The boundary is the line x=0 and you are given all sorts of information about the function on that line.

If you are given just the values of the function on the line x=0, then isn't the solution determined uniquely by analytic continuation, so that the differential equation doesn't even matter?

Also, for hyperbolic equations, the boundary conditions together with the differential equation give you the solution in a region bounded by the characteristics and the boundary. However, once you have the solution in this region, don't you also have it everywhere by analytic continuation? For example, you just extend the solution by a power series about a point in your region into the new region.

I don't think it's enough to have values on the line x=0 to do analytic continuation. You need values along a closed curve, values in a disk, or the value and all derivatives at one point.

Besides, you're making the assumption that the solution is holomorphic when treated as a function of x+iy.

hamster143 said:
I don't think it's enough to have values on the line x=0 to do analytic continuation. You need values along a closed curve, values in a disk, or the value and all derivatives at one point.

Besides, you're making the assumption that the solution is holomorphic when treated as a function of x+iy.

I think you only need a line to do analytic continuation. It's the basis for the Schwarz reflection principle.

Disregarding complex functions, you only need the values of a real-analytic function f(x) on a small interval, say $$[0,\epsilon]$$, and then the entire function is determined on $$(-\infty,\infty)$$, without the help of any differential equation. Having the function on that interval allows you to make a Taylor series expansion about any point in the interval, and so you have the whole function for all x-values.
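As a quick numerical illustration of this claim (a minimal sketch; the function and the number of terms are arbitrary choices), the derivatives of sin at a single point, which are all that data on a tiny interval around that point encodes, rebuild its value far away:

```python
# Sketch: local data (the derivatives at one point) determine a
# real-analytic function everywhere.  We rebuild sin(2) purely from
# the derivatives of sin at x = 0.
import math

def taylor_sin(x, n_terms=30):
    # Derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
    total = 0.0
    for k in range(n_terms):
        deriv = [0.0, 1.0, 0.0, -1.0][k % 4]
        total += deriv * x**k / math.factorial(k)
    return total

print(taylor_sin(2.0))   # agrees with math.sin(2.0) to many digits
```

The same trick fails for a function that is merely differentiable (or even smooth but not analytic), which is why the distinction matters here.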

I did some reading about hyperbolic equations, and I think the answer is that solutions to these types of 2nd-order partial differential equations are not differentiable at certain points. So you can try to continue the solution from the information that comes from having only part of the boundary, but once your power series runs into a non-differentiable point, the jig is up. So you need to specify the boundary conditions on an entire curve, and not just a small piece of it, because the non-differentiabilities that are allowed stop your continuation.

An example is the wave equation. If for the moment you take the speed to be +1 to the right, so that the solution is f(x-t), then specifying the function at the boundary t=0, 0<x<1, means giving the value of f(x) for 0<x<1. Once you have this, you have the solution f(x-t) in the region 0<x-t<1. However, you are free to specify f(x) for 1<=x<2, and it need not join differentiably with the f(x) defined before at x=1 (for example, a square wave that's vertical at x=1). So the general result for hyperbolic equations is that there can be discontinuities in the derivative when you hop to another characteristic (in this case, hopping from the region 0<x-t<1 to the region 1<=x-t<2).
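The transport of boundary data along characteristics can be sketched numerically (a minimal example; the piecewise profile f is a hypothetical choice with a corner at s = 1):

```python
# Sketch: for the right-moving wave u(x, t) = f(x - t), boundary data
# f on 0 < x < 1 at t = 0 is carried unchanged along the
# characteristics x - t = const; a corner in f simply travels along.
def f(s):
    # Hypothetical profile with a corner at s = 1.
    return s if s < 1.0 else 2.0 - s

def u(x, t):
    return f(x - t)

# The value prescribed at (x = 0.5, t = 0) reappears at every point of
# the characteristic x - t = 0.5:
print(u(0.5, 0.0), u(1.5, 1.0), u(10.5, 10.0))  # all equal f(0.5) = 0.5
```

Nothing in the equation smooths out the corner: it persists for all time along the line x - t = 1, which is exactly the loss of analyticity described above.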

I hope this is right.

Notice the difference between the one-dimensional case and the two (or higher) dimensional case. Remember that analyticity is defined within a neighborhood. While $$[0,\epsilon]$$ is a neighborhood with respect to the domain of f(x), a curve is not a neighborhood with respect to the domain of a PDE solution u(x,y).

You can also see that the curve can't even determine the gradient of u, because at each point you can obtain only the directional derivative of u given by $$\vec{\nabla}u\cdot\hat{T}$$, where $$\hat{T}$$ is the unit tangent vector. Knowing u on the curve gives you only one equation for $$\partial_{x}u,\partial_{y}u$$, whereas you need two of them, so you don't have enough information to analytically continue the function just from its boundary values: that is where the information supplied by the PDE itself comes in.

You would be right if you had said that the value of u on some neighborhood (in this case, a two-dimensional set) determines u on the entire plane (given we want an analytic u(x,y)).

I hope I was clear enough.

P.S. Keep in mind that complex-analytic functions f(z) are nothing but solutions of a system of PDEs: the Cauchy-Riemann equations.

Analytic continuation can be done from a line. For example, f(z)=1/(1-z) is the analytic continuation into the complex plane (except for the pole at z=1) of the power series f(x)=1+x+x^2+... on the real interval (-1,1).
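A small sketch of this example (the truncation length of the series is an arbitrary choice):

```python
# Sketch: the power series 1 + z + z^2 + ... converges only for |z| < 1,
# but there it pins down f(z) = 1/(1 - z), and the closed form then
# continues f to the whole plane minus the pole at z = 1.
def partial_sum(z, n=200):
    return sum(z**k for k in range(n))

z_inside = 0.5
print(partial_sum(z_inside), 1.0 / (1.0 - z_inside))  # both ~ 2.0

# Outside the disc of convergence the series is useless, but the
# continued function still makes sense:
z_outside = 3.0
print(1.0 / (1.0 - z_outside))  # -0.5
```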

It's true that the curve can't determine the gradient of u on the curve, but only the directional derivative. But if you also specify the gradient of u normal to the curve, along with the values of u on the curve, then together with a 2nd-order differential equation you should be able to determine all the partial derivatives along the curve, and from these you can build a Taylor expansion to extend beyond the curve. But then the question comes up: why do you need the values of u and the normal derivative of u along all of the curve? Why not those things on just a small piece of it?
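The step from the data on the curve to all partial derivatives can be sketched as follows (a standard Cauchy-Kowalevski-style argument, with the boundary taken to be the line y = 0 for simplicity):

```latex
% Cauchy data on y = 0:  u(x,0) = g(x), \quad u_y(x,0) = h(x).
% Tangential derivatives come directly from the data:
%   u_x = g'(x), \qquad u_{xx} = g''(x), \qquad u_{xy} = h'(x).
% If the PDE can be solved for the second normal derivative,
\[
  u_{yy} = F\bigl(x,\, y,\, u,\, u_x,\, u_y,\, u_{xx},\, u_{xy}\bigr),
\]
% then u_{yy} on y = 0 follows from the data, and differentiating the
% PDE repeatedly yields every higher derivative -- i.e. the Taylor
% coefficients of u at a boundary point.
```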

1) As to the first paragraph, re-read the P.S. You are actually just solving a set of PDEs with given boundary values on a curve.

2) You can't assume the gradient is normal to the curve. An arbitrary boundary curve may or may not be a level curve of the function, so the relation between the gradient and the tangent is arbitrary.
I'm going to try to show more precisely that we don't have sufficient information.
Suppose we are working with a real-analytic function defined on [a,b], and we want
to continue it to all of R. Then, for example, as we approach b from x<b, we know the left first derivative, second derivative, third derivative, and so on, so we can determine f(x) infinitesimally at $$x=b+\epsilon$$, and by iterating this sort of integration (using Taylor's theorem) we continue f to the entire axis, so that it is analytic.

Now look at a function u(x,y) which is given only on a curve $$\gamma$$ with parametrization x(t),y(t), so that we have a function I(t) with u(x(t),y(t))=I(t).
Suppose I want to analytically continue u to a neighborhood of (x(0),y(0)).
I differentiate I(t) and find that the directional derivative of u in the direction T=(T1,T2)=(x'(0),y'(0)) (a known vector, assumed normalized) is I'(0) (a known quantity). Now recall the formula for the directional derivative:

$$\frac{\partial u}{\partial\hat{T}}=\vec{\nabla}u\cdot\hat{T}$$

Then you obtain the equation
$$T_{1}\partial_{x}u+T_{2}\partial_{y}u=I'(0)$$

So far so good, and we are about to integrate our information to get u in a ball around (x(0),y(0)). But wait. In the one-dimensional case we had only one direction to go in, so we needed only one number to determine the derivative at that point, f'(b). Now we want to go not in one direction but in infinitely many of them. Luckily for us, the derivative of u in any direction depends on only two numbers, the components of the gradient vector. OK, let's go back. Wait: we have only one equation for the two quantities we need. We don't have sufficient information.
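This underdetermination is easy to exhibit concretely (a minimal sketch with two hypothetical functions chosen to agree on the curve y = 0):

```python
# Sketch: data on a curve fixes only the tangential derivative of u.
# Two functions that agree on the curve y = 0 can have different
# normal derivatives there, so curve values alone cannot determine
# the gradient.
def u1(x, y):
    return x            # u1(x, y) = x

def u2(x, y):
    return x + y        # u2(x, y) = x + y

# On the curve (x(t), y(t)) = (t, 0) both give the same data I(t) = t:
for t in [0.0, 0.5, 1.0]:
    assert u1(t, 0.0) == u2(t, 0.0) == t

# ...yet their y-derivatives (the normal direction) differ: 0 vs 1.
eps = 1e-6
dn1 = (u1(0.5, eps) - u1(0.5, 0.0)) / eps   # ~ 0
dn2 = (u2(0.5, eps) - u2(0.5, 0.0)) / eps   # ~ 1
print(dn1, dn2)
```

Any amount of data confined to the curve is blind to the difference between u1 and u2; only a PDE (or the normal derivative itself) can break the tie.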

To summarize my point: to continue a function from a given point you need its values in a small neighborhood around that point. That way you can obtain the gradient (in a neighborhood you have infinitely many equations to determine it) and then continue the function infinitesimally in any desired direction, iterating the process.
Otherwise you're going to need more information about it (a PDE).

I never thought about it that way, that analytic continuation is possible because of a differential equation (the Cauchy-Riemann conditions), but that's precisely what allows two variables (x,y) to be treated just like one (z=x+iy), needing only a curve in the complex plane rather than a disc in $$R^2$$: the presence of that differential equation allows this. Good catch.

I misspoke when I said normal gradient to the boundary curve. I meant the normal derivative at the boundary curve. If the boundary curve coincides with a level curve of u, then the gradient is normal to the boundary curve and the normal derivative equals its magnitude. But I understand you now.

A hyperbolic equation can be put into the form:

$$\frac{\partial^2f}{\partial \nu \partial \eta}=\Psi$$

where $$\Psi$$ is a function of the first derivatives, the coordinates, and f itself. Since the equation does not involve $$\frac{\partial^2f}{\partial \nu^2}$$, that derivative is unconstrained and can blow up, so $$\frac{\partial f}{\partial \nu}$$ can have a discontinuity as you cross a characteristic $$\nu = const$$. That's why you need infinitely many derivatives (determined with the help of the differential equation) in all directions along an entire curve, and not just a portion of it: analyticity is lost, and a Taylor series about a single point on the curve will fail when it runs into the discontinuity.
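For reference, here is how the 1-D wave equation reaches this canonical form (the standard change to characteristic coordinates):

```latex
% For u_{tt} - u_{xx} = 0, pass to characteristic coordinates
%   \nu = x - t, \qquad \eta = x + t.
% The chain rule gives
\[
  \partial_t = -\partial_\nu + \partial_\eta, \qquad
  \partial_x = \partial_\nu + \partial_\eta,
\]
\[
  u_{tt} - u_{xx}
  = -4\,\frac{\partial^2 u}{\partial\nu\,\partial\eta} = 0
  \quad\Longrightarrow\quad
  \frac{\partial^2 u}{\partial\nu\,\partial\eta} = 0,
\]
% whose general solution u = F(\nu) + G(\eta) = F(x-t) + G(x+t)
% makes the role of the characteristics explicit.
```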

## 1. What are boundary conditions?

Boundary conditions are a set of constraints or limitations that are imposed on a mathematical model or physical system. These conditions define the behavior of the system at its boundaries, and are essential for solving a problem or predicting the behavior of the system.

## 2. Why are boundary conditions important?

Boundary conditions are important because they help to define the problem at hand and provide a framework for solving it. They also help to ensure that the solution to a problem is physically or mathematically valid, by limiting the range of possible solutions.

## 3. How do boundary conditions affect the accuracy of a model?

Boundary conditions can have a significant impact on the accuracy of a model. If the boundary conditions are not properly defined or are incorrect, it can lead to inaccurate or unrealistic results. This is because the behavior of a system at its boundaries can have a significant influence on the overall behavior of the system.

## 4. What types of boundary conditions are there?

There are several types of boundary conditions, including fixed or prescribed values, periodic conditions, symmetry conditions, and free or open boundaries. The type of boundary condition used depends on the specific problem being solved and the behavior of the system at its boundaries.
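As a minimal sketch of the first kind listed (fixed or prescribed values, i.e. Dirichlet conditions; the grid size and iteration count are arbitrary choices):

```python
# Sketch: Dirichlet boundary conditions in practice.  Solve
# u''(x) = 0 on (0, 1) with u(0) = 0, u(1) = 1 by Jacobi iteration
# on a uniform grid; the exact solution is u(x) = x.
n = 21
u = [0.0] * n
u[-1] = 1.0                      # boundary values are held fixed

for _ in range(5000):            # simple Jacobi sweeps
    new = u[:]
    for i in range(1, n - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])
    u = new

print(u[n // 2])                 # ~ 0.5, i.e. u(1/2) for u(x) = x
```

Changing the two pinned endpoint values changes the whole solution, which is the sense in which the boundary conditions "select" one member of the solution family.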

## 5. How do you determine the appropriate boundary conditions for a problem?

Determining the appropriate boundary conditions for a problem can be a complex process and often requires knowledge and experience in the specific field of study. It involves understanding the behavior of the system at its boundaries, considering any physical or mathematical constraints, and selecting the appropriate type of boundary condition to accurately represent the problem at hand.
