# Metric for knowing when numerical BC is "good"

Gold Member
Hi PF!

Let's say a boundary condition for an ODE is ##f'(1)+f(1) = 0##. If we solve the ODE numerically, how can I tell whether this BC is satisfied "well enough"? Let's suppose the numerics give ##f'(1)+f(1) = 0.134##; is this close enough to zero?

Mentor
2021 Award
Hi PF!

Let's say a boundary condition for an ODE is ##f'(1)+f(1) = 0##. If we solve the ODE numerically, how can I tell whether this BC is satisfied "well enough"? Let's suppose the numerics give ##f'(1)+f(1) = 0.134##; is this close enough to zero?
If you solve a system numerically, you will always run into the problem of stability. Boundary conditions, and even the differential equations themselves, are nowhere dense sets: thin lines in the space of all possible equations. If you vary them a little, the result can be a completely different one. So you will need a way to estimate the stability of the system. Take, e.g., the ##n \times n## zero matrix: vary only ##n## of its entries a little bit, and hence its norm barely at all, and all of a sudden you get an invertible matrix, which is mathematically diametric to the zero matrix. The same happens with any numerical solution; I suspect this is where chaos theory once started. So for any numerical solution, we need separate information about how robust it is with respect to the approximations made along the way. Any published algorithm therefore needs a study not only of its space and time complexity, but also of how sensitive it is to small variations during its run, be it from floating-point arithmetic, rounding, or its inputs, here the boundary condition.

Visually it can help to draw the corresponding vector fields and check whether nearby boundary conditions lead to nearby trajectories. However, I don't think this will help much in the given example, since you have an entire set of boundary conditions, not just a few uniquely defined ones.
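To make the zero-matrix point concrete, here is a minimal numpy sketch (my own illustration, with arbitrary values): a tiny perturbation of only the diagonal entries barely changes the norm, yet flips the matrix from singular to invertible.

```python
import numpy as np

n = 3
A = np.zeros((n, n))            # the n x n zero matrix: singular, rank 0
eps = 1e-10
A_perturbed = A + eps * np.eye(n)   # vary only n entries, each by eps

# The perturbation is tiny in norm...
norm_change = np.linalg.norm(A_perturbed - A)   # eps * sqrt(n), essentially zero

# ...yet the qualitative character flips: singular -> invertible.
rank_before = np.linalg.matrix_rank(A)
rank_after = np.linalg.matrix_rank(A_perturbed)
```

The point is that "close in norm" says nothing about qualitative properties such as invertibility, which is exactly why a separate robustness analysis is needed.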

How is this related to geometry?

Mentor
Let's say a boundary condition for an ODE is ##f'(1)+f(1) = 0##. If we solve the ODE numerically, how can I tell whether this BC is satisfied "well enough"? Let's suppose the numerics give ##f'(1)+f(1) = 0.134##; is this close enough to zero?
That's a judgement call, based on the context. If ##f## and ##f'## are normally in the range ##[0,1]##, then I would find an error of ~15% unacceptable. But if ##f## can reach ##10^8##, then 0.134 is close enough to 0 for my taste.
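As a sketch of this scaling argument, with made-up magnitudes: the same absolute residual is large or negligible depending on the typical size of ##f## and ##f'##.

```python
# The boundary residual from the original post.
residual = 0.134

# Hypothetical scales for f and f' (illustrative values, not from any computation).
scale_small = 1.0    # f, f' of order 1   -> residual is ~13% relative error
scale_large = 1e8    # f, f' of order 1e8 -> residual is negligible relatively

rel_small = residual / scale_small
rel_large = residual / scale_large
```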

Gold Member
That's a judgement call, based on the context. If ##f## and ##f'## are normally in the range ##[0,1]##, then I would find an error of ~15% unacceptable. But if ##f## can reach ##10^8##, then 0.134 is close enough to 0 for my taste.
Wow, so it really is just a judgement call? Interesting. Thanks for the replies everyone.

Mentor
2021 Award
Wow, so it really is just a judgement call? Interesting. Thanks for the replies everyone.
Not in my opinion. E.g. you could run along a straight line with ##f'(1)+f(1)=0## but into an attractor with ##f'(1)+f(1)=0.134\,.## That is a qualitatively severe difference and might affect the outcome of your model dramatically. Since you originally asked in the differential geometry forum and not in a physics forum, I suspect you are not talking about some kind of experiment but about the math instead. In that case, it also depends on how far from the boundary you want to predict paths. In a neighborhood of the starting point it might be a judgement call, but as soon as you make a global statement, it is no longer a judgement call but pure mathematics.

A better answer, however, depends on the entire situation and the background you haven't given. So as stated, "judgement call" is as good an answer as any, but IMO only one possibility among many.

Gold Member
Since you originally asked in the differential geometry forum and not in a physics forum, I suspect you are not talking about some kind of experiment but about the math instead. In that case, it also depends on how far from the boundary you want to predict paths. In a neighborhood of the starting point it might be a judgement call, but as soon as you make a global statement, it is no longer a judgement call but pure mathematics.
I originally posted in differential equations (or at least that's where I thought I posted; maybe I mis-clicked?). And yeah, I'm asking about the math, not an experiment.

A better answer, however, depends on the entire situation and the background you haven't given. So as stated, "judgement call" is as good an answer as any, but IMO only one possibility among many.
The technique I'm using is similar to a Galerkin method: assume a series solution of the form ##f = \sum_i a_i \phi_i(x)##, where the ##\phi_i## are known and the ##a_i## are to be determined, plug this into the weak formulation, and compute eigenvalues. However, it can be advantageous for me to solve the inverse problem, which amounts to finding inverse differential operators, which implies Green's functions. In this way, the ##\phi_i## don't have to satisfy the boundary conditions, since the Green's functions "enforce" the solution's BCs. I say "enforce" because they never make the BC residual exactly 0, and sometimes it is quite large (1, 2, etc.).

Mentor
2021 Award
I originally posted in differential equations (or at least that's where I thought I posted; maybe I mis-clicked?). And yeah, I'm asking about the math, not an experiment.
See this example:

The technique I'm using is similar to a Galerkin method: assume a series solution of the form ##f = \sum_i a_i \phi_i(x)##, where the ##\phi_i## are known and the ##a_i## are to be determined, plug this into the weak formulation, and compute eigenvalues. However, it can be advantageous for me to solve the inverse problem, which amounts to finding inverse differential operators, which implies Green's functions. In this way, the ##\phi_i## don't have to satisfy the boundary conditions, since the Green's functions "enforce" the solution's BCs. I say "enforce" because they never make the BC residual exactly 0, and sometimes it is quite large (1, 2, etc.).
I'm not an expert in these questions. But as you can see in the example above, it depends on how far you go from the initial conditions. Often we can apply Picard-Lindelöf, so one condition corresponds to one trajectory. The accuracy then depends on the turbulence of the vector field. It wouldn't surprise me if eigenvalue problems alone filled an entire lecture course in numerical mathematics.
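To sketch the one-condition-one-trajectory sensitivity, here is a toy ODE of my own choosing (not from the thread): integrate the same equation from two nearby initial conditions and compare the endpoints.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return [y[0]]          # y' = y: nearby trajectories separate exponentially

# Same ODE, initial conditions differing by 1e-3.
sol_a = solve_ivp(rhs, (0.0, 5.0), [1.0], rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(rhs, (0.0, 5.0), [1.0 + 1e-3], rtol=1e-10, atol=1e-12)

gap_start = 1e-3
gap_end = abs(sol_b.y[0, -1] - sol_a.y[0, -1])   # grows like 1e-3 * e^5
```

For this vector field the initial gap is amplified by a factor of about ##e^5 \approx 148## over the interval, so a "small" error in the condition is not small globally.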

Mentor
2021 Award
Here's a paper I found
http://www.iacm.forth.gr/_docs/pubs/3/Dougalis/FE_Notes_current.pdf
using
solving differential equations using numerical methods Garlekin
on Google. It also includes some stability considerations. I take from
This is the current version of notes that I have used for the past thirty-five years in graduate courses at the University of Tennessee, Knoxville, the University of Crete, the National Technical University of Athens, and the University of Athens.
that it is not copyright protected.

I moved it to Differential Equations, as it turned out to be more of a mathematical question than one about programming or the evaluation of results.

Gold Member
Here's a paper I found
http://www.iacm.forth.gr/_docs/pubs/3/Dougalis/FE_Notes_current.pdf
using
solving differential equations using numerical methods Garlekin
on Google. It also includes some stability considerations. I take from

that it is not copyright protected.

I moved it to Differential Equations, as it turned out to be more of a mathematical question than one about programming or the evaluation of results.
Thanks for doing this. I reviewed the notes you posted, and the stability analysis there appears to relate mostly to time/space stepping. But I don't have any stepping at all. Have I missed something?

Mentor
2021 Award
I don't know how your algorithm works. As a numerical solution, it is stepped by something; whether that variable is called time or something else shouldn't matter.

Gold Member
I don't know how your algorithm works. As a numerical solution, it is stepped by something; whether that variable is called time or something else shouldn't matter.
Hmmmm, I'm not sure it is, though. See, all I'm doing is solving a generalized eigenvalue problem ##A\mathbf{x} = \lambda B\mathbf{x}##, where the components of ##A## and ##B## are integrals of known basis functions. How is there any stepping going on?

Mentor
2021 Award
I don't know the algorithm, so I can't tell what steps it takes. But ##A\mathbf{x}=\lambda B\mathbf{x}## demonstrates the difficulty I was speaking about. If ##\lambda## is an eigenvalue, then ##\lambda + \varepsilon## is probably not. Thus you will have to investigate the implications: error margins of the algorithm as well as of the corresponding eigenvectors. A priori this can range from close to completely different: ##\begin{bmatrix}0&0\\0&0\end{bmatrix}## has the entire space as eigenspace for the eigenvalue ##0##, whereas ##\begin{bmatrix}0.0001& -0.0001\\ 0 & 0.0000001\end{bmatrix}## has no vanishing eigenvalues at all. You spoke about inversions: the perturbed matrix can be inverted, the exact zero matrix cannot. In that case you get a completely different result. Here is where real life and experiments come into play: in real life there are usually no exact solutions, so the case ##A=0## can be neglected, whereas mathematically it cannot.
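A minimal illustration of how far a tiny perturbation can move eigenvalues (my own example matrix, a Jordan-type block rather than the one above): perturbing one entry by ##10^{-8}## shifts the eigenvalues by ##10^{-4}##, four orders of magnitude more.

```python
import numpy as np

# A defective matrix: both eigenvalues are exactly 0.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

eps = 1e-8
A_pert = A.copy()
A_pert[1, 0] = eps          # perturb a single entry by 1e-8

# The perturbed matrix [[0, 1], [eps, 0]] has eigenvalues +-sqrt(eps) = +-1e-4.
lam = np.linalg.eigvals(A_pert)
shift = max(abs(lam))       # eigenvalue shift, vastly larger than eps
```

This ill-conditioning of eigenvalues near defective matrices is exactly why error margins on the computed ##\lambda## need their own investigation.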

Gold Member
I don't know the algorithm, so I can't tell what steps it takes.
It's pretty simple. I have basis functions ##\phi_i(s)## on a domain ##s\in [-s_0,s_0]##. Then $$A_{ij} = \int_{-s_0}^{s_0} \phi_i \partial_n \phi_j\, ds$$ where ##n## is normal to a given surface ##\Gamma## parameterized by ##s##. If ##G(s,\sigma)## is the Green's function of $$\phi''(s) + \left(a^2 - \left(\frac{k \pi}{L}\right)^2\right)\phi(s) = f$$ where ##k## is a natural number and ##a,L## are given, then $$B_{ij} = \int_{-s_0}^{s_0}\int_{-s_0}^{s_0} G(s,\sigma)\phi_i(s) \phi_j(\sigma)\, d\sigma\, ds .$$ With the matrices ##A## and ##B## defined, we can solve. I do all this in a computer algebra package, and can even do it analytically; I just don't see how there is any stepping involved. (I have other questions too, since sometimes I don't get the correct eigenvalues if I let the matrices ##A## and ##B## get larger, which is weird because increasing the size should give better accuracy for all eigenvalues.) Any help? I can also post a new thread, since I basically have two questions: when are the BCs satisfied well enough, and why are the eigenvalues sometimes inaccurate?

I never invert matrices. I use an algebra package that solves the eigenvalue problem without me doing anything.
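As a sketch of the assembly just described, with stand-in choices (a sine basis and a made-up smooth kernel in place of the actual ##\phi_i## and Green's function ##G(s,\sigma)##; only the structure of quadrature assembly plus a generalized eigenvalue solve mirrors the post):

```python
import numpy as np
from scipy.linalg import eig

s0, N, M = 1.0, 4, 200                  # half-width, basis size, quadrature points
s = np.linspace(-s0, s0, M)
w = np.full(M, s[1] - s[0])             # uniform rectangle-rule weights

def phi(i, x):                          # stand-in basis, NOT the real phi_i
    return np.sin((i + 1) * np.pi * (x + s0) / (2 * s0))

def dphi(i, x):                         # its s-derivative, standing in for d/dn
    k = (i + 1) * np.pi / (2 * s0)
    return k * np.cos((i + 1) * np.pi * (x + s0) / (2 * s0))

def G(x, y):                            # made-up symmetric kernel, NOT the real G
    return np.exp(-np.abs(x - y))

# A_ij ~ integral of phi_i * d_n phi_j
A = np.array([[np.sum(w * phi(i, s) * dphi(j, s)) for j in range(N)]
              for i in range(N)])

# B_ij ~ double integral of G(s, sigma) phi_i(s) phi_j(sigma)
Gmat = G(s[:, None], s[None, :])
B = np.array([[np.sum(np.outer(w * phi(i, s), w * phi(j, s)) * Gmat)
               for j in range(N)] for i in range(N)])

eigvals, eigvecs = eig(A, B)            # generalized problem A v = lambda B v
```

As written there is indeed no stepping: the only discretization is the quadrature and the truncation to ##N## basis functions, which is where the accuracy questions live.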

Mentor
I thought you were solving this numerically by the shooting method. But that really doesn't matter. All you need is for the error to be small compared to whichever of the two terms ##f(1)## or ##f'(1)## is smaller in magnitude; that way, you're properly scaled. So, in your calculation, how does 0.134 compare with ##f(1)## and ##f'(1)##?

joshmccraney
Gold Member
I thought you were solving this numerically by the shooting method. But that really doesn't matter. All you need is for the error to be small compared to whichever of the two terms ##f(1)## or ##f'(1)## is smaller in magnitude; that way, you're properly scaled. So, in your calculation, how does 0.134 compare with ##f(1)## and ##f'(1)##?
It's about 10% smaller. Is that sufficient?

Mentor
It's about 10% smaller. Is that sufficient?
I don't think that would be sufficient. How does the 0.134 compare with the maximum f and f' magnitudes along the interval?

For comparison, why don't you solve it numerically by the shooting method and see what you get?
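A minimal shooting-method sketch on a toy Robin problem (my own example, not the fluids system): ##f'' = f## on ##[0,1]## with ##f(0)=1## and ##f'(1)+f(1)=0##, whose exact solution is ##f(x)=e^{-x}##, so the correct initial slope is ##-1##.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def residual(slope):
    # Integrate f'' = f from x = 0 with guessed f'(0) = slope,
    # then return the Robin residual f'(1) + f(1).
    sol = solve_ivp(lambda x, y: [y[1], y[0]], (0.0, 1.0), [1.0, slope],
                    rtol=1e-10, atol=1e-12)
    f1, df1 = sol.y[0, -1], sol.y[1, -1]
    return df1 + f1

# Root-find on the boundary residual: this is the "shooting".
slope = brentq(residual, -2.0, 0.0)
final_residual = abs(residual(slope))   # how well the BC is actually met
```

The point of the comparison: the shooting method drives the boundary residual to essentially round-off, giving a baseline against which 0.134 can be judged.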

Gold Member
I don't think that would be sufficient. How does the 0.134 compare with the maximum f and f' magnitudes along the interval?

For comparison, why don't you solve it numerically by the shooting method and see what you get?
It's smaller, perhaps 50% less than the max. But is the interval even relevant? At the other end of the interval I'm at the other boundary, which has different BCs.

A shooting method won't work for this example; the technique I'm using to solve is very different (see post 15). The only reason I need the Green's function is to construct the inverse of a differential operator, which then acts as what I define as ##B##.

Mentor
It's smaller, perhaps 50% less than the max. But is the interval even relevant? At the other end of the interval I'm at the other boundary, which has different BCs.

A shooting method won't work for this example; the technique I'm using to solve is very different (see post 15). The only reason I need the Green's function is to construct the inverse of a differential operator, which then acts as what I define as ##B##.
Why are you saying that the shooting method will not work on this problem? What are the equations you are solving?

A residual of 50% of the maximum value is not accurate enough.

Gold Member
Why are you saying that the shooting method will not work on this problem? What are the equations you are solving?

A residual of 50% of the maximum value is not accurate enough.
I'm solving a fluids problem, which requires solving a total of 5 equations: 1) continuity, 2) no penetration along the walls, 3) a contact-line condition, 4) conservation of volume, and 5) an energy balance where potential and capillary pressure balance. When solving, I take a basis-function approach, where I build 1), 2), and 4) into the basis functions. In fact, I can also build 3) into the basis functions, but when solving 5) the Green's function should do this for me. However, the evaluations at the contact line don't seem small enough.

Mentor
I'm solving a fluids problem, which requires solving a total of 5 equations: 1) continuity, 2) no penetration along the walls, 3) a contact-line condition, 4) conservation of volume, and 5) an energy balance where potential and capillary pressure balance. When solving, I take a basis-function approach, where I build 1), 2), and 4) into the basis functions. In fact, I can also build 3) into the basis functions, but when solving 5) the Green's function should do this for me. However, the evaluations at the contact line don't seem small enough.
Could I please see the differential equation and boundary conditions?

Gold Member
Could I please see the differential equation and boundary conditions?
Sure.

$$\nabla^2 \phi = 0 \,\,\,(\Omega)\\ \frac{\partial \phi}{\partial n} = 0 \,\,\,(\Sigma)\\ \int_\Gamma \frac{\partial \phi}{\partial n} = 0 \,\,\, (\Gamma)\\ \pm\left.\frac{d}{ds}\frac{\partial \phi}{\partial n} +c\cot \alpha \frac{\partial \phi}{\partial n} \right|_{s=\pm s_0} = 0\,\,\,(\gamma)\\ -\frac{d^2}{ds^2} \frac{\partial \phi}{\partial n} - c^2\frac{\partial \phi}{\partial n} = \lambda \phi \,\,\,(\Gamma).$$

##\Omega## is the fluid domain, ##\Sigma## is the solid surface, ##\Gamma## is the fluid interface in equilibrium parameterized by ##s##, ##\gamma## is the contact line, ##n## is a direction normal to the boundary listed (i.e. when used with ##\Sigma##, ##n## is normal to ##\Sigma##), ##\alpha## is contact angle, ##c## is a parameter, and ##\phi## is the potential.

Mentor
Sure.

$$\nabla^2 \phi = 0 \,\,\,(\Omega)\\ \frac{\partial \phi}{\partial n} = 0 \,\,\,(\Sigma)\\ \int_\Gamma \frac{\partial \phi}{\partial n} = 0 \,\,\, (\Gamma)\\ \pm\left.\frac{d}{ds}\frac{\partial \phi}{\partial n} +c\cot \alpha \frac{\partial \phi}{\partial n} \right|_{s=\pm s_0} = 0\,\,\,(\gamma)\\ -\frac{d^2}{ds^2} \frac{\partial \phi}{\partial n} - c^2\frac{\partial \phi}{\partial n} = \lambda \phi \,\,\,(\Gamma).$$

##\Omega## is the fluid domain, ##\Sigma## is the solid surface, ##\Gamma## is the fluid interface in equilibrium parameterized by ##s##, ##\gamma## is the contact line, ##n## is a direction normal to the boundary listed (i.e. when used with ##\Sigma##, ##n## is normal to ##\Sigma##), ##\alpha## is contact angle, ##c## is a parameter, and ##\phi## is the potential.
So ##\phi## is the velocity potential?

Gold Member
So ##\phi## is the velocity potential?
Yep, it is. I don't think I need help solving these equations, though; I know they're rather time-consuming and don't want to waste your time. I was just curious how people determine when a boundary condition has been sufficiently satisfied if it isn't satisfied exactly.

eys_physics
I think one usually tries to restrict the solution so that the boundary conditions are satisfied exactly (up to round-off errors).
For example, if you have an expansion
$$f(x)=\sum_{k=1}^n a_k \phi_k(x)$$ and a condition ##f(0)=0##, you can select the basis functions such that the condition is satisfied.
However, I'm not sure whether that is possible in your case.
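A sketch of this idea with an illustrative sine basis (coefficients chosen arbitrarily): with ##\phi_k(x) = \sin(k\pi x)##, every basis function vanishes at ##x=0##, so ##f(0)=0## holds for any coefficients ##a_k##, up to round-off.

```python
import numpy as np

# Arbitrary expansion coefficients; the BC at x = 0 holds regardless of them.
a = np.array([0.7, -1.3, 0.2])

def f(x):
    # f(x) = sum_k a_k sin(k*pi*x); each term is 0 at x = 0 by construction.
    return sum(a[k] * np.sin((k + 1) * np.pi * x) for k in range(len(a)))

value_at_0 = f(0.0)   # satisfied exactly, not merely "small"
```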