Kernel/basis function for multiply connected region

coolnessitself
Hi all,

I have a smooth f(x,y) in some region of ##\mathbb{R}^2## that I know satisfies ##0 \le f(x,y) \le 1##. The region has holes. I also know that ##f(x,y) = 0## inside the holes and outside the region. I'm looking for a good choice of polynomials or other functions to expand f in so I can fit my data. Is there a standard choice for something like this? (see attachment)

Thanks
 

Attachments

  • Untitled.png (7.4 KB)
coolnessitself said:
Hi all,

I have a smooth f(x,y) in some region of ##\mathbb{R}^2## that I know satisfies ##0 \le f(x,y) \le 1##. The region has holes. I also know that ##f(x,y) = 0## inside the holes and outside the region. I'm looking for a good choice of polynomials or other functions to expand f in so I can fit my data. Is there a standard choice for something like this? (see attachment)

Thanks

Hey coolnessitself and welcome to the forums.

In terms of a polynomial expression, I don't think you are going to get something good through conventional methods. One suggestion I have is to use integral transforms and only consider projections within the non-hole regions.

The idea is that you have an orthonormal basis for a particular region (i.e. your region minus the holes), and based on that region you construct an orthonormal basis of nth-degree polynomials (or other functions) that will give an approximation to the actual data.

You could also use wavelets, but they are not anything like polynomials.

If you are interested, take a look at Fourier analysis, and at orthogonal polynomials and their construction on domains in R^n (where n = 2 in your case). Then take the holes into account, create an orthonormal basis suited to the left-over domain, and project your data onto it, which will give you a polynomial.

There are other techniques to handle the holes without doing the above but they are going to be way more complicated and computationally expensive.
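As a rough illustration of the idea of fitting only over the hole-free part of the domain, here is a minimal sketch (the region, the synthetic data, and the degree-2 monomial basis are all made up for the example, not from the thread):

```python
import numpy as np

# Sample grid on [-1, 1]^2
x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))

# Hypothetical region: unit disk minus a small circular hole
region = (x**2 + y**2 <= 1.0) & ((x - 0.3)**2 + y**2 >= 0.04)

# Synthetic data: smooth, in [0, 1], zero outside the region
f = np.where(region, 0.5 * (1.0 + np.cos(np.pi * np.hypot(x, y))), 0.0)

# Monomials up to degree 2, evaluated only where f is known to be nonzero
xs, ys, fs = x[region], y[region], f[region]
A = np.stack([np.ones_like(xs), xs, ys, xs * ys, xs**2, ys**2], axis=1)

# Least-squares fit restricted to the hole-free region
coeffs, *_ = np.linalg.lstsq(A, fs, rcond=None)
```

This is plain least squares rather than an explicit orthonormal-basis construction, but it makes the same key move: points inside the holes and outside the region simply never enter the fit.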
 
Thanks for the quick response!
chiro said:
Then take into account the holes and create an orthonormal basis suited for your left-over interval and project your data to that interval which will give a polynomial.
So once I have an expansion in terms of a Fourier series or orthogonal polynomials over the disk, how do I then "take into account" the holes? Or is this not what you mean?

There are other techniques to handle the holes without doing the above but they are going to be way more complicated and computationally expensive.
Since I'm not familiar, do you know a name for these?
 
coolnessitself said:
So once I have an expansion in terms of a Fourier series or orthogonal polynomials over the disk, how do I then "take into account" the holes? Or is this not what you mean?

You only construct a basis for the region where f(x,y) is known to be non-zero (your region minus the holes). Of course, this means that for analysis purposes the fit is not considered outside this region.

Since I'm not familiar, do you know a name for these?

The thing is that for arbitrary regions and functions, you will have to derive them yourself.

The basic idea is to use the Gram-Schmidt process and the L^2 formulation of the inner product to construct the orthonormal basis.

The inner product is used and interpreted in the same way as the inner product on, say, an n-dimensional vector space (in fact, L^2 is a vector space in the same sense, but proving the results requires infinite-dimensional Hilbert-space theory, which is a little harder).
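Numerically, that L^2 inner product over the hole-free region could be approximated by a Riemann sum over a grid, with the holes masked out (the grid, region, and spacing here are illustrative assumptions):

```python
import numpy as np

n = 100
h = 2.0 / (n - 1)                       # grid spacing on [-1, 1]
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

# Illustrative region: unit disk minus one hole
region = (x**2 + y**2 <= 1.0) & (x**2 + (y - 0.2)**2 >= 0.05)

def inner(u, v):
    """<u, v> = integral of u*v over the hole-free region (Riemann sum)."""
    return np.sum(u * v * region) * h * h

# Example: the squared L^2 norm of the constant function 1 is the region's area
area = inner(np.ones_like(x), np.ones_like(x))
```

Masking with `region` inside the sum is exactly the "region minus the holes" restriction: function values inside the holes contribute nothing to the inner product.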

So the first thing is to construct a basis. To do this, start with the set of polynomials you wish to project onto. Note that because you are in R^2, you will need to check the appropriate theory for the multivariate case if there are any issues (I have only done this in R myself, but I imagine it should be fine).

Then from this you create an orthogonal basis by subtracting from each polynomial its projections onto the earlier ones. So think of a basis like <a, bx, cx^2, dx^3, ...> and so on, as the analog of <i, j, k> in the usual R^3 basis.

You then normalize the basis for your region of choice so that each vector has unit length (i.e. <f,f> = 1 and <f,g> = 0 for all vectors in the new orthonormal basis). You then project your data onto each basis vector, and your fit is the linear combination of all the projections with respect to the individual orthonormal basis vectors.

So if your orthonormal basis is <a0, a1, a2, a3, ...> in terms of the orthonormal polynomials, and the projections of the data onto it are <b0, b1, b2, b3, ...>, then the fit is given by a0b0 + a1b1 + a2b2 + a3b3 + ..., where the b's are just real numbers and the a's are polynomials satisfying <an, am> = 1 if n = m and 0 otherwise, where <.,.> is the inner product.
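The whole procedure above can be sketched numerically: Gram-Schmidt on the monomials under the region-restricted inner product, then the fit as the sum of projections. The grid, region, degree, and test data are all assumptions made for the example:

```python
import numpy as np

n = 80
h = 2.0 / (n - 1)
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
region = (x**2 + y**2 <= 1.0) & ((x + 0.2)**2 + y**2 >= 0.04)

def inner(u, v):
    # Discrete L^2 inner product, taken over the hole-free region only
    return np.sum(u * v * region) * h * h

# Starting set: monomials 1, x, y, xy, x^2, y^2 sampled on the grid
monomials = [np.ones_like(x), x, y, x * y, x**2, y**2]

# Gram-Schmidt: subtract projections onto earlier vectors, then normalize
ortho = []
for p in monomials:
    q = p.copy()
    for e in ortho:
        q = q - inner(q, e) * e
    ortho.append(q / np.sqrt(inner(q, q)))

# Synthetic data supported on the region
f = np.where(region, np.exp(-(x**2 + y**2)), 0.0)

# Fit = a0*b0 + a1*b1 + ... with b_n = <f, a_n>, as described above
b = [inner(f, e) for e in ortho]
fit = sum(bn * e for bn, e in zip(b, ortho))
```

The resulting `ortho` vectors satisfy <an, am> = 1 for n = m and 0 otherwise under the masked inner product, so the projection coefficients can be read off one at a time without solving a linear system.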
 