Simpson's Rule in Matlab

  • Thread starter: mcooper
  • Tags: Matlab
In summary, the conversation discusses the integration of a function over the boundary of a unit circle using Simpson's Rule. The function is given by the product of two logarithms, with one of the variables being a curvilinear coordinate along the boundary of the domain. The goal is to implement this on Matlab and store the results in an NxN matrix. The conversation also mentions the use of singularities and the need to solve a system of equations to find the unknowns. However, there seem to be issues with accuracy and code debugging.
  • #1
mcooper
I am trying to integrate log(ri)*log(rj)*ds over the boundary of the unit circle, i,j=1,...,N, where s is a curvilinear coordinate along the boundary of the domain and ri is the Euclidean distance between a point (x,y) in the unit circle and a point outside of the domain.

I want to implement this in Matlab using Simpson's Rule. The answer should fill an NxN matrix A, with A(i,j) = Integral(log(ri)*log(rj)) ds.

So far I have calculated the integrand for a chosen (x,y) and have the values stored in an NxN matrix. I have written working code for Simpson's rule but am not sure how to apply it to this problem.

Does my chosen (x,y) have to be on the boundary of the domain since that is what I am integrating over?

Any suggestions would be extremely helpful.

Thanks in advance.
 
  • #2
Sorry! The points outside the domain that I have mentioned are discretely distributed in a circle of radius 3.
 
  • #3
From what I understood, you have calculated the value of the integrand at one point for each of the NxN integrands. This does not quite suffice, as the function to be integrated is not constant over the integration interval (write down, with pen and paper, the function you want to integrate). In some sense, you don't actually get to choose (x,y), as they are the variables being integrated over (Simpson's rule takes care of the sampling when you provide the sampling frequency).

For each of the NxN integrals, then, you should have some number n of values of the integrand. These values you then sum using Simpson's rule.

I'm not sure if I'm being helpful at all, please ask if you need more advice.
 
  • #4
The problem is that I need to be able to integrate for any (x,y) in the domain (or on the boundary; I am still confused about this since asking the question in my first post). So, for example, I could choose N points outside the domain and then N points inside the domain to sample from, but that doesn't exactly satisfy my criteria, even for very large N.

Do you see my dilemma?
 
  • #5
I'm sorry, I don't quite see your dilemma. Let's just start from the very beginning. The way I understood the problem was as the following integral:
[tex]
\int_C \log\left(\sqrt{(x_i-x(s))^2+(y_i-y(s))^2}\right)\log\left(\sqrt{(x_j-x(s))^2+(y_j-y(s))^2}\right) ds,
[/tex]
where x(s) and y(s) go around the unit circle. Your parametrization could therefore simply be written as
x(s) = cos(s)
y(s) = sin(s),
with s in (0, 2*pi], as the domain is the unit circle. Thus the integral "reduces" to
[tex]
\int_0^{2\pi} \log\left(\sqrt{(x_i-\cos(s))^2+(y_i-\sin(s))^2}\right)\log\left(\sqrt{(x_j-\cos(s))^2+(y_j-\sin(s))^2}\right) ds
[/tex]

I'm not sure if this was already clear to you. You see, you don't really pick out the x's and y's (or (x,y)'s) as you please, but they being the integration variables, Simpson's rule tells you how to pick them out. Now that we, hopefully :), are on the same page, could you point out quite specifically where you're stuck?
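In case it helps, here is a minimal sketch of what the sampled integrand looks like for one pair (i,j); the exterior points (3,0) and (0,3) are just examples:

Code:
N  = 50;                        % number of subintervals for Simpson's rule (must be even)
s  = linspace(0, 2*pi, N+1);    % Simpson's rule dictates these sample points for you
xi = 3; yi = 0;                 % example exterior point (x_i, y_i)
xj = 0; yj = 3;                 % example exterior point (x_j, y_j)
f  = log(sqrt((xi - cos(s)).^2 + (yi - sin(s)).^2)) ...
   .*log(sqrt((xj - cos(s)).^2 + (yj - sin(s)).^2));   % integrand evaluated along the circle

Simpson's rule then just combines these N+1 values with the weights (2*pi/N)/3 * [1, 4, 2, 4, ..., 2, 4, 1].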
 
  • #6
Following your advice I have calculated the integrand for s = 0:2*pi/N:2*pi-2*pi/N, stored the results in an N^2 x N matrix, and performed Simpson's rule; hopefully I have the correct results. I call the resulting NxN matrix A.

The rest of the problem involves solving the integral of -1/4*log(ri) over the same domain as before. I have done this and call this Nx1 matrix B.

I then have to solve the system of equations Ac=B to find all the unknowns, cj.

Once I have obtained the cj's I then put them into the following equation: Phi = sum(log(rj)*cj), j=1,...,N.

I think the problem has now turned into a debugging problem, as I am getting answers consistently greater than the exact solution (which is -1/4), and as I increase N the answer gets further away from the exact solution until N=20, when the matrices become close to singular and the answer is not accurate at all. As this method is an approximation I expect the results to get closer to the exact answer as I increase N.

I'm afraid I am unable to give a very specific area where I am stuck! I understand this is a bit general, but any advice would be greatly appreciated.
 
  • #7
How do you know the exact solution? Could you state a bit more specifically how you chose the points outside the domain? Have you tried Matlab's (as I understood it, that's your platform) built-in quadrature functions, such as quad and verified the results? Have you then tried programming some much simpler methods of quadrature (compared to Simpson's rule), such as the trapezoid rule?

Could you maybe post some code? Maybe just the bits where you suppose it could go wrong, i.e. the main loops of your Simpson's rule?
 
  • #8
The equation Phi forms part of an approximation to the partial differential equation [tex]\nabla^2 u = 1[/tex] in 2D. It is simple to find the exact solution of this equation. The method is called the discrete singularity technique for analysing duct flow, formulated in a paper written by Yano et al in 1984. I have attached this paper in a previous thread: https://www.physicsforums.com/showthread.php?t=344219
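For reference, the exact solution is easy to write down on the unit disc: the radially symmetric solution of [tex]\nabla^2 u = 1[/tex] with u = 0 on r = 1 is
[tex]
u(r) = \frac{r^2 - 1}{4},
[/tex]
so with the splitting u = Phi + r^2/4 (Phi plus the parabolic term), Phi = -1/4 everywhere in the disc, which is the exact value I am comparing against.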

The N points outside the domain (the singularities) are just placed discretely on a circle that encloses the domain, and this circle can have any radius at the moment (I will look for the optimal radius later).

Here is my Simpson's code for evaluating [tex]\int \log(r_i)\log(r_j)\, ds[/tex], where S(i,1) and S(i,2) are the coordinates of the singularities.

Code:
% Distance matrix: d(i,c) is the distance from the c-th sample point on the unit
% circle to singularity i (S as described above). This is what my dist(n) returns.
d = zeros(n);
c = 1;
for theta = 0:2*pi/n:2*pi-2*pi/n
    for i = 1:n
        d(i,c) = sqrt((cos(theta) - S(i,1))^2 + (sin(theta) - S(i,2))^2);
    end
    c = c + 1;
end

function fa = integrand(n)
% Values of log(ri)*log(rj) at the n sample points, stored in an n^2-by-n matrix:
% row n*(j-1)+a, column i holds the integrand for the pair (i,j) at sample point a.

format long

d = dist(n);

fa = zeros(n^2, n);

for i = 1:n
    for j = 1:n
        for a = 1:n
            fa(n*(j-1)+a, i) = log(d(a,i))*log(d(a,j));
        end
    end
end

function A = simpA3(n)
% Simpson-type sum of the sampled integrands: A(i,j) approximates the integral of log(ri)*log(rj) ds.

format long

fa = integrand(n);

A = zeros(n);

h = 2*pi/n;

for i = 1:n
    for j = 1:n
        for t = 1:n
            c = fa((j-1)*n+t, i);
            if t == 1 || t == n
                A(i,j) = A(i,j) + 1*c;   % weight 1 at the first and last sample points
            elseif mod(t,2) == 0
                A(i,j) = A(i,j) + 2*c;   % weight 2 at even-index sample points
            else
                A(i,j) = A(i,j) + 4*c;   % weight 4 at odd-index sample points
            end
        end
        A(i,j) = A(i,j)*(h/3);
    end
end


Also I made a mistake earlier, I am not getting the exact answer for any N.
 
  • #9
That looks very interesting, thanks for the article reference!

Ok, I wanted to make sure that your approach indeed works. I therefore coded the thing from scratch, based on what you've told me, and I indeed got rather close to the exact value when I increased N. I did not have the time to read properly through the paper you kindly provided. I did not code Simpson's rule, which I suppose was your primary task, but I provide below the basic structure of my program. You should code a program simpsonsrule and replace the quad-function calls with it.

Code:
r = 5;
N = 100;
thetas = linspace(0, 2*pi, N+1);
S = r*[cos(thetas(1:end-1))', sin(thetas(1:end-1))'];
myfun1 = @(theta, i) log(sqrt((cos(theta) - S(i,1)).^2 + (sin(theta) - S(i,2)).^2));
myfun2 = @(theta, i, j) myfun1(theta, i).*myfun1(theta, j);

a = zeros(N);
for i = 1:N
   for j = i+1:N   % upper triangle only; the diagonal entries i == j are left at zero
      a(i,j) = quad(@(x) myfun2(x, i, j), 0, 2*pi);
   end
end
a = a+a';   % symmetrize (the integrand is symmetric in i and j)

d = zeros(N, 1);
for i = 1:N
   d(i) = -1/4*quad(@(x) myfun1(x, i), 0, 2*pi);
end

c = a\d;
Phi = sum(log(sqrt(S(:,1).^2 + S(:,2).^2)).*c);

Yes, my code uncommented and for Matlab code it's horrible, but at least it was quick to write :).
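As a starting point, the skeleton of such a simpsonsrule could look roughly like this (just a sketch, I haven't plugged it into the program above; m is the number of subintervals and must be even):

Code:
function I = simpsonsrule(f, a, b, m)
% Composite Simpson's rule for a vectorized function handle f on [a, b], with m subintervals.
h = (b - a)/m;
x = a + (0:m)*h;            % m+1 equally spaced sample points
w = ones(1, m+1);           % Simpson weights 1, 4, 2, 4, ..., 2, 4, 1
w(2:2:m) = 4;
w(3:2:m-1) = 2;
I = h/3*sum(w.*f(x));

You would then call e.g. a(i,j) = simpsonsrule(@(x) myfun2(x, i, j), 0, 2*pi, 200); in place of the quad call.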

Am I right in that this program solves Phi at (0,0) (since on the last line of the program above, Phi is only evaluated at (0,0))? You then add to this the parabolic term to get the Poiseuille flow velocity profile?

I'll check your code for mistakes later, when I have a bit more time.
 
  • #10
Actually Simpson's rule was not my primary task, but I decided to pursue it for simplicity as I am in the very early stages of teaching myself Matlab (well, programming in general) and didn't know many of the built-in functions.

I am very impressed by your piece of code and it has taught me a lot, for which I thank you!

Your assumption is correct. I have subsequently extended your code to cater for all (x,y) in the domain and have plotted the exact solution against this approximation (attached).

My job is to now analyse the approximate results against the exact solution, maybe by means of least squares error or by some of the tests shown in the paper. Have you any suggestions for this? Are there any useful built in functions?
 

Attachments

  • graph.doc
  • #11
Glad I could be of help.

Mean squared error, i.e. mean L2-norm, is a good way to go, I suppose, as often convergence theorems are stated using them. That's what you already suggested. You can use the built in norm-function and/or the element-wise operations, denoted with a dot, i.e. x.^2 rather than x^2.
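For instance, if the approximation and the exact solution are evaluated on the same grid (Phiapprox and Phiexact are just names I'm making up here, with NaN outside the domain):

Code:
err  = Phiapprox - Phiexact;            % pointwise error on the grid
err  = err(~isnan(err));                % keep only the points inside the domain
mse  = mean(err.^2);                    % mean squared error
rmse = norm(err)/sqrt(numel(err));      % root-mean-square error, i.e. sqrt(mse), via the built-in norm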
 
  • #12
Hi Paallikko!

I am wondering how to adapt this method to work for differently shaped domains. I was thinking of adapting it first to elliptical domains and then going further. I am assuming the adaptation comes in when calculating the cj's, and from there I am looking to change the limits of integration in the following equations

a(i,j) = quad(@(x) myfun2(x, i, j), 0, 2*pi);
d(i) = -1/4*quad(@(x) myfun1(x, i), 0, 2*pi);

I am struggling to find these limits and was wondering if this was a sensible way of going about solving this problem.

Kind regards.
 
  • #13
I hastily reviewed the relevant article, and if I understood right, you want to change the integration contour, but not much else. Now, the limits depend on the parametrization. Doing the whole thing numerically (which I think you should do if you want to generalize to arbitrary domains), I suppose you should use a Cartesian grid, at least as a first approximation, to keep things simple.

To elaborate a bit, it is myfun1 that you should change. In post #5 I "derived" the formula for circular domains (with unit radii). Now, to generalize to elliptical ones, rather than

x(s) = cos(s)
y(s) = sin(s),
with s in (0, 2*pi],

you should use e.g. the parametrization

x(s) = a cos(s)
y(s) = b sin(s),
with s in (0, 2*pi],

where a and b are the semi-axis lengths (half the major and the minor axes). This time there ought to be a "Jacobian" at play when you do the change of variables, so tread with care to get all the factors right. I hope you've taken the relevant math courses or are otherwise familiar with this stuff. Just say if you need more details.
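To make the Jacobian bit concrete: for that parametrization the arc-length factor is sqrt(a^2*sin(s)^2 + b^2*cos(s)^2), so a rough sketch of the modified functions (with S now placed on a slightly enlarged ellipse) could be

Code:
a = 2; b = 1;                        % semi-axis lengths of the elliptical boundary
jac = @(theta) sqrt(a^2*sin(theta).^2 + b^2*cos(theta).^2);   % arc-length factor |d(x,y)/dtheta|
myfun1 = @(theta, i) log(sqrt((a*cos(theta) - S(i,1)).^2 + (b*sin(theta) - S(i,2)).^2));
myfun2 = @(theta, i, j) jac(theta).*myfun1(theta, i).*myfun1(theta, j);   % jac appears exactly once

and correspondingly the integrand for d(i) becomes jac(theta).*myfun1(theta, i).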

For the more numerical implementation for general domains, I'd start by first figuring out how to numerically approximate the circumference of an arbitrary shape. Now, this is basically doing the relevant integral with integrand 1. Now, changing the integrand to log log etc. should be conceptually quite easy.

I hope I'm making sense and/or didn't make too many mistakes, it's 2 am :).
 
  • #14
My aim is to adapt the method to work for annular domains. Like before I have a ring of singularities outside of the domain but now I have also a ring of singularities on the inside of the domain. I have attached a short outline of the problem (I find it easier to typeset).

Here is the code I have applied. It is quite long but repetitive, and similar to the code for circular domains.

Code:
function Phiann = annsol(rad,radI,N) %rad=radius of outer ring of sings. radI=rad of inner ring of sings.

thetas = linspace(0, 2*pi, N+1); %generates N+1 points between 0 and 2pi
S = rad*[cos(thetas(1:end-1))', sin(thetas(1:end-1))']; %co-ords of outer sings, rad=the radius of sings
SI = radI*[cos(thetas(1:end-1))', sin(thetas(1:end-1))'];  %co-ords of inner sings
myfun1o = @(theta, i) log(sqrt((cos(theta) - S(i,1)).^2 + (sin(theta) - S(i,2)).^2));
myfun1i = @(theta, i) log(sqrt((cos(theta) - SI(i,1)).^2 + (sin(theta) - SI(i,2)).^2));
myfun2a = @(theta, i, j) myfun1o(theta, i).*myfun1o(theta, j);
myfun2b = @(theta, i, j) myfun1i(theta, i).*myfun1o(theta, j);
myfun2c = @(theta, i, j) myfun1o(theta, i).*myfun1i(theta, j);
myfun2d = @(theta, i, j) myfun1i(theta, i).*myfun1i(theta, j);

a = zeros(N);
for i = 1:N
   for j = i+1:N
      a(i,j) = quad(@(x) myfun2a(x, i, j), 0, 2*pi);
   end
end
a = a+a';

b = zeros(N);
for i = 1:N
   for j = i+1:N
      b(i,j) = quad(@(x) myfun2b(x, i, j), 0, 2*pi);
   end
end
b = b+b';

c = zeros(N);
for i = 1:N
   for j = i+1:N
      c(i,j) = quad(@(x) myfun2c(x, i, j), 0, 2*pi);
   end
end
c = c+c';

d = zeros(N);
for i = 1:N
   for j = i+1:N
      d(i,j) = quad(@(x) myfun2d(x, i, j), 0, 2*pi);
   end
end
d = d+d';

Thalf = horzcat(a,b);
Bhalf = horzcat(c,d);
L = 2*vertcat(Thalf,Bhalf);

p = zeros(N, 1);
for i = 1:N
   p(i) = -5/4*quad(@(x) myfun1o(x, i), 0, 2*pi);
end

q = zeros(N, 1);
for i = 1:N
   q(i) = -5/4*quad(@(x) myfun1i(x, i), 0, 2*pi);
end

K = vertcat(p,q);

w = L\K;  %2Nx1 matrix containing the weights.  The first N are the c_j's and second N are the d_j's.

x = 2;
y = 0;
r = sqrt(x^2 + y^2);

if r<=2 && r>=1
Phiann = sum(log(sqrt((x - SI(:,1)).^2 + (y - SI(:,2)).^2)).*w(N+1:2*N)) + sum(log(sqrt((x - S(:,1)).^2 + (y - S(:,2)).^2)).*w(1:N));
end

The problem is that the final result is not satisfying the boundary conditions. For example, I should be getting the answer -1 when r=2 and -1/4 when r=1. I am wondering whether I have the right components for the matrix L? (Note: L is constructed from 4 smaller matrices.)

I hope this is clear, if not please ask.

Thanks in advance.
 

Attachments

  • annular.pdf
  • #15
Note: for now I have taken N=M i.e. there are the same number of singularities on the inner and outer rings.
 
  • #17
It's been a while since I've solved Laplace's equations etc. Could you elaborate on what fundamental solutions are, and why their superposition is the answer? I'm a bit on the lazy side today, could you write out the matrix equations you obtain by differentiating (2) and setting it equal to zero (with a few intermediate steps, if possible)?

I changed the following lines of code a bit:
Code:
S = rad*1.05*[cos(thetas(1:end-1))', sin(thetas(1:end-1))']; %co-ords of outer sings, rad=the radius of sings
SI = radI*1.05*[cos(thetas(1:end-1))', sin(thetas(1:end-1))'];  %co-ords of inner sings
myfun1o = @(theta, i) rad*log(sqrt((rad*cos(theta) - S(i,1)).^2 + (rad*sin(theta) - S(i,2)).^2));
myfun1i = @(theta, i) radI*log(sqrt((radI*cos(theta) - SI(i,1)).^2 + (radI*sin(theta) - SI(i,2)).^2));
The 1.05 (I'm quite unsure whether the singularities should be moved by 0.95 or 1.05 in the case of the inner integral) is just an arbitrary number to take the singularities outside the domain. rad and radI are the Jacobians I discussed in my previous post. Now, I think someplace is still missing a minus sign due to the orientation of the inner integral (i.e. Eq. (2), term 2). Adding one to different places (my primary guess would be myfun1i, but I'd really have to do the differentiation of Eq. (2) to be sure), you get an answer closer to the correct one.

Append the code below to yours for some visualization. Phiexa is the analytical solution.
Code:
nn = 100;
[xx, yy] = meshgrid(linspace(-2, 2, nn), linspace(-2, 2, nn));
inds = (xx.^2 + yy.^2 >= rad^2) | (xx.^2 + yy.^2 <= radI^2);
for i = 1:nn
   for j = 1:nn
      Phiann(i, j) = sum(log(sqrt((xx(i,j) - SI(:,1)).^2 + (yy(i,j) - SI(:,2)).^2)).*w(N+1:2*N)) + sum(log(sqrt((xx(i,j) - S(:,1)).^2 + (yy(i,j) - S(:,2)).^2)).*w(1:N));
   end
end
Phiann(inds) = NaN;
surf(Phiann);

Phiexa = -1/4+(1/4-1)*log(sqrt(xx.^2+yy.^2))/log(2);
Phiexa(inds) = NaN;
hold on;
surf(Phiexa);
hold off;
 
  • #18
The two fundamental solutions are [tex]\psi_j = \log r_j[/tex] and [tex]\psi^{\mathrm{in}}_j = \log r^{\mathrm{in}}_j[/tex]. I have not derived these solutions myself but have just lifted them from papers. By the superposition principle, the sum of fundamental solutions of a linear PDE is also a solution.

Again, I have attached a short pdf and I hope it clears up a few queries you had. I'm afraid I don't understand the need for the amendment you made; is it a safeguard against choosing a radius that may be inside the domain, and aren't rad and radI just scalars?

With regards to the last part you said, I agree that the answers at the moment are promising and seem to be on the right lines.

I hope the information I have provided is helpful. As always let me know if it isn't.
 

Attachments

  • demopall.pdf
  • #19
Ok, so basically the Jacobian thing comes from the substitution
x(s) = r cos(s)
y(s) = r sin(s)

Clearly this represents a circle with radius r, but the integral
[tex]\int_0^{2\pi} ds[/tex] is not its circumference (although it's "supposed" to be).

Rather,
[tex]\int_0^{2\pi} r ds[/tex] is.
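Written out, with x(s) = r cos(s) and y(s) = r sin(s), the arc-length element is
[tex]
d\ell = \sqrt{\left(\frac{dx}{ds}\right)^2 + \left(\frac{dy}{ds}\right)^2}\, ds = r\, ds,
\qquad \oint d\ell = \int_0^{2\pi} r\, ds = 2\pi r.
[/tex]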
So, r is the square root of the determinant of the metric or the Jacobian of the substitution, or whatever you'd like to call it. If you change the geometry to say, an ellipse, this term is again going to change.

The above is why I've added r where applicable (to myfun). I assumed that the singularities ought to be outside the domain (well, they can't be on the integration contour or the integral will diverge). This is why I moved them (with the arbitrary 1.05 multiplication).

Ok, I made the matrices myself and umm... well... it's all horribly complicated, but I'm more or less confident that the algorithm now works as it's supposed to:
Code:
rad=2;
radI=1;
N=32;
thetas = linspace(0, 2*pi, N+1); %generates N+1 points between 0 and 2pi
S = rad*1.05*[cos(thetas(1:end-1))', sin(thetas(1:end-1))']; %co-ords of outer sings, rad=the radius of sings
SI = radI*.95*[cos(thetas(1:end-1))', sin(thetas(1:end-1))'];  %co-ords of inner sings

fun0 = @(theta, i) log(sqrt((rad*cos(theta) - S(i,1)).^2 + (rad*sin(theta) - S(i,2)).^2));    % point on outer boundary, outer singularity i
fun1 = @(theta, i) log(sqrt((radI*cos(theta) - S(i,1)).^2 + (radI*sin(theta) - S(i,2)).^2));  % point on inner boundary, outer singularity i
fun2 = @(theta, i) log(sqrt((rad*cos(theta) - SI(i,1)).^2 + (rad*sin(theta) - SI(i,2)).^2));  % point on outer boundary, inner singularity i
fun3 = @(theta, i) log(sqrt((radI*cos(theta) - SI(i,1)).^2 + (radI*sin(theta) - SI(i,2)).^2)); % point on inner boundary, inner singularity i

myfun1o  = @(theta, i) rad*log(sqrt((rad*cos(theta) - S(i,1)).^2 + (rad*sin(theta) - S(i,2)).^2));
myfun1ob = @(theta, i) -radI*log(sqrt((radI*cos(theta) - S(i,1)).^2 + (radI*sin(theta) - S(i,2)).^2));
myfun1i  = @(theta, i) rad*log(sqrt((rad*cos(theta) - SI(i,1)).^2 + (rad*sin(theta) - SI(i,2)).^2));
myfun1ib = @(theta, i) -radI*log(sqrt((radI*cos(theta) - SI(i,1)).^2 + (radI*sin(theta) - SI(i,2)).^2));

myfun2a = @(theta, i, j) myfun1o(theta, i).*fun0(theta, j) + myfun1ob(theta, i).*fun1(theta, j);
myfun2b = @(theta, i, j) myfun1i(theta, i).*fun0(theta, j) + myfun1ib(theta, i).*fun1(theta, j);
myfun2c = @(theta, i, j) myfun1o(theta, i).*fun2(theta, j) + myfun1ob(theta, i).*fun3(theta, j);
myfun2d = @(theta, i, j) myfun1i(theta, i).*fun2(theta, j) + myfun1ib(theta, i).*fun3(theta, j);

a = zeros(N); b = zeros(N); c = zeros(N); d = zeros(N);
for i = 1:N
   for j = i+1:N
      a(i,j) = quad(@(x) myfun2a(x, i, j), 0, 2*pi);
      b(i,j) = quad(@(x) myfun2b(x, i, j), 0, 2*pi);
      c(i,j) = quad(@(x) myfun2c(x, i, j), 0, 2*pi);
      d(i,j) = quad(@(x) myfun2d(x, i, j), 0, 2*pi);
   end
end
a = a+a'; b = b+b'; c = c+c'; d = d+d';
L = [a b; c d];

p = zeros(N, 1);
q = zeros(N, 1);
for i = 1:N
   p(i) = -rad*quad(@(x) fun0(x, i), 0, 2*pi) + 1/4*radI*quad(@(x) fun1(x, i), 0, 2*pi);
   q(i) = -rad*quad(@(x) fun2(x, i), 0, 2*pi) + 1/4*radI*quad(@(x) fun3(x, i), 0, 2*pi);
end

K = [p; q];
w = L\K;  %2Nx1 matrix containing the weights.  The first N are the c_j's and second N are the d_j's.

nn = 100;
Phiann = zeros(nn);
[xx, yy] = meshgrid(linspace(-2, 2, nn), linspace(-2, 2, nn));
inds = (xx.^2 + yy.^2 >= rad^2) | (xx.^2 + yy.^2 <= radI^2);
for i = 1:nn
   for j = 1:nn
      Phiann(i, j) = sum(log(sqrt((xx(i,j) - SI(:,1)).^2 + (yy(i,j) - SI(:,2)).^2)).*w(N+1:2*N)) ...
         + sum(log(sqrt((xx(i,j) - S(:,1)).^2 + (yy(i,j) - S(:,2)).^2)).*w(1:N));
   end
end
Phiann(inds) = NaN;
surf(Phiann);

Phiexa = -1/4+(1/4-1)*log(sqrt(xx.^2+yy.^2))/log(2);
Phiexa(inds) = NaN;
hold on;
surf(Phiexa);
hold off;

This implementation is horrible Matlab, but hopefully you can make some sense of it.

So, all the terms in the matrices should contain two integrals, one over the outer contour, one over the inner. I realize that this is quite complicated. It's again past 2 am, and I'm going to sleep, so I'll just leave this here. Feel free to ask for any details.
 
  • #20
I've had a chance to look through it and understand pretty much all of it; it looks great. One part I don't quite understand is why there are minus signs in front of the integrals over the inner boundary. I think it's to do with the orientation of the inner integral, but I can't think why it would be different from the outer integral, i.e. the functions myfun1ob and myfun1ib...

Code:
myfun1o  = @(theta, i) rad*log(sqrt((rad*cos(theta) - S(i,1)).^2 + (rad*sin(theta) - S(i,2)).^2));
myfun1ob = @(theta, i) -radI*log(sqrt((radI*cos(theta) - S(i,1)).^2 + (radI*sin(theta) - S(i,2)).^2));
myfun1i  = @(theta, i) rad*log(sqrt((rad*cos(theta) - SI(i,1)).^2 + (rad*sin(theta) - SI(i,2)).^2));
myfun1ib = @(theta, i) -radI*log(sqrt((radI*cos(theta) - SI(i,1)).^2 + (radI*sin(theta) - SI(i,2)).^2));

Also in the following part of the code for calculating p and q, should the coefficient for the first integral be -(rad^2)/4 and the coefficient for the second integral be -(radI^2)/4? The square coming from the fact that r^2 = x^2 + y^2 ...

Code:
p(i) = -rad*quad(@(x) fun0(x, i), 0, 2*pi) + 1/4*radI*quad(@(x) fun1(x, i), 0, 2*pi);
   q(i) = -rad*quad(@(x) fun2(x, i), 0, 2*pi) + 1/4*radI*quad(@(x) fun3(x, i), 0, 2*pi);

Finally, with regard to the plot, I am trying to get my head around it. Would I be correct in saying that the top of the graph denotes the inner boundary and the bottom denotes the outer boundary? Also, I am not too sure what the jagged edges represent, if anything.

Thanks!
 
  • #21
mcooper said:
Also in the following part of the code for calculating p and q, should the coefficient for the first integral be -(rad^2)/4 and the coefficient for the second integral be -(radI^2)/4? The square coming from the fact that r^2 = x^2 + y^2 ...

Code:
p(i) = -rad*quad(@(x) fun0(x, i), 0, 2*pi) + 1/4*radI*quad(@(x) fun1(x, i), 0, 2*pi);
   q(i) = -rad*quad(@(x) fun2(x, i), 0, 2*pi) + 1/4*radI*quad(@(x) fun3(x, i), 0, 2*pi);

Please ignore this stupid question. Apologies.
 
  • #22
mcooper said:
I've had a chance to look through it and understand pretty much all of it, it looks great. One part I don't quite understand is why there are 'minus' signs in front of an integral over the inner boundary. I think it's to do with the orientation of the innner integral but can't think why it would be different to the outer intrgral i.e. the functions myfun1ob and myfun1ib...
Yes, it's due to the orientation. I'm in a bit of a hurry, so I can't quite explain this thoroughly, but the domains of interest (if I understood the fundamental solutions right) are (for the radius) [1, inf] and [0, 2]. So, you select an outer normal and a tangent to the parametrization, and we fix e.g. the counter-clockwise direction of integration as positive. The sign of the cross product of these two is different because in the case [1, inf] (inner integral) the domain lies to the right of the contour, and for [0, 2], to the left. I'd guess material on e.g. complex analysis has these things explained.

mcooper said:
Finally, with regard to the plot, I am trying to get my head around it. Would I be correct in saying that the top of the graph denotes the inner boundary and the bottom denotes the outer boundary? Also, I am not too sure what the jagged edges represent, if anything.
The graph is just a plot of the potential. It goes from -1/4 to -1, as the boundary conditions required it to.

The jagged edges are just an error from the discretization in the plotting. The plotting (as it's done currently; there are better ways to do it) basically just means calculating the value of the function at points distributed on a square grid. Obviously, if the squares are big, approximating a circle is difficult (try to draw a circle with pixels and you'll see that it really won't look like a circle unless the pixels are really small). You could therefore increase the accuracy of the plot by increasing the variable nn.

mcooper said:
Thanks!
No problem.
 
  • #23
A quick question with regards to plotting the results. I am trying to plot the results for various N in a 2D graph but am struggling slightly. I tried this for the last few lines of the code, but it doesn't seem to be quite right, and I wondered if you could have a look...

Code:
Phiann = zeros(N-1,1);
x = 0;
i=1;
for y = 1:1/(N-1):2;
r = sqrt(x^2 + y^2);

if r<=2 && r>=1
Phiann(i) = sum(log(sqrt((x - SI(:,1)).^2 + (y - SI(:,2)).^2)).*w(N+1:2*N)) ...
    + sum(log(sqrt((x - S(:,1)).^2 + (y - S(:,2)).^2)).*w(1:N)) + r^2/4;
i=i+1;
end
end

plot(1:100:2,Phiann)
 
  • #24
Use plot(1:1/(N-1):2, Phiann) rather than plot(1:100:2, Phiann).
 
  • #25
Oh yes, thanks. I was wondering, with this method would it be possible to have different numbers of singularities on the inner and outer rings? I am guessing a problem would arise when calculating the weights, as the system matrix would then not be square and so may not have an inverse...
 

What is Simpson's Rule in Matlab?

Simpson's Rule is a numerical method used in Matlab to approximate the value of a definite integral. It is based on approximating the curve of a function with quadratic polynomials instead of straight lines, resulting in a more accurate estimation.

How does Simpson's Rule work in Matlab?

Simpson's Rule works by dividing the interval of integration into subintervals and approximating the curve of the function within each subinterval using quadratic polynomials. The estimated area under the curve is then calculated by adding the areas of these quadratic polynomials.
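Concretely, with step size h = (b - a)/n and equally spaced nodes x_k = a + k*h (n even), the composite Simpson's rule reads
[tex]
\int_a^b f(x)\,dx \approx \frac{h}{3}\Big[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\Big].
[/tex]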

What are the advantages of using Simpson's Rule in Matlab?

For smooth integrands, Simpson's Rule is more accurate than simpler methods such as the trapezoid or midpoint rules for the same number of subintervals. As a result, it typically needs fewer function evaluations to reach a given accuracy, making it more efficient.

What are the limitations of Simpson's Rule in Matlab?

Simpson's Rule may not provide accurate results for functions with sharp corners or discontinuities. It also requires the function to be evaluated at equally spaced points, which may not always be possible.

How do I implement Simpson's Rule in Matlab?

To implement Simpson's Rule in Matlab, you will need to define your function and specify the interval of integration. Then, you can use a built-in adaptive quadrature function such as quad (which is based on adaptive Simpson quadrature), or write your own code using the formula for Simpson's Rule. Make sure to check for any limitations and adjust the number of subintervals for better accuracy if needed.
