# Mathematica Help

Hello,

I have the following Mathematica code warnings:

Code:
NIntegrate::slwcon: Numerical integration converging too slowly; \
suspect one of the following: singularity, value of the integration \
is 0, highly oscillatory integrand, or WorkingPrecision too small. >>

NIntegrate::inumri: The integrand 1-0.101224 <<1>>^2 \
(Hypergeometric2F1[3.,1.5,2.5,(0.+Times[<<2>>])/Plus[<<2>>]]/(0.08+\
Power[<<2>>] Power[<<2>>])^3-(16 \
Hypergeometric2F1[<<1>>])/(<<20>>+<<1>>)^3+<<1>>/<<1>>+<<9>>+(784 \
<<1>>)/(<<1>><<1>>^3+<<37>>) has evaluated to Overflow, \
Indeterminate, or Infinity for all sampling points in the region with \
boundaries {{7.36665*10^-9,7.96783*10^-9}}.

I don't know what causes them, even though I am sure that my code is correct. These warnings make the results wrong. How can I resolve these issues?

Dale
Mentor
2020 Award
Hi S David,

The first thing to do is to look at your integrand (e.g. plot it) over the range of integration, especially over the range from 7.3E-9 to 8.0E-9. Is it oscillatory, undefined, numerically unstable, or otherwise poorly behaved?

The second thing to do is to try different settings for PrecisionGoal and AccuracyGoal. Unfortunately, I don't have any advice about which settings to use. Just try a few and see if they make any difference.

The last thing to try is to increase the WorkingPrecision. To do this you may need to go through your function and make sure that any numerical constants are written as high-precision numbers, e.g. 1.5`50 instead of just 1.5.
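For example (f, a, and b here are placeholders for your actual integrand and integration limits):

```
(* inspect the integrand over the reported trouble region *)
Plot[f[y], {y, 7.3*^-9, 8.0*^-9}, PlotRange -> All]

(* then experiment with the goals, e.g. *)
NIntegrate[f[y], {y, a, b}, PrecisionGoal -> 8, AccuracyGoal -> 12]
```

If the plot shows rapid oscillation or a spike, that points at which of the suggested causes you are dealing with.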

Hi DaleSpam,

How are you? Thank you for replying.

I used the second option, increasing the AccuracyGoal, and it works now. But I also had an issue with negative numbers. So I increased the WorkingPrecision, and the negative numbers disappeared. But when I change some parameters, the negative signs appear again, where the results should be between 0 and 1.

Another thing, when I increased the WorkingPrecision, it gives me the following warning:

Code:
NIntegrate::precw: The precision of the argument function (1-1012.24 \
<<1>>^2 (Hypergeometric2F1[3.,1.5,2.5,(0.+Times[<<2>>])/Plus[<<2>>]]/(\
8.+Power[<<2>>] Power[<<2>>])^3-(16 \
Hypergeometric2F1[<<1>>])/(<<18>>+<<1>>)^3+<<10>>+(784 \
<<1>>)/(<<1>><<1>>^3+<<35>>)) is less than WorkingPrecision (50.). >>

I don't know how to change the constants to the required precision as you suggested, since some of my values are in decibels and others are in decimal. How can I do that?

Best regards

Dale
Mentor
2020 Award
It is a little bit of a pain, but here is the basic approach:

3 is the integer 3; it has infinite precision.
3. is the real number 3; it has machine precision and is represented as a standard floating-point number.
3.`50 is the real number 3; it has 50 digits of precision and is represented as an arbitrary-precision floating-point number.

Note that 3.00001`50 - 3.00000`50 will have less than 50 digits of precision, so it may be necessary for you to use a higher precision in your function definition in order for it to obtain results to 50 digits of precision.
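You can check this behavior directly with Precision (the comments show the kind of output to expect):

```
Precision[3]       (* Infinity: exact integer *)
Precision[3.]      (* MachinePrecision *)
Precision[3.`50]   (* 50. *)

(* cancellation between nearly equal numbers eats about five digits *)
Precision[3.00001`50 - 3.00000`50]  (* roughly 44.5 *)
```

The last line is the point: each subtraction of nearly equal quantities in your function loses digits, so the input precision has to exceed the precision you want out.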

I tried to apply this, but I have a lot of constants in my program. I think I will skip this, as long as it doesn't affect the results.

But any idea about the negative results, please? Sometimes they occur at the $$10^{-10}$$ level or below, where I can ignore them, but other times they occur in a critical region where I need those results, i.e. I cannot just ignore them.

Hepth
Gold Member
Hmm. Do you have a simple example code snippet to work with?

Dale
Mentor
2020 Award
This is just a guess, but you are confident in your code, so that probably only leaves numerical errors. I am guessing that your function is very ill-conditioned, so that very small rounding errors get blown up to the point that you get negative numbers where that should not be possible.

Yes, I am sure of my code. So, what should I do? I tried to change MaxErrorIncreases, but the problem is the same.

Regards

Dale
Mentor
2020 Award
There are two basic approaches. One is to reduce the size of the errors until they are so small that, even after they get amplified, you are still OK. For that you would need to go through and set all of your numerical constants to some really high precision. However, if you have some point in your code that blows up the errors infinitely, then there is no precision that would work. If that is the case, then you would need to rewrite your code in a way that is algebraically the same but numerically better conditioned.
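As an illustration of that kind of rewrite (a standard textbook example, not your integrand): 1 - Cos[x] cancels catastrophically near x = 0 at machine precision, while the algebraically identical form 2 Sin[x/2]^2 does not.

```
f1[x_] := 1 - Cos[x]     (* cancels near x = 0 *)
f2[x_] := 2 Sin[x/2]^2   (* same function, well conditioned *)

f1[1.*^-9]  (* 0. -- every significant digit lost *)
f2[1.*^-9]  (* 5.*^-19 -- the correct value *)
```

If your expression has a subtraction like this somewhere, no amount of added precision at the inputs will fix it as cleanly as the algebraic rewrite does.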

How can I know that? Anyway, this is my code:
Code:
MA = 3;
NA = 3;
MB = 3;
NB = 3;
qA = MA NA;
QA = MA NA;
qB = MB NB;
QB = MB NB;
QC = MA NB;
qC = MA NB;
M = 8;
g = Sin[Pi/M]^2;
For[SNRdB = 0, SNRdB <= 40, SNRdB = SNRdB + 2,
SNR = 10^(SNRdB/10);
gA = 0.5*SNR;
gB = 0.5*SNR;
gC = 0.5 SNR;
p = gA*gB;
 M1 = (1 + (64*qA*qB*s)/(3*p)*Binomial[QA, qA]*Binomial[QB, qB]*
       Sum[(-1)^(i + j)*Binomial[qA - 1, i]*Binomial[qB - 1, j]*
         N[Hypergeometric2F1[3, 3/2, 5/2,
            ((gB*(QA - qA + i + 1) + gA*(QB - qB + j + 1))/p - s -
               2*Sqrt[((QA - qA + i + 1)*(QB - qB + j + 1))/p])/
             ((gB*(QA - qA + i + 1) + gA*(QB - qB + j + 1))/p - s +
               2*Sqrt[((QA - qA + i + 1)*(QB - qB + j + 1))/p])]]/
          ((gB*(QA - qA + i + 1) + gA*(QB - qB + j + 1))/p - s +
             2*Sqrt[((QA - qA + i + 1)*(QB - qB + j + 1))/p])^3,
        {i, 0, qA - 1}, {j, 0, qB - 1}]) /. s -> -g/Sin[y]^2;
 M2 = (Binomial[QC, qC]*
       Sum[(-1)^v*Binomial[qC - 1, v]*qC/(QC + v + 1 - qC - gC*s),
        {v, 0, qC - 1}]) /. s -> -g/Sin[y]^2;
 Ps = 1/Pi*
    NIntegrate[M1 M2, {y, 0, ((M - 1)*Pi)/M}, PrecisionGoal -> 10,
     AccuracyGoal -> 8, MaxRecursion -> 40, WorkingPrecision -> 20,
     Method -> {"GlobalAdaptive", "MaxErrorIncreases" -> 10000}];
 Print[Ps]]

How can I change all the numerical constants to a precision of 20? I suspect it is a precision issue, because when I change the WorkingPrecision the results change. I tried to change the precision of all the constants, but it keeps warning me that the precision of something is less than the WorkingPrecision of 20.

Dale
Mentor
2020 Award
Your first set of constants (MA through g) are all exact numbers so they already have infinite precision.

Inside your For loop the constants gA, gB, and gC are all numerical with machine precision. Change those to gA = SNR/2 etc. to make them exact also.

Then the next spot is N[Hypergeometric2F1[...]]. This is also a machine-precision number. Change it to N[Hypergeometric2F1[...],30] to give it 30 digits of precision.

Then the only other spot is NIntegrate, where you have already set the WorkingPrecision, PrecisionGoal, and AccuracyGoal options.

With N[...,30] and with the gA etc. defined exactly I got a list with no negative numbers.
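In the code above the changes amount to something like this (only the affected lines shown; z stands in for the full fourth argument of the hypergeometric function):

```
gA = SNR/2;   (* exact rational, instead of machine-precision 0.5*SNR *)
gB = SNR/2;
gC = SNR/2;

(* inside M1, request 30 digits explicitly: *)
N[Hypergeometric2F1[3, 3/2, 5/2, z], 30]
```

Everything feeding the integrand then stays exact or at 30 digits until NIntegrate samples it.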

Yes, it works now perfectly.

I would like to deeply thank you DaleSpam, because you really rescued me several times during my thesis work.

I would also like to ask: how do you know all this stuff about Mathematica? I want to learn more about it, since it is an amazing program. I know practice has a lot to do with it, but do you recommend a certain book, for example?

Thanks again.

Dale
Mentor
2020 Award
You are very welcome. I am glad to have helped and glad to keep on helping as needed.

As far as how I know so much about Mathematica, unfortunately I do not have a book to recommend. I have basically only used the online help, which is really good IMO. However, probably the main thing is experience. Mathematica has been one of my primary tools for about 13 or 14 years now (since ver 2.2).

Ok, I see.

Thank you again.