yourgoldteeth
Hi everyone. It's my first time posting here.
I have a Mathematica notebook that defines a four-parameter probability density function (PDF). I define a likelihood function, import some data, and run a Nelder-Mead optimization of the likelihood function over the data.
My likelihood function uses NIntegrate on the PDF.
It's all fine and dandy until one of my parameters gets big (> 5). Then the small data points (< 10^-5) give the following error:
NIntegrate::inumri: The integrand 0.0366328 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>]))+0.111281 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>]))+0.216745 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>]))+0.270682 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>]))+0.216745 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>]))+0.111281 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>]))+0.0366328 5.75^UnitStep[0.05-p] (0.1+0.9 E^(InverseErf[<<1>>]^2-1/2 Power[<<2>>])) has evaluated to Overflow, Indeterminate, or Infinity for all sampling points in the region with boundaries {{2.15155*10^-22,9.23051*10^-18}}. >>
That's with WorkingPrecision->MachinePrecision. Here's the thing. My $MachinePrecision is 15.9546. If I specify my WorkingPrecision to be 10, it works more slowly, but at a rate I can deal with.
If I specify my WorkingPrecision to be 20, it works VERY slowly. And it gives almost the same answer, but with more digits.
Why is MachinePrecision giving me such problems? Also, why is a lower-precision calculation taking more time than a machine-precision one? And if I decide to set a lower precision, say 10, will I even know whether it is affecting my optimization result? Thanks
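For anyone reading along, the failure mode I seem to be hitting can be sketched in a few lines. This is Python rather than Mathematica, and the magnitudes are hypothetical stand-ins for my PDF values, not the actual four-parameter density; it just shows fixed-range machine doubles underflowing where software arbitrary-precision arithmetic does not:

```python
from decimal import Decimal, getcontext

# Machine doubles have a fixed exponent range (roughly 1e-308 to 1e308).
# A likelihood built as a product of many tiny PDF values underflows to
# exactly 0.0 -- the same kind of failure behind NIntegrate's "evaluated
# to Overflow, Indeterminate, or Infinity" message.
likelihood = 1.0
for _ in range(10):
    likelihood *= 1e-60          # ten hypothetical tiny per-point densities
print(likelihood)                # 0.0 -- silent underflow

# Software arbitrary-precision arithmetic (what WorkingPrecision -> 10
# switches Mathematica to) carries a far larger exponent range, so the
# same product survives -- at the cost of much slower arithmetic.
getcontext().prec = 10           # 10 significant digits, like WorkingPrecision -> 10
likelihood_hp = Decimal(1)
for _ in range(10):
    likelihood_hp *= Decimal("1e-60")
print(likelihood_hp)             # 1E-600
```

This is also consistent with WorkingPrecision -> 10 being slower than MachinePrecision: the digit count is lower, but every operation goes through software bignum arithmetic instead of the hardware floating-point unit.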
Craig