Your thoughts on numerical analysis teaching

  • #1

nrqed

Hello all,

I may get a contract to teach numerical analysis. I did quite a lot of numerical work during my PhD, but that was a while ago. Now, when I look at most books on the topic, I get the feeling that a lot of it is outdated, and that a lot of what I knew is outdated as well, because of the possibility of doing arbitrary-precision computations.

What I mean is that most examples used to illustrate the danger of round-off errors are no longer an issue if one uses "arbitrary precision" (I know, that's not really honest terminology). I can simply keep a lot of precision in Mathematica and all these examples become completely well behaved (and one can also use arbitrary precision in Python).
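
For concreteness, here is a minimal sketch of the kind of textbook example I have in mind (I'm using mpmath here purely as an illustration; Python's decimal module or Mathematica's extra digits behave the same way): the naive formula loses everything in double precision, while either a rewritten formula or 50 digits of working precision gives the right answer.

```python
from math import sqrt
import mpmath

x = 1e8
# Double precision: x**2 + 1 rounds back to x**2, so the subtraction
# cancels catastrophically and returns 0.0.
naive = sqrt(x**2 + 1) - x

# The "better algorithm": the same quantity rewritten to avoid the subtraction.
stable = 1.0 / (sqrt(x**2 + 1) + x)           # ~5e-9, fine in float64

# The "just add digits" route: 50 significant decimal digits in mpmath.
mpmath.mp.dps = 50
brute_force = mpmath.sqrt(mpmath.mpf(x)**2 + 1) - mpmath.mpf(x)

print(naive, stable, brute_force)             # 0.0  5e-09  ~5.0e-09
```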

Now I realize that, for large-scale computations, using a huge amount of precision may slow things down a lot, and that a truly catastrophic cancellation may still not be saved by arbitrary precision, but I feel uneasy because all the books I have looked at still seem to work within the old paradigm of low-precision calculations. It's not clear to me how advantageous it is to teach more sophisticated algorithms for avoiding round-off errors when one can simply force the software to increase the precision of the numbers dramatically.
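
To put a concrete (toy) example behind that caveat, and it is my own example rather than one from the books: brute-force precision only rescues a bad cancellation if you have already estimated how much cancellation to expect, which is itself classical error analysis, whereas a rewritten algorithm needs no such estimate.

```python
# exp(-30) from its Taylor series: the alternating terms grow to ~1e12 before
# the sum collapses to ~1e-13, so double precision loses roughly 25 digits.
import math
import mpmath

def exp_series(x, terms=200):
    """Naive Taylor series for exp(x); fine for x >= 0, disastrous for x << 0."""
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k
        s += term
    return s

x = -30.0
print(exp_series(x))           # garbage: wrong magnitude, possibly wrong sign
print(1.0 / exp_series(-x))    # better algorithm: sum at +30, then invert
print(math.exp(x))             # reference value, ~9.36e-14

# Brute force also works, *if* you know you need ~30 extra digits for this x.
mpmath.mp.dps = 50
hp = sum(mpmath.mpf(x)**k / mpmath.factorial(k) for k in range(200))
print(hp)                      # agrees with math.exp(x)
```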

So my questions are:

a) Does anyone know a textbook that is mindful of this fact and teaches a "modern" approach to numerical analysis? One that covers the pros and cons of increasing the precision versus using better algorithms for different types of applications (interpolation, integration, differentiation, solution of nonlinear equations)?

b) Any advice from anyone who is teaching that material/taking classes in that topic/using numerical analysis in their work?

Thank you in advance!
 

Answers and Replies

  • #2
There’s a website https://lectures.quantecon.org/jl/index.html with tutorials on numerical Python and Julia that might get you more up to speed on today’s focus.

A lot of numerical work is also moving toward machine learning and deep learning, where numerical methods are used to fine-tune the learning algorithms.
 
  • #3
Thank you for the link. I must mention that I don't have control over the topics I must cover, which are: interpolation, numerical integration and differentiation, solution of nonlinear equations, and ordinary DEs. So I cannot get into machine learning, neural nets, or that type of thing. I am interested in a modern viewpoint on these elementary applications.


Thank you!
 
  • #4
It's been a long while since I had numerical analysis as well, but I'm not sure that the basics are all that outdated. As you note, large-scale computations can be slowed down if super high precision is being used, so there's a tradeoff between calculation time and precision. Having higher precision just moves the uncertainty out several decimal places, but one still needs to be concerned about how accurate the result is. The basic ideas of numerical analysis, including root finding, calculating derivatives, and calculating integrals by several methods, are still important, even if the technology has changed from computers with no built-in math coprocessor to ones capable of working with 512-bit numbers.
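
A small sketch of that point (mine, not from any particular text): with a forward difference for f'(x), the error has a truncation part that grows with h and a round-off part that grows like eps/h, so there is a best achievable accuracy for any given precision. Extra precision shrinks eps and moves that floor; it doesn't remove the need to think about it.

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)                         # derivative of sin at x
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:8.0e}   error = {err:.2e}")
# The error improves until h ~ 1e-8 (about sqrt(machine epsilon)) and then
# gets worse again as round-off in f(x+h) - f(x) takes over.
```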
 
  • #5
I agree, but I would like to have some information about the tradeoff between using more sig figs and improving the algorithm. It seems to me that the books (at least the ones I looked at) are still based on the paradigm of 32-bit precision. I was wondering if there is some book with a more modern approach that takes into account the pros and cons of using higher precision. I have not found one so far.
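
Something like the following toy comparison is what I have in mind (my own sketch, standard library only): a smarter summation algorithm at ordinary 64-bit precision recovers the same accuracy as brute-force extra digits, at a fraction of the cost.

```python
import math
from decimal import Decimal, getcontext

def kahan_sum(values):
    """Compensated (Kahan) summation: keeps the low-order bits a naive sum drops."""
    total, c = 0.0, 0.0
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

values = [0.1] * 1_000_000    # exact sum of these doubles is ~100000.0000000000555

naive = sum(values)           # plain 64-bit loop
smart = kahan_sum(values)     # better algorithm, same 64-bit arithmetic
exact = math.fsum(values)     # stdlib correctly rounded summation

getcontext().prec = 40        # brute force: 40 decimal digits
brute = float(sum(Decimal(v) for v in values))

print(naive)                  # off in the 6th decimal place (e.g. 100000.000001...)
print(smart, exact, brute)    # essentially identical, accurate to full double precision
```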

Thanks for your input!
 
  • #6
Higher precision comes at a great cost because hardware can’t handle it directly. Software must be used to marshal the number data to the CPU for crunching, and software solutions are always slower.

There may be some chips currently in development to handle arbitrary precision, but there must also be a market for them to thrive, and I don’t think we’re there yet.
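
A rough way to see that gap (a sketch that assumes CPython's decimal module is representative of software arbitrary precision; the exact numbers will vary by machine):

```python
import time
from decimal import Decimal, getcontext

N = 1_000_000
floats = [i * 1e-3 for i in range(N)]

t0 = time.perf_counter()
s1 = sum(floats)                        # hardware 64-bit additions
t1 = time.perf_counter()

getcontext().prec = 50                  # 50-digit software arithmetic
decimals = [Decimal(x) for x in floats]
t2 = time.perf_counter()
s2 = sum(decimals)                      # same sum, done in software
t3 = time.perf_counter()

print(f"float64: {t1 - t0:.3f} s   Decimal(50 digits): {t3 - t2:.3f} s")
# Expect the Decimal loop to be slower by an order of magnitude or more, even
# though CPython's decimal module is implemented in C; pure-Python bignum
# libraries are slower still.
```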
 
  • #7
Point well taken, thank you for your input!
 
  • #8
Here’s a more detailed write-up on arbitrary-precision math:

https://en.m.wikipedia.org/wiki/Arbitrary-precision_arithmetic

There used to be an extended instruction set for the Honeywell 6000 computer that supported packed decimal arithmetic. I’m not sure whether this is the scheme used for bignum, but it’s quite likely.

https://en.m.wikipedia.org/wiki/Binary-coded_decimal#Packed_BCD

It worked well in COBOL programs, where the math was primarily addition, subtraction, multiplication, and division. Packed decimal was easy to convert to a printable number by a simple additive offset to get each digit’s character code.
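
A toy sketch of that idea (my own illustration; real COBOL-style packed decimal also reserves a nibble for the sign, which I'm ignoring here): two decimal digits per byte, and printing is just "digit + 0x30" to get the ASCII character code.

```python
def pack_bcd(digits):
    """Pack a string of decimal digits two per byte (pad with a leading zero)."""
    if len(digits) % 2:
        digits = "0" + digits
    out = bytearray()
    for i in range(0, len(digits), 2):
        hi, lo = int(digits[i]), int(digits[i + 1])
        out.append((hi << 4) | lo)              # high nibble, low nibble
    return bytes(out)

def unpack_bcd(packed):
    """Recover printable characters via the 0x30 ('0') additive offset."""
    chars = []
    for byte in packed:
        chars.append(chr((byte >> 4) + 0x30))   # high digit + '0'
        chars.append(chr((byte & 0x0F) + 0x30)) # low digit + '0'
    return "".join(chars)

packed = pack_bcd("31415926")
print(packed.hex())        # '31415926': every nibble is one decimal digit
print(unpack_bcd(packed))  # '31415926'
```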
 
  • #9
It seems to me that the books (at least the ones I looked at) are still based on the paradigm of 32-bit precision.
I'm not in the market for such books, but the 32-bit limitation seems quite dated. Processors from Intel and AMD have had native support for 64- and 80-bit floating point numbers for many years.
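
A quick way to see those widths from Python, assuming NumPy is available (the long-double type is platform dependent; on x86-64 Linux it is the 80-bit extended format stored in 128 bits, while on Windows it is just an alias for 64-bit):

```python
import numpy as np

for t in (np.float32, np.float64, np.longdouble):
    info = np.finfo(t)
    digits = int(-np.log10(float(info.eps)))    # rough count of decimal digits
    print(f"{t.__name__:>12}: {info.bits:>3} bits, eps = {info.eps}, ~{digits} digits")
# On x86-64 Linux this reports 32, 64, and 128 bits, with the 128-bit storage
# holding the 80-bit extended-precision format (about 18 decimal digits).
```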

If I were teaching that class, I would look at numerical analysis books with recent publishing dates. Possibly they would discuss the higher precisions available on more modern processors.
 
