Numerical analysis questions

  • Thread starter happyg1
  • Start date
  • #1
Here are my questions:
"Evaluating the summation as i goes from 1 to n of a_i in floating-point arithmetic may lead to an arbitrarily large error. If, however, all summands a_i are of the same sign, then the relative error is bounded. Derive a crude bound for this error, disregarding terms of higher order."

Is this a Taylor series expansion question? I don't know where to start.

The next one is:
"Show how to evaluate the following expression in a numerically stable fashion:"
[the expression from the original post did not survive]
Again, I don't know exactly where to start. Do I rearrange the formula? Do I calculate the relative error?

Any pointers will be greatly appreciated.

Answers and Replies

  • #2
Homework Helper
First Question, which is very general:

Suppose you use an 8-digit calculator to add terms.
That means the ninth digit is totally unknown (ignored),
so the value entered could be off by .000000005 .
Keeping the decimal in the same place for each term,
after adding n of these terms, the total error might be
as large as n times this error (more likely about sqrt(n) times it).

If all the terms are the same sign, the result of adding n
of them has to be at least |sum| > .1 + n(.0000001) ...
so, what's the relative error in this worst-case scenario?

If some of the terms are opposite signs, the total error
might be just as large, but the actual result of the sum
could be as small as zero! What would the relative error be then?
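The argument above can be made concrete with a toy model. The sketch below (my own illustration, not from the thread) simulates the 8-digit calculator by rounding every partial sum to 8 significant digits; the particular values 1e8 and 1.0 are chosen so each small term falls below the calculator's resolution, making the effect deterministic.

```python
import math

def round_sig(x, t=8):
    """Round x to t significant decimal digits -- a crude model of an
    8-digit calculator's arithmetic."""
    if x == 0.0:
        return 0.0
    return round(x, t - 1 - int(math.floor(math.log10(abs(x)))))

def chopped_sum(values, t=8):
    """Accumulate left to right, rounding every partial sum to t digits."""
    s = 0.0
    for v in values:
        s = round_sig(s + v, t)
    return s

# Same sign: each +1.0 falls below the 8-digit resolution at 1e8 and is
# dropped, but the sum stays large, so the *relative* error stays tiny.
same_sign = [1.0e8] + [1.0] * 50
rel_same = abs(chopped_sum(same_sign) - math.fsum(same_sign)) / math.fsum(same_sign)

# Mixed signs: the same absolute error, but the exact sum collapses to 50,
# so the relative error is enormous (the computed answer here is 0.0).
mixed = [1.0e8] + [1.0] * 50 + [-1.0e8]
rel_mixed = abs(chopped_sum(mixed) - math.fsum(mixed)) / math.fsum(mixed)

print(rel_same, rel_mixed)
```

The same absolute error in both runs gives a relative error near 5e-7 for the same-sign case but 100% for the mixed-sign case, which is exactly the contrast the exercise asks you to bound.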

Second Question, which is a specific case:

Start by writing the first few terms of a series expansion
for each part. Re-arrange to get a single expansion;
ie, get all x^n terms together. Do they cancel?
Numerically stable means you should cancel symbolically,
as much as possible, before computing numerically.
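Since the expression from the original post did not survive, here is a classic stand-in showing the same technique of cancelling symbolically before computing: (1 - cos x)/x^2 for small x. The naive form cancels catastrophically, while the half-angle identity 1 - cos x = 2 sin^2(x/2) performs the cancellation on paper first.

```python
import math

x = 1e-8  # small argument where cancellation bites

# Naive form: 1 - cos(x) subtracts two nearly equal numbers,
# wiping out essentially all significant digits.
naive = (1.0 - math.cos(x)) / x**2

# Rearranged form: the identity 1 - cos(x) = 2*sin(x/2)**2 has already
# done the cancellation symbolically, so no digits are lost.
stable = 2.0 * math.sin(x / 2.0)**2 / x**2

print(naive, stable)  # the true value is 1/2 - x^2/24 + ... , about 0.5
```

The stable form returns 0.5 to full precision; the naive form returns garbage (typically 0.0), even though the two are algebraically identical.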
  • #3
On that first one, I tried to get a formula for the error by writing the computed sum as fl(a + b + c) = ((a + b)(1 + eps1) + c)(1 + eps2) and expanding it out for n elements in the sum. When I cancelled the terms with two errors multiplied together, I got a strange-looking sum. I then used the formula for relative error, (approximate value - true value)/(true value), so I wind up with a messy-looking formula. I understand the idea of the error adding up as the terms are summed; I'm just having a hard time with the application. After I get this formula I am unsure of where to go, and I don't know if it's even a correct approach.
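For what it's worth, the model described above can be sanity-checked numerically: dropping the eps1*eps2 term, the relative error of ((a + b)(1 + eps1) + c)(1 + eps2) is ((a+b)eps1 + (a+b+c)eps2)/(a+b+c), and with same-sign terms |a+b| <= |a+b+c|, which is what produces a crude bound like (n-1)*eps for n terms. A quick check with made-up eps values (my own sketch):

```python
# First-order check of fl(a+b+c) = ((a+b)(1+e1) + c)(1+e2):
# dropping the e1*e2 term, the absolute error is (a+b)*e1 + (a+b+c)*e2,
# so for same-sign terms the relative error is bounded by |e1| + |e2|.
a, b, c = 1.0, 2.0, 3.0
e1, e2 = 3e-9, -2e-9   # tiny stand-ins for rounding errors

computed = ((a + b) * (1 + e1) + c) * (1 + e2)
exact = a + b + c

actual_rel = (computed - exact) / exact
first_order = ((a + b) * e1 + exact * e2) / exact

print(actual_rel, first_order)
```

The two agree to within the dropped second-order term, which is why "disregarding terms of higher order" is a safe simplification in the derivation.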

That second one: I can see why it's not numerically stable, because a relatively small number is being subtracted from a relatively large number, so the tiny number can get "lost". I got a common denominator and found a better formula. My professor says I now need to calculate the relative error of the new formula and investigate it, and that's where I'm lost now... I plan to sit down with it and try again this morning.
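The thread's actual expression isn't shown, so here is a hypothetical stand-in with the same pattern: f(x) = 1/x - 1/(x+1) for large x, where taking the common denominator removes the subtraction entirely.

```python
# Hypothetical stand-in (the thread's actual expression isn't shown):
# f(x) = 1/x - 1/(x+1) loses digits for large x, since the two terms
# agree in almost all of their leading digits.
x = 1e8

unstable = 1.0 / x - 1.0 / (x + 1.0)

# Common denominator removes the subtraction entirely:
# 1/x - 1/(x+1) = 1/(x*(x+1))
stable = 1.0 / (x * (x + 1.0))

# Treat the stable form as the reference; at x = 1e8 the subtraction
# has wiped out roughly half of the 16 available digits.
rel_err = abs(unstable - stable) / stable
print(unstable, stable, rel_err)
```

Investigating the relative error of the rearranged formula then amounts to noting that it involves only multiplications and one division, each of which contributes at most one rounding error and no cancellation.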

I haven't seen a single problem worked out. I understand the derivations of the formulas and they make sense to me, but I can't seem to put it all together.

I have an additional problem that is asking to estimate the relative error. It reads like this:
Suppose a computer program is available which yields values of arcsin y in floating-point representation, with t decimal mantissa places and for |y| <= 1, subject to a relative error eps with |eps| <= 10^(-t). In view of the relation

arctan x = arcsin( x / sqrt(1 + x^2) ),

this program could also be used to evaluate arctan x. Determine for which values of x this procedure is numerically stable by estimating the relative error.

So I have tried to calculate the relative error by taking the derivative and multiplying it by x/arctan x, but the result makes no sense to me. What do I do with it now?
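That x/arctan x factor is, if I'm reading the setup right, exactly the amplification you're after: a relative error in y = x/sqrt(1+x^2) is magnified in arcsin y by the condition number y*arcsin'(y)/arcsin(y) = y/(sqrt(1-y^2)*arcsin y), which simplifies algebraically to x/arctan x. A small numerical check of that simplification (my own sketch):

```python
import math

def amplification(x):
    """Factor by which a relative error in y = x/sqrt(1+x^2) is
    amplified in arcsin(y): the condition number y*f'(y)/f(y) with
    f = arcsin. Algebraically this simplifies to x/arctan(x)."""
    y = x / math.sqrt(1.0 + x * x)
    return y / (math.sqrt(1.0 - y * y) * math.asin(y))

# Compare the condition number against the simplified form x/arctan(x).
for x in [0.1, 1.0, 10.0, 1000.0]:
    print(x, amplification(x), x / math.atan(x))
```

The factor is about 1 for small |x| but grows roughly like 2|x|/pi, so the procedure is stable for moderate x and becomes unstable as |x| grows, where y is pushed toward 1 and arcsin is ill-conditioned.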

Thanks for any input,