
Logarithmic significant figures

  1. Dec 29, 2014 #1
    In most areas of science, being able to solve and report problems to the correct number of significant figures is necessary, but I'm having trouble finding a complete set of rules for significant figures when working with non-ideal power equations. I mean equations that require the use of logarithms: even the simplest formula contains an expression like x**y, where x is a measured data point with some number of significant figures and y is an empirically determined exponent with some other number of significant figures. (To give an idea of one place this comes up, consider semiconductor physics, where y might represent a power term for some electronic property of silicon; it could be exactly 2 for pure silicon, or perhaps 2.03 to three s.f. when impure silicon is used. It also shows up in some chemistry problems I have seen.)

    In most undergraduate textbooks for chemistry and physics, I'm able to find rules to handle the case where the exponent is exactly 2 -- but I'm not able to find rules for the case where the exponent also has significant figures.

    Only some of the college websites I have visited have rules for logarithms and exponentiation at all, and those that discuss Napierian logarithms (log base e) generally include a disclaimer saying that the "actual" rules are more complicated -- and then, sadly, cite no sources for further study.

    For example: laney.edu/wp/cheli-fossum/files/2011/01/Significant-Figure-Rules-for-los.pdf

    I don't see a problem with actually following the rules as given (I'll give a worked example), but I am concerned that there are supposedly places where the rules break down for Napierian logarithms as opposed to log base 10. Does anyone know the more complete rules for Napierian logarithms, or why there is an issue at all? I'm forced to use ln() (base e) on the computer system I do analysis on, and don't want mistakes creeping in if I can avoid them.

    As a worked example, I'll compute 3.05**2.01.
    Please note, I am only going to report the s.f. of each step -- but the non-reported (insignificant) figures are still used in subsequent steps to improve rounding accuracy.

    To solve it, I'll convert it to a logarithm and exponent expression.
    3.05**2.01 = exp( ln(3.05) * 2.01 )

    ln(3.05) becomes 1.115, because the 3 s.f. in the original number become a mantissa of 3 places (.115) in the logarithm. So this ln() produces a number with a total of 4 s.f.

    1.115 * 2.01 becomes 2.24, since four s.f. times three s.f. is three s.f.
    exp(2.24) becomes 9.4, i.e. a mantissa of 2 digits becomes two s.f. in the result.

    so: 3.05**2.01 ~= 9.4 (according to the laney.edu rules, extrapolated naively).
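    The steps above can be sketched in Python (a minimal illustration of the rule as I'm applying it; full precision is carried between steps and only the *reported* figures are rounded):

    ```python
    import math

    # Worked example: 3.05**2.01 via exp(ln(x) * y), applying the
    # laney.edu-style rule at each step.
    x, y = 3.05, 2.01           # both 3 s.f.

    ln_x = math.log(x)          # 1.1151...; 3 s.f. in -> 3 decimal places out
    print(round(ln_x, 3))       # report as 1.115 (4 s.f. total)

    product = ln_x * y          # 4 s.f. times 3 s.f. -> 3 s.f.
    print(round(product, 2))    # report as 2.24

    result = math.exp(product)  # 2 decimal places in -> 2 s.f. out
    print(round(result, 1))     # report as 9.4
    ```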

    As an error analysis, I followed laney.edu's example and inspected neighboring calculation values.
    3.04**2.00 = 9.2416...
    3.05**2.01 = 9.4068...
    3.06**2.02 = 9.5754...
    i.e. error ~= +-0.17

    So it does appear that there are in fact only 2 s.f. in the final result, as the error term varies in the first decimal place -- even though there were 3 s.f. in both values passed into the power expression. I therefore *lost* 1 s.f. due to the calculation.

    And, for a compare-and-contrast example, if I redo the calculation with an infinitely precise 2 as the exponent, which in theory should give a 3 s.f. result:

    3.04 ** 2 = 9.2416
    3.05 ** 2 = 9.3025
    3.06 ** 2 = 9.3636
    i.e. error ~= +-0.061

    So it does appear there are about 3 s.f. in the final result, as the error term varies in the second decimal place -- although one might argue that the first decimal digit did change, from 9.24... to 9.30...
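    Both neighbour-value error estimates above can be reproduced with a short helper (an assumed approach, mirroring the laney.edu example of inspecting adjacent calculation values):

    ```python
    def half_spread(lo, hi):
        """Half the distance between the two neighbouring results."""
        return (hi - lo) / 2

    # Both inputs uncertain (3 s.f. each): perturb x and y together.
    print(half_spread(3.04**2.00, 3.06**2.02))  # ~0.17 -> only 2 s.f. survive

    # Exponent exactly 2: perturb only x.
    print(half_spread(3.04**2, 3.06**2))        # ~0.061 -> about 3 s.f. survive
    ```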

    So -- I don't really see any problem with the results of my random example, but I'd like to know if anyone knows the more complete rules that laney.edu and a few other colleges refer to but don't list out?
  3. Dec 30, 2014 #2



    The only "rule" for significant figures that I know is that the result of a calculation has the same number of significant figures as the least accurate original number. 3.05^2.01 ("**" to indicate exponentiation is a computer notation, not a standard mathematics notation) is equal to 9.4068166043739989814585743521316 and so should be written as 9.41. That is a matter of "measurement error" and has nothing to do with how you calculate the value.
  4. Dec 30, 2014 #3
    Well, I would suggest you look up a few pages of guidelines for logarithms and significant figures to understand my background -- and to verify what I am saying -- because the "rule" you have learned does not apply to logarithmic calculations, according to the many university professors at credible colleges whose guidelines on s.f. calculations I have been reading...

    I think I understand why they give a different rule than you do, so I'll try to explain what I understand of their reasoning as I continue to research where their guidelines might fail...

    Consider: the purpose of significant figures is to estimate where an error or uncertainty in the original number's value will distort the result of a calculation, so that we can avoid reporting misleading digits in a final result -- digits likely distorted by that error or uncertainty. That is, it lets us report a result that others will find useful, because they know the round-off error or uncertainty is generally confined to the least significant reported digit.

    I have noticed, when using results reported by others, that if they rounded properly to the correct number of s.f., the reported result is generally still good enough to use in another calculation; the results of the next step degrade only slightly more (e.g. +-2 in the least significant digit) than if the unreported digits had been used.

    That means that if I take a pH value, for example, and do an anti-logarithm on it, I should get a value which is accurate to the number of significant figures the original experiment measured, plus or minus about 2 in the least significant digit.
    It should not be off by plus or minus 100 in the least significant digit...
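    As a hypothetical illustration in Python (the pH value here is made up): under the logarithm rule, a pH measured to 2 decimal places carries 2 s.f. of concentration information, because the characteristic only sets the power of ten.

    ```python
    # A pH of 4.60 (2 decimal places -> 2 s.f. of concentration):
    pH = 4.60
    conc = 10**(-pH)       # anti-logarithm, ~2.5e-5
    print(f"{conc:.1e}")   # report to 2 s.f.
    ```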

    In order to accomplish this goal with logarithms, studies of the propagation of error/uncertainty in logarithms apparently led various people, colleges, and professors to formulate the following rule of thumb for logarithms:

    When a logarithm is taken, the number of decimal digits written after the decimal point -- is to be the same as the number of significant figures of the original number.

    As an example of the 'rule' in action:
    log10( 1.433e10 ) = 10.1562
    Hence a 4 s.f. domain became a six-digit range (as written) -- a clear violation of the 'rule' you know.

    Consider for a moment, though, what the number 10.1562 represents. It has two parts, a characteristic -- and a mantissa.
    The characteristic of a logarithm (the integral number to the left of the decimal point) corresponds to the exponent of the original number.
    The mantissa of a logarithm (the fraction to the right of the decimal point) corresponds to the significant figures of the value of the original number without regard to the exponent.

    This is easy to check by taking the logarithm of the original number with the exponent removed.
    log10( 1.433 ) = 0.1562

    Note, here both the value passed to the logarithm and the result have 4 s.f. -- which does not violate your 'rule'; but only because the exponent was removed.
    And more importantly, notice that the mantissa of the new result is identical to that of the old one: log10( 1.433e10 ) also has a mantissa of .1562, but in addition it has a characteristic of 10, because that characteristic corresponds to the original exponent '10' found in 1.433e10.
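    A quick sketch of that characteristic/mantissa split (variable names are mine):

    ```python
    import math

    # The exponent of the input reappears as the integer part of the
    # log; the mantissa depends only on the significant figures.
    full = math.log10(1.433e10)        # 10.1562...
    bare = math.log10(1.433)           # 0.1562...
    characteristic = math.floor(full)  # integer part
    mantissa = full - characteristic   # fractional part

    print(characteristic)              # 10 -- the original exponent
    print(round(mantissa, 4))          # 0.1562
    print(round(bare, 4))              # 0.1562 -- identical mantissa
    ```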

    So, in a nutshell, the reason logarithms can have different significant digits than the value operated on is:

    By convention, we do not count the digits of the exponent of the number passed to a logarithm as significant figures; but after taking the logarithm, those exponent digits ARE counted as significant, in the form of the characteristic of the logarithm.

    If I were to disregard the rule given by these many professors and try your 'rule' instead, let's see what happens:

    log10( 1.433e10 ) would have a result of 10.16 (4 s.f.), and this result is supposed to contain all the data needed to reproduce the original value to about 4 s.f. (plus or minus a small round-off error in the least significant digit).
    Hence 10**10.16 should get back roughly 1.433e10, +- 1 or 2 units in the fourth significant figure (i.e. +-0.001e10 or 0.002e10).

    However, 10**10.16 ~= 1.445e10 -- a value with a round-off error of 12 units in the fourth figure (0.012e10), which is 6 to 12 times larger than would be expected from a result with four significant figures. And clearly, this unexpected round-off error gets worse as the exponent of the original number is made larger...
  5. Dec 30, 2014 #4



    For whatever it is worth and bearing in mind that I am no expert... I agree with learn.steadfast's reasoning and with the rule that taking a base 10 logarithm should produce a result with roughly the same number of fractional digits as there were significant figures in the original.

    The OP raised a question about the difference between the base 10 log rule and the base e log rule.

    An error in the third decimal place of a base 10 log will correspond to a relative error of about ##\sqrt[1000]{10}##. That is a relative error of a little over 2 parts in 1000 -- tolerably close to the relative error of 1 part in 1000 that we might take as defining "3 significant figures". If anything, we might want a modified rule: "tack on an extra digit in the log as long as the first digit of the original number was 5 or more".

    An error in the third decimal place of a natural log will correspond to a relative error of about ##\sqrt[1000]{e}##. By no coincidence, that is a relative error of about 1 part in 1000 -- actually a better fit to the nominal 1-part-in-1000 relative error that we want.

    There is nothing special about the third decimal place in this; the above works just as well for any number of significant figures. The number of significant figures in the base 10 log as prescribed by the rule will slightly under-report the available precision in the input. However, one might want to consider a proper error analysis rather than "sig figs". A better error analysis would take the partial derivative of the computed result with respect to each input, multiply each partial derivative by the absolute uncertainty of the corresponding input, and take the square root of the sum of the squares of those terms to arrive at the absolute uncertainty of the result.
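    That quadrature recipe, applied to f = x**y, can be sketched as follows (the uncertainty values are illustrative assumptions of half a unit in the last reported digit):

    ```python
    import math

    def power_uncertainty(x, sx, y, sy):
        """Quadrature error propagation for f = x**y."""
        f = x**y
        df_dx = y * x**(y - 1)   # partial derivative w.r.t. x
        df_dy = f * math.log(x)  # partial derivative w.r.t. y
        return math.hypot(df_dx * sx, df_dy * sy)

    # 3.05 +/- 0.005 raised to the power 2.01 +/- 0.005:
    print(power_uncertainty(3.05, 0.005, 2.01, 0.005))  # ~0.06
    ```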
  6. Jan 2, 2015 #5
    That's correct. Some professors, such as Michelle (Cheli) Fossum at laney.edu (www.laney.edu/wp/cheli-fossum/files/2011/01/Significant-Figure-Rules-for-los.pdf), do discuss both logs base 10 and natural logs.

    The paper cited is typical of the mainstream thought I can find on the subject, and as with many other professors -- Fossum gives the same basic rule for both types of logarithms. However -- a few professors suggest by what they write that the rules they themselves learned for natural logarithms are slightly more complicated than those for log base 10. I assume they mean something learned at the masters or doctorate level -- as I can't find such a rule at the undergraduate level.

    e.g. Fossum wrote in her 2011 guideline:
    "When you take the log of a number with N significant figures, the result should have N decimal places. The number in front of the decimal place indicates only the order of magnitude. It is not a significant figure. The rule for natural logs (ln) is similar, but not quite as clear cut. For simplicity, we will use the above rules for natural logs too."

    So, she clearly (if implicitly) distinguishes between the characteristic and mantissa of the logarithm base 10, and gives the 'rule' of not counting the characteristic as part of the significant figures. However, her comments, and those of several other professors, cause me some concern that the natural logarithm might have pitfalls for the unwary. Her class is a controlled environment, so there is no guarantee that the rule will be completely valid outside the homework problems she tailors to use it... hence, I'm asking if anyone knows a more generally valid/proper rule...

    You've given me a lot to think about with your comment. It's rather ironic ?:) that I am looking for slightly more complicated rules for natural logarithms, and you notice an issue with log base 10 that would require slightly more complicated rules for log base 10...

    Ok, yes ... I get the gist of the error analysis. However, first derivatives are only accurate estimators of error over very small distances, where the second and third derivatives don't 'bend' the error curve too much... and, gee, I feel a headache coming on.... :nb)

    Rather than go that far right now, I wrote a program to test all logarithms up to eight significant figures by simply counting through all possibilities and generating three original numbers (domain values) for each: exact, +0.4999999e-8, and -0.4999999e-8.

    The program takes the logarithm of the original number, rounds *off* the result to the rule's significant figures, and then takes the antilogarithm and compares against the original value to compute an absolute value "error" for the rule.
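    A minimal sketch of such a round-trip test (an assumed reconstruction -- the helper name and the scale bookkeeping are mine, not the original program's):

    ```python
    import math

    def roundtrip_error(x, sigfigs, log_fn, exp_fn):
        """Units of the last significant digit lost in a log round trip."""
        y = round(log_fn(x), sigfigs)  # rule: N s.f. -> N decimal places
        back = exp_fn(y)               # antilogarithm
        # Scale so the last significant digit of x sits in the ones place.
        scale = 10**(math.floor(math.log10(x)) - (sigfigs - 1))
        return round(back / scale) - round(x / scale)

    # The first 3-sigfig base 10 failure reported above, vs. natural log:
    print(roundtrip_error(4.64e-6, 3, math.log10, lambda y: 10**y))  # nonzero
    print(roundtrip_error(4.64e-6, 3, math.log, math.exp))           # no error
    ```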

    I've only tested the natural logarithm to 4 sigfigs so far, but you are basically correct: with natural logarithms the rule yielded no errors whatsoever, while with log base 10, small round-off errors do in fact pop up all over the place.

    The first round-off errors versus significant digits, when using log base 10, were at:
    7.e-8 exact (1 sigfig)
    5.3e-7 exact (2 sigfigs)
    4.64e-6 exact (3 sigfigs)
    4.402e-5 exact (4 sigfigs)
    4.3635e-4 exact (5 sigfigs)
    4.34830e-3 exact (6 sigfigs)
    4.345571e-2 exact (7 sigfigs)
    (still waiting to find the 8th sigfig error...)

    In all cases, the error occurred in the same place whether 0.4999999e-8 was added, subtracted, or omitted, and the error was at most 1 least-significant digit. See the attached plot from an earlier version of the program, meant to test up to 6 significant figures (e-6): the x axis is the 3 most significant digits, the y axis is the remaining significant digits of the original number, and pixel colour is related to the absolute error of the result. Notably, the maximum error is coloured red, which is 1 significant digit.

    A brief look at the plot led me to reconsider your conservative rule of adding a digit at '5' or above. I think it would need to be '4' or above -- or else, for more memorable accuracy, say that any number whose most significant figures are less than or equal to those found in "the answer to life, the universe and everything" (Douglas Adams) is left alone; those exceeding the answer to life, the universe and everything get one extra digit for exceeding the size of the universe.... ;)

    I'm pretty sure, after thinking about this issue and your comments, that the problem with natural logarithms lies in the fact that the exponent (as converted to a 'characteristic') is no longer base 10. Hence the total number of digits in a base 10 logarithm is not always going to be the same as in a base e logarithm.

    Since logarithms in different bases can be converted to each other merely by multiplying or dividing by an appropriate (exact) constant, the conversion clearly should always preserve the same total number of digits -- which the present simple rules we are using will not allow to happen.
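    That exact-constant conversion is easy to check in Python (a quick sketch; note the digit counts of the two logs differ even though no information is lost in the conversion):

    ```python
    import math

    # ln(x) and log10(x) differ only by the exact factor ln(10).
    x = 3.05
    print(math.isclose(math.log(x), math.log10(x) * math.log(10)))  # True

    # Yet the two logs have different total digit counts under the rule:
    print(round(math.log(x), 3))    # 1.115 -- 4 digits total
    print(round(math.log10(x), 3))  # 0.484 -- 3 digits total
    ```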

    Solving the problem may be an optimization problem that isn't really solvable, or perhaps there is a universal rule/formula, applied to the logarithm base, that determines when to add or subtract a digit from the result so that all logarithm conversions are uniform. I'm not sure yet... but based on the tests I've done so far, I don't think using the basic rule with a natural logarithm will ever severely underestimate the number of digits needed to represent a value.

