# I do not understand Log tables

#### John3509

I was watching this video about how they invented log tables and some things I do not understand.

At 5:12 he changes the scaling from 1's to 10's. I don't understand how this is allowed. The number represents the number of times the base was multiplied by itself. In the seconds before this, the base 1.00010000 squared is 1.00020001, but 1.00010000^20 does not equal this. Doesn't this throw off the whole chart?

I do understand, or at least think I do, his scaling of the right side. Instead of doing, for instance, (1.00010000)(1.00020001), you can do [(100010000)(100020001)](.00000001^2) and solve what is inside the square brackets by adding the exponents and taking the antilog, then multiply by .00000001 twice to scale back down. But when scaling, you have to add it in, don't you?
For instance, when doing log 158.4893 you split it into log (1.584893)(100), use the log rules, and then do 2 + log 1.584893. Why can't we do what he did in the video and do log 1.584893 and then multiply by 100 to scale back? Why the difference?

This isn't regarding the video in particular but log tables in general.

Here is one common table
http://turner.faculty.swau.edu/mathematics/math181/materials/logtable/logtable.php
and another
Why does the column in one start with 1.0 and go up by .1, while the other starts with 10 and increases by 1? And why are the logs different, for instance .0043 vs 0043? What is 0043? Is it just regular 43 with the two zeros in front as placeholders? Or is it really .0043 with the decimal point left out? If it is the same, that takes me back to my first question.

Where does the mean difference number come from? How is it derived?

And finally, if you look at the first table I referenced (turner), the arguments are increasing by .01, but the increases in the logarithm should shrink with each step, since the slope of an exponential function is constantly increasing. The differences for the first row are .0043, .0043, .0042, .0042, .0042, .0041...
Why such a pattern? Shouldn't it be decreasing at every step?

#### Bill Simpson

For your final point, yes, the values do decrease with every step. They are just rounded off to 4 digits, and that makes it less obvious that they are consistently decreasing with every step.

If instead you round them off to 64 digits then perhaps the pattern will be more obvious.

0.00432137378264257427518817822293791321928935520645259140581863694844801036910
0.00427879797927498677374851408500745275659676887139821211484294682858339394610
0.00423705294325464412213450227229405841463737289817976123214634948486141373953
0.00419611459360814967665064753556808674377533579736333097966542511698484139873
0.00415595977115771794578342500745096478121254627915883429059977451749674395785
0.00411656619483216805322591922849114354818090370449359188145728400595959091943
0.00407791242043939998781005310968602666312460887697303007713159155843794527785
0.00403997780174006147802025975289633500994946386228532512878983470913189729122
0.00400274245367393288795180839954330550543816733167900499994869210041196421505
0.00396618721760140554968666362914895506449766763846941248613848340943723111837
0.00393029362843239335327862155122051547639936389055531685100653241906154597188

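These differences can be reproduced today with Python's standard `decimal` module set to high precision. A quick sketch (my own check, not how the numbers above were necessarily produced):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80  # work with 80 significant digits

def log10_diff(k):
    """log10(1 + (k+1)/100) - log10(1 + k/100), i.e. one step of the table."""
    a = (Decimal(100 + k) / 100).log10()
    b = (Decimal(101 + k) / 100).log10()
    return b - a

diffs = [log10_diff(k) for k in range(11)]
for d in diffs:
    print(d)

# At full precision the differences really do shrink at every single step.
assert all(diffs[i] > diffs[i + 1] for i in range(10))
```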
#### mfb

Mentor
> At 5:12 he changes the scaling from 1's to 10's I don't understand how this is allowed.
It doesn't really matter as long as you keep track of it. You can divide all numbers by 10 again later.
> For instance when doing log 158.4893 you split it into log (1.584893)(100) you use the log rules and then do 2+ log 1.584893. Why cant we do what he did in the video and do log 1.584893 and then multiply by 100 to scale back? Why the difference?
100*log(1.584893) is not the same as log(158.4893) = log(1.584893*100) = log(1.584893) + log(100) = log(1.584893)+2
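A quick numerical check of the difference between the two operations:

```python
import math

x = 1.584893
print(math.log10(x * 100))   # log10(158.4893)
print(math.log10(x) + 2)     # same value, via log(a*b) = log(a) + log(b)
print(100 * math.log10(x))   # NOT the same: this is log10(x**100), about 20

assert math.isclose(math.log10(x * 100), math.log10(x) + 2)
```

Multiplying the argument by 100 adds 2 to the logarithm; multiplying the logarithm by 100 corresponds to raising the argument to the 100th power.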
> Why does the column in one start with 1.0 and go up by .1 while the other starts with 10 and increases by 1? And why are the logs different, for instance .0043 vs 0043, what is 0043? Is it just regular 43 with the 2 zeros in front as place holders? Or is it really .0043 but it is left out, if it is the same that takes me back to my first question.
All just different notations. Sometimes the dot is omitted, sometimes not, sometimes a leading 1 is omitted...
Basically: Get an idea what the approximate logarithm should be, then you can read a much more precise value from these tables.
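In code, the split a table user performs looks roughly like this (`table_log10` is my own illustrative helper, not anything from an actual table):

```python
import math

def table_log10(x):
    """Split log10(x): integer part from the decimal point, rest from a table."""
    characteristic = math.floor(math.log10(x))      # read off the magnitude
    mantissa = math.log10(x / 10**characteristic)   # the table stores this part
    return characteristic, mantissa

print(table_log10(158.4893))   # characteristic 2, mantissa ~0.2000
print(table_log10(1.584893))   # characteristic 0, same mantissa
```

The table only needs to cover arguments from 1 to 10; the characteristic comes for free from the position of the decimal point.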

#### tech99

Gold Member
> I was watching this video about how they invented log tables and some things I do not understand.
May I mention that aged 13 and before calculators, we kids were red hot at using log tables, including log cosines etc, and at 17 we started using the slide rule, which is also a log device.
I still have a book of logs which I use for astro navigation, as I tell myself I would like to keep the whole process electronic free and independent of batteries. Of course, I always end up using a calculator.

#### Chestermiller

Mentor
> May I mention that aged 13 and before calculators, we kids were red hot at using log tables, including log cosines etc, and at 17 we started using the slide rule, which is also a log device.
> I still have a book of logs which I use for astro navigation, as I tell myself I would like to keep the whole process electronic free and independent of batteries. Of course, I always end up using a calculator.
Have you tried using a slide rule, where all the lengths are laid out in proportion to logs, so multiplying and dividing numbers boils down to adding and subtracting lengths?

#### John3509

> For your final point, yes the values do decrease with every step. It is just they are rounded off to 4 digits and that makes it less obvious that they are consistently decreasing with every step.
>
> If instead you round them off to 64 digits then perhaps the pattern will be more obvious.
>
> 0.00432137378264257427518817822293791321928935520645259140581863694844801036910
> 0.00427879797927498677374851408500745275659676887139821211484294682858339394610
> 0.00423705294325464412213450227229405841463737289817976123214634948486141373953
> 0.00419611459360814967665064753556808674377533579736333097966542511698484139873
> 0.00415595977115771794578342500745096478121254627915883429059977451749674395785
> 0.00411656619483216805322591922849114354818090370449359188145728400595959091943
> 0.00407791242043939998781005310968602666312460887697303007713159155843794527785
> 0.00403997780174006147802025975289633500994946386228532512878983470913189729122
> 0.00400274245367393288795180839954330550543816733167900499994869210041196421505
> 0.00396618721760140554968666362914895506449766763846941248613848340943723111837
> 0.00393029362843239335327862155122051547639936389055531685100653241906154597188
I suspected it might have something to do with precision, since the overall trend was still decreasing, just not how I expected. I just couldn't figure out how to get so many significant figures. Did you use some high-precision calculator online? What about without calculators? If you were, for instance, some mathematician a century ago, how would you figure this out?

#### mfb

Mentor
WolframAlpha gives you hundreds of digits if needed. Just keep clicking "more digits".
> just not how I expect it
What did you expect, and how did that differ from what you found?

#### symbolipoint

Homework Helper
Gold Member
I really hope some members can give good responses to help this topic go somewhere. I also wonder how, in the 1930s and 1940s, the many significant figures in the log and antilog tables found in handbooks were computed. Typical textbooks on intermediate algebra would show four-significant-figure decimal values in their tables, while the older handbooks would show MORE significant figures. There were no computers in those days to help with getting so many significant figures.

#### mfb

Mentor
Calculate enough terms in whatever expansion is suitable for enough points, interpolate. A lot of annoying manual work, but you only have to do it twice per value (better have someone check it) and then you can print the tables as often as you wish.

#### symbolipoint

Homework Helper
Gold Member
> Calculate enough terms in whatever expansion is suitable for enough points, interpolate. A lot of annoying manual work, but you only have to do it twice per value (better have someone check it) and then you can print the tables as often as you wish.
When the kids used tables to four decimal places, they were taught to interpolate linearly to estimate the fifth decimal place. The older handbook tables which went beyond four decimal places would not have relied on interpolation to get those fifth or sixth decimal places. Would groups of human computers have used knowledge of series or sequences of functions, and lengthy hard work (before the time of electronic computers), to actually compute all these values to the fifth and sixth decimal places to put into log and antilog tables?
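The four-figure interpolation the students did can be sketched like this; here the printed table entries are simulated by rounding, not taken from a real table:

```python
import math

def table_entry(x):
    """A printed four-figure table stores log10 rounded to 4 decimal places."""
    return round(math.log10(x), 4)

# Interpolate log10(2.345) from the entries for 2.34 and 2.35.
lo, hi = table_entry(2.34), table_entry(2.35)       # 0.3692 and 0.3711
estimate = lo + (2.345 - 2.34) / 0.01 * (hi - lo)
print(estimate, math.log10(2.345))
assert abs(estimate - math.log10(2.345)) < 5e-5
```

The linear estimate is good to roughly one unit in the fifth decimal place, which is exactly why the fifth digit obtained this way was treated as an estimate.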

#### mfb

Mentor
Let's say you want to get the natural log of numbers between 2 and 3 with at most $10^{-5}$ error using linear interpolation.

The interpolation error will be dominated by the quadratic term, and it will be largest near 2; for a point spacing $d$ we get $d^2/16$ if I counted the powers of 2 correctly. Solve: $d<0.0126$. Let's use 0.01. If we give 6-digit numbers, then rounding errors are below $10^{-6}$, negligible. The Taylor expansion $\displaystyle \ln(e+x) = 1+\sum_{i=1}^{\infty}(-1)^{i+1}\frac{x^i}{ie^i}$ converges for $|x|<e$. We need 9 terms, they have awkward numbers from powers of $e$, and we need that 100 times. A bit ugly, but it works.
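As a numerical check: the expansion around $e$, with the alternating sign written out, does reach the $10^{-6}$ level on [2, 3] with 9 terms. A quick Python sketch (my own check, not how tables were made):

```python
import math

def ln_series(y, terms):
    """ln(y) via ln(e+x) = 1 + sum (-1)^(i+1) x^i / (i e^i), truncated."""
    x = y - math.e
    s = 1.0
    for i in range(1, terms + 1):
        s += (-1) ** (i + 1) * x ** i / (i * math.e ** i)
    return s

# Worst case on [2, 3] is y = 2, the largest |x| = e - 2; 9 terms suffice.
print(abs(ln_series(2.0, 9) - math.log(2.0)))
print(abs(ln_series(3.0, 9) - math.log(3.0)))
```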

Better to calculate intermediate values via the exponential function, which converges faster. We need $e^x$ from $\ln(2)-1=-0.31$ to $\ln(3)-1=0.10$; the larger deviations will be at the former. The $x^6$ term is $1.0\cdot 10^{-6}$ and should be included, the $x^7$ term is $4.3\cdot 10^{-8}$ and negligible. So let's calculate $e^x$ from $-0.31$ to $0.10$ in steps of 0.02, then multiply by $e$ to get values from 2 to 3. We now have 20 logarithm points good to better than $5\cdot 10^{-8}$. The largest spacing is close to 2, where the points are 0.04 apart. You can do a quadratic interpolation. Its maximal error is $d^3/(96\sqrt{3})$ if I didn't miscalculate it; if we want that to be smaller than $10^{-6}$ we need points with a maximal distance of 0.055. That fits! We can now calculate ln(2), ln(2.01), ... with 20 quadratic interpolations and an error below $10^{-6}$. But we don't have to stop there. We have the quadratic terms anyway, so calculating more logarithms isn't much extra effort. We can calculate ln(2), ln(2.001), ...; now the error from linear interpolation becomes negligible, and you have 6-digit accuracy everywhere.

Did they calculate logarithm tables that way? Probably not. That's just what I found quickly; I would be surprised if it is the best method. You can probably save some time with the exponentials if you calculate e.g. $e^{0.02}$ once and then take powers of it.
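The exponential-based scheme can be sketched in Python. This is my own rough implementation under the assumptions in the post (Taylor series for $e^x$, quadratic Lagrange interpolation); I use a 0.01 grid step rather than 0.02 to stay comfortably inside the error target:

```python
import math

def exp_taylor(x, terms=10):
    """e^x from its Taylor series; ample accuracy for |x| <= 0.31."""
    s, t = 1.0, 1.0
    for i in range(1, terms):
        t *= x / i          # t = x^i / i!
        s += t
    return s

# Tabulate (value, ln value): value = e * e^x = e^(1+x), whose log is 1 + x.
xs = [-0.31 + 0.01 * k for k in range(43)]           # -0.31 ... 0.11
points = [(math.e * exp_taylor(x), 1.0 + x) for x in xs]

def ln_interp(v):
    """ln(v) by quadratic Lagrange interpolation through 3 neighboring points."""
    i = min(range(1, len(points) - 1), key=lambda j: abs(points[j][0] - v))
    (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
    return (y0 * (v - x1) * (v - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (v - x0) * (v - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (v - x0) * (v - x1) / ((x2 - x0) * (x2 - x1)))

# Check the error over the whole range [2, 3].
err = max(abs(ln_interp(2 + k / 100) - math.log(2 + k / 100)) for k in range(101))
print(err)
assert err < 1e-6
```

The trick is that each tabulated exponential gives an exact (value, logarithm) pair for free, so no logarithm series is ever needed.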

#### John3509

> 100*log(1.584893) is not the same as log(158.4893) = log(1.584893*100) = log(1.584893) + log(100) = log(1.584893)+2
Regarding the scaling, shouldn't it be log[(100010000)(.00000001)]+log[(100020001)(.00000000)] ? How does he scale down?

> WolframAlpha gives you hundreds of digits if needed. Just keep clicking "more digits".
>
> What did you expect, and how did that differ from what you found?
I expected that the values would decrease at every step of the table; instead they decreased about every other step. The difference between the first two values was .0043, the next difference was also .0043, then it dropped to .0042, and then again .0042. Since the slope is constantly increasing, this should not be, so I suspected it may be an accuracy issue. I was thinking that the mean difference, which is added on at the end, may have something to do with this as well, but I do not understand how, or where the mean difference comes from, or why it is added.

#### John3509

> When the kids use or used tables to four decimal places, they were taught to linearly interpolate to estimate the fifth decimal place. The older tables in the handbooks which went beyond four decimal places would not have relied on interpolation to get those fifth or sixth decimal places. Would groups of human calculatorial personnel have used maybe knowledge of series or sequences of functions and lengthy hard work (like before the time of computers) to actually compute all these values to fifth and sixth decimal places to put into log and antilog tables?
Like here?
But that is not very accurate beyond the first 2 decimals.

#### mfb

Mentor
> Regarding the scaling, shouldn't it be log[(100010000)(.00000001)]+log[(100020001)(.00000000)] ? How does he scale down?
As the second log is undefined: No. I'm also not sure what you calculate there.
> I expected that the values would decrease every step on the table, instead they decreased about every other step, the differenced between the first 2 values was .0043, the next difference was also .0043, then it dropped to.0042, and then again .0042.
That is a result of rounding errors.

#### tech99

Gold Member
I thought that Charles Babbage's two computing engines were invented to calculate and print mathematical tables, starting in 1822. He used a method requiring only addition and subtraction. At that time a "computer" was a person, by the way.
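That method (the method of finite differences) is easy to sketch: once the first few differences of a polynomial are known, every further table entry needs only additions. The polynomial $x^2+x+41$ below is my own illustrative choice, not a claim about Babbage's actual demonstration:

```python
def difference_engine(value, diffs, steps):
    """Tabulate a polynomial using only addition on the difference column."""
    out = []
    d = list(diffs)
    for _ in range(steps):
        out.append(value)
        value += d[0]                      # next function value
        for i in range(len(d) - 1):
            d[i] += d[i + 1]               # update lower-order differences
        # the highest difference stays constant for a polynomial
    return out

# p(x) = x^2 + x + 41: p(0) = 41, first difference p(1)-p(0) = 2,
# second difference constant = 2.
print(difference_engine(41, [2, 2], 8))
```

No multiplication ever occurs, which is what made the scheme mechanizable.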
