As I mentioned earlier (in Exponentials, Continued), no matter how close two numbers are, there's always some probability that rounding them will produce a one-unit error. So, a formula that produces only one-unit errors is in some sense perfect … another formula might produce errors with lower probability, but the possibility of error will always need to be taken into account.
The formula in step 4, when evaluated as described, is perfect. Earlier, using the rule of thumb that the error should be less than 5×10⁻⁶, we estimated that the formula would be valid out to x = ±0.018 and a bit. Even with error 10⁻⁵, that condition would be enough to prove the formula perfect … except that the algorithm has intermediate rounding operations, which are difficult to analyze. So, instead of analyzing it, I wrote a little program that implemented the algorithm, rounding and all, and ran it until it produced a two-unit error. As it happens, the estimate was good … the formula is perfect out to x = ±0.019 and a bit.
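A brute-force check of this kind is easy to reproduce. Here's a minimal sketch; since the step-4 formula itself isn't shown in this excerpt, it substitutes the second-order expansion 10^x ≈ 1 + Lx + (Lx)²/2 (with L = ln 10) and rounds each intermediate result to five places, so the exact breaking point it finds will differ from the essay's — only the method is the point.

```python
def round5(v):
    """Round to five decimal places; one 'unit' is 10^-5."""
    return round(v * 1e5) / 1e5

L = 2.30259  # ln 10, itself rounded to five places

# scan upward through every five-place x until the rounded result
# disagrees with the correctly rounded true value by two units
i = 0
while True:
    i += 1
    x = i / 100000
    a = round5(L * x)                       # first-order term, rounded
    y = round5(1 + a + round5(a * a / 2))   # stand-in formula, rounded
    true = round5(10 ** x)                  # correctly rounded true value
    err = round(abs(y - true) * 1e5)        # disagreement in units
    if err >= 2:
        break

print("first two-unit error at x = %.5f" % x)
```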
The logarithm method, by the way, is not quite perfect, because rounding error from the known logarithms can build up. On the other hand, for that to build up, the number n has to be large, which makes the formula error small; so it has to build up further than you'd expect, to a point where you probably wouldn't use the factors anyway. The details will depend on what logarithms you know, but just to give you an idea, if you happen to know the logarithms up to 13, then the first imperfection occurs when you try to calculate log 4562 via 4563 = 3³ × 13².
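You can see the buildup directly in that example. The snippet below computes log 4563 from the factorization 3³ × 13², using the five-place values of log 3 and log 13; each memorized log can be off by up to half a unit, and here three copies of one and two copies of the other stack their errors.

```python
from math import log10

log3 = 0.47712   # log 3 = 0.4771212..., rounded to five places
log13 = 1.11394  # log 13 = 1.1139433..., rounded to five places

# log 4563 = 3 log 3 + 2 log 13, assembled from the rounded values
approx = 3 * log3 + 2 * log13
exact = log10(4563)

print("approx %.5f, exact %.5f, off by %.2f units"
      % (approx, exact, (approx - exact) * 1e5))
```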
Now, to get back to the algorithm, suppose we've just finished step 4 … that is, we've evaluated the formula to obtain a number near 1, and we know that the true value is at most one unit away. That's the rounded true value, of course; the unrounded true value could be as much as 1.5 units away. But then, in step 5, when we scale the number from 1 up to r (say), the error will scale too, so that the final result could be off by as much as 1.5r units.
Now we know everything we need to understand the example from the previous essay. The result started with 4.6, so it could have been off by 1.5 × 4.6 = 6.9 units, or 7 after rounding; the fact that it was off by four units is not at all bad or surprising.
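The scaling argument is easy to check numerically. Here's a tiny illustration with made-up values chosen to match the example: an unrounded true value whose formula result is one unit away from the rounded value but nearly 1.5 units away from the truth.

```python
# hypothetical unrounded true value, chosen for illustration
true = 1.000274999
rounded = round(true, 5)    # rounds to 1.00027
formula = 1.00026           # one unit below the rounded true value

# distance from the formula result to the unrounded truth, in units
units_off = (true - formula) * 1e5
print("formula is %.4f units from the truth" % units_off)   # ~1.5

# scaling up to r = 4.6, as in the example, scales the error too
r = 4.6
print("after scaling: %.4f units" % (units_off * r))        # ~6.9
```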
Because the error can round up, and because of various other uninteresting factors, the number 1.5r is not a hard limit. What is the limit? Well, that's hard to say … so once again, I wrote a little program to sort things out. To be precise, I had the computer run the algorithm for every possible input, from .00000 to .99999, and collect statistics for the error ratio, i.e., the error divided by r. (How intimidating is that, to have the computer do every possible calculation you might ever do, and all in a fraction of a second?) The resulting distribution looked more or less as you'd expect. There's a delta function at zero from numbers with zero error, then a gap up to 0.1 because even a one-unit error can only be divided by r < 10. After that, there's a broad peak between 0.1 and 1.0, a tail out to 1.5, and a few stragglers, up to a hard limit of 2.2. The mean value is 0.49. If you're curious, here are some of the actual numbers.
(These are for the case where you determine the number of copies of log 3 by looking at the second and third digits without rounding, and count 50 as close to 54; but the numbers for the other cases are very similar, and have the same hard limit.)
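If you'd like to run a version of this experiment yourself, here's a sketch. The antilog routine is emphatically a stand-in, not the essay's algorithm: it picks the scale factor r = 2^a 3^b 5^c 7^d (with 1 ≤ r < 10) whose log is nearest to x, applies a second-order formula to the small remainder, and rounds intermediate results to five places. The factor set, exponent range, and formula are all guesses, so the statistics will differ in detail; what it shows is the shape of the computation — every five-place input, error divided by r.

```python
from bisect import bisect
from itertools import product
from math import log10

def round5(v):
    """Round to five decimal places; one 'unit' is 10^-5."""
    return round(v * 1e5) / 1e5

# table of candidate scale factors r = 2^a 3^b 5^c 7^d in [1, 10),
# paired with their five-place logarithms (assumed factor set)
table = []
for a, b, c, d in product(range(-3, 4), repeat=4):
    r = 2.0**a * 3.0**b * 5.0**c * 7.0**d
    if 1.0 <= r < 10.0:
        table.append((round5(log10(r)), r))
table.sort()
logs = [logr for logr, r in table]

def antilog(x):
    """Approximate 10^x for 0 <= x < 1 with five-place arithmetic."""
    i = bisect(logs, x)
    # nearest table entry to x
    logr, r = min(table[max(i - 1, 0):i + 1], key=lambda t: abs(t[0] - x))
    rem = round5(x - logr)
    a = round5(2.30259 * rem)               # stand-in small-x formula
    y = round5(1 + a + round5(a * a / 2))   # number near 1
    return y * r, r

# every possible five-place input, error ratio = error in units / r
ratios = []
for i in range(100000):
    x = i / 100000
    approx, r = antilog(x)
    err_units = abs(approx - 10**x) * 1e5
    ratios.append(err_units / r)

print("mean ratio %.2f, max ratio %.2f"
      % (sum(ratios) / len(ratios), max(ratios)))
```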
In the end, it's probably best to think of the limit as 1.5r units after all.
Finally, here are the corresponding results for the variant that uses three decimal places instead of five. The formula is perfect out to x = ±0.021 and a bit; the error ratio distribution is similar to the one for five places, with mean value 0.43; and, surprisingly, 1.5r units turns out to be a hard limit.
@ September (2006)