Mentally multiply by π
Quirks about π that make it easy to build successive approximation
Note: this article makes use of MathML, the standard XML markup for math formulas. Sadly, this is not properly supported on some allegedly ‘modern’ and ‘feature-rich’ browsers. If the formulas don't make sense in your browser, consider reporting the issue to the respective developers and/or switching to a standard-compliant browser.
I recently came across this interesting article (blog post?) from the Fediverse showing a nifty trick to compute multiples of π by relying on only two products: by 3, and by 14. I encourage you to read the article if you're interested in the method because it's crystal clear, whereas I'm going to muddy the waters a bit, throwing my considerations at it.
The reason why the proposed idea works is that the first digits of the fractional part of the decimal expansion of π can be expressed as multiples of 14. Indeed, the fractional part of π is

0.1415926535… = 0.14 + 0.0014 + 0.0001926…
This effectively means that

π ≈ 3 + 14·10⁻² + 14·10⁻⁴ + …
and thus

n·π ≈ 3·n + 14·n·10⁻² + 14·n·10⁻⁴ + …
which is interesting because multiplying by 10⁻ᵏ means “shifting your results right by k digits”, which is particularly nice if n is an integer up to 7: 14·n then has two digits, and the first two terms in the expansion are simply shifted by two and four digits (no need to take any actual summation).
Let's make the example with n = 7, and try to compute

7·π = 21.9911485751…
This may seem to be an arbitrary divergence from Cook's example, but it is actually needed to discuss one of the limitations of the approach: spillover. We have 3·n = 21, 14·n = 98, and thus the first terms (up to what Cook calls the simplest approach) are:

21 + 0.98 + 0.0098 = 21.9898
(relative error is 6.1×10⁻⁵; note that this is the same value found by Cook, even though he discusses the absolute error instead: the relative error doesn't depend on n)
Things get a bit hairier on the first refinement, because the value of 14·n now needs to be shifted by one more digit, not two. And while in Cook's example with n = 4 he has 12.5656 and 12.56616 (remove the last two digits, replace with the new three digits), in our case there's a bit of a spillover: 98 + 9.8 = 107.8, so this refinement doesn't lend itself to a simple substitution of the last two digits:

21.9898 + 0.00098 = 21.99078
(relative error 1.7×10⁻⁵.)
The next refinement is a bit more involved, because (you may notice the 1/2 in the tableau at the top of this article) it involves halving our 14·n (which, coincidentally, is always possible because 14 is even) and adding it in the same place. In our case, this means adding another 49 to the last two digits, and once again we have a small spillover (luckily into a 0):

21.99078 + 0.00049 = 21.99127
The relative error is now 5.5×10⁻⁶, more than an order of magnitude better than the initial result, and, interestingly, it is now an overestimation.
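For readers who want to check the arithmetic, the successive refinements can be sketched in a few lines of Python (the function name and the `level` parameter are mine, not Cook's):

```python
import math

def mental_pi(n, level=0):
    """Approximate n*pi with the shift-and-add trick.

    level 0: 3n plus 14n shifted right by 2 and by 4 digits (simplest approach)
    level 1: also add 14n shifted by 5 digits
    level 2: also add half of 14n shifted by 5 digits
    """
    b = 14 * n
    approx = 3 * n + b * 1e-2 + b * 1e-4
    if level >= 1:
        approx += b * 1e-5
    if level >= 2:
        approx += (b / 2) * 1e-5
    return approx

for level in range(3):
    approx = mental_pi(7, level)
    rel = approx / (7 * math.pi) - 1
    print(f"level {level}: {approx:.5f} (relative error {rel:+.1e})")
```

The sign flip of the relative error at the last level shows the switch to an overestimation.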
Alternatives
If you go read the Fediverse thread, you'll notice that I raised the question of the comparison with approximations computed from the well-known rational approximations of π, i.e.

22/7 = 3.142857…
and

355/113 = 3.14159292…
Aside from the somewhat contrived case of n = 7 that I'm using here as an example (for which 7 × 22/7 = 22, trivially), multiples of these fractions would be harder to compute, as observed by Bill Ricker, although the multiple of 355/113 would still be more accurate.
What is interesting is that the second refinement presented by Cook can be further refined, e.g. by replacing the last “half of 14·n” contribution with a “3/8th of 14·n” contribution. This is less trivial to compute, unless we consider that 3/8 = 1/2 − 1/8, so we can correct Cook's refinement by subtracting the “half divided by 4”, which in our case is 49/4 = 12.25.
Even if we truncate this to 12, the fixup gives us

21.99127 − 0.00012 = 21.99115
with a relative error that is now 6.5×10⁻⁸, two orders of magnitude lower. (The actual correction would be 21.99127 − 0.0001225 = 21.9911475, with relative error 5×10⁻⁸.)
What is impressive about this particular result is that the relative error is not only of the same order as the one given by the 355/113 approximation, but actually slightly better (the relative error for 355/113 is closer to 8.5×10⁻⁸).
A summary
To get a good approximation of a multiple n·π of π:

- multiply n by 3 (call this a);
- multiply n by 14 (call this b);
- add: a,
b shifted by 2 digits,
b shifted by 4 digits,
b shifted by 5 digits;
- bonus: add b/2
shifted by 5 digits;
- extra: subtract b/8
shifted by 5 digits
(note that b/8 can be computed as (b/2)/4).
An alternative to the last two points, getting to the same result, would be to add 3/8 of 14·n (this time computing it as half of 3/4 of 14·n) shifted by 5 digits.
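The full recipe above, bonus and extra steps included, can be sketched with exact rational arithmetic to avoid any floating-point noise (the function name and variable labels are mine):

```python
from fractions import Fraction

def mental_pi_full(n):
    """Full recipe: a plus b shifted by 2, 4 and 5 digits, plus b/2 shifted
    by 5 digits, minus b/8 shifted by 5 digits (a = 3n, b = 14n)."""
    a, b = 3 * n, Fraction(14 * n)
    shift = lambda x, k: x / 10**k  # shift right by k decimal digits
    return (a + shift(b, 2) + shift(b, 4) + shift(b, 5)
              + shift(b / 2, 5) - shift(b / 8, 5))

# per-unit value of the approximant
print(float(mental_pi_full(1)))
```

The per-unit value comes out to 3.1415925, the approximant discussed in the text.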
Let's apply this to Cook's example for 4·π:
- 3 × 4 = 12;
- 14 × 4 = 56;
- 12 + 0.56 + 0.0056 + 0.00056 = 12.56616;
- bonus: 12.56616 + 0.00028 = 12.56644
(note that Cook's post has a couple of typos in this line; absolute error is still about 7×10⁻⁵);
- extra: 12.56644 − 0.00007 = 12.56637
(absolute error is now about 6×10⁻⁷; note that the extra division by 4 is particularly easy to do in Cook's example: half of 56 is 28, and 28/4 = 7).
with the possibility to replace 4. and 5. with a single addition of 21 (i.e. 3/8 of 56) shifted by 5 digits:

12.56616 + 0.00021 = 12.56637
Note however that the truncated version of the correction is slightly worse than the exact one as an approximation (a relative error of about 6.5×10⁻⁸ rather than 5×10⁻⁸, in addition to being of the opposite sign.) Note also that the true value sits almost exactly at the midpoint of these two.
Smallest doesn't mean easiest to handle
I believe that one of the most important discoveries this trick has led me to is the difference between how compact an (approximating) fraction is and how easy it is to handle.
This isn't something entirely new to me (I've always known that in computing there are several series that help find new digits of π faster than others), but what I learned this time is how this maps to “human computing”.
If we write the full mathematical expression of the approximant that leads to the algorithm we just described we have:

π ≈ 3 + 14·(10⁻² + 10⁻⁴ + 10⁻⁵) + (3/8)·14·10⁻⁵ = 3 + 14·(10⁻² + 10⁻⁴ + (11/8)·10⁻⁵)
which is considerably more complex than the expressions we have from the continued fraction expansion,

π ≈ 22/7
or even my beloved Milü

π ≈ 355/113
Yet, except for particular cases, the divisions by 7 and 113 involved in the latter expressions are likely to be more complex than the “multiply by 14, divide by 100”, because the division by 100 in the decimal system is a simple shift by two digits.
To wit, if we have computed 14·n for the first approximation, it would be simpler to approximate n·π as

3·n + 14·n·(10⁻² + 10⁻⁴)
than with the 22/7 expansion, since the last term is obtained simply by shifting by two digits the already computed division by 100. Even 3·n + 14·n·(10⁻² + 10⁻⁴ + 10⁻⁵) (more accurate, but still not as much as 355/113) would be faster than the full n/7 contribution.
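To make the accuracy side of this trade-off concrete, here's a small comparison of the relative errors of the approximants mentioned so far (a sketch; the labels are mine):

```python
import math

# per-unit values of the shift-and-add stages, plus the two classic fractions
approximants = {
    "3 + 14 shifted by 2 and 4":    3.1414,
    "... plus 14 shifted by 5":     3.14154,
    "... plus half-14 shifted by 5": 3.14161,
    "... with the 3/8 correction":  3.1415925,
    "22/7":                         22 / 7,
    "355/113":                      355 / 113,
}
for label, value in approximants.items():
    print(f"{label:<32} relative error {abs(value / math.pi - 1):.1e}")
```

The fully corrected shift-and-add value edges out even 355/113, at a much lower mental cost per extra digit.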
Sevenths
Incidentally, here's an interesting fact:

1/7 = 14/98
and if we factor the denominator as 98 = 100 − 2 (recognize it yet?)

1/7 = 14/(100 − 2) = (14/100) · (1 + 2·10⁻² + 4·10⁻⁴ + 8·10⁻⁶ + …)
Now, if we compute 22/7 = 3 + 1/7 as discussed above, we get

22/7 = 3 + 14·10⁻² + 28·10⁻⁴ + 56·10⁻⁶ + …
and you may notice that the first two terms are the same as the ones in the mind trick formula 3 + 14·10⁻² + 14·10⁻⁴ + …. Where they start to diverge is on the third term, which is 28·10⁻⁴ here, but (after simplifications) 14·10⁻⁴ in the mind trick formula.
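The geometric flavor of the sevenths expansion is easy to verify numerically (a quick sketch):

```python
# 1/7 = 14/98 = (14/100) / (1 - 2/100): each term is the previous one
# multiplied by 2 and shifted right by two digits: 0.14, 0.0028, 0.000056, ...
term, total = 14e-2, 0.0
partials = []
for _ in range(4):
    total += term
    partials.append(round(total, 10))
    term *= 2e-2
print(partials)
```

Four terms already land within about 2×10⁻⁸ of 1/7.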
This divergence should not surprise, since 22/7 is actually a pretty poor approximation for π. If we try 355/113 = 3 + 16/113, the first terms are a bit more complex to compute:

355/113 = 3 + (16/100) · (1 − 13·10⁻² + 169·10⁻⁴ − …) = 3 + 16·10⁻² − 208·10⁻⁴ + 2704·10⁻⁶ − …
but we can make it easier to compare with the mind trick formula by rearranging the terms as

355/113 ≈ 3 + 14·10⁻² + 16·10⁻⁴ − 7·10⁻⁶ − …
so that the third term would be:

16·10⁻⁴
(compare to the mind trick's 14·10⁻⁴.)
Speed versus accuracy
One of the key points in determining the approach to use to find approximate values of multiples of π is to take into account the accuracy of the multiplier.
As an example, assume we want to compute the circumference of the Earth, knowing that the radius is approximately 6400 km. We want to compute 2·π·6400 or π·12800.
One would be tempted to use the magic algorithm that has been the main topic of discussion of this article, but that would actually be a waste of brainpower in this case, since the approximation in the radius is quite coarse already: there's no need to compute π to particularly high accuracy, and in fact in this case the approximation π ≈ 3.14 is sufficient: the first part is trivial, 3 × 12800 = 38400, and the rest is

0.14 × 12800 = 1792
(the next term, 14 × 12800 shifted by four digits, would be 17.92 km, already too precise) which gives us a circumference of some 40192 km.
The attentive reader who remembers when the meter was defined as the 40-millionth part of the circumference of the Earth will have noticed that we're way over the expected 40000 km, which is due to the fact that the (average) Earth radius is actually less than 6400 km. If we want to redo these computations, this time with 6371 km, we have:

2 × 6371 = 12742, with 3 × 12742 = 38226 and 14 × 12742 = 178388
and leveraging the approximation π ≈ 3.14:

38226 + 1783.88 ≈ 38226 + 1784 = 40010
giving us a circumference of 40010 km, which is still too large (and for the curious: no, using a more accurate value for the radius wouldn't solve the issue, dropping it only to around 40008 km.)
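The two circumference estimates can be double-checked in a couple of lines (the helper name is mine):

```python
def circumference_km(radius_km, pi_approx=3.14):
    # 2 * r * pi, with a deliberately coarse pi to match the mental estimate
    return 2 * radius_km * pi_approx

print(round(circumference_km(6400)))   # estimate with the 6400 km radius
print(round(circumference_km(6371)))   # estimate with the 6371 km radius
```

The coarse radius dominates the error: swapping in a better π moves the result far less than swapping in a better radius.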
Reciprocal problem
Maybe it would be better to work things the other way: assuming we know that the circumference is 40000 km, can we work out a more accurate value of the radius? This means computing 40000/(2·π) = 20000/π. This is actually another one of those cases where the simpler, harder-to-manipulate fractions are easier to use, since their reciprocal is just as simple: 1/π ≈ 7/22.
In our case

20000 × 7/22 = 140000/22 = 70000/11 = 6363.63…
(the division by 11 is easy to do in mind: from 7 it's 6 with remainder 4, from 40 it's 3 with remainder 7, and then it repeats). This is actually a pretty good approximation to the actual Earth radius, which is closer to 6371 km. (Note also that using the actual π here would only give marginally better results, with an integer part of 6366: our approximation is accurate enough.)
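The reciprocal computation via 7/22 can be sketched in exact arithmetic:

```python
from fractions import Fraction
import math

# radius ~ circumference / (2*pi), with 1/pi replaced by 7/22
radius = Fraction(40000, 2) * Fraction(7, 22)   # 20000 * 7/22 = 70000/11
print(float(radius), 20000 / math.pi)
```

The repeating decimal of 70000/11 is exactly the 63-63-63 pattern described above.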
So this got me wondering: could we build an approximant to 1/π that is as easy to handle as the 14-based one and the corresponding algorithm? From a quick look, the answer seems to be … “ish”. We have

1/π = 0.3183098861…
so from a cursory look the best candidate to replace the 14 of the π algorithm is 318, and our shifts will be of three rather than two digits:

1/π ≈ 318·10⁻³ + 318·10⁻⁶ − 8·10⁻⁶ + …

which is considerably less manageable than 14, especially after the second term, because of the three-digit numbers, although it does give us pretty low relative errors right from the start, approximately 10⁻³, 2.5×10⁻⁵ and 4×10⁻⁷ at the first, second and third term.
A more manageable candidate may seem to be 32, since

1/π ≈ 32·10⁻² − 16·10⁻⁴ − 8·10⁻⁵ − 1·10⁻⁵

with relative errors of approximately 5×10⁻³, 3×10⁻⁴, 3×10⁻⁵ and 4×10⁻⁷ respectively for the first, second, third term, and fourth term. The big nuisance of this building block is that it seems to be built primarily around subtraction rather than addition, something which is harder to manage mentally compared to the “add by shift” of 14.
To compare, let's try to compute the same 20000/π with the two approximations.
We have 2 × 318 = 636, so the first two terms of 20000/π using the 318 approximant give

6360 + 6.36 = 6366.36

(compare with the actual result 6366.1977… to show the excellent behavior of this approximation.)
For 32, we have 2 × 32 = 64 and 2 × 16 = 32, but:

20000 × 32 × 10⁻² = 6400

that is:

6400 − 32 = 6368 and then 6368 − 1.6 = 6366.4

and finally:

6366.4 − 0.2 = 6366.2
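Putting numbers on this comparison (a sketch in Python; the exact digits of the two candidate expansions are my reconstruction, an additive ladder built on 318 and a subtractive one built on 32):

```python
import math

# two candidate shift-style approximants for 1/pi, applied to 20000/pi
approx_318 = 20000 * (318e-3 + 318e-6)                # additive, three-digit block
approx_32 = 20000 * (32e-2 - 16e-4 - 8e-5 - 1e-5)     # subtractive, two-digit block
actual = 20000 / math.pi
for label, value in (("318-based", approx_318), ("32-based", approx_32)):
    print(f"{label}: {value:.2f} (actual {actual:.2f})")
```

Both land within a fraction of a kilometer of the true value; the difference is purely in how comfortably the steps fit in one's head.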
Ultimately, the fact that the expressions in this case are uglier and less manageable may just be the sign that computing the reciprocal of π is an inverse problem. Or that I suck at these kinds of things, especially at this hour.