The question

How many digits do you need, in base 10, to represent a given (binary) number?

A premise

The C++ standard defines a trait for numerical datatypes that describes “the number of base-10 digits that can be represented by a given type without change”: std::numeric_limits&lt;T&gt;::digits10.

What this means is that all numbers with at most that many digits in base 10 will be representable in the given type. For example, 8-bit integers can represent all numbers from 0 to 99, but not all numbers from 0 to 999, so their digits10 value will be 2.

For integer types, the value can be obtained by taking the number of bits (binary digits) used by the type, dividing by log2(10) (or multiplying by log10(2), which is the same thing), and taking the integer part of the result.

This works because with n bits you can represent 2^n values, and with d digits you can represent 10^d values, and the condition for digits10 is that d should be such that 10^d ≤ 2^n. By taking the logarithm on both sides we get d ≤ log10(2^n) = n·log10(2), and since d must be an integer, we get the formula d = ⌊n·log10(2)⌋.

(Technically, this is still a bit of a simplification, since actually the highest representable number with n bits is 2^n − 1, and that's still only for unsigned types; for signed ones things get more complicated, but that's beyond our scope here.)

The answer

What we want is in some sense the complement of digits10, since we want to ensure that our number of (decimal) digits will be sufficient to represent all numbers of the binary type. Following the same line of reasoning as above, we want d such that 2^n ≤ 10^d, and thus, skipping a few steps, d = ⌈n·log10(2)⌉, at least assuming unsigned integer types.

We're looking for the simplest formula that gives us this result. With C++, we could actually just use digits10 plus one, but we want something independent, for example because we want to use this with C (or any other language that doesn't have a digits10 equivalent).

The first thing we want to do is avoid the logarithm. We could compute the actual value, or at least a value with sufficient precision, but in fact we'll avoid doing that, and instead remember that 2^10 is pretty close to 10^3, which puts the logarithm in question in the 3/10 ballpark, an approximation that is good enough for the first several powers of 2^10.

With this knowledge, we can approximate d ≈ ⌈n·3/10⌉. In most programming languages integer division with positive operands returns the floor rather than the ceiling, but it can be turned into something that returns the ceiling by adding to the numerator one less than the denominator¹. So:

d = ⌊(n·3 + 9)/10⌋

is the formula we're looking for. In a language like C, where the size of types is given in bytes, that would become something like

#define PRINT_SIZE(type) ((sizeof(type)*CHAR_BIT*3+9)/10)

where we're assuming 8 bits per byte (adapt as needed if you're on an insane architecture).

Limitations

The C expression provided above isn't universal. It does better than the even more aggressive approximation sizeof(type)*CHAR_BIT/3, which for example fails for 8-bit types (giving 2 instead of 3) and overestimates the result for 64-bit types (giving 21 instead of 20), but it has limits of its own.

It works for most standard signed data types too, because the number of base-10 digits needed to represent them is almost always the same as for their unsigned equivalents, but for example it doesn't give an exact result for 64-bit data types (the signed one needs one digit less in this case).

Moreover, it actually starts breaking down for very large integers, because the 3/10 approximation of log10(2) ≈ 0.30103 commits a relative error of about 0.3%, which starts becoming significant at 256 bits or higher: there the formula predicts 77 digits, but 78 are actually needed.

We can extend this by taking more digits to approximate the logarithm. For example

#define PRINT_SIZE(type) ((sizeof(type)*CHAR_BIT*301+999)/1000)

doesn't break down until 4096 bits, at which point it misses one digit again. On the other hand

#define PRINT_SIZE(type) ((sizeof(type)*CHAR_BIT*30103+99999)/100000)

can get us reasonably high (in fact, by a quick check it seems this formula should work correctly even for types with 2^(2^16) = 2^65536 bits, if not more). It also has a nice symmetry to it, even though I guess it would overflow on machines with smaller word sizes (but then again, you probably wouldn't need it there anyway).


  1. If a, b are non-negative integers with b &gt; 0, then ⌊(a + b − 1)/b⌋ = ⌈a/b⌉: (1) if a is a multiple of b, then adding b − 1 doesn't reach the next multiple, and thus on both sides we have a/b (which is an integer), and (2) if a is not a multiple of b, adding b − 1 will overtake exactly one multiple of b. More formally, we can write a = kb + c where k, c are non-negative integers, and c &lt; b (c = 0 if a is a multiple, and c &gt; 0 otherwise). Define s = sign(c), i.e. s = 0 if c = 0 and s = 1 otherwise. Then ⌊(a + b − 1)/b⌋ = ⌊(kb + c + b − 1)/b⌋ = (k + 1) + ⌊(c − 1)/b⌋ = k + 1 − (1 − s) = k + s, and ⌈a/b⌉ = ⌈(kb + c)/b⌉ = k + ⌈c/b⌉ = k + s. ↩