<h1>Integer math and computers</h1>
<p>Oblomov, 2014-07-15 (updated 2015-08-11)</p>
<p>One would assume that doing integer math with computers would be easy.
After all, integer math is, in some sense, the “simplest” form of math;
as Kronecker said:</p>
<blockquote markdown="1"><p>Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist
Menschenwerk</p><div class="translation" markdown="1"><p>The dear God made the integers; everything else is the work
of man</p></div></blockquote>
<p>While in practice this is (almost) always the case, introducing the
extent (and particularly the limitations) to which integer math is easy
(or, in fact, ‘doable’) is the first necessary step to understanding,
later in this series of articles, some of the limitations of fixed-point
math.</p>
<p>We start from the basics: since we are assuming a binary computer, we
know that <em>n</em> bits can represent 2^<em>n</em> distinct values. So an 8-bit byte
can represent 256 distinct values, a 16-bit word can represent 65536
distinct values, a 32-bit word can represent 4,294,967,296, and a 64-bit
word a whopping 18,446,744,073,709,551,616, over 18 (short-scale)
quintillion. Of course the question now is: <em>which ones</em>?</p>
<h2 id="unsigned">Representation of unsigned integers</h2>
<p>Let's consider a standard 8-bit byte. The most obvious and natural
<em>interpretation</em> of a byte (i.e. 8 consecutive bits) is to interpret it
as a (non-negative, or unsigned) integer, just like we would interpret a
sequence of consecutive (decimal) digits. So binary <code>00000000</code> would be 0,
binary <code>00000001</code> would be (decimal) 1, binary <code>00000010</code> would be (decimal)
2, binary <code>00000011</code> would be (decimal) 3, and so on, up to (binary)
<code>11111111</code> which would be (decimal) 255. From 0 to 255 inclusive, that's
exactly the 256 values that can be represented by a byte (read as an
unsigned integer).</p>
<p>Unsigned integers can be trivially promoted to wider words (e.g. from
8-bit byte to 16-bit word, preserving the numerical value) by padding
with zeroes.</p>
<p>This is so simple that it's practically boring. Why are we even going
through this? Because things are <em>not</em> that simple once you move beyond
unsigned integers. But before we do that, I would like to point out that
things aren't that simple even if we're just sticking to non-negative
integers. In terms of <em>representation</em> of the numbers, we're pretty
cozy: <em>n</em> bits can represent all non-negative integers from 0 to
2^<em>n</em>-1, but what happens when you start doing actual math on them?</p>
<h2 id="modular">Modulo and saturation</h2>
<p>Let's stick to just addition and multiplication at first, which are the
simplest and best defined operations on integers. Of course, the trouble
is that if you are adding or multiplying two numbers between 0 and 255,
the result might be <em>bigger</em> than 255. For example, you might need to do
100 + 200, or 128*2, or even just 255+1, and the result is not
representable in an 8-bit byte. In general, if you are operating on
<em>n</em>-bits numbers, the result might not be representable in <em>n</em> bits.</p>
<p>So what does the computer do when this kind of <em>overflow</em> happens? Most
programmers will now chime in and say: well <em>duh</em>, it wraps! If you're
doing 255+1, you will just get 0 as a result. If you're doing 128*2,
you'll just get 0. If you're doing 100+200 you'll just get 44.</p>
<p>While this answer is not wrong, it's not right either.</p>
<p>Yes, it's true that the most common central processing units we're used
to nowadays use <em>modular arithmetic</em>, so that operations that would
overflow <em>n</em>-bits words are simply computed modulo 2^<em>n</em> (which is easy
to implement, since it just means discarding higher bits, optionally
using some specific flag to denote that a carry got lost along the way).</p>
<p>However, this is not the only possibility. For example, specialized DSP
(Digital Signal Processing) hardware normally operates with <em>saturation</em>
arithmetic: overflowing values are <em>clamped</em> to the maximum
representable value. 255+1 gives 255. 128*2 gives 255. 100+200 gives
255.</p>
<p>Programmers used to the standard modular arithmetic can find saturation
arithmetic ‘odd’ or ‘irrational’ or ‘misbehaving’. In particular, in
saturation arithmetic (algebraic) addition is not associative, and
multiplication does not distribute over (algebraic) addition.</p>
<p>Sticking to our 8-bit case, for example, with saturation arithmetic (100
+ 200) - 100 results in 255 - 100 = 155, while 100 + (200 - 100) results
in 100 + 100 = 200, which is the correct result. Similarly, still with
saturation arithmetic, (200*2) - (100*2) results in 255 - 200 = 55,
while (200 - 100)*2 results in 100*2 = 200. By contrast, with modular
arithmetic, both expressions in each case give the correct result.</p>
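<p>These behaviors are easy to model in a few lines of Python; the helper names below are mine and purely illustrative, not any standard API:</p>

```python
# Illustrative 8-bit helpers: modular arithmetic keeps only the low
# 8 bits, saturation clamps results to the 0..255 range.

def add_mod(a, b):
    return (a + b) & 0xFF           # discard carries beyond bit 7

def mul_mod(a, b):
    return (a * b) & 0xFF

def add_sat(a, b):
    return max(0, min(a + b, 255))  # clamp to the representable range

def mul_sat(a, b):
    return max(0, min(a * b, 255))

# Modular wrap-around:
assert add_mod(255, 1) == 0
assert mul_mod(128, 2) == 0
assert add_mod(100, 200) == 44
# Saturation clamps instead:
assert add_sat(255, 1) == 255
assert mul_sat(128, 2) == 255
assert add_sat(100, 200) == 255
# Saturation is not associative: the order of operations matters.
assert add_sat(add_sat(100, 200), -100) == 155
assert add_sat(100, add_sat(200, -100)) == 200
```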
<p>So, <em>when the final result is representable</em>, modular arithmetic gives
the correct result in the case of a static sequence of operations.
However, when the final result is <em>not</em> representable, saturation
arithmetic returns values that are closer to the correct one than
modular arithmetic: 300 is clamped to 255, in contrast to the severely
underestimated 44.</p>
<p>Being as close as possible to the correct results is an extremely
important property not just for the final result, but also for
intermediate results, particularly in the cases where the sequence of
operations is not static, but depends on the magnitude of the values
(for example, software implementations of low- or high-pass filters).</p>
<p>In these applications (of which DSP, be it audio, video or image
processing, is probably the most important one) both modular and
saturation arithmetic might give the wrong result, but the modular
result will usually be significantly worse than that obtained by
saturation. For example, modular arithmetic might miscompute a frequency
of 300Hz as 44Hz instead of 255Hz, and with a threshold of 100Hz this
would lead to attenuation of a signal that should have passed unchanged,
or vice versa. Amplifying an audio signal beyond the representable
values could result in silence with modular arithmetic, but it will just
produce the loudest possible sound with saturation.</p>
<p>We mentioned that promotion of unsigned values to wider data types is
trivial. What about demotion? For example, knowing that your original
values are stored as 8-bit bytes and that the final result has to be
again stored as an 8-bit byte, a programmer might consider operating
with 16-bit (or wider) words to (try and) <a href="http://wok.oblomov.eu/tag/bias/#overflow">prevent overflow</a>
during computations. However, when the final result has to be demoted
back to an 8-bit byte, a choice has to be made: should we just
discard the higher bits (which is what modular arithmetic does), or
return the highest representable value when any higher bits are set
(which is what saturation arithmetic does)? Again, this is a choice for
which there is no “correct” answer, but only answers that depend on the
application.</p>
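<p>The two demotion choices can be sketched as follows (the helper names are mine, purely illustrative):</p>

```python
# Two ways to demote a wider intermediate result to an 8-bit byte.

def demote_mod(x):
    return x & 0xFF               # discard the higher bits

def demote_sat(x):
    return 255 if x > 255 else x  # clamp when any higher bit is set

assert demote_mod(300) == 44      # modular demotion
assert demote_sat(300) == 255     # saturating demotion
assert demote_mod(200) == demote_sat(200) == 200  # representable: no difference
```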
<p>To conclude, the behavior that programmers used to standard modular
arithmetic might find ‘wrong’ is actually preferable in some
applications (which is why it has been supported in hardware in the
multimedia and vector extensions (MMX and onwards) of the x86
architecture).</p>
<h2 id="overflow">Thou shalt not overflow</h2>
<p>Of course, the real problem in the examples presented in the previous
section is that the data type used (e.g. 8-bit unsigned integers)
was unable to represent intermediate or final results.</p>
<p>One of the most important things programmers should consider, maybe <em>the</em>
most important, when discussing doing math on the computer, is precisely
choosing the correct data type.</p>
<p>For integers, this means choosing a data type that can represent
correctly not only the starting values and the final results, but also
the intermediate values. If your data fits in 8 bits, then you want to
use at least 16 bits. If it fits in 16 bits (but not 8), then you want
to use at least 32, and so on.</p>
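<p>As a quick sketch of why the wider intermediate type matters, consider computing <em>(a*b)/c</em> where the product overflows 8 bits but the final result fits (Python's unbounded integers stand in for a 16-bit word here):</p>

```python
a, b, c = 200, 2, 4

# With a wide enough intermediate, the result is correct:
assert (a * b) // c == 100

# Forcing the intermediate product into 8 bits loses the high bits:
assert ((a * b) & 0xFF) // c == 36   # 400 mod 256 = 144, and 144/4 = 36
```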
<p>Having a good understanding of the possible behaviors in case of
overflow is extremely important to write robust code, but the main point
is that <em>you should not overflow</em>.</p>
<h2 id="signed">Relative numbers: welcome to hell</h2>
<p>In case you are still of the opinion that integer math is easy, don't
worry. We still haven't gotten into the best part, which is <em>how to deal
with relative numbers</em>, or, as the layman would call them, <em>signed</em>
integers.</p>
<p>As we mentioned above, the ‘natural’ interpretation of <em>n</em> bits is to
read them as natural, non-negative, unsigned integers, ranging from 0 to
2^<em>n</em>-1. However, let's be honest here, non-negative integers are pretty
limiting. We would at least like to have the possibility to also
specify <em>negative</em> numbers. And here the fun starts.</p>
<p>Although there is no official universal standard for the representation
of relative numbers (signed integers) on computers, there is undoubtedly
a <em>dominating</em> convention, which is the one programmers are nowadays
used to: two's complement. However, this is just one of <em>many</em> (no less
than four) possible representations:</p>
<ul>
<li>sign bit and mantissa;</li>
<li>ones' complement;</li>
<li>two's complement;</li>
<li>offset binary aka biased representation.</li>
</ul>
<h3 id="symzero">Symmetry, zeroes and self-negatives</h3>
<p>One of the issues with the representation of signed integers in binary
computers is that binary words can always represent an <em>even</em> number of
values, but a symmetrical amount of positive and negative integers, plus
the value 0, is odd. Hence, when choosing the representation, one has to
choose between either:</p>
<ul>
<li>having one (usually negative) non-zero number with no representable
opposite, or</li>
<li>having two representations of the value zero (essentially, positive
and negative zero).</li>
</ul>
<p>Of the four signed number representations enumerated above, the sign bit
and ones' complement representations have a signed zero, but each
non-zero number has a representable opposite, while two's complement and
bias only have one value for zero, but have at least one non-zero number
that has no representable opposite. (Offset binary is actually very
generic and can have significant asymmetries in the ranges of
representable numbers.)</p>
<h4 id="negzero">Having a negative zero</h4>
<p>The biggest issue with having a negative zero is that it violates a
commonly held assumption, which is that there is a bijective
correspondence between representable numerical values and their
representation, since both positive and negative 0 have the same
numerical value (0) but have distinct bit patterns.</p>
<p>Where this presents the biggest issue is in the comparison of two words.
When comparing words for equality, we are now posed a conundrum: should
they be compared by their <em>value</em>, or should they be compared by their
<em>representation</em>? If <code>a = -0</code>, would <code>a</code> satisfy <code>a == 0</code>? Would it
satisfy <code>a < 0</code>? Would it satisfy both? The obvious answer would be that
+0 and -0 should compare equal (and just that), but how do you tell them
apart then? Is it even worth it being able to tell them apart?</p>
<p>And finally, is the symmetry worth the loss of a representable value?
(2^<em>n</em> bit patterns, but two of them have the same value, so e.g. with
8-bit bytes we have 256 patterns to represent 255 values instead of the
usual 256.)</p>
<h4 id="nosym">Having non-symmetric opposites</h4>
<p>On the other hand, if we want to keep the bijectivity between value and
representation, we will lose the symmetry of negation. This means, in
particular, that knowing that a number <code>a</code> satisfies <code>a < 0</code> we cannot
deduce that <code>-a > 0</code>, or conversely, depending on whether the value with
no opposite is positive or negative.</p>
<p>Consider for example the case of the standard <a href="http://wok.oblomov.eu/tag/bias/#twocomp">two's complement
representation</a> in the case of 8-bit bytes: the largest
representable positive value is 127, while the largest (in magnitude)
representable negative value is -128. When computing opposites, all
values between -127 and 127 have their opposite (which is the one we
would expect algebraically), but negating -128 gives (again) -128 which,
while algebraically wrong, is at least <em>consistent</em> with <a href="http://wok.oblomov.eu/tag/bias/#modular">modular
arithmetic</a>, where adding -128 and -128 actually gives 0.</p>
<h3 id="abriefexpositionoftherepresentations">A brief exposition of the representations</h3>
<p>Let's now see the representations in some more detail.</p>
<h4 id="signbitandmantissarepresentation">Sign bit and mantissa representation</h4>
<p>The conceptually simplest approach to represent signed integers, given a
fixed number of digits, is to reserve one bit to indicate the sign, and
leave the other <em>n</em>-1 bits to indicate the mantissa, i.e. the magnitude, i.e. the
absolute value of the number. By convention, the sign bit is usually
taken to be the most significant bit, and (again by convention) it is
taken as 0 to indicate a positive number and 1 to indicate a negative
number.</p>
<p>With this representation, two opposite values have the same
representation <em>except for the most significant bit</em>. So, for example,
assuming our usual 8-bit byte, 1 would be represented as <code>00000001</code>, while
-1 would be represented as <code>10000001</code>.</p>
<p>In this representation, the highest <em>positive</em> value that can be
represented with <em>n</em> bits is 2^{<em>n</em>-1} - 1, and the lowest (largest in
magnitude) <em>negative</em> value that can be represented is its opposite. For
example, with an 8-bit byte the largest <em>positive</em> integer is 127, i.e.
<code>01111111</code>, and the largest (in magnitude) <em>negative</em> integer is its
opposite -127, i.e. <code>11111111</code>.</p>
<p>As mentioned, one of the downsides of this representation is that it
has both positive and negative zero, respectively represented by the
<code>00000000</code> and <code>10000000</code> bit patterns.</p>
<p>While the sign bit and mantissa representation is conceptually obvious,
its hardware implementation is more cumbersome than it might seem at
first glance, since operations need to explicitly take the operands' signs
into account. Similarly, sign-extension (for example, promoting an 8-bit
byte to a 16-bit word preserving the numerical value) needs to ‘clear
up’ the sign bit in the smaller-size representation before replicating
it as the sign bit of the larger-size representation.</p>
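<p>A minimal sketch of 8-bit sign-and-magnitude encoding and decoding (the helper names are mine, purely illustrative):</p>

```python
N = 8  # word size in bits

def encode_sm(v):
    assert -(2**(N-1) - 1) <= v <= 2**(N-1) - 1
    sign = 1 if v < 0 else 0
    return (sign << (N - 1)) | abs(v)   # top bit: sign; low 7 bits: magnitude

def decode_sm(bits):
    mag = bits & (2**(N-1) - 1)
    return -mag if bits >> (N - 1) else mag

assert encode_sm(1)  == 0b00000001
assert encode_sm(-1) == 0b10000001
assert decode_sm(0b11111111) == -127    # largest-magnitude negative value
# Two distinct bit patterns both decode to zero:
assert decode_sm(0b00000000) == decode_sm(0b10000000) == 0
```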
<h4 id="onescomplementrepresentation">Ones' complement representation</h4>
<p>A more efficient approach is offered by ones' complement representation,
where negation maps to ones' complement, i.e. bit-flipping: the opposite
of any given number is obtained as the bitwise NOT operation of the
representation of the original value. For example, with 8-bit bytes, the
value 1 is as usual represented as <code>00000001</code>, while -1 is represented as
<code>11111110</code>.</p>
<p>The range of representable numbers is the same as in the sign bit and
mantissa representation, so that, for example, 8-bit bytes range from
-127 (<code>10000000</code>) to 127 (<code>01111111</code>), and we have both positive zero
(<code>00000000</code>) and negative zero (<code>11111111</code>).</p>
<p>(Algebraic) addition in modular arithmetic with this representation is
trivial to implement in hardware, with the only caveat that carries and
borrows ‘wrap around’.</p>
<p>As in the sign-bit case, it is possible to tell if a number is positive
or negative by looking at the most-significant bit, and 0 indicates a
positive number, while 1 indicates a negative number (whose absolute
value can then be obtained by flipping all the bits). Sign-extending a
value can be done by simply propagating the sign bit of the smaller-size
representation to <em>all the additional bits</em> in the larger-size
representation.</p>
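<p>Ones' complement negation and sign extension can be sketched as follows (helper names mine; Python integers stand in for fixed-width words):</p>

```python
def neg_ones(bits, n=8):
    return bits ^ ((1 << n) - 1)            # negation = flip all n bits

def decode_ones(bits, n=8):
    if bits >> (n - 1):                     # sign bit set: negative
        return -(bits ^ ((1 << n) - 1))
    return bits

def sext_ones(bits, n=8, m=16):
    if bits >> (n - 1):                     # propagate the sign bit
        bits |= ((1 << (m - n)) - 1) << n   # ... to all the additional bits
    return bits

assert neg_ones(0b00000001) == 0b11111110        # -1
assert neg_ones(0b00000000) == 0b11111111        # negative zero
assert decode_ones(sext_ones(0b11111110), 16) == -1
```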
<h4 id="twocomp">Two's complement</h4>
<p>While ones' complement representation is practical and relatively easy
to implement in hardware, it is not the simplest, and it's afflicted by
the infamous ‘negative zero’ issue.</p>
<p>Because of this, two's complement representation, which is simpler to
implement and has no negative zero, has gained much wider adoption. It
also has the benefit of ‘integrating’ rather well with the equally
common modular arithmetic.</p>
<p>In two's complement representation, the opposite of an <em>n</em>-bit value is
obtained by subtracting it from 2^<em>n</em> or, equivalently, by flipping
the bits and then adding 1, discarding any carries beyond the <em>n</em>-th
bit. Using our usual 8-bit bytes as example, 1 will as usual be
<code>00000001</code>, while -1 will be <code>11111111</code>.</p>
<p>The largest positive representable number with <em>n</em> bits is still
2^{<em>n</em>-1}-1, but the largest (in magnitude) <em>negative</em> representable
number is now -2^{<em>n</em>-1}, and it's represented by a high-bit set to 1
and all other bits set to 0. For example, with 8-bit bytes the largest
positive number is 127, represented by <code>01111111</code>, whose opposite -127 is
represented by <code>10000001</code>, while the largest (in magnitude) negative
number is -128, represented by <code>10000000</code>.</p>
<p>In two's complement representation, there is no negative zero and the
only representation for 0 is given by all bits set to 0. However, as
discussed <a href="http://wok.oblomov.eu/tag/bias/#nosym">earlier</a>, this leads to a negative value whose
opposite is the value itself, since the representation of largest (in
magnitude) negative representable number is invariant by two's
complement.</p>
<p>As in the other two representations, the most significant bit can be
checked to see if a number is positive or negative. As in the ones'
complement case, sign-extension is done trivially by propagating the
sign bit of the smaller-size value to all other bits of the larger-size
value.</p>
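<p>Two's complement negation, including the -128 corner case, can be sketched as (helper name mine, purely illustrative):</p>

```python
def neg_two(bits, n=8):
    mask = (1 << n) - 1
    return ((bits ^ mask) + 1) & mask   # flip the bits, add 1, drop carries

assert neg_two(0b00000001) == 0b11111111   # -1
assert neg_two(0b01111111) == 0b10000001   # -127
assert neg_two(0b00000000) == 0b00000000   # zero is its own opposite
assert neg_two(0b10000000) == 0b10000000   # ... and so is -128
# Consistent with modular arithmetic: -128 + -128 wraps around to 0.
assert (0b10000000 + 0b10000000) & 0xFF == 0
```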
<h4 id="biased">Offset binary</h4>
<p>Offset binary (or biased representation) is quite different from the
other representations, but it has some very useful properties that have
led to its adoption in a number of schemes (most notably the IEEE-754
standard for floating-point representation, where it's used to encode
the exponent, and some DSP systems).</p>
<p>Before getting into the technical details of offset binary, we look at a
possible <em>motivation</em> for its inception. The attentive reader will have
noticed that all the previously mentioned representations of signed
integers have one interesting property in common: they <em>violate</em> the
<em>natural ordering</em> of the representations.</p>
<p>Since the most significant bit is taken as the sign bit, and negative
numbers have a most significant bit set to one, natural ordering (by bit
patterns) puts them <em>after</em> the positive numbers, whose most significant
bit is set to 0. Additionally, in the sign bit and mantissa
representation, the ordering of negative numbers is reversed with
respect to the natural ordering of their representation. This means that
when comparing numbers it is important to know if they are signed or
unsigned (and if signed, which representation) to get the ordering
right. The biased representation is one way (and probably the most
straightforward way) to circumvent this.</p>
<p>The basic idea in biased representation or offset binary is to ‘shift’
the numerical value of all representations by a given amount (the bias
or offset), so that the smallest natural representation (all bits 0)
actually evaluates to the smallest representable number, and the largest
natural representation (all bits 1) evaluates to the largest
representable number.</p>
<p>The bias is the value that is added to the (representable) value to
obtain the representation, and subtracted from the representation to
obtain the represented value. The minimum representable number is then
the opposite of the bias. Of course, the range of representable numbers
doesn't change: if your data type can only represent 256 values, you can
only choose <em>which</em> 256 values, as long as they are consecutive
integers.</p>
<p>The bias in an offset binary representation can be chosen arbitrarily,
but there is a ‘natural’ choice for <em>n</em>-bit words, which is 2^{<em>n</em>-1}:
halfway through the natural representation. For example, with 8-bit
bytes (256 values) the natural choice for the bias is 128, leading to a
representable range of integers from -128 to 127, which looks distinctly
similar to the one that can be expressed in <a href="http://wok.oblomov.eu/tag/bias/#twocomp">two's
complement representation</a>.</p>
<p>In fact, the 2^{<em>n</em>-1} bias leads to a representation which is
<em>equivalent</em> to the two's complement representation, except for a
flipped sign bit, solving the famous signed versus unsigned comparison
issue mentioned at the beginning of this subsection.</p>
<p>As an example, consider the usual 8-bit bytes with a bias of 128: then,
the numerical values 1, 0 and -1 would be represented by the ‘natural’
representation of the values 129, 128 and 127 respectively, i.e.
<code>10000001</code>, <code>10000000</code> and <code>01111111</code>: flipping the most significant bits, we
get <code>00000001</code>, <code>00000000</code> and <code>11111111</code> which are the two's complement
representation of 1, 0 and -1.</p>
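<p>The same example, sketched in Python with the ‘natural’ bias of 128 for 8-bit words (helper names mine):</p>

```python
BIAS = 128

def encode_biased(v):
    assert -128 <= v <= 127
    return v + BIAS                 # representation = value + bias

def decode_biased(bits):
    return bits - BIAS              # value = representation - bias

assert encode_biased(1)  == 0b10000001
assert encode_biased(0)  == 0b10000000
assert encode_biased(-1) == 0b01111111
# Flipping the most significant bit gives the two's complement pattern:
assert encode_biased(-1) ^ 0b10000000 == 0b11111111
# Natural (unsigned) ordering of the representations matches value order:
assert encode_biased(-128) < encode_biased(0) < encode_biased(127)
```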
<p>Of course, the ‘natural’ bias is not the only option: it is possible to
have arbitrary offsets, which makes offset binary extremely useful in
applications where the range of possible values is strongly asymmetrical
around zero, or where it is far from zero. Of course, such arbitrary
biases are rarely supported in hardware, so operations on offset binary
usually requires software implementations of even the most common
operations, with a consequent performance hit. Still, assuming the
hardware uses modular arithmetic, offset binary is at least trivial to
implement for the basic operations.</p>
<p>One situation in which offset binary doesn't play particularly well is
that of sign-extension, which was trivial in ones' and two's complement
representations. The biggest issue in the case of offset binary is,
obviously, that the offsets in the smaller and larger data types are
likely going to be different, although usually not <em>arbitrarily</em>
different (biases are often related to the size of the data type).</p>
<p>At least in the case of the ‘natural’ bias (in both the smaller and
larger data types), sign extension can be implemented straightforwardly
by going through the two's complement equivalent representation: flip
the most significant bit of the smaller data type, propagate it to all
the remaining bits of the larger data type, and then flip the most
significant bit of the larger data type. (In other words: convert to
two's complement, sign extend that, convert back to offset binary with
the ‘natural’ bias.)</p>
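<p>The round trip through two's complement can be sketched as follows (helper name mine; assumes the ‘natural’ bias in both the source and destination widths):</p>

```python
def sext_biased(bits, n=8, m=16):
    tc = bits ^ (1 << (n - 1))              # flip MSB: to two's complement
    if tc >> (n - 1):                       # propagate the sign bit
        tc |= ((1 << (m - n)) - 1) << n
    return tc ^ (1 << (m - 1))              # flip MSB back: to offset binary

# -1 is 127 with bias 128; with bias 32768 it should become 32767.
assert sext_biased(127) == 32767
# 1 is 129 with bias 128; with bias 32768 it should become 32769.
assert sext_biased(129) == 32769
```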
<h2 id="whatdoesabitpatternmean">What does a bit pattern mean?</h2>
<p>We're now nearing the end of our discussion on integer math on the
computers. Before getting into the messy details of the first common
non-integer operation (division), I would like to ask the following
question: what do you get if you do <code>10100101</code> + <code>01111111</code>?</p>
<!-- TODO show all 10 possible cases: signed, all unsigned
representations, modular vs saturation arithmetic -->
<h2 id="signdiv">Divide and despair</h2>
<p>To conclude our exposition of the joys of integer math on the computers,
we now discuss the beauty of integer division and the related modulus
operation.</p>
<p>Since division of the integer <em>e</em> by the integer <em>o</em> only gives an
integer (mathematically) if <em>e</em> is a multiple of <em>o</em>, the concept of
‘integer division’ has arisen in computer science as a way to obtain an
integer <em>d</em> from <em>e/o</em> even when <em>o</em> does not divide <em>e</em>.</p>
<h3 id="thesimplecase">The simple case</h3>
<p>Let's start by assuming that <em>e</em> is non-negative and <em>o</em> is (strictly)
positive. In this case, integer division gives the largest integer <em>d</em>
such that <em>d*o ≤ e</em>. In other words, the result of the division of <em>e</em>
by <em>o</em> is <em>truncated</em>, i.e. rounded down, however small the
remainder might be: 3/5=0 and 5/3=1 with integer division, even though
in the latter case we would likely have preferred a value of 2 (think of
2047/1024, for example).</p>
<p>The upside of this choice is that it's trivial to implement other forms
of division (that round up, or to the nearest number, for example), by
simply adding appropriate correcting factors to the dividend. For
example, round-up division is achieved by adding the divisor diminished
by a unit to the dividend: integer division <em>(e + o - 1)/o</em> will give
you <em>e/o</em>, rounded <em>up</em>: (3+5-1)/5 = 7/5 = 1, and (5 + 3 - 1)/3 = 7/3 =
2.</p>
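<p>The correcting-factor trick can be sketched directly (helper name mine; valid for non-negative <em>e</em> and strictly positive <em>o</em>):</p>

```python
def div_round_up(e, o):
    return (e + o - 1) // o   # round-up division built on truncation

assert 3 // 5 == 0 and 5 // 3 == 1   # plain integer division truncates
assert div_round_up(3, 5) == 1
assert div_round_up(5, 3) == 2
assert div_round_up(6, 3) == 2       # exact quotients are unaffected
assert div_round_up(2047, 1024) == 2
```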
<h3 id="divisionbyzero">Division by zero</h3>
<p>What happens when <em>o</em> is zero? Mathematically, division by zero is not
defined (although in some contexts where infinity is considered a valid
value, it may give infinity as a result, as long as the dividend is
non-zero). In hardware, anything can happen.</p>
<p>There's hardware that flags the error. There's hardware that produces
bogus results without any chance of knowing that a division by zero
happened. There's hardware that produces consistent results (always
zero, or the maximum representable value), flagging or not flagging the
situation.</p>
<p>‘Luckily’, most programming languages treat a division by zero as
an error or exception, which by default causes program termination. Of course,
this means that to write robust code it's necessary to sprinkle the code
with conditionals to check that divisions will successfully complete.</p>
<h3 id="negativenumbers">Negative numbers</h3>
<p>If the undefined division by zero may not be considered a big issue per
se, the situation is <em>much</em> more interesting when either of the operands
of the division is a negative number.</p>
<p>First of all, one would be led to think that at least the sign of the
result would be well defined: negative if the operands have opposite
sign, positive otherwise. But this is not the case for the widespread
<a href="http://wok.oblomov.eu/tag/bias/#twocomp">two's complement representation</a> with modular arithmetic,
where the division of two negative numbers can give a negative number:
of course, we're talking about the corner case of the largest (in
magnitude) negative number, which when divided by -1 returns itself,
since its opposite is not representable.</p>
<p>But even when the sign is correct, the result of integer division is not
uniquely determined: some implementations round down, so that -7/5 = -2,
while others round towards zero, so that -7/5 = -1: both choices are
consistent with the positive integer division, but the results are
obviously different, which can introduce subtle but annoying bugs when
porting code across different languages or hardware.</p>
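<p>The two conventions can be contrasted directly. Python's <code>//</code> rounds down (towards negative infinity), while division that rounds towards zero, as in C, is emulated below (the helper name is mine):</p>

```python
def div_trunc(e, o):
    q = abs(e) // abs(o)
    return -q if (e < 0) != (o < 0) else q   # restore the sign

assert -7 // 5 == -2                    # rounding down
assert div_trunc(-7, 5) == -1           # rounding towards zero
assert 7 // 5 == div_trunc(7, 5) == 1   # positive operands: no difference
```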
<h3 id="modulo">Modulo</h3>
<p>The modulo operation is perfectly well defined for positive integers,
as the remainder of (integer) division: the quotient <em>d</em> and the remainder
<em>r</em> of the (integer) division <em>e/o</em> are (non-negative) integers such that
<em>e = o*d + r</em> and <em>r < o</em>.</p>
<p>Does the same hold true when either <em>e</em> or <em>o</em> are negative? It depends
on the convention adopted by the language and/or hardware. While for
negative integer division there are ‘only’ two standards, for the modulo
operation there are <em>three</em>:</p>
<ul>
<li>a result with the sign of the dividend;</li>
<li>a result with the sign of the divisor;</li>
<li>a result that is always non-negative.</li>
</ul>
<p>In the first two cases, what it means is that, for example, -3 % 5 will
have the opposite sign of 3 % -5; hence, if one would satisfy the
quotient/<wbr>remainder equation (which depends on whether integer
division rounds down or towards zero), the other obviously won't. In the
third case, the equation would only be satisfied if the division rounds
down, but not if the division rounds towards zero.</p>
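<p>The three conventions can be sketched with Python built-ins (helper names mine; <code>math.fmod</code> follows the sign of the dividend, Python's <code>%</code> the sign of the divisor):</p>

```python
import math

def mod_dividend(e, o):
    return int(math.fmod(e, o))   # sign of the dividend (as in C's %)

def mod_divisor(e, o):
    return e % o                  # sign of the divisor (Python's %)

def mod_nonneg(e, o):
    return e % abs(o)             # always non-negative

assert mod_dividend(-3, 5) == -3
assert mod_divisor(-3, 5)  == 2
assert mod_nonneg(-3, 5)   == 2
assert mod_dividend(3, -5) == 3
assert mod_divisor(3, -5)  == -2
assert mod_nonneg(3, -5)   == 3
```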
<p>This could lead someone to think that the best choice would be a
rounding-down division with an always non-negative modulo. Too bad that
rounding-down division suffers from the problem that <em>-(e/o) ≠ (-e)/o</em>.</p>
<h2 id="summary">Summary</h2>
<p>Integer math on a computer is simple only as far as you never think
about dealing with corner cases, which you <em>should</em> if you want to write
robust, reliable code. With integer math, this is the minimum of what
you should be aware of:</p>
<ul>
<li><a href="http://wok.oblomov.eu/tag/bias/#modular">modular versus saturation arithmetic</a>;</li>
<li><a href="http://wok.oblomov.eu/tag/bias/#overflow">the importance of avoiding overflow</a>;</li>
<li><a href="http://wok.oblomov.eu/tag/bias/#signed">multiple signed integer representations</a>;</li>
<li><a href="http://wok.oblomov.eu/tag/bias/#signdiv">division and modulo by zero or with negative numbers</a>.</li>
</ul>