Today, at least one major computer hardware and software company is seriously considering another computer arithmetic paradigm—namely, interval arithmetic. This is slower than floating point, so in that sense it presents an issue similar to the one faced in moving from fixed-point to floating-point arithmetic in the 1950s. However, today we have ample computing power to deal with this issue. What is the advantage of interval arithmetic relative to floating point? Mainly, it is an issue of reliability. In floating-point arithmetic, if we add two numbers, say c = a + b, then even if a and b have exact binary representations, the result c in general will not, so the computation will incur rounding error, which may then continue to propagate. In interval arithmetic, if we add two numbers, we actually add two degenerate intervals, [a,a] + [b,b] = [(a+b),(a+b)]. Then the lower bound of the result is rounded down to (a+b)- and the upper bound rounded up to (a+b)+. In this way, the computed result C = [(a+b)-,(a+b)+] is a very narrow interval that is known to contain the correct result c.
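The outward-rounding step can be sketched in a few lines of Python. A production interval library would switch the hardware rounding mode for each bound; here, as a conservative sketch, we instead widen the rounded-to-nearest sum by one unit in the last place in each direction, which is guaranteed to enclose the true sum. The function name `interval_add` is illustrative, not from any particular library.

```python
import math
from fractions import Fraction

def interval_add(a, b):
    """Add the degenerate intervals [a,a] and [b,b], widening the result
    outward so the mathematically exact sum is enclosed.
    Sketch only: one-ulp widening stands in for directed rounding."""
    s = a + b                           # rounded to nearest
    lo = math.nextafter(s, -math.inf)   # push lower bound down one ulp
    hi = math.nextafter(s, math.inf)    # push upper bound up one ulp
    return lo, hi

lo, hi = interval_add(0.1, 0.2)
# The exact rational sum of the two stored doubles lies inside [lo, hi].
exact = Fraction(0.1) + Fraction(0.2)
assert Fraction(lo) <= exact <= Fraction(hi)
```

Using exact rationals (`Fraction`) to verify the enclosure avoids the very rounding error the interval is meant to bound.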
The use of interval arithmetic has some interesting implications when it comes to problem solving. For instance, consider the problem of solving 10x = 1. Mathematically the answer is 1/10, but as we have already seen, this has no exact binary representation. So, in fact, solving the equation 10x = 1 on a binary computer is not possible—you cannot find the correct solution because the number 1/10 does not exist in a binary computer. However, if we use interval arithmetic to solve 10x = 1, we will come up with a narrow interval enclosure that is guaranteed to contain the correct solution.
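This can be checked directly in Python. As a sketch (again substituting one-ulp outward widening for true directed rounding), we confirm that no double equals 1/10 exactly, yet the widened interval around the computed quotient provably contains it:

```python
import math
from fractions import Fraction

# The double closest to 1/10 is not equal to 1/10: the true solution
# of 10x = 1 does not exist among binary floating-point numbers.
assert Fraction(0.1) != Fraction(1, 10)

# An interval "solution" of 10x = 1: compute the quotient, then round
# the bounds outward by one ulp (sketch of directed rounding).
q = 1.0 / 10.0
lo = math.nextafter(q, -math.inf)
hi = math.nextafter(q, math.inf)

# The narrow interval [lo, hi] is guaranteed to enclose the exact 1/10.
assert Fraction(lo) < Fraction(1, 10) < Fraction(hi)
```

The first assertion demonstrates the impossibility claim; the second shows that the interval answer, though not a single number, is a rigorous statement about where the true solution lies.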