Now shifting gears, consider the question: With all this computing power, can we in fact reliably compute the right answers? To explore this issue, we will look at some examples. The first example is the relatively well-known problem due to Rump [2]. Here we are asked to evaluate the expression

                    f(x,y) = 333.75y^6 + x^2(11x^2y^2 - y^6 - 121y^4 - 2) + 5.5y^8 + x/(2y)

for x = 77617 and y = 33096. All numerical inputs in this calculation are exact machine numbers, so any error in the result must come from the computation itself. Rump originally ran this calculation in Fortran on an IBM S/370, and others have since repeated it on many other machines. In single precision the result is

                    f = 1.172603...

in double precision the result is

                    f = 1.1726039400531...

and in extended precision it is

                    f = 1.172603940053178...

The fact that the answer does not change with increasing precision is often taken as confirmation that the correct answer has been obtained. However, the correct answer is, in fact,

                    f = -0.827396059946...

So we didn't even get the sign right!
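The failure is easy to reproduce. Below is a minimal sketch in Python rather than Rump's original Fortran (the function names are mine): it evaluates the expression once in ordinary IEEE 754 double precision and once exactly, using rational arithmetic from the standard-library fractions module. The constants 333.75 and 5.5 are exact binary fractions, so the rational computation involves no rounding at all. Note that on IEEE 754 hardware the double-precision value comes out as a huge, obviously wrong number rather than the deceptively consistent values quoted above, which were obtained with the S/370's hexadecimal floating point.

    from fractions import Fraction

    def rump_float(x, y):
        # Naive evaluation in IEEE 754 double precision.
        return (333.75 * y**6
                + x**2 * (11 * x**2 * y**2 - y**6 - 121 * y**4 - 2)
                + 5.5 * y**8
                + x / (2 * y))

    def rump_exact(x, y):
        # The same expression in exact rational arithmetic; 333.75 = 1335/4
        # and 5.5 = 11/2, so no rounding occurs anywhere.
        x, y = Fraction(x), Fraction(y)
        return (Fraction(1335, 4) * y**6
                + x**2 * (11 * x**2 * y**2 - y**6 - 121 * y**4 - 2)
                + Fraction(11, 2) * y**8
                + x / (2 * y))

    x, y = 77617, 33096
    print(rump_float(float(x), float(y)))  # huge and wrong (order 10^21): the
                                           # big terms cancel, leaving nothing
                                           # but accumulated rounding error
    print(float(rump_exact(x, y)))         # -0.827396059946821...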

The problem here is due to rounding errors, compounded by catastrophic cancellation, both of which are inherent in the use of floating point arithmetic. In this example the individual polynomial terms are of order 10^36, yet the polynomial part of the expression (everything except x/(2y)) sums to exactly -2, so the cancellation consumes far more significant digits than even extended precision carries. A frequent reaction when people see this example is "so what, this will never happen to me" and "even if it does happen to me, it will be no big deal." So consider now a couple of real-world examples.
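Before turning to those, note that cancellation is easy to reproduce in isolation. The short Python sketch below uses a textbook case: evaluating 1 - cos(x) for small x subtracts two nearly equal numbers and destroys every significant digit, while the algebraically equivalent form 2*sin(x/2)**2 avoids the subtraction and is stable.

    import math

    x = 1.0e-8
    # cos(1e-8) rounds to exactly 1.0 in double precision, so the
    # subtraction cancels completely and returns 0.0: no correct digits.
    print(1.0 - math.cos(x))

    # The identity 1 - cos(x) = 2*sin(x/2)**2 removes the subtraction
    # and gives the correct value, about 5.0e-17.
    print(2.0 * math.sin(x / 2)**2)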