Problems like this, involving questions of the existence and uniqueness of solutions, are difficult, but there are some misconceptions about just how difficult they really are. For example, in Dennis and Schnabel's classic book [3] it is said that "In general, the questions of existence and uniqueness—does a given problem have a solution and is it unique?—are beyond the capabilities one can expect of algorithms that solve nonlinear problems." This, however, is not entirely true, as we shall soon discuss. In a more recent textbook, Heath [4] says "It is not possible, in general, to guarantee convergence to the correct solution or to bracket the solution to produce an absolutely safe method" [for solving nonlinear equations]. Again, this is not quite right.
In fact, there do exist methods, based on interval mathematics, in particular interval-Newton methods, that can, given initial bounds on the variables, enclose any and all solutions to a nonlinear equation system, determine that no solution exists, or find the global optimum of a nonlinear function [5]. These methods provide both a mathematical and a computational guarantee of reliability. The latter is important, since mathematical guarantees can be lost once an algorithm is implemented in floating-point arithmetic. In my group at Notre Dame, we are actively involved in developing algorithms and applications using these concepts [e.g., 6,7]. So why isn't everyone using these methods? A primary reason is that they can be significantly slower than standard local point methods. However, my feeling on this, and on other issues of reliability, is that we have plenty of computing power, so why not use it to solve problems more reliably? The use of interval mathematics is one potential approach for doing this.
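To make the basic idea concrete, the following is a minimal one-dimensional sketch in Python of an interval-Newton step. It uses ordinary floating-point arithmetic rather than the outward-rounded interval arithmetic a verified implementation (such as those surveyed in [5]) would require, and the test function, the starting box, and all helper names are illustrative assumptions, not code from any particular solver.

```python
def newton_step(lo, hi, f, df_interval):
    """One interval-Newton step on the box X = [lo, hi].

    N(X) = m - f(m) / F'(X), where m is the midpoint of X and F'(X) is an
    interval enclosure of the derivative over X.  Every root of f in X also
    lies in N(X) ∩ X, so an empty intersection proves X contains no root.
    """
    m = 0.5 * (lo + hi)
    dlo, dhi = df_interval(lo, hi)
    if dlo <= 0.0 <= dhi:
        return None                      # derivative encloses zero; caller should bisect X
    fm = f(m)
    # Divide the point value f(m) by the derivative interval (zero excluded).
    q_lo, q_hi = sorted([fm / dlo, fm / dhi])
    n_lo, n_hi = m - q_hi, m - q_lo
    # Intersect N(X) with the original box.
    new_lo, new_hi = max(lo, n_lo), min(hi, n_hi)
    if new_lo > new_hi:
        return ()                        # empty intersection: no root in X
    return (new_lo, new_hi)


if __name__ == "__main__":
    f = lambda x: x * x - 2.0                     # root at sqrt(2)
    # Natural interval extension of f'(x) = 2x over [lo, hi], valid for lo >= 0.
    dF = lambda lo, hi: (2.0 * lo, 2.0 * hi)

    box = (1.0, 2.0)
    for _ in range(6):
        result = newton_step(*box, f, dF)
        if result is None or result == ():
            break
        box = result
        print("enclosure: [%.12f, %.12f]" % box)
```

Starting from the box [1, 2], each step produces a tighter enclosure that is still guaranteed (up to rounding, in this simplified sketch) to contain the root at the square root of two; the same operator, applied with rigorous outward rounding and recursive bisection, is what allows interval-Newton methods to enclose all solutions or prove that none exist.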
Now consider briefly another question. If we cannot be sure that we are getting the right answers, are we in danger of relying too heavily on computing power? Again we will explore the question by looking at a couple of examples.