What does this growth in computational power mean in process engineering? This is a question I will return to in concluding this presentation. For now, I want to emphasize that a recurring problem, in this area and in others, is that existing problem-solving strategies were developed under a serial computing paradigm, and thus may take little advantage of advanced computing architectures, such as vector and/or parallel computing. So there really is a need to rethink the way we solve problems. This is an area that I have been very interested in over the years, and I will provide one example to demonstrate the point.


This example (Figure 4, right) involves a dynamic simulation run using Aspen Technology's SPEEDUP package on a Cray C90 vector machine not too many years ago, and shows what happens when you change sparse matrix solvers in order to try to take better advantage of vectorization. This was a comparison done by Steve Zitney in collaboration with people at Bayer [1]. What this shows is that with the conventional sparse matrix solver of the time, MA28, the simulation took about 12 hours of computing time to simulate a much shorter period of actual plant operation; in other words, the simulation ran slower than the plant itself, which was not a good thing. By changing to the FAMP solver, which was developed by Steve Zitney in my group, and which takes advantage of the vector computer architecture, the simulation time was reduced by an order of magnitude, and the time to solve a single linear system by two orders of magnitude.
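The heart of the matter is that a dynamic simulation solves thousands of sparse linear systems with the same structure, so the cost of symbolic analysis and factorization can be amortized across solves. FAMP itself is not publicly available, so the following is only a minimal sketch of that general idea using SciPy's sparse LU factorization; the matrix here is a randomly generated stand-in, not an actual flowsheet Jacobian, and none of this reflects what SPEEDUP or FAMP did internally:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Toy sparse matrix standing in for the Jacobian that arises at each
# integration step of a dynamic simulation (illustrative only).
n = 1000
rng = np.random.default_rng(0)
rows = rng.integers(0, n, size=5 * n)
cols = rng.integers(0, n, size=5 * n)
vals = rng.standard_normal(5 * n)
A = csc_matrix((vals, (rows, cols)), shape=(n, n))
# Boost the diagonal so the toy matrix is safely nonsingular.
A = A + csc_matrix((np.full(n, 10.0), (np.arange(n), np.arange(n))),
                   shape=(n, n))

# Factor once; the symbolic analysis and pivot ordering are then reused
# for every subsequent right-hand side, which is where much of the
# savings comes from when thousands of solves share one structure.
lu = splu(A)
for step in range(5):
    b = rng.standard_normal(n)
    x = lu.solve(b)          # cheap triangular solves, no refactorization
    assert np.allclose(A @ x, b)
```

The actual vector-machine speedups reported in the figure came from restructuring the numerical kernels for the Cray's architecture, which this high-level sketch does not attempt to show.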