
The NERI cluster was designed and built around Intel dual 933 MHz and 1 GHz Pentium III processor systems, with 1 GB of RAM installed in each of the 48 nodes. Each system includes a 3Com 905 10/100 Mb/s Ethernet interface. All systems are interconnected via ATI 10/100 Mb/s full-duplex layer 3 switches and a 1-gigabit fiber interconnect.

The first generation was based on the Scyld distribution of Red Hat Linux with the 2.2.x kernel. One of its main components is the bProc software, which allows a single process ID space to be distributed across all nodes in the cluster. Additionally, bProc provides remote process startup facilities with load balancing across all nodes. Other tools included with the Scyld software allow easy cluster configuration, maintenance, and monitoring. As with most Linux distributions, the GNU Fortran and C compilers are installed. While these freeware compilers work, they leave much to be desired where performance and compatibility are concerned. Newly created applications were easily compiled and run, and some older applications were ported with little difficulty. Overall, this first generation was stable and easily maintained. However, there were several limiting factors to getting our applications running with Portland Group's HPF compilers, including issues with linker compatibility, logical unit numbers, and so on.
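The short C sketch below illustrates the style of programming bProc's distributed process space makes possible: a program on the front end forks children that start directly on the compute nodes, yet remain visible as ordinary processes on the front end. It assumes the libbproc interface shipped with the Scyld distribution (bproc_numnodes, bproc_currnode, bproc_rfork); exact function names and semantics may differ between bProc releases, so treat this as a sketch rather than a reference example.

    /*
     * Sketch: fan a trivial job out to every compute node via bProc.
     * Assumes the Scyld libbproc API (bproc_numnodes(), bproc_currnode(),
     * bproc_rfork()); names and linking details may vary by release.
     * A typical build might be: gcc -o fanout fanout.c -lbproc
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <sys/bproc.h>

    int main(void)
    {
        int node;
        int nodes = bproc_numnodes();      /* compute nodes known to bProc */
        pid_t pid;

        for (node = 0; node < nodes; node++) {
            pid = bproc_rfork(node);       /* like fork(), but the child starts on 'node' */
            if (pid == 0) {
                /* Child: runs on the remote node, but keeps a PID in the
                 * cluster-wide process space managed by the front end. */
                printf("hello from node %d, pid %d\n",
                       bproc_currnode(), (int) getpid());
                _exit(0);
            }
        }

        /* Parent: remote children are reaped just like local ones. */
        while (wait(NULL) > 0)
            ;
        return 0;
    }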

After several failed attempts to build the BEOmpi libraries for use with bProc using Portland Group's commercial HPF compiler, it was decided that starting over might be the best approach.

The second generation of the NERI cluster is based on the Red Hat 7.2 distribution with the 2.4.x Linux kernel with SMP support. The interactive node was installed first, and two additional nodes were set up as cluster members. When the configuration was nearly complete, the cluster-member disks were duplicated using standard UNIX utilities and then installed in the remaining systems.

In addition to the MPICH software, a Linux version of the Sun[tm] Grid Engine 5.2 software was installed. This is a batch-processing package that has been running quite successfully for some time on the Sun (Muses) cluster. It provides 96 load-balanced processing slots in which user applications run.
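As a simple check of the MPICH installation, a minimal MPI program such as the one below can be compiled with MPICH's mpicc and launched across the nodes. The build and submission commands mentioned in the comments (mpicc, mpirun, a Grid Engine batch script submitted with qsub) reflect typical MPICH and Grid Engine usage rather than this cluster's exact configuration.

    /*
     * Minimal MPI test program. Typical usage (may differ on this cluster):
     *   mpicc -o hello hello.c
     *   then submit through a Grid Engine batch script that calls mpirun.
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        MPI_Get_processor_name(name, &len);     /* node this rank landed on */

        printf("rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }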