
 Laboratory computer systems range from typical desktops to high-performance clustered symmetric multiprocessing (SMP) systems.
As of November 2002 the laboratory's server room was home to:

101 Intel CPUs
41 Sun UltraSPARC CPUs
5 SGI/MIPS R10000 CPUs
9 SGI/MIPS R14000 CPUs
1 IBM RS6000 CPU
4 DEC Alpha CPUs
1 DEC VAX 4400

 Total memory across all systems is approximately 112 GB.
Server room operating systems include:

Digital OSF/1
Digital OpenVMS
SGI Irix
IBM AIX
SUN Solaris
Microsoft Windows NT Server
Red Hat Linux
TurboLinux Server


 That's eight different multi-user/multi-tasking operating systems and six system architectures.


Systems are interconnected via an internal switched 10/100 Mbit/s Ethernet network. The Lab's HPCC consists of over 150 CPUs, primarily Intel Pentium IIIs and SUN UltraSPARC IIs, running eight different operating systems (mostly UNIX). The bulk of the HPCC is based on SMP systems (2-4 CPUs each) with large memories and large local scratch storage.

Sun Grid Engine (SGE) software, formerly known as Codine, is used to manage the user environment on both the SUN cluster and the Intel cluster. SGE allows a user to submit a job and then, for the most part, forget about it.

SGE features:

Batch queuing
Load balancing
Job accounting statistics
User specifiable resources
Fault tolerance -- jobs are rerun if an execution host fails
Suspend/resume jobs
Job status
Host status
Cluster-wide resources
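As a sketch of that submit-and-forget workflow, a minimal SGE batch script might look like the following (the job name, output file name, and echo payload are hypothetical stand-ins for a real workload):

```shell
#!/bin/sh
# Hypothetical SGE batch script; lines beginning with "#$" are SGE directives.
#$ -N demo_job            # job name, as reported by qstat
#$ -cwd                   # execute from the submission directory
#$ -o demo_job.out        # file to receive the job's stdout
#$ -j y                   # merge stderr into the stdout file
msg="hello from the cluster"   # stand-in for the real workload
echo "$msg"
```

The script would be submitted with qsub, after which SGE queues it, picks an execution host based on current load, and runs it; qstat reports its status in the meantime.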

Last update: November 18, 2002
HPCC
Sun Microsystems Enterprise 420R systems. Each of the eleven units contains four 450 MHz UltraSPARC-II CPUs and 4 GB of memory. Eight of the eleven systems are interconnected via 64-bit, 66 MHz Myrinet-2000-Serial-Link/PCI interfaces connected to a Myrinet-2000 switch. Also installed are four DEC Alpha systems with 1 GB of memory each, a four-CPU SGI Origin 200, an eight-CPU SGI Origin 300 with 8 GB of RAM, an SGI Fuel system, and an IBM RS6000 P270 with 8 GB of RAM.

This system is a cluster of 48 dual-CPU Intel systems. 24 of the 48 units are based on the Asus CUV4X system board with two Intel 933 MHz/133 MHz FSB Pentium III CPUs. The other 24 units were constructed using two Intel 1 GHz/133 MHz FSB Pentium III CPUs. Each box contains 1 GB of RAM.
All units are interconnected via two Allied Telesyn Rapier 24 high-performance 24-port 10/100 Mbit/s Layer 3 standalone switches. The two switches are interconnected via duplex gigabit fiber-optic cable. The master node acts as a gateway between the cluster and the Lab's network and also functions as the SGE/Codine batch system master scheduler. The entire cluster is built on Red Hat's 7.2 distribution of Linux and runs Argonne National Laboratory's MPICH implementation of MPI. The systems can also be used for scalar (serial) jobs. Portland Group's High Performance FORTRAN (HPF), an extension of ISO/ANSI FORTRAN 90, is used to support the data-parallel programming model. Additionally, Intel's FORTRAN compiler for Linux is installed.
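A typical interaction with the MPICH environment can be sketched with MPICH's standard wrapper tools (cpi.c, the pi-calculation example that ships with MPICH, stands in for user code; the guard is only so the snippet runs cleanly on hosts without MPICH installed):

```shell
#!/bin/sh
# Compile and launch an MPI program with the MPICH wrapper tools.
if command -v mpicc >/dev/null 2>&1; then
    mpicc -O2 -o cpi cpi.c    # compile with the MPICH compiler wrapper
    mpirun -np 8 ./cpi        # launch 8 MPI processes across the cluster
    mpich_present="yes"
else
    mpich_present="no"        # MPICH tools not found on this host
fi
```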
A recently installed SGI Origin 300 has 8 MIPS R14000 processors running at 600 MHz and 8 GB of memory. The primary software running on this unit is the Cerius2 molecular simulations package from Accelrys.

Theoretical Laboratory processing power: approx. 112.587 Gflop/s.
That's 112,587,000,000 floating point operations per second -- roughly 112 and a half billion floating point operations every second.
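The unit conversion behind those figures is just a factor of 10^9 (one Gflop/s is one billion floating point operations per second), for example with awk:

```shell
# Convert 112.587 Gflop/s to floating point operations per second
awk 'BEGIN { printf "%.0f\n", 112.587 * 1e9 }'   # prints 112587000000
```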

HPL test performance on the dual Pentium III cluster (96 CPUs at 933 MHz/1 GHz). The HPL routines used the ATLAS BLAS, compiled with Portland Group's HPF FORTRAN compiler. Theoretical maximum performance: approx. 92.784 Gflop/s.
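That theoretical maximum is consistent with assuming one floating point result per clock cycle per CPU -- 48 CPUs at 933 MHz plus 48 CPUs at 1 GHz (the per-cycle assumption is ours, but the arithmetic reproduces the quoted figure):

```shell
# Peak = 48 CPUs x 0.933 GHz + 48 CPUs x 1.0 GHz, at 1 flop per cycle
awk 'BEGIN { printf "%.3f Gflop/s\n", 48 * 0.933 + 48 * 1.0 }'   # prints 92.784 Gflop/s
```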

Problem size (N):     10000             40000               54000               64080
Process grid          Time     Gflops   Time     Gflops     Time     Gflops     Time      Gflops
9 x 10 (90 CPUs)      --       --       --       --         3479.72  3.017e+01  5828.82   3.010e+01
8 x 8  (64 CPUs)      --       --       3185.59  1.339e+01  --       --         27378.79  6.407e+00