Administration of our shared computing resources and infrastructure is the primary task of the department's technical staff. They aim to meet all teaching needs while providing the most effective support for the varied needs of our researchers. Feedback from all members of the department is welcome; the better the technical staff understand individual needs, the better they can serve the department as a whole.
Shared Memory, Multi-processor Computing
The department maintains a shared memory, multi-processor system primarily suited to multi-threaded, CPU-intensive programs. The current hardware bearing the crunch hostname is a Sun Microsystems SunFire X4600 with 20 AMD Opteron(tm) cores and 200 GB of memory, running Linux.
The department also maintains a shared memory, multi-processor system primarily suited to multi-threaded, CPU-intensive programs. The current hardware bearing the compute hostname is a Sun Microsystems SunFire V880 with 8 CPUs and 16 GB of memory, running Solaris. Additionally, compute is often the best candidate for medium-duration, large-memory programs.
Resource management software distributes processor cycles among users to prevent any one user's tasks from monopolizing this multi-user system. As configured, users receive balanced shares of system resources. A temporarily larger, imbalanced share may be established by request for special circumstances.
The department maintains a high performance cluster with 32 nodes and 128 processor cores. Each node has two dual-core AMD 2214 2.2 GHz Opteron CPUs, 4 GB of RAM, and an 80 GB drive. The nodes are connected via a high performance InfiniBand network as well as Gigabit Ethernet, and run CentOS Linux. The cluster is managed by a head node running Rocks and uses the Portable Batch System (PBS) to schedule jobs.
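Jobs on the cluster are submitted through the batch scheduler rather than run directly on the nodes. A minimal submission script might look like the following sketch; the job name, resource requests, and workload shown here are illustrative only, and local queue names and limits should be confirmed with the technical staff:

```shell
#!/bin/bash
# Illustrative PBS job script -- directive values are examples, not site policy.
#PBS -N sample_job              # job name
#PBS -l nodes=2:ppn=4           # request 2 nodes, 4 cores per node
#PBS -l walltime=01:00:00       # one hour wall-clock limit

# PBS starts jobs in the home directory; change to the submission directory.
cd "${PBS_O_WORKDIR:-.}"

# Replace this placeholder with the real (e.g. MPI) workload.
echo "Job running on $(hostname)"
```

A script of this form would be submitted with `qsub script.sh` and monitored with `qstat`; standard output and error are returned in files in the submission directory when the job completes.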
The department also maintains a parallel computing cluster with 32 CPUs and 16 GB of memory split across 16 identical systems. All nodes run RedHat Linux on Intel hardware. Nodes are interconnected with switched 100 Mb Ethernet. Two build environments are available for MPI programs: one uses the GNU compiler suite [gcc/g++/g77]; the other uses the Portland Group (PGI) compiler suite.
The department also utilizes workstations from the teaching lab to form a compute grid for off-hours and weekend use. The pool of nodes normally includes 32 identical systems, each with 2 SPARC CPUs; by arrangement, 16 additional systems can be recruited to bring the total CPU count up to 96.
Storage for departmental servers and workstations is provided by a 2 Gb/sec Fibre Channel Storage Area Network. Currently there is 14 TB of disk storage provided by Sun StorEdge T3B and 3511 RAID arrays. The storage is backed up daily to a set of additional RAID disk devices and ultimately cloned to our QualStar 5244 SAIT tape robot.
The department computing lab, located on the 3rd floor, comprises two adjoining spaces. The first space offers thirty-two seats, available to undergraduates for work on computer assignments. A tiered-floor teaching lab adjoins it via a restricted-access door. In addition to allowing for overflow capacity and doubling as an after-hours cluster, this second space offers the same audio/visual capabilities as the classrooms and may be reserved in advance for teaching. Laboratory rules and hours are located on our external webpages under Computing Lab.
The department classrooms are equipped for audio/video presentation using the installed computer, the document camera, DVD/VCR, or a user-supplied laptop (A/V and network cabling present). A small, touch-driven display atop the desk incorporates system power control, input selection, audio volume, video mute, screen and shade motors, and mode-specific functions. Instructors new to our facility should attend a tutorial before classes begin; tutorial dates and times will be announced by e-mail.
Course materials may be distributed electronically via the web, e-mail, or shared folders. When desired, computer assignments can be completed using department facilities; instructors must communicate this need to the technical staff. High-capacity printing is available for academic needs (e.g., mid-terms and exams).
Wired and Wireless Networking
The department is served largely by two physical networks. Every faculty office has one active port for a department workstation and one for a personal computer, connecting to our primary network. The wireless network [MathCS], reserved for use by department personnel, connects to our utility network, along with wired ports in shared offices and teaching spaces. In both cases, user equipment should obtain network parameters dynamically via DHCP.
In addition to supporting department workstations, scan/print/fax equipment, and computational facilities, the department maintains its own e-mail, web, file storage, and network services.