HPC Cluster Resources

This cluster, the university's second, was deployed to perform complex computational tasks for researchers at KFUPM. It is expected to provide researchers with faster compute nodes and more storage than the first community HPC cluster. The center's newest cluster comprises 32 IBM System x3650 servers, each with dual 10-core Xeon E5 2.8 GHz processors and 64 gigabytes of memory, NVIDIA K20 GPUs with 2,496 cores each on 12 of these nodes, and a QDR InfiniBand backbone. The cluster achieved a benchmark of 14 teraflops, or 14 trillion floating-point calculations per second. Following are its detailed specifications:

Master Node
• Number of nodes – 1 node.
• Processors – Xeon E5-2680 10-Core, 2.8 GHz (Dual-processor).
• Memory – 128 GB RAM.

Compute Nodes
• Number of nodes – 32 nodes.
• Processors – Xeon E5-2680 10-Core, 2.8 GHz (Dual-processor).
• Number of cores – 660 cores.
• Memory – 64 GB RAM on each node.
• Number of nodes with GPU – 12 nodes.

Interconnect
• InfiniBand QDR, 40 Gb/s.

Operating System
• Red Hat Enterprise Linux 6
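
As a rough cross-check of the 14-teraflop benchmark figure quoted above (an estimate only, assuming the theoretical 8 double-precision floating-point operations per core per clock cycle for this processor generation), the aggregate peak of the CPU cores alone is of the same order:

  32 nodes × 20 cores × 2.8 GHz × 8 FLOP/cycle ≈ 14.3 TFLOPS (CPU peak, excluding the GPU accelerators)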

This new supercomputer incorporates graphics processing units (GPUs), a technology that greatly enhances the speed with which computer systems process information and represents an important development in supercomputing. Following are the specifications of the available GPGPU:

NVIDIA Tesla K20X GPU Accelerator
• Number of CUDA cores – 2688 cores.
• Floating point performance – 3.95 TFLOPS single, 1.32 TFLOPS double.
• Memory bandwidth – up to 250 GB/s.
• Memory – 6 GB GDDR5.
• Interface to system – PCIe x16 Gen2.
• Supports – Dynamic Parallelism and Hyper-Q features.
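
The following is a minimal sketch of how a program offloads work to one of these accelerators using CUDA. It is illustrative only and not taken from the cluster documentation: the vector-addition kernel, array size, and file name are placeholders. It would typically be compiled with nvcc (for example, nvcc vecadd.cu -o vecadd) on a GPU-equipped node.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;                     // 1M elements (arbitrary size)
        size_t bytes = n * sizeof(float);

        // Host buffers.
        float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers on the GPU.
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);   // host -> GPU over PCIe
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);   // copy result back

        printf("hc[0] = %f\n", hc[0]);                       // expect 3.000000

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Dynamic Parallelism (kernels launching further kernels) and Hyper-Q (multiple host processes sharing one GPU) build on this same basic offload pattern.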

The university's first HPC cluster, deployed in 2008, houses 3 master nodes and 128 compute nodes. Each compute node has two quad-core processors, for a total of 1,024 cores. All nodes are interconnected with a 10 Gb Ethernet (10GBASE-SR) interconnect. The cluster is unique in its dual-boot capability, running either Microsoft Windows HPC Server 2008 or Red Hat Enterprise Linux 5. Jobs are currently managed by the Windows HPC job scheduler and the open-source Torque 5.0 PBS job scheduler, and the cluster is monitored with the Ganglia tool. Following are its detailed specifications:

Master Nodes
• Number of nodes – 3 nodes.
• Processors – Xeon E5405 Quad-Core, 2.0 GHz (Dual-processor).
• Memory – 45 GB RAM on each node.

Compute Nodes
• Number of nodes – 128 nodes.
• Processors – Xeon E5405 Quad-Core, 2.0 GHz (Dual-processor).
• Number of cores – 1024 cores.
• Memory – 4 GB RAM on each node.

Interconnect
• 10 Gb Ethernet (10GBASE-SR).

Operating System
• Red Hat Enterprise Linux 5
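
Under Linux, work is submitted to the Torque/PBS scheduler mentioned above through a short batch script. The sketch below is illustrative only and is not taken from the cluster's documentation: the job name, resource requests, and program name are placeholders, and the actual queue names and software environment on this system may differ.

    #!/bin/bash
    #PBS -N example_job             # job name (placeholder)
    #PBS -l nodes=2:ppn=8           # request 2 nodes, 8 cores each (matching the quad-core dual-socket nodes)
    #PBS -l walltime=01:00:00       # one hour wall-clock limit
    #PBS -j oe                      # merge standard output and error into one file

    cd $PBS_O_WORKDIR               # start in the directory the job was submitted from
    mpirun -np 16 ./my_mpi_program  # placeholder program; assumes an MPI implementation is available

Such a script would be submitted with qsub and its status checked with qstat; when the cluster is booted into Windows HPC Server 2008, jobs go through the Windows HPC job scheduler instead.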