The Division of Information Technology



Research Cyberinfrastructure

HPC Clusters

Welcome

The University of South Carolina campus High Performance Computing (HPC) clusters are available to researchers who require specialized hardware for their research applications. The clusters are managed by Research Cyberinfrastructure (RCI) in the Division of Information Technology.

The newest cluster, Hyperion, has 224 compute nodes, 8 GPU nodes, and 8 large-memory nodes providing 6,760 CPU cores for faculty and students on the Columbia, Comprehensive, and Palmetto campuses. Collaborators from other universities and institutions can also access this shared cluster. Located in the campus data center, the compute, GPU, and Big Data nodes and 330 TB of Lustre and NFS data storage are interconnected over a high-speed 100 Gb/s InfiniBand network. The USC data center provides enterprise-level cooling, battery backup, generator backup, and physical security.

RCI clusters are accessed through job queues under Bright Cluster Manager, which provides a robust software environment to deploy, monitor, and manage HPC clusters.
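
As an illustration of how work enters those queues, the short Python sketch below submits a batch script and then checks the queue. It assumes the scheduler deployed under Bright is Slurm (Bright supports several workload managers, so confirm the scheduler and queue names with RCI), and the script name myjob.sh is only a placeholder:

    import getpass
    import subprocess

    # Submit a batch script to the scheduler queue (assumes Slurm under Bright).
    submit = subprocess.run(["sbatch", "myjob.sh"],
                            capture_output=True, text=True, check=True)
    print(submit.stdout.strip())   # e.g. "Submitted batch job 12345"

    # List this user's jobs currently waiting or running in the queue.
    queue = subprocess.run(["squeue", "--user", getpass.getuser()],
                           capture_output=True, text=True, check=True)
    print(queue.stdout)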

Hyperion

Hyperion is our flagship cluster, intended for large parallel jobs. It consists of 224 compute, GPU, and Big Data nodes providing 6,760 CPU cores. Compute and GPU nodes have 128 GB of RAM, and Big Data nodes have 1.5 TB. All nodes have EDR InfiniBand (100 Gb/s) interconnects and access to 300 TB of Lustre storage.
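
As a sketch of the kind of parallel job Hyperion targets, the mpi4py "hello world" below prints one line per MPI rank. It assumes the mpi4py package and an MPI runtime are available on the cluster, and it would normally be launched through the queueing system (for example with srun or mpirun) rather than run directly on a login node:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD             # communicator spanning every rank in the job
    rank = comm.Get_rank()            # this process's rank, 0 .. size-1
    size = comm.Get_size()            # total number of ranks in the job
    host = MPI.Get_processor_name()   # node on which this rank is running

    print(f"Hello from rank {rank} of {size} on {host}")

Launched with, say, 56 ranks, such a job would span two 28-core Hyperion compute nodes, with MPI traffic carried over the EDR InfiniBand fabric.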

Bolden

This cluster is intended for smaller parallel jobs and consists of 20 compute nodes providing 460 CPU cores. All nodes have FDR InfiniBand (54 Gb/s) interconnects and access to the 300 TB of Lustre storage.

Maxwell/Planck

This cluster is available for teaching purposes only. There are about 55 compute nodes with 2.8 GHz or 2.4 GHz CPUs, each with 24 GB of RAM. In addition, 15 of the nodes have Nvidia M1060 GPUs.

Thoth

This cluster is used for special projects, prototyping and evaluating new software environments.  Please contact RCI (rci@sc.edu) for more information.

Minsky

Minsky is an IBM OpenPOWER server with integrated Deep Learning and Artificial Intelligence (AI) frameworks such as Caffe and TensorFlow. The POWER8 system has 20 cores supporting up to 160 threads, 256 GB of RAM, two Nvidia P100 GPUs, a high-speed NVLink internal communications path, and 100 Gb/s InfiniBand connectivity.
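
As a minimal sketch of exercising those GPUs from one of the frameworks, the snippet below lists the devices TensorFlow can see and runs a small matrix multiply. It assumes a GPU-enabled TensorFlow 2.x build; the framework versions actually installed on Minsky may differ, so treat this as illustrative only:

    import tensorflow as tf

    # Report the GPUs visible to TensorFlow (ideally the two P100s).
    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs visible:", gpus)

    # A small matrix multiply; TensorFlow places it on a GPU when one is available.
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)
    print("Result shape:", c.shape)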

Swampfox

Swampfox is a Dell big-memory node that supports applications needing a large memory space. The server includes a 60-core Intel processor, 256 GB of RAM, and 100 Gb/s InfiniBand connectivity.

 

Summary of HPC Clusters

Hyperion
  Compute nodes: 224
  Cores per node: 28 (Compute), 28 (GPU), 40 (Big Data)
  Total cores: 6,760
  Processor speeds: 2.8 GHz (Compute, GPU); 2.1 GHz (Big Data)
  Memory per node: 128 GB (Compute); 128 GB (GPU); 1.5 TB (Big Data)
  Disk storage: 30 TB home (10 Gb/s Ethernet); 300 TB scratch (56 Gb/s InfiniBand)
  GPU nodes: 8 (dual P100)
  Big Data nodes: 8
  Interconnect: EDR InfiniBand, 100 Gb/s

Bolden
  Compute nodes: 20
  Cores per node: 20
  Total cores: 400
  Processor speed: 2.8 GHz
  Memory per node: 64 GB
  Disk storage: 300 TB
  GPU nodes: 1
  Big Data nodes: 1
  Interconnect: FDR InfiniBand, 54 Gb/s

Minsky
  Compute nodes: 1
  Cores per node: 20
  Total cores: 20
  Processor speed: 2.8 GHz
  Memory per node: 256 GB
  Disk storage: 4 TB
  GPU nodes: 1 (dual P100)
  Big Data nodes: NA
  Interconnect: EDR InfiniBand, 100 Gb/s

Maxwell/Planck
  Compute nodes: 55
  Cores per node: 12
  Total cores: 660
  Processor speeds: 2.4 GHz / 2.8 GHz
  Memory per node: 24 GB
  Disk storage: 20 TB
  GPU nodes: 15 (M1060)
  Big Data nodes: None
  Interconnect: QDR InfiniBand, 40 Gb/s

Thoth
  Compute nodes: 41
  Cores per node: 8-12
  Total cores: 820
  Processor speed: 2.5 GHz
  Memory per node: 128 GB
  Disk storage: 4 TB
  GPU nodes: None
  Big Data nodes: None
  Interconnect: Ethernet, 1 Gb/s

Swampfox
  Compute nodes: 1
  Cores per node: 60
  Total cores: 60
  Processor speed: 2.5 GHz
  Memory per node: 256 GB
  Disk storage: 4 TB
  GPU nodes: None
  Big Data nodes: NA
  Interconnect: FDR InfiniBand, 54 Gb/s

Hyperion, Bolden, Minsky, and Swampfox are under active vendor service contracts. Planck, Maxwell, and Thoth are not under service contracts but will remain operational for teaching, testing, or prototyping until decommissioned.