
High Performance Computing Clusters

The University of South Carolina High Performance Computing (HPC) clusters are available to researchers requiring specialized hardware resources for computational research applications. The clusters are managed by Research Computing (RC) in the Division of Information Technology.

Research Computing resources at the University of South Carolina include the high-performance computing cluster Hyperion, which consists of 356 individual nodes providing a total of 16,616 CPU cores. The cluster is a heterogeneous configuration consisting of 291 compute nodes, 8 large memory nodes, 53 GPU nodes, a large SMP system, and 2 IBM Power8 quad-GPU servers. All nodes are connected via a high-speed, low-latency InfiniBand network at 100 Gb/s, and all nodes have access to a 1.4 petabyte high-performance GPFS scratch filesystem and 450 terabytes of home directory storage. The cluster, managed by the Research Computing group under the Division of Information Technology, is housed in the university data center, which provides enterprise-level monitoring, cooling, power backup, and Internet2 connectivity.

Research Computing clusters run under the Bright Cluster Management system, which provides a robust software environment to deploy, monitor, and manage HPC clusters. Work on the clusters is submitted through job queues.
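Jobs on clusters like these are typically described in a batch script handed to the scheduler. As an illustration only, here is a minimal batch script for a Slurm-style scheduler; the partition name, module name, and program are placeholders, since the actual scheduler configuration and software stack on Hyperion may differ:

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- partition and module names below
# are placeholders; consult the cluster documentation for real values.
#SBATCH --job-name=my_job
#SBATCH --partition=defq       # queue/partition name (site-specific)
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28   # Hyperion compute nodes provide 28 cores
#SBATCH --time=01:00:00        # requested walltime
#SBATCH --output=my_job.%j.out # %j expands to the job ID

module load openmpi            # hypothetical module name
srun ./my_mpi_program          # placeholder executable
```

With Slurm, such a script would be submitted with `sbatch my_job.sh`, and `squeue -u $USER` would show its place in the queue.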

Hyperion

Hyperion is our flagship cluster, intended for large parallel jobs, and consists of 356 compute, GPU, and Big Data nodes providing 16,616 CPU cores. Compute and GPU nodes have 128-256 GB of RAM, and Big Data nodes have 2 TB of RAM. All nodes have EDR InfiniBand (100 Gb/s) interconnects and access to 1.4 PB of GPFS storage.
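Hyperion's GPU and Big Data nodes are usually reached by requesting the corresponding resources from the scheduler. The sketch below assumes a Slurm-style scheduler with a GPU partition; the partition name and GRES syntax are assumptions for illustration, not confirmed Hyperion settings:

```shell
#!/bin/bash
# Hypothetical request for a Hyperion GPU node; partition and GRES
# names are placeholders -- verify against the cluster documentation.
#SBATCH --job-name=gpu_job
#SBATCH --partition=gpu        # assumed GPU partition name
#SBATCH --gres=gpu:2           # GPU nodes carry dual P100 or V100 cards
#SBATCH --mem=120G             # GPU nodes have 128 GB of RAM
#SBATCH --time=04:00:00

srun ./my_gpu_program          # placeholder executable
```

A similar script targeting a Big Data node would instead request a large memory allocation (up to the 2 TB those nodes provide).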

Bolden

This cluster is intended for teaching purposes only and consists of 20 compute nodes providing 460 CPU cores. All nodes have FDR InfiniBand (54 Gb/s) interconnects and access to 300 TB of Lustre storage.

Maxwell (Retired)  

This cluster was available for teaching purposes only. It consisted of about 55 compute nodes with 2.4 GHz and 2.8 GHz CPUs, each with 24 GB of RAM.

 

Historical Summary of HPC Clusters

Hyperion Phase III (Active)
Nodes: 356 (16,616 cores total)
Cores per node: 28 (compute), 28 (GPU), 48 (Big Data)
Processor speed: 3.0 GHz
Memory per node: 128 GB (compute), 128 GB (GPU), 1.5 TB (Big Data)
Disk storage: 450 TB home (10 Gb/s Ethernet); 1.4 PB scratch (100 Gb/s InfiniBand)
GPU nodes: 9 (dual P100), 44 (dual V100)
Big Data nodes: 8
Interconnect: EDR InfiniBand, 100 Gb/s

Hyperion Phase II (Retired)
Nodes: 407 (15,524 cores total)
Cores per node: 28 (compute), 28 (GPU), 48 (Big Data)
Processor speed: 3.0 GHz
Memory per node: 128 GB (compute), 128 GB (GPU), 1.5 TB (Big Data)
Disk storage: 450 TB home (10 Gb/s Ethernet); 1.4 PB scratch (100 Gb/s InfiniBand)
GPU nodes: 9 (dual P100), 44 (dual V100)
Big Data nodes: 8
Interconnect: EDR InfiniBand, 100 Gb/s

Hyperion Phase I (Retired)
Nodes: 224 (6,760 cores total)
Cores per node: 28 (compute), 28 (GPU), 40 (Big Data)
Processor speeds: 2.8 GHz (compute, GPU); 2.1 GHz (Big Data)
Memory per node: 128 GB (compute), 128 GB (GPU), 1.5 TB (Big Data)
Disk storage: 300 TB Lustre; 50 TB NFS; 1.5 PB scratch (100 Gb/s InfiniBand)
GPU nodes: 8
Big Data nodes: 8
Interconnect: EDR InfiniBand, 100 Gb/s

Bolden (Active)
Nodes: 20 (400 cores total)
Cores per node: 20
Processor speed: 2.8 GHz
Memory per node: 64 GB
Disk storage: 300 TB
GPU nodes: 1
Big Data nodes: 1
Interconnect: FDR InfiniBand, 54 Gb/s

Maxwell (Retired)
Nodes: 55 (660 cores total)
Cores per node: 12
Processor speeds: 2.4 GHz/2.8 GHz
Memory per node: 24 GB
Disk storage: 20 TB
GPU nodes: 15 (M1060)
Big Data nodes: none
Interconnect: QDR InfiniBand, 40 Gb/s

 

