The University of South Carolina campus High Performance Computing (HPC) clusters
are available to researchers who need specialized hardware resources for their
research applications. The clusters are managed by Research Cyberinfrastructure
(RCI) in the Division of Information Technology.
The newest cluster, Hyperion, has 224 compute nodes, 8 GPU nodes, and 8 large-memory
nodes providing 6,760 CPU cores for faculty and students on the Columbia, Comprehensive,
and Palmetto campuses. Collaborators from other universities and institutions can
also access this shared cluster. Located in the campus data center, the compute, GPU,
and Big Data nodes and 330 TB of Lustre and NFS data storage are interconnected over
a high-speed 100 Gb/s InfiniBand network. The data center provides enterprise-level
cooling, battery backup, generator backup, and physical security.
RCI cluster resources are made available through job queues under Bright Cluster
Manager. Bright provides a robust software environment to deploy, monitor, and manage
HPC clusters.
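Bright supports several workload managers; as a minimal sketch, assuming the queues here are managed by Slurm (the partition name "defq" and the resource limits below are illustrative assumptions, not confirmed RCI settings), a batch job can be written and submitted programmatically:

```python
import subprocess
import textwrap

# A minimal batch script. The partition name "defq" and the resource
# limits are illustrative assumptions, not confirmed RCI settings.
script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --partition=defq
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00
    hostname
""")

with open("hello.sh", "w") as f:
    f.write(script)

# sbatch prints "Submitted batch job <id>" on success.
result = subprocess.run(
    ["sbatch", "hello.sh"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```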
Hyperion is our flagship cluster, intended for large parallel jobs. It consists of
224 compute nodes, 8 GPU nodes, and 8 Big Data nodes, providing 6,760 CPU cores in
total. Compute and GPU nodes have 128 GB of RAM; Big Data nodes have 1.5 TB. All
nodes have EDR InfiniBand (100 Gb/s) interconnects and access to 300 TB of Lustre
storage.
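To illustrate the kind of large parallel job Hyperion targets, here is a minimal MPI sketch using mpi4py (assuming mpi4py is available on the cluster); each rank reports its host, and the job performs a toy reduction across all ranks:

```python
from mpi4py import MPI

# Each rank reports its place in the job; on Hyperion a single job
# can span many 28-core compute nodes over the InfiniBand fabric.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
host = MPI.Get_processor_name()

# Toy reduction: sum every rank's number on rank 0.
total = comm.reduce(rank, op=MPI.SUM, root=0)

print(f"rank {rank} of {size} on {host}")
if rank == 0:
    print(f"sum of ranks = {total}")
```

Launched with, for example, `mpirun -n 56 python mpi_hello.py`, this would span two 28-core compute nodes.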
Bolden is intended for smaller parallel jobs and consists of 20 compute nodes
providing 460 CPU cores. All nodes have FDR InfiniBand (56 Gb/s) interconnects and
access to the same 300 TB of Lustre storage.
This cluster is available for teaching purposes only. It has about 55 compute nodes
with 2.8 GHz and 2.4 GHz CPUs, each with 24 GB of RAM. In addition, 15 of the nodes
have Nvidia Tesla M1060 GPUs.
This cluster is used for special projects, prototyping and evaluating new software
environments. Please contact RCI (firstname.lastname@example.org) for more information.
Minsky is an IBM OpenPOWER server with integrated deep learning and artificial
intelligence (AI) frameworks such as Caffe and TensorFlow. The POWER8 system has
20 cores supporting up to 160 threads, 256 GB of RAM, two Nvidia Tesla P100 GPUs,
a high-speed NVLink internal communications path, and 100 Gb/s InfiniBand
connectivity.
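As a quick sanity check of the GPU stack on a node like Minsky, the sketch below (assuming a TensorFlow 2.x installation; the framework versions bundled with the system may differ) lists the visible GPUs and runs a small matrix multiply on one:

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; on Minsky this should include
# the two Tesla P100s.
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# Small matrix multiply on the first GPU, falling back to the CPU
# if none is visible.
device = "/GPU:0" if gpus else "/CPU:0"
with tf.device(device):
    a = tf.random.normal([2048, 2048])
    b = tf.random.normal([2048, 2048])
    c = tf.matmul(a, b)
print("Computed", c.shape, "on", device)
```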
Swampfox is a Dell big-memory node that supports applications needing a large memory
space. The server includes a 60-core Intel processor, 256 GB of RAM, and 100 Gb/s
InfiniBand connectivity.
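To make the big-memory use case concrete, this back-of-the-envelope sketch (the matrix size is purely illustrative) checks whether a dense double-precision matrix fits in RAM on a standard 128 GB Hyperion compute node versus Swampfox's 256 GB:

```python
import numpy as np

COMPUTE_NODE_RAM = 128 * 1024**3  # Hyperion compute node: 128 GB
SWAMPFOX_RAM = 256 * 1024**3      # Swampfox: 256 GB

# A dense double-precision matrix needs n * n * 8 bytes.
n = 140_000  # illustrative problem size
needed = n * n * np.dtype(np.float64).itemsize

print(f"{n} x {n} float64 matrix needs {needed / 1024**3:.0f} GB")
print("Fits on a 128 GB compute node:", needed < COMPUTE_NODE_RAM)
print("Fits on Swampfox (256 GB):", needed < SWAMPFOX_RAM)
```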
Summary of HPC Clusters

| Cluster | Nodes | Cores per node | CPU clock | Memory per node | Storage |
|---|---|---|---|---|---|
| Hyperion | 224 Compute, 8 GPU, 8 Big Data | 28 (Compute), 28 (GPU), 40 (Big Data) | 2.8 GHz (Compute, GPU), 2.1 GHz (Big Data) | 128 GB (Compute, GPU), 1.5 TB (Big Data) | 30 TB Home (10 Gb/s Ethernet), 300 TB Scratch (56 Gb/s InfiniBand) |
| Teaching cluster | ~55 Compute (15 with Nvidia M1060 GPUs) | | 2.4 GHz / 2.8 GHz | 24 GB | |
Hyperion, Bolden, Minsky, and Swampfox are under active vendor service contracts.
Planck, Maxwell, and Thoth are not under service contracts but will remain operational
for teaching, testing, or prototyping until they are decommissioned.