About
HPC Mission
To provide scalable high performance computing clusters for researchers, faculty, students, and affiliates of Texas A&M University-Corpus Christi.
System Overview
HPC:
Crest - Red Hat Enterprise Linux 9 based, utilizing Bright Cluster Manager and the SLURM Workload Manager to manage a mix of general compute and GPU-enabled nodes.
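For a quick look at how SLURM exposes that node mix to users, the short Python sketch below shells out to the standard SLURM sinfo client on the login node. This is a minimal sketch that assumes only that the SLURM command-line tools are on the PATH; the partitions it prints are whatever the site has configured.

    import subprocess

    # Ask SLURM for the partition layout: partition name, node count,
    # CPUs per node, and generic resources (GPUs appear under gres).
    result = subprocess.run(
        ["sinfo", "-o", "%P %D %c %G"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)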
Funding
HPC is made possible by a grant from the National Science Foundation.
Technical Specs
Crest:
The Crest high performance cluster consists of 1 login node, 2 head nodes, 2 high-memory compute nodes, 2 ultrahigh-memory compute nodes, 25 standard compute nodes, and 2 GPU nodes.
- Total Core Count: 2048
- Total Memory: 16.256TB
- Compute Nodes: The 25 standard compute nodes each contain two 32-core AMD EPYC processors, 384GB of memory, and 2TB of local disk.
- High-Memory Nodes: The 2 high-memory nodes each contain 768GB of memory.
- Ultrahigh-Memory Nodes: The 2 ultrahigh-memory nodes each contain 1.5TB of memory.
- GPU Nodes: The GPU nodes contain two 48-core Intel Xeon Platinum processors, 2TB of memory, and 2TB of local disk. One node has four NVIDIA H100 GPUs; the other has two.
- Storage: The Crest cluster provides 100TB of storage space mounted at /home for program development and job procedures. A separate high-performance Research Storage Cluster provides over 2PB of storage space mounted at /work for large data sets and large work files. Each node also has a local high-speed /scratch area on its NVMe drive (see the example job sketch after this list).
- Interconnect: Nodes are interconnected with 100Gb/s Mellanox InfiniBand in a one-level, non-blocking topology.
- Job Management: Submitted jobs are scheduled and run through SLURM; a minimal submission sketch follows.
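As a rough sketch of how the pieces above fit together (a GPU node, the /scratch and /work file systems, and SLURM submission), the Python snippet below writes a small batch script and submits it with sbatch. The partition name gpu and the /work/$USER results path are illustrative assumptions rather than documented Crest settings; confirm the actual partition names and directory conventions on the login node before using it.

    import subprocess
    from pathlib import Path

    # Minimal GPU job sketch. The "gpu" partition name and the /work/$USER
    # results directory are assumptions for illustration only.
    batch_script = """#!/bin/bash
    #SBATCH --job-name=h100-test
    #SBATCH --partition=gpu
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --gres=gpu:1
    #SBATCH --time=00:30:00
    #SBATCH --output=%x-%j.out

    # Work in the node-local NVMe /scratch space, then copy results to /work.
    WORKDIR=/scratch/$USER/$SLURM_JOB_ID
    mkdir -p "$WORKDIR"
    cd "$WORKDIR"

    # Confirm the allocated GPU is visible to the job.
    nvidia-smi > gpu-info.txt

    mkdir -p /work/$USER
    cp -r "$WORKDIR" /work/$USER/results-$SLURM_JOB_ID
    """

    # Write the batch script to disk and submit it through SLURM;
    # sbatch prints the new job ID on success.
    script_path = Path("gpu_job.sbatch")
    script_path.write_text(batch_script)
    subprocess.run(["sbatch", str(script_path)], check=True)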