About

HPC Mission

To provide scalable high performance computing clusters for researchers, faculty, students, and affiliates of Texas A&M University-Corpus Christi.

System Overview

NEW HPC:

Crest - based on Red Hat Enterprise Linux 9, utilizing Bright Cluster Manager and the SLURM Workload Manager to manage a mix of general compute and GPU-enabled nodes.


OLD HPC:

Tsunami - based on Red Hat Enterprise Linux 7, utilizing Bright Cluster Manager and the SLURM Workload Manager to manage a mix of general compute and GPU-enabled nodes.

Funding

The HPC clusters are made possible by a grant from the National Science Foundation.

Technical Specs

Crest:

The Crest high performance cluster consists of 1 login node, 2 head nodes, 2 high memory compute nodes, 2 ultrahigh memory compute nodes, 25 standard compute nodes, and 1 GPU node.

  • Total Core Count: 1952
  • Total Memory: 16.256TB
  • Compute Nodes: The 25 standard compute nodes each contain two 32-core AMD EPYC processors, 384GB of memory, and 2TB of local disk.
    • The 2 high memory nodes each contain 768GB of memory.
    • The 2 ultrahigh memory nodes each contain 1.5TB of memory.
  • GPU Node: The GPU node contains two 48-core Intel Xeon Platinum processors and four NVIDIA H100 GPUs. It also has 2TB of memory and 2TB of local disk.
  • Storage: The Crest cluster provides 100TB of storage space mounted on /home for program development and job procedures. There is also a high performance Research Storage Cluster that provides over 2PB of storage space mounted on /work for large data sets and large work files. Each node also has a local high-speed /scratch area on its NVMe drive.
  • Interconnect: Nodes are interconnected with Mellanox 100Gb/s InfiniBand in a one-level, non-blocking topology.
  • Job Management: Submitted jobs are handled through SLURM; a sample batch script is sketched below.

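As an illustration of how the storage areas and SLURM fit together, below is a minimal sketch of a batch script that stages input from /work, runs on a node's local NVMe /scratch space, and requests one GPU. The partition name, the directory layout under /scratch, and the program and input names are assumptions for illustration only; the actual partition names and policies on Crest should be confirmed with sinfo and the cluster documentation.

    #!/bin/bash
    #SBATCH --job-name=example_job
    #SBATCH --nodes=1                    # single node
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8            # a subset of the node's 64 cores
    #SBATCH --partition=gpu              # hypothetical partition name; check sinfo
    #SBATCH --gres=gpu:1                 # request one of the node's H100 GPUs
    #SBATCH --mem=32G
    #SBATCH --time=02:00:00
    #SBATCH --output=%x_%j.out           # log file named from job name and job ID

    # Stage input from the shared /work file system to the node-local /scratch drive.
    # The per-user/per-job directory layout shown here is an assumption.
    SCRATCHDIR=/scratch/$USER/$SLURM_JOB_ID
    mkdir -p "$SCRATCHDIR"
    cp /work/$USER/input.dat "$SCRATCHDIR"/

    # Run the application (program and input names are placeholders).
    cd "$SCRATCHDIR"
    /home/$USER/my_application input.dat > results.out

    # Copy results back to /work before the job ends.
    cp results.out /work/$USER/

The script would be submitted and monitored with the standard SLURM commands:

    sbatch example_job.sh
    squeue -u $USER
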
Tsunami:

The Tsunami high performance cluster consists of 1 head/login node, 44 compute nodes, and 4 GPU nodes.

  • Total Core Count: 1288
  • Total Memory: 11.7TB
  • Compute Nodes: The 44 compute nodes each contain two Intel Xeon processors, 256GB of memory, and 1TB of local disk.
  • GPU Nodes: The 4 GPU nodes each contain two Intel Xeon processors and two NVIDIA Tesla K20Xm GPUs. Each also contains 256GB of memory and 1TB of local disk.
  • Storage: The Tsunami cluster provides 88TB of storage space mounted on /home for program development and job procedures. There is also a high performance Research Storage Cluster that provides over 1.2PB of storage space mounted on /work for large data sets and large work files. The Research Storage mount is also accessible from the C-RISE Science DMZ via the GridFTP server (see the transfer sketch after this list).
  • Interconnect: Nodes are interconnected with Mellanox FDR InfiniBand in a one-level, non-blocking topology.
  • Job Management: Submitted jobs are handled through SLURM.
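
Since the /work Research Storage mount is reachable from the C-RISE Science DMZ over GridFTP, large data sets can be moved with a standard GridFTP client such as globus-url-copy. A minimal sketch is shown below; the hostname is a placeholder rather than the real endpoint, the destination path assumes a per-user directory under /work, and any required credentials or allowed ports should be confirmed with HPC staff.

    # Copy a large file from a local workstation to the /work mount over GridFTP.
    # "gridftp.example.tamucc.edu" is a placeholder hostname, not the actual server.
    globus-url-copy -vb -p 4 \
        file:///data/local/bigfile.tar \
        gsiftp://gridftp.example.tamucc.edu/work/username/bigfile.tar

    # -vb prints transfer progress; -p 4 uses four parallel data streams.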