HPC Mission

To provide scalable high performance computing clusters for researchers, faculty, students, and affiliates of Texas A&M University-Corpus Christi.

System Overview

Tsunami - Red Hat Enterprise Linux 7 based, utilizing Bright Cluster Manager and the SLURM Workload Manager to manage a mix of general compute and GPU-enabled nodes.
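
As a brief illustration of how SLURM exposes this mix of node types (the exact partition names are site-specific and not documented on this page), the `sinfo` command can list partitions, node counts, and any GPU resources:

```shell
# List each partition with its node count and generic resources (GRES).
# On GPU-enabled nodes the GRES column shows an entry such as gpu:2;
# plain compute nodes show (null).
sinfo -o "%P %D %G"
```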


HPC is made possible by a grant from the National Science Foundation.

Technical Specs


The Tsunami high performance cluster consists of 1 head/login node, 44 compute nodes, and 4 GPU nodes.

  • Total Core Count: 1288
  • Total Memory: 11.7 TB
  • Compute Nodes: The 44 compute nodes each contain two Xeon processors, 256 GB of memory, and 1 TB of local disk.
  • GPU Nodes: The 4 GPU nodes each contain two Xeon processors and two NVIDIA Tesla K20XM GPUs. Like the compute nodes, each GPU node also contains 256 GB of memory and 1 TB of local disk.
  • Storage: The Tsunami cluster provides 88 TB of storage space mounted on /home for program development and job procedures. There is also a high performance Research Storage Cluster that provides over 1.2 PB of storage space mounted on /work for large data sets and large work files. The Research Storage mount is also accessible from the C-RISE Science DMZ via the GridFTP server.
  • Interconnect: Nodes are interconnected with Mellanox FDR InfiniBand in a one-level, non-blocking topology.
  • Job Management: Jobs are submitted to and scheduled by the SLURM Workload Manager.
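
As a sketch of how a job might be submitted through SLURM on this cluster (the partition name, directory under /work, and executable name below are illustrative assumptions, not details documented here):

```shell
#!/bin/bash
# Minimal SLURM batch script (e.g. saved as job.slurm); all values are
# examples and should be adjusted to the actual site configuration.
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=1                 # run on a single compute node
#SBATCH --ntasks=8                # eight tasks (e.g. MPI ranks)
#SBATCH --time=01:00:00           # one-hour wall-clock limit
#SBATCH --output=example-%j.out   # %j expands to the job ID

# Per the storage description above, large data sets belong on the
# /work mount; /home is intended for program development.
# (The directory and program names here are hypothetical.)
cd /work/$USER/example

srun ./my_program
```

The script would be submitted with `sbatch job.slurm` and monitored with `squeue -u $USER`. A job targeting the GPU nodes would additionally request GPUs, for example `#SBATCH --gres=gpu:2`.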