
DIHUBMT HPCMT

The DiHubMT HPC is a high-performance computing environment engineered to handle large-scale computational and data-driven workloads. It enables users to accelerate scientific discovery, machine learning and engineering innovation.

Ready to get started?

Our services deliver high-performance computing solutions and infrastructure to power innovation across industries.

HPC - INFRASTRUCTURE

Compute

  • GPU node (1x GPU Server):

    • CPUs: 2 x 56-Core 4th Gen Intel Xeon Scalable Processors.

    • GPUs: 8 x NVIDIA H100 SXM5 GPUs with 80 GB of HBM3 memory each, delivering ~2 PFLOPS of FP16 Tensor Core performance per GPU (with sparsity), or ~15.8 PFLOPS per node in total (see the quick check after this list).

    • System RAM: 1024 GB of high-speed DDR5 memory.

    • Storage: Multi-terabyte NVMe SSD storage for ultra-fast data access.

  • General Compute Cluster (3x Compute Nodes):

    • CPUs: 2 x 32-Core CPUs per node.

    • System RAM: 768 GB of DDR5 memory per node.

  • Large Memory Node (1x Fat Memory Node):

    • CPUs: 4 x 32-Core CPUs.

    • System RAM: A massive 4096 GB (4 TB) of DDR5 memory.
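As a quick sanity check on the FP16 figure quoted for the GPU node, the aggregate number follows from multiplying the per-GPU peak by the GPU count. The per-GPU value used below is NVIDIA's published H100 SXM5 FP16 Tensor Core peak with sparsity; it is an assumption drawn from public specifications rather than a measured figure for this system.

```python
# Back-of-the-envelope check of the GPU node's aggregate FP16 peak.
# Assumption: ~1.979 PFLOPS per H100 SXM5 (FP16 Tensor Core, with sparsity).
gpus_per_node = 8
fp16_pflops_per_gpu = 1.979

total_pflops = gpus_per_node * fp16_pflops_per_gpu
print(f"Aggregate FP16 peak per GPU node: ~{total_pflops:.1f} PFLOPS")  # ~15.8 PFLOPS
```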

Storage

  • Fast Tier: Two all-flash metadata nodes equipped with a total of 36 x 3.84 TB NVMe SSDs for extreme I/O performance.

  • High-Capacity Tier: Two storage arrays providing over 640 TB of raw capacity using 160 x 4 TB SAS drives, accelerated by an SSD cache (a quick capacity check follows this list). The arrays feature dual controllers for high availability and support auto-tiering, snapshots, and thin provisioning.

  • Staging Area: Two dedicated, all-flash NAS arrays provide a high-speed staging environment for data ingestion and preparation.
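The raw capacities listed above follow directly from the drive counts and sizes. The short check below uses decimal terabytes and ignores RAID and filesystem overhead, so usable capacity will be lower.

```python
# Raw capacity per storage tier, as listed above (decimal TB, before RAID
# and filesystem overhead).
fast_tier_tb = 36 * 3.84      # all-flash metadata nodes: ~138 TB of NVMe
capacity_tier_tb = 160 * 4.0  # high-capacity SAS arrays: 640 TB raw

print(f"Fast tier:          ~{fast_tier_tb:.1f} TB raw")
print(f"High-capacity tier:  {capacity_tier_tb:.0f} TB raw")
```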

Networking 

The entire cluster is interconnected via an NVIDIA Quantum-2 NDR InfiniBand switch, ensuring ultra-low latency and high-bandwidth communication between all nodes, which is critical for distributed computing workloads (a minimal usage sketch follows at the end of this section).

Redundant 25 Gb/s Ethernet fabric for production and management traffic.
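To illustrate how a typical multi-GPU job exercises the InfiniBand fabric, the sketch below runs an all-reduce across GPUs using PyTorch's NCCL backend, which picks up InfiniBand automatically when it is available. This is a minimal, hypothetical example rather than DiHubMT-specific documentation: the availability of PyTorch, the use of `torchrun` (e.g. `torchrun --nproc_per_node=8 allreduce_check.py`), and any scheduler or module setup are assumptions about a typical deployment.

```python
import os

import torch
import torch.distributed as dist


def main():
    # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR/PORT and LOCAL_RANK;
    # NCCL uses the InfiniBand fabric when it is present.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor of ones; after the all-reduce every
    # rank holds the element-wise sum, i.e. a tensor filled with world_size.
    x = torch.ones(1024, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print(f"world_size={dist.get_world_size()}, x[0]={x[0].item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```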
