Hardware

Compute Resources

| Node | CPU Model | CPU Layout | Memory | GPU | OS |
|------|-----------|------------|--------|-----|----|
| darwin (head node) | Intel Xeon Silver 4210 | 20 CPUs, 1 socket, 10 cores/socket, 2 threads/core | 32 GB | | Rocky Linux 9.7 |
| node01 | Intel Xeon E5-2630 v4 | 40 CPUs, 2 sockets, 10 cores/socket, 2 threads/core | 128 GB | 2x NVIDIA GeForce RTX 2080 Ti | Rocky Linux 9.7 |
| node02 | Intel Xeon E5-2630 v4 | 40 CPUs, 2 sockets, 10 cores/socket, 2 threads/core | 128 GB | 2x NVIDIA GeForce RTX 2080 Ti | Rocky Linux 9.7 |
| node03 | Intel Xeon Gold 6240R | 96 CPUs, 2 sockets, 24 cores/socket, 2 threads/core | 512 GB | 2x NVIDIA RTX A5000 | Rocky Linux 9.7 |
| node04 | Intel Xeon Gold 5418Y | 96 CPUs, 2 sockets, 24 cores/socket, 2 threads/core | 256 GB | 2x NVIDIA RTX A5000 | Rocky Linux 9.7 |
| node05 | Intel Xeon Gold 5418Y | 96 CPUs, 2 sockets, 24 cores/socket, 2 threads/core | 256 GB | 1x NVIDIA GeForce RTX 2080 Ti, 2x NVIDIA GeForce RTX 3080 | Rocky Linux 9.7 |
| node06 | Intel Xeon E5-2630 v4 | 40 CPUs, 2 sockets, 10 cores/socket, 2 threads/core | 448 GB | 1x NVIDIA RTX A4500 | Rocky Linux 9.7 |
| node08 | Intel Xeon E5-2630 v2 | 24 CPUs, 2 sockets, 6 cores/socket, 2 threads/core | 96 GB | 2x NVIDIA GeForce RTX 2080 Ti | Rocky Linux 9.7 |
| node09 | Intel Xeon X5675 | 24 CPUs, 2 sockets, 6 cores/socket, 2 threads/core | 32 GB | | Rocky Linux 9.7 |
| node10 | Intel Xeon X5675 | 24 CPUs, 2 sockets, 6 cores/socket, 2 threads/core | 32 GB | | Rocky Linux 9.7 |
| node11 | Intel Xeon X5675 | 24 CPUs, 2 sockets, 6 cores/socket, 2 threads/core | 32 GB | | Rocky Linux 9.7 |
| node12 | Intel Xeon X5675 | 24 CPUs, 2 sockets, 6 cores/socket, 2 threads/core | 64 GB | | Rocky Linux 9.7 |
| node14 | Intel Xeon Gold 6526Y | 64 CPUs, 2 sockets, 16 cores/socket, 2 threads/core | 512 GB | 2x NVIDIA RTX A5000, 1x NVIDIA RTX A6000 | Rocky Linux 9.7 |
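The logical CPU counts in the "CPU Layout" column follow directly from sockets × cores/socket × threads/core (Hyper-Threading doubles the core count). A small sanity check, with a few representative nodes transcribed from the table:

```python
# Check that logical CPU counts equal sockets * cores_per_socket * threads_per_core.
# Tuples are (total_cpus, sockets, cores_per_socket, threads_per_core), from the table.
nodes = {
    "darwin": (20, 1, 10, 2),
    "node01": (40, 2, 10, 2),
    "node03": (96, 2, 24, 2),
    "node08": (24, 2, 6, 2),
    "node14": (64, 2, 16, 2),
}

for name, (cpus, sockets, cores, threads) in nodes.items():
    assert cpus == sockets * cores * threads, f"mismatch on {name}"
    print(f"{name}: {sockets} x {cores} x {threads} = {cpus} logical CPUs")
```

On a live node the same numbers appear in the `Socket(s)`, `Core(s) per socket`, and `Thread(s) per core` fields of `lscpu` output.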
Performance estimate

~415.7 TFLOPS (FP32): 383.0 TFLOPS from GPUs + 32.7 TFLOPS from CPUs
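The GPU share of this estimate can be reproduced from NVIDIA's published peak FP32 throughput per card; the per-card figures below are from vendor spec sheets, and the inventory is taken from the table above:

```python
# Peak FP32 throughput per card (TFLOPS), from NVIDIA spec sheets.
gpu_tflops = {
    "GeForce RTX 2080 Ti": 13.45,
    "GeForce RTX 3080": 29.77,
    "RTX A4500": 23.7,
    "RTX A5000": 27.8,
    "RTX A6000": 38.7,
}

# Card counts across all compute nodes (from the table above).
inventory = {
    "GeForce RTX 2080 Ti": 7,  # node01, node02, node08 (2 each) + node05 (1)
    "GeForce RTX 3080": 2,     # node05
    "RTX A4500": 1,            # node06
    "RTX A5000": 6,            # node03, node04, node14 (2 each)
    "RTX A6000": 1,            # node14
}

total = sum(gpu_tflops[gpu] * count for gpu, count in inventory.items())
print(f"{total:.1f} TFLOPS")  # ~382.9, i.e. the quoted 383.0 after rounding
```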

Storage

| Storage | Purpose | Filesystem | Capacity |
|---------|---------|------------|----------|
| 12x HDD (RAID 6) | User data | ext4 | 90.96 TiB |
| SSD (NVMe) | Software | ext4 | 1.8 TB |
| SSD (SATA) | System | XFS | 877 GB |
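The 90.96 TiB usable figure is consistent with RAID 6 over twelve 10 TB drives (the per-drive size is an assumption inferred from the total, not stated in the source): RAID 6 spends two drives' worth of capacity on parity, and drive vendors quote decimal TB while the table reports binary TiB.

```python
# RAID 6 usable capacity: (n_drives - 2 parity drives) * drive_size.
# 10 TB per drive is an assumption inferred from the 90.96 TiB total.
n_drives = 12
drive_bytes = 10e12                       # 10 TB, decimal bytes as quoted by vendors
usable_bytes = (n_drives - 2) * drive_bytes
usable_tib = usable_bytes / 2**40         # convert decimal TB to binary TiB
print(f"{usable_tib:.2f} TiB")            # ~90.95, matching the table's 90.96 TiB
```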

Networks

Approximate Darwin network architecture (scheme from the TrinityX documentation):

The only differences are that Darwin has a single head node (controller) and that the head node is also connected to the InfiniBand network.

Ethernet
InfiniBand

Switch: 36-port Mellanox InfiniScale IV. Speed: 40 Gb/s (4X QDR). Used for high-speed, low-latency communication between compute nodes.
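For reference, the 40 Gb/s figure is the raw signaling rate of a 4X QDR link (4 lanes at 10 Gb/s each); QDR links use 8b/10b line encoding, so the usable data rate is somewhat lower:

```python
# 4X QDR InfiniBand link: 4 lanes, 10 Gb/s signaling rate per lane.
lanes = 4
signal_gbps = 10.0             # QDR signaling rate per lane
raw = lanes * signal_gbps      # 40 Gb/s, the figure quoted above
data = raw * 8 / 10            # 8b/10b encoding -> 32 Gb/s of payload bandwidth
print(f"raw: {raw:.0f} Gb/s, effective: {data:.0f} Gb/s")
```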