NVIDIA DGX H100 powers business innovation and optimization. The latest iteration of NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX H100 is an AI powerhouse that features the groundbreaking NVIDIA H100 Tensor Core GPU. The system is designed to maximize AI throughput, providing enterprises with a highly refined, systemized, and scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. Available on-premises and through a wide variety of access and deployment options, DGX H100 delivers the performance needed for enterprises to solve the biggest challenges with AI.
| Specification | Description |
|---|---|
| GPU | 8x NVIDIA H100 Tensor Core GPUs (SXM5) |
| GPU memory | 640GB total |
| Performance | 32 petaFLOPS FP8 |
| NVIDIA® NVSwitch™ | 4x |
| System power usage | ~10.2kW max |
| CPU | Dual 56-core 4th Gen Intel® Xeon® Scalable processors |
| System memory | 2TB |
| Networking | 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI (400Gb/s InfiniBand or 200Gb/s Ethernet); 2x dual-port NVIDIA ConnectX-7 VPI (1x 400Gb/s InfiniBand, 1x 200Gb/s Ethernet) |
| Management network | 10Gb/s onboard NIC with RJ45; optional 50Gb/s Ethernet NIC; host baseboard management controller (BMC) with RJ45 |
| Storage | OS: 2x 1.9TB NVMe M.2; internal storage: 8x 3.84TB NVMe U.2 |
| Software | NVIDIA AI Enterprise (optimized AI software); NVIDIA Base Command (orchestration, scheduling, and cluster management); Ubuntu / Red Hat Enterprise Linux / CentOS (operating system) |
| Support | 3-year business-standard hardware and software support |
| Operating temperature range | 5–30°C (41–86°F) |
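As a sanity check on the system totals above, the per-GPU figures can be derived directly from the table. This is a minimal sketch; the variable names are illustrative and the division simply assumes the totals are spread evenly across the 8 GPUs:

```python
# Derive per-GPU figures from the DGX H100 system totals listed above.
NUM_GPUS = 8                   # "8x NVIDIA H100 Tensor Core GPUs (SXM5)"
TOTAL_GPU_MEMORY_GB = 640      # "GPU memory: 640GB total"
TOTAL_FP8_PFLOPS = 32          # "Performance: 32 petaFLOPS FP8"

memory_per_gpu_gb = TOTAL_GPU_MEMORY_GB // NUM_GPUS   # 80GB per H100 SXM5
fp8_per_gpu_pflops = TOTAL_FP8_PFLOPS / NUM_GPUS      # 4 petaFLOPS FP8 per GPU

print(memory_per_gpu_gb, fp8_per_gpu_pflops)  # 80 4.0
```

The result matches the published per-GPU figures for the H100 SXM5: 80GB of HBM3 and roughly 4 petaFLOPS of FP8 throughput (with sparsity).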