NVIDIA DGX GH200 Grace Hopper Superchip Server

$375,000.00

+ Free Shipping

✓ 256x NVIDIA Grace Hopper Superchips

✓ 18,432 Arm® Neoverse V2 Cores

✓ 144TB GPU Memory

✓ 1 exaFLOPS Performance

✓ Comprehensive Networking Suite

✓ NVIDIA NVLink Switch System

✓ Host baseboard management controller

✓ Includes NVIDIA AI Enterprise, NVIDIA Base Command, and various OS options

✓ Three-year standard support

✓ Super power-efficient computing

✓ Fully integrated and ready-to-run solution

✓ Three-year return-to-base repair and servicing warranty


Transformational AI Compute with NVIDIA DGX GH200 Grace Hopper

The NVIDIA DGX GH200 Grace Hopper is next-generation AI infrastructure built to accelerate the most compute-intensive workloads on the planet. Built on NVIDIA’s Grace Hopper architecture, the system pairs Hopper-architecture H100 Tensor Core GPU compute with NVIDIA Grace CPUs over NVLink-C2C, giving the whole system a unified, coherent memory space and dramatically faster CPU-GPU data movement. The result is a platform that doesn’t just support AI: it raises the ceiling for AI training, data analytics, and high-performance computing.

Specifications

Processor: NVIDIA GH200 Grace™ Hopper™ Superchip
Processor Family: NVIDIA Grace™, 72 Arm® Neoverse V2 cores
Max. TDP Support: 1000W
Number of Processors: 1 processor
Internal Interconnect: NVIDIA® NVLink®-C2C, 900GB/s
Form Factor: 2U rackmount
Dimensions (W x H x D): 17.24″ x 3.44″ x 35.43″ / 438 x 87.5 x 900 mm
Storage: 4x E1.S NVMe SSDs
Memory: Up to 480GB LPDDR5X embedded, 96GB HBM3 GPU memory
Expansion Slots: 3x PCIe 5.0 x16 FHFL dual-width slots (includes PCIe CX7 NIC, 2-port 200G QSFP112)
Front I/O: Power/ID/Reset buttons, Power/ID/Status LEDs, 2x USB 3.0 ports, 1x VGA port
Storage Controller: Broadcom HBA 9500 Series storage adapter (includes 9500-16i HBA)
Power Supply: 1+1 high-efficiency hot-plug 2000W PSUs, 80 Plus Titanium
Onboard Storage: 2x 22110/2280 PCIe M.2 (includes E1.S 1.92TB + M.2 960GB SSD)
Fans: 5x 6056 dual-rotor fans (N+1 redundant)
Rear I/O: 1x USB 3.0, 1x Mini DisplayPort, 1x ID LED, 1x power button/power LED, 1x COM port (micro USB type-B), 1x RJ45 management port
Operating Environment: Operating temperature: 5°C to 35°C (41°F to 95°F); non-operating temperature: -40°C to 70°C (-40°F to 158°F); operating relative humidity: 20% to 85% RH; non-operating relative humidity: 10% to 95% RH
TPM: TPM 2.0 SPI module (optional)
CPU and GPU: 256x NVIDIA Grace Hopper Superchips
CPU Cores: 18,432 Arm® Neoverse V2 cores with SVE2 (4x 128-bit)
GPU Memory: 144TB
Performance: 1 exaFLOPS
Networking: 256x single-port NVIDIA ConnectX®-7 VPI (OSFP) with 400Gb/s InfiniBand; 256x dual-port NVIDIA BlueField®-3 VPI with 200Gb/s InfiniBand and Ethernet; 24x NVIDIA Quantum-2 QM9700 InfiniBand switches; 20x NVIDIA Spectrum™ SN2201 Ethernet switches; 22x NVIDIA Spectrum SN3700 Ethernet switches
NVIDIA NVLink Switch System: 96x L1 NVIDIA NVLink switches, 36x L2 NVIDIA NVLink switches
Management Network: Host baseboard management controller (BMC) with RJ45
Software: NVIDIA AI Enterprise (optimized AI software); NVIDIA Base Command (orchestration, scheduling, and cluster management); DGX OS / Ubuntu / Red Hat Enterprise Linux / Rocky Linux (operating system)
Support: Three-year business-standard hardware and software support
Power Efficiency: Built on the NVIDIA Grace Hopper architecture for power-efficient computing. Each Grace Hopper Superchip combines a CPU and a GPU in one package, connected by high-speed NVIDIA NVLink-C2C. The Grace™ CPU uses LPDDR5X memory, which consumes one-eighth the power of traditional DDR5 system memory while delivering 50 percent more bandwidth than eight-channel DDR5.
Integration: Fully tested and integrated solution spanning software, compute, and networking, with white-glove services from installation and infrastructure management to expert advice on optimizing workloads.
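The cluster-level headline figures follow directly from the per-superchip numbers in the table above. A quick arithmetic sketch (assuming the listed 480GB LPDDR5X plus 96GB HBM3 per superchip):

```python
# Sanity-check the DGX GH200 headline figures from the per-superchip specs.
superchips = 256
cores_per_cpu = 72   # Arm Neoverse V2 cores per Grace CPU
lpddr5x_gb = 480     # CPU-attached LPDDR5X per superchip (GB)
hbm3_gb = 96         # GPU-attached HBM3 per superchip (GB)

total_cores = superchips * cores_per_cpu
total_memory_tb = superchips * (lpddr5x_gb + hbm3_gb) / 1024  # GB -> TB (binary)

print(total_cores)      # 18432 cores, matching the spec table
print(total_memory_tb)  # 144.0 TB of NVLink-addressable memory
```

Both totals line up with the spec table: 256 x 72 cores gives 18,432, and 256 x 576GB of combined LPDDR5X and HBM3 gives 144TB, which is the unified memory pool the NVLink Switch System exposes across the cluster.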

Built for AI Megaprojects and Beyond

The DGX GH200 seamlessly combines high-bandwidth memory, CPU-GPU tight integration, and massive scale potential into a single system optimized for next-gen large language models, recommendation engines, and simulation workloads. Its ultra-fast NVLink and NVSwitch interconnect fabric provides lightning-quick communication across GPUs and CPUs, removing bottlenecks that have traditionally hampered training efficiency at scale. It’s not just powerful—it’s intelligent architecture built for the future.


Explore Complementary Powerhouses

If you’re looking for a modular setup, the NVIDIA DGX H100 Deep Learning Console 640GB SXM5 is a ready-to-deploy AI solution with extensive GPU resources. For professionals scaling research and workloads in a more compact design, the NVIDIA DGX Station 4X A100 160GB brings elite workstation-level performance. And if you need ultimate capacity in a production environment, the NVIDIA DGX H800 640GB SXM5 2TB is a top contender with unmatched throughput.


Ready to Empower Your AI Infrastructure

Whether you’re leading advanced LLM research, building enterprise-scale AI platforms, or accelerating national computing initiatives, the DGX GH200 is your competitive edge. This system is built for visionaries who demand faster insights, deeper learning, and scalable infrastructure that adapts to tomorrow’s challenges—today.
