
NVIDIA DGX A100 Deep Learning Console

$129,000.00

+ Free Shipping

✓ GPU: 8x NVIDIA A100 80GB Tensor Core GPUs

✓ GPU Memory: 640 GB total

✓ Performance: 5 petaFLOPS AI – 10 petaOPS INT8

✓ NVIDIA NVSwitches: 6

✓ Power: 6.5 kW max

✓ CPU: Dual AMD Rome 7742 – 128 cores total – 2.25 GHz (base) – 3.4 GHz (max boost)

✓ System Memory: 2 TB

✓ Networking:

8x single-port NVIDIA ConnectX-7 or ConnectX-6 VPI 200 Gb/s InfiniBand

2x dual-port NVIDIA ConnectX-7 or ConnectX-6 VPI 10/25/50/100/200 Gb/s Ethernet

✓ Storage:

OS: 2x 1.92 TB M.2 NVMe

Internal: 30 TB (8x 3.84 TB) U.2 NVMe

✓ System Software:

Primary: Ubuntu Linux

Also supported: Red Hat Enterprise Linux, CentOS

✓ System Weight: 271.5 lbs (123.16 kg)

✓ Packaged System Weight: 359.7 lbs (163.16 kg)

✓ Dimensions: Height: 10.4 in – Width: 19.0 in – Length: 35.3 in

✓ Operating Temperature Range: 5°C to 30°C (41°F to 86°F)

✓ AI Suite: NVIDIA AI Enterprise license suite included (part number DGXH-G640F+P2CMI36)

✓ Warranty: 2-year manufacturer return-to-base (new); 6-month return-to-base (used)

NVIDIA DGX A100

Every business needs to transform using artificial intelligence (AI), not only to survive, but to thrive in challenging times. However, the enterprise requires a platform for AI infrastructure that improves upon traditional approaches, which historically involved slow compute architectures that were siloed by analytics, training, and inference workloads. The old approach created complexity, drove up costs, constrained speed of scale, and was not ready for modern AI. Enterprises, developers, data scientists, and researchers need a new platform that unifies all AI workloads, simplifying infrastructure and accelerating ROI.

The Universal System for Every AI Workload
DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy compute infrastructure with a single, unified system. DGX A100 also delivers fine-grained allocation of computing power, using the Multi-Instance GPU (MIG) capability of the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. This ensures that the largest and most complex jobs are supported along with the simplest and smallest. Running the DGX software stack with optimized software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for both single-node deployments and large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
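As a rough illustration of the Multi-Instance GPU allocation described above, the sketch below shows how an administrator might partition one of the system's A100 GPUs from Python by driving the standard nvidia-smi MIG subcommands through subprocess. This is a minimal sketch, not NVIDIA's tooling: the choice of GPU index 0, the 1g.10gb and 3g.40gb profile names (valid on the 80 GB A100), and the assumption of root privileges are illustrative only.

    # Minimal sketch: partition GPU 0 of a DGX A100 into MIG instances.
    # Assumptions (not from this listing): root privileges, a recent NVIDIA
    # driver with the `nvidia-smi mig` subcommands, and an 80 GB A100 so the
    # 1g.10gb / 3g.40gb profiles exist.
    import subprocess

    def run(cmd):
        # Echo the command, run it, and return its stdout for inspection.
        print("+", " ".join(cmd))
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        print(result.stdout)
        return result.stdout

    # 1. Enable MIG mode on GPU 0 (takes effect once the GPU is idle or reset).
    run(["nvidia-smi", "-i", "0", "-mig", "1"])

    # 2. List the GPU instance profiles this device supports.
    run(["nvidia-smi", "mig", "-i", "0", "-lgip"])

    # 3. Create two 1g.10gb instances and one 3g.40gb instance, along with
    #    their default compute instances (-C).
    run(["nvidia-smi", "mig", "-i", "0", "-cgi", "1g.10gb,1g.10gb,3g.40gb", "-C"])

    # 4. Show the resulting MIG devices; their UUIDs can be passed to
    #    CUDA_VISIBLE_DEVICES or a container runtime to right-size workloads.
    run(["nvidia-smi", "-L"])

In the Slurm and Kubernetes clusters mentioned above, this partitioning is more commonly handled by cluster tooling (for example, NVIDIA's GPU Operator and its MIG manager) than by hand-run scripts; the listed MIG device UUIDs are what individual jobs or containers are ultimately pinned to.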

