AMAX NVIDIA DGX A100 Deep Learning Console
The third generation of the world’s most advanced AI system, unifying all AI workloads.
AVAILABLE IN STOCK
Prices start from: Call for price
Prices shown are for the United Arab Emirates only and exclude VAT at the 5% local rate (unless otherwise stated).
Other countries may incur additional import duties; please contact us for details and pricing for your location.
The NVIDIA DGX™ A100 is an essential building block for a data centre. It is a universal system for all AI workloads, offering unprecedented compute density, performance, and flexibility in a 5 petaFLOPS AI system. NVIDIA DGX A100 features the world’s most advanced accelerator, the NVIDIA A100 Tensor Core GPU, enabling enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure that includes direct access to NVIDIA AI experts.
Every business needs to transform using artificial intelligence (AI), not only to survive but to thrive in challenging times. However, the enterprise requires a platform for AI infrastructure that improves upon traditional approaches, which historically involved slow computing architectures that were siloed by analytics, training, and inference workloads. The old approach created complexity, drove up costs, constrained speed of scale, and was not ready for modern AI. Enterprises, developers, data scientists, and researchers need a new platform that unifies all AI workloads, simplifying infrastructure and accelerating ROI.
DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system. DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU (MIG) capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. This ensures that the largest and most complex jobs are supported, along with the simplest and smallest. Running the DGX software stack with optimized software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for both single-node deployments and large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
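To illustrate the right-sizing idea, here is a minimal capacity-planning sketch for a single A100 80GB, assuming the standard MIG profile table from NVIDIA's MIG documentation (each A100 exposes seven compute slices; the `fits` helper and the profile dictionary are illustrative, not part of any NVIDIA API):

```python
# Sketch: checking whether a requested mix of MIG instances fits on one
# A100 80GB GPU. Profile names and slice counts follow NVIDIA's published
# MIG geometry; SLICES_PER_GPU is the 7 compute slices per A100.
PROFILES = {  # profile name -> (compute slices, memory in GB)
    "1g.10gb": (1, 10),
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}
SLICES_PER_GPU = 7

def fits(plan):
    """Return True if the requested MIG instances fit on one A100 80GB."""
    used = sum(PROFILES[p][0] for p in plan)
    return used <= SLICES_PER_GPU

print(fits(["3g.40gb", "3g.40gb", "1g.10gb"]))  # True: 3 + 3 + 1 = 7 slices
print(fits(["4g.40gb", "4g.40gb"]))             # False: 8 slices > 7
```

In practice an administrator would create such instances with `nvidia-smi mig` on the system itself; the sketch only captures the slice-budget arithmetic behind right-sizing.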
GPUs | 8x NVIDIA A100 80GB Tensor Core GPUs
GPU Memory | 640GB total
Performance | 5 petaFLOPS AI; 10 petaOPS INT8
NVIDIA NVSwitches | 6
Power | 6.5 kW max
CPUs | Dual AMD Rome 7742, 128 cores total, 2.25 GHz (base), 3.4 GHz (max boost)
System Memory | 2 TB
Networking | 8x single-port NVIDIA ConnectX-7 200 Gb/s InfiniBand and 2x dual-port NVIDIA ConnectX-7 VPI 10/25/50/100/200 Gb/s Ethernet, or 8x single-port NVIDIA ConnectX-6 VPI 200 Gb/s InfiniBand and 2x dual-port NVIDIA ConnectX-6 VPI 10/25/50/100/200 Gb/s Ethernet
Storage | OS: 2x 1.92 TB M.2 NVMe drives; Internal: 30 TB (8x 3.84 TB) U.2 NVMe drives
Software | Ubuntu Linux OS; also supports Red Hat Enterprise Linux and CentOS
System Weight | 271.5 lbs (123.16 kg) max
Packed System Weight | 359.7 lbs (163.16 kg) max
Dimensions | Height: 10.4 in (264.0 mm); Width: 19.0 in (482.3 mm) max; Length: 35.3 in (897.1 mm) max
Operating Temperature Range | 5ºC to 30ºC (41ºF to 86ºF)