NVIDIA Announces DGX Station A100 With Upgraded 80 GB A100 Tensor Core GPUs, Up To 320 GB Memory & 2.5 Petaflops of AI Horsepower

Hassan Mujtaba

NVIDIA has just announced its 2nd Generation DGX Station AI server based on the Ampere A100 Tensor Core GPU. The DGX Station A100 comes in two configurations and features updated A100 Tensor Core GPUs that pack double the memory and multiple Petaflops of AI horsepower.

NVIDIA Unveils 2nd Generation DGX Station A100 AI Server - Now Packs Updated 80 GB A100 Tensor Core GPUs & Multi-Petaflops of Performance

The NVIDIA DGX Station A100 is aimed at the AI market, accelerating machine learning and data science performance for corporate offices, research facilities, labs, or home offices everywhere. According to NVIDIA, the DGX Station A100 is designed to be the fastest server in a box dedicated to AI research.


DGX Station Powers AI Innovation

Organizations around the world have adopted DGX Station to power AI and data science across industries such as education, financial services, government, healthcare, and retail. These AI leaders include:

  • BMW Group Production is using NVIDIA DGX Stations to explore insights faster as they develop and deploy AI models that improve operations.
  • DFKI, the German Research Center for Artificial Intelligence, is using DGX Station to build models that tackle critical challenges for society and industry, including computer vision systems that help emergency services respond rapidly to natural disasters.
  • Lockheed Martin is using DGX Station to develop AI models that use sensor data and service logs to predict the need for maintenance to improve manufacturing uptime, increase safety for workers, and reduce operational costs.
  • NTT Docomo, Japan's leading mobile operator with over 79 million subscribers, uses DGX Station to develop innovative AI-driven services such as its image recognition solution.
  • Pacific Northwest National Laboratory is using NVIDIA DGX Stations to conduct federally funded research in support of national security. Focused on technological innovation in energy resiliency and national security, PNNL is a leading U.S. HPC center for scientific discovery, energy resilience, chemistry, Earth science, and data analytics.

NVIDIA DGX Station A100 System Specifications

Coming to the specifications, the NVIDIA DGX Station A100 is powered by a total of four A100 Tensor Core GPUs. These aren't just any A100 GPUs, as NVIDIA has updated the original specs, accommodating twice the memory.

Each NVIDIA A100 Tensor Core GPU in the DGX Station A100 comes packed with 80 GB of HBM2e memory, twice the memory size of the original A100. This gives the DGX Station a total of 320 GB of available GPU memory while fully supporting MIG (Multi-Instance GPU) and 3rd Gen NVLink, offering 200 GB/s of bidirectional bandwidth between any GPU pair, three times faster than PCIe Gen 4. The rest of the specs for the A100 Tensor Core GPUs remain the same.
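The memory and interconnect figures above can be sanity-checked with some quick arithmetic. Note that the ~64 GB/s bidirectional figure for a PCIe Gen 4 x16 link is my own assumption (a commonly cited theoretical value), not a number from the article:

```python
# Sanity check of the DGX Station A100 memory and NVLink figures.
NUM_GPUS = 4
MEM_PER_GPU_GB = 80          # upgraded A100 with 80 GB HBM2e (article figure)

total_mem_gb = NUM_GPUS * MEM_PER_GPU_GB
print(total_mem_gb)          # 320 GB aggregate GPU memory, as quoted

NVLINK_PAIR_GBPS = 200       # bidirectional, per GPU pair (article figure)
PCIE4_X16_GBPS = 64          # ASSUMED bidirectional PCIe Gen 4 x16 bandwidth

# ~3.12x, consistent with the "3 times faster than PCIe Gen 4" claim
print(round(NVLINK_PAIR_GBPS / PCIE4_X16_GBPS, 2))
```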

[Images: NVIDIA DGX Station A100 official renders]

The system itself houses an AMD EPYC Rome 64-core CPU with full PCIe Gen 4 support, up to 512 GB of dedicated system memory, a 1.92 TB NVMe M.2 SSD for the OS, and up to 7.68 TB of NVMe U.2 SSD storage for the data cache. For connectivity, the system carries two 10 GbE LAN controllers and a single 1 GbE LAN port for remote management. Display output is provided through a discrete DGX Display Adapter card which offers four DisplayPort outputs with up to 4K resolution support. The AIC features its own active cooling solution.

Talking about cooling, the DGX Station A100 houses the A100 GPUs on the rear side of the chassis. All four GPUs and the CPU are cooled by a refrigerant-based system that is whisper-quiet and maintenance-free, with the compressor located within the DGX chassis.

NVIDIA HPC / AI GPUs

| NVIDIA Tesla Graphics Card | NVIDIA B200 | NVIDIA H200 (SXM5) | NVIDIA H100 (SXM5) | NVIDIA H100 (PCIe) | NVIDIA A100 (SXM4) | NVIDIA A100 (PCIe4) | Tesla V100S (PCIe) | Tesla V100 (SXM2) | Tesla P100 (SXM2) | Tesla P100 (PCIe) | Tesla M40 (PCIe) | Tesla K40 (PCIe) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU | B200 | H200 (Hopper) | H100 (Hopper) | H100 (Hopper) | A100 (Ampere) | A100 (Ampere) | GV100 (Volta) | GV100 (Volta) | GP100 (Pascal) | GP100 (Pascal) | GM200 (Maxwell) | GK110 (Kepler) |
| Process Node | 4nm | 4nm | 4nm | 4nm | 7nm | 7nm | 12nm | 12nm | 16nm | 16nm | 28nm | 28nm |
| Transistors | 208 Billion | 80 Billion | 80 Billion | 80 Billion | 54.2 Billion | 54.2 Billion | 21.1 Billion | 21.1 Billion | 15.3 Billion | 15.3 Billion | 8 Billion | 7.1 Billion |
| GPU Die Size | TBD | 814mm2 | 814mm2 | 814mm2 | 826mm2 | 826mm2 | 815mm2 | 815mm2 | 610mm2 | 610mm2 | 601mm2 | 551mm2 |
| SMs | 160 | 132 | 132 | 114 | 108 | 108 | 80 | 80 | 56 | 56 | 24 | 15 |
| TPCs | 80 | 66 | 66 | 57 | 54 | 54 | 40 | 40 | 28 | 28 | 24 | 15 |
| L2 Cache Size | TBD | 51200 KB | 51200 KB | 51200 KB | 40960 KB | 40960 KB | 6144 KB | 6144 KB | 4096 KB | 4096 KB | 3072 KB | 1536 KB |
| FP32 CUDA Cores Per SM | TBD | 128 | 128 | 128 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 192 |
| FP64 CUDA Cores / SM | TBD | 128 | 128 | 128 | 32 | 32 | 32 | 32 | 32 | 32 | 4 | 64 |
| FP32 CUDA Cores | TBD | 16896 | 16896 | 14592 | 6912 | 6912 | 5120 | 5120 | 3584 | 3584 | 3072 | 2880 |
| FP64 CUDA Cores | TBD | 16896 | 16896 | 14592 | 3456 | 3456 | 2560 | 2560 | 1792 | 1792 | 96 | 960 |
| Tensor Cores | TBD | 528 | 528 | 456 | 432 | 432 | 640 | 640 | N/A | N/A | N/A | N/A |
| Texture Units | TBD | 528 | 528 | 456 | 432 | 432 | 320 | 320 | 224 | 224 | 192 | 240 |
| Boost Clock | TBD | ~1850 MHz | ~1850 MHz | ~1650 MHz | 1410 MHz | 1410 MHz | 1601 MHz | 1530 MHz | 1480 MHz | 1329 MHz | 1114 MHz | 875 MHz |
| TOPs (DNN/AI) | 20,000 TOPs | 3958 TOPs | 3958 TOPs | 3200 TOPs | 2496 TOPs | 2496 TOPs | 130 TOPs | 125 TOPs | N/A | N/A | N/A | N/A |
| FP16 Compute | 10,000 TFLOPs | 1979 TFLOPs | 1979 TFLOPs | 1600 TFLOPs | 624 TFLOPs | 624 TFLOPs | 32.8 TFLOPs | 30.4 TFLOPs | 21.2 TFLOPs | 18.7 TFLOPs | N/A | N/A |
| FP32 Compute | 90 TFLOPs | 67 TFLOPs | 67 TFLOPs | 800 TFLOPs | 156 TFLOPs (19.5 TFLOPs standard) | 156 TFLOPs (19.5 TFLOPs standard) | 16.4 TFLOPs | 15.7 TFLOPs | 10.6 TFLOPs | 10.0 TFLOPs | 6.8 TFLOPs | 5.04 TFLOPs |
| FP64 Compute | 45 TFLOPs | 34 TFLOPs | 34 TFLOPs | 48 TFLOPs | 19.5 TFLOPs (9.7 TFLOPs standard) | 19.5 TFLOPs (9.7 TFLOPs standard) | 8.2 TFLOPs | 7.80 TFLOPs | 5.30 TFLOPs | 4.7 TFLOPs | 0.2 TFLOPs | 1.68 TFLOPs |
| Memory Interface | 8192-bit HBM4 | 5120-bit HBM3e | 5120-bit HBM3 | 5120-bit HBM2e | 6144-bit HBM2e | 6144-bit HBM2e | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 384-bit GDDR5 | 384-bit GDDR5 |
| Memory Size | Up To 192 GB HBM3 @ 8.0 Gbps | Up To 141 GB HBM3e @ 6.5 Gbps | Up To 80 GB HBM3 @ 5.2 Gbps | Up To 94 GB HBM2e @ 5.1 Gbps | Up To 40 GB HBM2 @ 1.6 TB/s / Up To 80 GB HBM2 @ 1.6 TB/s | Up To 40 GB HBM2 @ 1.6 TB/s / Up To 80 GB HBM2 @ 2.0 TB/s | 16 GB HBM2 @ 1134 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 732 GB/s / 12 GB HBM2 @ 549 GB/s | 24 GB GDDR5 @ 288 GB/s | 12 GB GDDR5 @ 288 GB/s |
| TDP | 700W | 700W | 700W | 350W | 400W | 250W | 250W | 300W | 300W | 250W | 250W | 235W |

NVIDIA DGX Station A100 System Performance

As for performance, the DGX Station A100 delivers 2.5 Petaflops of AI training power and 5 PetaOPS of INT8 inferencing horsepower. The DGX Station A100 is also the only workstation of its kind to support MIG (Multi-Instance GPU), which lets users slice individual GPUs into isolated instances so that simultaneous workloads run faster and more efficiently.
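The headline figures fall out of the per-GPU specs. Assuming the per-GPU values from NVIDIA's A100 80 GB data sheet (624 TFLOPs FP16 Tensor and 1248 TOPs INT8, both with sparsity; these per-GPU numbers are not stated in the article itself), the math works out as:

```python
# Deriving the DGX Station A100 headline numbers from per-GPU specs.
# Per-GPU figures are ASSUMED from NVIDIA's A100 80 GB data sheet (with sparsity).
NUM_GPUS = 4
FP16_TENSOR_TFLOPS_PER_GPU = 624
INT8_TOPS_PER_GPU = 1248

ai_training_pflops = NUM_GPUS * FP16_TENSOR_TFLOPS_PER_GPU / 1000
int8_inference_pops = NUM_GPUS * INT8_TOPS_PER_GPU / 1000

print(ai_training_pflops)    # 2.496, i.e. the quoted 2.5 Petaflops
print(int8_inference_pops)   # 4.992, i.e. the quoted 5 PetaOPS
```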

[Images: NVIDIA DGX Station A100 official presentation slides]

Over the original DGX Station, the new version offers a 3.17x increase in training performance, a 4.35x increase in inference performance, and a 1.85x increase in HPC-oriented workloads. NVIDIA has also updated its DGX A100 system to feature the 80 GB A100 Tensor Core GPUs. The upgraded system delivers 3 times faster training performance than the standard 320 GB DGX A100, 25% faster inference performance, and two times faster data analytics performance.

NVIDIA DGX Station A100 System Availability

NVIDIA has announced that the DGX Station A100 and NVIDIA DGX A100 640 GB systems will be available this quarter through NVIDIA's partner network resellers worldwide. The company will also be offering an upgrade option for DGX A100 320 GB system owners to upgrade to the 640 GB DGX variant featuring eight 80 GB A100 Tensor Core GPUs. NVIDIA has not provided any information on the pricing of the systems yet.
