NVIDIA A100 80GB PCIe
Highest versatility for all workloads.
Manufacturer Part Number: 900-21001-0020-100
Features and Benefits:
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into seven GPU instances to accelerate workloads of all sizes. And third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.
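For context, the properties quoted in the specifications below can be read back programmatically. Here is a minimal CUDA sketch, assuming only the CUDA toolkit and a recent driver; it uses the standard runtime calls cudaGetDeviceCount and cudaGetDeviceProperties. Note that MIG itself is configured with nvidia-smi rather than through CUDA; once the card is partitioned, each MIG instance simply appears to a program like this as its own device.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is in bytes; an 80GB A100 reports roughly 80 GiB,
        // while a 10GB MIG instance reports only its own slice.
        printf("Device %d: %s\n", i, prop.name);
        printf("  Global memory : %.1f GiB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        printf("  SMs           : %d\n", prop.multiProcessorCount);
        printf("  Memory bus    : %d-bit\n", prop.memoryBusWidth);
    }
    return 0;
}
```

Compile with `nvcc -o devquery devquery.cu` (the file name is illustrative).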
Specifications:
A100 80GB PCIe
FP64 | 9.7 TFLOPS
FP64 Tensor Core | 19.5 TFLOPS
FP32 | 19.5 TFLOPS
Tensor Float 32 (TF32) | 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core | 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core | 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core | 624 TOPS | 1,248 TOPS*
GPU Memory | 80GB HBM2e
GPU Memory Bandwidth | 1,935GB/s
Max Thermal Design Power (TDP) | 300W
Multi-Instance GPU | Up to 7 MIGs @ 10GB
Form Factor | PCIe
Interconnect | NVIDIA® NVLink® Bridge for 2 GPUs: 600GB/s**; PCIe Gen4: 64GB/s
Server Options | Partner and NVIDIA-Certified Systems™ with 1-8 GPUs
* With sparsity.
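The starred figures above are the with-sparsity Tensor Core rates; the unstarred TF32/BF16/FP16/INT8 rates already assume the Tensor Core path is taken. As a sketch of how an FP32 workload opts into that path, assuming cuBLAS 11 or later (the matrix size and zero-initialized buffers are illustrative only):

```
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 1024;
    float *a, *b, *c;
    cudaMalloc(&a, n * n * sizeof(float));
    cudaMalloc(&b, n * n * sizeof(float));
    cudaMalloc(&c, n * n * sizeof(float));
    cudaMemset(a, 0, n * n * sizeof(float));
    cudaMemset(b, 0, n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt this handle into TF32 Tensor Core math for FP32 GEMMs.
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C; storage stays FP32, but the internal
    // multiplies round inputs to TF32 on Ampere Tensor Cores.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, a, n, b, n, &beta, c, n);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Link against cuBLAS, e.g. `nvcc -lcublas`. In recent cuBLAS releases the default math mode leaves FP32 GEMMs on the plain FP32 path (the 19.5 TFLOPS row), so the opt-in call above is what moves them to the TF32 rate.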