Artificial intelligence plays a growing role in our daily lives. Samsung, for example, uses it to improve the sound of its smart TVs, and it is also used in hospitals and medical centers, where it can help make health care more humane.
Today NVIDIA showed off its newest GPU: the NVIDIA A100, built on the Ampere architecture. According to the company, the new card can accelerate AI training by up to 20 times compared with the previous generation. It is also a flexible GPU for this type of work: a single A100 can handle training, inference, and data analytics, and it can be partitioned into multiple independent instances so several workloads run on one chip.
The GPU was presented by CEO Jensen Huang during NVIDIA GTC 2020, held as an online event this year. The headline news is the Ampere architecture itself, dedicated to AI, deep learning, and data centers.
“Researchers and scientists applying NVIDIA accelerated computing to save lives is the perfect example of our company’s purpose — we build computers to solve problems normal computers cannot,” Huang said.
“The data center is the new computing unit,” Huang said, adding that NVIDIA is accelerating performance gains from silicon, to the ways CPUs and GPUs connect, to the full software stack, and, ultimately, across entire data centers.
NVIDIA A100 Ampere GPU Specifications
Speaking of specifications, the A100 delivers 19.5 TFLOPS of FP32 compute, 40GB of HBM2 memory, 1.6TB/s of memory bandwidth, third-generation Tensor Cores, and, not least, 6,912 CUDA cores.
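As a rough sanity check, the 19.5 TFLOPS figure lines up with the CUDA core count if we assume the A100's published boost clock of roughly 1.41 GHz (a figure not stated in this article):

```python
# Back-of-the-envelope check of the quoted FP32 throughput.
# Assumption: ~1.41 GHz boost clock (from NVIDIA's spec sheet, not this article).
cuda_cores = 6912
flops_per_core_per_cycle = 2   # one fused multiply-add counts as two FLOPs
boost_clock_hz = 1.41e9

fp32_tflops = cuda_cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
print(round(fp32_tflops, 1))   # → 19.5
```

This is only peak theoretical throughput; real workloads land below it.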
In addition, NVIDIA NVLink lets multiple cards work together as if they were a single GPU, for workloads that demand even more. An example is the DGX A100, a system built for AI training that reaches 5 petaflops of AI performance with 8 connected GPUs, totaling 320GB of memory and 12.4TB/s of aggregate bandwidth; it can be purchased for US$199,000.
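The DGX totals above are simply the per-GPU figures multiplied by eight, assuming roughly 1.555TB/s of HBM2 bandwidth per A100 (the "1.6TB/s" figure rounded up):

```python
# How the per-card specs scale to the 8-GPU DGX system.
# Assumption: ~1.555 TB/s HBM2 bandwidth per A100 (quoted as "1.6TB/s").
gpus = 8
memory_per_gpu_gb = 40
bandwidth_per_gpu_tbs = 1.555

total_memory_gb = gpus * memory_per_gpu_gb        # → 320 GB
total_bandwidth_tbs = gpus * bandwidth_per_gpu_tbs  # → 12.44 TB/s, quoted as 12.4

print(total_memory_gb, round(total_bandwidth_tbs, 1))
```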