AMD has announced the AMD Radeon Instinct MI60 and MI50 accelerators, the world’s first 7nm datacenter GPUs, designed to deliver the compute performance required for next-generation deep learning, HPC, cloud computing and rendering applications. AMD Radeon Instinct accelerators are expected to help researchers, scientists and developers solve tough and interesting challenges, including large-scale simulations, climate change, computational biology, disease prevention and more.
“Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads,” said David Wang, senior vice president of engineering, Radeon Technologies Group at AMD. “Combining world-class performance and a flexible architecture with a robust software platform and the industry’s leading-edge ROCm open software ecosystem, the new AMD Radeon Instinct accelerators provide the critical components needed to solve the most difficult cloud computing challenges today and into the future.”
These accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the range of workloads they can address, from HPC to deep learning applications. They are designed to efficiently process workloads such as rapidly training complex neural networks, delivering higher floating-point performance and greater efficiency, along with new features for datacenter and departmental deployments.
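AMD's announcement does not include sample code, but as a rough illustration of what mixed precision means at the kernel level, the sketch below (a hypothetical mixed_precision_axpy.cpp, assumed to be compiled with hipcc on a ROCm system) stores inputs in FP16 and performs the arithmetic and output in FP32, the typical pattern when training neural networks.

```cpp
// mixed_precision_axpy.cpp -- illustrative sketch only, not AMD sample code.
#include <hip/hip_runtime.h>
#include <hip/hip_fp16.h>
#include <vector>
#include <cstdio>

// Fill an FP16 buffer on the device so no host-side half conversions are needed.
__global__ void fill_half(__half* p, float v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = __float2half(v);
}

// The common mixed-precision pattern: FP16 storage for the inputs,
// FP32 arithmetic and accumulation for the result.
__global__ void axpy_mixed(const __half* x, const __half* y, float* out,
                           float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = alpha * __half2float(x[i]) + __half2float(y[i]);
}

int main() {
    const int n = 1 << 20;
    __half *dx, *dy;
    float *dout;
    hipMalloc(&dx, n * sizeof(__half));
    hipMalloc(&dy, n * sizeof(__half));
    hipMalloc(&dout, n * sizeof(float));

    dim3 block(256), grid((n + block.x - 1) / block.x);
    hipLaunchKernelGGL(fill_half, grid, block, 0, 0, dx, 1.0f, n);
    hipLaunchKernelGGL(fill_half, grid, block, 0, 0, dy, 2.0f, n);
    hipLaunchKernelGGL(axpy_mixed, grid, block, 0, 0, dx, dy, dout, 0.5f, n);

    std::vector<float> hout(n);
    hipMemcpy(hout.data(), dout, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("out[0] = %f\n", hout[0]);   // expect 0.5 * 1.0 + 2.0 = 2.5

    hipFree(dx); hipFree(dy); hipFree(dout);
    return 0;
}
```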
The AMD Radeon Instinct MI60 and MI50 accelerators provide ultra-fast floating-point performance and HBM2 (second-generation High-Bandwidth Memory) with up to 1 TB/s of memory bandwidth. They are also the first GPUs to support the next-generation PCIe 4.0 interconnect, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies, and they feature AMD Infinity Fabric Link GPU interconnect technology, which enables GPU-to-GPU communication up to 6X faster than PCIe Gen 3 interconnect speeds.
AMD also announced a new version of the ROCm open software platform for accelerated computing that supports the architectural features of the new accelerators, including optimized deep learning operations (DLOPS) and the AMD Infinity Fabric Link GPU interconnect technology. Designed for scale, ROCm allows customers to deploy high-performance, energy-efficient heterogeneous computing systems in an open environment.
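ROCm exposes the hardware through the HIP runtime. As a rough sketch of the kind of GPU-to-GPU communication that Infinity Fabric Link is meant to accelerate (again not AMD sample code; the file name is hypothetical), the following probes and enables peer access between devices using standard HIP runtime calls.

```cpp
// peer_access_probe.cpp -- illustrative sketch using the HIP runtime shipped with ROCm.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    hipGetDeviceCount(&count);
    printf("Found %d HIP device(s)\n", count);

    // Probe which device pairs can talk to each other directly.
    // On hardware linked by Infinity Fabric Link (or PCIe), peer access
    // lets one GPU read and write another GPU's memory without staging
    // the data through host memory.
    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            hipDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d : peer access %s\n",
                   src, dst, canAccess ? "available" : "unavailable");
            if (canAccess) {
                hipSetDevice(src);
                hipDeviceEnablePeerAccess(dst, 0);  // flags must currently be 0
            }
        }
    }
    return 0;
}
```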
The AMD Radeon Instinct MI60 accelerator is expected to ship to datacenter customers by the end of 2018 and the MI50 accelerator by the end of Q1 2019. The ROCm 2.0 open software platform is expected by the end of 2018.