The company has outlined its strategy to accelerate machine intelligence in server computing through a new suite of hardware and open-source software offerings designed to dramatically increase performance, efficiency, and ease of implementation of deep learning workloads. Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training.
Along with the hardware offerings, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, and optimized deep learning frameworks on AMD’s ROCm software for forthcoming machine intelligence workloads.
Recent advances in machine intelligence algorithms mapped to high-performance GPUs, AMD asserts, are enabling orders-of-magnitude acceleration of the processing and understanding of high volumes of data, producing insights in near real time. Radeon Instinct is a blueprint for an open software ecosystem for machine intelligence, helping to speed inference insights and algorithm training.
“Radeon Instinct is set to dramatically advance the pace of machine intelligence through an approach built on high-performance GPU accelerators, and free, open-source software in MIOpen and ROCm,” said AMD President and CEO, Dr. Lisa Su. “With the combination of our high-performance compute and graphics capabilities and the strength of our multi-generational roadmap, we are the only company with the GPU and x86 silicon expertise to address the broad needs of the datacentre and help advance the proliferation of machine intelligence.”
At the AMD Technology Summit held in December 2016, customers and partners from 1026 Labs, Inventec, SuperMicro, the University of Toronto’s CHIME radio telescope project, and Xilinx praised the launch of Radeon Instinct, discussed how they are using AMD’s machine intelligence and deep learning technologies today, and described how they expect to benefit from the new accelerators.
Radeon Instinct accelerators feature passive cooling, AMD MultiuserGPU (MxGPU) hardware virtualization technology conforming to the SR-IOV (Single Root I/O Virtualization) industry standard, and 64-bit PCIe addressing with Large Base Address Register (BAR) support for multi-GPU peer-to-peer communication.