CPUs, GPUs and DSPs typically have distinct performance, power and area targets at each new silicon process node. Each new generation presents SoC designers with fresh challenges, and with opportunities to create higher-performance, more power-efficient IP that delivers the last megahertz of performance while squeezing out the last nanowatt of power and the last square micron of area. SoC designers must first be aware of advances in logic and memory IP, and then know how to apply those advances to the key components of their chips, using the latest EDA flows and tools, to stay ahead of their competitors.
In this article we describe available logic library and memory compiler IP and a typical EDA flow for hardening processor cores. We present innovative techniques for using those logic libraries and memory compilers within the design flow to optimize processor area, and then describe methods that use the same elements to optimize processor performance and power consumption. The article finishes with a preview of how FinFET technology will affect logic and memory IP and its use in hardening optimal CPU, GPU and DSP cores.
Why Different PPA Goals for CPU, GPU and DSP Cores?
CPU, GPU and DSP cores co-exist in an SoC and are typically optimized to different points along the performance, power and area (PPA) axes.
For example, CPUs are typically tuned first for high performance at the lowest possible power, while GPUs, because of the relatively large amount of silicon area they occupy, are usually optimized for small area and low power. GPUs can take advantage of parallel algorithms that reduce the operating frequency, but at the cost of increased silicon area; GPU logic can account for up to 40 percent of the logic on an SoC. Depending on the application, a DSP core may be optimized for performance, as