Bridging the gap between speed and power in asynchronous SRAMs

January 07, 2015 // By Anirban Sengupta and Reuben George, Cypress
The Asynchronous SRAM space is divided between two distinct product families – fast and low power – each with its own set of features, applications, and price points. Fast Asynchronous SRAMs offer shorter access times but consume more power; Low-Power SRAMs consume far less power but have longer access times.

From a technological standpoint, such a trade-off is justifiable. In Low-Power SRAMs, special Gate-Induced Drain Leakage (GIDL) control techniques are employed to control standby current and thus standby power consumption. These techniques involve adding extra transistors in the pull-up or pull-down path, which adds propagation delay and therefore lengthens access time. In Fast SRAMs, access time is the highest priority, so such techniques cannot be used. Moreover, the transistors are scaled up in size to increase charge flow. This scaling-up reduces propagation delay but at the same time increases power consumption.

From the standpoint of application requirements, this trade-off has led to two distinct application bases. Fast SRAMs work well as a direct-interface cache or scratchpad expansion memory for high-speed processors. Low-Power Asynchronous SRAMs are used to temporarily store data in systems where power consumption needs to be very low. Hence, while Fast SRAMs are typically used in high-performance systems such as servers and aeronautical devices, Low-Power SRAMs are most often used in battery-powered devices such as POS terminals and PLCs.

However, technological advancement is driving more wired devices toward battery-backed mobile versions. In recent years, a plethora of new wireless applications has also fuelled a boom in wireless gadgets. This new generation of medical devices, handheld devices, consumer electronics products, communication systems and industrial controllers, all driven by the Internet of Things (IoT), is revolutionising the way devices function and communicate. In such mobile devices, neither Fast nor Low-Power SRAMs service the need comprehensively. Fast SRAMs have high current consumption and thus drain the battery too quickly; Low-Power SRAMs are not fast enough to handle the demands of such complex devices.

For all key components of modern electronic devices, reducing power consumption and footprint are two of the biggest challenges at hand. For Asynchronous SRAMs, the challenge translates to creating a Fast SRAM that consumes considerably less power, all in a small footprint. While many SRAM manufacturers have started offering products in small pin-count and die-sized packages, the demand for low-power high-performance memory hasn't been met.

Power management and standby power

There are two major parameters that define the power consumption of a device – operating power and standby power. Operating power is the power consumed when a device is actively performing its primary function. In the case of SRAMs, this would be the power consumed during a read or write operation. Standby power is the power consumed when the device is not active but is still powered on. In a large majority of handheld devices, the SRAM is actively operating only around 20% of the time.

For the remaining 80% of the time, the SRAM is connected to the power source in standby mode. In the days when most electronic devices were connected to a power outlet, standby power consumption was not much of an issue in terms of cost or convenience. For today's battery-backed devices, however, standby power adds a considerable power premium. With a non-rechargeable battery, this means faster battery depletion; with a rechargeable one, the main cost is inconvenience – the very purpose of a mobile device is defeated if it must be charged too often.

The need for lower power consumption impacted microcontrollers first, forcing manufacturers to find alternatives to the traditional two-state model – active and standby. This led companies such as TI and NXP to introduce MCUs with a special low-power mode of operation called deep power-down or deep sleep. These controllers run at full speed during normal operation but drop into low-power mode when not required. During this low-power mode, peripherals and memory devices are also expected to save power. The onus of power management has thus shifted to the memory devices interfaced to such systems.

SRAMs with on-chip power management

Before we describe the concept and possibilities of an SRAM with on-chip power management, let us first understand why it is needed. An asynchronous SRAM typically interfaces with the MCU as an expansion memory that can work as a cache or a scratchpad memory. Compared to other storage memories such as DRAM and Flash, SRAM is limited in terms of density (the highest-density SRAM available today is 8 MB, while DRAMs are available in gigabytes). However, it is difficult for an MCU to interface directly with a DRAM or Flash, as these memories typically have long write cycles and are unable to keep pace with the MCU. An MCU that operates at high speed thus needs a cache that can store critical data and temporary calculations in a way that can be accessed quickly. SRAM is the best fit to act as a cache between the MCU and storage memory.

Figure 1 illustrates the different levels of the memory hierarchy and where an SRAM is needed:

Figure 1 Hierarchy of memory types and functions