Neural network software framework extends support for AI in vision

June 28, 2016 // By Graham Prophet
CEVA’s CDNN2 takes machine learning networks from pre-trained network to embedded system, including GoogLeNet, VGG, SegNet, AlexNet, ResNet and more; it is presented as the first software framework for embedded systems to automatically support networks generated by Google TensorFlow. Combined with the CEVA-XM4 imaging and vision processor, CDNN2 is claimed to deliver highly power-efficient deep learning for any camera-enabled device.

The signal processing IP provider’s CDNN2 (CEVA Deep Neural Network) is its second-generation neural network software framework for machine learning. It enables localized, real-time, deep learning-based video analytics on camera devices. This reduces data bandwidth and storage requirements compared with running such analytics in the cloud, while lowering latency and improving privacy. Coupled with the CEVA-XM4 intelligent vision processor, CDNN2 is claimed to offer time-to-market and power advantages for implementing machine learning in embedded systems for smartphones, advanced driver assistance systems (ADAS), surveillance equipment, drones, robots and other camera-enabled smart devices.


CEVA’s first-generation neural network software framework (CDNN) is already in design with multiple customers and partners. CDNN2 adds support for TensorFlow, Google’s software library for machine learning, as well as improved capabilities and performance for the latest and most sophisticated network topologies and layers. CDNN2 also supports fully convolutional networks, allowing any given network to work with any input resolution.
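To illustrate what “fully convolutional” buys here, the short TensorFlow/Keras sketch below (illustrative only, not CDNN2 code) builds a network with no fixed-size dense layer, so the same weights accept inputs of any resolution:

```python
# Minimal sketch of a fully convolutional network in TensorFlow/Keras.
# Because there is no fixed-size dense layer, the same model runs on any
# input resolution - the property that makes a network resolution-independent.
import tensorflow as tf

def build_fcn(num_classes=10):
    # Height and width are left as None, so inputs of any resolution are accepted.
    inputs = tf.keras.Input(shape=(None, None, 3))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # A 1x1 convolution replaces the fully connected classifier head.
    outputs = tf.keras.layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_fcn()
# The same model handles two different resolutions without modification.
print(model(tf.zeros((1, 224, 224, 3))).shape)   # (1, 224, 224, 10)
print(model(tf.zeros((1, 480, 640, 3))).shape)   # (1, 480, 640, 10)
```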


Using a set of enhanced APIs, CDNN2 improves overall system performance, including direct offload of various neural network-related tasks from the CPU to the CEVA-XM4. These enhancements, combined with a “push-button” capability that automatically converts pre-trained networks to run seamlessly on the CEVA-XM4, underpin the time-to-market and power advantages that CDNN2 offers for developing embedded vision systems. The end result is that CDNN2 generates a faster network model for the CEVA-XM4 imaging and vision DSP, consuming less power and memory bandwidth than CPU- and GPU-based systems.
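CEVA does not detail the CDNN2 API in this announcement, so the sketch below is only a hypothetical illustration of the “push-button” idea: the TensorFlow export step is real and shows the kind of pre-trained artifact an offline converter would ingest, while the converter and runtime names in the trailing comments are placeholders, not actual CDNN2 calls.

```python
# Hypothetical sketch only: the converter/runtime names in the comments below are
# placeholders, not the real CDNN2 API. The TensorFlow part is genuine and shows
# how a pre-trained network might be exported ahead of an offline conversion step.
import tensorflow as tf

# Load a network pre-trained in TensorFlow (ResNet is among the topologies CDNN2 lists).
model = tf.keras.applications.ResNet50(weights="imagenet")

# Export it to a SavedModel directory; an offline converter would take this as input.
tf.saved_model.save(model, "exported_network")

# --- Placeholder flow for the embedded offload step (names are hypothetical) ---
# converted = hypothetical_convert("exported_network", target="vision-dsp")
# runtime   = hypothetical_runtime(converted)    # inference runs on the DSP, not the CPU
# result    = runtime.run(camera_frame)          # per-frame, on-device analytics
```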


CDNN2 is intended for object recognition, advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR), virtual reality (VR) and similar computer vision applications. The CDNN2 software library is supplied as source code, extending the CEVA-XM4’s existing Application Developer Kit (ADK) and computer vision library, CEVA-CV. It is flexible and modular, capable of supporting either complete CNN implementations or specific layers.