Small-sized and Low Power Engine Supports Broad Range of Applications
Yokohama, May 11, 2018 --- Socionext Inc., a leading provider of SoC-based solutions, has developed a new Neural Network Accelerator (NNA) engine optimized for AI processing on edge computing devices. The compact, low-power engine is designed specifically for deep learning inference processing. When implemented, it can deliver roughly 100x the performance of conventional processors on computer vision tasks such as image recognition. Socionext will start delivering the Software Development Kit for the FPGA implementation of the NNA in the third quarter of 2018. The company also plans to incorporate the NNA into its own SoC products.
Socionext currently provides the graphics SoC "SC1810," which has a built-in proprietary Vision Processor Unit (VPU) compatible with "OpenVX," the computer vision API developed by the Khronos Group standards organization. The NNA is designed to work as an accelerator that extends the capability of the VPU. It performs a variety of computer vision functions using deep learning, as well as conventional image recognition, for applications including automotive systems and digital signage, delivering higher performance at lower power consumption.
The NNA incorporates the company's proprietary architecture, which uses quantization technology to reduce the number of bits needed for the parameters and activations required in deep learning. The quantization technology carries out massive amounts of computation with fewer resources, greatly reducing data size and significantly lowering the demand on system memory bandwidth. In addition, a newly developed on-chip memory circuit design improves the efficiency of the computing resources required for deep learning, enabling optimum performance in a very small package. A VPU equipped with the new NNA and these latest technologies will be able to perform image recognition roughly 100 times faster than a conventional VPU.
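To illustrate the general idea behind bit-reduction quantization, the sketch below maps 32-bit float weights onto a small integer grid. This is a generic uniform-quantization example for illustration only; Socionext's actual NNA quantization scheme is proprietary, and the function names here are invented.

```python
# Minimal sketch of uniform bit-reduction quantization (illustrative only;
# not the actual NNA scheme). Float values are mapped onto a (2**bits)-level
# integer grid, so each value needs only `bits` bits of storage.

def quantize(values, bits):
    """Map float values onto integer codes in [0, 2**bits - 1]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** bits - 1) if hi != lo else 1.0
    codes = [round((v - lo) / scale) for v in values]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate float values from the integer codes."""
    return [c * scale + lo for c in codes]

weights = [-1.0, -0.25, 0.5, 1.0]
codes, scale, lo = quantize(weights, bits=4)
approx = dequantize(codes, scale, lo)
# Storing 4-bit codes instead of 32-bit floats cuts weight storage by 8x,
# which is what reduces data size and memory bandwidth on an accelerator.
```

The small reconstruction error introduced by the coarser grid is the usual trade-off for the memory and bandwidth savings the press release describes.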
Socionext will start providing a Software Development Kit (SDK) for the FPGA implementation of the NNA in the third quarter of 2018. The SDK will support TensorFlow as a training environment and will provide libraries dedicated to the quantization technology, along with a tool that converts trained models for inference processing. With a training environment optimized for the NNA, users will be able to build their models efficiently without specialized knowledge of the model compression or training tuning that deep learning bit reduction normally requires. Socionext plans to offer development environments that support a variety of deep learning frameworks, enabling users to develop deep learning applications across a wide range of fields.
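The trained-model-to-inference conversion step can be pictured as follows. This is a hypothetical sketch, not the actual SDK tool or file format: the function names, the symmetric 8-bit scheme, and the nested-list model layout are all invented for illustration.

```python
# Hypothetical sketch of a "trained model -> quantized inference model"
# conversion step (invented names; not the Socionext SDK). Each layer's
# float weights become integer codes plus a per-layer scale factor.

def convert_for_inference(trained_layers, bits=8):
    """Quantize each layer's float weight matrix for integer inference."""
    converted = []
    for weights in trained_layers:
        max_abs = max(abs(w) for row in weights for w in row) or 1.0
        scale = max_abs / (2 ** (bits - 1) - 1)  # symmetric quantization
        codes = [[round(w / scale) for w in row] for row in weights]
        converted.append((codes, scale))
    return converted

def quantized_matvec(layer, x):
    """Apply one quantized layer: integer weights times a float input."""
    codes, scale = layer
    return [sum(c * xi for c, xi in zip(row, x)) * scale for row in codes]

# One trained 2x2 layer, converted to 8-bit inference form and applied:
model = convert_for_inference([[[0.5, -1.0], [0.25, 0.75]]])
y = quantized_matvec(model[0], [1.0, 2.0])
```

The point of such a tool is that the user trains in float as usual, and the conversion step handles the bit reduction automatically, matching the SDK's goal of hiding model-compression details from the developer.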