
Overview

Neural networks (NNs) are enabling an explosion of technological progress across industries. Neural network accelerators (NNAs) are a fundamental class of processors, likely to become as significant as CPUs and GPUs, and their potential applications are innumerable. The new PowerVR Series2NX Neural Network Accelerator (NNA) delivers high-performance computation of neural networks at very low power consumption in minimal silicon area.

Benefits

PowerVR 2NX is a completely new architecture, designed from the ground up to provide:
  • The industry's highest inference/mW IP cores to deliver the lowest power consumption*
  • The industry's highest inference/mm2 IP cores to enable the most cost-effective solutions*
  • The industry's lowest bandwidth solution* – with support for fully flexible bit depths for weights and data, including low-bandwidth modes down to 4-bit (see the quantization sketch after this list)
  • Industry-leading performance of 2048 MACs/cycle in a single core, with the ability to scale higher in multi-core configurations
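
To make the bit-depth claim concrete, the sketch below shows symmetric linear quantization of a weight tensor at 16, 8 and 4 bits in plain NumPy. It illustrates the general technique only; it is not the 2NX's actual weight format or toolchain output, which this page does not describe. Storage and bandwidth per weight shrink in proportion to the bit depth, at the cost of a growing reconstruction error.

    import numpy as np

    def quantize_symmetric(weights, bits):
        # Map the largest weight magnitude onto the largest signed code for
        # the chosen bit depth, then round every weight onto that grid.
        qmax = 2 ** (bits - 1) - 1               # 7 for 4-bit, 127 for 8-bit, ...
        scale = np.abs(weights).max() / qmax
        codes = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
        return codes, scale

    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256)).astype(np.float32)   # stand-in layer weights

    for bits in (16, 8, 4):
        codes, scale = quantize_symmetric(w, bits)
        err = np.abs(w - codes * scale).mean()
        # Bandwidth per weight scales linearly with bit depth.
        print(f"{bits:2d}-bit: {bits / 16:.2f}x the 16-bit storage, "
              f"mean abs error {err:.4f}")

In hardware, choosing the bit depth per layer lets a design trade precision for memory traffic on exactly the layers that can tolerate it.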

Applications

The PowerVR 2NX NNA is designed to power inference engines across a range of markets, and its highly scalable architecture allows it to address future solutions in many more.

Companies building SoCs for mobile, surveillance, automotive and consumer systems can integrate the Series2NX to bring high-performance neural network computation to their designs at very low power and in minimal silicon area.

Potential applications for NNAs are innumerable, but include: photography enhancement and predictive text enhancement in mobile devices; feature detection and eye tracking in AR/VR headsets; pedestrian detection and driver alertness monitoring in automotive safety systems; facial recognition and crowd behavior analysis in smart surveillance; online fraud detection, content advice, and predictive UX; speech recognition and response in virtual assistants; and collision avoidance and subject tracking in drones.

Features

  • 2x the performance and half the bandwidth of its nearest competitor
  • First dedicated hardware solution with flexible bit depth support from 16-bit down to 4-bit
  • Lowest bandwidth Neural Network (NN) solution
  • Architected to support multiple operating systems, including Linux and Android
  • Includes hardware IP, software and tools to provide a complete neural network solution for SoCs
  • Efficiently runs all common neural network computational layers (illustrated in the sketch after this list)
  • Depending on the computation requirements of the inference tasks, it can be used standalone – with no additional hardware required – or in combination with other processors such as CPUs and GPUs
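
For reference, "common neural network computational layers" typically means the building blocks of a convolutional inference pipeline. The sketch below strings a few of them together in NumPy purely to show the kinds of operations involved: convolution, ReLU activation, pooling, a fully connected layer and softmax. It does not target the 2NX hardware or its software stack.

    import numpy as np

    def conv2d(x, k):
        # Naive 'valid' 2-D convolution of a single-channel image with one kernel.
        kh, kw = k.shape
        h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((h, w), dtype=np.float32)
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    def relu(x):
        return np.maximum(x, 0.0)

    def max_pool2x2(x):
        # 2x2 max pooling (input dimensions assumed even).
        h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
        return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    image = rng.standard_normal((28, 28)).astype(np.float32)     # toy input
    kernel = rng.standard_normal((3, 3)).astype(np.float32)      # toy conv weights

    features = max_pool2x2(relu(conv2d(image, kernel)))          # 13x13 feature map
    fc = rng.standard_normal((10, features.size)).astype(np.float32)
    print(softmax(fc @ features.ravel()))                        # 10-way probabilities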
