New Vision Technologies For Real-World Applications

Embedded vision processors are evolving to incorporate the latest academic research.


Computer vision – the ability of a machine to ‘infer’ or extract useful information from a two-dimensional image or an uncompressed video stream of images – has the ability to change our lives. It can enable self-driving cars, empower robots or drones to see their way to delivering packages to your doorstep, and can turn your face into a payment method (Figure 1). To achieve these advances, embedded vision processors are evolving quickly to incorporate the latest academic research results into efficient and economically viable designs.


Figure 1: New computer vision techniques, combined with artificial intelligence, are enabling facial recognition for door access, payments, and other applications.

Computer vision technology enjoyed a dramatic leap forward after the introduction of new deep learning techniques in 2012, when AlexNet – an early convolutional neural network – won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (Figure 2). The competition prioritized accuracy, and, in subsequent years, each new winner pushed the top-1 and top-5 classification results (the accuracy of the graph's best guesses of what was in the image) higher until they surpassed human capabilities for the specific task of identifying/classifying 1,000 items. ImageNet winners accomplished these results by throwing more computational complexity at the problem and using 32-bit floating-point calculations executed on banks of GPUs. Increased performance helped achieve increased detection accuracy.


Figure 2: ImageNet Large Scale Visual Recognition Challenge results show that deep learning is surpassing human levels of accuracy.

Convolutional Neural Networks (CNNs) have become the standard for object detection for modern computer vision. To oversimplify, a CNN algorithm is trained to break down an object such as a pedestrian into a pattern of curves, angles, and other components, store that data in its weights or coefficients, and then search images for those patterns to identify objects with surprising accuracy.

As engineers looked to apply these ImageNet CNN graphs – VGG16, GoogLeNet, ResNet, etc. – to practical embedded vision applications, it was obvious that ImageNet submissions were not hampered by embedded constraints such as limited power budgets, memory bandwidth restrictions, low-latency requirements, and small silicon-area targets. In addition, ImageNet winners were not measured against real-time requirements like meeting a target frame rate. To transition computer vision from an academic exercise to practical applications, all these issues needed to be addressed: embedded engineers must meet the high performance and accuracy requirements of computer vision while working within embedded limitations. Embedded vision processors are designed to provide the best computer vision performance with the smallest area and power penalties.

A first-order measurement of computer vision performance is tera-operations per second (TOPS). Tera (10^12) is a big number, driven by the number of pixels that need to be processed and the complexity of deep learning algorithms like CNNs. Operations per second measures how much work gets done in each processor clock cycle, multiplied by the number of cycles per second. A simple calculation of TOPS for a given vision processor is 2 × the number of multiply-accumulators (MACs) × the frequency of the processor. The multiplication by two is used because a MAC is considered two operations in one cycle – a multiply and an accumulation. MACs are the relevant unit because millions of MAC operations are at the heart of any CNN algorithm.
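For readers who want to plug in numbers, here is a minimal Python sketch of that rule of thumb, using figures quoted elsewhere in this article:

```python
def peak_tops(num_macs: int, freq_hz: float) -> float:
    """Peak throughput in TOPS: each MAC counts as 2 operations (multiply + accumulate) per cycle."""
    return 2 * num_macs * freq_hz / 1e12

print(peak_tops(64, 800e6))     # ~0.1 TOPS, matching the first-generation figure below
print(peak_tops(3_520, 1.2e9))  # ~8.4 TOPS, in line with the ~8.5 TOPS figure below
```

Note that this is a peak number; sustained throughput depends on keeping the MACs fed with data, which is the subject of the bandwidth discussion later in this article.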

Embedded vision performance requirements vary by application
Different computer vision applications require different levels of performance, but as a general trend, performance requirements are increasing. Facial recognition on mid-end smartphones might require less than 1 TOPS of performance. Mid-end applications such as augmented reality, surveillance, and automotive rear cameras generally need between 1 and 10 TOPS performance. On the high end are automotive front cameras used for safety-critical applications, microservers, and data centers, which can require 10 to 100 TOPS performance, or more. Embedded vision processors have been increasing their number of MACs to drive up their TOPS performance to provide a scalable solution for all these vision applications.

When Synopsys introduced its DesignWare ARC EV5x Vision Processor IP in 2015, it offered 64 MACs per cycle at 800MHz for about 0.1 TOPS. The EV6x, released one year later, included 880 MACs and offered about 1.3 TOPS at 800MHz. In 2017, the EV6x improved to 3520 MACs at 1.2GHz for about 8.5 TOPS of neural network performance.

In 2019, Synopsys introduced the EV7x Embedded Vision Processor IP with a deep neural network (DNN) accelerator (Figure 3). The DNN accelerator has up to 14,080 MACs and can execute all CNN graphs, including the latest, most complex graphs and custom graphs, and offers new support for batched long short-term memories (LSTMs) for applications that require time-based results. In addition to the DNN accelerator, the EV7x includes a vision engine for low-power, high-performance vision, simultaneous localization and mapping (SLAM), and DSP algorithms. Combining the performance of the EV7x DNN accelerator and the EV7x vision engine, the EV7x can scale up to 35 TOPS of performance. That is roughly a 350x – about 35,000% – increase in performance over the EV5x in four years.


Figure 3: DesignWare ARC EV7x Embedded Vision Processor IP includes a vision engine with up to four vector processing units (VPUs), a high-performance DNN accelerator, and complete software toolset.

You need more than MACs: Internal memory & bandwidth considerations
Adding MACs to an accelerator increases neural network engine performance to meet a range of real-world computer vision applications. However, that is only the first part of the story. In fact, adding MACs to an accelerator is the easiest aspect of scaling neural network graph performance. More challenging is making sure those MACs are kept busy. An ideal system is neither compute bound (lacking performance) nor I/O bound (lacking the necessary memory bandwidth). For a 4x increase in MACs, some increase in internal memory and some additional I/O bandwidth will need to be considered, but both can impact the power or area of the vision processor. The best way to minimize bandwidth is to apply both hardware and software techniques to limit the data that needs to go to or from external memory.


Figure 4: Keeping a neural network accelerator's MACs fully utilized requires increasing internal memory and addressing I/O bandwidth.
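One illustrative back-of-envelope way to judge whether a design is compute bound or I/O bound is a roofline-style check: compare a layer's arithmetic intensity (operations per byte of external-memory traffic) with the ratio of peak compute to memory bandwidth. The sketch below uses purely hypothetical numbers and is not a model of any particular processor:

```python
def bound_estimate(layer_ops: float, bytes_moved: float,
                   peak_tops: float, dram_gb_s: float) -> str:
    """Rough roofline-style check: is a layer limited by compute or by external-memory bandwidth?"""
    arithmetic_intensity = layer_ops / bytes_moved             # ops per byte of DRAM traffic
    machine_balance = (peak_tops * 1e12) / (dram_gb_s * 1e9)   # ops the engine can do per byte delivered
    return "compute bound" if arithmetic_intensity > machine_balance else "I/O bound"

# Hypothetical convolution layer: ~0.2 GOPs of work moving ~4 MB of weights and feature maps,
# on an engine with 8.5 TOPS of peak compute and 25.6 GB/s of DRAM bandwidth.
print(bound_estimate(layer_ops=0.2e9, bytes_moved=4e6,
                     peak_tops=8.5, dram_gb_s=25.6))           # -> "I/O bound"
```

When a layer lands on the I/O-bound side of this comparison, adding more MACs does nothing; the techniques described next, which reduce the bytes moved, are what restore utilization.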

There are many techniques to improve performance and limit bandwidth. Quantization converts the 32-bit floating-point coefficients and data to a smaller integer format – 8 bits is the current popular format – cutting bandwidth to one quarter of the original. Feature maps (the intermediate outputs from each layer of the CNN graph) can be losslessly compressed as they are written to external memory and decompressed as they are read back, reducing bandwidth by as much as 40%. Sparsity (looking for and avoiding the zeros in the data) and coefficient pruning (finding which near-zero coefficients can be set to zero) are two more bandwidth reduction techniques.
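To make the quantization step concrete, here is a minimal, illustrative sketch of symmetric 8-bit quantization of a float32 filter bank. It is not the scheme used by any specific tool chain, just the general idea of trading a small amount of precision for a 4x reduction in coefficient traffic:

```python
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Map float32 weights to int8 with a single per-tensor scale (illustrative scheme only)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(64, 3, 3, 3).astype(np.float32)  # hypothetical convolution filter bank
q, scale = quantize_symmetric_int8(w)
print(w.nbytes, q.nbytes)           # 6912 -> 1728 bytes: one quarter of the weight traffic
print(np.abs(w - q * scale).max())  # worst-case quantization error stays small relative to the weights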

In addition to these hardware techniques, new CNN graphs have been developed to achieve the accuracy of earlier graphs like ResNet or GoogLeNet with significantly fewer computations. MobileNet (v1 and v2) and DenseNet are two examples of more modern CNN classification graphs. However, while both are more computationally efficient, only MobileNet is well suited for embedded applications. DenseNet's topology requires extensive reuse of feature maps, which increases bandwidth and memory requirements significantly. MobileNet, on the other hand, achieves nearly the same accuracy with significantly smaller coefficient and bandwidth requirements.

New techniques to manage bandwidth
The pace of research in neural networks is rapid, so new techniques continue to emerge. Synopsys' new EV7x Embedded Vision Processor IP introduces two advanced techniques for bandwidth reduction. First, direct memory access (DMA) broadcasting distributes coefficients or data across groups of MACs during layer computations within a CNN graph. If each group of MACs can work on the same set of coefficients, the coefficients can be read once and distributed via DMA to each group, thereby minimizing bandwidth.
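A toy model makes the saving easy to see. Assuming, hypothetically, four MAC groups sharing one layer's coefficients, reading the coefficients from external memory once and broadcasting them on chip divides the coefficient traffic by the number of groups; the numbers below are made up for illustration:

```python
def coefficient_traffic(coeff_bytes: float, num_mac_groups: int, broadcast: bool) -> float:
    """External-memory traffic for one layer's coefficients (toy model, not a hardware specification).

    Without broadcasting, each MAC group fetches its own copy of the shared coefficients;
    with DMA broadcasting, the coefficients are read from DRAM once and distributed on chip.
    """
    return coeff_bytes if broadcast else coeff_bytes * num_mac_groups

layer_coeffs = 2.3e6  # hypothetical: 2.3 MB of 8-bit coefficients for one layer
print(coefficient_traffic(layer_coeffs, num_mac_groups=4, broadcast=False))  # 9.2e6 bytes read
print(coefficient_traffic(layer_coeffs, num_mac_groups=4, broadcast=True))   # 2.3e6 bytes read
```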

A second technique, multi-level layer fusion, expands on the concept of layer merging. Layer merging combines a CNN's convolution calculations with its non-linear activation function and pooling (down-sampling). Multi-level layer fusion combines groups of merged layers to minimize the number of feature maps that need to be written to external memory. Both DMA broadcasting and multi-level layer fusion combine advanced hardware features and software support. Applied to the EV7x's new DNN accelerator, DMA broadcasting and multi-level layer fusion contribute to a 67% performance improvement and a 47% bandwidth reduction over the previous 3520-MAC architecture running standard CNN graphs.
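The bandwidth effect of fusion can also be sketched with a toy model: with layer-by-layer execution every intermediate feature map is spilled to external memory, while fusing groups of layers keeps intermediate maps in local memory and writes only each group's final output. The layer sizes below are hypothetical:

```python
def feature_map_traffic(layer_output_bytes: list[float], group_size: int) -> float:
    """DRAM writes for feature maps when 'group_size' layers are fused (toy model).

    group_size=1 models layer-by-layer execution (every output written to DRAM);
    larger groups write only the output at each group boundary plus the final layer.
    """
    return sum(layer_output_bytes[i] for i in range(len(layer_output_bytes))
               if (i + 1) % group_size == 0 or i == len(layer_output_bytes) - 1)

outputs = [3.2e6, 1.6e6, 1.6e6, 0.8e6, 0.8e6, 0.4e6]  # hypothetical per-layer output sizes
print(feature_map_traffic(outputs, group_size=1))  # 8.4e6 bytes written
print(feature_map_traffic(outputs, group_size=3))  # 2.0e6 bytes written (layers 3 and 6 only)
```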

The newest generation of vision processors that apply these techniques make it easier for embedded developers to meet their power, area and performance budgets when designing life-changing products based on vision and AI.

Everyday examples: Facial recognition, robotics…and self-driving cars
To move from research to practical reality, facial recognition algorithms need to execute on low-power, always-on hardware. Imagine using hardware included in a parking meter to pay with your face. Face detection algorithms can run in an always-on mode on an ultra-low-power core such as the ARC EM9D processor IP. When a face is detected, the EV71 with DNN880 can be woken up to perform a quick facial recognition to determine whether the face is recognized, and then quickly turned off to conserve power. To protect the confidentiality of biometric data and to protect the CNN graphs' topologies and coefficients, embedded vision processors such as the EV7x include high-speed AES encryption.
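A rough sketch of that duty-cycled control flow is shown below. The callbacks are hypothetical placeholders for the always-on detector and the vision processor's recognition engine, not actual device APIs:

```python
import time

def always_on_face_unlock(get_frame, detect_face, recognize_face, period_s=0.5):
    """Toy duty-cycled flow: a tiny always-on detector gates the larger recognition engine.

    get_frame, detect_face, and recognize_face are hypothetical callbacks standing in for the
    camera, the low-power always-on detector, and the vision processor's CNN-based recognizer.
    """
    while True:
        frame = get_frame()
        if detect_face(frame):                 # cheap check runs continuously
            identity = recognize_face(frame)   # power up the vision processor only now
            if identity is not None:
                return identity                # e.g., authorize the payment
            # recognizer powers back down; fall through to the low-power loop
        time.sleep(period_s)
```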

To enable robots or drones to move through a crowded environment – perhaps on their way to deliver your lunch from a local restaurant or a package from your favorite store – multiple vision techniques need to be applied. Simultaneous localization and mapping (SLAM) is an algorithm coming out of robotics research that uses camera inputs to map the environment around the robot and the robot's position in that environment. While the robot can detect an object, it can't identify it. That's where CNNs come in, as CNNs are great at identifying objects. Combining SLAM with a CNN makes the robot much smarter about its environment. An EV72 – with two vector processing units – and a DNN3520 is well suited to a robotics or augmented reality application, combining SLAM on the vector processing units to map objects with the deep neural network accelerator to identify the mapped objects.
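Conceptually, the combination amounts to attaching CNN class labels to the landmarks the SLAM front end produces. The sketch below is purely illustrative, and the function names are hypothetical rather than part of any vendor API:

```python
def label_slam_landmarks(landmarks, classify_patch):
    """Attach CNN class labels to SLAM landmarks (conceptual sketch; all names are hypothetical).

    'landmarks' is an iterable of (position, image_patch) pairs produced by a SLAM front end;
    'classify_patch' stands in for a CNN classifier running on the neural network accelerator.
    """
    labeled_map = []
    for position, patch in landmarks:
        label, confidence = classify_patch(patch)  # e.g., ("person", 0.93)
        labeled_map.append({"position": position, "label": label, "confidence": confidence})
    return labeled_map
```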

Self-driving cars present additional challenges for embedded developers. Not only is the number of cameras in a car increasing, but the image resolution of each camera is increasing as well. And for a car to take over from a human, it has to operate with the utmost reliability, forcing high levels of fault detection and redundancy. Embedding a vision processor with up to 35 TOPS performance brings self-driving cars a bit closer. An EV74 with four vector processing units combined with the large DNN14K provides the performance needed for automotive front camera/pedestrian detection while meeting ISO 26262 functional safety guidelines (EV74DNN14KFS, Figure 5). To meet performance requirements beyond 35 TOPS – perhaps for a multi-camera automotive pedestrian detection system – the 35 TOPS DNN in the EV7x processor requires fewer instances connected to a network-on-chip (NoC) compared to competitive solutions. Fewer instances reduce NoC traffic, easing a potential performance bottleneck.


Figure 5: Embedded vision processor IP with safety features and safety buses brings self-driving cars closer to reality.

All these bandwidth reduction techniques pay off for low-end applications as well. Facial detection might require only 1 TOPS or less, but it is extremely power sensitive.

Summary
New CNN graphs, new bandwidth reduction techniques, and new hardware/software frameworks coming out of the latest academic research are being incorporated into new embedded vision processors such as the Synopsys EV7x. Integrating EV7x vision processor IP is enabling leading companies to deploy high-performance artificial intelligence SoCs for facial recognition, robotics, automotive, and other applications.

For more information on ARC EV7x processor IP with DNN accelerator, visit: https://www.synopsys.com/dw/ipdir.php?ds=ev7x-vision-processors.


