Self-driving cars and other applications call for more sophisticated vision systems.
Vision systems have evolved from cameras that enable robots to “see” on a factory floor into a safety-critical element of the heterogeneous systems guiding autonomous vehicles, along with other applications that rely on parallel processing to quickly recognize objects, people, and the surrounding environment.
Automotive electronics and mobile devices currently dominate embedded vision applications, but the technology is also making its way into industrial, medical, and security systems, among others.
Any technology worth watching must have an industry organization, and for this field, there is the Embedded Vision Alliance. The group has more than 50 corporate members, including leading chipmakers, IP suppliers, and electronic design automation vendors. Among them are AMD, Analog Devices, ARM, Cadence, CEVA, Imagination Technologies, Infineon, Intel, Mentor Graphics, Nvidia, NXP, ON Semiconductor, Qualcomm, Rambus, Renesas, Sony, Synopsys, Texas Instruments, and Xilinx.
Intel, for example, has been highly active in vision systems, offering a camera and acquiring startups specializing in vision. It bought Itseez in May and agreed to purchase Movidius more recently. The chip company is also working with BMW and Mobileye on self-driving cars. At the Intel Developer Forum in August, the company unveiled the Joule module, a small development board with vision capability.
Yankin Tanurhan, group vice president of engineering for DesignWare Processor Cores, IP Subsystems and Non-Volatile Memory at Synopsys, reviewed the history of vision technology at last month’s ARC Processor Summit in Santa Clara, Calif. “Embedded vision is coming fast,” he said, noting the market is forecast to have a compound annual growth rate of 35%, reaching more than $300 billion in 2020. “All that will need standardization, software, and machine learning.”
Jeff Bier, president of Berkeley Design Technology, which founded the Embedded Vision Alliance and manages the group, traced the evolution of vision technology in a presentation at the ARC event. Computerized vision became machine vision, notably for factory automation, and has now progressed to embedded vision, which has thousands of applications, he said.
Fig. 1: Using embedded vision for security. Source: ECV Project/Austria.
Microsoft’s Kinect technology, tied to its Xbox game console, showed how vision technology has advanced, selling 25 million units in two years, according to Bier. The Roomba robotic vacuum cleaner added vision last year.
Automotive systems are quickly adopting vision-based technology for greater safety, with features such as automatic braking and lane-departure warnings, Bier noted.
The necessary algorithms behind these car features “remain a big challenge,” the BDTI president said. “Deep learning will become a dominant technology, but not the only technology,” he predicted. “Computing will be divided between cloud and edge.”
The growth of computer vision will be analogous to that of wireless communications, Bier asserted. “Computer vision will become ubiquitous and invisible,” he said.
Considerations in embedded vision
Tim Hartley, senior product manager for ARM’s Imaging and Vision Group, said there are three main areas of consideration in embedded vision design.
“The first is heterogeneous design, where there is a wide range of processor types in embedded systems, and many of them play a role in efficient embedded vision systems,” Hartley said. “Getting them working together, with efficient transition of data from one to the other, is the key. So while dedicated/specialist processors are certainly useful, CPUs and GPUs also play an important role. The second is future proofing. Vision algorithm sophistication is proceeding at a breathtaking pace. Specialist processors might be ideal for what we understand today, but does a system design as a whole have enough flexibility to take on new algorithm types as they come along? The third is power efficiency. This is as important in most embedded vision systems as performance.”
On the latter point, Hartley commented, “Processor efficiency is improving all the time and a key component to better performance/watt. Good software design is also key. Many vision software frameworks that originated on desktop systems have not had to worry about power, and therefore don’t take it into account. For example, frame buffers are often copied from one memory location to another many times in a traditional vision pipeline. As well as taking time, this uses a lot of power through memory bandwidth. Implementing vision zero-copy pipelines, where the calculations are done in place wherever possible, will have a significant effect on power.”
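To make the zero-copy idea concrete, here is a minimal sketch in plain C, with hypothetical frame dimensions and a simple threshold stage; it is not drawn from any ARM framework, but it illustrates the difference between a stage that copies the frame to a second buffer and one that works in place.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame dimensions, for illustration only. */
#define WIDTH  640
#define HEIGHT 480

/* Traditional stage: writes its output into a second buffer, so every
 * stage in the pipeline pushes a full frame through memory again. */
static void threshold_copy(const uint8_t *src, uint8_t *dst, size_t n, uint8_t t)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (src[i] > t) ? 255 : 0;
}

/* Zero-copy stage: operates on the frame buffer in place, spending no
 * extra memory bandwidth shuttling pixels between buffers. */
static void threshold_inplace(uint8_t *frame, size_t n, uint8_t t)
{
    for (size_t i = 0; i < n; i++)
        frame[i] = (frame[i] > t) ? 255 : 0;
}

int main(void)
{
    static uint8_t frame[WIDTH * HEIGHT];   /* grayscale frame from a camera */
    static uint8_t scratch[WIDTH * HEIGHT]; /* only needed by the copying version */

    threshold_copy(frame, scratch, sizeof frame, 128); /* copying pipeline stage */
    threshold_inplace(frame, sizeof frame, 128);       /* zero-copy equivalent */
    return 0;
}
```

On an embedded device, the savings come less from the arithmetic than from the avoided trips to external memory, which is where much of the power in a vision pipeline is spent.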
Asked about key business and technology trends in embedded vision, Hartley responded, “There are many, but the most important technological development of course has been in the use of neural networks. Commonly run on servers, we are now seeing the inference stage, where detections are done against a pre-trained model, running efficiently on mobile and embedded devices.”
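As a rough illustration of what that inference stage amounts to on a device, the sketch below runs a fixed, already-trained model against a feature vector and picks the best class. The weights and feature values are invented for the example; a real deployment would export a trained network and run it through a vendor inference library rather than hand-coded loops.

```c
#include <stdio.h>

/* Toy "pre-trained" model: 2 output classes over 4 input features.
 * In practice the weights are produced by training offline (typically
 * on servers) and shipped to the device as constants. */
static const float W[2][4] = {
    { 0.8f, -0.3f,  0.5f,  0.1f },
    {-0.2f,  0.9f, -0.4f,  0.6f }
};
static const float B[2] = { 0.05f, -0.10f };

/* Inference only: multiply features by the fixed weights, add the bias,
 * and return the index of the highest-scoring class. */
static int classify(const float x[4])
{
    int best = 0;
    float best_score = 0.0f;
    for (int c = 0; c < 2; c++) {
        float s = B[c];
        for (int i = 0; i < 4; i++)
            s += W[c][i] * x[i];
        if (c == 0 || s > best_score) {
            best_score = s;
            best = c;
        }
    }
    return best;
}

int main(void)
{
    const float features[4] = { 0.2f, 0.7f, 0.1f, 0.9f }; /* e.g. derived from an image patch */
    printf("predicted class: %d\n", classify(features));
    return 0;
}
```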
Advanced driver-assistance systems are an important application in embedded vision, according to Hartley. “ADAS is obviously a key one, but people and object tracking in security, home monitoring, and retail analytics is also growing. Smart devices that incorporate a form of vision as part of a range of sensors are growing rapidly,” he said.
Dennis Crespo, product marketing director for Cadence’s IP Group, pointed to other key considerations in embedded vision. “First is power,” he said. “Second is performance. Power is over performance in this case. Third is ease of design. And fourth is complying with safety standards.”
Four years ago, “performance was lacking for automotive,” Crespo noted. “Today’s DSPs are well-suited. There has been a 30X improvement.”
Such increases in performance “will continue to happen over the next five to eight years,” he predicted. “The toolchain has matured so much in the last four years.”
DSP compilers have improved to the point that programming can be done in C rather than in assembly language, according to Crespo. “That would be key for the GPU in these applications,” he observed.
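For a sense of what that shift means in practice, a 3x3 convolution, a staple of vision pipelines, can now be written in plain C and left to the DSP compiler to vectorize. The function below is a generic sketch, not tied to any particular Cadence toolchain; a kernel like this would once have been hand-written in assembly.

```c
#include <stdint.h>

/* Generic 3x3 convolution over an 8-bit grayscale image, written in plain C.
 * A modern vision DSP compiler is expected to vectorize these loops. */
void conv3x3(const uint8_t *src, uint8_t *dst, int width, int height,
             const int8_t kernel[9], int shift)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int acc = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    acc += kernel[(ky + 1) * 3 + (kx + 1)] *
                           src[(y + ky) * width + (x + kx)];
            acc >>= shift;                     /* fixed-point scaling */
            if (acc < 0)   acc = 0;            /* clamp to the 8-bit range */
            if (acc > 255) acc = 255;
            dst[y * width + x] = (uint8_t)acc;
        }
    }
}
```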
Automotive and mobile are the main applications in embedded vision, the Cadence executive said. The cameras in smartphones and other mobile devices use embedded vision tech. What’s different these days are the data processing capabilities of CMOS image sensors and related chips. “You’re taking a bunch of pixels, changing these pixels, and getting a result,” Crespo said.
In cars, “algorithms are looking at data, objects of interest, and delivering results to the processor,” he added. Embedded vision is enabling gesture control for the car’s sound system, letting the driver change stations or volume without physically touching the radio. The technology also can recognize when the driver appears drowsy and trigger a warning, and it can read anger in a driver’s face and take measures, such as slowing the vehicle, to head off road rage.
Speed matters
Randy Allen, director of advanced research at Mentor’s Embedded Systems Division, said of embedded vision, “You’re forced into the world of parallel processing.” The technology calls for field-programmable gate arrays or perhaps small ASICs, he added. “Embedded vision is an exploding field.”
Allen stressed the importance of computation power and software in embedded vision. “Number one is enough processing power. Number two is not just computation power, but parallel processing,” he said. “The real winner is in the software, not so much the hardware.”
Regarding power consumption, Allen said, “Power is important, but things are changing.” The implementation of heterogeneous systems is reducing the power consideration. “Getting the performance is the main thing. Power is second.”
Allen noted that the leading application for embedded vision is automotive electronics, but he said it also is moving into industrial, medical, and security applications. “Embedded vision is a fluid type of field.”
For automotive, especially self-driving cars, the vehicle must be able to discern what objects appear in its environment, such as a pedestrian in a crosswalk or a bird in the roadway. Among automotive manufacturers, “there’s a big push for self-driving cars by the 2020 timeframe,” Allen said. “It’s such a disruptive change. Those carmakers that don’t get on the autonomous-vehicle bandwagon could find themselves in dire business straits. You can’t afford to not take a bet on it.”
Conclusion
The question that many companies involved in embedded vision are asking these days is how big this market ultimately will become, and which other markets will take advantage of this technology. Self-driving cars certainly will be a big consumer of it, but as it spreads into other markets it could spur entirely new applications.
After years of promise, this technology is suddenly very real, and it will only improve and grow from here.