Computer Vision’s Enormous Challenges Ahead

Needed are standards, platforms and tested approaches. Until then, expect some turbulence.


SAN FRANCISCO — There is a constant, humming tension between what Moore’s Law delivers and what consumers expect from electronics systems design.

We’re on the verge of seeing this play out over the coming decade in computer vision, an application with enormous potential to transform society. In the meantime, formidable challenges and decisions lie ahead on the road to that transformation.

Embedded Vision Alliance founder Jeff Bier argues that, from the hardware perspective, computer vision application designs are taking off because of enabling silicon like Texas Instruments’ C6000 single-core DSP, designed for high-performance, media-intensive streaming applications but also optimized for cost and energy efficiency.

This device recently crossed the point at which it can perform 10 billion multiply-accumulate operations per second. That’s key, according to Bier, because that performance level is a typical compute requirement for a vision application.
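
As a rough sanity check on that figure, consider a hypothetical workload, chosen purely for illustration: a 1080p stream at 30 frames per second passed through four 5×5 convolution filters.

```c
#include <stdio.h>

/* Back-of-envelope estimate of the MAC rate a modest vision
 * pipeline demands. The workload (1080p30 through four 5x5
 * convolution filters) is hypothetical, chosen for illustration. */
int main(void) {
    const double width   = 1920.0;    /* pixels per row            */
    const double height  = 1080.0;    /* rows per frame            */
    const double fps     = 30.0;      /* frames per second         */
    const double kernel  = 5.0 * 5.0; /* MACs per pixel per filter */
    const double filters = 4.0;       /* filters in the pipeline   */

    double macs = width * height * fps * kernel * filters;
    printf("Required rate: %.1f billion MACs/s\n", macs / 1e9); /* ~6.2 */
    return 0;
}
```

Even this simple pipeline lands within shouting distance of Bier’s 10 billion figure; add more filters or a higher resolution and it blows right past it.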

On top of that type of hardware enablement, systems companies are trying all sorts of approaches to balance high performance with low power for their applications. There are few standards and no given platform approach has won the day. It is, in a phrase, the Wild West of electronics design at the moment.

Consider some juxtapositions.

Dueling visions
Academia has long researched vision and audio processing, for example, and researchers there tend to throw enormous horsepower and software at complex problems to see what happens. Startups, by contrast, trying to exploit the hardware gains wrought by Moore’s Law, have a starkly different vision (pardon the pun) for how computer vision can and should be implemented: they need to get to market quickly and profitably.
Stanford Professor Andrew Ng, who is also chief scientist at Baidu, described some of the conflicting approaches to computer-vision problem solving.

At DAC this month, he cautioned against pursuing compute power for its own sake, even when it is aimed at thornier challenges such as unsupervised data analysis (that is, letting a system discover on its own the recurring patterns that make up, say, a human face or a cat, rather than first training it on labeled clues).
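
To make “unsupervised” concrete, the toy sketch below clusters raw image patches with k-means so that recurring structure emerges without any labels. It illustrates the general idea only; it is not code from Ng’s group, and the patch size and cluster count are arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 200   /* number of patches                             */
#define D 16    /* values per patch (e.g. a 4x4 grayscale patch) */
#define K 4     /* clusters ("features") to discover             */

/* Squared Euclidean distance between a patch and a centroid. */
static double dist2(const double *a, const double *b) {
    double s = 0.0;
    for (int i = 0; i < D; i++) { double d = a[i] - b[i]; s += d * d; }
    return s;
}

int main(void) {
    static double patch[N][D], cent[K][D];
    int label[N];

    /* Synthetic "patches": random pixel intensities in [0,1). */
    for (int n = 0; n < N; n++)
        for (int i = 0; i < D; i++)
            patch[n][i] = (double)rand() / RAND_MAX;

    /* Initialize centroids from the first K patches. */
    for (int k = 0; k < K; k++)
        for (int i = 0; i < D; i++)
            cent[k][i] = patch[k][i];

    for (int iter = 0; iter < 10; iter++) {
        /* Assignment step: nearest centroid, no labels involved. */
        for (int n = 0; n < N; n++) {
            int best = 0;
            for (int k = 1; k < K; k++)
                if (dist2(patch[n], cent[k]) < dist2(patch[n], cent[best]))
                    best = k;
            label[n] = best;
        }
        /* Update step: each centroid becomes the mean of its patches. */
        for (int k = 0; k < K; k++) {
            double sum[D] = {0};
            int count = 0;
            for (int n = 0; n < N; n++) {
                if (label[n] != k) continue;
                for (int i = 0; i < D; i++) sum[i] += patch[n][i];
                count++;
            }
            if (count > 0)
                for (int i = 0; i < D; i++) cent[k][i] = sum[i] / count;
        }
    }
    printf("Learned %d patch clusters without any labels.\n", K);
    return 0;
}
```

The centroids that emerge play the role of discovered features: nobody told the system what an edge or a blob looks like.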

Ng said:
“Hardware groups are building systems that can simulate a trillion connections. Those are good supercomputing results, but the relevance to Baidu, Google, Facebook, Microsoft or Apple is nonexistent.”
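
A quick size check shows why deployment is the sticking point, assuming (hypothetically) four bytes per connection weight:

```c
#include <stdio.h>

/* Storage cost of a trillion-connection network, assuming a
 * hypothetical 4 bytes per connection weight. */
int main(void) {
    const double connections    = 1e12;
    const double bytes_per_conn = 4.0;
    printf("Weights alone: %.0f TB\n",
           connections * bytes_per_conn / 1e12); /* ~4 TB */
    return 0;
}
```

Four terabytes of weights is a supercomputer’s problem, not a handset’s, which is Ng’s point.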

Michael B. Taylor, a professor in the computer science and engineering department at the University of California, San Diego, said during the same DAC panel that another challenge is that transistor density keeps scaling while energy efficiency does not: the problem of so-called dark silicon, a consequence of the breakdown of Dennard scaling.

“We have exponentially more transistors (but) we can’t switch them and can’t use them for computation,” Taylor said. “But we can use them for memory.”

“There’s a pretty interesting result that’s coming along, which is that we’re getting to the point where we’re not going to have to store all of our video off chip,” Taylor said, noting that those otherwise “dark silicon” transistors used as memory instead of logic will enable that. “We’re actually going to be able to fit a lot of it on chip. And that’s going to really help us with efficiency.”
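
Taylor’s claim is easy to quantify. Assuming (hypothetically) a 1080p frame stored as YUV 4:2:0, at 1.5 bytes per pixel:

```c
#include <stdio.h>

/* On-chip memory needed for one video frame, assuming a
 * hypothetical 1080p frame in YUV 4:2:0 (1.5 bytes/pixel). */
int main(void) {
    const double pixels          = 1920.0 * 1080.0;
    const double bytes_per_pixel = 1.5;  /* YUV 4:2:0 */
    printf("One 1080p frame: %.1f MB\n",
           pixels * bytes_per_pixel / (1024.0 * 1024.0)); /* ~3.0 */
    return 0;
}
```

Roughly three megabytes per frame is no longer absurd for on-chip SRAM, which is what makes the dark-silicon-as-memory argument plausible.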

Cadence Fellow Chris Rowen said on the panel, “We don’t lack for problems to solve. What we lack is the combination of architectures with the kind of efficiency that allows them to be deployed in mass quantities. It’s all well and good to say I have 1,000 servers and 16,000 processors, but I don’t want to do that on my wristwatch.”

He urged the audience to think about the problem along three axes:

  • What’s the architecture of the compute nodes? Is it a hardwired datapath? A general-purpose processor? Something in between?
  • How do we interconnect blocks with imaging and vision DSPs and how do they all talk to memory?
  • What algorithms and programming models should be used to get things done? OpenVX, he noted, is a “big step” in the right direction (a minimal sketch of its graph API follows below).
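
To make that last axis concrete, here is a minimal sketch of the OpenVX graph-based programming model. It assumes a conformant OpenVX 1.x implementation is installed; the image sizes and the choice of a Gaussian-blur kernel are arbitrary, for illustration.

```c
#include <stdio.h>
#include <VX/vx.h>  /* OpenVX header; requires a conformant implementation */

/* Minimal OpenVX sketch: build a two-image graph that runs a 3x3
 * Gaussian blur. The graph is verified once, so the runtime can map
 * it onto whatever DSP, GPU, or hardwired block is available. */
int main(void) {
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* 640x480 8-bit grayscale images; the sizes here are arbitrary. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* One processing node: blur the input into the output. */
    vxGaussian3x3Node(graph, input, output);

    if (vxVerifyGraph(graph) == VX_SUCCESS) {
        vxProcessGraph(graph);  /* execute on whatever the vendor mapped it to */
        printf("Graph executed.\n");
    }

    vxReleaseContext(&context); /* releases the graph and images too */
    return 0;
}
```

The design point worth noting: the application declares a graph of operations, and the vendor’s runtime decides whether each node runs on a DSP, a GPU, or a hardwired block, which is exactly the interconnect-and-mapping question Rowen raises.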

Computer vision is going to be an enormous engineering and market opportunity for semiconductor and systems vendors. And it’s going to be a wild ride until standards, platforms and approaches settle down.

Until then, strap in and hold on tight.


