Computer Vision Powers Startups, Bleeding-Edge Processes

It’s an exciting time to be involved in designing computer vision applications.


You can’t turn around these days without walking into a convolutional neural network… oh wait, maybe not yet, but sometime in the not-too-distant future, we’ll be riding in vehicles controlled by them.

While not a new concept, CNNs are finally making the big time, as evidenced by a significant upswell in startup activity tracked by Chris Rowen, CEO of Cognite Ventures. According to his website, there are nearly 300 such startups.

And if you happen to want to get involved, you should make your way to The City By The Bay (aka San Francisco), as this is where the bulk of development in deep learning and neural networks is taking place.

While some may be surprised that the nexus of this activity is not in the South Bay (aka Silicon Valley), Rowen noted that San Francisco is where software startups live, and that deep learning is more of a software phenomenon than an embedded-systems phenomenon.

“This is sort of Marc Andreessen saying, ‘Software eats the world.’ And it’s so true; there are so many opportunities to do so much innovation in software alone, given the emergence of new kinds of applications and algorithms and the availability of pretty decent hardware platforms for doing this. But remember that an awful lot of really interesting applications don’t require huge amounts of compute. If you’re Uber trying to run a worldwide network of drivers, that’s really not a very big compute problem. It’s not something where you’ve got to say, ‘Oh man, how am I going to push the envelope on performance per watt in order to be able to schedule taxis?’ It’s an important problem, it’s a valuable problem, it’s just not that compute-intensive a problem. There are lots of problems that fit into that kind of a category. At the same time, there are actually a lot of problems that are compute intensive.”

By his measure, something over one-third of all startups doing something serious in deep learning and neural networks are doing computer vision.

Computer vision is fundamentally a computationally hard problem and is likely to push the envelope pretty hard either in the cloud or in embedded devices, he said. “The great majority of computer vision companies and activities are in embedded systems, and that’s one where it’s not just whether you can come up with a good algorithm, but can you come up with a good algorithm that runs in real time at 30 frames per second and dissipates only a few watts? That’s a hard problem to solve, and that’s also pushing a lot of innovation on the hardware side as well. But in the grand scheme of all startups, it’s important to remember that for the embedded hardware pieces of it, maybe only one-quarter of the startups are doing something that is really embedded and are really exposed to the hard limits on cost and power.”
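Rowen’s “30 frames per second in a few watts” constraint can be made concrete with a back-of-envelope calculation. The figures below are illustrative assumptions, not from the article: a network costing roughly 5 GFLOPs per inference (ballpark for a ResNet-50-class model) and a 3 W power budget.

```python
FLOPS_PER_FRAME = 5e9   # assumed cost of one CNN inference (hypothetical)
FPS = 30                # real-time video rate cited in the article
POWER_BUDGET_W = 3      # "a few watts" for an embedded device (assumed)

# Sustained compute the device must deliver, and the efficiency
# (throughput per watt) the silicon must achieve to stay in budget.
required_throughput = FLOPS_PER_FRAME * FPS
required_efficiency = required_throughput / POWER_BUDGET_W

print(f"Sustained throughput: {required_throughput / 1e9:.0f} GFLOP/s")
print(f"Efficiency needed:    {required_efficiency / 1e9:.0f} GFLOP/s per watt")
```

Under these assumptions the device must sustain 150 GFLOP/s at 50 GFLOP/s per watt, which is why, as Rowen says, embedded vision pushes hard on performance per watt in a way that scheduling taxis does not.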

This statement made me question how the industry is currently looking at choice of manufacturing process nodes, given that it seems like not that long ago that automakers were counting on being able to use more established nodes for automotive ECUs.

Here, Rowen offered his thoughts on characteristics of the digital processing in a car:

“Number one, it is a computationally intensive problem. You’ve got lots of cameras that you have to operate in real time; you have to deploy whatever is the best-known, best-proven method for sensing and control of the system. That says that people are going to choose quite computationally intensive solutions. They are going to take it to the limit of their cost budget and their power budget because there is a need and a hunger to have the best possible performance of the system, the lowest error rates, the greatest redundancy, and those are pushing all the buttons for semiconductors that we have long pushed and for which we have long depended on Moore’s Law to help us out as much as possible.”

Moreover, it’s very compute intensive, it’s very power intensive, and, relatively speaking, it’s not that cost sensitive, he continued. “That is, if you give somebody the choice between a $10 piece of silicon and a $15 piece of silicon, and the $15 piece of silicon is 10% better in some respect — I think every automaker on the planet will choose the $15 piece of silicon.”

He admitted that’s obviously a broad generalization, but in something as critical as the self-driving subsystem of an automobile there’s a lot at stake, so being at the bleeding edge does make sense in that kind of a system. “After all, even something which is a bit more cost sensitive, like mobile phone chipsets, will I’m sure go to the leading-edge nodes as quickly as they can get there, because they too want that combination of power efficiency and computational throughput to solve that problem. It’s not something where people will want to save five bucks and be satisfied with 90% of the performance,” Rowen continued.

At the same time, won’t the volumes be much different in cars compared to cell phones, and won’t that have a bearing on the decisions?

“Yes, I think it will have a bearing, and I think it’s true that the volumes today for reasonably advanced driver assistance systems are measured in millions, not hundreds of millions, of units. But ultimately you’ve got two things going on: there’s a rational expectation that those volumes will grow as advanced driver assistance becomes a standard feature, so that it does eventually, over the course of 5 to 10 years, become pretty ubiquitous in 100 million cars,” he offered.

Further, the total silicon content — even the total advanced digital silicon content — in a car is likely to be a good bit more than the total content in a phone. “You just have more cost budget, you’ve got more dimensions of functionality, so the total value of silicon in cars will probably remain less, I would guess, than the total value of silicon in mobile handsets, but not by a huge factor. It’s actually a pretty good-size market, and so people are leaning in a bit, in the sense of investing more than is strictly rational given the volumes the market will demand in the short term. But there’s enough competition, enough strategic awareness, that people are willing to somewhat over-invest because it’s a growth area, compared to a mature market like mobile handset chipsets, which are not growing at a tremendous rate, and no one is particularly interested in buying market share in a relatively slow-growing market. But in automotive electronics I think people are willing to, and that leads them to over-invest a little bit and to spend money on leading-edge node work, all things being equal. That’s my interpretation of why people would say they must be at 7nm,” Rowen said.

At the same time, he said that perhaps the automotive industry was hopeful they didn’t have to go there, “but any place like that is kind of an arms race, and the fact that TSMC and others have widely promoted their advanced nodes means that everybody sort of assumes it’s there. Once one vendor says, ‘My strategy is built around 7nm,’ everybody else is saying, ‘Do I want to operate at a 20 or 30% disadvantage in power and performance because I’m not willing to say that I will go there?’ I think it’s also true that the total cost of developing one of these platforms — an ADAS, for example — is pretty large compared to the additional cost of taping out a chip in 7nm versus 16nm. Yes, it’s many millions of dollars more expensive, but think about all the other crazy things that people are doing to win market share and mind share in sockets in automotive. An extra tape-out or two at 7nm — even if I pay, I don’t know, an extra $7 million to do it at the leading edge rather than one node back — that’s probably more than tolerable for the concrete benefits and the specsmanship benefits.”

Norman Chang, chief technologist for the semiconductor business unit at ANSYS, agreed it is imperative for automotive to be at the bleeding edge.

“If you look at the requirements of all of the recognition systems, there is a tremendous number of tasks that need to be performed by the AI system, so you need the state of the art, the latest process node technology, and to push the technology forward. You need to pack a lot of functionality into the chip. If you use 28nm, how many chips would you need to do the same kind of task? If you use a 12nm or 7nm node, you can have a more efficient design by achieving the tremendous number of tasks you need to achieve, in real time. Remember, the ADAS system is a real-time system: it has to evaluate the environment, recognize the environment, and actuate within the environment. You have three real-time tasks you need to perform, and that’s why you need to push the technology node,” he concluded.
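Chang’s three real-time tasks — evaluate, recognize, actuate — amount to a deadline-bound loop running once per camera frame. A minimal sketch, in which every function body is a hypothetical placeholder rather than real ADAS code:

```python
import time

FRAME_BUDGET_S = 1 / 30  # one camera frame at 30 fps

def sense():
    # Placeholder: acquire camera/sensor data for this frame.
    return "frame"

def recognize(frame):
    # Placeholder: run the recognition network on the frame.
    return {"obstacle_detected": False}

def actuate(scene):
    # Placeholder: issue steering/braking commands from the scene.
    pass

def run_one_cycle():
    """Run one sense-recognize-actuate cycle and check the deadline."""
    start = time.monotonic()
    actuate(recognize(sense()))
    elapsed = time.monotonic() - start
    # A real ADAS pipeline must meet this deadline on every cycle,
    # which is what drives the demand for more compute per watt.
    return elapsed <= FRAME_BUDGET_S
```

The point of the sketch is that all three tasks share one fixed frame budget; a denser process node buys headroom inside that budget rather than changing the structure of the loop.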
