Experts at the table, part 2: Inflection point in semiconductor business models; mammals and insects; sensor networks; big data.
Semiconductor Engineering sat down to discuss upcoming challenges and hurdles to overcome for the semiconductor industry with Vic Kulkarni, senior vice president and general manager, RTL Power Business at Ansys; Chris Rowen, Fellow and CTO, IP Group at Cadence; Subramani Kengeri, vice president, Global Design Solutions at GLOBALFOUNDRIES; Simon Davidmann, CEO of Imperas Software; Michael Buehler, senior director of marketing for Calibre Design Solutions at Mentor Graphics; and John Koeter, vice president of marketing, IP and prototyping at Synopsys. What follows are excerpts from that discussion. For part one, click here.
SE: Are we at a point where very significant business model changes are occurring in the semiconductor industry?
Rowen: It’s exactly the place where I think of this analogy to mammals and insects. Much of the EDA industry, much of the semiconductor industry, is focused on large-scale SoCs: lots of interfaces, lots of generality. These are the apps processors, the PC platforms, the server platforms, the gateways. They are extreme-scale SoCs, and all the things people say about how hard EDA is, how hard design is, or how hard verification is gravitate around these many-hundreds-of-millions-of-gates designs at bleeding-edge nodes. Those are the mammals. They are general-purpose, they are intelligent, but there are remarkably few species: only about 5,000 species of mammals. The flip side is the insects. They are highly evolved, they are niche-y, they are physically small, they are very good at what they do, and they don’t make very many claims about generality. They work extremely well in the context of a small ecosystem, and, to extend the analogy, there are more than a million species of insects. That suggests that if we can figure out the business model to enable smaller teams, with either smaller ASPs or smaller volumes, probably not both, to really attack these things, there is a potential renaissance/explosion/big bang of design that can take place, driven particularly by the systems companies that have some know-how, some brand, or some data set that they are then able to exploit and expand. So you have, hypothetically, a Johnson & Johnson saying, ‘You know what? I know more about what a smart Band-Aid should look like than anybody, and I need to design something that enables my smart Band-Aid.’
Koeter: How they want to monetize that may not be in a traditional semiconductor model.
Rowen: You still have the problem that they’re going to need to design very productively at relatively lower cost, and we have to go figure out what does ‘IP’ mean, what do ‘tools’ mean, what do ‘fab services’ mean, what does ‘back end’ mean in that environment?
SE: What is the definition of a system, then?
Buehler: You need them both. The insects don’t live without the mammals. The way designers are keeping their costs down is by going to the older nodes, but the amount of IP they are putting on those chips is not the same. The complexity in established nodes is now orders of magnitude greater than what 65nm was [originally].
Kengeri: If you look at the CPU, for a long time it drove technologies. There was an attitude of, get the gigahertz at any cost, and they were able to monetize that, so that was all working very well. Then there was mobility. The definition changed. The technology drivers were now mobility, and the new technologies were all optimized for iPhones, or whatever else. And the criteria were different. For the CPUs it was performance at any cost; here it was making sure to get the right power, the right form factor, and so on. The next wave is all the things you are talking about, the IoT, which is very different from those two for many reasons. Number one, it is fragmented. It goes from sensors to servers; you have to look at optimizing across that, and there is no single technology. All kinds of things get thrown into the system, if you want to call it that, where you cannot have a single technology support every single piece of the solution; whether it is IP or even testing, it is all very different. As far as monetizing in the value chain, that is how it has always been: devices have been the enablers, but services and everything around them are really how the money is made in the industry. Integration-wise, I think where it is heading now is heterogeneous integration. It is inevitable, because you have everything from MEMS and sensors to power management to RF, and you simply cannot have one single optimal node on which to do monolithic integration. There is a lot of innovation where we see heterogeneous integration becoming more and more important.
Kulkarni: To that end, what I was thinking of is big data analytics. When you look at massive sensor node networks managing, say, the IPv6 protocol, each sensor node is accessible from anywhere in the world; that is on the order of 10 to the power of 38 (2^128) available IP addresses to monitor everything we will do. For example, we have about 60 automotive customers (counting both Apache and Ansys), which is the ecosystem from Bosch to BMW to Audi and so on. Just looking at the new BMW i8, there are 119 processors in it, and while they won’t tell us exactly how many sensors, almost everything in that car has them, from tires to seats. To manage that data as IP addresses, you then create levels of data analytics: descriptive analytics, telling you where the hotspots are or where the electromigration problem will be; predictive analytics, to say there is an EM problem on this line at the micro level, which will cause failure, so watch out. The next level, where a lot of the high value is created, is prescriptive analytics: telling you to avoid the actions that will cause those problems. It’s a multi-physics problem, connecting the dots between thermal and voltage drop: energy efficiency will impact timing, which will impact voltage drop, and so on. You add low-power techniques, and they add more static leakage; the same goes for all the things you add to make your chips energy efficient. In a recent example, an automotive designer had a big problem in the noise spectrum. In the automotive world, they could not ship the infotainment system in Japan because there was a noise spike in the 65 MHz region.
Looking at the chip, which contained an RF block alongside a very aggressive digital processor that was injecting substrate noise, we found the hot spots at RTL; that was the EDA tool’s contribution. Beyond that, the designer added his own value by changing the architecture from floating-point to fixed-point arithmetic. He reduced the hot spots and the noise was gone, down by 20 dB, and hence he could ship in Japan. There, the system knowledge came in: EDA knowledge, physics knowledge, noise-injection knowledge, all coming together with the technology node. That showed the reality that when we say ‘IoT,’ it’s not just a word; there are many pieces.
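As a side note on the magnitude of that fix: the 20 dB figure can be turned into a linear ratio with a one-line conversion. The sketch below is purely illustrative arithmetic, not part of the discussion; it shows that a 20 dB noise reduction corresponds to a 10x drop in amplitude, or 100x in power.

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a decibel change to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

def db_to_power_ratio(db: float) -> float:
    """Convert a decibel change to a linear power ratio (10*log10 convention)."""
    return 10 ** (db / 10)

# The 20 dB noise reduction mentioned above:
print(db_to_amplitude_ratio(20))  # 10.0  -> noise amplitude fell by 10x
print(db_to_power_ratio(20))      # 100.0 -> noise power fell by 100x
```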
Rowen: You can raise the question about sensors, and I think there’s a really big thing happening that you can get insight into by looking at sensors. Not only is there a great variety of sensors, but if you actually look at what kind of data is being produced by the sensors, and in what volume, what you quickly see is that we have environmental sensors, motion sensors, microphones, and image sensors. And the image sensors are unique, because the amount of data that comes off of them is enormous. Already today, 80 to 90 percent of the traffic on the Internet is image or video of some kind, and we’re only just at the dawn of image and video analysis, so it’s a reasonable prediction that the allocation of compute cycles will come to match the volume of data on the Internet. In fact, if you look, in aggregate, at how many sensors times how much data, we’re soon going to be producing 10 to the 27th or 10 to the 28th bytes of data a year. Those numbers are big enough that we don’t have one of those nice Greek words to describe 10 to the 27th. It’s somehow a shitload of data, and what it means is that the nature of computing for many of these devices, at the cloud level, at the gateway level, at the device level, will shift in favor of vision as being very fundamental to what’s going on. Not to the exclusion of all the other protocols and analysis, but in terms of where you’re going to spend your power, your silicon, your software design time, and some of your hardware time, where are the innovative possibilities emerging? I think a lot of it will be driven by the data, because we’re in this sensing world. And it will also govern, to a very large extent, what you do in the cloud versus what you do locally. Because in part, when you start talking about video, you recognize that you cannot possibly afford to send a raw 10 to the 27th bytes of data a year up to the cloud.
You go do that math and you say, it ain’t going to work. Or you look at it from a latency perspective: I need instant response, and I’m not going to be able to send my bits to The Dalles, Oregon [where Google has a datacenter] and back in order to do anything with it. It’s going to affect many of the underlying principles of architecture, and ultimately what tools we need to build and what silicon we buy.
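Rowen's "do the math" point can be made concrete with back-of-envelope arithmetic. The figures below (camera count, per-camera bit rate) are illustrative assumptions, not numbers from the discussion; the point is only that raw video from billions of image sensors yields a yearly byte count no plausible uplink can carry, which is why processing moves to the edge.

```python
# Back-of-envelope: why raw video can't all go to the cloud.
# All input figures are illustrative assumptions.
cameras = 5e9                      # assume ~5 billion image sensors deployed
bitrate_bps = 8e6                  # assume ~8 Mb/s of video per camera
seconds_per_year = 365 * 24 * 3600

bytes_per_year = cameras * (bitrate_bps / 8) * seconds_per_year
print(f"{bytes_per_year:.2e} bytes/year")  # on the order of 1.6e+23

# Even these conservative assumptions sit within a few orders of magnitude
# of Rowen's 1e27-1e28 projection once sensor counts and resolutions keep
# growing -- hence the pressure to analyze video locally rather than
# ship raw bits to a datacenter.
```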
Buehler: One thing about MEMS sensor design kits: every one is different. That doesn’t monetize today for the EDA companies. We spend half our time working with Subi [from GlobalFoundries] on building a MEMS design kit, but for one tape-out. Then we do another. So we are working on how to optimize that solution, because MEMS sensor devices are cheap but the work to develop the design infrastructure is tough. Another concern is power. You get a lot of buzz if you put ‘silicon photonics’ on a panel, and you get a ton of people because it’s allegedly nirvana, but nobody so far has signed up from a foundry standpoint because they are tiny die; even at 5 to 8x the price of silicon, they are still tiny die. You look at the work on that, and it would address the power issue; it would handle the data transmission. There are big proponents of photonics, since it is one way to address power. We’re seeing a lot of interest, a lot of research, a lot of government programs driving photonics.
Kulkarni: The data itself has three axes, the so-called three Vs: velocity (real time vs. semi-real time vs. file-based); volume (the exabytes and petabytes of data); and variety (video frames, pictures, and so on). The variety axis requires a new kind of database for EDA tools themselves, which is one of the reasons Ansys bought a small company with a big-data database. We learned this from Yahoo; they get one billion images per day. If you want to search for ‘black cat,’ say, the system has to look at a billion pictures randomly thrown at YouTube or Yahoo’s Pictures, plus audio clips, and so on. This means the variety of data is going to be important. As for the sensors, UC Irvine Professor Mark Bachman keeps a list of his 13 favorite sensors in the world, with the IoT world all around them. It’s a very eye-opening list of sensors, they are used in almost all IoT applications, and he has been doing sensors all his life. We are trying to see how we can get EDA tools to support that kind of system. For example, there is a lab I used to work in in my younger days in Cincinnati. I went back there last year, and the same lab has been converted to work on a type of sensing that goes into a patch. They have microfluidic sensors, ARM-based processors, WiFi, all built into a patch just for athletic performance. There’s so much technology built in, in terms of new microfluidics techniques, because our sweat contains tremendous minerals and tremendous value, almost as much as blood or tears (which Google is thinking about). There’s a lot of research going on to which we can apply EDA finite element analysis, and we can apply our own approaches to multithreaded computing; we are doing one multithreading project with 2,000 GPU cores for wing-span analysis of an Airbus. It’s very fascinating to ask why we in EDA have not moved quickly into reducing the volume of data we create in our own products. Many times it’s not usable by designers.
SE: Why hasn’t EDA moved into that space?
Kulkarni: Inside, we have a joke with our R&D guys: job security; they make it so complicated. Most of the PhDs we have tend to optimize for engineering elegance in the EDA world, as opposed to user experience. That mindset is what has to change. How do you enable designers’ choices, like the guy who reduced the noise spectrum for automotive? We didn’t help him; it was his idea. That could be one area where we all can add value.
Buehler: Have you had a meeting recently where someone comes in and argues about the architecture or algorithms of your code? Ten or 15 years ago, you had nine guys who would say, ‘Here, your coding is not optimized, and I have some suggestions.’ Now it’s, ‘I’m a systems house. My job is to get a chip out. I press the button and it doesn’t work, so help me!’ It is a completely different engagement model.
Rowen: I’ve had an up-close experience over the last two years of exactly this phenomenon. Tensilica is very much an application-driven company, because we build application-specific processors, so we are very invested in that. I think that was one of the driving forces behind why Cadence acquired Tensilica. It has created this opportunity to be an agent of change inside Cadence, to really say, let’s take a true application perspective. It’s interesting going into an organization with a lot of momentum, a lot of infrastructure, and a lot of prior knowledge of EDA as being about pushing polygons around, and to incrementally change it into an application-driven company. When you talk about IoT, that means asking: what algorithms are you running, what applications are running, what is the user interface, what data is being communicated? Not: how is the latest version of UVM or UPF going to change this world?
Kengeri: I want to go back to some basics here. For the R&D, for the research to continue, we are spending billions of dollars, and so are other people, and you have to feed that back, which means you have to be able to monetize this whole thing. If you look at the way it is going today, with IoT being the killer application, it’s really low cost, and it’s going to be difficult for everybody in the value chain to make money. That’s going to be a real problem for all of us. Of course it is driving efficiencies and innovations, and we’re doing everything we can to continue to monetize, but I see it as a bigger issue: if that feedback into R&D funding slows down, Moore’s Law will slow down automatically, in a completely different sense.
Davidmann: You have to start understanding your customers better, and then it becomes a real partnership. I think that will change the way it is monetized. Instead of point tools, it becomes a solution.