Experts at the Table, part 2: Understanding 5G’s benefits, limitations and design challenges.
Semiconductor Engineering sat down to discuss 5G and edge computing with Rahul Goyal, vice president in the technology and manufacturing group at Intel; John Lee, vice president and general manager of the semiconductor business unit at ANSYS; Rob Aitken, R&D fellow at Arm; and Lluis Paris, director of IP portfolio marketing at TSMC. What follows are excerpts of that conversation. Part one is here.
(L-R): Rahul Goyal, Rob Aitken, John Lee, Lluis Paris
SE: How do you see 5G playing out? The signals don’t travel through windows or over very long distances. What does this mean for design?
Aitken: The people wandering around and the windows of buildings are all part of the environment that a 5G device lives in, and that environment is assumed in a lot of our methodology. That’s where the corners come from: 100°C or 140°C or whatever it happens to be. 5G exemplifies a variable environment, where the RF environment the device lives in is not pure and pristine, with waves exactly the same in all directions at all times. Stuff comes and goes. And that’s just an extreme example of the thermal issues devices face, where you may have thermal disparities across a chip or a package that are meant to be accounted for in design margins. So you may not have expected that one side of a chip would be 30°C hotter than the other side. There’s a need to think about what influence the environment has on a design, whether it’s 5G or a normal SoC.
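To make the margin point concrete, here is a minimal sketch with entirely assumed numbers (the temperature coefficient, slack and corner are illustrative, not from any real flow) of how an unexpected on-die gradient can consume timing margin that was budgeted at a single temperature corner:

```python
# Minimal sketch (hypothetical numbers): how an unexpected on-die thermal
# gradient can erode timing margin that was budgeted at one corner.

SIGNOFF_TEMP_C = 100.0          # assumed temperature corner used at sign-off
DELAY_TEMPCO_PER_C = 0.002      # assumed +0.2% path delay per degree C
SIGNOFF_SLACK_FRACTION = 0.05   # assumed 5% timing slack at the corner

def extra_delay_fraction(local_temp_c: float) -> float:
    """Additional path delay, as a fraction, above the sign-off corner."""
    return max(0.0, (local_temp_c - SIGNOFF_TEMP_C) * DELAY_TEMPCO_PER_C)

# One side of the die runs 30 C hotter than the corner assumed everywhere.
hot_side_temp = SIGNOFF_TEMP_C + 30.0
penalty = extra_delay_fraction(hot_side_temp)

print(f"Hot-side delay penalty: {penalty:.1%}")
print("Margin survives" if penalty < SIGNOFF_SLACK_FRACTION else "Margin consumed")
```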
Goyal: It’s an optimization problem. You have to do both analog and digital, and you have to go there because of that spectrum. You have to design for an environment that has physical barriers, and you have to work around that with different techniques, and then you have to test it to make sure these chips are useful. This is much higher performance than what we have today, with very, very high data speeds of 10 gigabits per second or higher. That will require much more design and test, and there is a cost to that. All of this has to be done within the bandwidth requirements we have and the spectrum available. We are working on that with our partners in the ecosystem.
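As a rough illustration of why the available spectrum matters at these data rates, the Shannon bound C = B * log2(1 + SNR) gives the minimum bandwidth a 10 Gb/s link needs; the SNR values in this sketch are illustrative assumptions, not measured figures:

```python
# Minimal sketch: Shannon capacity, C = B * log2(1 + SNR), used to estimate
# how much spectrum a 10 Gb/s link needs at different signal-to-noise ratios.
import math

TARGET_BITRATE_BPS = 10e9  # 10 gigabits per second

for snr_db in (10, 20, 30):                             # assumed link SNRs
    snr_linear = 10 ** (snr_db / 10)
    bits_per_hz = math.log2(1 + snr_linear)             # spectral efficiency bound
    required_bw_mhz = TARGET_BITRATE_BPS / bits_per_hz / 1e6
    print(f"SNR {snr_db:2d} dB -> at least {required_bw_mhz:,.0f} MHz of spectrum")
```

Even at a generous 30 dB SNR the link needs on the order of a gigahertz of spectrum, which is why these data rates push designs toward wide millimeter-wave allocations.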
Paris: At the device level, the spectrum is important and the frequencies are important. We have seen very good results with finFETs. If you need the highest frequencies, planar devices at 28nm and 22nm are becoming very good. You can create 22nm devices today, and there is a lot of work going into improving those models. That is quite powerful. And from the device point of view, from the RF itself, it looks pretty doable.
Lee: We’ve acquired OPTIS, which, aside from adding optical simulation to our portfolio, is focused on virtual real-world simulation. You can think of a gaming system with advanced physics added into it. So with autonomous driving, your LiDAR system and optical camera system now have to go through extreme conditions like snow and fog. The same approach needs to be applied to RF systems. Applying virtual environments is ultimately better because we can automate that and run large compute loads across all the possible scenarios that 5G might go through, instead of going to physical test. The flip side of 5G that is emerging is that, if you look at high-speed SerDes and other high-speed digital circuits, the traditional parasitic extraction techniques may not apply anymore. Traditional capacitive coupling assumptions probably will be broken by the fact that there is now much more inductive coupling. How do you handle inductive coupling? It’s not localized the way capacitance is. That’s an emerging challenge we have to solve.
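A minimal sketch of why inductive coupling resists the localized treatment that works for capacitance: mutual inductance falls off slowly with spacing, so even a distant aggressor can induce noticeable noise through V = M * dI/dt. All values below are assumptions chosen only for illustration:

```python
# Minimal sketch (hypothetical values): capacitance to a non-adjacent wire is
# usually negligible, but mutual inductance decays slowly with distance, so
# far-away aggressors still matter: V_induced = M * dI/dt.

AGGRESSOR_DI_DT = 1e-3 / 20e-12   # assumed 1 mA current edge in 20 ps (A/s)

# Illustrative mutual inductance vs. victim spacing, in nH, decaying slowly.
mutual_inductance_nh = {1: 0.50, 2: 0.35, 4: 0.22, 8: 0.12}

for spacing, m_nh in mutual_inductance_nh.items():
    v_noise = m_nh * 1e-9 * AGGRESSOR_DI_DT   # induced noise, volts
    print(f"Victim {spacing} tracks away: ~{v_noise * 1e3:.0f} mV induced")
```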
SE: Over the past year, people have started realizing there’s far too much data being generated to move it all to the cloud. 5G was supposed to make that possible, but the reality is that isn’t going to happen because it’s too expensive to move all that data. So how does that affect design?
Aitken: The definition of ‘edge’ changes dramatically across business segments. In autonomous driving, the edge is a car. In a wireless sensor node, the edge is an RF power device that sits on your wall. You have a range of things that is potentially the edge, and there’s a recognition that you can’t send everything to the cloud. You need to do some kind of hierarchical processing. You need the hardware to support that, and you also need a software environment that supports it. Just saying you’re going to put a high-powered processor over here and run Linux on it isn’t enough. You have to make sure that system integrates with everything else you have in whatever node you’re designing, and you have to somehow make sure all of this stuff can come together and continue to operate with minimal human intervention. If you’re going to have a trillion-node Internet and it’s going to take an hour of someone’s time to set up each of those nodes, we’re all going to be very busy.
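A minimal sketch of that hierarchical-processing idea, with an assumed anomaly threshold and payload format: the edge node reduces a raw sensor stream to a summary plus the interesting points, and only that small payload moves up toward the cloud:

```python
# Minimal sketch: an edge node summarizes a raw sensor stream locally and
# forwards only a compact payload. Threshold and fields are assumptions.
from statistics import mean

ANOMALY_THRESHOLD = 95.0   # assumed limit; readings above it are forwarded raw

def process_at_edge(samples: list[float]) -> dict:
    """Return the small payload the edge node would actually transmit."""
    anomalies = [s for s in samples if s > ANOMALY_THRESHOLD]
    return {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "max": max(samples),
        "anomalies": anomalies,      # only the interesting raw points
    }

raw = [72.1, 73.4, 71.9, 98.6, 72.3]    # raw readings stay at the edge
payload = process_at_edge(raw)
print(payload)                           # far smaller than the raw stream
```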
SE: We have very sophisticated tools for designing chips, but results vary greatly. Why?
Aitken: We were trying to figure out whether, when people sign off at a particular frequency, they expect the chip to work at that frequency. We went to a variety of customers, and some of them said that when they sign off at X gigahertz, they know it will always work 20% faster than that. Then we went to other customers who said they sign off at X gigahertz and it never works at that frequency. In fact, it works 10% slower. A piece of it involves design methodologies, decisions and practices that have a very significant effect on the result. Part of designing things safely involves knowing which of those exist in your organization, understanding what effect they have on the result, and changing them if necessary.
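The arithmetic behind those two sign-off cultures, with an example sign-off frequency, looks like this:

```python
# Minimal sketch of the two sign-off cultures described above: one shop's
# silicon always runs 20% faster than its sign-off frequency, the other's
# runs 10% slower. The sign-off frequency is an example value.

signoff_ghz = 2.0   # illustrative "X gigahertz"

conservative_silicon = signoff_ghz * 1.20   # silicon 20% faster than sign-off
aggressive_silicon   = signoff_ghz * 0.90   # silicon 10% slower than sign-off

print(f"Sign-off: {signoff_ghz:.2f} GHz")
print(f"Shop A silicon: {conservative_silicon:.2f} GHz (hidden 20% guard band)")
print(f"Shop B silicon: {aggressive_silicon:.2f} GHz (needs derating after sign-off)")
```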
Lee: We have better tools today to understand variability, so you can see the sources and whether the variation is caused by process, for example. Applying those better techniques in the workflow is something we’re focusing on. We’re not there yet, but we’re working on it.
Goyal: It takes a bit of corporate learning. Our designers were able to optimize memory and make sure everything comes together at the same time. One simulation looks better, the other looks worse. As long as you verify everything and realize when you’ve gone too far, you can relax on the other side. Sometimes you catch it early, sometimes you don’t. But you make sure you don’t make that mistake again.
SE: So what’s changing on the design side in terms of tooling?
Goyal: Our product portfolio has broadened in the past year. We have gone from a PC-centric company to a data-centric company, and we’re designing at all optimization points, from the smallest chip for handhelds all the way up to high-performance computing. That requires a tremendous amount of work and many engineers doing that work. When you look at the R&D, that’s what drives our EDA spending, and the techniques you need for each area are different. Take emulation, for example. We were using that more for post-silicon, but now we’re using it earlier and earlier to find hardware bugs. You use those techniques up front, and that drives up our spending. Now we are looking at all of these things and making sure we are more efficient at it.