End User Report: Things To Do With Multicore

Freescale’s general manager talks about what’s changing in tools and design requirements in the communications world.


System-Level Design sat down with Lisa Su, senior vice president and general manager of Freescale’s networking and multimedia group, to talk about changes in the communications sector and how they’re affecting design. What follows are excerpts of that conversation.

By Ed Sperling
SLD: Where does multicore fit into the Freescale world?
Lisa Su: The difference between us and AMD, Intel, and IBM is that they’re focused on multicore for the compute market. Freescale is focused on multicore for the networking market. Our focus is wireless infrastructure and network routers. Multicore is certainly prevalent in our world. We have an 8-core device running each core at 1.5GHz. We’re also in the DSP space with multicore, because of all the parallelism there. We have a six-core DSP.

SLD: What’s changing as a result?
Su: With all of the competing standards in base stations, network equipment providers are looking at how to reuse their hardware investment, and operators are looking at how to maximize their CapEx investment. The trend is toward common platforms that allow you to run multiple standards on a single platform. If you think about 3G, wideband CDMA is where a lot of the CapEx investment is going on the operator side. But LTE is right on the horizon, and there is a great desire to be able to use the same common platform for 3G, LTE and, in some cases, WiMAX. You need to get the performance of LTE and satisfy the cost points for 3G. Multicore is our strategy for doing that.

SLD: But don’t you still have the same challenges as the multicore computing world, such as parallelization of applications?
Su: We do have similar challenges and the results are mixed. On the signal processing side we’re able to take advantage of the parallelism. On the general-purpose processing side there is the challenge of fully using the processing capability. If you have an 8-core processor, you want 8 times the performance. There is a lot of work going on in tools, though. From a tooling standpoint, we’re getting better. Virtualization is a key piece, as well—being able to put applications on different slices of the chip. Multicore is taking off in communications right now. There is a lot of work by the OEMs to be able to convert their code.
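The gap she describes between core count and delivered performance is the classic Amdahl’s-law effect (not named in the interview): any serial fraction of the workload caps the speedup, no matter how many cores you add. A minimal sketch:

```python
def amdahl_speedup(n_cores: int, parallel_fraction: float) -> float:
    """Ideal speedup on n_cores when only parallel_fraction of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 95% of the workload parallelized, 8 cores deliver well under 8x:
# amdahl_speedup(8, 0.95) ≈ 5.9
```

This is why "8 cores, 8x performance" is the hard part: the speedup ceiling is set by the serial remainder of the code, which is exactly what the tooling and OEM code-conversion work she mentions tries to shrink.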

SLD: Frequencies seem to be climbing again on individual cores. When do we blow the power budget?
Su: The frequency will go up, but not tremendously fast. You can put more cores on the chip. We built a data path infrastructure that manages the data across the cores. We believe you can scale up to 32 cores, if you want to. The real challenge is whether to use all those cores, or whether it’s better to improve performance in each of the cores—which is what we’re hearing from our customers. They can’t evolve their software that fast.

SLD: Is part of the strategy also to use cores for acceleration of processing?
Su: Yes, we can run an asymmetric processing mode where one master core directs the others. What’s interesting here is that not only do we have eight cores, but we also have special acceleration engines, such as security accelerators, on the chip. You can run workloads on the general-purpose cores or on these specialty engines.
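That asymmetric split can be sketched roughly as follows (all names here are illustrative, not Freescale APIs): a master classifies incoming work and routes offloadable items to an accelerator queue, everything else to the general-purpose cores.

```python
def dispatch(items, general_queue, accel_queue):
    """Master-core routing sketch: offloadable work goes to a specialty engine's
    queue, the rest to the general-purpose worker cores."""
    for item in items:
        if item.get("kind") == "crypto":
            accel_queue.append(item)    # e.g. a security-accelerator engine
        else:
            general_queue.append(item)  # one of the general-purpose cores

packets = [{"kind": "crypto"}, {"kind": "route"}, {"kind": "crypto"}]
gq, aq = [], []
dispatch(packets, gq, aq)
# aq now holds the two crypto items; gq holds the routing item
```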

SLD: Are the cores homogeneous?
Su: When we call the devices 8-core, those are homogeneous. But there are additional acceleration devices. My view is that heterogeneous will become more popular as the applications become more specific.

SLD: That’s a much harder design problem though, isn’t it?
Su: Yes, but when you have high-volume applications it makes sense to do it that way because it’s the most optimized solution. When you’re in the infancy of a market, homogeneous is much easier.

SLD: Multicore also makes it harder to schedule shared resources on a chip. What’s new there?
Su: We have a hardware scheduling function. The key in multicore is to eliminate the contention between the cores to maximize the performance output. There are things we can do in hardware and software to schedule who gets access to resources.
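One common software-side technique for the contention she describes (a generic illustration, not Freescale’s hardware scheduler) is to pin each flow to a fixed core, so all packets of a flow land in one per-core queue and cores never fight over a shared lock:

```python
def core_for_flow(flow_id: int, n_cores: int = 8) -> int:
    """Pin each flow to one core; the same flow always maps to the same core."""
    return flow_id % n_cores

def enqueue(queues, flow_id, packet):
    """Append the packet to the owning core's private queue (no shared lock needed)."""
    queues[core_for_flow(flow_id, len(queues))].append(packet)
```

Because ownership is fixed per flow, each core drains its own queue without synchronizing with the others, which is the same contention-elimination goal the hardware scheduling function serves.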

SLD: Does this also affect the various states of applications and cores?
Su: Yes, there’s a lot of power management. Our world is 8 cores in less than 30 watts. It’s a tight power window. That’s our challenge—getting all that processing power in that power envelope.
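Rough arithmetic on the envelope she cites: 8 cores in under 30 watts leaves well under 4 watts per core before accounting for anything else on the die (the uncore figure below is an illustrative assumption, not from the interview):

```python
def per_core_budget(chip_watts: float, n_cores: int, uncore_watts: float) -> float:
    """Power left for each core after reserving a share for uncore logic
    (interconnect, memory controllers, accelerators)."""
    return (chip_watts - uncore_watts) / n_cores

# With a hypothetical 6 W reserved for interconnect and accelerators:
# per_core_budget(30.0, 8, 6.0) == 3.0  (watts per core)
```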

SLD: Does this come down into the mobile device space?
Su: Yes. That’s the other part of our business. In our multimedia business, you’re seeing multicore devices. They’re going from single core to dual core. When you talk about mobile form factors it’s a slower progression, but it’s the same concept—how do you do more with less or within the same power envelope. It’s power and price point. There’s a consumer space, the low end of the enterprise, the midrange and then there’s the high end of the enterprise. At the low end it’s going from 1 to 2 cores, in the midrange it’s from 2 to 4 cores, and at the high end it’s 4 to 8 cores.

SLD: What process node are you at?
Su: 45nm.

SLD: Are you pushing to 32nm and beyond?
Su: Yes, the next node for us will be 32nm.

SLD: The foundries are beginning to talk about restrictive design rules at future nodes. How will that affect Freescale?
Su: That’s more of a style of design. I consider restrictive design rules like DFM on steroids. But it does require more design resources to migrate from one generation to the next.

SLD: Last time we talked, you mentioned that DFM tools weren’t all there. What’s missing?
Su: The challenge is still interoperability among the tools. All the DFM tools need to be calibrated to a given foundry or fab, and that calibration needs to come back to us. Ensuring that flow is seamless requires work. But I do have to say that since I made that statement, all the EDA vendors have been calling to say they’ve solved the problem. I’ve been very popular with the EDA vendors. But the net of it is that DFM is not optional, and when we use multiple foundries, the tools have to work well across the entire tool suite. It’s not something we can do as a post-processing step anymore.

SLD: Is it a matter of chip developers choosing best in class from a variety of vendors and not having interoperability?
Su: Yes, that’s correct. We work with a few primary EDA vendors rather than a single one.

SLD: So having standards is vital?
Su: Yes.

SLD: How often do you increase cores? Is it the same node or the next node?
Su: It’s next node, because we want to keep a fixed power envelope. It’s not that you can’t add more cores, but you need to stay within the power envelope the application requires.

SLD: How about vertical stacking?
Su: It’s an option that can be used as the technology becomes more mature.

SLD: How far down does your road map extend?
Su: In the product business, we’re looking at 32nm and 22nm. The evolution of the technology has a lot to do with how fast we use it.

SLD: Is there a possibility that after that you don’t go further?
Su: I won’t say that yet. People have said that at every node, and it hasn’t come true. But the product adoption rate may be different. Fundamentally, there is still improvement in the technology. As long as there is improvement in density and performance, we will continue to look at new nodes.
