How Heterogeneous ICs Are Reshaping Design Teams

Increasingly complex systems are creating very different relationships between engineering specialties.


Experts at the Table: Semiconductor Engineering sat down to discuss the complex interactions developing between different engineering groups as designs become more heterogeneous, with Jean-Marie Brunet, senior director for the Emulation Division at Siemens EDA; Frank Schirrmeister, senior group director for solution marketing at Cadence; Maurizio Griva, R&D Manager at Reply; and Laurent Maillet-Contoz, system and architect specialist at STMicroelectronics. This discussion was held at the recent Design, Automation and Test in Europe (DATE) conference. To view part one of this discussion, click here. Part two is here.

SE: Does verification have to happen almost in real-time to verify components are working properly together? And is this now a continuous process?

Brunet: There is a lot of post-silicon treatment that is done, so it is continuously being verified. There is lifecycle management being inserted here, which traditionally was part of the test environment. But we also are verifying for a longer time. Some technologies have a very long cycle, and that needs to be taken into consideration before tape-out. It’s very well known that around tape-out time at semiconductor companies, those who have access to the CPUs to do verification are the physical verification and functional verification engineers. You run all the vectors that have changed, and you run all the test vectors before tape-out. But most people don’t have the time to run everything, which is why they do basic simulation acceleration. These are very important techniques because they enable certain things to run sooner. We see a lot of pre-silicon verification being inserted for lifecycle management. It adds more cycles and more time to verification, but the way those things are architected makes them a very good target for simulation acceleration using an emulator or prototyping hardware. Not everything accelerates well, though, especially if there is a complex structure. So those will probably go through traditional CPU multi-clusters. We also see more things designed for security that don’t accelerate very well. But overall, we are moving toward an era where the software workloads that are run before tape-out are a very good representation of what is happening. So it’s not only a functional testbench. It’s also a real-life software workload, and that requires a lot of cycles and speed, because you are booting an operating system.

Maillet-Contoz: The difference I see, compared to the last decade, is that we need to make sure that the device will deliver the right level of functionality with the right level of performance. So far, we discussed functional aspects, and security is seen as part of the functionality. Everything that relates to functional properties is very important. That includes topics such as energy. But it’s more than just considering the compliance of a system or components with the requirements. It’s also important to consider how this component will behave in a system context in the field. What needs to be addressed as a challenge in the coming period is how we can leverage all the various technologies that have been deployed for a while at the component or SoC level, and understand how they contribute to improving the validation of the system functionality, considering that the system itself is a system of systems. It comprises various SoCs and components and so forth. This could create a differentiation advantage for the company that provides this level of trust for the end customer.

SE: How does all of this affect design teams? What kinds of changes can we expect to see?

Griva: Designers are still developing designs, and that won’t change. But we need to consider the specific requirements, the specification, the design, the validation, and the software, with models of all of these together, and make sure that we have a very consistent design flow to cover all the steps, and integrate security, validation, and cross-functional properties at the same time. Designers now are facing more heterogeneous integration. We need to take all of these parameters into account when putting together a design.

SE: There also are issues like mechanical stress in the package, which design teams didn’t have to deal with in the past, right?

Schirrmeister: Yes, and you collect more business cards and LinkedIn connections today than in the past. We used to joke that when you brought together hardware and software people in a company, they would sometimes introduce themselves and trade business cards. There’s a whole new dimension to this with people on the mechanical side. There are experts in EMI, too. We were always talking about a system architect, who knows a little bit about everything but not the full details of anything. That person is becoming the moderator between the hardware, software, and other disciplines. There’s more of that happening because the tools are now at a higher level of complexity to put this all together. Some people are predicting the end of the SoC and talking about this disaggregation, which is true in cases where you really design to a theoretical limit. Chips are becoming so big that you’re now assembling everything in a 3D-IC fashion, with multiple chiplets. So now you have the fastest version of the chiplet processor together with the fastest versions of the memories, but you still have to deal with things like power being controlled by the embedded software. So you get a new design without having redesigned any of the chips. You still have to make sure that in the package it switches on correctly with the firmware, because otherwise it will just melt if the power control isn’t there. This adds new communication channels, because the firmware expert has to deal directly with the assembly experts, and that adds challenges about how you actually categorize these functions. That’s a PLM challenge. Do I have all the different configurations of chiplets coming from certain runs in the fab? Do I know all of those? There are new challenges for architecture development tools and for virtual development, because you have a new design without actually having changed the chips. You’re doing the assembly in a disaggregated way. There are lots of fun challenges coming up there.

Brunet: I agree. We’re seeing more cross-domain, cross-discipline collaboration. A couple of years ago some companies had issues with the power configuration of their designs, where the difference was not 20%. It was more like 3X to 4X more dynamic power consumption than what they were predicting. And so we’ve seen the power engineers, who don’t talk to the design team much, starting to share business cards, even with EDA vendors, and ask, ‘How do we work across disciplines?’ Of course, EDA had to develop technology and APIs to enable this across disciplines, but it is happening. There is more cross-discipline interaction between power, thermal, and test, and that’s a good thing.

Griva: Disaggregation is definitely perceptible. It brings in more flexibility, but it requires higher skills because you have multiple components you have to interconnect. The routing is more complex, and you have to bring in a radio frequency, or multiple radio frequencies, and sometimes a printed antenna. The skill needed to develop a simple system with Wi-Fi, for example, is not simple anymore. The PCB designer was typically someone you gave the schematic to, they worked overnight, and brought you something you didn’t even have to look at. It went to the contract manufacturer, and then it was done. Now, it requires a lot of back and forth and many cycles to verify whether it’s correct. So the design teams are changing. From a mechanical point of view, we always had mechanical engineers in the loop at the earliest stage, but now it’s not uncommon even for simple devices. Those devices can require multiple changes in the system design due to thermal issues or the placement of something metal over the antennas, which blocks the propagation of waves. In the past, there were more differentiated teams working on a design at different times and with little interaction.

Maillet-Contoz: So who is now responsible for ensuring low power in devices? Is it the hardware team, or is it the software team? We have lots of microcontroller families that are low-power, but who will benefit from the low-power capabilities of the architecture? And who will decide when to use a certain power state or policy in terms of power? In the end, it is the software engineer who will program the clock controller and the power controller to take full advantage of the hardware architecture. So you might design a very nice hardware architecture with lots of capabilities, low-power policies, and so on. But it’s the software engineer, who you probably do not know, somewhere in the world, who will implement a software stack that takes advantage of the hardware architecture. And that’s probably not related to the packaging. So it’s very difficult to anticipate the use models of systems.
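To make that split concrete, here is a minimal firmware sketch, assuming a hypothetical microcontroller whose clock and power controllers are exposed as memory-mapped registers. The addresses, register names, and bit fields below are invented for illustration, not taken from any ST part:

```c
#include <stdint.h>

/* Hypothetical memory-mapped clock and power controller registers.
 * The addresses, names, and bit layouts are invented for illustration;
 * a real part's reference manual defines the actual map. */
#define CLK_CTRL  (*(volatile uint32_t *)0x40021000u)  /* peripheral clock gates */
#define PWR_CTRL  (*(volatile uint32_t *)0x40007000u)  /* power mode request     */

#define CLK_UART1_EN   (1u << 4)    /* clock enable bits for peripherals   */
#define CLK_SPI2_EN    (1u << 7)    /* that sit idle during this period    */
#define PWR_MODE_MASK  (0x3u)
#define PWR_MODE_STOP  (0x2u)       /* deep-sleep / stop mode request      */

/* The hardware provides the low-power states; it is firmware like this
 * that decides when (and whether) they are actually used. */
static void enter_idle_period(void)
{
    /* Gate the clocks of peripherals that are not needed while idle. */
    CLK_CTRL &= ~(CLK_UART1_EN | CLK_SPI2_EN);

    /* Request the deepest power state the application can tolerate. */
    PWR_CTRL = (PWR_CTRL & ~PWR_MODE_MASK) | PWR_MODE_STOP;

    /* Wait for an interrupt to wake up (Arm cores; other ISAs differ). */
    __asm volatile ("wfi");
}
```

If no software ever calls something like this, the low-power features built into the silicon simply go unused, which is exactly the disconnect being described.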

SE: At the same time, the business relationships have flattened out. Design teams are working much closer to the system companies that will use these devices. That means potentially more liability and more finger pointing. What impact is this having?

Schirrmeister: The tools provide the mechanism for these people to interact, like the software designer pointing to the hardware designer when a register doesn’t respond correctly. And then, after hours of debug in a joint environment, they realize that neither of them was wrong because someone else had put a block into sleep mode. So nobody really was at fault. It’s just that the overall system didn’t work together correctly. Those are very interesting challenges for interdisciplinary interaction. And for us as vendors, being able to enable this interaction becomes very important and helps with getting these teams to connect to each other.

Brunet: The ‘war room,’ as it’s called, is the only place where software and hardware engineers can meet and see the overall interaction when software is added. There’s no hiding when you look at how it’s behaving. We can run a lot today on these systems — much more than 20 years ago. So that liability question may not start with an emulator or prototyping, but that’s where those two teams are meeting today.

Schirrmeister: There are a couple of additional interesting aspects. You verify pre-silicon, and then you have tests in the lab right after that, followed by production tests. Well, at least the lab test and the verification have very similar genes in terms of what they are looking for. The test engineers might ask the verification engineers why they didn’t verify something beforehand, and so there’s new synergy there. But then it goes back to the properties of the underlying engines. Even between emulation and prototyping — and then the actual chip prototype when you get the silicon — there are certain things that are not modeled that really impact the design differently. I’ve had cases where the customer said, ‘My new prototyping system doesn’t work because it doesn’t bring up software the same way the emulator did.’ What actually happened was the prototype modeled memories differently. The software in the emulation environment was reading from a memory that was in a different state than in prototyping, and because of the difference between the two engines, they figured out there was a fundamental flaw in the software: it had read from an address that hadn’t been written to. But in the end, what looked like a problem turned out to be a success story, because the engines tested different things. And that extends further across companies. There is lots of room for more interaction and for enabling the teams to work more closely together.
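As a sketch of the class of bug described here (the address, names, and the “ready” value are invented for illustration), consider firmware that polls a RAM location before anything has written to it. On an engine that models memory as zero-initialized the code appears to behave, while on an engine or board where memory powers up with arbitrary contents it can take the wrong path:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical handshake flag in shared RAM; the address and the magic
 * "ready" value are invented for illustration. */
#define BOOT_FLAG  (*(volatile uint32_t *)0x20008000u)
#define BOOT_READY 0xC0FFEEu

bool wait_for_coprocessor_ready(void)
{
    /* BUG: this location is read before anyone has written it.
     * On an emulator that zero-initializes RAM, the loop spins until the
     * coprocessor really writes BOOT_READY.  On a prototype or real
     * silicon where RAM powers up with random contents, the flag can
     * already look "ready" and boot races ahead. */
    for (uint32_t i = 0; i < 1000000u; ++i) {
        if (BOOT_FLAG == BOOT_READY) {
            return true;
        }
    }
    return false;  /* timed out waiting for the handshake */
}
```

The fix is for the boot code to explicitly initialize the flag before the coprocessor is released, rather than relying on whatever state the memory model happens to start in; running on engines with different memory models is what exposed the flaw.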



