Gaps Emerging In System Integration

Integration used to be a functionality problem, but global issues such as power are changing that.

The system integration challenge is evolving, but existing tools and methods are not keeping up with the task. New tools and flows are needed to handle global concepts, such as power and thermal, that cannot be dealt with at the block level. As we potentially move into a new era where IP gets delivered as physical pieces of silicon, this lack of an accepted flow will become a stumbling block.

The task of integrating blocks into chips or systems has advanced little since the industry adopted the design methodology based on IP reuse. New languages and tools have focused on block creation and verification. While some tools have been developed for system integration, few have gained popularity.

In addition, the problem is changing. “The separation of efficient IP creation, and then efficient integration of IP into the system, doesn’t meet in the middle,” says Frank Schirrmeister, senior group director, solutions marketing at Cadence. “We still have no golden model above RTL from which everything automatically derives. And now we are trying to level this up, integrating known good stuff into packaged items like the whole 3D-IC chiplet discussion. That has become a much more viable option, but it still has this separation of IP creation and IP integration.”

Meeting in the middle means that some aspects of IP integration must operate top-down while others must operate bottom-up, and the two must play properly together. “The block-level guys only look at the block level, but that is no longer viable because of margins,” says Sooyong Kim, senior product manager in 3D-IC Chip Package Systems and Multiphysics at Ansys. “If you carve out the margins for individual blocks, they won’t be successful. Power and thermal are global effects. And then structural analysis and the mechanical impact need to be considered when things are clustered together very compactly.”

These aspects of the system need a top-down approach. “You need top-down analysis to provide the margins for each block,” adds Kim. “That should be based on realistic predictions about what’s happening for each block. Consider timing, where for interfaces you bring the timing constraints down to the block level. That’s how top-down integration works for timing. A similar concept applies to power and other multiphysics areas, as well.”
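To make the budgeting idea concrete, the following is a minimal sketch in Python of apportioning a chip-level power budget to blocks in proportion to early activity estimates. The block names, budget, and margin figures are invented for illustration and do not come from any particular flow.

```python
# Illustrative sketch only: apportion a chip-level power budget to blocks
# in proportion to early (e.g., RTL-level) activity estimates.
# Block names and numbers are invented for the example.

CHIP_POWER_BUDGET_W = 5.0          # total budget handed down from the system level
GLOBAL_MARGIN = 0.10               # reserve 10% at the top for integration effects

# Early per-block power estimates in watts (from RTL power estimation, say)
block_estimates = {
    "cpu_cluster": 2.1,
    "gpu":         1.6,
    "ddr_phy":     0.6,
    "noc":         0.3,
}

allocatable = CHIP_POWER_BUDGET_W * (1.0 - GLOBAL_MARGIN)
total_estimate = sum(block_estimates.values())

# Each block's budget scales with its share of the estimated total,
# so margins reflect a realistic prediction rather than a flat carve-out.
block_budgets = {
    name: allocatable * est / total_estimate
    for name, est in block_estimates.items()
}

for name, budget in block_budgets.items():
    print(f"{name:12s} budget = {budget:.2f} W (estimate {block_estimates[name]:.2f} W)")
```

The same shape of calculation applies to thermal or other global budgets; the point is that each block’s margin derives from a top-down prediction rather than a uniform carve-out.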

But some aspects of bottom-up are also creeping in. “We are seeing increasing customer demand for what is in effect signoff mode physical verification much earlier in the design flow,” continues Kim. “Customers are asking how soon they can do that, and some want it as early as the planning stage. They want to do sign-off quality analysis with limited information.”

This shift requires a new way of thinking about things. “What people have been asking for is the ability to combine all knowledge about the design in one place and get useful answers,” says David Ratchkov, founder and CEO of Thrace Systems. “This includes use cases, technology specifics, and even historical learnings. When thinking about power analysis, the tools of yesterday cannot take it all in. There really hasn’t been a solution beyond a custom-built Excel spreadsheet, which integrates all blocks, including analog and mixed signal, and integrates all organizational knowledge to produce one power number.”
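As a rough illustration of what that spreadsheet replacement has to do, the sketch below rolls per-block power numbers of mixed provenance up into a single figure per use case. All block names, use cases, and values are invented.

```python
# Hypothetical sketch of a spreadsheet-style power roll-up: combine
# per-block power numbers of mixed provenance (simulation, datasheet,
# historical silicon measurements) into one chip-level number per use case.
# Every block name, use case, and value here is invented.

block_power_mw = {
    # block:          {use_case: power in mW}
    "cpu":            {"camera": 850.0, "standby": 12.0},
    "isp":            {"camera": 400.0, "standby": 0.5},
    "analog_pll":     {"camera": 30.0,  "standby": 30.0},  # from a datasheet
    "sram_retention": {"camera": 5.0,   "standby": 5.0},   # from prior silicon
}

def chip_power(use_case: str) -> float:
    """Single roll-up number for one use case, in mW."""
    return sum(per_case.get(use_case, 0.0) for per_case in block_power_mw.values())

for case in ("camera", "standby"):
    print(f"{case:8s}: {chip_power(case):8.1f} mW")
```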

Collecting and analyzing all this data calls for collaborative tools. “The primary difference between designing individual blocks and designing a big chip is that blocks tend to be designed by individual engineers or very small groups, whereas a team of people takes responsibility for the whole chip,” says Tommy Mullane, senior systems architect at Adesto Technologies. “The tools need to change to ones that deal with communication and the coordination of the large amount of information that has to be compiled and maintained about the design.”

The domain is broadening, too. “The tools, flows, and methodologies aren’t ready for system integration,” says Andy Heinig, group manager for Advanced System Packaging at Fraunhofer IIS’s Engineering of Adaptive Systems Division. “This is especially true on the borders between different subdomains, like board, package, and chip. When you look at these areas, there is a gap in both flows and tools. It often involves different tools with different input/output formats.”

Even in the areas that have received attention, not a lot of progress has been made. “When integrating IP into sub-systems, sub-systems into chips, chips into a package and software onto hardware, we assume the IP verification that preceded this integration means we can focus specifically on what needs to be verified at this point,” says Daniel Schostak, architect and fellow, central engineering group at Arm. “However, even with the abstractions this allows, the design size and state space still grow significantly with each level.”

This has led to stagnation. “System integration testing is pretty much a manual process, even after decades of progress in design and verification,” says Bipul Talukdar, director of applications engineering for Smart-DV in North America. “A lot of effort has been put into automating design development and verification to achieve success in creating modular IP. However, integrating the IP into the next hierarchical system — that could be subsystems, chips, chipsets, or a full-blown system with software and hardware — still requires a substantial amount of manual work.”

What happened?
We need to look back in history to see why we got into this situation. “Going back to the late Gary Smith in ’97, there was a push to move up to a new level of abstraction and to drive everything down from there,” recalls Cadence’s Schirrmeister. “Twenty-five years later, we are still not there. We still do not have any languages, tools, or methodologies that are tailored to this task. Then the ITRS, which talked about the innovation cycle and productivity improvements, noted the need for the long skinny engineer. This was essentially the combination of an engineer and IP, which resulted in the emergence of the IP market in the early 2000s. It really saved the day, because abstraction clearly wasn’t ready.”

For quite a few years, that meant system integration was not an especially difficult task. Tools were developed for it, but they were not particularly successful. Mentor had some success with Platform Express, and Duolog developed Socrates, which was acquired by Arm in 2014. Perhaps the biggest thing to come out of those efforts was the IP-XACT standard, which describes blocks in an XML format and enables automated configuration and integration through tools.

IP-XACT has been useful. “Using machine-readable specifications, such as IP-XACT, does help to automate basic tasks, such as checking that all registers can be accessed with appropriate constraints,” says Arm’s Schostak. “For this to be successful, the information needs to be captured consistently, which can be a challenge when integrating IPs drawn from multiple sources (potentially both external and internal). The scope for automation could be further increased by including additional information in the machine-readable specifications, such as details about clocks and reset trees, power domains, etc.”
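As a hint of what that automation looks like, the sketch below pulls register names and offsets out of a small, invented IP-XACT-style fragment (loosely following the 1685-2014 schema) so an integration check could confirm each register is reachable. Real component descriptions carry far more detail than this.

```python
# Sketch: extract register names and offsets from an IP-XACT component
# description so an integration check can confirm each one is reachable.
# The XML fragment is a minimal, invented example; real files are richer.
import xml.etree.ElementTree as ET

ipxact = """<ipxact:component xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:memoryMaps><ipxact:memoryMap><ipxact:addressBlock>
    <ipxact:register>
      <ipxact:name>CTRL</ipxact:name>
      <ipxact:addressOffset>0x0</ipxact:addressOffset>
    </ipxact:register>
    <ipxact:register>
      <ipxact:name>STATUS</ipxact:name>
      <ipxact:addressOffset>0x4</ipxact:addressOffset>
    </ipxact:register>
  </ipxact:addressBlock></ipxact:memoryMap></ipxact:memoryMaps>
</ipxact:component>"""

root = ET.fromstring(ipxact)
# '{*}' wildcards the namespace so the same code tolerates schema revisions.
for reg in root.iter("{*}register"):
    name = reg.find("{*}name").text
    offset = reg.find("{*}addressOffset").text
    print(f"register {name} at offset {offset}")
```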

Those avenues continue to expand. “System integration tasks need automation and fast turnaround more than dedicated languages,” says Sergio Marchese, technical marketing manager for OneSpin Solutions. “A key aspect is IC integrity verification. SoC developers need to make sure that the chip functions correctly, for example that it is free from deadlocks and that IPs are wired up correctly. They also need to ensure that security and safety requirements are met. Accellera is working on standards that will enable leveraging IP-level safety and security information to make integration tasks more automated. Nowadays, formal technology can address many SoC-level verification tasks, and a lot of companies are taking advantage of that to reduce chip development cost.”
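Connectivity checking of this kind is normally done with formal assertions or dedicated verification apps, but the underlying idea can be sketched in a few lines of Python: compare the connections extracted from the assembled design against the integration intent. The instance and port names below are invented.

```python
# Illustrative sketch of the connectivity-checking idea: compare the
# connections extracted from the assembled design against the intended
# ones from the integration spec. In practice this is done with formal
# assertions or dedicated connectivity apps; names here are invented.

intended = {
    ("cpu.irq_in",  "intc.irq_out[0]"),
    ("dma.axi_m",   "noc.s1"),
    ("uart.apb_s",  "noc.apb_m0"),
}

extracted = {
    ("cpu.irq_in",  "intc.irq_out[0]"),
    ("dma.axi_m",   "noc.s2"),         # wired to the wrong NoC port
    ("uart.apb_s",  "noc.apb_m0"),
}

missing    = intended - extracted      # specified but not present
unexpected = extracted - intended      # present but never specified

for conn in sorted(missing):
    print("MISSING   :", " -> ".join(conn))
for conn in sorted(unexpected):
    print("UNEXPECTED:", " -> ".join(conn))
```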

“The reason why the original tools were not successful is partly a business reason,” adds Schirrmeister. “The tools were used by a limited number of people, because you need to understand the full integrated system, and now think about elevating that to a hardware/software system. What they did with these tools was done in 10 minutes or less, so you don’t need that many licenses. It’s very efficient, it’s very valuable, but from a pure number-of-users perspective, it’s very hard to monetize.”

Another problem was the diversity of environments. “Different setups are manually created to complete integration testing, such as FPGA prototyping and hardware/software co-verification, where software can be tested before hardware is fully assembled into a target hardware system,” says Smart-DV’s Talukdar. “Another example is post-silicon validation integration testing, where fabricated silicon parts are assembled on printed circuit boards to observe target system application execution.”

New requirements
Most of these comments address what was considered to be the original need for system integration, but times are changing. “We need a platform concept, where the various on-chip guys can communicate,” says Ansys’ Kim. “We call it a chip-package-system model. We are helping the customer generate this die model, or chiplet model or block model, so they can convey adequate information all the way through the process. For example, they may perform power estimation at the RT level, and then they want to be able to use those estimates to drive stress analysis. They want to be able to extract the correct vectors to feed into the appropriate analysis.”
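The sketch below suggests the kind of hand-off model Kim describes: a per-die container carrying geometry and a spatial power map per use case, which downstream thermal or stress analysis can consume. The structure and field names are invented for illustration and do not represent any vendor’s format.

```python
# Sketch of a chip-package-system hand-off model: a per-die container
# that carries forward what downstream thermal/stress analysis needs
# (geometry plus a spatial power map per use case). Field names and
# structure are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ChipletModel:
    name: str
    width_mm: float
    height_mm: float
    # power_maps[use_case] is a coarse grid of tile powers in watts,
    # e.g. produced from RTL-level power estimation with realistic vectors
    power_maps: dict[str, list[list[float]]] = field(default_factory=dict)

    def total_power(self, use_case: str) -> float:
        return sum(sum(row) for row in self.power_maps[use_case])

cpu_die = ChipletModel("cpu_die", width_mm=8.0, height_mm=6.0)
cpu_die.power_maps["peak"] = [
    [0.9, 1.1, 0.8],
    [0.4, 1.3, 0.5],
]
print(f"{cpu_die.name}: {cpu_die.total_power('peak'):.1f} W at peak")
```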

Add to that a patchwork of other tools that are required. “We have a range of different tools to support this,” says Adesto’s Mullane. “We use requirements tracking systems like Doorstop, ticketing systems like Jira, and a multitude of Word documents. But the most critical tools that an architect uses are all communication tools. On a big chip, it is only by continuous communication with all the engineers that you can ensure that everyone’s actions are building toward a harmonious completion, and that any flaws or inconsistencies are dealt with.”

That may take the development of new models. “Adding power to Liberty worked great back then, but it breaks down for anything more complex than an SRAM,” says Thrace Systems’ Ratchkov. “IEEE 2416 is well on its way to filling the gap. The current methodology of throwing a file over the fence to the next person no longer works because it doesn’t scale. But a good tool would allow for new and unthinkable features.”

Still, not everyone is convinced. “The problem behind the formats runs much deeper,” says Fraunhofer’s Heinig. “In different areas, different abstraction levels are used. Each abstraction level has different content and that results in different formats. But it isn’t that easy to transfer content between the levels because the lower levels need more detailed data. Often, flexible formats that can be extended into the lower levels are necessary.”

The industry has not been very good at accomplishing this type of unification. “In addition to integration of the design components, and potentially any associated verification components, the various technologies used to verify this integration mean that there is also a need to integrate results from different technologies to be able to make an informed sign-off decision,” says Schostak. “This can be challenging because the coverage provided by results from simulation, emulation, FPGA or formal-based verification, performance analysis and real-world software payloads is often represented by quite different frameworks, which may also be vendor-specific.”
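A sketch of that result-integration step: normalize each technology’s coverage vocabulary onto a common scale and roll it up per verification goal before making a sign-off call. The record formats, goal names, and status vocabulary below are invented; real flows deal with vendor-specific databases.

```python
# Sketch: normalize coverage reported by different technologies
# (simulation, formal, emulation) into one view per verification goal.
# Record formats, goal names, and statuses are invented for illustration.

reports = [
    {"source": "simulation", "goal": "pcie_link_up",    "status": "covered"},
    {"source": "formal",     "goal": "no_noc_deadlock", "status": "proven"},
    {"source": "emulation",  "goal": "boot_linux",      "status": "passed"},
    {"source": "simulation", "goal": "ddr_retrain",     "status": "uncovered"},
]

# Map each technology's vocabulary onto one common scale.
NORMALIZE = {"covered": "met", "proven": "met", "passed": "met",
             "uncovered": "open", "inconclusive": "open", "failed": "open"}

rollup: dict[str, str] = {}
for rec in reports:
    status = NORMALIZE.get(rec["status"], "open")
    # A goal only counts as met if no source leaves it open.
    prev = rollup.get(rec["goal"], "met")
    rollup[rec["goal"]] = "open" if "open" in (prev, status) else "met"

for goal, status in rollup.items():
    print(f"{goal:16s} {status}")
```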

Moving forward
Where we are today could be blamed on the long skinny engineer, because it appears that we now need the opposite – engineers who can span a large breadth of the problem space. “A lot of designers are only focused on the task they are currently working on,” says Mullane. “This could be analog, digital or RF. It could be front-end design or verification. It could be test insertion or back-end layout. Whatever it is, usually they are going to have a set of requirements to meet, some interface defined, and they will have a very specialized set of tools to help them carry out their work.”

There also is a persistent question of who owns the problem. “Today we have thermal engineers, chip engineers, PI engineers, and they each have different challenges and they all are working separately,” Kim says. “But when it comes time to integrate the pieces, there should be someone to act as the mediator. It could be the person who is architecting the system. The previous conventional job description of the engineer changes. In order to solve this, the person who was interested in structural analysis or thermal analysis may have to take on additional responsibility. Or, we have to come up with this architect person who can do everything in one shot from the beginning.”


