Heterogeneous Computing Raises The Bar For Functional Verification

Programmable SoCs are shaping up to be an important part of the semiconductor landscape, if they can overcome the verification challenges.


If there's one thing certain in chip development, it's that every innovation in architecture or semiconductor technology puts more pressure on the functional verification process. The increase in gate count at each new technology node stresses tool capacity. Every step up in complexity makes it harder to find deep, corner-case bugs. The dramatic growth in SoC designs brings software into play for full-chip verification. True to form, the emerging generation of heterogeneous computing platforms raises the bar for verification yet again, presenting challenges that both device developers and chip users must face.

The term "heterogeneous" may not at first sound too daunting, since SoCs have for some time included multiple forms of processors. Traditional CPUs have blossomed into multiprocessor subsystems, while GPUs and other specialized compute engines have also resided on the same chip. But heterogeneous computing takes this architecture two steps further, adding both FPGA-style programmable logic and software-programmable engines. These features greatly increase the flexibility available to users to implement their desired functionality in hardware, software, or a combination of both.

The upcoming Xilinx adaptive compute acceleration platform (ACAP), known as "Project Everest," is the leading example of the new class of heterogeneous, programmable SoCs. According to publicly released information, the platform is being developed on the TSMC 7nm process and will tape out before the end of this year. Xilinx describes Everest as "ideally suited" for big data and artificial intelligence (AI) applications, including "video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration."

Heterogeneous computing platforms are expected to be a popular and important segment of the semiconductor industry because of their power and their adaptability for a wide range of applications. Such applications as machine learning, deep learning, and 5G wireless are pushing the limits of older technologies and will benefit from the new architecture and aggressive 7nm process node. Other relevant application domains will likely include military/aerospace, autonomous vehicles, cloud computing and IoT.

With great power comes great complexity, and this leads to some serious verification challenges.

One fundamental issue is that any programmable chip (or multi-chip module) must be verified twice: first by the platform developer/vendor and then by the user. Standard chips and traditional ASICs are verified once and then fabricated, but an FPGA must be re-verified by the user in the context of the intended application. A heterogeneous computing platform has all the hardware verification complexity of an FPGA plus, thanks to its processing subsystem, all the hardware-software complexity of embedded computing. As mentioned earlier, this subsystem contains heterogeneous processors, so there are multiple types of software from multiple sources that must be verified within the hardware platform.

Most modern SoCs contain complex subsystems built with highly configurable IP blocks, adding yet another dimension to the verification problem. The number of possible combinations of options across IP blocks mushrooms quickly, making simulation of every operating mode permutation impractical. Simulation is also lacking when it comes to verifying top-level connectivity of the design, which may also be configurable. Further, many device I/O pads are multiplexed to allow user control of which protocols run on which pins. This adds even more combinations to be considered when verifying connectivity.
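
As a back-of-the-envelope illustration, a short Python sketch shows how quickly the configuration space multiplies across just a few configurable blocks. The block names and option counts below are invented for this example, not taken from any real device:

from math import prod

# Hypothetical option counts for a handful of configurable IP blocks.
ip_options = {
    "ddr_controller": 12,   # speed grades, widths, ECC modes
    "pcie_subsystem": 8,    # lane counts, generations
    "interconnect": 16,     # topology, QoS, address-map settings
    "pin_mux": 24,          # protocol-to-pad assignments
}

# Options combine multiplicatively, so even four blocks yield
# tens of thousands of distinct operating modes to consider.
total_modes = prod(ip_options.values())
print(f"Distinct configurations: {total_modes}")  # 36864

Simulating each mode even once is clearly out of reach at realistic scale.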

Heterogeneous computing platforms have the unique challenge of verifying software-programmable engines that can be reprogrammed "on the fly" while the chip is running. Some refer to this level of programming as a type of firmware, so it's fair to say that verification must encompass hardware, embedded software, and firmware to be complete. Each reprogramming of the engine may define an entirely new instruction set architecture (ISA) that must be verified, one of the harder problems in developing new processors. Cache verification is another classic challenge, and the cache hierarchy is also likely to be highly configurable.
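
One common way to attack ISA verification is lockstep comparison against a golden architectural model. The Python sketch below is purely illustrative, with an invented two-instruction ISA and hypothetical names; a real flow would drive the comparison from a formal tool or a trace captured from the engine:

def golden_step(state, instr):
    # Architectural reference model: execute one instruction of the
    # (re)defined ISA on an abstract register file.
    op, dst, src_a, src_b = instr
    if op == "ADD":
        state[dst] = (state[src_a] + state[src_b]) & 0xFFFFFFFF
    elif op == "XOR":
        state[dst] = state[src_a] ^ state[src_b]
    return state

def check_trace(program, dut_states):
    # Compare the engine-visible state after each retired instruction
    # against the reference model's prediction.
    ref = {f"r{i}": 0 for i in range(8)}
    for instr, dut in zip(program, dut_states):
        ref = golden_step(ref, instr)
        assert dut == ref, f"Mismatch after {instr}"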

Beyond functional correctness, many of the applications for heterogeneous computing platforms have strict safety and security requirements. Hardware and software safety mechanisms must be verified to work properly in the event of random faults such as alpha particle hits. Further, security logic to prevent information leakage or device hijacking must also be verified. Widely adopted safety standards such as ISO 26262, IEC 61508 and DO-254 set a high bar for compliance using precise metrics for responses to faults. The industry is working hard to define and standardize corresponding measurements for security.
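
To make the metrics point concrete, here is a minimal Python sketch of the diagnostic-coverage bookkeeping that fault-injection campaigns feed into under standards such as ISO 26262. The data and field names are invented for illustration:

def diagnostic_coverage(fault_results):
    # One record per injected fault, flagging whether the fault was
    # dangerous and whether a safety mechanism detected it.
    dangerous = [f for f in fault_results if f["dangerous"]]
    detected = [f for f in dangerous if f["detected"]]
    return 100.0 * len(detected) / len(dangerous) if dangerous else 100.0

campaign = [
    {"dangerous": True,  "detected": True},
    {"dangerous": True,  "detected": False},  # an undetected dangerous fault
    {"dangerous": False, "detected": False},  # a safe fault
]
print(f"Diagnostic coverage: {diagnostic_coverage(campaign):.1f}%")  # 50.0%

Standards set thresholds on exactly this kind of number, which is why fault responses must be verified systematically rather than anecdotally.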

Clearly, traditional simulation testbenches and even a large set of test cases won’t suffice for heterogeneous computing platforms. These huge devices demand the power of formal verification. Because of its exhaustive nature, formal can handle configurability, reprogramming and complex connectivity. Specialized formal applications (apps) can verify many aspects of the design, including safety and security. Formal equivalence checking can ensure that what is fabricated matches what was designed by the platform vendors and verify that user designs have been properly mapped to the programmable logic.
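
For a flavor of what connectivity checking involves, the toy Python sketch below compares a specification table against the implemented pad-mux routing. A formal connectivity app does this kind of check exhaustively at the RTL level; all pad and signal names here are invented:

# Specification: (pad, mux select) -> expected internal signal.
connectivity_spec = {
    ("PAD_17", 0): "uart0_tx",
    ("PAD_17", 1): "spi1_mosi",
}

# What the design actually routes (with a deliberate bug for illustration).
implemented = {
    ("PAD_17", 0): "uart0_tx",
    ("PAD_17", 1): "i2c0_sda",
}

for key, expected in connectivity_spec.items():
    actual = implemented.get(key)
    if actual != expected:
        print(f"Connectivity violation at {key}: got {actual}, expected {expected}")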

The chip world is changing again, and formal EDA tools are evolving to meet the new challenges.


