Mixing Interface Protocols

Making sure a device can interface with a variety of protocols is becoming a major headache.


Continuous and pervasive connectivity requires devices to support multiple interface protocols, but that is creating problems at multiple levels because each protocol is based on a different set of assumptions.

This is becoming significantly harder as systems become more heterogeneous and as more functions are crammed into those devices. There are more protocols that need to be supported to enable those functions, and more assumptions that don’t mesh together so well. Some of those assumptions concern transaction ordering and semantics, for instance, which affect the overall system design. That makes it much harder to mix transactions while retaining “correct” execution.

“It can be unclear what ‘correct’ means in some cases,” said Ty Garibay, CTO at ArterisIP. “For example, the AXI protocol includes separate channels for read and write transactions, and specifies that write transactions are fundamentally not ordered with respect to reads, but are strongly ordered with respect to each other. The OCP protocol specifies a common command channel and requires reads and writes that share the same TagID to be handled in order.”
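To make that difference concrete, here is a minimal Python sketch (not vendor code; the transaction fields and class names are invented for illustration) of the two ordering disciplines Garibay describes: an AXI-style checker that keeps reads independent of writes but requires writes to complete in issue order, and an OCP-style checker that enforces in-order completion for reads and writes sharing a TagID.

```python
# Illustrative only (not vendor code); transaction fields and class names are
# invented to capture the two ordering disciplines described above.
from collections import defaultdict

class AxiStyleChecker:
    """Reads and writes travel on separate channels: a read may complete
    before or after an older write, but writes complete in issue order."""
    def __init__(self):
        self.pending_writes = []               # outstanding writes, issue order

    def issue(self, txn_id, is_write):
        if is_write:
            self.pending_writes.append(txn_id)

    def complete(self, txn_id, is_write):
        if is_write:
            assert self.pending_writes and self.pending_writes[0] == txn_id, \
                f"write {txn_id} finished before older write {self.pending_writes[0]}"
            self.pending_writes.pop(0)
        # Reads: no ordering requirement against writes in this simplified model.

class OcpStyleChecker:
    """Reads and writes sharing a TagID on the common command channel must
    complete in the order they were issued."""
    def __init__(self):
        self.pending = defaultdict(list)       # TagID -> transactions in issue order

    def issue(self, tag_id, txn_id):
        self.pending[tag_id].append(txn_id)

    def complete(self, tag_id, txn_id):
        assert self.pending[tag_id] and self.pending[tag_id][0] == txn_id, \
            f"txn {txn_id} completed out of order for TagID {tag_id}"
        self.pending[tag_id].pop(0)
```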

While AMBA AXI, AHB, and OCP have been around for a long time, the interface protocol landscape continues to evolve. Arm has issued standards such as ACE and CHI, and startups like SiFive are moving to productize TileLink.

Add to this mix the profusion of proprietary internal protocols that almost every major semiconductor vendor continues to support, and the possible interactions of functionality, assumptions and constraints can become impossible to effectively verify and validate, Garibay said. “Perhaps the biggest issue is that the architects responsible for integrating all of these protocols into a functioning system are rarely experts in all of these interfaces. Even if there are experts in each protocol working on the SoC project, each of these experts will require their own support in the form of verification IP, traffic generators, and performance models, which can become a very expensive requirement very quickly.”

For engineering teams designing Ethernet controllers, PCI Express interfaces, or a USB hub, there are usually two levels of interface protocols in use, said Mark Olen, product marketing manager at Mentor, a Siemens Business. “There is an internal bus, like an AMBA, and then there is the external interface, the peripheral interface. The design teams do extremely complicated, thorough verification and analysis. They want to look at every nook and cranny of the specification of the standard, of every mode of Ethernet, or of whatever they are verifying—every configuration (4 or 8 bits wide, single or multiple burst), any variation, and they are trying to cover all of them. That’s because they’re creating design IP. Companies like Northwest Logic, which develops PCI Express and MIPI interface controller design IP, have to build something general purpose so they can broaden their market. As such, they are very stringent when they use the verification tools and verification IP for verifying their IP designs.”
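As a rough illustration of why that IP-level verification effort is so broad, the configuration space an IP provider has to cover is the cross product of every exposed parameter. The short Python sketch below is hedged and illustrative only; the parameter names are invented.

```python
# Hedged illustration only: the parameter names are invented, but the point
# stands; every parameter an IP provider exposes multiplies the
# configuration space that has to be verified.
from itertools import product

configs = list(product(
    [4, 8],                      # data width (bits)
    ["single", "multi"],         # burst mode
    ["full-speed", "low-speed"], # operating mode
    [True, False],               # error injection enabled
))
print(len(configs), "configurations to cover")   # 16 even for this toy space
```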

Systems companies designing at the SoC-level—the customers of those design IP companies—are licensing Arm fabrics, maybe a CCI or CCN fabric, Olen said. “These companies are licensing system-level IP cache controllers, memory controllers, Cortex processors, and then they are integrating in the peripheral controllers. Now what they are looking at is a much broader range, and in some ways a more complex verification environment, because now they are bringing together the USB controllers and the Ethernet interfaces and all of the bridges and everything. When you look at each one of these verification IP components, they care much less about doing all of the detailed-level compliance checking from end to end of the specification.”

Verification issues
Considering this from the perspective of the SoC interconnect fabric, having multiple interface protocols, as is often the case, creates asymmetries that complicate the verification significantly, observed Sergio Marchese, technical marketing manager at OneSpin Solutions. “Even ultra-rigorous verification of protocol checks is easy, at least for standard interfaces like AMBA, where one can leverage assertion-based VIP optimized for formal tools. Ensuring that the right data gets to the right place at the right time is generally difficult.”

With multiple protocols at play, the number of corner-case scenarios increases exponentially. Even for a supposedly simple AXI-to-AXI bridge, where the two sides operate on different data widths and transactions are split or aggregated, there are a lot of scenarios to consider. Add to that different clock domains, and it’s likely that simulation will miss some bugs, he said.
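A hedged sketch of just the width-conversion step in such a bridge, with everything else (burst types, alignment, write strobes, response merging, clock-domain crossing) deliberately left out, hints at where the corner cases come from. The function name and arguments are illustrative only.

```python
# Illustrative only: splitting one wide beat into the narrower beats a
# downstream AXI port would see. Real bridges also handle burst types,
# unaligned addresses, write strobes, response merging and clock crossing,
# each of which multiplies the scenarios to verify.

def downsize(addr, payload, in_width_bytes, out_width_bytes):
    assert in_width_bytes % out_width_bytes == 0, "non-integer ratios need extra logic"
    beats = []
    offset = 0
    while offset < len(payload):
        beats.append((addr + offset, payload[offset:offset + out_width_bytes]))
        offset += out_width_bytes
    return beats

# One 16-byte beat on a 128-bit port becomes four 4-byte beats on a 32-bit port.
print(downsize(0x1000, bytes(range(16)), in_width_bytes=16, out_width_bytes=4))
```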

“Another crucial verification goal is to ensure that each interface always will make forward progress. This is an area where only formal verification, with appropriate methodology, can deliver 100% coverage and prove that the systems will not deadlock,” Marchese said.
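Formal tools prove this exhaustively over all reachable states. As a rough intuition only, a deadlock can be pictured as a cycle in a "waits-for" graph between agents; the toy Python sketch below (names invented, and emphatically not a substitute for formal verification) checks a single snapshot of such a graph for a cycle.

```python
# Toy illustration, not formal verification: model each blocked agent as a
# node in a "waits-for" graph and look for a cycle in one snapshot. Formal
# tools, by contrast, prove that no such cycle is reachable in any execution.

def has_deadlock(waits_for):
    """waits_for maps an agent to the set of agents it is blocked on."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}

    def dfs(node):
        color[node] = GREY
        for nxt in waits_for.get(node, ()):
            state = color.get(nxt, WHITE)
            if state == GREY:
                return True                # back edge: a cycle exists
            if state == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in waits_for)

# Example: a master waits on a bridge whose response path waits on the master.
print(has_deadlock({"master0": {"bridge"}, "bridge": {"master0"}}))   # True
```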

At the next level of chip I/O interfaces, another type of verification challenge may come from architectures that support multiple standards and are optimized by sharing circuitry at the physical level and above, he explained. “The shared logic and additional hardware functions that accommodate the differences between protocols create many more corner cases than an architecture with dedicated circuitry for each protocol would. In this case, exhaustive formal analysis can reveal bugs in scenarios that one would not think of just by looking at the high-level specifications.”

Advanced packaging issues
Navraj Nandra, senior director of product marketing for interface IP at Synopsys, agreed: “From the aspect of the communication between two chips, the chiplet concept is one that more and more companies are talking about. The approach there is that you develop some kind of low-power interface that hooks up the 65nm analog/mixed-signal chip with the 16nm SoC, and then the chiplet does the communication between the two. That’s a big area of activity now.”

There are startups working in this area — such as Kandou Bus — as well as various standards being proposed, because this can be thought of as a standard protocol between the two chips. Another option is to remove the concept of a standard and keep the connection very custom, because that gives certain advantages in terms of area and power, he noted.


Fig. 1: AMBA system environment. Source: Synopsys

Another standard being proposed as an interchip communication protocol is MIPI DigRF, a high-speed interface used to interconnect the RFIC in a device with the baseband processor. According to MIPI, it was designed to provide a convenient approach for integrating components and meeting the data-intensive needs of 4G LTE air interfaces that require high channel bandwidth. It is a low-complexity solution for complex implementations that typically require multi-mode, multi-band operation. It natively handles MIMO configurations, receive diversity and carrier aggregation. In addition to LTE, it supports HSPA+, 3.5G and 2.5G air interfaces.

Interestingly, MIPI DigRF was designed to communicate between a baseband SoC and an RF chip built at a much larger process node, and while companies including Synopsys were involved in this space with products and customers, the market never took off. First, the opportunity is quite small. There simply aren’t that many customers developing RF chips that need to talk to SoCs. Second, the customers that were including these capabilities in their designs started moving away from a standard protocol to something more custom as a way to reduce the power of that chiplet. That broke the whole standard-protocol concept, but it showed how intricate and specific the tradeoffs in this space are.

Mixed-signal issues
Issues with supporting multiple interface protocols are some of the most challenging verification tasks today, especially in the realm of analog/mixed-signal/custom designs, asserted Mladen Nizic, product management director in the Custom IC & PCB Group at Cadence.

“People think advanced nodes are becoming all digital. It’s not, really. We are not talking here about classic analog parts, but about a lot of integrated high-speed interfaces, converters, and PLLs that must be available in an integrated SoC and that are among the first IPs to get migrated. Each of the protocols has to meet its specified performance, but then it typically has to be verified at the full-chip level, which is becoming a very difficult task,” Nizic explained. “If you look at wireless protocols, for example, everything from EDGE (Enhanced Data GSM Environment) to 4G and 5G has to be supported. You can’t just drop one because someone says they aren’t supporting it anymore, and that creates a challenge. Some of these rely on similar algorithms, but some have to be separate functionality implemented in separate blocks. At the block level that might be fine. It’s relatively easy to apply the tools and methodologies because the problem is contained, and the blocks on their own can be verified efficiently. But verification of the overall functionality becomes an order of magnitude more complex as soon as you add the next protocol.”

Analog and mixed-signal functionality that has to be brought up to a higher level of abstraction for verification is becoming a real problem, too. “If I could simulate everything at the transistor level for the block, now it’s impossible at the SoC level,” he said. “How do I bring all of that information, including functionality as well as power and performance, that’s applicable at the top level to verify?”

To tackle these challenges, many engineering teams employ a good deal of modeling, and new languages and standards are being adopted that help bring analog functionality up to a higher level of abstraction in a way that is more efficient and better suited to digital verification methods. These include Real Number Models and SystemVerilog-AMS. “These help run the verification at a higher level without a significant penalty. Of course, it requires more skills in creating these models, as well as model equivalency checking to make sure that the models really do represent what they are supposed to represent,” Nizic explained.
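As an illustration of the real number model idea, and in Python rather than SystemVerilog so it stays self-contained, the sketch below reduces a hypothetical analog block (an 8-bit DAC with first-order settling) to discrete-time real-valued arithmetic, which is the essence of how RNMs let analog behavior run in a digital-style simulation. All parameters are invented.

```python
import math

# Hedged illustration of a real number model (RNM) in Python: the analog
# block, here an invented 8-bit DAC with first-order settling, becomes
# discrete-time real-valued math that an event-driven simulation can run
# without SPICE.

class DacRnm:
    def __init__(self, vref=1.0, bits=8, tau_ns=5.0):
        self.vref, self.bits, self.tau_ns = vref, bits, tau_ns
        self.vout = 0.0

    def step(self, code, dt_ns):
        """Advance the model by dt_ns with digital input 'code'."""
        target = self.vref * code / (2 ** self.bits - 1)
        alpha = 1.0 - math.exp(-dt_ns / self.tau_ns)   # first-order settling
        self.vout += alpha * (target - self.vout)
        return self.vout

dac = DacRnm()
for _ in range(5):
    print(round(dac.step(code=255, dt_ns=2.0), 4))     # settles toward 1.0 V
```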

Another consideration is that with multiple protocols, they won’t all be working at the same time. Some have to be shut down to conserve power.

“As I do verification, I need to make sure that the blocks that are shut down are not impacting performance of other blocks that need to be on at the time,” Nizic said. “Similarly, if I have the signal crossing between different power domains that I created for the power efficiency, I need to make sure that the signal crossings are handled properly. Simulation could be the answer for everything if it could handle such large designs, but I need to make sure all of the behavior I’ve modeled at the same time is power-aware so that the functional verification can be run in the different power modes—ideally driven by the power specifications.”

Ideally, some of these checks could be run statically, and tools and methodologies to capture power intent from analog and custom circuits are emerging.
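One such check, sketched below in hedged Python form (the domain and signal names are invented), is the isolation rule Nizic alludes to: any signal leaving a powered-down domain must be clamped to a known value before it reaches a domain that is still on.

```python
# Hedged sketch; domain and signal names are invented. The check flags any
# signal that crosses from an OFF domain into an ON domain without being
# clamped to its expected isolation value.

def check_crossings(domain_state, crossings, observed):
    """crossings: list of (signal, from_domain, to_domain, clamp_value)."""
    violations = []
    for sig, src, dst, clamp in crossings:
        if domain_state[src] == "OFF" and domain_state[dst] == "ON":
            if observed.get(sig) != clamp:
                violations.append(f"{sig}: {src}->{dst} not clamped to {clamp}")
    return violations

print(check_crossings(
    {"rf": "OFF", "digital": "ON"},
    [("rf_data_valid", "rf", "digital", 0)],
    {"rf_data_valid": "X"},     # un-isolated output of a powered-down block
))
```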

Staying connected
Part of the whole value proposition for users is that devices will stay connected, regardless of interface protocols and heterogeneous architectures. This includes everything from mobile phones to IoT devices and cars.

“Automotive SoCs are becoming as compute-intensive and complex as server-class SoCs,” said Rajesh Ramanujam, product marketing at NetSpeed Systems. “There are obvious reasons for this rapid growth pace. In terms of performance, heterogeneous architectures are killing homogeneous architectures, which are nowhere close to what heterogeneous architectures can offer. They take advantage of the strengths of different compute engines. In automotive compute flows, there are some things that CPUs are good at, and other areas where GPUs and accelerators have strengths. Each of these compute engines wants to take advantage of its own distinct elements, and they all prefer their own native languages, some of which might even be custom protocols to amplify what they do. That’s where the need comes to support different protocols.”

This is where an intelligent on-chip network comes into play because it can glue everything together with minimal overhead and a fair degree of configurability.

“You might not need to support everything at the same time, but you must support enough to let everything speak the same language,” Ramanujam said. “When building a chip or SoC, the engineering team must understand the compute needs. The CPU or GPU compute engines they choose are built to support specific protocols, so the design teams are bound by that. The heterogeneous compute IPs already are prebuilt using mostly standard interfaces or custom interfaces. This drives what protocols must be supported. When an interconnect comes into the picture, it must support all of those different protocols.”

And just to add to the mix, each protocol also has its own dependencies and ways of interpreting resource dependencies.

I/O issues
These problems are especially difficult to manage when it comes to SerDes.

“If you look at other interfaces like DDR, it’s a bit multi-protocol in the sense that a DDR interface will often support multiple generations of DRAM, so it will have DDR-2, -3, and -4,” said Hugh Durdan, vice president, strategy and products at eSilicon. “It may be a combo where it supports DDR and LPDDR, but those are all relatively similar protocols. With SerDes, it gets more complicated because people will want to define one SerDes and have it support multiple protocols—some at the time of implementation, but quite often at the time of use. For example, in the networking space, let’s say an engineering team builds a box, and on the front of the box are a bunch of connectors. They want complete flexibility for those connectors to be multiple different flavors of Ethernet or Fibre Channel, all supported on the same pins of the ASIC, essentially. What actual interface gets supported is determined by the software that’s loaded into the system and the type of optical module that’s plugged into the front panel of the switch.”
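A simple, hedged Python sketch of that use-time flexibility (the table entries and function are illustrative; real lane bring-up involves far more than a rate and an encoding) shows the idea of binding a protocol to a physical lane only once the software and the plugged-in module are known.

```python
# Hedged sketch: the settings table and function are invented, but the line
# rates shown are the nominal ones for these Ethernet/Fibre Channel variants.

PROTOCOL_SETTINGS = {
    "10GBASE-R": {"rate_gbps": 10.3125,  "encoding": "64b/66b"},
    "25GBASE-R": {"rate_gbps": 25.78125, "encoding": "64b/66b"},
    "16GFC":     {"rate_gbps": 14.025,   "encoding": "64b/66b"},
}

def configure_lane(lane, module_type, software_choice):
    """Bind a protocol to a physical SerDes lane at use time, based on the
    loaded software and the optical module that was plugged in."""
    if software_choice not in PROTOCOL_SETTINGS:
        raise ValueError(f"unsupported protocol: {software_choice}")
    return {"lane": lane, "module": module_type, **PROTOCOL_SETTINGS[software_choice]}

print(configure_lane(lane=0, module_type="SFP28", software_choice="25GBASE-R"))
```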

Given design complexity today, it is very important for the SerDes to be flexible enough to handle all of the different interfaces. “That flexibility comes with a cost, and it is typically the IP supplier that bears the cost,” Durdan said. “But there’s a little bit on the chip side, too, because the flexibility requires certain things that make the SerDes bigger and take a little more power. There is also the control logic that sits behind the SerDes in the ASIC, which the designer has to put into the device itself.”

The main thing for the design team to understand is the application. They need to ensure they have thought through all the requirements and are asking for the right thing.

“This is where a lot of the complexity comes in,” he said. “If this is not done up front, surprises can pop up along the way after they think they’ve got it all nailed down. The tradeoff really comes down to flexibility versus complexity. If you know that you just need one type of USB interface and one type of PCI and you’re never going to change it, you’re probably better off buying optimized solutions for each of those interfaces. But if you want the extra flexibility, which has a lot of value at the system level, it’s much more attractive to go with one of the multi-protocol solutions at the expense of the added complexity to verify it.”

Conclusion
There are solutions available. “The easiest way to manage all of these interfaces is to translate them into a common protocol as quickly and efficiently as possible, and then use the common protocol to complete the vast majority of SoC-level communication,” ArterisIP’s Garibay said.
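A minimal sketch of that approach, assuming an invented common transaction format and simplified AXI/OCP request shapes, shows the translation happening at the ingress port so the rest of the fabric sees only one protocol.

```python
# Minimal sketch of the "translate at the edge" idea; the common format and
# the simplified AXI/OCP request shapes below are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CommonTxn:            # the fabric's single internal transaction format
    src: str
    addr: int
    is_write: bool
    data: Optional[bytes]
    order_tag: int

def from_axi(port, req):
    """Translate a simplified AXI request into the common format."""
    return CommonTxn(src=port, addr=req["addr"], is_write=req["channel"] == "AW",
                     data=req.get("data"), order_tag=req["id"])

def from_ocp(port, cmd):
    """Translate a simplified OCP command into the common format."""
    return CommonTxn(src=port, addr=cmd["MAddr"], is_write=cmd["MCmd"] == "WR",
                     data=cmd.get("MData"), order_tag=cmd["TagID"])

print(from_axi("cpu0", {"channel": "AW", "addr": 0x4000, "data": b"\x01", "id": 3}))
print(from_ocp("dsp0", {"MCmd": "RD", "MAddr": 0x8000, "TagID": 7}))
```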

But as connectivity becomes a requirement of more devices across more markets, such as industrial IoT and automotive and medical devices—and as more functionality is built into devices that needs to be connected both internally and to the external world—this is likely to only get more complex. That growing complexity will be felt in all parts of the design flow, from the architecture where performance and power need to be considered, to verification, where the number of corner cases that need to be considered is exploding.


