Mobile phones are no longer driving I/O standards. Emerging industries are taking up the lead, and the resulting idea sharing will benefit everybody.
Interface standards are on a tear, and new markets are pushing the standards in several directions at the same time. The result could be a lot more innovation and some updates in areas that looked to be well established.
Traditionally, this has been a sleepy and predictable part of the industry with standards bodies producing updates to their interfaces at a reasonable rate. Getting data into and out of a chip used to be considered a mundane necessity, which is why most companies willingly handed over this task to the IP industry.
Figure 1, below, shows the rate at which some interfaces have been evolving. This does not include many of the latest standards or new emerging standards. Nor does it show how some of the standards are being repurposed for different applications.
Fig. 1: Introduction of new standards. Source: Cadence
Interface standards are all about interoperability. “Standardizing chip interfaces has benefited the semiconductor industry, as it allowed individual chip companies to develop devices that interoperate with other chips on a PCB,” says Ravi Thummarukudy, CEO of Mobiveil. “Many of these standards started as shared parallel architectures but quickly moved to high-speed serial point-to-point technology in the last 15 years or so.”
Standards usually are created for good reasons. “There is not always a close correlation between what might have driven a standard or when it came out and when it was deployed,” warns Dave Wiens, product line manager at Mentor, a Siemens Business. “We were hearing about DDR4, but it took a few years before there was much traction. We still have people using DDR2. There is always a ripple effect in deployment.”
Much of this was dictated by the maturation of end markets. “In the past, the personal computer business drove so much volume that everyone wanted to live within that cost envelope,” says Drew Wingard, CTO at Sonics. “Each generation happened every two years or so, but it was driven in a very predictable fashion. With the rise of smartphones some of that changed. They moved away from the standard DRAM roadmap and we saw the introduction of the LP DRAMs.”
Smartphones became the leader. “Today, smart phones are reaching saturation and possibly even declining a little,” points out Navraj Nandra, senior director of marketing for DesignWare analog and mixed-signal IP at Synopsys. “There will continue to be derivatives, but generally the market has saturated.”
The maturing of the cell phone era is having some surprising impacts on the industry. Instead of having a single set of requirements driving technology forward, there is a broadening of technology leaders, and each one of these is creating pressure in different ways. The advancements being sought by one application area are cross-pollinating into other areas. The result is much faster development of communications standards.
New drivers
End markets drive the requirements for standards. “Today, the buzzwords in the industry include big data analytics, machine learning, automotive, and IoT,” says Nandra. “They all require chips to support those markets. We see chips being developed in leading edge technologies for machine learning and a huge number of multi-core devices that need cache coherency both at the memory and I/O level. These end markets drive the need for much faster interconnect or lower power interconnect or even more challenging – faster interconnects with lower power.”
This is especially evident in data centers. “If you look at the 10G space, it had a very long run,” says Rishi Chugh, senior product marketing group director in the IP Group of Cadence. “It was the predominant standard from 2000 until 2010. Then around 2010 we saw 100G introduced as a derivative of 10G where people used 10 x 10. When 40G became possible, the transition happened very rapidly. We went to 4 x 25 in 2014. Today we are talking about single lane 100G for 2018. It will not be mainstream for a while, but we can expect to see pilot programs. So, in four years we have made three transitions.”
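The lane arithmetic behind those transitions is straightforward: the aggregate rate is the number of lanes times the per-lane rate. A minimal sketch, using only the configurations Chugh describes above (not a complete list of defined 100G variants):

```python
# Illustrative arithmetic only: aggregate rate = lane count x per-lane rate,
# using the 100G configurations mentioned in the quote above.
configs = [
    ("100G, first generation", 10, 10),   # 10 lanes at 10 Gb/s each
    ("100G, 2014-era",          4, 25),   # 4 lanes at 25 Gb/s each
    ("100G, single lane",       1, 100),  # 1 lane at 100 Gb/s
]

for name, lanes, gbps_per_lane in configs:
    print(f"{name}: {lanes} x {gbps_per_lane} Gb/s = {lanes * gbps_per_lane} Gb/s aggregate")
```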
Architectures within chips are changing, as well. “We are seeing memory-centric architectures rather than a CPU-centric architecture,” points out Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. “It is a memory-centric view with multiple compute engines trying to access memory. My precious resource is memory. From there they either want capacity or throughput. People have gone from DDR5 to LPDDR6 or GDDR6 or even HBM where you can get 1Tb/s of bandwidth.”
But it is not just large systems that are driving the standards. “The amount of information coming from edge compute devices is becoming critical,” adds Chugh. “These are the sensors within an automobile or in a surveillance camera or even in the Industrial IoT. You want HD images, better sensitivity and when a large amount of data is being gathered you need I/O that can handle that data feed.”
Automotive is adding new requirements, as well. “Automotive has its own set of new takes on old standards,” says Wingard. “Automotive Ethernet looks as if it will be a huge deal. There are a lot of economic reasons why they would like to be able to leverage the Ethernet infrastructure developed over the past years, but they need more reliability for functional safety reasons.”
MIPI Alliance is a relatively new organization, and it is setting up an automotive working group. “They are more visible in identifying new markets, and I am seeing more automotive focus within that organization,” says Nandra. “They are looking at the impact of very long cables. I have also heard conversations in the PCI Express group about functional safety aspects being required and changes required in the transaction layer controller.”
PCI Express is also seeing a renaissance. “This is in part because of CCIX, OpenCAPI and Gen-Z,” points out Mohandass. “People are trying to connect chips to achieve new performance goals. This is happening in the data center and hyperscale space today, where in the past it would have just been a PCIe connection. This provides memory coherence on top of the interface.”
Other segments are also driving this standard. “While the networking chips have been driving increased data rates, solid state drive solutions based on PCI Express and NVMe also demand higher data rates,” adds Thummarukudy.
Cross-pollination and convergence
This brings things full circle. “People are losing patience with their consumer devices,” says Chugh. “Five years ago you had a SATA drive, but now you have flash or SSD. Many new devices have non-volatile memory (NVM) drives in them, and they are more efficient, faster, reliable, and can store enough data. They require faster I/O connectivity to get the full benefit of these drives.”
In the Ethernet world, many of the changes brought about for automotive may find their way back into the data center, where extra uptime would be welcome. But that is not the only change we can expect with Ethernet. “Since USB can now operate at 10G, it can match Ethernet,” says Chugh. “There is a desire to bridge these and to converge. There would be a common data plane with multiple interfaces tapping into a common data path. We are seeing a unification of standards that operate at the same speed.”
This convergence is happening in multiple areas. “The consumer market is challenged by form factor, and they want smaller devices and thus smaller chips,” says Nandra. “This means fewer pins are available, and you either have to make them smaller or make the pins multi-functional and, by extension, multi-protocol. It is harder to make the chips smaller because you become I/O-limited. There often has to be a certain number of pins on the device for the required functionality. We are seeing multi-protocol SerDes, converged I/O – where USB, PCI Express and others are converged at the protocol level and at the electrical level on the pins.”
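A minimal sketch of that converged-I/O idea, purely for illustration: a shared SerDes lane that can be configured with one of several protocol “personalities.” The class and function names here are invented for this sketch, not any vendor’s actual multi-protocol PHY API, though the per-lane rates listed are the published figures for each standard.

```python
from dataclasses import dataclass

# Hypothetical illustration of converged I/O: one physical SerDes lane shared
# by several protocols, with the active protocol selected at configuration time.

@dataclass
class ProtocolPersonality:
    name: str
    line_rate_gbps: float   # per-lane signaling rate (published figure)

SHARED_LANE = [
    ProtocolPersonality("USB 3.2 Gen 2", 10.0),   # 10 Gb/s per lane
    ProtocolPersonality("PCIe Gen 3",     8.0),   # 8 GT/s per lane
    ProtocolPersonality("PCIe Gen 4",    16.0),   # 16 GT/s per lane
]

def select_personality(requested: str) -> ProtocolPersonality:
    """Return the protocol this shared pin pair should speak (illustrative only)."""
    for p in SHARED_LANE:
        if p.name == requested:
            return p
    raise ValueError(f"shared lane cannot be configured for {requested}")

print(select_personality("USB 3.2 Gen 2"))
```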
Standards are also being used outside of their intended application areas. “We are seeing regular ASICs now using an HBM2 controller and accessing that range of memory,” says Mohandass. “It is no longer confined to 2.5D integration. The interface now goes off chip. HBM is essentially 16 pseudo channels over an 8-port interface. That does take a lot of pins, but you need that to spread the required bandwidth across them. That is a Tb/s of bandwidth.”
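That bandwidth figure can be sanity-checked with simple arithmetic. The sketch below assumes a stack with eight 128-bit channels (the 16 pseudo channels come from splitting each channel in two) and a per-pin rate of 1 Gb/s, roughly first-generation HBM; HBM2 roughly doubles the per-pin rate.

```python
# Back-of-the-envelope HBM bandwidth check (assumptions noted inline).
channels      = 8      # 128-bit channels per stack; each splits into 2 pseudo channels
bits_per_chan = 128    # data bits per channel
gbps_per_pin  = 1.0    # assumed per-pin rate; HBM2 roughly doubles this

data_pins      = channels * bits_per_chan        # 1,024 data pins
bandwidth_gbps = data_pins * gbps_per_pin        # ~1 Tb/s aggregate
print(f"{data_pins} data pins -> {bandwidth_gbps / 1000:.1f} Tb/s of bandwidth")
```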
AMD has announced graphics chips that made the switch from GDDR5 to HBM, and analysis shows HBM to have 3X the bandwidth per watt of GDDR5, even though costs today are about 3X higher.
Complexity
There is a price to pay for new I/O advancements. “Standards like PCI Express and RapidIO have increased their lane rates from the initial 2.5Gbps per lane (Gen1) to a proposed 32Gbps for PCI Express Gen5 and 25Gbps for RapidIO Gen4,” points out Thummarukudy. “Such increases in speed necessitated different technologies and implementations for data encoding/decoding, power management and data security.”
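The raw lane rate tells only part of the story, because the encoding changed along the way. The sketch below uses the published PCI Express per-lane rates and encodings (8b/10b through Gen2, 128b/130b from Gen3 onward) to compute usable bandwidth per lane; RapidIO follows a similar trajectory and is omitted for brevity.

```python
# Usable per-lane PCIe bandwidth = raw transfer rate x encoding efficiency.
# Rates and encodings are the published per-generation figures.
pcie_gens = [
    ("Gen1",  2.5,  8 / 10),     # 2.5 GT/s, 8b/10b encoding
    ("Gen2",  5.0,  8 / 10),     # 5.0 GT/s, 8b/10b encoding
    ("Gen3",  8.0,  128 / 130),  # 8.0 GT/s, 128b/130b encoding
    ("Gen4", 16.0,  128 / 130),
    ("Gen5", 32.0,  128 / 130),  # proposed 32 GT/s, as noted above
]

for gen, gt_per_s, efficiency in pcie_gens:
    usable_gbps = gt_per_s * efficiency
    print(f"PCIe {gen}: {gt_per_s:5.1f} GT/s raw -> {usable_gbps:5.2f} Gb/s usable per lane")
```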
Thummarukudy estimates that design complexity has grown from roughly 25K gates of logic for PCI to multiple millions of gates for the newer specifications.
“The physical layer circuits have become more challenging because we are trying to push the pins to go faster all the time,” says Wingard. “The interface design for DRAM went from being a digital problem to being a mixed-signal analog problem. You see people talking about DRAM interfaces and they show eye diagrams. That is not what you would associate with a digital technology.”
The challenge is the laws of physics. “It has often been said that we cannot go faster because of the laws of physics, and then we figure out how to go faster,” observes Hugh Durdan, vice president of strategy and products at eSilicon. “Did we change the laws of physics? No, but we found a clever workaround.”
So how does one do that? “The new nodes enable additional functionality to be able to manage the signal and handle equalization to get a cleaner signal,” says Mentor’s Wiens. “If you try and send a 100Gb signal down the same transmission line used for 6, 12 or 25Gb, you would not have an eye left. So then you have to clean up the eye. There are three choices: 1) make the line shorter, 2) deploy higher cost materials on the board to improve the performance of the transmission line, or 3) add functionality at the silicon level to rescue the signal. Smaller nodes help with that.”
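That third choice is essentially signal processing applied to the received waveform. The toy sketch below shows the idea behind a feed-forward equalizer (FFE): a short FIR filter whose taps approximately invert the channel and re-open a closed eye. The channel and tap values are made up for illustration; real SerDes adapt CTLE/FFE/DFE stages rather than relying on fixed, hand-picked taps.

```python
import numpy as np

# Toy illustration of choice 3: a feed-forward equalizer (FFE) re-opening a
# closed eye. All values are invented for this sketch.
channel = np.array([1.0, 0.7, 0.4])           # assumed pulse response: main cursor + heavy ISI
ffe     = np.array([1.0, -0.7, 0.09, 0.217])  # hand-derived approximate inverse of the channel

def worst_case_eye(pulse):
    """Main-cursor amplitude minus total ISI; <= 0 means the eye is fully closed."""
    main = np.max(np.abs(pulse))
    isi = np.sum(np.abs(pulse)) - main
    return main - isi

print("eye before equalization:", round(worst_case_eye(channel), 2))                    # -0.1 (closed)
print("eye after equalization: ", round(worst_case_eye(np.convolve(channel, ffe)), 2))  # ~0.73 (re-opened)
```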
But they also increase the complexity. “The reason we end up with the extra complexity in the latest nodes is because they are the latest nodes,” adds Wingard. “The characteristics of the transistors are not very attractive in the analog domain. They do not have the same linearity characteristics that they had at 90nm. So it becomes increasingly difficult to do things in the analog domain, and a lot of the additional complexity is an attempt to get things into the digital domain as quickly as possible and then use signal processing or conditioning to clean up the implications of having relatively poor transistors to work with in the analog circuits.”
This is a double-edged sword. “As you get more sophisticated signal processing and equalization, the area goes up, but that is why many continue to go down the process nodes, so that it can shrink,” says Durdan. “If you look at a leading-edge SerDes, there is a lot more digital in it than there used to be. The digital part can benefit from process scaling, but the analog piece doesn’t. That is where the challenge comes in. The analog part is between a half and a third of the power. The digital power will probably go down, but the analog may not go down, and possibly even increases a little, but hopefully total power stays about the same.”
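Back-of-the-envelope numbers make that point concrete. Assuming, purely for illustration, that analog is 40% of SerDes power and that a node shrink cuts digital power by 30% while leaving analog flat, the total drops only about 18%:

```python
# Back-of-the-envelope only: how a digital-only shrink affects total SerDes power.
# The 40% analog share and 30% digital saving are illustrative assumptions,
# chosen to be consistent with the "between a half and a third" comment above.
analog_share   = 0.40   # fraction of SerDes power in the analog front end (assumed)
digital_saving = 0.30   # digital power reduction from a node shrink (assumed)

new_total = analog_share + (1 - analog_share) * (1 - digital_saving)
print(f"total power after shrink: {new_total:.2f}x original "
      f"({(1 - new_total) * 100:.0f}% saving)")
```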
Conclusion
Interfaces are getting more attention than ever before, and this will be good for everyone. Multiple industry segments are now driving standards development, and that will increase the rate of advancement. Standards organizations are rising to the challenge, and cross-pollination will lift the whole industry. What had been a limiting factor for many products is turning into a hive of innovation.
Related Stories
Performance Increasingly Tied To I/O
Chipmakers look beyond processor speeds as rate of performance improvements slow.
Move Data Or Process In Place?
Part 1: Moving data is expensive, but decisions about when and how to move data are very context-dependent. System and chip architectures are heading in opposite directions.
CCIX Enables Machine Learning
The mundane aspects of a system can make or break a solution, and interfaces often define what is possible.