From architectural tradeoffs to reuse and standards, making sure a design carries forward is becoming more difficult.
Future-proofing of designs is becoming more difficult due to the accelerating pace of innovation in architectures, end markets, and technologies such as AI and machine learning.
Traditional approaches for maintaining market share and analyzing what should be in the next rev of a product are falling by the wayside. They are being replaced by best guesses about market trends and a need to balance the cost of programmability against competitive pressures involving performance, power, and time to market. The bottom line is that it's becoming much harder to predict how a particular design will fare, how much backward- and forward-compatibility should be added in, and how other related technologies could affect that design throughout its projected lifetime.
“You need to get the first silicon right to reduce the cost and get to market on time,” said Hezi Saar, director of product marketing, mobile, automotive and consumer IP at Synopsys. “The investment is really focused on including the right features, getting the cost right, and including the right interfaces and standards that the product will have. Then you need to amortize this investment with a return across as many applications as possible.”
For example, there are AI accelerator chips that also are targeted at AI edge devices, and each of those markets may require a unique set of features. "I would have this set of features for two different markets, but would I do two different chips? Probably not. If I can do it in one, it won't be the best in terms of cost now, but that will allow me to get more over time. It will be much more effective in terms of cost later."
That falls under the platform type of approach, where connectivity to a range of legacy and new devices is designed into an SoC. "If you want to have LPDDR4 or LPDDR5, you have the choice of using the same chip, since you don't want to rule yourself out of that market," he said. "Of course, the market contains some mainstream and some high-end applications, so if you are targeting multiple markets you want to do that."
The downside is that supporting legacy standards is costly for IP providers. There is a PPA penalty for supporting all legacy nodes, verifying everything, and testing the end product itself in all the possible modes. In addition, there are interoperability issues that may arise over time.
"Deciding how far back to go for backward compatibility requires heavy consideration," Saar said. "What is the market? What stage is it in? What space do you play in as an SoC vendor? Would your market accept that you drop that compatibility? We are talking about SoCs, but these are SoCs that connect to connectors, and you don't control that. There are a lot of devices out there with legacy HDMI and USB that you cannot just drop, and an SoC provider needs to look into the device market and those devices that connect to yours. There are still legacy devices out there, so you need to make conscious decisions about what you want to do."
There has been a growing recognition that the pace of advances in chips is accelerating. Part of this is driven by a de-emphasis on scaling alone to improve performance and reduce power, propelling chipmakers to develop new architectures and packaging technologies to make up for those deficits. Part of it also is the opening of brand new markets, such as AI, automotive and edge computing. The result is that change is occurring so quickly that it’s difficult for design teams to stay current even over the course of a single design cycle.
"There are lots of new innovations happening — new networks are coming up, workloads are coming out of the marketplace, and many of these have new operators, and operators that people have not designed or known a priori," said Suhas Mitra, product marketing director for Tensilica AI products at Cadence. "The approach many people take is to make a very hard and fixed accelerator that does a certain thing very well. But as you go along in time, it becomes very hard to catch up because people are doing new networks all the time. Then, when you map them on, that essentially becomes really hard. One way to do future-proofing is to attach a more generic processor or generic DSP to an accelerator. By doing so, that DSP essentially is the catch-all for all operators that cannot be mapped on the accelerator directly."
One of the go-to solutions for dealing with these rapid changes is to put some functionality into software. While that is slower and less energy-efficient, it has a big economic benefit.
"This goes back to the software toolchain," said Mitra. "In the software toolchain you have the ability to consume networks, and the ability to take a new network, analyze it, and then spit out the code that can run on the accelerator. With a DSP, technically you are future-proofing with a co-processor attached to the accelerator, or a DSP attached to the accelerator. But to utilize that, I need software that is flexible and agile enough so that as new workloads come up, I can map them. I need a good toolchain and a compiler flow to map these processors."
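As a rough illustration of that catch-all idea, the sketch below (hypothetical Python, with made-up operator names and a made-up lower_network helper, not any vendor's actual toolchain) shows how a compiler-like pass might place supported operators on the fixed accelerator and route everything else to the generic DSP.

```python
# Hypothetical sketch of the "catch-all" dispatch idea described above:
# operators the fixed-function accelerator supports run there, and anything
# it cannot map falls back to a generic DSP/co-processor kernel.

ACCELERATOR_OPS = {"conv2d", "depthwise_conv2d", "matmul", "relu"}

class Op:
    def __init__(self, kind, name):
        self.kind = kind
        self.name = name

def lower_network(ops):
    """Partition a network's operator list between accelerator and DSP targets."""
    placement = []
    for op in ops:
        if op.kind in ACCELERATOR_OPS:
            placement.append((op, "accelerator"))   # fast, fixed-function path
        else:
            placement.append((op, "dsp"))           # flexible catch-all path
    return placement

# A newer network may use an operator ("gelu") the accelerator was never
# designed for; the DSP absorbs it instead of forcing a silicon respin.
network = [Op("conv2d", "stem"), Op("gelu", "act1"), Op("matmul", "head")]
for op, target in lower_network(network):
    print(f"{op.name:6s} ({op.kind}) -> {target}")
```

The value of the co-processor only materializes if the toolchain can regenerate this placement automatically for each new network, which is the point Mitra makes about compiler flow.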
Looking left and right
How much forward- and backward-compatibility is needed often depends upon the market segment.
"Nobody in the industry wants their old systems to become obsolete and need a full-scale update of everything," said Aleksandar Mijatovic, senior digital designer at Vtool. "They want to inherit their expensive and proven functions from a previous setup and add some new electronics. Still, you more or less have to cope with it, and you know which industries cannot afford not to be backward-compatible. If you're running a nuclear power plant, for instance, you're not going to change something unless it's absolutely necessary. But if you are building an iPad, you can completely change the entire electronics because it's a product, per se. You're not going to be hit if your components are all brand new."
Not worrying about backward compatibility allows for much faster design cycles and more innovation within the design.
"Everything that goes into bigger systems has to be backward-compatible until all the old components in some system are replaced with the newer versions," said Mijatovic. "The biggest companies in the world try to guess what will be next, what to prepare for. The best thing you can do is listen non-stop to people's needs, and listen to where the market is moving to catch new trends. But it's still a gamble, since you are going with your hunch. Sometimes your idea works, where a particular technology will be the thing of the future and everybody starts using your product because it has something that is ahead of the competition."
From the chip architecture point of view, legacy support adds cost. It costs more to maintain legacy support, whether through software patches or debugging. In addition, it adds area to the chip.
"If you still want to support the same market as the previous one, you may decide to operate with the current functionality but leave the previous functionality," said Olivera Stojanovic, project manager at Vtool. "But then you may have double the costs for mass production. It really requires an analysis of the whole market and what the needs are. Or, if you are the first one in the market, then you're creating need. Still, more or less, it's about market demand and the tradeoff between price and the volume you will sell."
The software world, of course, is used to dealing with continuous upgrades of technology, and this is especially true today for applications that rely on machine learning.

"These applications are constantly learning and improving, requiring downloadable upgrades," said Rob van Blommestein, head of marketing at OneSpin Solutions. "The software is usually run on very stable hardware platforms, but how do the platforms get upgraded or differentiated without having to rebuild them? Customization over time is essential to keep pace with changes in demand. One of the keys to doing that is to build a reconfigurable hardware platform to allow for that flexibility. There are lots of ways to achieve that, but many companies are adopting heterogeneous computing that includes different elements, such as a software-programmable engine, accelerators, and programmable logic. Incorporating all these components helps to meet low-latency, performance, and capacity demands while still achieving flexibility in the platform. Ultimately, the idea is to extend the life of the hardware throughout the entire product lifecycle as continuous improvements to the applications are made. But doing this requires ongoing verification. If we look at an Agile development flow, one of the principles is to perform continuous integration and testing. If we apply that to hardware development, we can begin to understand that as changes are made, verification must also be done to ensure that changes don't introduce new bugs, safety issues, or security vulnerabilities."
Data-driven future-proofing
Data plays an increasingly important role in future-proofing. It can be used to assess market trends, and it can be used to match what is needed in a design, both at a high level and in the design and verification of a chip for one or more applications. In general, the greater the required compatibility, the more data that needs to be processed and analyzed.
“The amount of verification data is directly proportional to the length of the test,” said Shubhodeep Roy Choudhury, CEO of Valtrix Systems. “The shorter and tighter the sequence, the less verification data it will generate. For example, choosing C/C++ to write a memory copy test is always going to yield a long and ineffective test that generates more data as compared to an assembly sequence. So the stimulus developer needs to meticulously plan how the intent of every scenario is translated into a test sequence. Optimizing the number of scaffolding instructions associated with any scenario must be one of the key criteria in test development activity.”
Some of this is simply applying best practices that have been developed over the years. “The test stimulus generators need to be reproducible in nature,” said Roy Choudhury. “The tools must deterministically generate the same test sequence along with the architectural context every time the same input seed is passed. This will ensure that the verification data can be faithfully recreated at a later point if required. Another area that needs focus in order to reduce the amount of verification data is failure bucketing mechanisms. If the failures can be mapped to tool signatures or asserts, it will be easier to identify and group failures from the regression and eliminate the duplicate ones.”
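A minimal sketch of those two practices is shown below, assuming a hypothetical stimulus generator and failure-log format rather than any particular tool: the same seed always reproduces the same test sequence, and failures are grouped by signature so duplicates from a regression can be collapsed.

```python
# Sketch of (1) seed-deterministic stimulus generation and (2) failure
# bucketing by signature, as described above. All names are illustrative.
import random
from collections import defaultdict

def generate_test(seed, length=8):
    """Deterministically generate a pseudo-random instruction sequence."""
    rng = random.Random(seed)            # seeded RNG -> reproducible stimulus
    opcodes = ["add", "load", "store", "branch", "mul"]
    return [rng.choice(opcodes) for _ in range(length)]

def bucket_failures(failures):
    """Group regression failures by their signature (e.g. assert message)."""
    buckets = defaultdict(list)
    for test_name, signature in failures:
        buckets[signature].append(test_name)
    return buckets

assert generate_test(42) == generate_test(42)   # same seed, same sequence

failures = [("t1", "assert: lsu_ordering"), ("t2", "assert: lsu_ordering"),
            ("t3", "timeout: fetch_stall")]
for signature, tests in bucket_failures(failures).items():
    print(f"{signature}: {len(tests)} test(s), e.g. {tests[0]}")
```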
Some of this also requires innovation in how data is used. Verification data can be optimized by running more meaningful stimuli. "This includes tests that can exercise parts of the logic that have undergone change or resulted in recent bugs," he said. "AI/ML can play an important role here by using coverage feedback to tweak the constraints/inputs in a way that results in meaningful tests getting generated."
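The sketch below is a deliberately simple stand-in for that coverage-feedback loop, using a weighting heuristic rather than a trained ML model and made-up scenario names: scenarios whose coverage bins are still unhit get a larger share of the stimulus budget on the next pass.

```python
# Illustrative coverage-feedback heuristic (not any specific tool's flow):
# bias scenario selection weights toward coverage bins that are still unhit.

def rebalance_weights(weights, coverage_hits, boost=2.0):
    """Increase the selection weight of scenarios whose bins are still unhit."""
    new_weights = {}
    for scenario, weight in weights.items():
        unhit = coverage_hits.get(scenario, 0) == 0
        new_weights[scenario] = weight * boost if unhit else weight
    total = sum(new_weights.values())
    return {s: w / total for s, w in new_weights.items()}   # normalize to 1.0

weights = {"cache_eviction": 0.25, "page_fault": 0.25,
           "atomic_ops": 0.25, "irq_storm": 0.25}
coverage = {"cache_eviction": 12, "page_fault": 0, "atomic_ops": 3, "irq_storm": 0}
print(rebalance_weights(weights, coverage))   # unhit scenarios get more weight
```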
Hedging with programmability
Another approach for future-proofing architectures is making more of a design reprogrammable. This is one of the main reasons FPGAs and eFPGAs have grown so quickly in the age of AI and ML. The same is true for software, which is extremely flexible thanks to an almost perpetual stream of patches.
“Effectively, that means more software and maybe more controllers,” said Simon Davidmann, CEO of Imperas Software. “It needs subsystems and processors rather than RTL, and then you can reload your software — as long as the software runs efficiently.”
Companies like Imperas are intricately involved with developers evaluating their architectures for these algorithms. “Simulators help people to build a model of these architectures so they can try out their software on it way before the RTL gets frozen,” Davidmann said. “In some ways, by having simulation and virtual platforms, you can build a model of what you’re considering, see how it works for different types of applications, and reconfigure it and change things. While not everybody does this, lots of people do. When they get a simulation up and running, they will try to run more than just the thing that’s in mind. Also, once you’ve got a simulation platform, you consider regression tests — not just what you’re playing with currently. You can run your previous designs on it normally to check that legacy software is still running. While you’re thinking about the future, you’ve got to take the past with you most of the time.”
Planning for the future requires a lot of trial and error. “Simulators are used for architectural decisions, to try out new configurations for the future and different types of functionality,” he said. “Will it run Zephyr as well as FreeRTOS? If something works on these platforms, will it also run the previous functionality? It’s also common for design teams to look back to make sure that their old design still works for Arm or MIPS or PowerPC, while also looking to the future, and trying different configurations. Virtual platforms help keep the architecture set up so it can be useful in the future, while also giving the ability to analyze different bits of software running on configurations.”
For companies building sophisticated testbenches for their cores, the engineering team will try to use the same testbench on the next core. When using processor models in a virtualized environment, the data must be pulled out of the RTL into the testbench in order to do a comparison with the model of the core, because the simulator does a reference comparison of every instruction cycle. One challenge here is that the same information can't always be extracted from the new core, so the same testbench can't be reused. This, in turn, generates more work.
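The comparison itself can be pictured as a lock-step check of each retired instruction against the reference model, as in the hypothetical sketch below (field names are illustrative; a real co-simulation testbench extracts far more architectural state). The reuse problem arises when a new core does not expose the same fields, which is exactly where agreed-upon interfaces would help.

```python
# Hypothetical lock-step comparison of an RTL trace against a reference model.
# Each retired instruction's fields are checked; a mismatch flags divergence.

def compare_step(rtl_event, ref_event):
    """Compare one retired instruction from RTL against the reference model."""
    for field in ("pc", "instr", "rd", "rd_value"):
        if rtl_event.get(field) != ref_event.get(field):
            return f"mismatch on {field}: rtl={rtl_event.get(field)} ref={ref_event.get(field)}"
    return None

def run_lockstep(rtl_trace, ref_trace):
    for i, (rtl_event, ref_event) in enumerate(zip(rtl_trace, ref_trace)):
        error = compare_step(rtl_event, ref_event)
        if error:
            return f"divergence at retired instruction {i}: {error}"
    return "traces match"

rtl = [{"pc": 0x80000000, "instr": 0x00500093, "rd": 1, "rd_value": 5}]
ref = [{"pc": 0x80000000, "instr": 0x00500093, "rd": 1, "rd_value": 5}]
print(run_lockstep(rtl, ref))
```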
“In the verification ecosystem, there needs to be more standards, because people are trying to do reuse,” Davidmann said. “Verification IP is traditionally focused on the target, such as USB, TileLink, or AMBA. From the verification ecosystem, standards are needed so the IP can work with different cores. It’s not just about the item it’s testing. It’s the infrastructure it’s living in, because you want to re-use it. It may be that you’re in a Cadence environment, and Synopsys has the verification IP you want. But because there’s no standard there, you can’t use it in your Cadence environment, or vice versa. While it is early days for this, there needs to be some more standards in the verification ecosystem to help this idea of future proofing the verification bits of it.”
Market pressures
These design and verification pressures vary considerably by end market. But they are particularly challenging in the automotive arena, where there is a delicate balance between utilizing leading-edge technology and adhering to regulations that support both reliability and interactions with other vehicles on the road.
“Automotive systems have to be in the car a long time, so when you deploy a system, if you’re not at the very leading edge, you’re going to fall behind,” said Frank Ferro, senior director of product marketing for IP cores at Rambus. “You’re not going to be able to change it often. Due to this, automotive chip architects have to be more aggressive in their IP, and in their semiconductor architecture selections. At the same time, they’ve got to deal with all the reliability issues.”
The game console market shares some of the same demands. “The game console itself has to stay in the market for 10 to 15 years, so the interesting challenge is to provide something that is leading edge at the day of launch, but which still has enough power late in the lifecycle when the games get better and people demand more of a visual experience,” said Steven Woo, fellow and distinguished inventor at Rambus. “With cars, you’re trying to get the best hardware you can when you launch because you know things are going to get upgraded, and they have to stay relevant for a very long time. This speaks to the challenge of how to design everything to be forward-compatible by guessing what things are going to look like in the future. You have to have a relatively well thought out strategy for allowing that to happen into the future. When cars were beginning to support Bluetooth, a Bluetooth headset would work for a while, and then may not work if it was upgraded. Also, certain brands of Bluetooth headsets didn’t work in certain cars. This speaks a little bit to compatibility and a little bit to forward compatibility, which is a much harder thing to predict and plan for.”
In automotive, there are intense discussions about what to do with legacy IP or code when it needs to pass certain certifications. "This is a huge problem for the companies involved here," said Vtool's Stojanovic. "When you're developing something from scratch, and you are aware of the standards, it is much easier. But with legacy, then it's a problem."
If it's an ADAS SoC or IVI SoC, these are more specific to the functionality a certain developer may have, and they are tied to a certain architecture in a car, or to certain car makers that typically use that ADAS SoC. As a result, there's more of a bias toward that specific architecture.
"But there are devices that are multifunction, such as image sensors," said Synopsys' Saar. "And while there's a different design for the commercial market than the automotive market, the basic functionalities are similar. There are image signal processors that sit between the multiple image sensors and the SerDes, which also could be multifunctional. Yes, there is this additional layer of automotive, which requires the device to be qualified, and the IP must be of a certain grade that is not a requirement in the commercial market. If you want to design it just for the consumer market, it's AI at the edge. When you know you have multiple images coming into your device, you use some AI inference tagging, which is very simple. And there is connectivity to a co-processor through a PCIe, or to SerDes, if that's in automotive. There may be a multi-function ISP that can go after both markets. However, it needs to be designed for the superset. This seems to be the approach many of the AI startups are taking. They can repurpose the engine. It doesn't matter if it's in a car necessarily or an industrial application. They can do that, be wiser about what they do, and prolong the lifetime of their product."