The industry is changing, and more information is flying around at an ever-faster pace. The shift left is intended to reduce costs for semiconductor companies, but who is footing the bill?
While working on the predictions articles for 2015 (markets, design, semiconductors, tools and flows), a number of companies talked about the great shift left happening in the industry. What was surprising was how many companies mentioned it, and in how many different ways.
It is clear that shift left does not mean the same thing to all people. While all of them see it as addressing the need to speed up the design process and reduce costs by eliminating surprises, that is where the commonality ends. Each company tends to frame the details around the tools, flows and strategies it already has, and to use the term as an encompassing marketing message.
Intel gets credit for defining the term, according to Jim Kenney, director of marketing within the emulation division of Mentor Graphics. “They were talking about this over a year ago. What they presented is getting things done earlier and shortening the flow. Specifically, they were taking the verification flow and moving it left.”
Shift left is happening in places outside of verification. “Shift left is about how we do a better job of finding things out today that we may not have found out about until some months down the road,” says Drew Wingard, chief technology officer at Sonics. “It is about reducing the time and costs associated with developing big chips.”
In addition, it is not just one part of the flow that is affected. “The shift left is across the spectrum of design,” explains Steve Carlson, vice president of marketing for low power and mixed-signal solutions at Cadence. “It starts with process development and the way in which that is rolled out, and goes up through and including the adoption of, and the need for, earlier power modeling.”
As with all successful terms, everyone wants to jump in. “The term is becoming one of the big buzzwords,” claims Jean-Marie Brunet, director of marketing for the emulation division of Mentor. “Basically it just means that more customers are trying to do things early on rather than them happening later.”
“We didn’t invent the term but the industry likes it,” admits Pat Sheridan, product marketing for virtual prototyping at Synopsys. “The term came from some of our customers and we talked about it in a book about virtual prototyping, entitled ‘Better software, faster,’ published last year.”
What does shift left mean in practical terms? Norman Cheng, vice president and senior product strategist at Ansys, describes it this way: “Shift left is compared against top down. If you don’t have any physical knowledge of the final design when you do system-level design then you are missing a lot of detail. Your final product may not be what you expected. The shift left means that you have to consider a lot of characteristics of the design, such as the process node you are working with, early on.”
But confusion starts when we try to reconcile those notions with abstraction. “Abstraction delivers a separation of concerns, with the property that a specific concern is a lot easier to address than the combination of many,” says Pranav Ashar, chief technology officer at Real Intent. He says it is widely acknowledged that abstraction is the only way to keep up with Moore’s Law. “That might suggest that abstraction is synonymous with moving up a design-refinement notch. Not true. There could be ‘horizontal abstraction’ as well as ‘vertical abstraction.’” Ashar explains that horizontal abstraction separates layers of complexity at the same design-refinement level, whereas vertical abstraction moves up a refinement level.
If abstraction is necessary for Moore’s law, then it would appear that bringing detailed information up to the system level would be a disadvantage. “The goal is to parallelize as much as possible across the entire development stack,” says Carlson. “The nice thing about this is that there are more things in play so you can make changes to the hardware at the micro-architectural level or maybe drop in a specialized core. You may find that there are opportunities in adding some cells to the library. The bad side is that there are so many moving parts that one change can have a ripple effect.”
Accelerating Moore’s Law
Recently, some people have been questioning the sustainability of Moore’s law while others point out that it appears to be accelerating. This acceleration may be due to the shift left and thus a temporary effect. Carlson explains what is happening. “The foundries used to work by themselves and then they would release a near-final version (0.9) to the EDA vendors. They would update their tools and then the IP vendors would get early access to the tools and eventually the end customers could start the adoption process for the new node. That is really inefficient and everything can be shifted to the left.”
Wingard sees that a change was forced upon the industry. “We are recovering from what happened when the large semiconductor companies took a step back from building the large application processors that were funding a lot of the advanced methodology work, the development of large IP cores and new design methods.”
Today, some early development is still done by the foundries, but the situation quickly changes. “We now get DRM 0.1,” says Carlson. “There is a close feedback loop between the manufacturing rules, the impact on routers, the design of cells, the impact on floor planning, and the introduction of coloring. We also get involved in the test chips, which include 64-bit cores, cache memory, high-speed I/O. The idea is that the foundry, EDA and IP vendor work together. We target the first adopters and we have a good idea about what they will want and we put their requirements into a test chip. We can use that for characterization and this saves them time. So the test chip has also been shifted left and the development time of the end customer is reduced because part of the burden has been taken on by the early development team.”
Power becomes the great leveler
“The shift left concept gives us a good opportunity with power because the information flow is both ways,” says Alan Gibbons, power architect within Synopsys. “We need to have some level of enablement for energy-aware design—power information and abstraction from the IP going into the system-level and power intent being directed into the implementation world. We see information going in both directions.”
Just as with other types of design decisions, the earlier the choices are made, the greater the impact they can have on the end design with a substantially lower investment. “Power has always been regarded as more of a back-end problem,” says Bill Neifert, chief technology officer at Carbon Design Systems. “RTL designers have been concerning themselves with it for a while but it hasn’t really shown up on the radar of many system architects. In many cases, they’re only a few years removed from using spreadsheets to do performance calculations. Architects routinely describe exact performance targets that they need to meet and yet they tend to talk about power in much more vague terms. It’s not being positioned as being their problem, so in all but a few cases, they’re not making an effort to tackle it.”
The semiconductor industry has seen this before with logic synthesis. When wire delays became increasingly important, timing could no longer be estimated accurately enough, and synthesis had to start considering physical aspects such as place and route. Power is becoming something that spans all levels of abstraction, from the switch to the system and the software, and it is clear that these levels need to be connected together better than they are today. “Over the past three or four years, HLS has been viewed as a top-down compiler,” points out Bryan Bowyer, product marketing for high-level synthesis at Calypto. “You set it up, build the design and get RTL to push through the flow. Now we see the need for the feedback path to the designer so that before synthesis is even run, we have a characterization flow that runs downstream tools.”
Gibbons sees a similar need for moving power information up the flow. “The enablement of low-power or energy-aware design needs to be able to abstract the critical power characteristics of a piece of IP and make those available in the virtual prototype. Once they are available, we can use them to make intelligent architectural decisions in both hardware and software about energy efficiency.”
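As a rough sketch of what that abstraction can amount to, the fragment below (plain C++, with a hypothetical class name and placeholder power numbers, not any particular vendor's format) shows a state-based power model that an IP block could export and a virtual prototype could integrate to track energy:

```cpp
// Minimal sketch of an abstract, state-based power model of the kind that
// could accompany an IP block in a virtual prototype. The class name, states
// and power numbers are hypothetical and for illustration only.
#include <iostream>
#include <map>
#include <string>

class IpPowerModel {
public:
    // Power characteristics abstracted from the IP's implementation data.
    void add_state(const std::string& name, double milliwatts) {
        state_power_mw_[name] = milliwatts;
    }
    // Called by the virtual prototype whenever the block changes state.
    void enter_state(const std::string& name, double at_ms) {
        accumulate(at_ms);
        current_state_ = name;
    }
    double energy_mj(double now_ms) {
        accumulate(now_ms);
        return energy_uj_ / 1000.0;   // microjoules to millijoules
    }
private:
    void accumulate(double now_ms) {
        if (!current_state_.empty())
            energy_uj_ += state_power_mw_[current_state_] * (now_ms - last_ms_);
        last_ms_ = now_ms;
    }
    std::map<std::string, double> state_power_mw_;
    std::string current_state_;
    double last_ms_ = 0.0, energy_uj_ = 0.0;
};

int main() {
    IpPowerModel dsp;                 // hypothetical DSP block
    dsp.add_state("idle", 2.0);       // mW, placeholder values
    dsp.add_state("active", 45.0);
    dsp.enter_state("active", 0.0);   // timestamps in ms of simulated time
    dsp.enter_state("idle", 3.0);
    std::cout << "energy so far: " << dsp.energy_mj(10.0) << " mJ\n";
    return 0;
}
```

As the implementation matures, the placeholder numbers would be replaced with characterized data from downstream tools, which is the kind of two-way information flow Gibbons describes.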
An intermixing of information and the tools to enable an incremental flow is the way that Bowyer sees it. “Conceptualize the design as a lot of little pieces. You decide how you want to structure the design and push it through synthesis. Maybe you didn’t meet timing or power. You want to be able to lock down the pieces that were good and feed that information back. The higher-level tools can consume information out of the lower-level tools and back annotate it into the partition. This can be used to resynthesize other pieces. The same is true for power where you can run down to RTL and then use power analysis technology to get the power data and push that back up.”
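A highly simplified illustration of that incremental loop, using invented block names, targets and measurements: results from downstream tools are back-annotated onto each partition, blocks that meet their targets are locked, and only the remainder is pushed back through synthesis.

```cpp
// Hypothetical sketch of the incremental feedback loop described above.
// Downstream results are back-annotated onto each partition; blocks that
// meet their targets are locked, and the rest are re-synthesized.
#include <iostream>
#include <string>
#include <vector>

struct Partition {
    std::string name;
    double target_ns;      // timing target for the block
    double measured_ns;    // back-annotated timing from downstream tools
    double measured_mw;    // back-annotated power estimate
};

int main() {
    std::vector<Partition> design = {
        {"dma",    2.0, 1.8, 12.5},   // placeholder numbers
        {"codec",  2.0, 2.3, 40.1},
        {"bridge", 2.0, 1.9,  5.4},
    };
    for (const auto& p : design) {
        bool lock = p.measured_ns <= p.target_ns;   // keep what already meets timing
        std::cout << p.name << " (" << p.measured_mw << " mW)"
                  << (lock ? ": lock and reuse\n" : ": re-synthesize\n");
    }
    return 0;
}
```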
What comes up also needs to go down. “You need to be able to pass information down from the virtual prototype as the design is refined,” says Sheridan. An example of this is information that enables optimization of the interconnect. “This can be done in the virtual prototype and architectural design and then that information can be used to configure RTL components, such as interconnect or the memory sub-system. From a performance analysis point of view this has been happening and now we are seeing more information that needs to be shared regarding power analysis.”
Extension to the system level and software
Vinod Viswanath, director of research and development at Real Intent, sees problems with the current approach. “SoCs are optimized for specific applications and the optimizations are done in isolation and without utilizing the workloads. Due to the lack of hardware/software cooperation in power management, the platform as a whole cannot anticipate power requirements of the application ahead of time and instead has to perform power management reactively. Going forward, it is quite clear that neither hardware nor software in isolation can make the best decision about power and performance management, and neither can these be done for the CPU alone. They must be done at the entire platform level, in a holistic way.”
One very common interpretation of shift left involves early software bring-up. “We use the term shift left to talk about the project schedules and the impact it can have when software development is done in parallel with the hardware design,” says Sheridan. “As system-level power management becomes more important, the development of power management software is becoming a part of the functionality of the product. This can start earlier using virtual prototypes as the software development target instead of waiting for silicon to come back.”
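As one hedged example of what such a target can look like, the sketch below models a hypothetical power-management controller as a loosely timed TLM-2.0 peripheral, assuming the Accellera SystemC/TLM-2.0 libraries are available; the device, register map and state encoding are invented for illustration. Power-management firmware can be developed against a model like this long before silicon returns.

```cpp
// Minimal sketch of a loosely timed virtual-prototype peripheral and the
// firmware stand-in that drives it, assuming the Accellera SystemC/TLM-2.0
// libraries. "PowerCtrl", its register map and the state encoding are
// hypothetical and for illustration only.
#include <cstdint>
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Model of a power-management controller the real chip will eventually have.
struct PowerCtrl : sc_core::sc_module {
    tlm_utils::simple_target_socket<PowerCtrl> socket;
    uint32_t state_reg = 0;   // hypothetical power-state register at offset 0x0

    SC_CTOR(PowerCtrl) : socket("socket") {
        socket.register_b_transport(this, &PowerCtrl::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            state_reg = *data;             // firmware requests a new power state
        else
            *data = state_reg;             // firmware reads the current state back
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // loosely timed annotation
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Stand-in for the power-management firmware under development.
struct Firmware : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Firmware> socket;

    SC_CTOR(Firmware) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t low_power = 0x3;          // hypothetical state encoding
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x0);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&low_power));
        trans.set_data_length(sizeof(low_power));
        socket->b_transport(trans, delay);
        std::cout << "firmware requested state " << low_power
                  << " after " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    PowerCtrl ctrl("ctrl");
    Firmware fw("fw");
    fw.socket.bind(ctrl.socket);   // firmware talks to the model now, silicon later
    sc_core::sc_start();
    return 0;
}
```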
Not all see the virtual prototype as being the key for software bring-up. “We are seeing the processor kernel modeled in an ESL-like environment,” says Brunet. “ISS and abstract models for UARTs, timers, memory and those types of things, connected up to an emulator that is running the new blocks at RTL. I don’t see a lot of exploration at the transaction level for new functional blocks.”
Carlson agrees. “Not everyone is doing system-level modeling yet, but you can see more of that, often in a hybrid mode where parts of the system are modeled at the transaction level and coupled with hardware emulation. Together that lets you run more software while exercising the new specialized hardware. This allows you to shift even more of the software, such as task allocation software and pieces of the OS, left in time.”
Carlson counters that “there is a diversity of software and a virtual prototype that is loosely timed can be extremely valuable. There may be some types of software where it has to execute in a certain period of time and for those cases more accuracy can be helpful.”
Not everyone is happy with the shift left. “If you use a virtual prototype and find some class of problem but not others and then later find different types of problems, the software guys are dissatisfied,” points out Carlson. “They would rather have one cycle-accurate model to work from and debug each piece of software once. This is a source of resistance against the shift left, but when you are the long tail in product introduction, senior management takes steps to force the point.”
Changing the design paradigm
Wingard believes we have to start approaching the problem in a different manner. “What can we do that allows us to get as near to optimum as possible with far lower total cost, and how do we deal with markets such as the IoT?” He contends that we should be looking at, and learning from, Agile methods. “Instead of doing things breadth first, where you try to advance the whole project to the next stage, Agile turns this on its head and takes the most interesting functions, the places where there is the biggest risk, and learns about those depth first.”
This would seem to be more of a dive than a shift left, and it would allow the benefits of abstraction to remain unencumbered until there is a reason to do a deeper investigation.
David Kelf, vice president of marketing for OneSpin Solutions, also has a different view of the problem: “I have always rejected the notion that there is a level of abstraction above the RT level. Blocks of RTL code, in the form of internal or external IP components, are stitched together. It is this process of planning which blocks should be leveraged, and the constraints on each, that is the real system abstraction. If this is the case, then I am not sure much has changed in terms of high-level to low-level convergence of requirements.”
Semiconductor Engineering would like to hear your views about Agile, the shift left and the existence of higher levels of abstraction. Are the foundries and EDA companies attempting to reduce the costs of semiconductor development in the hope of revitalizing the industry?