Is HW Or SW Running the Show?

The equation may be changing, but at the end of the day, the system is the only thing that matters. The question is, what is the system and how do we get there?


In the past, hardware was designed and then handed over to the software team to add their contribution to the product. This worked when the amount of software content was small and the practice did not significantly contribute to product delays. Over time the software content grew, and today it is generally accepted that software accounts for more product expense than hardware, takes longer to develop, and provides a significant portion, if not the majority, of the functionality.

Software has become so important that hardware is often seen as the platform needed to optimally support the software. “Software functionality will determine hardware functionality instead of the other way around,” says Bob Zeidman, president of Zeidman Consulting. “You’ll design software and give performance constraints like cost, power consumption, memory size, and physical size, and the tool will build the hardware design to meet your constraints and run your software.”
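As a rough, hypothetical illustration of the flow Zeidman describes (none of the names or numbers below come from an existing tool), the sketch takes a software workload estimate plus cost, power, memory and size budgets and selects the cheapest candidate platform that satisfies all of them:

# Hypothetical sketch of a constraint-driven platform selection step, loosely
# in the spirit of "describe the software, state the constraints, let the tool
# pick the hardware." Names and figures are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Constraints:
    max_cost_usd: float
    max_power_mw: float
    min_memory_kb: int
    max_area_mm2: float

@dataclass
class Platform:
    name: str
    cost_usd: float
    power_mw: float
    memory_kb: int
    area_mm2: float
    mips: float  # rough performance figure

def pick_platform(candidates, workload_mips, c: Constraints):
    """Return the cheapest candidate that meets the workload and every budget."""
    feasible = [p for p in candidates
                if p.mips >= workload_mips
                and p.cost_usd <= c.max_cost_usd
                and p.power_mw <= c.max_power_mw
                and p.memory_kb >= c.min_memory_kb
                and p.area_mm2 <= c.max_area_mm2]
    return min(feasible, key=lambda p: p.cost_usd) if feasible else None

candidates = [
    Platform("small_mcu", 1.20, 15, 64, 4.0, 50),
    Platform("mid_soc", 4.50, 120, 512, 12.0, 400),
    Platform("big_ap", 18.00, 900, 2048, 40.0, 4000),
]

best = pick_platform(candidates, workload_mips=300,
                     c=Constraints(5.0, 200, 256, 20.0))
print(best.name if best else "no feasible platform")  # -> mid_soc

A real tool in this spirit would synthesize or configure the hardware rather than pick from a fixed list, but the inversion is the same: the software and its constraints come first, the hardware falls out.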

But even achieving this can be difficult. “The role of hardware is to optimally satisfy system functionality and performance requirements,” says Jon McDonald, technical marketing manager for DVT Questa at Mentor Graphics. “Software is a big part of the system’s functionality, but when we start requiring power, performance and size optimizations, these variables cannot be optimized in a single domain. The system and user experience requirements must be taken into account to understand the appropriate trade-offs between optimization metrics.”

“Changes to an SoC design only make sense based on how they enable better functionality, which is most often defined by the software running on the SoC,” says Tom De Schutter, senior product marketing manager for virtual prototyping at Synopsys. “This has created a co-dependency between the hardware and the software. The hardware cannot be developed and verified outside the context of the software, and the software needs to be developed for the target hardware.”

To make things more complicated, this is not just a technical issue. “The issues cross traditional hardware software boundaries,” says McDonald. “Companies are still largely compartmentalized. This compartmentalization is going to have to change, and a key driver is the increasing focus on the system as a whole, rather than looking at the hardware in isolation. Organizations are realizing that the hardware cannot be successful unless it is understood, designed and optimized in a system context.”

McDonald also feels that focusing on hardware/software co-design is a distraction. “It actually encourages the compartmentalization of tasks and promotes the idea that system optimizations can occur independently in each domain. The focus will become much more holistic: each domain must be optimized while understanding and reacting to the requirements of the other domains, and optimizations will more fluidly flow across domain boundaries. Successful designs in 2025 will be differentiated more by system decisions that cross the boundaries than by optimizations in the individual domains.”

“The whole software/hardware co-design concept is completely broken and is based on a false premise,” adds Neil Hand, vice president of marketing and business development for Codasip. “We long ago went past the point where hardware was dominant in an SoC, and yet companies still spend millions to license a core and then optimize the software for that platform. All of the effort in the industry is to make this process more efficient, but it is still an incorrect premise. Even adding new and varied optimized cores doesn’t help, because evaluating those cores is too complex. ESL approaches don’t help either, since they tend to look at how to replace software by converting it to hardware.”

The industry has been responding to some of those issues. “We have long recognized the rising cost of design due to overall design complexity, cost of prototypes and the necessity of ever-higher verification quality,” explains a fellow at Cadence. “In fact, the combination of advanced verification, wider use of larger-scale IP, improved digital and analog tools, and the shift of complexity to software have all helped design costs grow at a more tolerable rate. However, the opportunity of the Internet of Things (IoT), the potential for many more designs of modest scale and cost, epitomizes the demand for significantly greater automation of block creation and integration, full-chip verification and platform programming.”

Focusing on IoT creates some interesting new requirements. “The IoT will require a completely different industry structure,” says Lucio Lanza, managing partner at Lanza techVentures. “Companies making new ‘Things’ as new citizens of the Internet will not necessarily have or be willing to acquire electronic design expertise.”

“One of the fascinating things about IoT is that a new class of design tradeoffs needs to be made,” adds Drew Wingard, chief technology officer at Sonics. “This is about where we collect the data and where we process the data.”

Wingard says people assume there is a hierarchy of systems. The overall large system has servers somewhere up in the cloud and edge devices somewhere at the bottom and some set of things in between. “Where are you going to do the processing? This has big implications on how much energy you’re going to spend at each node, how robust you’ll be in case the network link goes down, how big your battery has to be, what type of communication technology you have to use because of the data rates associated with it. It becomes a very large system optimization challenge.”
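A back-of-the-envelope model makes that trade-off concrete. The energy figures below are illustrative assumptions, not measurements; the sketch only compares processing a sensor frame at the edge node against shipping the raw frame up the hierarchy:

# Toy model of the "where do we process the data" trade-off.
# All energy figures are illustrative placeholders, not measured values.

RAW_FRAME_BITS = 640 * 480 * 8          # one uncompressed sensor frame
RESULT_BITS = 256                        # tiny classification result

ENERGY_PER_OP_NJ = 0.5                   # local compute: nJ per operation (assumed)
OPS_PER_BIT = 20                         # processing cost per input bit (assumed)
RADIO_NJ_PER_BIT = 100.0                 # radio energy per transmitted bit (assumed)

def process_locally():
    compute = RAW_FRAME_BITS * OPS_PER_BIT * ENERGY_PER_OP_NJ
    uplink = RESULT_BITS * RADIO_NJ_PER_BIT          # send only the result
    return compute + uplink                           # nanojoules

def process_in_cloud():
    return RAW_FRAME_BITS * RADIO_NJ_PER_BIT          # send the whole raw frame

local, cloud = process_locally(), process_in_cloud()
print(f"local: {local/1e6:.1f} mJ, cloud: {cloud/1e6:.1f} mJ")

Under these made-up numbers local processing wins on energy, but change the radio cost, the data rate or the compute load and the answer flips, which is exactly the large system optimization challenge Wingard describes, before battery size and link robustness even enter the picture.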

In many cases, this is an optimization challenge that spans the industry, not just a concern for the silicon designer or for groups within a single organization. Chip designers therefore have to worry about how they fit into this bigger picture, and that means they need better abstractions.

Today, we are still getting a handle on the abstractions within the chip. “Efforts to achieve continuous integration of hardware and software have created what the industry refers to as shift left – essentially early representations of the hardware to allow some level of software execution,” says Frank Schirrmeister, senior group director for product management and marketing at Cadence. “During a project flow, shifting left has created various options of development vehicles to bring up and execute software.”

Schirrmeister lists six ways this can be done today:

• Software Development Kits (SDKs), which do not model hardware details;
• Virtual Platforms, which are register-accurate and represent the functionality of the hardware accurately, but without timing;
• RTL simulation, which is technically a representation of the hardware but is not often used for software development, except for low-level drivers;
• Emulation, which is the first platform that allows execution in the MHz range, with the intent largely to verify and optimize the hardware;
• FPGA-based prototyping, which executes in the tens of MHz range, at times up to 100MHz, and is a great vehicle for software development on accurate hardware;
• Using the actual chip in development boards to develop software.

“All options except the last one use abstraction in one way or the other to enable software development as early as possible,” says Schirrmeister. “The tradeoffs are time of availability during development, speed, accuracy and incremental effort needed for development of the development vehicle.”
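One way to make that tradeoff concrete is as a simple scoring exercise over the six vehicles listed above. The scores below are qualitative placeholders (1 = poor, 5 = good), not benchmark data:

# Qualitative sketch of the six development vehicles and their tradeoffs.
# Scores are illustrative (1 = poor/late/slow, 5 = good/early/fast), not data.

vehicles = {
    #                   early availability, execution speed, hw accuracy, extra effort
    "SDK":              dict(avail=5, speed=5, accuracy=1, effort=1),
    "virtual platform": dict(avail=4, speed=4, accuracy=3, effort=3),
    "RTL simulation":   dict(avail=3, speed=1, accuracy=5, effort=1),
    "emulation":        dict(avail=3, speed=2, accuracy=5, effort=3),
    "FPGA prototype":   dict(avail=2, speed=3, accuracy=5, effort=4),
    "silicon board":    dict(avail=1, speed=5, accuracy=5, effort=1),
}

def shortlist(min_accuracy, min_speed):
    """Which vehicles are 'good enough' for a given software bring-up task?"""
    return [name for name, v in vehicles.items()
            if v["accuracy"] >= min_accuracy and v["speed"] >= min_speed]

# Low-level driver bring-up: needs hardware accuracy, tolerates low speed.
print(shortlist(min_accuracy=5, min_speed=1))
# Early application development: needs speed, tolerates abstraction.
print(shortlist(min_accuracy=1, min_speed=4))

The point of the exercise is only that no single row dominates; the right vehicle depends on when it is needed, how fast it must run, how accurate it must be and how much effort the team can spend building it.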

But there are still many unsolved problems. “The semiconductor disruption is happening,” says Lanza. “EDA is under this disruption and impacted by the laws of silicon. EDA needs to change what it’s designing now and in the future.” Lanza points to one specific area that he feels needs attention. “No man’s land is the area between software and the SoC. No one is addressing the needs in the middle or the design layer. It’s as modularized as the SoC itself, and it’s a wilderness, untouched and fertile ground.”

3D integration
Many in the industry think we are nearing the time when 3D integration will become a viable and cost-effective means of assembling hardware. This opens up the need for a whole new set of tools and optimizations that will impact all parts of the industry, including the IP industry. “The miniaturization of smart systems and their applications require the integration of a mixture of technologies (analog, digital, RF, MEMS, etc.) in small packages,” explains Stefano Pettazzi, senior applications engineer at Silvaco. “2D integration densities have reached extreme costs that put them out of reach of small design outfits, whereas 3D integration represents an option for innovative smart system designs, offering high packing densities, larger interconnect bandwidth, low latency, extensive modularity and heterogeneity.”

While 3D integration is still in the early stages of commercialization, TSMC has a stated long-term goal of using 3D super-chip integration to emulate the human brain. Jack Sun, vice president of R&D and chief technology officer for TSMC, said in 2013 that the foundry’s goal was to do this on just 20 watts. “To achieve that level of 3D super-chip integration will require a 200 times shrink over today,” which he estimates will be at least seven generations away, at about the 2nm node circa 2028.
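As a rough sanity check on those numbers (the per-generation factor here is an assumption, not TSMC's): if each process generation roughly doubles density, seven generations compound to

\[ 2^{7} = 128\times, \qquad \text{while} \qquad 200^{1/7} \approx 2.1, \]

so reaching a full 200 times shrink in seven generations implies a density gain of slightly more than 2x per generation.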

This assumes only incremental development. By tackling the problem in a different way, such as using neural networks, it is likely that this type of goal can be achieved in a much shorter timeframe. New architectures, new types of manufacturing, new memories and new design tools may make this type of advancement possible in 10 years rather than 15, and potentially even sooner.

“3D-manufacturing technologies combined with the new architectural paradigms enabled by them will overcome the bandwidth-latency barrier in very-high-count multicore chips,” predicts Pranav Ashar, chief technology officer at Real Intent.

But first there are some issues that have to be overcome. Pettazzi adds that “3D integration offers system-level designers increased performance and functionality gains once issues of power dissipation, temperatures, interconnectivity and reliability have been resolved.”

The very close spacing of components (μm rather than mm) has the potential to reduce interconnect length, but “this is difficult to do well in 3D, and the undesirable interactions are orders of magnitude worse than in an equivalent planar design,” warns Pettazzi. “The minimization of interconnect lengths is one of many competing constraints. Such competing orthogonal requirements require novel design tools with multi-target optimizers, which offer the user careful prioritization and weighting. Then co-optimization strategies can be used to find the best compromise.”

One possible approach is based on user-defined penalty functions representing different multi-physics phenomena and cost constraints (e.g., insertion loss, thermal spikes, cost of the block packages). This high-abstraction concept enables evaluation of user-defined criteria that drive block movements toward a 3D floorplanning solution.
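A minimal sketch of that idea, assuming a made-up block model and hand-picked weights (nothing here reflects Silvaco's actual tools): a weighted sum of penalty terms for wirelength, thermal crowding and package cost scores a candidate 3D placement, and a placer would move blocks to reduce that score.

# Hypothetical multi-objective penalty function for a 3D block placement,
# in the spirit of user-defined, weighted cost terms. The blocks, weights
# and penalty definitions are all illustrative assumptions.

import math

# Each block: (x, y, tier) position plus power in watts.
blocks = {
    "cpu":   dict(x=0.0, y=0.0, tier=0, power=1.2),
    "dram":  dict(x=0.1, y=0.2, tier=1, power=0.4),
    "radio": dict(x=1.5, y=0.3, tier=0, power=0.3),
}
nets = [("cpu", "dram"), ("cpu", "radio")]   # logical connections

def wirelength_penalty(p):
    total = 0.0
    for a, b in nets:
        dx, dy = p[a]["x"] - p[b]["x"], p[a]["y"] - p[b]["y"]
        dz = abs(p[a]["tier"] - p[b]["tier"]) * 0.05   # a TSV adds a small cost
        total += math.hypot(dx, dy) + dz
    return total

def thermal_penalty(p):
    # Penalize power stacked vertically in the same x/y neighborhood.
    total = 0.0
    names = list(p)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if p[a]["tier"] != p[b]["tier"] and \
               math.hypot(p[a]["x"] - p[b]["x"], p[a]["y"] - p[b]["y"]) < 0.5:
                total += p[a]["power"] * p[b]["power"]
    return total

def cost_penalty(p):
    return 1.0 + 0.2 * max(b["tier"] for b in p.values())   # more tiers, more cost

WEIGHTS = dict(wire=1.0, thermal=5.0, cost=0.5)   # user-chosen priorities

def score(p):
    return (WEIGHTS["wire"] * wirelength_penalty(p)
            + WEIGHTS["thermal"] * thermal_penalty(p)
            + WEIGHTS["cost"] * cost_penalty(p))

print(f"placement score: {score(blocks):.3f}")   # a placer would minimize this

A real flow would wrap something like score() inside an annealing or analytical optimizer; the point is simply that the user-visible knobs are the weights and the penalty definitions themselves.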

Can designers perform 3D integration using existing tools? “Existing EDA tools can be used for this today,” says Zvi Or-Bach, president and CEO of MonolithIC 3D. “If we assume one layer is logic and one is memory, existing EDA can do the job. EDA tools are an important part of any device, but the key is that you need a solution that doesn’t require changing the EDA. As the market builds up and applications start to expand, the EDA companies will expand support and things will get better.”

Even then, there are limits to what should be expected from EDA. “One day we may have tools that can find the optimum place for memory and the way to fit everything together, but even from a manufacturing point of view today, you may want to allocate one layer to logic and a different one to memory, where the process can be optimized for each,” explains Or-Bach. “By separating them into different layers, the EDA tools necessary for each are different. For the first few years of this type of 3D chip, we will see solutions that are somewhat restricted.”

One of the advantages of 3D is that not only can the layers be processed separately, but the pieces can start to look more like LEGO bricks that plug together physically. “In a 2D world, if I have a custom piece in the device, then the whole device has to be built just for me,” says Or-Bach. “I could buy the individual chips and design a PC board between them. 3D allows me to do the integration of the device where I can get chiplets and plug them together. I don’t design or build; I just buy and integrate.”

The next few years will see the need for many new EDA tools spanning the entire range of design, from the micro to the macro and from the perspectives of hardware, software and system. EDA may struggle to do all of this on its own, and we can expect additional mergers between EDA companies, software companies, system companies and maybe even manufacturing and assembly companies.



1 Comment

Dev Gupta says:

The Golden Rule remains : he / she who has the Gold makes the Rules. Ever since the revival of Apple and the rise of Google the Gold goes mostly to Software / Services Co.s, so they are now making the rules for everyone, including for hardcore technical co.s like Intel or Cisco. In terms of Barriers to Entry and then ROI, Hardware ( known only to a bunch of underpaid Engineers ) simply cannot compete against Software based services for the slacker English Major crowd ( the Mass Market ). Just look at the relative fame and fortunes of the two Steves who created Apple. All the VC s know this.
