HW Vs. SW: Who’s Leading Whom?

Both sides have some specific problems and have developed solutions suited to their needs. But as hardware and software are forced closer together, who has the best ideas?


In the past, technologies developed in the software world languished until they were taken up by the hardware community, where they were refined, polished, and fully integrated into the hardware development and verification flow. Lint and formal verification are examples. That was followed by attempts to migrate methodologies, such as object-oriented programming, which is the basis for most verification languages and methodologies in use today, including SystemVerilog and UVM. SystemC was created in an attempt to provide a unified hardware/software language. That goal may not have been accomplished, but SystemC is seeing widespread adoption in virtual prototypes.
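To make the SystemC point concrete, the sketch below shows the basic idea behind virtual prototypes: hardware structure and clocked behavior expressed in ordinary C++. This is a minimal illustration rather than a production model, and the module and signal names are hypothetical.

```cpp
// Minimal SystemC sketch: a clocked 8-bit counter modeled in C++.
// Module and signal names are hypothetical, for illustration only.
#include <systemc.h>

SC_MODULE(Counter) {
    sc_in<bool>        clk;
    sc_in<bool>        reset;
    sc_out<sc_uint<8>> count;

    void tick() {
        if (reset.read()) value = 0;   // synchronous reset
        else              ++value;     // count up on each rising edge
        count.write(value);
    }

    SC_CTOR(Counter) : value(0) {
        SC_METHOD(tick);
        sensitive << clk.pos();        // triggered on the clock edge
    }

  private:
    sc_uint<8> value;
};

int sc_main(int, char*[]) {
    sc_clock              clk("clk", 10, SC_NS);
    sc_signal<bool>       reset;
    sc_signal<sc_uint<8>> count;

    Counter counter("counter");
    counter.clk(clk);
    counter.reset(reset);
    counter.count(count);

    reset = true;  sc_start(20, SC_NS);   // hold reset for two cycles
    reset = false; sc_start(50, SC_NS);   // then let it count
    return 0;
}
```

A model like this compiles with an ordinary C++ toolchain and can be driven by software long before RTL exists, which is what makes it useful in virtual prototyping.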

Not all of these technology migrations have been as successful as originally envisioned, but they still have had a significant impact. In the past couple of years, a growing chorus of people has been seriously looking at migrating software development methodologies to hardware. Agile development, continuous integration, and pair programming are examples.

There is a growing interdependence between hardware and software. It has become difficult, for example, to talk about power management without talking about the interface between the capabilities provided in hardware and the policies defined in software.

Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence, reminisces about similar attempts 20 years ago. “I published a paper in 1994 titled, Transferring software engineering methods to VLSI-design: a statistical approach. The so-called software crisis started in 1968, when projects ran over time and budget, resulting in inefficient software of low quality that did not meet requirements. I talked to Thomas DeMarco and to Fred Brooks, author of “The Mythical Man-Month,” and their response was stunning: ‘Please don’t mess with hardware methods. Hardware is perfect, as chips are going out without bugs. Software is full of bugs. Software should learn from hardware.’”

There are many people in the hardware world who are scared by the concept of adopting more software engineering practices. “It was a huge wakeup call to Chrysler when the Jeep got hacked, which resulted in the recall of 1.4 million vehicles,” says Andreas Kuehlmann, senior VP and GM of the Software Integrity Group at Synopsys. “The security team is scared because they know they have problems with their software. Much of this software is what they get from their suppliers. The supply chain has no software quality control. No security control. All of the things that have been put in place on the hardware side are not there for software.”

It is not clear that the hardware side actually does have full control over the security aspects, but what about productivity? “The 2013 and 2015 IBS reports showed the skyrocketing costs of software development,” points out Harry Foster, chief scientist at Mentor Graphics. “Moving down from 90nm to 16/14nm, each node showed a 17% increase in required software engineers. The software world is ripe for productivity improvements. Hardware has done a remarkable job. Looking at the number of transistors per engineer from 1985 to the present, we have grown five orders of magnitude. Clearly we are being productive on the design side. We have not seen a huge rise in the number of design engineers. The challenge is in the software.”

Kuehlmann is in complete agreement. “Most software development is so primitive with manual development, manual testing, almost no test automation. It is an immature process.” And yet the industry appears to be complacent about these problems.

The interface problem
There are increasing problems at the interfaces. This can be fixed either by having better specifications, such that the teams can work independently, or by finding ways to improve communication between teams so problems can be found and resolved earlier. Both of these require significant changes to the existing design and development practices on both sides of the fence.

“Specification is a problem and we have an opportunity in this area,” notes Foster. “It often comes down to something very simple and is caused by not having an executable specification between the domains. These bugs escape until they cause a lot of pain. We lack any formal specification between the domains.”

[Figure omitted. Source: Mentor Graphics]
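What the executable specification Foster describes might look like, in its simplest form, is a single artifact that both teams build against. The C++ sketch below is one hypothetical way to do that for the kind of power-management interface mentioned earlier: a shared header of register definitions that the hardware model and the driver both include, so disagreements surface at compile time instead of at integration. All names here are invented for illustration.

```cpp
// Sketch of a shared, executable interface spec. Names are hypothetical;
// the point is that hardware and software compile against the same
// definitions, so a mismatch breaks the build, not the bring-up.
#include <cstdint>
#include <cstdio>

namespace power_ctrl {

// Register offsets, usable by both the hardware model and the driver.
constexpr std::uint32_t CTRL_OFFSET   = 0x00;
constexpr std::uint32_t STATUS_OFFSET = 0x04;

// Bit fields within CTRL.
constexpr std::uint32_t CTRL_ENABLE = 1u << 0;
constexpr std::uint32_t CTRL_SLEEP  = 1u << 1;

// A rule from the spec, checked at compile time on both sides.
static_assert((CTRL_ENABLE & CTRL_SLEEP) == 0,
              "enable and sleep must occupy distinct bits");

} // namespace power_ctrl

int main() {
    // Hypothetical driver-side use of the shared definitions.
    std::printf("CTRL at +0x%02X, enable mask 0x%X\n",
                (unsigned)power_ctrl::CTRL_OFFSET,
                (unsigned)power_ctrl::CTRL_ENABLE);
    return 0;
}
```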

Is this an area where software leads? “I have been a proponent of using requirements or specification management tools for hardware design for some time,” says Randy Smith, vice president of marketing for Sonics. “They seem to be adopted very slowly by hardware design teams. The benefit of using these tools is that you capture the reasons behind the specification: why you made certain decisions, so that if you are later asked to change the spec, you know what the implications are. It also greatly aids in developing your test methodology, because you can test against the requirements, which say what was intended, rather than against what someone happened to implement.”

But should a spec be created and frozen at the beginning of a project? “Many of us have been burned enough times by specification problems that we understand the decisions we make early in development are guesses, at best,” says Neil Johnson, principal consultant at XtremeEDA. “What we haven’t figured out is that many of these specification problems are a result of making decisions before we have the right information. Don’t make all the decisions up front. Wait until you have the right information, then put your stake in the ground.”

Adds Johnson: “The hardware-software divide, particularly in semiconductor companies, is something everyone understands as a productivity barrier. And yet most are still stuck on the idea that integrating teams is somehow less efficient. Maybe it comes down to a power struggle. Organizations just don’t have the gumption to integrate software and hardware teams, even though they know it’s the right thing to do.”

Change is happening, but slowly. “There is a growing awareness of the practices being adopted in different teams,” points out Ranjit Adhikary, director of marketing at ClioSoft. “There are attempts to use the best practices from each team, adapted to meet each team’s requirements.”

And there is a growing awareness that both sides have something to learn from the other.

“It has become clear that for some hardware specializations, software techniques provide significant benefit,” says David Kelf, vice president of marketing for OneSpin Solutions. “For example, object-oriented programming is being applied in UVM testbenches, which in turn look more and more like software protocol stacks. Untimed algorithm analysis in SystemC is being verified within software IDEs. And Agile incremental methodologies are being used in design.”

Adopting software development practices
What are the motivations for integrated hardware and software teams? Smith sees plenty. “We need to be more productive on hardware design in general, and if the tools and methodologies become more similar, it might be possible to have more flexible engineering resources that could work on either problem. You could, for example, move software engineers to work on the hardware problem, at least where you are working at a high level.”

While that might sound like a stretch, Smith disagrees. “We are doing language-based design for hardware, and have been doing so for a long time. To that extent, it looks a lot like software design. There are lots of shared paradigms. Synthesis, for example, looks like a compiler. It makes sense that we would find as many ways as possible to share techniques between the two.”

Given that similarity, there has been growing interest in using Agile development within the hardware world. “Agile is being used in several successful hardware projects,” says Foster. “There is one mil/aero company, which traditionally has had a very rigid development process, that has moved to Agile development, and in particular to scrum.”

Agile is a different way of managing your engineering resources that “seems to be more productive in terms of fewer errors and fewer resources needed to get implementations done,” explains Smith. “It also tends to make your resource allocation easier, because people get more cross-training in that type of methodology.”

Johnson notes that the most common question he gets from upper-level managers is how Agile can improve software/hardware co-development. “The focus tends to be on development frameworks, scrum being the most popular. That’s an enormous step that pulls together the entire team and has them working in lock step. What we’re overlooking are the more basic communication-related improvements.”

Johnson provides some examples, including structuring teams so that software and hardware developers are part of the same reporting chain to help avoid resourcing conflicts. “That’s a fundamental change that has nothing to do with Agile or scrum. Similarly, having hardware and software teams co-located with leadership creates opportunities to share near-term goals.”

Kuehlmann recognizes there are differences between hardware and software. “You cannot develop hardware in the same way as software because you cannot tape out every time. That doesn’t mean that you cannot have scrums and sprints. But if you read the Agile manifesto, the customer is in the middle of a highly incremental process, and that is different in hardware.”

Plenty of skepticism remains. “Most hardware engineers are stuck on the idea that hardware and software are fundamentally different and practices don’t translate,” says Johnson. “I hear that repeatedly when talking to hardware developers about Agile development. The assumption continues to be that Agile can’t work because it comes from software. It’s only once we talk about specific practices, such as test-driven development and pair programming, that people start to realize the potential and let their guard down.”
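As a concrete illustration of the first of those practices, here is a minimal test-driven sketch in C++. The scenario is hypothetical and not drawn from any project mentioned above: the test is written first and encodes the specification (adjacent Gray codes must differ by exactly one bit), and the encoder is then written to make it pass.

```cpp
// Test-driven development sketch: the test in main() was "written first"
// and encodes the spec; to_gray() exists only to make it pass.
#include <cassert>
#include <cstdint>

std::uint32_t to_gray(std::uint32_t n) {
    return n ^ (n >> 1);  // standard binary-to-Gray conversion
}

int main() {
    for (std::uint32_t i = 0; i < 1000; ++i) {
        std::uint32_t diff = to_gray(i) ^ to_gray(i + 1);
        // Exactly one bit may change between adjacent codes.
        assert(diff != 0 && (diff & (diff - 1)) == 0);
    }
    return 0;
}
```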

Adds Kuehlmann: “You form little cross-functional teams, you have 15-minute stand-up meetings, you have sprints of two or four weeks. This is an intelligent organizational principle, but it’s not a revolution. It is just using funny words for everything. I talk to customers about Agile and they think it is an overselling of the methodology. It is just good principles of operation for use in a large organization.”

And perhaps it requires a limitation of the aspects of hardware that are considered. “Agile might work well for RTL implementation, but in my mind it falls apart for architectural design,” says Foster. “It is important to think about trains of products, where the architecture has to be defined in such a way that it does not need to be re-architected when new features or capabilities are added. Agile is not good at this. It is not appropriate for hardware architectural design.”

This is one place where the disagreement starts. “On the physical design side, we’ve long been able to deal with ECOs, which are a little bit like an Agile methodology in that they allow small changes to be quickly absorbed throughout the system,” explains Smith. “We’ve not really had an easy way to do that in architecture or system-level design. Any change seems to ripple through everything and is not so easily absorbed. Agile applied to hardware is much more important at the system design and architectural level.”

One area where most seem to agree is that Agile is very applicable to verification. “Agile works quite well for test development,” says Foster. “Verification is a software project in reality. The verification team often has to test things that are not well understood in terms of features or use cases, so an iterative process makes perfect sense.”

Recently, the concept of pair programming has become popular in software development. “By having two programmers write software side by side at one computer, managers are seeing significant benefits,” explains Vigyan Singhal, president and CEO of Oski Technology. “Compared with traditional practice, this provides improved design quality, reduced defects, reduced staffing risk, enhanced technical skills, and improved team communication.”

Johnson agrees. “The mindset with successful Agile teams is more tightly linked to productivity, as in ‘what are we producing right now that we can deliver to our customers before we sign off for the day?’ Efficiency is secondary, even sacrificed, for the sake of productivity. A specific example is pair programming. Pairing is more productive. I know of teams that pair on every line of production code, and they do it because pairing produces high-quality code faster. Now try to imagine an ASIC where every line is written by a pair. ‘No way! Not efficient!’”

Singhal points to one way this can be applied to hardware development. “Each hardware designer will be paired with a formal verification engineer. The new design scheme benefits both hardware design and formal verification in many ways. Formal verification has always been a white-box testing method, where a formal verification engineer needs to know the design inside out to do effective formal verification. By pairing the formal engineer with the designer, he or she will get to know as much about the design as the hardware designer. This knowledge will facilitate the process of writing end-to-end checkers to verify the complete design functionality.”
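In practice, checkers like the ones Singhal describes are usually written as SystemVerilog assertions for a formal tool. The C++ sketch below shows the same end-to-end structure in simulation style, under the hypothetical assumption that the block being verified behaves as a FIFO: a reference model against which every design output is compared.

```cpp
// End-to-end checker sketch: every word that enters the design must
// leave unmodified and in order. The reference model is a perfect FIFO.
#include <cassert>
#include <cstdint>
#include <queue>

class FifoChecker {
public:
    // Called when the design accepts a word at its input.
    void on_push(std::uint32_t data) { expected_.push(data); }

    // Called when the design presents a word at its output.
    void on_pop(std::uint32_t data) {
        assert(!expected_.empty() && "output with no matching input");
        assert(expected_.front() == data && "data corrupted or reordered");
        expected_.pop();
    }

private:
    std::queue<std::uint32_t> expected_;  // the reference model
};

int main() {
    FifoChecker chk;
    chk.on_push(0xDEAD);   // design accepts a word...
    chk.on_pop(0xDEAD);    // ...and must emit the same word, in order
    return 0;
}
```

The checker knows nothing about the design's internal pipelines, which is the "end-to-end" property: it verifies complete functionality from input to output rather than individual internal steps.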

Continuous integration is another approach being used. “One of its advantages is that it enables problems to be located and debugged earlier,” says Foster. “Later in the process, there are too many stakeholders involved in the triage process. Continuous integration solves this by pointing to the last person who checked in a change and having them figure out what the problem is.”

Continuous integration has become a driver for the EDA industry. “The ‘shift left’ is all about continuous integration of hardware and software,” says Schirrmeister. “That is driving interest in hardware/software co-debug. It used to be that software could not be brought up until silicon came back, but it is now a race to bring out early representations of the hardware on which software can be verified sooner. That is probably the key change. Some elements of the design are so intermingled between hardware and software that they are part of hardware sign-off.”

Conclusion
There are many smart people on both sides of the fence, and at the end of the day, the success of the product depends on both hardware and software. Successful companies need to find ways to bring the teams closer together and to find the best development methods independent of where they were originally developed.

“The way we get chips right is by not having one magic trick,” says Kuehlmann. “We use a suite of verification tools that are applied from the polygon level, to electrical, to RTL, to system level. All of them work in concert. It is a patchwork. When we find a hole, we put another patch on.”

And with tightly integrated software playing an increasing role in the success of the system, we may be in need of more patches.



6 comments

John Swan says:

I believe the reason Andreas Kuehlmann says that Agile is not good for HW architectural design is that he is thinking at the RT level.

Brian Bailey says:

It was Harry who thought it was not suitable at the architecture level, and this was because he is thinking of system-level design as being the provision and connection of resources, which is not feature-oriented but more capacity-oriented.

John Swan says:

Also, I was intrigued by the concept of ‘pair programming’. Never thought about that one before, and now you have piqued my interest.

Kev says:

I would view it that hardware capability leads software. SW is a bit of a “boat anchor” since computer scientists have almost completely failed to come up with a way to make parallel programming easy. Digital HW designers have been thinking parallel for ages, but the RTL methodology is just a bad abstraction level to work at so nobody uses the tools (e.g. VHDL) for anything else – you have to be pretty masochistic to use SystemC too, and that’s about as close as Kuehlmann gets to the problem given Coverity’s product only works on C/C++.

Brian Bailey says:

I would actually say that it is one step more complicated than parallel programming because most of these systems are heterogeneous in nature. Most of the time, they are using shared memory as an infinitely large global memory buffer so they don’t have to think about variable scope, they don’t have to think about restricting their usage of “malloc or new” or whatever their language equivalent is and never have to declare if anyone else will ever be able to access it. So we waste so much on-chip space and complexity where in reality most programs only need a small global memory space and the rest is private and does not need to be kept coherent.

Kev says:

Then there’s analog – the truly parallel computing!
