Simple Economics

Modeling complex computations can go a long way toward reducing overall design costs.


By Jon McDonald
I was watching one of the MIT OpenCourseWare videos the other day. It was one of the lectures on Computer Science. I believe it was Prof. Robert Gallager who made a statement that really got me thinking: “Increasingly, system computational complexity has little impact on cost because of chip technology.”

From a hardware perspective I initially had a bit of trouble with this. It seemed like a software-centric view that discounted the value of the hardware, but as I thought about it from a target-system perspective it started to make sense.

For systems with very large numbers of units, the non-recurring engineering (NRE) cost can become insignificant relative to the overall cost of the end device. If we think about an iPhone, with more than 100 million units shipped last year, an SoC development cost of $25 million to $50 million becomes a relatively minor component of the overall cost of the unit. In this context it becomes critically important to invest in careful design processes that ensure the complexity in the hardware is thoroughly understood.
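To put rough numbers on it: even at the high end of that range, $50 million of NRE amortized across 100 million units works out to about 50 cents per device.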

One way of improving our understanding of complex systems is to create abstract models that capture the important externally visible attributes of each element without exposing its internal complexity. This is exactly what we do in electronic system-level (ESL) modeling: we create an abstract representation of the system that we can interact with and reason about without wrestling with the internal intricacies of every device.
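As a minimal sketch of the idea (not any particular vendor's model), a hypothetical accelerator block can be reduced to a single transaction-level call that reports estimated latency, with pipeline and arbitration details deliberately hidden behind one exposed parameter:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical abstract model of a DMA-style accelerator block.
// Only the externally visible behavior is captured: a transfer request
// returns an estimated latency in cycles, while the internal pipeline,
// arbitration, and buffering details stay hidden.
class AbstractAccelerator {
public:
    explicit AbstractAccelerator(uint32_t bytes_per_cycle)
        : bytes_per_cycle_(bytes_per_cycle) {}

    // Transaction-level call: one function call models an entire transfer,
    // rather than simulating it cycle by cycle.
    uint64_t transfer(uint64_t num_bytes) const {
        const uint64_t setup_cycles = 20;  // assumed fixed command overhead
        return setup_cycles + (num_bytes + bytes_per_cycle_ - 1) / bytes_per_cycle_;
    }

private:
    uint32_t bytes_per_cycle_;  // the only tuning knob exposed to the system model
};

int main() {
    AbstractAccelerator accel(16);  // assume 16 bytes moved per cycle
    std::cout << "4 KB transfer: " << accel.transfer(4096) << " cycles\n";
}
```

The point of the abstraction is exactly this: the system-level user sees one parameter and one latency estimate, not the microarchitecture that produces them.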

For something with numbers in this range it is easy to justify the additional investment in making sure the complexity is well understood. But in markets with lower unit volumes, how do we minimize the cost impact of increasing system computational complexity? Through reuse of IP we can amortize the cost associated with that complexity over many target systems, but IP reuse comes with tradeoffs. What portions of a system can be built from general-purpose elements? What portions should be customized? What compromises must be made to arrive at an optimal system that meets its power and performance goals?

We see this challenge with many customers today. SoCs and IP that were developed initially for very-high-volume end systems are being leveraged in low-volume applications to provide incredible capabilities in relatively specialized application spaces. But for this to be effective, the users of the IP must have a way of quantifying how well the IP serves their target system. The challenge today in minimizing the cost of computational complexity is not one of designing the complex system, but one of using existing complex IP effectively and appropriately, and creating custom hardware only for the portions of the system that will really benefit from the investment.

To this end, much of the ESL tool use I see revolves around bringing the information contained in the complex hardware back up to the system-application level, which generally means the application software developer. The user of the IP cannot afford to be mired in the complexity of the IP being used, but they do need to know whether the IP is effectively meeting their needs. They also need tools and information to understand how to tune their use of the IP, and to augment it when necessary, to deliver a complete, optimized system.
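A simple illustration of that kind of evaluation, under assumed numbers (the per-frame payload, the cycle budget, and the latency formula are all hypothetical, mirroring the sketch above), is a sweep over the one exposed IP parameter to see which configurations meet a system-level budget:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical exploration loop an application developer might run against
// an abstract IP model: sweep the exposed parameter (bytes per cycle) and
// check whether the IP, as configured, meets a system-level cycle budget.
uint64_t transfer_cycles(uint64_t num_bytes, uint32_t bytes_per_cycle) {
    const uint64_t setup_cycles = 20;  // assumed fixed command overhead
    return setup_cycles + (num_bytes + bytes_per_cycle - 1) / bytes_per_cycle;
}

int main() {
    const uint64_t frame_bytes  = 2 * 1024 * 1024;  // assumed per-frame payload
    const uint64_t cycle_budget = 400000;           // assumed system-level budget
    const std::vector<uint32_t> widths = {4, 8, 16, 32};

    for (uint32_t w : widths) {
        const uint64_t cycles = transfer_cycles(frame_bytes, w);
        std::cout << w << " bytes/cycle: " << cycles << " cycles -> "
                  << (cycles <= cycle_budget ? "meets budget" : "misses budget")
                  << "\n";
    }
}
```

The value of the model is that this question gets answered before committing to a configuration in hardware, at the level where the application developer actually works.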

Ultimately, ESL modeling of our complex computation elements contributes significantly to minimizing the cost impact of that complexity on our systems.

—Jon McDonald is a technical marketing engineer for the design and creation business at Mentor Graphics.
