Closing the loop with key performance indicators.
In "Software Modeling Goes Mainstream," Ed Sperling recently described how chipmakers are applying use case modeling techniques to better understand the interactions between software and hardware, and how those interactions impact system performance and energy efficiency.
As the software content of multicore SoCs grows, these interactions are becoming increasingly complex. For system designers and SoC architects, the challenge is to predict, as early as possible, how well their next-generation product will meet the demanding requirements of the application. In addition, system-level goals must be expressed in metrics that can be tracked throughout the development process. We call these metrics “Key Performance Indicators”, or KPI.
KPI enable system designers to specify system requirements in a clear and concise manner, from the perspective of the application use case (the workload). A few simple examples: a bound on the time to load and render a web page, or on the latency from pressing the camera shutter to a captured image.
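To make this concrete, here is a minimal sketch, in Python, of how KPI of this kind can be written down as machine-checkable deadlines. The use cases, budgets, and "measured" values are illustrative placeholders, not figures from any real product specification:

```python
# A minimal sketch of KPI captured as machine-checkable deadlines.
# Use cases, budgets, and measured values are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class KPI:
    use_case: str      # the application workload the KPI applies to
    metric: str        # what is measured for that use case
    budget_ms: float   # the system-level deadline, in milliseconds

    def is_met(self, measured_ms: float) -> bool:
        """True if a simulated or measured value meets the deadline."""
        return measured_ms <= self.budget_ms

kpis = [
    KPI("browser_page_load", "time to first render", budget_ms=500.0),
    KPI("camera_capture", "shutter press to captured image", budget_ms=250.0),
]

# Compare hypothetical simulation results against each KPI.
simulated_ms = {"browser_page_load": 430.0, "camera_capture": 310.0}
for kpi in kpis:
    value = simulated_ms[kpi.use_case]
    status = "met" if kpi.is_met(value) else "MISSED"
    print(f"{kpi.use_case}: {kpi.metric} = {value} ms "
          f"(budget {kpi.budget_ms} ms) -> {status}")
```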
Well-written KPI are clear expressions of system-level deadlines that must be met for the end product to deliver the desired user experience. The graphic below provides a more detailed illustration for a common mobile application processor use case and its KPI:
Are software modeling techniques available today that make critical use cases and their corresponding KPI fully executable for early analysis, without having to run the actual software? The point of today’s blog, of course, is to answer this question with an emphatic YES.
Application workload models, such as task graphs, capture the processing and communication requirements of the use case, enabling architecture simulation results to be compared to the target KPI in a highly productive and automated fashion. This enables system designers and SoC architects to close the loop on their specifications much earlier in the development cycle.
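As a rough illustration of what such a workload model contains, the sketch below represents a hypothetical page-load use case as a plain Python task graph with per-task processing costs and per-edge communication volumes. This is an assumed toy representation, not the Platform Architect task-graph format; even so, simple bounds such as total work and critical-path length already hint at how much a candidate architecture can help:

```python
# Toy task-graph workload model: hypothetical task names, processing costs,
# and communication volumes standing in for the processes of a real use case.

# task -> processing requirement, in ms on a reference core
tasks = {
    "parse_html": 8.0,
    "run_js":    12.0,
    "layout":     6.0,
    "rasterize":  9.0,
    "composite":  4.0,
    "net_fetch": 10.0,
    "decode_img": 7.0,
}

# (producer, consumer) -> communication volume, in kB
edges = {
    ("parse_html", "run_js"):     64,
    ("parse_html", "layout"):     32,
    ("run_js",     "layout"):     48,
    ("layout",     "rasterize"): 128,
    ("rasterize",  "composite"): 256,
    ("net_fetch",  "decode_img"): 200,
}

def critical_path(tasks, edges):
    """Longest dependency chain: a lower bound on the use case's runtime,
    no matter how many cores the candidate architecture provides."""
    memo = {}
    def finish(task):
        if task not in memo:
            preds = [src for (src, dst) in edges if dst == task]
            memo[task] = tasks[task] + max((finish(p) for p in preds), default=0.0)
        return memo[task]
    return max(finish(t) for t in tasks)

print(f"total work:    {sum(tasks.values()):.0f} ms")   # single-core runtime
print(f"critical path: {critical_path(tasks, edges):.0f} ms")
```

In this toy graph the total work (56 ms of processing) sits well above the critical path (39 ms), so adding processing resources can shorten the use case, but only down to the limit set by the dependencies.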
For example, the graphic below shows the results of simulating three different SoC architecture configurations in Synopsys Platform Architect. In each case the application workload model is the same, a task graph representation of the Chrome browser in Android use case:
The three charts show the CPU load imposed by the browser use case over time, where each color represents the contribution of one Android process in the browser application. As processing resources are added to the architecture, the system’s ability to execute the browser use case improves.
For two of the three scenarios the KPI deadline is clearly met. However, the speed-up is not simply a linear function of the number of cores: the trough in the CPU utilization indicates that the dependencies between the processes limit the available task-level parallelism of the browser application. This analysis reveals clues about where further system optimization is possible, to reduce power and cost while still achieving the overall KPI performance goal.
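The same effect can be reproduced with a deliberately simple, hypothetical scheduling sketch: a greedy scheduler runs a toy task graph on one, two, and four cores and checks the completion time against an assumed KPI deadline. None of the numbers below come from the Platform Architect simulation; they only illustrate why speed-up flattens once the critical path dominates:

```python
# Toy greedy scheduler; not an optimal scheduler and not a cycle-accurate
# architecture simulation. All task names, costs, and the KPI deadline are
# assumed for illustration.

# task -> (processing time in ms, tasks it depends on)
graph = {
    "parse_html": ( 8.0, []),
    "run_js":     (12.0, ["parse_html"]),
    "layout":     ( 6.0, ["parse_html", "run_js"]),
    "rasterize":  ( 9.0, ["layout"]),
    "composite":  ( 4.0, ["rasterize"]),
    "net_fetch":  (10.0, []),
    "decode_img": ( 7.0, ["net_fetch"]),
}

def makespan(graph, cores):
    """Greedily place each ready task on the earliest-free core and return
    the time at which the whole use case finishes."""
    deps = {t: set(d) for t, (_, d) in graph.items()}
    durations = {t: dur for t, (dur, _) in graph.items()}
    core_free = [0.0] * cores   # when each core next becomes idle
    done_at = {}                # task -> completion time
    pending = set(graph)
    while pending:
        # Any task whose predecessors have all been scheduled is ready.
        task = sorted(t for t in pending if deps[t] <= set(done_at))[0]
        pending.remove(task)
        core = min(range(cores), key=lambda c: core_free[c])
        # The task starts once its core is free and all its inputs exist.
        start = max(core_free[core],
                    max((done_at[d] for d in deps[task]), default=0.0))
        done_at[task] = start + durations[task]
        core_free[core] = done_at[task]
    return max(done_at.values())

DEADLINE_MS = 50.0   # assumed KPI deadline for this illustrative use case
for n in (1, 2, 4):
    t = makespan(graph, n)
    print(f"{n} core(s): finishes at {t:.0f} ms "
          f"({'KPI met' if t <= DEADLINE_MS else 'KPI missed'})")
```

With these made-up numbers the single-core configuration misses the assumed deadline while the two- and four-core configurations meet it, and no number of additional cores can push the completion time below the 39 ms critical path of the graph.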
After the SoC architecture is finalized, critical application use cases and their KPI can be tracked throughout the hardware and software development process to ensure system specifications are met. And, because task graphs are workload models and not the actual software, system design teams can more easily share them with their semiconductor suppliers as executable specifications of their use cases (and corresponding KPI), which benefits collaboration across the supply chain.
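Because a workload model together with its KPI is just structured data, sharing it as an executable specification can be as simple as serializing the task graph and the deadline it has to meet. The sketch below uses an ad hoc JSON layout invented for illustration, not a standard interchange format or Platform Architect file:

```python
# Hypothetical serialization of a workload model plus its KPI so it can be
# handed to a supplier; the schema is ad hoc, invented for this sketch.
import json

spec = {
    "use_case": "browser_page_load",
    "kpi": {"metric": "time to first render", "deadline_ms": 50.0},
    "tasks": {"parse_html": 8.0, "run_js": 12.0, "layout": 6.0},
    "dependencies": [["parse_html", "run_js"],
                     ["parse_html", "layout"],
                     ["run_js", "layout"]],
}

print(json.dumps(spec, indent=2))   # or write to a file to share downstream
```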
So set yourself a deadline! Use Key Performance Indicators (KPI), use case workload modeling, and early architecture analysis to close the loop on your next-generation architecture.