Using cross-analytics between bug closure rates and source code churn to boost predictability.
Big data is a term that has been around for decades. It was initially defined as data sets so large that they exceed the ability of commonly used software tools to capture, manage, and process within a tolerable amount of time. The only constant in big data's size over this period is that it has been a moving target, driven by improvements in parallel processing power and ever-cheaper storage capacity. Today most of the industry uses the 3V model to frame the challenges and opportunities of big data along three dimensions: volume, velocity, and variety. More recently the term has expanded to cover applications such as machine learning and the analysis of digital footprints. The list of applications is endless, but the process is the same: capture, process, and analyze. Why shouldn't this technology help improve the efficiency of your verification process and predict your next chip sign-off?
Today's verification environments must be collaborative due to the size of devices, geographically dispersed teams, and pressure on time to market. This requires the efficient use of every cycle and careful management of hardware, software, and personnel resources.
This paper will define the typical verification environment and the data that it often leaves uncaptured over the duration of a project. It will show how the capture, process, and analyze flow can be applied to improve the predictability and efficiency of the whole verification process. This requires a flexible infrastructure that allows data to be extracted from the multiple systems that make up the typical verification flow. There must also be a central repository that stores the data in a common way, so that it can be kept clean and relevant not only over the project's duration but also into the future, allowing comparisons and predictions across other and new projects.
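As a sketch of what such a repository could look like, the snippet below stores every metric in one flat table keyed by project, snapshot date, and metric name, so that coverage, bug-tracking, and revision-control data all land in the same common format. The schema, metric names, and choice of SQLite are illustrative assumptions, not the infrastructure described in the paper.

```python
# Minimal sketch of a central metrics repository (assumptions: a flat
# common schema and SQLite as the store; adapt to your own flow).
import sqlite3
from datetime import date

def open_repository(path="verification_metrics.db"):
    """Create or open the shared repository with one common table."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS metrics (
            project  TEXT,   -- design or project name
            snapshot TEXT,   -- ISO date of the nightly/weekly snapshot
            metric   TEXT,   -- e.g. 'functional_coverage', 'lines_churned'
            value    REAL
        )""")
    return db

def record_metric(db, project, metric, value, snapshot=None):
    """Store one data point in the common format, whatever tool produced it."""
    db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
               (project, snapshot or date.today().isoformat(), metric, value))
    db.commit()

# Example: data points gathered from coverage, bug tracking, and revision control
db = open_repository()
record_metric(db, "chipA", "functional_coverage", 73.4)   # percent
record_metric(db, "chipA", "bugs_closed", 12)             # this week
record_metric(db, "chipA", "lines_churned", 1250)         # this week
```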
This paper will explain how new web-ready technologies can be applied to the hardware development flow to provide a plug-and-play infrastructure for collaboration. It will also highlight some of the types of analysis and insight that become possible by combining common coverage metrics with the data that is normally lost, and by examining the inter-relationships between those metrics.
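One example of such cross-analysis, in the spirit of the cross-analytics named in the title, is correlating source code churn with bug closure rate. The sketch below assumes weekly samples of both metrics and uses purely illustrative numbers.

```python
# Hedged sketch of cross-metric analysis: correlating two metrics that
# normally live in separate systems. All numbers are illustrative only.
import numpy as np

lines_churned = np.array([1250, 980, 2100, 1750, 600, 300, 150], dtype=float)
bugs_closed   = np.array([  12,  15,    8,    9,  22,  25,  30], dtype=float)

# Pearson correlation: a strongly negative value would suggest that weeks
# of heavy code churn coincide with a slowdown in bug closure.
r = np.corrcoef(lines_churned, bugs_closed)[0, 1]
print(f"churn vs. bug closure correlation: {r:+.2f}")
```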
The ability to see gathered metrics over time can provide great insight into the process. Historical coverage data trended over time can, on its own, indicate how much more time is needed to complete sign-off. Being able to plot several of these individual metrics together on the same graph also surfaces information that is otherwise lost.
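As a simple illustration of trending a single metric, the sketch below fits a straight line to hypothetical weekly coverage snapshots and extrapolates it to a sign-off threshold. Real closure curves usually flatten near the end, so a linear fit gives an optimistic estimate; the data and the 95% threshold are assumptions, not results from the paper.

```python
# Hedged sketch: extrapolating a coverage trend to estimate sign-off.
# Weekly snapshots and the 95% threshold below are illustrative assumptions.
import numpy as np

weeks    = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
coverage = np.array([41.0, 52.5, 60.0, 66.5, 71.0, 74.5, 77.0])  # percent

slope, intercept = np.polyfit(weeks, coverage, 1)  # least-squares straight line
target = 95.0                                      # assumed sign-off threshold
weeks_to_target = (target - intercept) / slope

print(f"coverage is growing by about {slope:.1f}% per week")
print(f"linear projection reaches {target:.0f}% around week {weeks_to_target:.1f}")
```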