What Will 2015 Bring For System-On-Chip Verification?

Which predictions from past years are worth a second look, and what do they bode for the future?


Starting a new year, I always look back at predictions from years past to see how far off they were from reality and try to understand why. Rolling back 10 years, I landed on IEEE Spectrum's annual “Winners and Losers” issue, and three predictions stick out for me.

The first one is about how we consume media. Back in the January 2005 issue of IEEE Spectrum, Internet Protocol Television (IPTV) was touted as a winner in “The Battle for Broadband,” a battle then playing out in Switzerland between telephone and cable providers. Ten years on, the media landscape most certainly has fundamentally changed. My phone, Internet, cable, and home security are all consolidated, and the battle seems to have moved on to who controls my house (Google Nest or Comcast). Adobe Flash Player, Microsoft Silverlight, and Apple iOS deliver the Web-based unicast services through which I watch and listen to media, although I had to look that up just now, because to me they are all just features in my browser.

The second prediction was about how I access data while on the road. In “Viva Mesh Vegas,” a wireless mesh network that Cheetah Wireless Technology Inc. was installing in Las Vegas at the time was touted as a winner. Within the Las Vegas MeshNetworks system, average transmission speeds were expected to range from 500 kbps to 1.5 Mbps, with bursts of up to 6 Mbps possible. I don’t think that one worked out. According to this Economist article, initial tests showed promise, but the density of equipment needed to make seamless, ubiquitous outdoor Wi-Fi work in an average city was prohibitively expensive with the technology available at the time. Mobile broadband seems to have won. Today I am happily streaming via LTE nearly everywhere I go.

The third relevant prediction was about multicore processing, called “Sun’s Big Splash.” This prediction was about the UltraSPARC T1, codenamed Niagara, touted as a winner and later announced in November 2005. On March 21, 2006, Sun released the source code of the T1 IP core under the GNU General Public License. The full OpenSPARC T1 system consists of eight cores, each capable of executing four threads concurrently, for a total of 32 threads. Each core executes instructions in order, and its logic is split across six pipeline stages. To me, the commercial aspects of this core are less relevant than what it symbolizes: it sat right at the edge of the multicore era, which spawned new programming models and software standards and fundamentally changed the design of systems, chips, and associated software. As a result, in 2005 I switched to a multicore-related startup and later co-authored articles such as “Software Standards for the Multicore Era.”
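As a rough illustration of the programming-model shift that those 32 hardware threads stood for, here is a minimal C++11 sketch of my own (not from the original articles) that spreads a computation across however many hardware threads the host reports:

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // On an OpenSPARC T1 this would report 32 (8 cores x 4 threads each);
    // elsewhere it simply reflects the host's available hardware threads.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;  // the call may return 0 if the count is unknown

    const long long total = 100000000;
    std::vector<long long> partial(n, 0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&partial, t, n, total] {
            // Each software thread sums its own interleaved slice of the range.
            for (long long i = t; i < total; i += n)
                partial[t] += i;
        });
    }
    for (auto& w : workers) w.join();

    long long sum = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "threads: " << n << ", sum: " << sum << "\n";
    return 0;
}
```

Trivial as it is, code like this only pays off when the workload is decomposed to keep all hardware threads busy, which is exactly the kind of thinking the multicore era forced onto mainstream software teams.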

So what about my own predictions? Five years ago, in “2010 Will Change the Balance in Verification,” I wrote about software becoming key to verification. We had some trailblazing customers back then who were applying verification re-use quite efficiently. At the time I stated, “First, users start developing verification scenarios that represent the device under test (DUT), even before RTL becomes available, by developing testbenches using C or C++ on virtual platforms representing the design in development. Once RTL becomes available, these tests can be refined and re-used both in RTL simulation and in execution of RTL on FPGA prototypes and/or emulators. And even after silicon has become available, the same software-based tests can be executed to verify the SoC when the actual chip has been manufactured.”
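As a rough sketch of what such a reusable software-based test can look like (my own illustration; the register names, addresses, and access-layer functions below are hypothetical), the scenario talks to the DUT only through a thin access layer, so the same C/C++ test can run on a virtual platform, in RTL simulation, on an emulator or FPGA prototype, and finally on silicon, with only the access layer swapped per engine:

```cpp
#include <cstdint>
#include <cstdio>
#include <map>

// Hypothetical DUT registers, for illustration only.
static const uint32_t DMA_CTRL   = 0x40000000u;
static const uint32_t DMA_STATUS = 0x40000004u;

// Thin platform access layer. In a real flow this is the only part that
// changes per engine: TLM transactions on a virtual platform, DPI/backdoor
// access in RTL simulation, memory-mapped I/O on an emulator, FPGA
// prototype, or the manufactured chip. Here it is stubbed with an in-memory
// register map (and a fake DMA that completes instantly) so the sketch
// compiles and runs on its own.
static std::map<uint32_t, uint32_t> fake_regs;

static uint32_t reg_read(uint32_t addr) { return fake_regs[addr]; }

static void reg_write(uint32_t addr, uint32_t value) {
    fake_regs[addr] = value;
    if (addr == DMA_CTRL && (value & 0x1u))
        fake_regs[DMA_STATUS] = 0x1u;  // stub: done bit set, error bit clear
}

// The test scenario itself never changes across engines, which is what makes
// it reusable from virtual-platform bring-up all the way to silicon.
static bool dma_smoke_test() {
    reg_write(DMA_CTRL, 0x1u);                   // kick off a transfer
    while ((reg_read(DMA_STATUS) & 0x1u) == 0u)  // poll the done bit
        ;
    return (reg_read(DMA_STATUS) & 0x2u) == 0u;  // error bit must be clear
}

int main() {
    std::printf("dma_smoke_test: %s\n", dma_smoke_test() ? "PASS" : "FAIL");
    return 0;
}
```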

That prediction has most certainly come true. The drivers are clear and directly connected to the media usage, connectivity, and need for processing power discussed in the IEEE Spectrum predictions above. These needs have changed chip design quite a bit and have driven the growing importance of embedded software (we have all seen the cost charts for developing complex software content), combined with staggering growth in the number of processor cores and re-used IP blocks in a system on chip. According to Semico, the average of about 50 IP blocks per design in 2010 was expected to grow to an average of 120 blocks last year, with re-use growing from about 55% to 75%. The graph below summarizes some of these challenges for a generic mobile design.

[Figure: System-on-chip design and verification challenges for a generic mobile design]

To address these challenges, Cadence announced the System Development Suite in 2011 as a set of connected engines spanning TLM to RTL and simulation to hardware-assisted execution. Mentor and Synopsys followed three years later, in 2014, with the Enterprise Verification Platform and the Verification Continuum, respectively.

So verification re-use as predicted in my 2010 article is definitely catching on, and we are now well on the way to more automation. Accellera has formed a working group to define how to keep stimulus portable across engines, specifically to “create a standard in the area of enabling verification stimulus to be captured in such a manner that enables stimulus generation automation, and enables the same specification to be reused in multiple verification languages and contexts.” Our announcement of the Perspec System Verifier late last year (see also “Top-Down SoC Verification” and “The Next Big Shift In Verification”) plays right into that domain.

Predictions from past years aside, 2015 promises to be an incredibly interesting year for verification again!


