Trends In FPGA Effectiveness: The 2018 Wilson Research Group Functional Verification Study

Last year, 84% of FPGA projects went into production with non-trivial bug escapes.

We all know that knowledge is power. The adage holds true even in the prosaic case of making informed decisions backed by good data.

Our hope and motivation in conducting the worldwide Wilson Research Group Functional Verification Study is to provide our community with the information needed to make the best methodology and tool choices for their business and design goals. We at Mentor, a Siemens Business, also use the same information to guide our research and development toward creating and offering the solutions you need.

We will present the key findings from the 2018 Wilson Research Group Functional Verification Study in a series of four articles. The first two will focus on FPGA trends; the third and fourth on the IC/ASIC market. We will begin with an overview of the study itself and the procedures we followed to maintain the integrity, validity, and inclusiveness of the survey.

The 2018 Study: Sampling and bias mitigation
The 2018 Wilson Research Group studies are worldwide surveys. The survey results are compiled both globally and regionally for analysis; in this article series, we present the global trends. The results presented here are a continuation of a series of industry studies on functional verification that includes the previously published 2014 and 2016 Wilson Research Group Functional Verification Studies. Each of these studies was modeled after the 2002 and 2004 Collett International Research, Inc. studies, which focused on the IC/ASIC market. While we began studying the FPGA market in 2012, we waited until we had sufficient multi-year data points to identify verification trends before drawing any significant conclusions.

For the purpose of our study, the sampling frame was constructed from eight industry lists that we acquired. This enabled us to cover all regions of the world and all relevant electronics industry market segments. It is important to note that we did not include our own account team’s customer list in the sampling frame; this was a deliberate step to prevent vendor bias in the final results. While we architected the study’s questions and then compiled and analyzed the final results, we commissioned Wilson Research Group to execute the study. After cleaning the data to remove inconsistent, incomplete, or random responses, the final sample consisted of 1205 eligible and qualified participants (i.e., n = 1205).

Figure 1 compares the percentage of 2016 and 2018 study participants (i.e., design projects) by targeted implementation for both IC/ASIC and FPGA projects. It is important to note that targeted implementation does not represent silicon volume in the global semiconductor market, since a single project could account for a significant portion of semiconductor market revenue. However, the data suggest that the number of projects creating designs targeted at high-performance SoC programmable FPGAs is increasing, which is one indication of growing FPGA complexity.


Figure 1. Study participants by targeted implementation. [Source: Wilson Research Group and Mentor, A Siemens Business, 2018 Functional Verification Study. © Mentor Graphics Corporation.]

Since all survey-based studies are subject to sampling error, we attempt to quantify this error in probabilistic terms by calculating a confidence interval. For our study, we determined the overall margin of error to be ±4% using a 95% confidence interval. In other words, if we were to draw repeated samples from the population, 95% of the resulting estimates would fall within ±4% of the true value, and only 5% would fall outside that range.
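For readers who want to check the arithmetic, here is a minimal sketch of the standard margin-of-error calculation for a sampled proportion, assuming simple random sampling and worst-case variance (p = 0.5); it is an illustration, not the exact procedure Wilson Research Group used. Under these assumptions, roughly 600 responses are enough for a ±4% margin at 95% confidence, so the study’s n = 1205 comfortably supports the reported bound.

```python
# Margin-of-error sanity check for a survey proportion.
# Assumptions: simple random sampling, worst-case variance p = 0.5.
import math

Z_95 = 1.96  # z-score for a 95% confidence interval

def margin_of_error(n, p=0.5, z=Z_95):
    """Half-width of the confidence interval for a sampled proportion."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample_size(moe, p=0.5, z=Z_95):
    """Smallest n that achieves the requested margin of error."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(f"n needed for +/-4% at 95% confidence: {required_sample_size(0.04)}")  # 601
print(f"margin of error at n = 1205: +/-{margin_of_error(1205):.1%}")          # ~2.8%
```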

The integrity of any survey depends on unbiased data gathering techniques. So it’s important to briefly discuss the issue of study bias and what we did to address these concerns.

First, we have noticed a significant increase over the past two studies in the number of projects working on designs of less than 500K gates. These smaller designs are often associated with sensor applications for IoT and automotive. An increase in smaller designs can skew the findings in interesting ways, because project teams working on smaller designs are often less mature in their functional verification processes. As a result, adoption of a technique might appear to have leveled off or reversed when, in reality, the growing number of smaller designs participating in the study is influencing the final results. We will point out these biases in later articles.

Next, to compare trends between multiple studies, it is critical that the makeup of each study be balanced and consistent with previous studies. Upon launching the 2018 study, we initially received a poor response rate from Japan; that is, the geographical makeup of the study was initially out of balance with our previous study and not what we would expect for design projects. We addressed this problem by sending multiple reminders to the Japan study pool participants, and ultimately received a response rate consistent with our previous study.

An out-of-balance study will introduce trend bias across studies, not only in terms of regional participation but also by job title. For example, a study concerning women’s health issues made up of 75 percent men would yield different findings than the same study made up of 75 percent women. Hence, in addition to regional participation, we carefully monitored job title participation to ensure it was balanced with our previous studies.

When architecting a study, three main concerns must be addressed to ensure valid results: sample validity bias, non-response bias, and stakeholder bias. We took several rigorous steps to minimize each of these concerns.

To ensure that a study is unbiased, it’s critical that every member of the studied population have an equal chance of participating. For our study, we carefully chose a broad set of independent lists that, when combined, represented all regions of the world and all electronic design market segments. We then reviewed the participant results by market segment to ensure that no segment or region was inadvertently excluded or under-represented.

Non-response bias occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. It is important to validate that sufficient responses occurred across all the lists that make up the sample frame, so we reviewed the final results to ensure that no single list dominated them. Another potential source of non-response bias is a lack of language translation. For example, the 2012 study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions: we translated both the invitation and the survey into Japanese, and we acquired additional engineering lists directly from Japan to augment our existing invitation list. These steps resulted in balanced representation from Japan, and based on that experience, we took the same approach to solve the language problem for the 2014 study.

Stakeholder bias occurs when someone with a vested interest in the survey results completes an online survey multiple times, or urges others to complete it, in order to influence the results. To address this problem, a unique code was generated for each study invitation that was sent out. The code could be used only once to fill out the survey, preventing anyone from taking the study multiple times or sharing the invitation with someone else.
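The sketch below illustrates the general idea of single-use invitation codes; the class and method names are hypothetical and do not describe the actual survey platform used for the study.

```python
# Illustrative sketch of single-use survey invitation codes
# (hypothetical helper names; not the actual system used for the study).
import secrets

class InvitationCodes:
    """Issue and redeem one-time survey invitation codes."""

    def __init__(self):
        self._unredeemed = set()

    def issue(self):
        # Generate a unique, hard-to-guess code for one invitation email.
        code = secrets.token_urlsafe(16)
        self._unredeemed.add(code)
        return code

    def redeem(self, code):
        # Accept a code exactly once; reject reuse or unknown codes.
        if code in self._unredeemed:
            self._unredeemed.remove(code)
            return True
        return False

codes = InvitationCodes()
invite = codes.issue()
assert codes.redeem(invite) is True    # first submission accepted
assert codes.redeem(invite) is False   # repeat or shared submission rejected
```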

Spotlight on FPGA
FPGAs have recently grown in complexity to rival many of today’s IC/ASIC designs. Some of the more interesting trends that emerged from the 2018 study are as follows.

  • The FPGA market has a difficult time with non-trivial bug escapes into production.
  • The FPGA market is rapidly maturing its verification processes to address growing complexity.
  • The IC/ASIC market has converged on common processes driven by maturing industry standards.
  • The IC/ASIC market is fairly mature in its adoption of various technologies and techniques for IP and subsystem verification. Many of the new IC/ASIC challenges have moved to the system level.

The global semiconductor market was valued at $444.7 billion in 2017, of which $4.7 billion was accounted for by FPGAs [1]. The FPGA market is expected to reach a value of $8.8 billion by 2027, growing at a compound annual growth rate (CAGR) of 6.4% during this forecast period. Growth in this market is being driven by new and expanding end-user applications in automotive, IoT, telecommunications, industrial, mil/aero, and consumer markets, as well as emerging AI applications within the data center that require acceleration.
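As a quick sanity check on those figures, compound growth relates the 2017 and 2027 values as start × (1 + CAGR)^years; the snippet below reproduces the cited forecast to within rounding (it is a back-of-the-envelope check, not data from the study itself).

```python
# Back-of-the-envelope check of the cited FPGA market forecast:
# a $4.7B market in 2017 growing at a 6.4% CAGR for ten years.
start_value = 4.7          # FPGA market size in 2017, USD billions
cagr = 0.064               # compound annual growth rate
years = 2027 - 2017

projected = start_value * (1 + cagr) ** years
print(f"Projected 2027 FPGA market: ${projected:.1f}B")  # ~$8.7B, in line with the cited $8.8B
```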

Historically, FPGAs have offered two primary advantages over ASICs. First, due to their low NRE (non-recurring engineering) costs [2], FPGAs are generally more cost effective than IC/ASICs for low-volume production. Second, FPGAs’ rapid prototyping capability and flexibility can shorten the development schedule, since a majority of the verification and validation cycles have traditionally been performed in the lab. More recently, FPGAs have offered performance advantages for certain accelerated applications by exploiting hardware parallelism (e.g., AI neural networks).

The IC/ASIC market in the mid- to late-2000s underwent growing pains to address increased verification complexity. Similarly, today’s FPGA market is being forced to address growing verification complexity. With the increased capacity and capability of today’s complex FPGAs, and the emergence of high-performance SoC programmable FPGAs (e.g., Xilinx Zynq, Altera Arria, Altera Cyclone, Altera Stratix, and Microsemi SmartFusion), traditional lab-based approaches to FPGA verification and validation are becoming less effective. In this article, we quantify the ineffectiveness of today’s FPGA verification processes in terms of non-trivial bug escapes into production.

FPGA verification effectiveness
Now let’s take a look at our FPGA project results in terms of verification effectiveness. IC/ASIC projects have long used the metric “number of required spins before production” as a benchmark to assess a project’s verification effectiveness. Historically, about 30% of IC/ASIC projects achieve first silicon success, and most successful designs are productized on the second silicon spin. Unfortunately, FPGA projects have no equivalent metric. As an alternative to IC/ASIC spins, our study asked FPGA participants, “How many non-trivial bugs escaped into production?” The results, shown in Figure 2, are somewhat disturbing. In 2018, only 16% of all FPGA projects achieved no bug escapes into production, which is worse than the IC/ASIC first silicon success rate, and for some market segments the cost of a field repair can be significant. For example, in the mil/aero market, once a cover has been removed from a system to upgrade the FPGA, the entire system needs to be revalidated.


Figure 2. Non-trivial FPGA bug escapes into production. [Source: Wilson Research Group and Mentor, A Siemens Business, 2018 Functional Verification Study. © Mentor Graphics Corporation.]

Figure 3 shows the categories of design flaws contributing to FPGA non-trivial bug escapes. While the data suggest an improvement in the percentage of “logic or functional flaws,” this category remains the leading cause of bug escapes. The reduction in logic or functional flaws is likely due to the FPGA market maturing its verification processes, which we will quantify in the next article, as well as increased adoption of mature design IP for integration.


Figure 3. Types of flaws resulting in FPGA bug escapes. [Source: Wilson Research Group and Mentor, A Siemens Business, 2018 Functional Verification Study. © Mentor Graphics Corporation.]

In addition to the bug escape metric used to assess an FPGA project’s verification effectiveness, we also tracked project completion against the original schedule, as shown in Figure 4. Here we found that 64% of FPGA projects were behind schedule. The increasing number of FPGA projects missing schedule over the period 2014 through 2018 is another indication of growing design and verification complexity.


Figure 4. Actual FPGA project completion compared to original schedule. [Source: Wilson Research Group and Mentor, A Siemens Business, 2018 Functional Verification Study. © Mentor Graphics Corporation.]

Until next time
So far we’ve looked at the foundations of an inclusive, comprehensive, and unbiased survey and shared our findings on FPGA verification effectiveness. In the next article, we will look deeper into FPGA trends in terms of project resources, growing complexity, and verification technology adoption. We will close that article with some of our conclusions regarding various aspects of the FPGA market based on this year’s study.

In the meantime, you can take a look at the 2016 Wilson Research Group Functional Verification Study in the paper Trends in Functional Verification: A 2016 Industry Study.

References
[1] IC Insights, The Mid-Year Update to The McClean Report, 2018; International Business Strategies, Semiconductor Market Analysis: 2017 Review, 2018 Projections, July 2018.

[2] S. Trimberger, “Three Ages of FPGAs: A Retrospective on the First Thirty Years of FPGA Technology,” Proceedings of the IEEE, Vol. 103, Issue 3, March 2015.


