The Data Revolution Of Semiconductor Production

Every part of the industry has a role to play in making collaborative design possible.

During our insightful panel discussion on “The Data Revolution of Semiconductor Production – How Advancements in Technology Unlock New Insights,” we covered several topics including machine learning, edge computing and cloud-based data management. We discussed questions including: Are we creating the right data and doing enough with it? What needs to be done to make data actionable? How have technological advancements in the last few years been realized and deployed across the semiconductor value chain? What’s being done at the edge? Who owns the data? And much more.

If you missed the 1-hour live event, the recording is available in our knowledge center.

Since we were unable to address every question asked during the live event, we have asked our panelists to provide written responses.

Thank you to our panelists:

  • Michael Campbell, Senior Vice President of Engineering, Qualcomm CDMA Technologies
  • Preeth Chengappa, Head of Industry, Semiconductors & EDA, Microsoft Azure
  • Ira Leventhal, Vice President of Applied Research & Technology, Advantest America, Inc.

Machine learning (ML)

Question: One thing that piqued my interest was the comment that machine learning should be a requirement for all test and product engineers. I don’t disagree, but as machine learning is a very broad term and can become very complex, it would be nice to know which subsets of machine learning are recommended as most valuable for product and test engineers. Simple examples of machine learning include calibration (especially RF), linear regression and interpolation (supervised learning). Some consider 2D and 3D visualization algorithms good examples of unsupervised learning. And as Ira mentioned, trim algorithms that reduce test time by predicting trim codes without sweeping all possible values also fall in the domain of machine learning. These tasks are well understood by many product and test engineers. Complex applications of ML may include image recognition to evaluate and classify shmoo plots, yield prediction based on test site, operator, or equipment, or cluster analysis on wafer maps. What is your take on specific areas of machine learning for product and test engineers to learn?
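
As a quick illustration of the trim example raised in the question, here is a minimal sketch of predicting a trim code with linear regression instead of sweeping every code; the data and the linear relationship are synthetic assumptions, not a production algorithm:

```python
# Minimal sketch: predict a trim code from two coarse measurements instead
# of sweeping every possible code. Data and relationships are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic training set: offsets measured at two probe codes -> best trim code
X_train = rng.normal(size=(200, 2))
y_train = 16 + 5 * X_train[:, 0] - 3 * X_train[:, 1] + rng.normal(0.0, 0.5, size=200)

model = LinearRegression().fit(X_train, y_train)

# At test time: two quick measurements yield one predicted trim code
new_part = np.array([[0.2, -0.1]])
predicted_code = int(round(model.predict(new_part)[0]))
print(f"predicted trim code: {predicted_code}")
```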

Mike’s Answer: Machine learning/AI understanding as a fundamental science should be part of the requirements for engineers in the industry, and specifically for product and test engineers in the semiconductor industry – that is my general belief. The data sets in the industry are large, the data “repeatable,” and in my view they can be leveraged with ML/AI algorithms to add incremental value. From my viewpoint, engineers should know the concepts of training and how to write Python scripts that execute learning and analysis algorithms. A vast sea of data awaits better analysis. As examples, yield, test time, parametric distributions, etc. can be automatically reviewed, and at least a first-order analysis can be accomplished with ML/AI algorithms. In his comments, Ira mentioned the EDGE box on the Advantest platform and how that could be used for real-time test analysis and reduction.
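
In the spirit of Mike’s point about Python scripts, a first-order review of parametric distributions can be only a few lines; this sketch assumes a hypothetical CSV log with param and value columns:

```python
# First-order parametric review in a few lines of Python. The file name and
# the "param"/"value" columns are assumptions about how results are logged.
import pandas as pd

df = pd.read_csv("parametric_results.csv")   # one row per measurement

summary = df.groupby("param")["value"].agg(["mean", "std", "count"])
# Flag parameters whose spread is unusually wide relative to the population
flagged = summary[summary["std"] > 3 * summary["std"].median()]
print(flagged)
```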

Ira’s Answer: Machine learning and AI have quickly gone from technologies used mainly by specialists in those fields to now being accessible to all engineers as part of their problem-solving toolbox. I see them as essential tools, driven by the need to analyze data sets that are too large and complex for manual analysis or simple data analytics. The usability and power of the algorithms will continue to greatly improve to meet these requirements. Similarly, ML/AI specialists without substantial semiconductor test domain knowledge will be limited in terms of their ability to develop and apply algorithms that effectively and reliably solve real-world, domain-specific challenges. Test and product engineers equipped with ML/AI knowledge are in a unique position to be able to bridge this gap and develop powerful solutions in their domain. In terms of specific areas of ML, I see great potential for using unsupervised learning algorithms to detect outliers or other anomalous behavior where there is not enough previous fail data to train effective supervised learning models.
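
As one concrete flavor of that last point, an Isolation Forest needs no labeled fail data at all; this minimal sketch (synthetic measurements standing in for real parametrics) flags statistical outliers:

```python
# Unsupervised outlier detection on parametric test data: no fail labels
# required. The synthetic data stands in for real measurements.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
measurements = rng.normal(size=(1000, 4))      # e.g., Idd, Vth, freq, leakage
measurements[:5] += 6                          # a few anomalous parts

clf = IsolationForest(contamination=0.01, random_state=0).fit(measurements)
labels = clf.predict(measurements)             # -1 = outlier, 1 = inlier
print(f"outliers flagged: {(labels == -1).sum()}")
```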

Question: For machine learning on test data, how do we set its goal at our ideal balance of test time, equipment scale, and cost of various fault and failure categories?

Mike’s Answer: Some ideas for you to consider: defect density, area, and design-for-test (DFT) coverage could be combined to assess the real effectiveness of the DFT vectors; or theoretical tool predictions of test time per block could be mapped against actual test time per block to develop a practical measuring tool… and the list goes on. ML/AI algorithms allow you to automate many tasks at a fundamental level to give the same old data additional value.
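
The predicted-versus-actual test-time comparison Mike describes can start as a simple join; this sketch assumes hypothetical per-block CSV exports from the DFT tool and the tester log:

```python
# Compare DFT-tool test-time predictions to measured test time per block.
# File names and columns are hypothetical placeholders.
import pandas as pd

predicted = pd.read_csv("dft_predicted_times.csv")   # columns: block, pred_ms
actual = pd.read_csv("tester_measured_times.csv")    # columns: block, meas_ms

merged = predicted.merge(actual, on="block")
merged["ratio"] = merged["meas_ms"] / merged["pred_ms"]
# Blocks where reality diverges most from the model are worth a closer look
print(merged.sort_values("ratio", ascending=False).head(10))
```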

Ira’s Answer: As technology nodes continue to shrink and heterogeneous integration packaging is widely adopted, the semiconductor value chain has become too complex to focus optimization efforts on each individual step of the process without comprehensive consideration of how the various design, manufacturing, and test steps interact. Simple Excel-based analyses and static manufacturing and test flows will need to be supplanted by real-time analysis and decision-making using ML or other advanced data analytics, and dynamic adjustments to each individual step and/or the overall flow. Cloud, networking, edge compute, and security technologies will continue to evolve to support this requirement.
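
A toy version of such a real-time decision might look like the following sketch, where a pre-trained risk model routes only high-risk parts to the full test suite; the model, features, and threshold are all illustrative assumptions:

```python
# Toy dynamic test flow: a model scores each part from its quick initial
# measurements, and only high-risk parts get the full (slow) test suite.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_hist = rng.normal(size=(500, 3))                 # historical quick-test data
y_hist = (X_hist.sum(axis=1) > 2).astype(int)      # 1 = failed full suite

risk_model = LogisticRegression().fit(X_hist, y_hist)

part = rng.normal(size=(1, 3))                     # new part's quick results
fail_risk = risk_model.predict_proba(part)[0, 1]
if fail_risk > 0.05:                               # conservative cutoff
    print("route to full test suite")
else:
    print("skip extended tests")
```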

Question: How much use are you making today of wafer fab equipment data, such as geometry and materials metrology, to further shift left by predicting device performance, yield, and reliability? Namely, we see variability and defects, process step by process step, even before the first e-test is made.

Mike’s Answer: All data is valuable. What ML/AI allows the user to do is find that value with automated analysis. There is a probability that the models could be falsely tuned to the data set. However, with cross-correlations of the data sets, the capability of finding and eliminating the error is very high.
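
One simple guard against a falsely tuned model is to score it on a data set it never saw; this sketch (synthetic lots throughout) compares in-sample accuracy against a held-out lot:

```python
# Guard against a model over-tuned to one data set: compare its accuracy
# on the training lot versus an independent lot. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X_lot_a = rng.normal(size=(400, 5))
y_lot_a = (X_lot_a[:, 0] > 0).astype(int)
X_lot_b = rng.normal(size=(400, 5))                # independent lot
y_lot_b = (X_lot_b[:, 0] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_lot_a, y_lot_a)
print(f"lot A (training) accuracy: {model.score(X_lot_a, y_lot_a):.2f}")
print(f"lot B (held-out) accuracy: {model.score(X_lot_b, y_lot_b):.2f}")
# A large gap between the two scores suggests the model is tuned to lot A,
# not to the underlying physics.
```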

Question: If the fab knows which properties really don’t affect our final yield, will they be able to concentrate on controlling the ones that do matter?

Ira’s Answer: Greater collaboration and sharing of data across the semiconductor value chain, coupled with ML-based or other advanced data analytics, will facilitate data-driven decisions rather than relying on theoretical or historical correlations between fab data and final yield.

Export compliance & security concerns

Question: How can data analytics and tracking help semiconductor companies comply with U.S. export control obligations?

Preeth’s Answer: The simplistic answer is that the more data you have on what your IP or chip is doing at any point in its lifecycle, the more information you have on every aspect of provenance, handling, manufacture, test, supply chain, system usage, etc. All this can help track what you need. For example, in the U.S. Government’s RAMP program, Microsoft helped implement a secure design capability for the DoD and partners, where they could comply with regulations such as ITAR. The right data can help comply with export control reporting as well.

Question: In my past life (15+ years ago), when I managed EDA/fab partnerships, getting access to fab data to ensure EDA tools and methodology were up to date for mutual customers and improve time to market (TTM) was very challenging. There were many security concerns and many companies involved, and now you have cloud companies, like Microsoft, in the mix. Has it improved now? If yes, how?

Preeth’s Answer: We’ve seen everyone being protective of data for various reasons, including control of their data, who has access, and how it could be used. The cloud can address data management, with secure access and control, including limiting egress. You now regularly see all the foundries using Azure for distribution of their PDKs and libraries. GDSII handoff and even some pre-silicon manufacturing steps are done and shared on the cloud. While NDAs are still being done offline, there is steady progress, driven by the needs of the industry as it adapts to the requirements of designing to the state of the art. Every fabless semi company, EDA/tools vendor, foundry, fab and test equipment manufacturer, and OSAT will have a role in making collaborative design possible.

Ira’s Answer: As ML/AI technology and its computing ecosystem have seen revolutionary improvements, so too has security technology. Basic username/password access has been replaced by advanced key-based security, sophisticated encryption schemes, blockchain-based access control, and other technologies that support a Zero Trust security model. While the foundation for collaboration is now established, what’s needed going forward is motivation. As the benefits of collaboration become increasingly apparent, we’ll see security evolve from being viewed as a hindrance to collaboration to becoming a challenge that we must tackle with state-of-the-art solutions in order to fully reap the benefits of collaboration.
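
As a flavor of those building blocks, this sketch uses the Python cryptography library’s Fernet (authenticated symmetric encryption) to protect a results file before it is shared; the file name is a placeholder, and in practice the key would come from a key-management service:

```python
# Encrypt a test-data file with authenticated symmetric encryption before
# sharing it. The key would live in a key-management service, not in code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: fetched from a KMS/HSM
fernet = Fernet(key)

with open("wafer_sort_results.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("wafer_sort_results.csv.enc", "wb") as f:
    f.write(ciphertext)
# Holders of the key can call fernet.decrypt(ciphertext) to recover the data.
```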

Question: How is the cross-pollination of knowledge going to take place in the future, where the designer looks into a problem related to package/test/system-level test (SLT) or vice versa?

Mike’s Answer: Data has value. The designer developed the test case during his design, and he has a planned response. In a “transparent” world, the designer could look at the data for his vector and assess test time, yield versus his model, parametric response, and other fundamental debug steps without involving product and test engineering. Product and test will own the responsibility of driving the ML/AI tools to build the analysis and to take the deep dive past the fundamentals. The value add of every engineer in this model goes up.
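
In such a transparent setup, the designer’s first-pass debug could be as simple as joining simulated expectations with silicon results per vector; this sketch uses hypothetical file and column names:

```python
# Designer-side first-pass debug: compare simulated expectations with
# measured silicon results per test vector. Files and columns are assumed.
import pandas as pd

sim = pd.read_csv("simulated_response.csv")      # columns: vector, sim_value
silicon = pd.read_csv("measured_response.csv")   # columns: vector, meas_value

joined = sim.merge(silicon, on="vector")
joined["delta"] = joined["meas_value"] - joined["sim_value"]
joined["abs_delta"] = joined["delta"].abs()
# Vectors with the largest model-to-silicon gap are the first to debug
print(joined.sort_values("abs_delta", ascending=False).head(10))
```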

Data sharing, standardization & ownership

Question: The merit in connecting cross-domain data is not in doubt. Where the rubber meets the road is in balancing technical and business benefits: the technical merit of corrective action versus the business demerit of financial blowback from connected data. For example, if a fabless company uses fab data to point at the fab and demand discounts on wafer pricing, that will put a damper on cross-domain sharing. What are your thoughts?

Ira’s Answer: Yes, if the fabless companies are simply seeking to use additional data as a “weapon” in price negotiations, the scenario that you described is exactly what will happen. Conversely, true and successful collaboration starts with full alignment on the objectives and mutual benefits and is based on a foundation of trust. As more success stories are achieved with this approach, the competitive advantages for both the fabless and the fabs will become increasingly apparent and dangerous to ignore from a business standpoint.

Question: We have seen data breaches in the most sophisticated companies. If that happens, who takes ownership?

Preeth’s Answer: Azure provides a suite of tools to store, manage and protect data at the highest levels of security, both at the infrastructure and platform level. The owner of the data is responsible for selecting and implementing the tools and protocols at every level to ensure security.

Question: How do you see the work on standards for data generation and collection in the test cell progressing? The adoption of industry standards at test moves at a snail’s pace. That is, how does the variety of data formats and content impact the ability to share across entities?

Ira’s Answer: The SEMI CAST (Collaborative Alliance for Semiconductor Test) has put significant focus in recent years on standards efforts including RITdb (Rich Interactive Test Database) and TEMS (Tester Event Messaging for Semiconductors) that are designed to support evolving data generation and collection requirements within the test cell as well as across the semiconductor value chain. I see this as an iterative process where a newer generation of standards will need to evolve based on feedback from early adopters before they achieve a critical mass of adoption within the industry.

Question: One of the things mentioned in this webinar is working in collaboration with other companies. What are companies doing to collaborate in terms of data sharing; is there a standardization or protocol of sorts; and is there a consortium for it? Is there an effort to have common standards (e.g. MSA) for data generation and sharing across the production chain?

Preeth’s Answer: While every fabless company is interested in its own data, fab and test equipment vendors, foundries, and OSATs have to handle multi-company, multi-vendor data and provide it to the right customer. Microsoft is willing to engage with the industry to provide the tools and capabilities to do so. Standardization of processes, protocols, and agreements, and the forming of a consortium, are in the early stages.

Ira’s Answer: In addition to SEMI CAST, there are other semiconductor industry efforts in progress, including the SEMI Smart Data-AI initiative. In addition, the $280B 2022 US CHIPS and Science Act includes major funding for initiatives with direct or indirect impact on ML/AI R&D.

Question: How do we ensure that when the silicon device goes end-of-life, all the data in the cloud is also erased/deleted/permanently locked?

Preeth’s Answer: Good cloud design protocols will support the right level of backup/disaster recovery (DR) capability, both on lower-cost long-term storage options as well as offline backup.

Question: What’s in it for “me” – design optimization, time to design closure, time to tapeout, time to production, time to revenue?

Preeth’s Answer: State-of-the-art design requires collaboration. Better yield requires design improvements to overcome process characteristics. Advanced packaging requires collaborative data usage. With the slowing down of Moore’s law, small increments at every step of the silicon lifecycle will be required to provide the needed gains in power, performance, and area (PPA), and this will be heavily dependent on data from across this lifecycle.

Data quality

Question: Many of our test runs are meant only for debug of code or fixtures. How do I know we are not training on known-bad data?

Preeth’s Answer: Apart from common-sense controls, we have very strong protocols for ensuring known-good data is used for training. Our past experiences have helped inform our responsible AI processes and protocols.

Ira’s Answer: I see the bigger risk to be training on unknown-bad data, which would essentially ingrain potential test escapes into the decision algorithms. This is where I see a big benefit in using both supervised and unsupervised learning algorithms in complementary ways rather than relying on one approach.
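
A minimal sketch of that complementary use (synthetic data throughout): a part must both be classified as good and look statistically normal to ship, and disagreements are routed for review:

```python
# Complementary supervised + unsupervised screening: a part must both be
# classified as passing AND look statistically normal to ship.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] > 1).astype(int)          # 1 = known fail

classifier = LogisticRegression().fit(X_train, y_train)
outlier_model = IsolationForest(random_state=0).fit(X_train[y_train == 0])

X_new = rng.normal(size=(100, 4))
pred_fail = classifier.predict(X_new) == 1
anomalous = outlier_model.predict(X_new) == -1

# Parts the classifier passes but the outlier model distrusts -> manual review
review = ~pred_fail & anomalous
print(f"pass: {(~pred_fail & ~anomalous).sum()}, "
      f"review: {review.sum()}, fail: {pred_fail.sum()}")
```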

Getting started

Question: How would you recommend a small team (2-3 people) get started, assuming we are currently logging to CSV files and processing in Excel and JMP?

Ira’s Answer: I recommend starting with careful consideration of which technologies you should purchase off-the-shelf versus technologies you want your team to focus on for competitive advantage. Without proactive decisions, it’s easy to go down a path of incrementally building an infrastructure that will eventually become challenging to keep state-of-the-art or even support. Position yourselves to ride the wave of improvements in the foundational technologies on which you build your solutions.
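
As a concrete first step beyond CSV-plus-Excel, a few lines of pandas consolidate scattered per-lot logs into one analyzable table; the glob pattern and the Parquet output are assumptions about your setup:

```python
# Starter workflow for a small team: consolidate per-lot CSV logs into a
# single DataFrame for analysis. The file layout is a placeholder.
import glob
import pandas as pd

frames = []
for path in glob.glob("logs/lot_*.csv"):
    df = pd.read_csv(path)
    df["source_file"] = path                  # keep provenance for debugging
    frames.append(df)

all_data = pd.concat(frames, ignore_index=True)
print(all_data.describe())                    # first-order look at every column
# Compact, typed, fast to reload (requires a parquet engine such as pyarrow)
all_data.to_parquet("all_lots.parquet")
```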

Watch the recording

Thank you to the attendees who asked these fantastic questions and to our panelists for answering them.

To watch the panel recording from the live event, visit our knowledge center. The proteanTecs knowledge center has webinars, white papers and case studies to explore.


