Dave Armstrong, director of business development at Advantest, discusses the usefulness of concurrent test and describes how to maximize the value of this approach.
One thing that Dave did not talk about is that when broadcasting scan data to multiple identical cores (or daisy-chaining the data through the cores), the core outputs must be compressed to a bandwidth that stays constant regardless of the number of cores, either with MISRs or with output compression ratios that rise with the core count. One can also vote the core outputs against each other to detect random defects.
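A rough software model may help illustrate the two compaction schemes the comment mentions: a MISR-style signature register that folds each core's scan output into a fixed-width signature (constant bandwidth no matter how many cores feed it), and a majority vote across identical cores to flag the odd one out. The register width, tap positions, and function names below are illustrative assumptions, not any vendor's actual DFT logic.

```python
# Illustrative sketch only: a Fibonacci-LFSR-based signature register
# and majority voting across identical cores. Widths and taps are
# arbitrary choices for the example, not a production polynomial.

def misr_signature(bits, width=16, taps=(15, 13, 12, 10)):
    """Fold a core's output bit stream into a `width`-bit signature."""
    sig = 0
    mask = (1 << width) - 1
    for b in bits:
        fb = b & 1
        for t in taps:
            fb ^= (sig >> t) & 1          # XOR in the feedback taps
        sig = ((sig << 1) | fb) & mask    # shift and inject the new bit
    return sig

def vote_defective(core_streams):
    """Return indices of cores whose signature disagrees with the majority."""
    sigs = [misr_signature(s) for s in core_streams]
    majority = max(set(sigs), key=sigs.count)
    return [i for i, s in enumerate(sigs) if s != majority]

good = [1, 0, 1, 1, 0, 0, 1, 0] * 8
bad = good[:30] + [1 - good[30]] + good[31:]   # single flipped output bit
print(vote_defective([good, good, bad]))        # flags core 2
```

The key property is that the tester only ever compares fixed-width signatures, so the compare bandwidth does not grow with the number of cores; the vote then localizes a random defect to the disagreeing core.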
Helpful discussion on CT, but I think the real solution is not innovative test strategy (though that is important), but ATE capability to test external interfaces at different frequencies and run various tests simultaneously. Picture a mission mode where each I/O interface is running its own pattern at its own frequency. That real-world usage is what ATE needs to apply to devices of exponentially growing complexity.
Understanding how chiplets interact under different workloads is critical to ensuring signal integrity and optimal performance in heterogeneous designs.
Alongside high-NA EUV will be better-performing photoresists, reduced roughness using passivation and etch, and lateral etching to reduce tip-to-tip dimensions.
eBook: Nearly everything you need to know about memory, including detailed explanations of the different types of memory; how and where they are used today; what's changing; which memories are successful now and which may be in the future; and the limitations of each memory type.
The standard for high-bandwidth memory limits design freedom at many levels, but that is required for interoperability. What freedoms can be taken from other functions to make chiplets possible?