Deep-learning approach for full-chip voltage contrast inference.
Abstract: Electron-beam inspection for voltage contrast (VC) defects has been widely adopted in the early stages of sub-10nm logic and memory technology development, as well as in new product introductions. However, due to throughput limitations, full-chip inspection at the 300mm wafer scale remains impractical for yield ramp and production applications. To address this challenge, we propose a deep-learning approach for full-chip voltage contrast inference. By modifying and enhancing the You Only Look Once (YOLOv7) model, one of the most efficient object-detection neural networks, into YOLO-Voltage Contrast (YOLO-VC), the voltage contrast of metal patterns across the entire chip can be accurately predicted. Mapping the voltage contrast response at the full-chip level allows the inspection recipe to be optimized to focus on critical care areas where defects are most likely to occur. We present the methodology, including the process flow, image-to-image registration, gray-level classification, model training and validation, and a performance benchmark comparing YOLOv7 and YOLO-VC. Finally, we propose leveraging the full-chip VC density map for area-of-interest (AOI) selection to optimize throughput and enhance the capture rate of VC defects.
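As a concrete illustration of two steps named in the abstract, image-to-image registration and gray-level classification, here is a minimal sketch. The phase-correlation registration and the specific 8-bit gray-level thresholds are illustrative assumptions for this post, not the paper's actual pipeline.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, mov: np.ndarray) -> tuple:
    """Estimate the integer (dy, dx) translation aligning `mov` to `ref`
    via phase cross-correlation (an assumed registration method; the
    paper's exact registration algorithm may differ)."""
    cps = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cps /= np.abs(cps) + 1e-12            # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cps))      # correlation peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,  # wrap indices to signed shifts
            dx - w if dx > w // 2 else dx)

# Hypothetical 8-bit gray-level cut points: in SEM voltage contrast,
# bright patterns typically indicate grounded nets and dark patterns
# floating or open nets.
DARK_MAX, NORMAL_MAX = 80, 170

def classify_gray_level(patch: np.ndarray) -> str:
    """Bin a registered metal-pattern crop into a VC class by mean gray level."""
    m = float(patch.mean())
    return "dark" if m < DARK_MAX else "normal" if m < NORMAL_MAX else "bright"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    mov = np.roll(ref, (3, -5), axis=(0, 1))  # simulated stage offset
    print(estimate_shift(ref, mov))           # recovers the applied (3, -5) shift
    print(classify_gray_level(np.full((16, 16), 220, dtype=np.uint8)))
```

In a full-chip flow, crops classified this way over the registered image grid would populate the VC density map used for AOI selection.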
Doong, Kelvin Yih-Yuh, ChenPo Lin, and Sheng-Che Lin. “Full-chip Voltage Contrast Inference Using Deep Learning You Only Look Once: Voltage Contrast (YOLO-VC).”