KLA has always had a close relationship with physics and data.  Our optical and electron-beam inspection and measurement tools use cutting-edge physics models, both for hardware design and as part of their algorithms.  AI, including several traditional machine learning techniques as well as deep learning, is routinely used to process this data to meet application requirements.

The AI & Modeling Center of Excellence, centered in KLA’s R&D facility in Ann Arbor, MI, was set up with the mission of advancing KLA’s traditional strengths in physics and data and providing implementation solutions for multiple KLA Inspection and Metrology products targeted at the semiconductor manufacturing industry.

As a member of this group, you will join a world-class team of physicists, HPC system designers, and machine learning and application engineers who build cutting-edge solutions for modeling complex imaging techniques and semiconductor processes.  You will also work with data scientists and AI infrastructure engineers whose mission is to build and scale machine-learning-based solutions for our semiconductor customers.

We are looking for engineers in a few different fields.  If you are passionate about Physics Modeling, High Performance Computing (HPC, including GPU computing), ML, Data, or Cloud technologies, this is the place for you!


Engineers on the HPC Software team will build and maintain the infrastructure necessary for large-scale experimentation with, and deployment of, HPC solutions.  A successful candidate will be expected to contribute in domains including data management and data loading, as well as support for machine learning and deep learning model training, experimentation, and deployment.

Although familiarity with Machine Learning and Deep Learning solutions would be a big plus, this is primarily a Software Engineering position.  Successful candidates are passionate about software and will have exceptional skills and hands-on experience with C/C++ and Python development in a Linux environment.  A deep conceptual understanding of multi-threaded, multi-process, and distributed software systems is necessary.

Essential Skills

Object-Oriented Design & Programming in Java or C++
Scripting languages such as JavaScript and Python
Data structures and algorithms
Linux system programming
Distributed systems
Desirable Skills

Cloud technologies for networking, storage, containerization, and compute clusters
Building and configuring Linux kernels; designing and troubleshooting network infrastructure
Linux device driver development
Understanding of various networking stacks
GPU architectures and CUDA (cuGraph, cuDF, cuML, etc.)
Distributed computing frameworks such as Apache Spark and Dask
Creating techniques and methods to integrate multiple hardware and software subsystems to solve advanced technical challenges
Data science skills to acquire, transform, and present data from various sources to build powerful debugging and analysis software

For more details, hit “Apply for Job.”