Leveraging LLMs To Explain EDA Synthesis Errors And Help Train New Engineers 


A technical paper titled “Explaining EDA synthesis errors with LLMs” was published by researchers at the University of New South Wales and the University of Calgary. Abstract: "Training new engineers in digital design is a challenge, particularly when it comes to teaching the complex electronic design automation (EDA) tooling used in this domain. Learners will typically deploy designs in the Veri... » read more

Can Models Created With AI Be Trusted?


EDA models created using AI need to pass more stringent quality and cost-benefit analyses than many AI applications in the broader industry. Money is on the line if AI gets it wrong, and all the associated costs must be factored into the equation. Models are some of the most expensive things a development team can create, and it is important to understand the value th... » read more

Fundamental Issues In Computer Vision Still Unresolved


Given computer vision’s place as the cornerstone of an increasing number of applications, from ADAS to medical diagnosis and robotics, it is critical that its weak points be mitigated, such as the inability to identify corner cases or the reliance on algorithms trained on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that ... » read more

Dealing With AI/ML Uncertainty


Despite their widespread popularity, large language models (LLMs) have several well-known design issues, the most notorious being hallucinations, in which an LLM tries to pass off its statistics-based concoctions as real-world facts. Hallucinations are a symptom of a fundamental, underlying issue with LLMs. The inner workings of LLMs, as well as other deep neural nets (DNNs), are only partly kno... » read more

Hardware Fuzzer Utilizing LLMs


A new technical paper titled "Beyond Random Inputs: A Novel ML-Based Hardware Fuzzing" was published by researchers at TU Darmstadt and Texas A&M University. Abstract "Modern computing systems heavily rely on hardware as the root of trust. However, their increasing complexity has given rise to security-critical vulnerabilities that cross-layer at-tacks can exploit. Traditional hardware ... » read more

Paradigms Of Large Language Model Applications In Functional Verification


This paper presents a comprehensive literature review on applying large language models (LLMs) in multiple aspects of functional verification. Despite the promising advancements offered by this new technology, it is essential to be aware of the inherent limitations of LLMs, especially hallucinations, which may lead to incorrect predictions. To ensure the quality of LLM outputs, four safeguarding p... » read more

LLMs For EDA, HW Design and Security


A new technical paper titled "Hardware Phi-1.5B: A Large Language Model Encodes Hardware Domain Specific Knowledge" was published by researchers at Kansas State University, University of Science and Technology of China, Michigan Technological University, Washington University in St. Louis and Silicon Assurance. Abstract "In the rapidly evolving semiconductor industry, where research, design... » read more

Efficient Streaming Language Models With Attention Sinks (MIT, Meta, CMU, NVIDIA)


A technical paper titled “Efficient Streaming Language Models with Attention Sinks” was published by researchers at the Massachusetts Institute of Technology (MIT), Meta AI, Carnegie Mellon University (CMU), and NVIDIA. Abstract: "Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses tw... » read more

Generating And Evaluating HW Verification Assertions From Design Specifications Via Multi-LLMs


A technical paper titled “AssertLLM: Generating and Evaluating Hardware Verification Assertions from Design Specifications via Multi-LLMs” was published by researchers at the Hong Kong University of Science and Technology. Abstract: "Assertion-based verification (ABV) is a critical method for ensuring design circuits comply with their architectural specifications, which are typically describe... » read more

Training LLMs With Billions To Trillions Of Parameters On ORNL’s Frontier Supercomputer


A technical paper titled “Optimizing Distributed Training on Frontier for Large Language Models” was published by researchers at Oak Ridge National Laboratory (ORNL) and Université Paris-Saclay. Abstract: "Large language models (LLMs) have demonstrated remarkable success as foundational models, benefiting various downstream applications through fine-tuning. Recent studies on loss scaling ... » read more
