Building A Sustainable And Diverse Semiconductor Workforce: Insights From ASMC 2024 Panel Discussion

As the semiconductor industry works to attract talent to overcome its labor shortage, governments, educators, and the private sector must collaborate to make industry career opportunities more accessible for prospective employees. This concept provided the framework for a panel discussion during SEMI’s 35th annual Advanced Semiconductor Manufacturing Conference (ASMC) that took place in Alba...

Navigating The Talent Crunch: AI Solutions For A Thriving Semiconductor Manufacturing Sector

The CHIPS and Science Act is a historic piece of legislation passed by the US government in 2022 aimed at regaining American leadership in semiconductor manufacturing. Supported by an unprecedented $52 billion in federal funding, this investment will also address the supply chain vulnerabilities and national security concerns made glaringly public by the COVID-19 pandemic. In addition to ...

AI Accelerator Architectures Poised For Big Changes

AI is driving a frenzy of activity in the chip world as companies across the semiconductor ecosystem race to include AI in their product lineups. The challenge now is how to make AI run faster, use less energy, and scale from the edge to the data center, particularly with the rollout of large language models. On the hardware side, there are two main approaches for accel...

Generative AI Training With HBM3 Memory

One of the biggest, most talked-about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The most well-known example of generative AI right now is, of course, ChatGPT, whose underlying GPT-3 large language model uses 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of...
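A parameter count like GPT-3's translates directly into memory capacity pressure, which is why HBM3 matters here. As a rough, back-of-the-envelope sketch (my own illustration, not from the article), the weight storage alone at different numeric precisions works out as follows:

```python
# Hypothetical sketch: approximate memory needed just to hold an LLM's
# weights at common precisions, showing why a 175-billion-parameter model
# strains accelerator memory systems. (Illustrative only; real deployments
# also need activations, optimizer state, and KV caches.)

def weight_memory_gb(num_params: int, bytes_per_param: int) -> float:
    """Return approximate weight storage in gigabytes (1 GB = 2**30 bytes)."""
    return num_params * bytes_per_param / 2**30

GPT3_PARAMS = 175_000_000_000  # parameter count cited above

for name, nbytes in [("FP32", 4), ("FP16/BF16", 2), ("INT8", 1)]:
    print(f"{name}: {weight_memory_gb(GPT3_PARAMS, nbytes):.0f} GB")
# FP32: 652 GB, FP16/BF16: 326 GB, INT8: 163 GB
```

Even at 8-bit precision the weights alone far exceed a single accelerator's on-package memory, which is why training such models spans many devices and leans on high-bandwidth memory like HBM3.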

AI Adoption Slow For Design Tools

A lot of excitement, and a fair amount of hype, surrounds what artificial intelligence (AI) can do for the EDA industry. But many challenges must be overcome before AI can start designing, verifying, and implementing chips for us. Should AI replace the algorithms in use today, or does it have a different role to play? At the end of the day, AI is a technique that has strengths and weaknesses...

Where And Why AI Makes Sense In Cars

Experts at the Table: Semiconductor Engineering sat down to talk about where AI makes sense in automotive and what the main challenges are, with Geoff Tate, CEO of Flex Logix; Veerbhan Kheterpal, CEO of Quadric; Steve Teig, CEO of Perceive; and Kurt Busch, CEO of Syntiant. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon. Part two of this...

Will Floating Point 8 Solve AI/ML Overhead?

While the media buzzes about the Turing Test-busting results of ChatGPT, engineers are focused on the hardware challenges of running large language models and other deep learning networks. High on the ML punch list is how to run models more efficiently using less power, especially in critical applications like self-driving vehicles where latency becomes a matter of life or death. AI already ...
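To make the 8-bit floating point idea concrete, here is a minimal sketch (my own illustration, not from the article) of decoding the OFP8 E4M3 format: 1 sign bit, 4 exponent bits with bias 7, and 3 mantissa bits. The narrow format cuts memory and bandwidth per value to a quarter of FP32, at the cost of range and precision:

```python
# Hedged sketch: decode an OFP8 E4M3 byte (1 sign, 4 exponent, 3 mantissa
# bits, exponent bias 7). E4M3 reserves only the all-ones code for NaN and
# has no infinities, giving a maximum magnitude of (1 + 6/8) * 2**8 = 448.

def decode_e4m3(byte: int) -> float:
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:      # the single NaN encoding per sign
        return float("nan")
    if exp == 0:                       # subnormal: no implicit leading 1
        return sign * (man / 8) * 2.0 ** (1 - 7)
    return sign * (1 + man / 8) * 2.0 ** (exp - 7)

print(decode_e4m3(0x7E))  # 448.0, the largest positive E4M3 value
print(decode_e4m3(0xBC))  # -1.5
```

With only 3 mantissa bits, values near 1.0 are spaced 1/8 apart, which is why FP8 training typically relies on per-tensor scaling and higher-precision accumulation.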

Memory and Energy-Efficient Batch Normalization Hardware

A new technical paper titled "LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training" was published by researchers at DGIST (Daegu Gyeongbuk Institute of Science and Technology). The work was supported by Samsung Research Funding Incubation Center. Abstract: "When training early-stage deep neural networks (DNNs), generating intermediate features via con...
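For context, the operation the paper targets in hardware is standard batch normalization: each feature is normalized using the mean and variance computed across the batch, then scaled and shifted. A minimal sketch of that computation (my own illustration of the textbook operation, not the LightNorm design itself):

```python
# Minimal batch normalization over one feature across a batch:
# subtract the batch mean, divide by the batch standard deviation
# (with a small eps for numerical stability), then scale and shift.

import math

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a list of scalars (one feature channel across a batch)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print(out)  # roughly zero-mean, unit-variance values
```

Computing the batch mean and variance requires buffering or streaming over intermediate features, which is exactly the memory and energy cost that dedicated normalization hardware tries to reduce.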

New Method of Comparing Neural Networks (Los Alamos National Lab)

A new research paper titled "If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness" from researchers at Los Alamos National Laboratory (LANL) was recently presented at the Conference on Uncertainty in Artificial Intelligence. The team developed a new approach for comparing neural networks and "applied their new metric of network simila...

Techniques For Improving Energy Efficiency of Training/Inference for NLP Applications, Including Power Capping & Energy-Aware Scheduling

This new technical paper titled "Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models" is from researchers at MIT and Northeastern University. Abstract: "The energy requirements of current natural language processing models continue to grow at a rapid, unsustainable pace. Recent works highlighting this problem conclude there is an urgent need ...