Challenges In Using AI In Verification


Pressure to use AI/ML techniques in design and verification is growing as the amount of data generated from complex chips continues to explode, but how to begin building those capabilities into tools, flows and methodologies isn't always obvious. For starters, there is debate about whether the data needs to be better understood before those techniques are used, or whether it's best to figure... » read more

The Evolution Of High-Level Synthesis


High-level synthesis is getting yet another chance to shine, this time from new markets and new technology nodes. But it's still unclear how fully this technology will be used. Despite gains, it remains unlikely to replace the incumbent RTL design methodology for most of the chip, as originally expected. Seen as the foundational technology for the next generation of EDA companies around the ... » read more

RISC-V’s Expanding Footprint


Zdenek Prikryl, CTO of Codasip, sat down with Semiconductor Engineering to talk about the RISC-V market, where this open instruction set architecture (ISA) is gaining ground, and the biggest challenges of working with this technology.
SE: Where do you see the value in RISC-V? Is it for off-the-shelf processors or more customized components?
Prikryl: A few years ago, RISC-V was us... » read more

Artificial Intelligence And Machine Learning Add New Capabilities To Traditional RF EDA Tools


This article features contributions from RF EDA vendors on their various capabilities for artificial intelligence and machine learning. AWR Design Environment software is featured, highlighting its network synthesis wizard. » read more

Power/Performance Bits: Aug. 25


AI architecture optimization
Researchers at Rice University, Stanford University, University of California Santa Barbara, and Texas A&M University proposed two complementary methods for optimizing data-centric processing. The first, called TIMELY, is an architecture developed for “processing-in-memory” (PIM). A promising PIM platform is resistive random access memory, or ReRAM. Whil... » read more

What’s Next For Semis?


It’s been a turbulent year in the semiconductor industry. 2020 was supposed to be a strong year. Then, the coronavirus outbreak hit. Suddenly, a large percentage of countries implemented various measures to mitigate the outbreak, such as stay-at-home orders as well as business and store closures. Economic turmoil and job losses soon followed, not to mention the human tragedy involved. M... » read more

Monitoring Chips After Manufacturing


New regulations and the variability of advanced process nodes are forcing chip designers to insert additional capabilities into silicon to help with comprehension, debug, analytics, safety, security, and design optimization. The impact will be far-reaching as the industry discusses which capabilities can be shared among these divergent tasks, how much silicon area to dedicate to them, ... » read more

The Challenge Of Keeping AI Systems Current


Semiconductor Engineering sat down to discuss AI and its move to the edge with Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus; Kris Ardis, executive director at Maxim Integrated; Steve Roddy, vice president of products in Arm's Machine Learning Group; and Vinay Mehta, inference technical marketing manager at Flex Logix. What follows are excerpts of that ... » read more

The Emergence Of Hardware As A Key Enabler For The Age Of Artificial Intelligence


Over the past few decades, software has been the engine of innovation for countless applications. From PCs to mobile phones, well-defined hardware platforms and instruction set architectures (ISAs) have enabled many important advancements across vertical markets. The emergence of abundant-data computing is changing the software-hardware balance in a dramatic way. Diverse AI applications in fa... » read more

Scaling AI/ML Training Performance With HBM2E Memory


In my April SemiEngineering Low Power-High Performance blog, I wrote: “Today, AI/ML neural network training models can exceed 10 billion parameters, soon it will be over 100 billion.” “Soon” didn’t take long to arrive. At the end of May, OpenAI unveiled a new 175-billion parameter GPT-3 language model. This represented a more than 100X jump over the size of GPT-2’s 1.5 billion param... » read more
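
A quick back-of-the-envelope check of that jump, as a minimal Python sketch: the parameter counts come from the excerpt above, while the 2-bytes-per-parameter FP16 weight size is an illustrative assumption, not a figure from the original post.

    # Scale jump from GPT-2 to GPT-3, using the parameter counts quoted above.
    gpt2_params = 1.5e9   # GPT-2: 1.5 billion parameters
    gpt3_params = 175e9   # GPT-3: 175 billion parameters
    print(f"Scale jump: {gpt3_params / gpt2_params:.0f}x")  # ~117x, i.e. more than 100X

    # Rough weight-storage footprint, assuming FP16 (2 bytes per parameter).
    # Training also needs optimizer state and activations, which add far more.
    BYTES_PER_PARAM = 2   # assumption: FP16 weights
    weights_gb = gpt3_params * BYTES_PER_PARAM / 1e9
    print(f"GPT-3 weights alone: ~{weights_gb:.0f} GB")  # ~350 GB

At that scale, the weights alone dwarf the local memory of any single accelerator, which is the capacity-and-bandwidth pressure that motivates HBM2E for training.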
