Cracking The Mixed-Signal Verification Code


Rapid digitization in the IoT, automotive, industrial, and communications segments is fueling semiconductor industry growth. This growth follows the “More than Moore” paradigm, where new design starts are spread across manufacturing nodes from mature to advanced, depending on the end-application target. With this digitization, data has become the most valuable resource. Mixed-signal designs pl... » read more

Make-Or-Break Time For Portable Stimulus


I’m pretty upbeat when it comes to portable stimulus. Or maybe it’d be better to say I’m pretty upbeat on the idea of portable stimulus. While doing my best to brush aside the usual EDA propaganda (propaganda I’ve found to be a bit haphazard, but more on that in a minute), I’ve put a lot of thought into how portable stimulus could fit into verification flows, the purpose of using it a... » read more

Integrating Results And Coverage From Simulation And Formal


Not so long ago, formal verification was considered an exotic technology used only by specialists for specific verification challenges such as cache coherency. As chips have grown ceaselessly in size and complexity, the traditional verification method of simulation has not been able to keep pace. The task of generating and running enough tests consumes enormous resources in terms of engineers, simulation ... » read more

Implementing Mathematical Algorithms In Hardware For Artificial Intelligence


Petabytes of data travel efficiently between edge devices and data centers for the processing and computation of AI functions. Accurate and optimized hardware implementations of these functions offload many operations that the processing unit would otherwise have to execute. As the mathematical algorithms used in AI-based systems evolve, and in some cases stabilize, the demand to implement them in hardware increase... » read more
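
A hardware-friendly way to implement such a function is to replace the exact math with a small lookup table plus interpolation. The Python sketch below is illustrative only and is not taken from the article: it models the idea for a sigmoid activation, with an assumed 64-entry table and an assumed ±8 input range standing in for what would be a ROM and a small interpolator in silicon.

# Illustrative sketch (not from the article): approximating a sigmoid
# activation with a small lookup table plus linear interpolation, the kind
# of hardware-friendly structure a fixed-function block might use instead
# of evaluating exp() directly. Table size and input range are assumptions.
import numpy as np

TABLE_BITS = 6                      # 64-entry table (assumed size)
X_MIN, X_MAX = -8.0, 8.0            # saturation range (assumed)

# Precompute the table once, as a hardware ROM would be.
_xs = np.linspace(X_MIN, X_MAX, 2**TABLE_BITS)
_lut = 1.0 / (1.0 + np.exp(-_xs))

def sigmoid_lut(x: float) -> float:
    """Sigmoid via table lookup and linear interpolation between entries."""
    x = min(max(x, X_MIN), X_MAX)                     # saturate out-of-range inputs
    pos = (x - X_MIN) / (X_MAX - X_MIN) * (len(_lut) - 1)
    i = int(pos)
    if i >= len(_lut) - 1:
        return float(_lut[-1])
    frac = pos - i
    return float(_lut[i] * (1.0 - frac) + _lut[i + 1] * frac)

if __name__ == "__main__":
    for x in (-3.0, 0.0, 0.5, 4.2):
        exact = 1.0 / (1.0 + np.exp(-x))
        print(f"x={x:+.1f}  lut={sigmoid_lut(x):.5f}  exact={exact:.5f}")

The same structure maps naturally to a ROM plus a multiplier in hardware, trading a bounded approximation error for fixed, predictable latency.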

Improving Library Characterization Quality And Runtime With Machine Learning


By Megan Marsh and Wei-Lii Tan
Today’s semiconductor applications, ranging from advanced sensory applications, IoT, and edge computing devices to high-performance computing and dedicated AI chips, are constantly pushing the boundaries of attainable power, performance, and area (PPA) metrics. The race to design and ship these innovative devices has resulted in a focused, time-to-market-driven e... » read more

High-Speed Serial Comms: Getting There Is Half The Fun


Last month I wrote about our 56G SerDes announcement – silicon validated and running in Rome at a major show. We had a great time at that show and got a lot of compliments about the quality and flexibility of our SerDes. These kinds of unfiltered, unsolicited customer comments are really what makes it all worthwhile. It was a gratifying and exciting time. This month, we’re at it again. O... » read more

The Power Of Ecosystems At Arm TechCon 2018


I have long been fascinated by the workings of ecosystems. Last week’s Arm TechCon in San Jose was a textbook example of how ecosystems work and overlap, and of how electronics development really is like a village: it takes many players working together for end users to receive the latest gadgets, such as phones, fitness trackers, and electronic watches. The game of electronic ecos... » read more

A Paradigm Shift With Vertical Nanowire FETs For 5nm And Beyond


When I was in undergrad not so long ago, all my circuits and semiconductor textbooks/professors were talking about MOSFETs (metal-oxide-semiconductor field-effect transistors) as simply “better” than BJTs (bipolar junction transistors). There were still some older professors talking about the excellent work they had done with BJTs, but everyone knew it was the MOSFET that was leading the game i... » read more

Next-Generation Liberty Verification And Debugging


Accurate library characterization is a crucial step in modern chip design and verification. For full-chip designs with billions of transistors, timing sign-off through simulation is infeasible due to runtime and memory constraints. Instead, a scalable methodology using static timing analysis (STA) is required. This methodology uses the Liberty file to encapsulate library characteristics such ... » read more
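
As a rough illustration of what a Liberty timing model encodes, the Python sketch below interpolates a cell delay from an NLDM-style table indexed by input transition and output load. The table values, units, and grid are invented for this example; real libraries are far larger, and the actual lookup is performed by the STA tool, not by user code.

# Illustrative sketch (values invented): how an STA tool might look up a cell
# delay from a Liberty-style NLDM table, which is indexed by input transition
# and output load and bilinearly interpolated between characterized points.
import numpy as np

# Hypothetical characterization grid for one timing arc of one cell.
input_slews = np.array([0.01, 0.05, 0.20])          # ns
output_loads = np.array([0.001, 0.010, 0.100])      # pF
delay_table = np.array([                             # ns, rows = slew, cols = load
    [0.020, 0.035, 0.120],
    [0.030, 0.048, 0.140],
    [0.055, 0.080, 0.190],
])

def cell_delay(slew_ns: float, load_pf: float) -> float:
    """Bilinear interpolation of the delay table at (slew, load)."""
    def bracket(axis, value):
        # Find the lower grid index and the interpolation fraction, clamped
        # to the characterized range (no extrapolation in this sketch).
        i = int(np.clip(np.searchsorted(axis, value) - 1, 0, len(axis) - 2))
        t = (value - axis[i]) / (axis[i + 1] - axis[i])
        return i, float(np.clip(t, 0.0, 1.0))

    i, ts = bracket(input_slews, slew_ns)
    j, tl = bracket(output_loads, load_pf)
    d00, d01 = delay_table[i, j], delay_table[i, j + 1]
    d10, d11 = delay_table[i + 1, j], delay_table[i + 1, j + 1]
    return (d00 * (1 - ts) * (1 - tl) + d01 * (1 - ts) * tl
            + d10 * ts * (1 - tl) + d11 * ts * tl)

if __name__ == "__main__":
    print(f"delay ~ {cell_delay(0.03, 0.005):.4f} ns")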

AI Chips Must Get The Floating-Point Math Right


Most AI chips and hardware accelerators that power machine learning (ML) and deep learning (DL) applications include floating-point units (FPUs). Algorithms used in today’s neural networks are often based on operations that multiply and add floating-point values, which subsequently need to be scaled to different sizes for different needs. Modern FPGAs such as Intel Arria-10 ... » read more
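
To see why the format of those floating-point operations matters, here is a small NumPy sketch (random data, not from the article) that accumulates the same dot product in float64, float32, and float16. The reduced-precision results drift from the reference, which is the kind of error an accelerator’s FPU design has to bound and verify.

# Small illustration (random data, not from the article): the same
# multiply-accumulate reduction gives different answers depending on the
# floating-point format used, which is why accelerator designers must choose
# and verify their FPU precision carefully.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000)
b = rng.standard_normal(10_000)

ref = np.dot(a, b)                                          # float64 reference
fp32 = np.dot(a.astype(np.float32), b.astype(np.float32))   # single precision
# float16: cast the inputs down and keep the running sum in float16 as well
acc = np.float16(0.0)
for x, y in zip(a.astype(np.float16), b.astype(np.float16)):
    acc = np.float16(acc + np.float16(x * y))

print(f"float64 reference : {ref:.6f}")
print(f"float32 dot       : {fp32:.6f}  (abs err {abs(fp32 - ref):.2e})")
print(f"float16 accumulate: {float(acc):.6f}  (abs err {abs(float(acc) - ref):.2e})")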
