2.5D Architecture Answers AI Training’s Call for “All of the Above”


The impact of AI/ML grows daily, touching every industry and the lives of everyone. In marketing, healthcare, retail, transportation, manufacturing and more, AI/ML is a catalyst for great change. This rapid advance is powerfully illustrated by the growth in AI/ML training capabilities, which have increased by a factor of 10X every year since 2012. Today, AI/ML neural network training mod... » read more

What Is DRAM’s Future?


Memory — and DRAM in particular — has moved into the spotlight as it finds itself in the critical path to greater system performance. This isn't the first time DRAM has been the center of attention when it comes to performance. The problem is that not everything progresses at the same rate, creating serial bottlenecks in everything from processor performance to transistor design, and even the t... » read more

AI Requires Tailored DRAM Solutions


For over 30 years, DRAM has continuously adapted to the needs of each new wave of hardware, spanning PCs, game consoles, mobile phones and cloud servers. Each generation of hardware required DRAM to hit new benchmarks in bandwidth, latency, power or capacity. Looking ahead, the 2020s will be the decade of artificial intelligence/machine learning (AI/ML), touching every industry and applicatio... » read more

More Multiply-Accumulate Operations Everywhere


Geoff Tate, CEO of Flex Logix, sat down with Semiconductor Engineering to talk about how to build programmable edge inferencing chips and embedded FPGAs, where the markets for both are developing, and how the picture will change over the next few years.

SE: What do you have to think about when you're designing a programmable inferencing chip?

Tate: With a traditional FPGA architecture you ha... » read more

PCIe 5.0 Drill-Down


Suresh Andani, senior director of product marketing for SerDes IP at Rambus, digs into the new PCI Express standard, why it’s so important for data centers, how it compares with previous versions of the standard, and how it will fit into existing and non-von Neumann architectures. » read more

Blog Review: April 1


Rambus' Steven Woo takes an in-depth look at on-chip memory for high-performance AI applications and explores some of the primary differences between HBM and GDDR6. Synopsys' Taylor Armerding warns of the risks of legacy vulnerabilities, where software has problems that were never fixed and then forgotten, or never discovered in the first place, and outlines key steps for finding and addressing them... » read more

Week In Review: Auto, Security, Pervasive Computing


AI/Edge
The United States now has the highest number of COVID-19 cases, and state governments in the U.S. are asking technologists for help, according to a story in The Washington Post. Data scientists, software developers, and others are needed to help. New York State started a Technology SWAT team, calling for help from the tech community. Intel AI Builder program participant DarwinAI ... » read more

Week In Review: Design, Low Power


Tools & IP Synopsys debuted VIP and a UVM source code test suite for IP supporting Ethernet 800G. The VIP supports DesignWare 56G Ethernet, 112G Ethernet, and 112G USR/XSR PHYs for FinFET processes, which can be integrated for 800G implementations based on 8 lane x 100 Gb/s technology. The VIP can switch speed configurations dynamically at run time and includes a customizable set of frame ... » read more

Blog Review: March 25


Rambus' Steven Woo checks out the memory systems commonly used in the highest-performance AI applications and points to the differences between on-chip memory, HBM, and GDDR. Mentor's Colin Walls considers whether software for embedded systems should be delivered as a binary library or as source code, and warns of some key potential issues when requesting source code. A Synopsys writer poi... » read more

Blog Review: March 18


Arm's Divya Prasad investigates whether power rails buried below the BEOL metal stack and back-side power delivery can help alleviate some of the major physical design challenges facing 3nm nodes and beyond. Rambus' Steven Woo takes a look at a Roofline model for analyzing machine learning applications that illustrates how AI applications perform on Google’s tensor processing unit... » read more
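As background for the Roofline item above: the model bounds a kernel's attainable throughput by the lesser of peak compute and the product of memory bandwidth and arithmetic intensity (FLOPs per byte of data moved). A minimal Python sketch of that bound, using placeholder hardware numbers rather than actual TPU figures, might look like this:

```python
# A minimal sketch of the Roofline model, not the specific analysis in the
# post above. The hardware numbers are illustrative placeholders, not
# actual TPU specifications.

def attainable_gflops(peak_gflops: float, peak_bw_gb_s: float,
                      arithmetic_intensity: float) -> float:
    """Roofline bound: a kernel is limited either by peak compute or by
    memory bandwidth times its arithmetic intensity (FLOPs per byte)."""
    return min(peak_gflops, peak_bw_gb_s * arithmetic_intensity)

# Placeholder accelerator: 90,000 GFLOP/s peak compute, 600 GB/s DRAM bandwidth.
PEAK_GFLOPS = 90_000.0
PEAK_BW_GB_S = 600.0

for ai in (1, 10, 100, 1_000):
    bound = attainable_gflops(PEAK_GFLOPS, PEAK_BW_GB_S, ai)
    limiter = "memory-bound" if bound < PEAK_GFLOPS else "compute-bound"
    print(f"{ai:>5} FLOP/byte -> {bound:>9,.0f} GFLOP/s ({limiter})")
```

Kernels whose arithmetic intensity falls below the ridge point (peak compute divided by bandwidth) are memory-bound, which is why the choice of memory system figures so prominently in AI performance discussions like the ones above.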
