GDDR7 Memory Supercharges AI Inference


GDDR7 is the state-of-the-art graphics memory solution, with a performance roadmap of up to 48 Gigatransfers per second (GT/s) and memory throughput of 192 GB/s per GDDR7 memory device. The next generation of GPUs and accelerators for AI inference will use GDDR7 memory to provide the bandwidth these demanding workloads require. AI spans two applications: training and inference. With tr... » read more
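To make the headline figure concrete, here is a minimal back-of-the-envelope sketch. It assumes a 32-bit-wide interface per GDDR7 device (an assumption, not stated in the excerpt): at 48 GT/s per pin, a 32-bit device moves 48 × 32 / 8 = 192 GB/s.

```python
# Back-of-the-envelope check of the 192 GB/s per-device figure.
# Assumption (not stated in the excerpt above): a single GDDR7 device
# exposes a 32-bit-wide interface, so each transfer moves 32 bits.

data_rate_gt_per_s = 48      # roadmap data rate per pin, in GT/s
interface_width_bits = 32    # assumed per-device interface width

device_bandwidth_gb_per_s = data_rate_gt_per_s * interface_width_bits / 8
print(f"{device_bandwidth_gb_per_s:.0f} GB/s per device")  # -> 192 GB/s
```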

3DIO IP For Multi-Die Integration


By Lakshmi Jain and Wei-Yu Ma

The demand for high-performance computing, next-gen servers, and AI accelerators is growing rapidly, increasing the need for faster data processing as workloads expand. This rising complexity presents two significant challenges: manufacturability and cost. From a manufacturing standpoint, these processing engines are nearing the maximum size that lithogra... » read more

Real-Time Low Light Video Enhancement Using Neural Networks On Mobile


Video conferencing is a ubiquitous tool for communication, especially for remote work and social interactions. However, it is not always a straightforward plug-and-play experience, as adjustments may be needed to ensure a good audio and video setup. Lighting is one such factor that can be tricky to get right. A nicely illuminated video feed looks presentable in a meeting, but on the other hand,... » read more

Is Liquid Cooling Right For Your Data Center?


We live in an exciting time: liquid cooling, which once seemed more trouble than it was worth, is fast becoming an accepted and sought-after technology in the data center industry. That said, it’s still a complex technology to implement, especially in legacy facilities. Is your data center ready to operationalize liquid cooling?

Liquid cooling in the data center

Liquid cooling in the d... » read more

Cloud Or On-premises? Why Not Both: A Hybrid Approach For Structure Simulation


Faced with large problem sizes and urgent deadlines, it’s not surprising that more and more product development teams are accessing high-performance computing (HPC) resources on the cloud. After all, a cloud computing model enables you to access the most advanced, leading-edge software and hardware on demand. There are no queues or wait times. Users can “dial up” core counts and other set... » read more

Elimination Of Functional False Path During RDC Analysis


Reset domain crossing (RDC) issues can occur in sequential designs when the reset of a source register differs from the reset of a destination register, even if the data path is in the same clock domain. This can lead to asynchronous crossing paths and metastability at the destination register. RDC analysis on RTL designs is done to find such metastability issues in a design, which may occur du... » read more
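As an illustration of the core check an RDC analysis performs, here is a simplified sketch: it flags any data path whose source and destination registers use different resets, even when both share a clock. The register, clock, and reset names are hypothetical, and real RDC tools operate on elaborated RTL rather than on a Python data structure.

```python
# Illustrative sketch of the basic reset-domain-crossing check: flag data
# paths where the source register's reset differs from the destination
# register's reset, even within a single clock domain. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class Register:
    name: str
    clock: str
    reset: str   # asynchronous reset driving this register

registers = {
    "tx_ctrl": Register("tx_ctrl", clock="clk_core", reset="rst_tx_n"),
    "rx_stat": Register("rx_stat", clock="clk_core", reset="rst_rx_n"),
    "cfg_reg": Register("cfg_reg", clock="clk_core", reset="rst_tx_n"),
}

# data paths as (source register, destination register) pairs
paths = [("tx_ctrl", "rx_stat"), ("tx_ctrl", "cfg_reg")]

def find_rdc_violations(paths, registers):
    """Return paths whose source and destination resets differ."""
    return [
        (src, dst)
        for src, dst in paths
        if registers[src].reset != registers[dst].reset
    ]

print(find_rdc_violations(paths, registers))
# -> [('tx_ctrl', 'rx_stat')]: the source can be reset asynchronously while
#    the destination is not, so the destination register may go metastable.
```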

Simulation Replay Tackles Key Verification Challenges


Simulation lies at the heart of both verification and pre-silicon validation for every semiconductor development project. Finding functional or power problems in the bringup lab is much too late, leading to very expensive chip turns. Thorough simulation before tapeout, coupled with comprehensive coverage metrics, is the only way to avoid surprises in silicon. However, the enormous size and comp... » read more

eFuses: Use Cases, Benefits, And Design In The System Context


In recent years, the automotive industry has been one of the drivers of innovation in the field of electrical and electronic system safety. This is primarily due to the rapid uptake and, in some cases, already mandatory use of advanced driver assistance systems (ADAS) as well as to the first steps toward (partially) autonomous vehicles. In conventional safety systems, the off state can be as... » read more

Can You Rely Upon Your NPU Vendor To Be Your Customers’ Data Science Team?


The biggest mistake a chip design team can make in evaluating AI acceleration options for a new SoC is to rely entirely upon spreadsheets of performance numbers from the NPU vendor without going through the exercise of porting one or more new machine learning networks themselves using the vendor toolsets. Why is this a huge red flag? Most NPU vendors tell prospective customers that (1) the v... » read more

HBM4 Feeds Generative AI’s Hunger For More Memory Bandwidth


Generative AI (Gen AI), built on the exponential growth of Large Language Models (LLMs) and their kin, is one of today’s biggest drivers of computing technology. Leading-edge LLMs now exceed a trillion parameters and offer multimodal capabilities, so they can take a broad range of inputs, whether text, speech, images, video, code, or other modalities, and generate an equally broa... » read more
