Placement And CTS Techniques For High-Performance Computing Designs


This paper discusses the challenges of designing high-performance computing (HPC) integrated circuits (ICs) to achieve maximum performance. The design process for HPC ICs has become more complex with each new process technology, requiring new architectures and transistors. We highlight how the Siemens Aprisa digital implementation solution can solve placement and clock tree challenges in HPC de... » read more

Battling Over Shrinking Physical Margin In Chips


Smaller process nodes, coupled with a continual quest to add more features into designs, are forcing chipmakers and systems companies to choose which design and manufacturing groups have access to a shrinking pool of technology margin. In the past, margin largely was split between the foundries, which imposed highly restrictive design rules (RDRs) to compensate for uncertainties in new proces... » read more

Power/Performance Costs Of Securing Systems


For much of the chip industry, concerns about security are relatively new, but the requirement for protecting semiconductor devices is becoming pervasive. Unfortunately for many industries, that lesson has been learned the hard way. Security breaches have led to the loss of sensitive data, ransomware attacks that lock up data, theft of intellectual property or financial resources, and loss o... » read more

Impact Of Increased IC Performance On Memory


Increasing performance in advanced semiconductors is becoming more difficult as chips become more complex. There are more physical effects to contend with, different use cases, and challenges in making memory go faster. In addition, aging effects that once were ignored are now becoming critical concerns. Steven Woo, fellow and distinguished inventor at Rambus, talks about different factors that... » read more

Dealing With Performance Bottlenecks In SoCs


A surge in the amount of data that SoCs need to process is bogging down performance, and while the processors themselves can handle that influx, memory and communication bandwidth are straining. The question now is what can be done about it. The gap between memory and CPU bandwidth — the so-called memory wall — is well documented and definitely not a new problem. But it has not gone away... » read more
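The memory wall can be made concrete with a simple roofline-style calculation. The sketch below is not from the article; the peak-compute and bandwidth figures are hypothetical, chosen only to show how a kernel's achievable throughput is capped by memory traffic long before it reaches the processor's compute limit.

```python
# Back-of-the-envelope roofline estimate (hypothetical numbers, not from the article).
# Attainable throughput is capped by whichever is lower: the core's peak compute rate
# or the memory system's ability to feed it (bandwidth x arithmetic intensity).

PEAK_FLOPS = 2.0e12        # assumed peak compute, FLOP/s
MEM_BANDWIDTH = 100.0e9    # assumed DRAM bandwidth, bytes/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Roofline bound for a kernel doing `arithmetic_intensity` FLOPs per byte moved."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * arithmetic_intensity)

# A streaming kernel doing ~0.25 FLOP per byte is bandwidth-bound:
# it reaches only 25 GFLOP/s of the 2 TFLOP/s peak, regardless of core count.
for ai in (0.25, 2.0, 20.0, 200.0):
    print(f"intensity {ai:6.2f} FLOP/byte -> {attainable_flops(ai)/1e9:8.1f} GFLOP/s")
```

With these assumed numbers, any kernel performing fewer than about 20 floating-point operations per byte moved is limited by bandwidth rather than compute, which is why faster processors alone do not close the gap.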

Improving PPA When Embedding FPGAs Into SoCs


Embedded FPGAs have been on everyone’s radar for years as a way of extending the life of chips developed at advanced nodes, but they typically have come with high performance and power overhead. That’s no longer the case, and the ability to control complex chips and keep them current with changes to algorithms and various protocols is a significant step. Geoff Tate, CEO of Flex Logix, talks a... » read more

HBM3 In The Data Center


Frank Ferro, senior director of product management at Rambus, talks about the forthcoming HBM3 standard, why this is so essential for AI chips and where the bottlenecks are today, what kinds of challenges are involved in working with this memory, and what impact chiplets and near-memory compute will have on HBM and bandwidth. » read more

Can Analog Make A Comeback?


We live in an analog world dominated by digital processing, but that could change. Domain specificity, and the desire for greater levels of optimization, may provide analog compute with some significant advantages — and the possibility of a comeback. For the last four decades, the advantages of digital scaling and flexibility have pushed the dividing line between analog and digital closer ... » read more

Zero Dark Silicon


Planning for AI requires an understanding of how much data needs to be processed and how quickly that needs to happen. Nick Ni, senior director of data center AI and compute markets at AMD, talks with Semiconductor Engineering about data bubbles and domain-specific designs, why dark silicon is no longer as useful as in the past, and how to optimize power and performance in both the data center ... » read more

Architecting Faster Computers


To create faster computers, the industry must take a major step back and re-examine choices that were made half a century ago. One of the most likely approaches involves dropping demands for determinism, and this is being attempted in several different forms. Since the establishment of the von Neumann architecture for computers, small, incremental improvements have been made to architectures... » read more
