Blog Review: Aug. 28


Synopsys' Jon Ames checks out how the Ultra Ethernet Consortium aims to revolutionize networking by optimizing Ethernet for rapidly evolving AI and HPC workloads, addressing critical issues such as the tail latency encountered by machine learning algorithms in large compute clusters. Cadence's Kos Gitchev introduces the DDR5 Multiplexed Rank DIMM (MRDIMM), a memory module technology ... » read more
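Tail latency matters at scale because a synchronized training step finishes only when its slowest flow finishes. A minimal sketch of that intuition, with the per-flow tail probability and cluster sizes below chosen purely as illustrative assumptions rather than figures from the post:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch of why tail latency dominates at scale: if each flow
// independently exceeds its latency budget with probability p, a synchronized
// step spanning n flows is delayed whenever at least one flow straggles,
// i.e. with probability 1 - (1 - p)^n. The values of p and n are assumptions
// chosen for illustration only.
int main() {
    const double p = 0.01;  // assumed 1% per-flow tail event
    for (int n : {1, 10, 100, 1000, 10000}) {
        const double delayed = 1.0 - std::pow(1.0 - p, n);
        std::printf("flows=%5d  P(step waits on a straggler)=%.4f\n", n, delayed);
    }
    return 0;
}
```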

As EDA Processes Become More Secure, So Do Chips


Security is becoming a much bigger concern within chips and electronic systems, but the actual implementation remains something of an afterthought, which limits its effectiveness. There are many pieces to the security puzzle on the chip design side that go well beyond just securing the hardware or the IP. The EDA tools themselves need to be secure, as well, and so does the user data within t... » read more

Chip Industry Week In Review


Chinese firms imported almost $26 billion worth of chipmaking machinery, according to fresh trade data released by China’s General Administration of Customs this week, Bloomberg reports. Meanwhile, the global semiconductor manufacturing industry continued to show signs of improvement in Q2 2024, with significant growth in IC sales, stabilizing capital expenditure, and an increase in install... » read more

Next-Gen High-Speed Communication In Data Centers


Data centers are being flooded with data. While more of it needs to be processed locally, much of it also needs to be moved around within a system and between systems. This has put a spotlight on a variety of new optical technologies and methodologies. Yang Zhang, senior product marketing manager at Cadence, talks about the rapid increase in different types of optics and optical scenarios being... » read more

Blog Review: Aug. 21


Cadence's Reela Samuel explores the critical role of PCIe 6.0 equalization in maintaining signal integrity, along with ways to mitigate verification challenges, such as creating checkers to verify all TS0 symbols, ensuring that scrambling functions correctly, and monitoring phase and LTSSM state transitions. Siemens' John McMillan introduces an advanced packaging flow for Intel's Embedded ... » read more

Blog Review: Aug. 14


Cadence's Dimitry Pavlovsky highlights two new features in the AMBA CHI protocol Issue G update that enhance the security of the Arm architecture: Memory Encryption Contexts, which allows data in each Realm in memory to be encrypted with a different encryption key, and Device Assignment, which introduces hardware provisions to support fully coherent caches in partially trusted remote coherent d... » read more

Reusable Power Models


Power is not a new concern, and proprietary models are available for some tasks, but the industry lacks standardization. The Silicon Integration Initiative (Si2) is hoping to help resolve that with an upcoming release of IEEE 2416, based on its Unified Power Model (UPM) work. The creation of any model is not to be taken lightly. There is a cost to its creation, verification and maintenance. ... » read more

CPU Performance Bottlenecks Limit Parallel Processing Speedups


Multi-core processors theoretically can run many threads of code in parallel, but some categories of operations currently bog down attempts to raise overall performance through parallelization. Is it time for accelerators dedicated to running highly parallel code? Standard processors have many CPUs, so it follows that cache coherency and synchronization can involve thousands of cycles of low-le... » read more
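As a rough illustration of why fine-grained synchronization erodes parallel speedups, the sketch below contrasts a single contended atomic counter with per-thread partial sums; the thread and iteration counts are arbitrary assumptions, not measurements from the article:

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Rough sketch only: the shared atomic counter forces its cache line to
// bounce between cores on every increment, serializing the "parallel" work,
// while per-thread partial sums touch private data and are combined once.
// An optimizing compiler may collapse the private loop, so treat the
// measured gap as illustrative rather than a benchmark.
int main() {
    const unsigned threads = std::max(2u, std::thread::hardware_concurrency());
    const long iters = 5'000'000;  // assumed per-thread workload
    using steady = std::chrono::steady_clock;

    // Contended version: one atomic counter shared by every thread.
    std::atomic<long> shared{0};
    auto t0 = steady::now();
    {
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&] {
                for (long i = 0; i < iters; ++i)
                    shared.fetch_add(1, std::memory_order_relaxed);
            });
        for (auto& th : pool) th.join();
    }
    auto t1 = steady::now();

    // Sharded version: each thread accumulates privately, then one reduction.
    std::vector<long> partial(threads, 0);
    {
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&, t] {
                long local = 0;
                for (long i = 0; i < iters; ++i)
                    ++local;
                partial[t] = local;
            });
        for (auto& th : pool) th.join();
    }
    auto t2 = steady::now();

    const long total = std::accumulate(partial.begin(), partial.end(), 0L);
    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("contended atomic : sum=%ld in %lld ms\n", shared.load(), (long long)ms(t1 - t0));
    std::printf("per-thread sums  : sum=%ld in %lld ms\n", total, (long long)ms(t2 - t1));
    return 0;
}
```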

Chip Industry Week In Review


Three Fraunhofer Institutes (IIS/EAS, IZM, and ENAS) launched the Chiplet Center of Excellence, a research initiative to support the commercial introduction of chiplet technology. The center initially will focus on automotive electronics, developing workflows and methods for electronics design, demonstrator construction, and the evaluation of reliability. The UCIe Consortium published the Un... » read more

HBM3E: All About Bandwidth


The rapid rise in size and sophistication of AI/ML training models requires increasingly powerful hardware deployed in the data center and at the network edge. This growth in complexity and data stresses the existing infrastructure, driving the need for new and innovative processor architectures and associated memory subsystems. For example, even GPT-3 at 175 billion parameters is stressing the... » read more
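For a sense of scale, per-stack HBM3E bandwidth can be estimated from the interface width and per-pin data rate; the 1,024-bit width, 9.6 Gb/s rate, and stack count below are typical published figures used here as assumptions, not numbers from the post:

```cpp
#include <cstdio>

// Rough bandwidth arithmetic for one HBM3E stack (assumed figures):
//   1024 data pins x 9.6 Gb/s per pin = 9830.4 Gb/s, or roughly 1.2 TB/s.
int main() {
    const double pins = 1024.0;       // assumed interface width (bits)
    const double gbps_per_pin = 9.6;  // assumed per-pin data rate (Gb/s)
    const int stacks = 8;             // assumed stacks on one accelerator

    const double stack_tbps = pins * gbps_per_pin / 8.0 / 1000.0;  // TB/s per stack
    std::printf("per-stack bandwidth: %.2f TB/s\n", stack_tbps);
    std::printf("%d stacks: %.1f TB/s aggregate\n", stacks, stacks * stack_tbps);
    return 0;
}
```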
