Chip Industry Week In Review


The Chinese Academy of Sciences unveiled a fully automated processor chip design system, which it claims could accelerate semiconductor development and replace human programmers. Micron Technology plans to expand its U.S. investments to approximately $150 billion in domestic memory manufacturing and $50 billion in R&D, which is $30 billion higher than previously reported. AMD laun... » read more

Running More Efficient AI/ML Code With Neuromorphic Engines


Neuromorphic engineering is finally getting closer to market reality, propelled by the AI/ML-driven need for low-power, high-performance solutions. Whether current initiatives will result in true neuromorphic devices, or in devices merely inspired by neuromorphic concepts, remains to be seen. But academic and industry researchers continue to experiment in hopes of achieving significant ... » read more

Chip Industry Week In Review


SK hynix and TSMC plan to collaborate on HBM4 development and next-generation packaging technology, with the goal of mass producing HBM4 chips in 2026. The agreement is an early indicator of just how competitive, and potentially lucrative, the HBM market is becoming. SK hynix said the collaboration will enable breakthroughs in memory performance with increased density of the memory controller at t... » read more

Memory On Logic: The Good And Bad


The chip industry is progressing rapidly toward 3D-ICs, but a simpler step has been shown to provide gains equivalent to a whole node advancement — extracting distributed memories and placing them on top of logic. Memory on logic significantly reduces the distance between logic and directly associated memory. This can increase performance by 22% and reduce power by 36%, according to one re... » read more

Optimizing Energy At The System Level


Power is a ubiquitous concern, and it is impossible to optimize a system's energy consumption without considering the system as a whole. Tremendous strides have been made in optimizing hardware implementations, but that is no longer enough. The complete system must be optimized. This has far-reaching implications, some of which are driving the path toward domain-specific c... » read more

Re-architecting Hardware For Energy


A lot of effort has gone into optimizing a system's power based on the RTL that has been created, but that captures only a small fraction of the power and energy that could be saved. The industry's desire to move to denser systems is being constrained by heat, so there is an increasing focus on re-architecting systems to reduce the energy consumed per useful function performed. Making signifi... » read more

Dramatic Changes Ahead For Chips And Systems


Early this year, most people had never heard of generative AI. Now the entire world is racing to capitalize on it, and that's just the beginning. New markets, such as spatial computing, quantum computing, 6G, smart infrastructure, and sustainability, among many others, are accelerating the need to process more data faster, more efficiently, and with much more domain specificity. Compared to the days ... » read more

AI Races To The Edge


AI is becoming increasingly sophisticated and pervasive at the edge, pushing into new application areas and even taking on some of the algorithm training that has been done almost exclusively in large data centers using massive sets of data. There are several key changes behind this shift. The first involves new chip architectures that are focused on processing, moving, and storing data more... » read more

Disaggregating And Extending Operating Systems


The push toward disaggregation and customization in hardware is starting to be mirrored on the software side, where operating systems are becoming smaller and more targeted, supplemented with additional software that can be optimized for different functions. There are two main causes for this shift. The first is rising demand for highly optimized and increasingly heterogeneous designs, which... » read more

Will Floating Point 8 Solve AI/ML Overhead?


While the media buzzes about the Turing Test-busting results of ChatGPT, engineers are focused on the hardware challenges of running large language models and other deep learning networks. High on the ML punch list is how to run models more efficiently using less power, especially in critical applications like self-driving vehicles, where latency becomes a matter of life or death. AI already ... » read more
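
For readers unfamiliar with what an 8-bit floating-point format actually buys, the short sketch below simulates rounding float32 weights to the OCP FP8 E4M3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits, maximum normal value 448). It is a minimal illustration of the storage format only, not the article's benchmarks or any particular hardware implementation; the function name and the use of NumPy are assumptions made for the example.

import numpy as np

def quantize_fp8_e4m3(x):
    """Round float32 values to the nearest representable FP8 E4M3 value
    (1 sign, 4 exponent, 3 mantissa bits). Illustrative sketch only."""
    x = np.asarray(x, dtype=np.float32)
    sign = np.sign(x)
    mag = np.abs(x)
    mag = np.minimum(mag, 448.0)        # clamp to the E4M3 maximum normal value
    exp = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    exp = np.clip(exp, -6, 8)           # normal exponent range; smaller values fall into subnormals
    step = 2.0 ** (exp - 3)             # 3 mantissa bits give a spacing of 2^(exp - 3)
    return (sign * np.round(mag / step) * step).astype(np.float32)

# Storing weights in 8 bits cuts memory footprint and bandwidth 4x versus float32.
weights = np.array([0.1234, -1.5678, 3.1415, 200.0], dtype=np.float32)
print(quantize_fp8_e4m3(weights))

Running it shows each value snapped to a coarse 8-bit grid, which is the precision loss that has to be weighed against the power and bandwidth savings the article discusses.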
