Where 5G Works, And Where It Doesn’t


The rollout of 5G hype has begun. Companies are building 5G chipsets for mobile devices, and they are working on the infrastructure that will allow massive amounts of data to move freely between devices. There is little doubt that more bandwidth is required everywhere. Files are growing in size, particularly with streaming video, images, and various flavors of AI and machine learning. This... » read more

Playing Into China’s Hands


The fallout over the blacklisting of Huawei in particular, and China in general, has set the tone for a nasty global race. But it is almost certain to produce a different result than the proponents of a trade war are expecting. The idea behind tariffs and the blacklisting of Huawei is to starve China of vital technology. So far, the impact has been minimal. Reports from inside China are equa... » read more

Bottlenecks For Edge Processors


New processor architectures are being developed that can provide two to three orders of magnitude improvement in performance. The question now is whether the performance in systems will be anything close to the processor benchmarks. Most of these processors do one thing very well. They handle specific data types and can accelerate the multiply-accumulate functions for algorithms by distri... » read more
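The multiply-accumulate pattern mentioned above can be sketched in a few lines. This is a minimal illustration, not any vendor's architecture: it splits a dot product, the MAC chain at the heart of most neural-network layers, across a hypothetical number of processing elements and reduces the partial sums. The function names and the `n_elements` parameter are illustrative assumptions.

```python
def mac_chunk(weights, activations):
    """Multiply-accumulate over one slice of the data."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a
    return acc

def distributed_dot(weights, activations, n_elements=4):
    """Split the MAC chain across hypothetical processing elements,
    then reduce the partial sums -- the pattern these chips accelerate
    in hardware rather than in software."""
    size = len(weights)
    step = -(-size // n_elements)  # ceiling division: slice size per element
    partials = [
        mac_chunk(weights[i:i + step], activations[i:i + step])
        for i in range(0, size, step)
    ]
    return sum(partials)

# Each partial sum could run on a separate processing element; the final
# reduction is where data movement -- the real bottleneck -- shows up.
result = distributed_dot([1, 2, 3, 4], [5, 6, 7, 8])  # 5 + 12 + 21 + 32 = 70
```

The sketch also hints at the system-level question raised above: the arithmetic parallelizes trivially, but feeding the slices to the processing elements and gathering the partials is the part that processor benchmarks tend not to capture.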

More Memory And Processor Tradeoffs


Creating a new chip architecture is becoming an increasingly complex series of tradeoffs about memories and processing elements, but the benefits are not always obvious when those tradeoffs are being made. This used to be a fairly straightforward exercise when there was one processor, on-chip SRAM and off-chip DRAM. Fast forward to 7/5nm, where chips are being developed for AI, mobile ph... » read more

Power Budgets At 3nm And Beyond


There is high confidence that digital logic will continue to shrink at least to 3nm, and possibly down to 1.5nm. Each of those will require significant changes in how design teams approach power. This is somewhat evolutionary for most chipmakers. Five years ago there were fewer than a handful of power experts in most large organizations. Today, everyone deals with power in one way or another... » read more

Arms Race In Chip Performance


An AI arms race is taking shape across continents. While this is perilous on many fronts, it could provide a massive boost for chip technology—and help to solve a long-simmering problem in computing, as well as lots of lesser ones. The U.S. government this week announced its AI Initiative, joining an international scramble for the fastest way to do multiply/accumulate and come up with ... » read more

Low Power At The Edge


The tech world has come to the realization in recent months that there is far too much data to process everything in the cloud. Now it is starting to come to grips with what that really means for edge and near-edge computing. There still are no rules for where or how that data will be parsed, but there is a growing recognition that some level of pre-processing will be necessary, and that in tur... » read more

Security, Scaling and Power


If anyone has doubts about the slowdown and increasing irrelevance of Moore's Law, Intel's official unveiling of its advanced packaging strategy should leave little doubt. Inertia has ended and the roadmap is being rewritten. Intel's discussion of advanced packaging is nothing new. The company has been public about its intentions for years, and started dropping hints back when Pat Gelsinger ... » read more

Accelerators Everywhere. Now What?


It's a good time to be a data scientist, but it's about to become much more challenging for software and hardware engineers. Understanding the different data types and how data flows is the next path forward in system design. As the number of sources of data rises, creating exponential spikes in the volume of data, entirely new approaches to computing will be required. The problem is understandi... » read more

Making AI Run Faster


The semiconductor industry has woken up to the fact that heterogeneous computing is the way forward and that inferencing will require more than a GPU or a CPU. The numbers being bandied about by the 30 or so companies working on this problem are 100X improvements in performance. But how to get there isn't so simple. It requires four major changes, as well as some other architectural shifts. ... » read more
