Memory Architectures In AI: One Size Doesn’t Fit All


In the world of regular computing, we are used to certain ways of architecting for memory access to meet latency, bandwidth and power goals. These have evolved over many years to give us the multiple layers of caching and hardware cache-coherency management schemes which are now so familiar. Machine learning (ML) has introduced new complications in this area for multiple reasons. AI/ML chips ca... » read more

Multiple Approaches To Memory Challenges


As we enter the era of Big Data and Artificial Intelligence (AI), the requirements for memory solutions are poised for a truly seismic shift. The amount of data humans generate every year is already astounding, yet total data volume is expected to increase five-fold in the next few years, driven largely by machine-generated data. Further compounding this growth is the emergin... » read more

Adapting Mobile To A Post-Moore’s Law Era


The slowdown in Moore's Law is having a big impact on chips designed for the mobile market, where battery-powered devices still need to improve performance while reducing power. This hasn't slowed down performance or power improvements, but it has forced chipmakers and systems companies to approach designs differently. And while feature shrinks will continue for the foreseeable future, they are ... » read more

Chip Industry In Rapid Transition


Wally Rhines, CEO Emeritus at Mentor, a Siemens Business, sat down with Semiconductor Engineering to talk about global economics, AI, the growing emphasis on customization, and the impact of security and higher abstraction levels. What follows are excerpts of that conversation. SE: Where do you see the biggest changes happening across the chip industry? Rhines: 2018 was a hot year for fab... » read more

Top Stories For 2018


Each year, I look back to see what articles people like to read. The first thing that has amazed me each year at Semiconductor Engineering is that what should be a strong bias towards articles published early in the year never seems to play out. The same is true this year. More than half of the top articles were published after July. The second thing that remains constant is that people love... » read more

Accelerators Everywhere. Now What?


It's a good time to be a data scientist, but it's about to become much more challenging for software and hardware engineers. Understanding the different accelerator types, and how data flows through them, is the next path forward in system design. As the number of sources of data rises, creating exponential spikes in the volume of data, entirely new approaches to computing will be required. The problem is understandi... » read more

Looking For The Next Big Innovation


Never has there been more demand for “The Big Innovation” — one that moves the needle for performance, power and area-cost (PPAC) in a big way — as there is in the current era of AI and machine learning (ML). As summarized in Why AI Workloads Require New Computing Architectures, AI workloads require new architectures to process data. These workloads also call for heterogeneous comp... » read more

Big Changes For Mainstream Chip Architectures


Chipmakers are working on new architectures that significantly increase the amount of data that can be processed per watt and per clock cycle, setting the stage for one of the biggest shifts in chip architectures in decades. All of the major chipmakers and systems vendors are changing direction, setting off an architectural race that includes everything from how data is read and written in m... » read more

The Data Center In 2018 And Beyond


As computing continues to evolve, a number of trends are continuing to challenge the design of conventional von Neumann architectures, and in turn are driving the development of new architectural approaches and technologies. These include the growing adoption of artificial intelligence (AI), machine learning, AR/VR, IoT, high-speed financial transactions, self-driving vehicles, and blockchain/c... » read more

Move Data Or Process In Place? (Part 2)


Chip architects, and even local system architects, long have found that the best way to improve total system performance and power consumption is to move memory as close to processors as possible. This has led to cache architectures and memories that are tuned for those architectures, as discussed in part 1 of this article. But there are several tacit assumptions made in these architectur... » read more