Surround And Conquer


The processor wars are back in full swing, this time with some new players in the field. But what defines winning this time around is far less obvious than it was in the past, and it will take years before we know the outcome. The strategy is the same, though, and it's one that has been in use for years in the tech world. It began in the 1990s, when IBM came to the realization that it could ... » read more

Chiplets, Faster Interconnects, More Efficiency


Big chipmakers are turning to architectural improvements such as chiplets, faster throughput both on-chip and off-chip, and more work per operation or cycle in order to ramp up processing speed and efficiency. Taken as a whole, this represents a significant shift in direction for the major chip companies. All of them are wrestling with massive increases in processing demands ... » read more

Speed Returns As The Key Metric


For the foreseeable future, it's all about performance. For the past decade or so, power and battery life have been the defining characteristics of chip design, with performance second to those. This was particularly important in smartphones and wearable devices, where time between charges was a key selling point. In fact, power-hungry processors killed the first round of smart watches. But ... » read more

Providing An AI Accelerator Ecosystem


A key design area for AI systems is the creation of Machine Learning (ML) algorithms that can be accelerated in hardware to meet power and performance goals. Teams designing these algorithms find out quickly that a traditional RTL design flow will no longer work if they want to meet their delivery schedules. The algorithms are often subject to frequent changes, the performance requirements may ... » read more

The Race To Multi-Domain SoCs


K. Charles Janac, president and CEO of Arteris IP, sat down with Semiconductor Engineering to discuss the impact of automotive and AI on chip design. What follows are excerpts of that conversation. SE: What do you see as the biggest changes over the next 12 to 24 months? Janac: There are segments of the semiconductor market that are shrinking, such as DTV and simple IoT. Others are going ... » read more

AI Chips: NoC Interconnect IP Solves Three Design Challenges


New network-on-chip (NoC) interconnect IP is now available for artificial intelligence (AI) systems-on-chip (SoCs). Arteris IP launched the fourth generation of the FlexNoC interconnect IP with a new optional AI package. The new NoC interconnect technology solves many data flow problems in today’s AI designs. Innovative features address the requirements of the next generation of AI chips t... » read more

It’s All About The Data


The entire tech industry has changed in several fundamental ways over the past year due to the massive growth in data. Individually, those changes are significant. Taken together, those changes will have a massive impact on the chip industry for the foreseeable future. The obvious shift is the infusion of AI (and its subcategories, machine learning and deep learning) into different markets. ... » read more

FPGA Graduates To First-Tier Status


Robert Blake, president and CEO of Achronix, sat down with Semiconductor Engineering to talk about fundamental shifts in compute architectures and why AI, machine learning and various vertical applications are driving demand for discrete and embedded FPGAs. SE: What’s changing in the FPGA market? Blake: Our big focus is developing the next-generation architecture. We started this projec... » read more

AI Chip Architectures Race To The Edge


As machine-learning apps start showing up in endpoint devices and along the network edge of the IoT, the accelerators that make AI possible may look more like FPGA and SoC modules than current data-center-bound chips from Intel or Nvidia. Artificial intelligence and machine learning need powerful chips for computing answers (inference) from large data sets (training). Most AI chips—both tr... » read more

Power/Performance Bits: Nov. 20


In-memory compute accelerator Engineers at Princeton University built a programmable chip that features an in-memory computing accelerator. Targeted at deep learning inferencing, the chip aims to reduce the bottleneck between memory and compute in traditional architectures. The team's key to performing compute in memory was using capacitors rather than transistors. The capacitors were paire... » read more
