Architectural Considerations For AI


Custom chips labeled as artificial intelligence (AI) or machine learning (ML) devices are appearing on a weekly basis, each claiming to be 10X faster than existing devices or to consume 1/10 the power. Whether that is enough to dethrone existing architectures, such as GPUs and FPGAs, or whether they will survive alongside those architectures isn't clear yet. The problem, or the opportunity, is that t... » read more

Customizing Chips For Power And Performance


Sandro Cerato, senior vice president and CTO of the Power & Sensor Systems Business Unit at Infineon Technologies, sat down with Semiconductor Engineering to talk about fundamental shifts in chip design with the rollout of the edge, AI, and more customized solutions. What follows are excerpts of that conversation.

SE: The chip market is starting to fall into three distinct buckets, the e... » read more

Configuring AI Chips


Change is almost constant in AI systems. Vinay Mehta, technical product marketing manager at Flex Logix, talks about the need for flexible architectures to deal with continual modifications in algorithms, more complex convolutions, and unforeseen system interactions, as well as the ability to apply all of this over longer chip lifetimes. A differ... » read more

Hyperconnectivity, Hyperscale Computing, And Moving Edges


As described in “The Four Pillars of Hyperscale Computing” last year, the four core components that development teams consider for data centers are computing, storage, memory, and networking. Over the previous decade, requirements for programmability have fundamentally changed data centers. Just over a decade ago, in 2010, virtual machines would compute user workloads on CPU-centric archite... » read more

Kria K26 SOM: The Ideal Platform For Vision AI At The Edge


With various advancements in artificial intelligence (AI) and machine learning (ML) algorithms, many high-compute applications are now getting deployed on edge devices. So there is a need for hardware that can execute complex algorithms efficiently and adapt to rapid enhancements in this technology. Xilinx's Kria K26 SOM is designed to address the requirements of executing ... » read more

Why Reconfigurability Is Essential For AI Edge Inference Throughput


For a neural network to run at its fastest, the underlying hardware must execute every layer efficiently. During inference of any CNN, whether it is based on an architecture such as YOLO, ResNet, or Inception, the workload regularly shifts from being bottlenecked by memory to being bottlenecked by compute resources. You can think of each convolutional layer as its own mini-workload, and so... » read more
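That per-layer shift can be made concrete with a back-of-the-envelope roofline check. The Python sketch below computes a rough arithmetic intensity (FLOPs per byte moved) for a few hypothetical convolutional layers and compares it against an assumed accelerator's compute and bandwidth peaks; the layer shapes and hardware figures are illustrative assumptions, not numbers from the article.

```python
# Rough sketch: classify convolutional layers as compute-bound or memory-bound
# using a simple roofline-style model. Layer shapes and hardware peaks below
# are hypothetical and chosen only to illustrate the per-layer shift.

def conv_layer_intensity(h, w, c_in, c_out, k, bytes_per_elem=1):
    """Return (flops, bytes_moved, arithmetic_intensity) for one conv layer."""
    macs = h * w * c_in * c_out * k * k            # multiply-accumulates
    flops = 2 * macs
    activations_in = h * w * c_in * bytes_per_elem
    activations_out = h * w * c_out * bytes_per_elem
    weights = c_in * c_out * k * k * bytes_per_elem
    bytes_moved = activations_in + activations_out + weights
    return flops, bytes_moved, flops / bytes_moved

# Illustrative peaks for a hypothetical edge accelerator.
PEAK_FLOPS = 4e12                  # 4 TOPS
PEAK_BW = 25e9                     # 25 GB/s DRAM bandwidth
RIDGE = PEAK_FLOPS / PEAK_BW       # intensity at which compute becomes the limit

layers = [
    ("early 3x3, 224x224x32", (224, 224, 32, 64, 3)),
    ("late 3x3, 14x14x512",   (14, 14, 512, 512, 3)),
    ("1x1 projection, 14x14", (14, 14, 512, 128, 1)),
]

for name, shape in layers:
    flops, nbytes, ai = conv_layer_intensity(*shape)
    bound = "compute-bound" if ai > RIDGE else "memory-bound"
    print(f"{name:24s}  intensity={ai:7.1f} FLOP/byte -> {bound}")
```

Under these assumed peaks the large 3x3 layers land above the ridge point while the 1x1 projection falls below it, which is the kind of layer-to-layer swing that motivates reconfigurable inference hardware.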

IC Security Threat Grows As More Devices Are Connected


Designing for security is beginning to gain traction across a wider swath of chips and systems as more of them are connected to the Internet and to each other, sometimes in safety- and mission-critical markets where the impact of a cyber attack can be devastating. But it's also becoming more difficult to design security into these systems. Unlike in the past, connectivity is now considered e... » read more

Maximizing Edge AI Performance


Inference of convolutional neural network models is algorithmically straightforward, but to get the fastest performance for your application there are a few pitfalls to keep in mind when deploying. A number of factors make efficient inference difficult; we will first step through these before diving into specific solutions to address each. By the end of this article, you will be arm... » read more
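To ground the claim that CNN inference is algorithmically straightforward, here is a minimal direct convolution in Python: nested loops over output pixels, each one a dot product. The shapes and the conv2d_naive helper are hypothetical; real deployments replace this with tiled, vectorized kernels precisely because the naive loop order moves far too much data per multiply-accumulate.

```python
# Minimal sketch of a direct 2D convolution (stride 1, no padding).
# Shapes are illustrative; this is not how production runtimes implement it.

import numpy as np

def conv2d_naive(x, w):
    """x: (H, W, C_in) input; w: (K, K, C_in, C_out) weights."""
    H, W, C_in = x.shape
    K, _, _, C_out = w.shape
    out = np.zeros((H - K + 1, W - K + 1, C_out), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # One output pixel = dot product of a KxKxC_in patch with each filter.
            patch = x[i:i + K, j:j + K, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

x = np.random.rand(16, 16, 8).astype(np.float32)
w = np.random.rand(3, 3, 8, 4).astype(np.float32)
print(conv2d_naive(x, w).shape)   # (14, 14, 4)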

New Uses For AI


AI is being embedded into an increasing number of technologies that are commonly found inside most chips, and initial results show dramatic improvements in both power and performance. Unlike high-profile AI implementations, such as self-driving cars or natural language processing, much of this work flies well under the radar for most people. It generally takes the path of least disruption, b... » read more

The Best AI Edge Inference Benchmark


When evaluating the performance of an AI accelerator, there’s a range of methodologies available to you. In this article, we’ll discuss some of the different ways to structure your benchmark research before moving forward with an evaluation that directly runs your own model. Just like when buying a car, research will only get you so far before you need to get behind the wheel and give your ... » read more
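As a rough illustration of the "get behind the wheel" step, the sketch below times repeated inference calls with a warm-up phase and reports latency percentiles and throughput. The run_inference callable and the stand-in model are placeholders, not any vendor's API; only the measurement structure is the point.

```python
# Minimal benchmark harness sketch: warm up, time repeated runs, report
# latency percentiles and throughput. The model under test is a stand-in.

import time
import statistics

def benchmark(run_inference, sample_input, warmup=10, iters=100):
    # Warm-up runs absorb one-time costs (JIT, weight loading, cache fill)
    # so they do not skew the measured latencies.
    for _ in range(warmup):
        run_inference(sample_input)

    latencies_ms = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference(sample_input)
        latencies_ms.append((time.perf_counter() - start) * 1e3)

    latencies_ms.sort()
    return {
        "p50_ms": statistics.median(latencies_ms),
        "p99_ms": latencies_ms[int(0.99 * (iters - 1))],
        "throughput_ips": 1e3 / statistics.mean(latencies_ms),
    }

if __name__ == "__main__":
    # Stand-in "model" so the harness runs end to end without an accelerator.
    fake_model = lambda x: sum(v * v for v in x)
    print(benchmark(fake_model, list(range(10_000))))
```

Measuring a single image at a time like this captures latency; batch throughput and sustained thermal behavior need separate runs, which is part of why benchmark structure matters before committing to an evaluation.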
