New AI Data Types Emerge


AI is all about data, and how that data is represented matters greatly. But after focusing primarily on 8-bit integers and 32‑bit floating-point numbers, the industry is now looking at new formats. There is no single best type for every situation, because the choice depends on the type of AI model, whether accuracy, performance, or power is prioritized, and where the computing happens, ... » read more
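As a rough illustration of why the choice of format matters (not drawn from the article itself), the short Python sketch below shows how the same FP32 values survive a bfloat16-style truncation versus symmetric INT8 quantization; the sample values and single-tensor scale are arbitrary.

```python
# Minimal sketch (illustrative values): comparing how the same numbers survive
# a round trip through FP32, a bfloat16-style truncation, and INT8 quantization.
import numpy as np

x = np.array([0.1234567, 3.1415926, -250.75, 1e-4], dtype=np.float32)

# bfloat16 keeps FP32's 8-bit exponent but only 7 mantissa bits: emulate it by
# zeroing the low 16 bits of each FP32 word (round-toward-zero for simplicity).
bf16 = (x.view(np.uint32) & 0xFFFF0000).view(np.float32)

# Symmetric INT8 quantization with one scale for the whole tensor.
scale = np.abs(x).max() / 127.0
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
deq = q.astype(np.float32) * scale

for orig, b, d in zip(x, bf16, deq):
    print(f"fp32={orig:12.6f}  bf16~={b:12.6f}  int8->fp32={d:12.6f}")
```

The output makes the trade-off concrete: bfloat16 preserves dynamic range but loses precision, while INT8 with a single scale crushes small values whenever large ones are present.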

HW and SW Architecture Approaches For Running AI Models


How best to run AI inference models is currently a topic of much debate, as a broad range of systems companies look to add AI to a variety of systems, spurring both hardware innovation and the need to revamp models. Hardware developers are making progress with AI accelerators and SoCs. But on the model side, questions abound about whether the answer might come from revisiting older, less compl... » read more

Fundamental Issues In Computer Vision Still Unresolved


Given computer vision’s place as the cornerstone of an increasing number of applications from ADAS to medical diagnosis and robotics, it is critical that its weak points be mitigated, such as the inability to identify corner cases or the use of algorithms trained on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that ... » read more

CMOS-Based HW Topology For Single-Cycle In-Memory XOR/XNOR Operations


A technical paper titled “CMOS-based Single-Cycle In-Memory XOR/XNOR” was published by researchers at University of Tennessee, University of Virginia, and Oak Ridge National Laboratory (ORNL). Abstract: "Big data applications are on the rise, and so is the number of data centers. The ever-increasing massive data pool needs to be periodically backed up in a secure environment. Moreover, a ... » read more
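The paper describes a CMOS circuit, not software, but the functional sketch below (an assumption for illustration only) shows the kind of operation such hardware performs in a single cycle: XOR flags memory words that changed since the last backup, and XNOR flags words that still match. The word width and data are placeholders.

```python
# Functional illustration only; the paper's CMOS in-memory circuit is not
# modeled here. XOR marks words that differ between a data block and its
# backup; XNOR marks words that match.
import numpy as np

rng = np.random.default_rng(0)
current = rng.integers(0, 2**32, size=8, dtype=np.uint64)
backup = current.copy()
backup[3] ^= 0x5A5A                       # simulate an update to one word

diff = current ^ backup                   # XOR: nonzero where bits differ
same = ~(current ^ backup)                # XNOR: all-ones where words match

print("changed word indices:", np.nonzero(diff)[0])            # -> [3]
print("matching words:", np.count_nonzero(same == np.uint64(0xFFFFFFFFFFFFFFFF)))
```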

Embedded Automotive Platforms: Evaluating Power And Performance Of Image Classification And Object Detection CNNs


A technical paper titled “Performance/power assessment of CNN packages on embedded automotive platforms” was published by researchers at University of Modena and Reggio Emilia. Abstract: "The rise of power-efficient embedded computers based on highly-parallel accelerators opens a number of opportunities and challenges for researchers and engineers, and paved the way to the era of edge com... » read more
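For readers unfamiliar with this kind of assessment, the sketch below gives a hedged sense of the bookkeeping involved: per-frame latency is measured around an inference call and combined with an average board power figure to estimate energy per frame. The run_inference() stub, frame count, and 9.5 W power draw are placeholders, not numbers from the paper.

```python
# Rough sketch of per-frame latency/energy accounting; run_inference() and the
# power figure are placeholders, not measurements from the paper.
import time

def run_inference(frame):
    # Stand-in for a real CNN call (e.g., a TensorRT or ONNX Runtime session).
    time.sleep(0.012)                     # pretend each frame takes ~12 ms

N_FRAMES = 100
BOARD_POWER_W = 9.5                       # assumed average board power under load

start = time.perf_counter()
for i in range(N_FRAMES):
    run_inference(i)
elapsed = time.perf_counter() - start

fps = N_FRAMES / elapsed
energy_per_frame_mj = BOARD_POWER_W * (elapsed / N_FRAMES) * 1e3
print(f"{fps:.1f} FPS, ~{energy_per_frame_mj:.1f} mJ/frame at {BOARD_POWER_W} W")
```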

Partitioning Processors For AI Workloads


Partitioning in complex chips is beginning to resemble a high-stakes guessing game, where choices must be extrapolated from what is known today to what is expected by the time a chip finally ships. Partitioning of workloads used to be a straightforward task, although not necessarily a simple one. It depended on how a device was expected to be used, the various compute, storage and data paths ... » read more

CNN Hardware Architecture With Weights Generator Module That Alleviates Impact Of The Memory Wall


A technical paper titled “Mitigating Memory Wall Effects in CNN Engines with On-the-Fly Weights Generation” was published by researchers at Samsung AI Center and University of Cambridge. Abstract: "The unprecedented accuracy of convolutional neural networks (CNNs) across a broad range of AI tasks has led to their widespread deployment in mobile and embedded settings. In a pursuit for high... » read more
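The excerpt does not describe the paper's actual generator, but the general idea of on-the-fly weights generation can be sketched as follows (an assumption for illustration): rather than streaming a full weight tensor across the memory wall, a compact seed is kept on chip and expanded into the layer's weights just before the convolution consumes them.

```python
# Illustrative sketch only; the paper's generator is not described in this
# excerpt. Weights are regenerated per layer from a compact seed instead of
# being fetched from off-chip memory.
import numpy as np

def expand_weights(seed, shape):
    """Deterministically regenerate a layer's weights from a small seed."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape, dtype=np.float32)

def conv_layer(activations, seed, out_ch):
    # A 1x1 convolution expressed as a matmul, with weights materialized
    # on the fly rather than loaded.
    in_ch = activations.shape[-1]
    w = expand_weights(seed, (in_ch, out_ch))
    return activations @ w

x = np.random.default_rng(0).standard_normal((8, 8, 16), dtype=np.float32)
y = conv_layer(x, seed=42, out_ch=32)
print(y.shape)   # (8, 8, 32): only the seed crossed the "memory wall"
```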

Improving Image Resolution At The Edge


How much cameras see depends on how accurately the images are rendered and classified. The higher the resolution, the greater the accuracy. But higher resolution also requires significantly more computation, and it requires flexibility in the design to be able to adapt to new algorithms and network models. Jeremy Roberson, technical director and software architect for AI/ML at Flex Logix, talks... » read more
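A back-of-the-envelope sketch (not from the talk) shows why resolution is so expensive: for a convolution layer, the multiply-accumulate count grows with the number of output pixels, so doubling each image dimension roughly quadruples the compute. The channel counts below are arbitrary.

```python
# Rough MAC count for a single KxK convolution layer at several resolutions;
# channel counts and kernel size are illustrative assumptions.
def conv_macs(h, w, in_ch, out_ch, k=3):
    return h * w * in_ch * out_ch * k * k

for res in [(320, 240), (640, 480), (1280, 960)]:
    macs = conv_macs(*res, in_ch=32, out_ch=64, k=3)
    print(f"{res[0]}x{res[1]}: {macs / 1e9:.2f} GMAC for one 3x3 conv layer")
```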

ML Automotive Chip Design Takes Off


Machine learning is increasingly being deployed across a wide swath of chips and electronics in automobiles, both to improve the reliability of standard parts and to create extremely complex AI chips used in increasingly autonomous applications. On the design side, the majority of EDA tools today rely on reinforcement learning, a subset of machine learning that teaches a machine ... » read more
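As a generic illustration of the reinforcement-learning loop mentioned above (not how any particular EDA tool implements it), the sketch below has an epsilon-greedy agent learn which of a few hypothetical floorplan choices yields the best reward from noisy feedback.

```python
# Generic epsilon-greedy illustration; the floorplan options and their "true"
# quality values are hypothetical, not taken from any EDA tool.
import random

choices = ["floorplan_A", "floorplan_B", "floorplan_C"]
true_quality = {"floorplan_A": 0.4, "floorplan_B": 0.7, "floorplan_C": 0.55}
value_estimate = {c: 0.0 for c in choices}
counts = {c: 0 for c in choices}
epsilon = 0.1

random.seed(1)
for step in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.choice(choices)
    else:
        action = max(choices, key=value_estimate.get)
    reward = true_quality[action] + random.gauss(0, 0.05)   # noisy feedback
    counts[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(max(choices, key=value_estimate.get))   # converges to floorplan_B
```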

Neural Architecture & Hardware Accelerator Co-Design Framework (Princeton/Stanford)


A new technical paper titled "CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework" was published by researchers at Princeton University and Stanford University. "Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both the industry and academia. However, most co-design frameworks either... » read more
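CODEBench's search method is not described in this excerpt, but the co-design idea itself can be sketched in a few lines (a toy assumption, not the framework's algorithm): every candidate (model, accelerator) pair is scored with a joint objective that trades accuracy against latency and power.

```python
# Toy co-design search; the model/accelerator entries and scoring function are
# hypothetical and not taken from CODEBench.
from itertools import product

models = [{"name": "cnn_small", "acc": 0.88, "gmacs": 0.6},
          {"name": "cnn_large", "acc": 0.93, "gmacs": 4.2}]
accels = [{"name": "edge_npu", "gmacs_per_s": 500, "power_w": 2.0},
          {"name": "dc_accel", "gmacs_per_s": 4000, "power_w": 30.0}]

def score(m, a, latency_budget_ms=5.0):
    latency_ms = m["gmacs"] / a["gmacs_per_s"] * 1e3
    if latency_ms > latency_budget_ms:
        return -1.0                        # infeasible under the latency budget
    return m["acc"] - 0.01 * a["power_w"]  # toy accuracy/power trade-off

best = max(product(models, accels), key=lambda pair: score(*pair))
print(best[0]["name"], "on", best[1]["name"], "->", round(score(*best), 3))
```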
