Language’s Role In Embodied Agents


Large Language Models (LLMs) and models cross-trained on natural language are a major growth area for edge applications of neural networks and Artificial Intelligence (AI). Within that spectrum of applications, embodied agents stand out as a developing focal point. This article will address developments in this space and how the application of language-trained models improves t... » read more

Generative AI On Mobile Is Running On The Arm CPU


By Adnan Al-Sinan and Gian Marco Iodice

2023 was the year that showcased an impressive number of use cases powered by generative AI. This disruptive form of artificial intelligence (AI) technology is at the heart of OpenAI's ChatGPT and Google's Gemini AI model, demonstrating the opportunity to simplify work and advance education by generating text, images, or even audio content ... » read more

LLM Inference On CPUs (Intel)


A technical paper titled “Efficient LLM Inference on CPUs” was published by researchers at Intel. Abstract: "Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks. However, deploying these models has been challenging due to the astronomical amount of model parameters, which requires a demand for large memory capacity an... » read more
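
The excerpt is cut off before the paper's proposal, but reducing the bytes stored per parameter via low-bit weight quantization is the standard lever for fitting LLMs into CPU memory budgets. Below is a minimal, illustrative sketch of symmetric weight-only quantization in NumPy; it is a generic demonstration of the idea, not the paper's actual kernel, and the tensor shapes are arbitrary.

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 4):
    """Symmetric per-tensor weight-only quantization (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for INT4
    scale = np.abs(w).max() / qmax        # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale                       # a real INT4 kernel would also pack two values per byte

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy weight matrix: 4-bit storage is ~8x smaller than FP32.
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_symmetric(w, bits=4)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```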

Applications Of Large Language Models For Industrial Chip Design (NVIDIA)


A technical paper titled “ChipNeMo: Domain-Adapted LLMs for Chip Design” was published by researchers at NVIDIA. Abstract: "ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-ad... » read more
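
One of the listed techniques, custom tokenizers, is straightforward to prototype. The sketch below trains a small BPE tokenizer on a domain corpus using the Hugging Face tokenizers library; the corpus file name is hypothetical, and ChipNeMo's actual tokenizer-adaptation procedure may differ from this generic recipe.

```python
# Minimal sketch of training a domain-specific BPE tokenizer.
# "rtl_corpus.txt" is a hypothetical file of chip-design text (e.g. Verilog).
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.BpeTrainer(
    vocab_size=32000,                      # arbitrary illustrative size
    special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"],
)
tokenizer.train(files=["rtl_corpus.txt"], trainer=trainer)

# Domain text should now split into fewer, more meaningful pieces.
print(tokenizer.encode("always @(posedge clk) q <= d;").tokens)
```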

GNN-Based Pre-Silicon Power Side-Channel Analysis Framework At RTL Level


A technical paper titled “SCAR: Power Side-Channel Analysis at RTL-Level” was published by researchers at the University of Texas at Dallas, the Technology Innovation Institute, and the University of Illinois Chicago. Abstract: "Power side-channel attacks exploit the dynamic power consumption of cryptographic operations to leak sensitive information of encryption hardware. Therefore, it is necessary t... » read more
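
SCAR applies graph neural networks to graph representations of a design. As a generic illustration of what a single GNN layer computes over such a graph, here is one message-passing step in NumPy; the adjacency matrix and feature dimensions are invented for the example and are unrelated to SCAR's actual architecture.

```python
import numpy as np

def gcn_layer(adj: np.ndarray, feats: np.ndarray, weights: np.ndarray):
    """One graph-convolution step: aggregate neighbor features, then transform.

    adj:     (n, n) adjacency matrix of the graph (e.g. an RTL-derived graph)
    feats:   (n, d_in) per-node feature vectors
    weights: (d_in, d_out) learned projection
    """
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)           # symmetric normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ feats @ weights, 0.0)   # ReLU activation

# Toy 4-node chain graph, 8-dim features projected to 16 dims.
adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
h = gcn_layer(adj, np.random.randn(4, 8), np.random.randn(8, 16))
print(h.shape)  # (4, 16)
```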

Partitioning Processors For AI Workloads


Partitioning in complex chips is beginning to resemble a high-stakes guessing game, where designers must extrapolate from what is known today to what is expected by the time a chip finally ships. Partitioning of workloads used to be a straightforward task, although not necessarily a simple one. It depended on how a device was expected to be used, the various compute, storage, and data paths ... » read more
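
To make the notion of workload partitioning concrete, here is a toy sketch: a greedy assignment of network layers to whichever compute unit would finish them soonest, given per-unit cost estimates. Real partitioners must also model inter-layer dependencies, data movement, memory capacity, and future workloads, none of which this toy captures; all names and numbers below are invented.

```python
# Toy greedy partitioner: assign each layer to the unit with the earliest
# finish time. Costs (ms per layer) are invented, and inter-layer
# dependencies are ignored for simplicity.
layer_costs = {
    "conv1":     {"cpu": 9.0,  "npu": 1.5},
    "attention": {"cpu": 20.0, "npu": 4.0},
    "softmax":   {"cpu": 1.0,  "npu": 2.5},   # small ops can favor the CPU
    "mlp":       {"cpu": 15.0, "npu": 3.0},
}

busy_until = {"cpu": 0.0, "npu": 0.0}   # running finish time per unit
placement = {}

for layer, costs in layer_costs.items():
    unit = min(costs, key=lambda u: busy_until[u] + costs[u])
    busy_until[unit] += costs[unit]
    placement[layer] = unit

print(placement)     # e.g. {'conv1': 'npu', ..., 'softmax': 'cpu'}
print(busy_until)    # per-unit total load in ms
```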

Unleashing The Power Of Generative AI In Chip, System, And Product Design


The field of chip, system, and product design is a complex landscape, fraught with challenges that designers grapple with daily. The traditional design process, while robust, often falls short in addressing the increasing demands for efficiency, customization, and innovation. This white paper delves into these challenges, exploring the transformative potential of generative artificial int... » read more

LLM Technology For Chip Design


In the nine short months since OpenAI brought ChatGPT (a Chat Generative Pre-Trained Transformer) and the phenomenal concept of large language models (LLMs) to the global collective consciousness, pioneers from every corner of the economy have raced to understand the benefits—and the pitfalls—of deploying this nascent technology in their particular industries. And as it turns out, semicondu... » read more

A Chiplet-Based Supercomputer For Generative LLMs That Optimizes Total Cost Of Ownership


A technical paper titled "Chiplet Cloud: Building AI Supercomputers for Serving Large Generative Language Models" was published by researchers at University of Washington and University of Sydney. Abstract: "Large language models (LLMs) such as ChatGPT have demonstrated unprecedented capabilities in multiple AI tasks. However, hardware inefficiencies have become a significant factor limiting ... » read more
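
The paper's framing is cost-efficient serving at datacenter scale. As a generic illustration of that kind of accounting, not the paper's actual cost model, the sketch below amortizes hardware and power cost over a service lifetime to get a cost per token; every figure is an invented placeholder.

```python
# Illustrative total-cost-of-ownership arithmetic for an inference cluster.
# All numbers are invented placeholders, not values from the paper.

capex_usd = 2_000_000          # hardware purchase cost
lifetime_years = 4             # amortization window
power_kw = 150                 # average draw of the cluster
electricity_usd_per_kwh = 0.10
tokens_per_second = 500_000    # aggregate serving throughput

seconds = lifetime_years * 365 * 24 * 3600
opex_usd = power_kw * (seconds / 3600) * electricity_usd_per_kwh
total_tokens = tokens_per_second * seconds

tco_per_million_tokens = (capex_usd + opex_usd) / (total_tokens / 1e6)
print(f"TCO: ${tco_per_million_tokens:.4f} per million tokens")
```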

Generative AI Training With HBM3 Memory


One of the biggest, most talked-about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The best-known example of generative AI right now is, of course, ChatGPT. ChatGPT's underlying large language model, GPT-3, uses 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of... » read more
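
Parameter counts translate directly into memory requirements, which is why HBM capacity and bandwidth matter here. A back-of-the-envelope sketch (my arithmetic, not figures from the article):

```python
# Rough memory footprint of model weights at different precisions.
# Back-of-the-envelope only; ignores activations, KV cache, and overheads.
params = 175e9                     # GPT-3 parameter count

for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: {gb:,.0f} GB of weights")

# FP16 alone is ~350 GB -- far beyond any single accelerator's memory,
# hence the pressure on high-bandwidth, high-capacity memory like HBM3.
```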
