
Operator Anxiety


Are you one of the early pioneers who have purchased an electric car? In the United States in Q3 2022, 6% of new vehicle sales were pure electric models. Despite all the hype, and significant purchase subsidies for battery-electric cars, today only about 1% of all vehicles in service in the US are pure plug-in electrics. One of the reasons electric car sales have not full... » read more

Scalable Optical AI Accelerator Based on a Crossbar Architecture


A new technical paper titled "Scalable Coherent Optical Crossbar Architecture using PCM for AI Acceleration" was published by researchers at University of Washington. Abstract: "Optical computing has been recently proposed as a new compute paradigm to meet the demands of future AI/ML workloads in datacenters and supercomputers. However, proposed implementations so far suffer from lack of sc... » read more

Image Processing For Vision AI


Recent years have seen increasing demand for Vision AI applications that use AI for real-time image recognition. Vision AI, which substitutes AI for human visual recognition, requires optimal image processing. Renesas has released the RZ/V2M as a mid-class and the RZ/V2L as an entry-class Vision AI microprocessor (MPU). Both products are equipped with DRP-AI, which is Dynamically Reconfigurable Pr... » read more

eFPGA Saved Us Millions of Dollars. It Can Do the Same for You


For those of you who follow Flex Logix, you already know that we have an IP business, EFLX eFPGA, and an edge inferencing co-processor chip and board business, InferX. InferX came about because many customers asked whether they could run AI/ML algorithms in EFLX. The answer was and still is, of course you can: EFLX is an FPGA fabric similar to what FPGA chips use. Our co-founder, Cheng Wang, t... » read more

Flexible USB4-Based Interface IP Solution For AI At The Edge


Consumers have become accustomed to smart devices that are powered by advances in artificial intelligence (AI). To expand the devices’ total addressable market, innovative device designers build edge AI accelerators and edge AI SoCs that support multiple use cases and integration options. This white paper describes a flexible USB4-based IP solution for edge AI accelerators and SoCs. The IP so... » read more

Getting Better Edge Performance & Efficiency From Acceleration-Aware ML Model Design


The advent of machine learning techniques has benefited greatly from the use of acceleration technology such as GPUs, TPUs and FPGAs. Indeed, without acceleration technology, machine learning would likely have remained the province of academia and never had the impact it is having in our world today. Clearly, machine learning has become an important tool for solving... » read more

Challenges In Developing A New Inferencing Chip


Cheng Wang, co-founder and senior vice president of software and engineering at Flex Logix, sat down with Semiconductor Engineering to explain the process of bringing an inferencing accelerator chip to market, from bring-up, programming and partitioning to tradeoffs involving speed and customization. SE: Edge inferencing chips are just starting to come to market. What challenges di... » read more

Xilinx AI Engines And Their Applications


This white paper explores the architecture, applications, and benefits of using Xilinx's new AI Engine for compute-intensive applications like 5G cellular and machine learning DNN/CNN. 5G requires five to 10 times higher compute density compared with prior generations; AI Engines have been optimized for DSP, meeting both the throughput and compute requirements to deliver the hig... » read more

Enabling Efficient and Flexible FPGA Virtualization for Deep Learning in the Cloud


SOURCE: Shulin Zeng, Guohao Dai, Hanbo Sun, Kai Zhong, Guangjun Ge, Kaiyuan Guo, Yu Wang, Huazhong Yang (Tsinghua University, Beijing, China). Published on arXiv:2003.12101 [cs.DC]. ABSTRACT: "FPGAs have shown great potential in providing low-latency and energy-efficient solutions for deep neural network (DNN) inference applications. Currently, the majority of FPGA-based DNN accel... » read more

Virtualizing FPGAs For Multiple Cloud Users


Cloud computing has become the new computing paradigm. For cloud computing, virtualization is necessary to enable isolation between users, high flexibility and scalability, high security, and maximized utilization of hardware resources. Since 2017, thanks to their programmability, low latency, and high energy efficiency, FPGAs have been widely adopted in cloud computing. Amazon ... » read more
