AI Benchmarks Are Broken


Artificial Intelligence (AI) is shaping up to be one of the most revolutionary technologies of our time. By now you’ve probably heard that AI will transform entire industries, from healthcare to finance to entertainment, delivering richer products and streamlined experiences while augmenting human productivity, creativity, and leisure. Even non-technologists are getting a glimpse of...

FPGAs: Automated Framework For Architecture-Space Exploration of Approximate Accelerators


A technical paper titled "autoXFPGAs: An End-to-End Automated Exploration Framework for Approximate Accelerators in FPGA-Based Systems" was published (preprint) by researchers at TU Wien, Brno University of Technology, and NYUAD. Abstract "Generation and exploration of approximate circuits and accelerators has been a prominent research domain exploring energy-efficiency and/or performance... » read more

Spark On AWS Graviton2 Best Practices: K-Means Clustering Case Study


This report focuses on how to tune a Spark application to run on a cluster of instances. We define the relevant cluster and Spark parameters and explain how to configure them given a specific set of resources. We use the K-Means machine learning algorithm as a case study, analyzing and tuning the parameters to achieve the required performance while making optimal use of the available resources. W...
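As a rough illustration of the kind of tuning the report covers, here is a minimal PySpark sketch that sets a few cluster/Spark parameters and runs K-Means. The executor counts, memory sizes, partition count, and the input path are assumptions chosen for illustration, not values recommended by the report.

```python
# Illustrative only: the resource settings below are assumptions for a small
# cluster, not the tuned values from the report.
from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler

spark = (
    SparkSession.builder
    .appName("kmeans-tuning-example")
    # Cluster/Spark parameters of the kind the report discusses tuning:
    .config("spark.executor.instances", "8")       # executors across the cluster
    .config("spark.executor.cores", "4")           # cores per executor
    .config("spark.executor.memory", "12g")        # heap per executor
    .config("spark.sql.shuffle.partitions", "64")  # roughly match total cores
    .getOrCreate()
)

# Hypothetical input: a table of numeric feature columns.
df = spark.read.parquet("s3://my-bucket/features/")
features = VectorAssembler(inputCols=df.columns, outputCol="features").transform(df)

# Fit K-Means; k and maxIter are the kind of knobs a case study would sweep.
model = KMeans(k=8, maxIter=20, featuresCol="features").fit(features)
print(model.summary.trainingCost)  # within-cluster sum of squared errors
```

A common rule of thumb in such tuning is to size executors so their combined cores and memory fit the chosen instance type, then align the number of shuffle partitions with the total core count.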

A Hierarchical And Tractable Mixed-Signal Verification Methodology For First-Generation Analog AI Processors


Artificial intelligence (AI) is now the key driving force behind advances in information technology, big data, and the Internet of Things (IoT). The technology is developing at a rapid pace, particularly in the field of deep learning, where researchers are continually creating new variants that expand the capabilities of machine learning. But building systems th...

Test Challenges Mount As Demands For Reliability Increase


An emphasis on improving semiconductor quality is beginning to spread well beyond data centers and automotive, where ICs play a role in mission- and safety-critical applications. But this focus on improved reliability is ratcheting up pressure throughout the test community, from lab to fab and into the field, in products where transistor density continues to grow, and wh...

AI: Engineering Tool Or Threat To Jobs?


Semiconductor Engineering sat down to talk about using AI for designing and testing complex chips with Michael Jackson, corporate vice president for R&D at Cadence; Joel Sumner, vice president of semiconductor and electronics engineering at National Instruments; Grace Yu, product and engineering manager at Meta; David Pan, professor in the Department of Electrical and Computer Engineering a...

HW-SW Co-Design Solution For Building Side-Channel-Protected ML Hardware


A technical paper titled "Hardware-Software Co-design for Side-Channel Protected Neural Network Inference" was published (preprint) by researchers at North Carolina State University and Intel. Abstract "Physical side-channel attacks are a major threat to stealing confidential data from devices. There has been a recent surge in such attacks on edge machine learning (ML) hardware to extract the... » read more

Review of Tools & Techniques for DL Edge Inference


A new technical paper titled "Efficient Acceleration of Deep Learning Inference on Resource-Constrained Edge Devices: A Review" was published in "Proceedings of the IEEE" by researchers at University of Missouri and Texas Tech University. Abstract: Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted in breakthroughs in many areas. However, deploying thes... » read more

New Method Improves Machine Learning Models’ Reliability, Using Fewer Computing Resources (MIT, U. of Florida, IBM Watson)


A new technical paper titled "Post-hoc Uncertainty Learning using a Dirichlet Meta-Model" was published (preprint) by researchers at MIT, University of Florida, and MIT-IBM Watson AI Lab (IBM Research). The work demonstrates how to quantify the level of certainty in its predictions, while using less compute resources. “Uncertainty quantification is essential for both developers and users o... » read more

The Next Disruption


Machine learning (ML) is an inherently disruptive technology because algorithm architectures are evolving so fast and are very compute-intensive, requiring innovative silicon for acceptable performance. This blog looks at where we’ve been and where ML is going – into another market ready for disruption. ML started in the data center: in the early days of the ML explosion – a mere 8 o...
