Deploying Artificial Intelligence At The Edge


By Pushkar Apte and Tom Salmon
Rapid advances in artificial intelligence (AI) have made this technology important for many industries, including finance, energy, healthcare, and microelectronics. AI is driving a multi-trillion-dollar global market while helping to solve some tough societal problems, such as tracking the current pandemic and predicting the severity of climate-driven events lik... » read more

Securing The IoT Begins With Zero-Touch Provisioning At Scale


The path to secured IoT deployments starts with a hardware root-of-trust at the device level, a simple concept that belies the complexity of managing a chain of trust that extends from every edge device to the core of the network. The solution to this management challenge, based on a coordinated effort of domain experts, is a zero-touch “chip-to-cloud” provisioning service for certificates-... » read more
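The chain-of-trust idea above can be made concrete with a small sketch. The snippet below is a hypothetical Python example (using the `cryptography` package) that walks a device certificate chain from an edge device's leaf certificate up to a trusted root, one piece of a "chip-to-cloud" provisioning flow. The function name, file handling, and RSA assumption are illustrative and not taken from the article.

```python
# Hypothetical sketch: verify that each certificate in a chain (leaf first,
# root last) was signed by the next one up. Assumes RSA certificates in PEM
# form; names here are invented for illustration.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(cert_pems):
    """Raise InvalidSignature if any link in the chain fails to verify."""
    certs = [x509.load_pem_x509_certificate(pem) for pem in cert_pems]
    for child, issuer in zip(certs, certs[1:]):
        issuer.public_key().verify(
            child.signature,              # signature placed by the issuer
            child.tbs_certificate_bytes,  # the signed portion of the cert
            padding.PKCS1v15(),           # assumes RSA-signed certificates
            child.signature_hash_algorithm,
        )
    return True
```

In a zero-touch flow, a check like this runs automatically on the provisioning service side, so no human ever handles per-device keys or certificates.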

For The Edge, It’s All About Location, Location, Location


They are centrally located, are connected to power grids and water systems, and are rapidly thinning out. And you can probably get a new cell phone case or a corn dog in the atrium. Could shopping malls become a future home for the edge? Edge computing has transformed over the last few years from being a vaguely defined concept to a fundamental part of the future data infrastructure. Band... » read more

Challenges Of Edge AI Inference


Bringing convolutional neural networks (CNNs) to your industry—whether it be medical imaging, robotics, or some other vision application entirely—has the potential to enable new functionalities and reduce the compute requirements for existing workloads. This is because a single CNN can replace more computationally expensive image processing, denoising, and object detection algorithms. Howev... » read more
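As a rough illustration of the claim that one network can stand in for several image-processing stages, here is a hypothetical PyTorch sketch of a single small CNN whose shared backbone feeds both a denoising head and a classification head. All names and layer sizes are invented for illustration, not taken from the article.

```python
# Minimal sketch: one CNN backbone, two heads (denoise + classify), standing
# in for separate denoising and detection pipelines. Sizes are illustrative.
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Head 1: reconstruct a cleaned-up image (denoising)
        self.denoise = nn.Conv2d(32, 3, kernel_size=3, padding=1)
        # Head 2: predict a class label (stand-in for detection/recognition)
        self.classify = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        features = self.backbone(x)
        return self.denoise(features), self.classify(features)

# Example: run one 224x224 RGB frame through the shared network.
model = TinyVisionNet()
clean_image, class_logits = model(torch.randn(1, 3, 224, 224))
```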

Challenges In Developing A New Inferencing Chip


Cheng Wang, co-founder and senior vice president of software and engineering at Flex Logix, sat down with Semiconductor Engineering to explain the process of bringing an inferencing accelerator chip to market, from bring-up, programming, and partitioning to tradeoffs involving speed and customization.

SE: Edge inferencing chips are just starting to come to market. What challenges di... » read more

ACAP At The Edge With The Versal AI Edge Series


This white paper introduces the AI Edge series, an addition to the Versal ACAP portfolio: a domain-specific architecture (DSA) that meets the stringent demands of systems implemented in the 7nm silicon process. This series is optimized to meet the performance-per-watt requirements of edge nodes at or near the analog-digital boundary. Here, immediate response to the physical world is highly valued, and in ma... » read more

Architectural Considerations For AI


Custom chips, labeled as artificial intelligence (AI) or machine learning (ML), are appearing on a weekly basis, each claiming to be 10X faster than existing devices or to consume 1/10 the power. Whether that is enough to dethrone existing architectures, such as GPUs and FPGAs, or whether these new chips will survive alongside them isn't clear yet. The problem, or the opportunity, is that t... » read more

Customizing Chips For Power And Performance


Sandro Cerato, senior vice president and CTO of the Power & Sensor Systems Business Unit at Infineon Technologies, sat down with Semiconductor Engineering to talk about fundamental shifts in chip design with the rollout of the edge, AI, and more customized solutions. What follows are excerpts of that conversation.

SE: The chip market is starting to fall into three distinct buckets, the e... » read more

Configuring AI Chips


Change is almost constant in AI systems. Vinay Mehta, technical product marketing manager at Flex Logix, talks about the need for flexible architectures to deal with continual modifications in algorithms, more complex convolutions, and unforeseen system interactions, as well as the ability to apply all of this over longer chip lifetimes. » read more

Hyperconnectivity, Hyperscale Computing, And Moving Edges


As described in “The Four Pillars of Hyperscale Computing” last year, the four core components that development teams consider for data centers are computing, storage, memory, and networking. Over the past decade, requirements for programmability have fundamentally changed data centers. Just over a decade ago, in 2010, virtual machines ran user workloads on CPU-centric archite... » read more
