ML Opening New Doors For FPGAs


FPGAs have long been used in the early stages of any new digital technology, given their utility for prototyping and rapid evolution. But with machine learning, FPGAs are showing benefits beyond those of more conventional solutions. This opens up a hot new market for FPGAs, which traditionally have been hard to sustain in high-volume production due to pricing, and hard to use for battery-dri... » read more

3 Challenges In Edge Designs


As companies begin exploring what will be necessary to win at the edge, they are running into some daunting challenges. Designing chips for the edge is far different from designing for the IoT/IIoT. The idea with the IoT was that simple sensors would relay data through a gateway to the cloud, where it would be processed and data could be sent back to the device as needed. That works if it's a small ... » read more

Week In Review: Auto, Security, Pervasive Computing


Edge, cloud, data center: Programmable logic company Efinix used Cadence’s Digital Full Flow to complete its Trion FPGA family for edge computing, AI/ML, and vision processing applications, according to a press release. Last week Efinix also announced three software-defined SoCs based on the RISC-V core. The SoCs are optimized for the Trion FPGAs. AI, machine learning: Amazon will tempo... » read more

Hardware Security For AI Accelerators


Dedicated accelerator hardware for artificial intelligence and machine learning (AI/ML) algorithms is increasingly prevalent in data centers and endpoint devices. These accelerators handle valuable data and models, and they face a growing threat landscape that puts AI/ML assets at risk. Using fundamental cryptographic security techniques performed by a hardware root of trust can safeguard these as... » read more
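As a loose illustration of the kind of safeguard a hardware root of trust can anchor, the Python sketch below checks a model file's integrity with an HMAC before it is loaded. The key name and workflow are hypothetical; in a real design, key storage and the comparison would live inside the root of trust rather than in software.

```python
# Sketch: verify model integrity before loading (assumed workflow,
# not from the article). hmac and hashlib are Python standard library.
import hmac, hashlib

def verify_model(model_bytes: bytes, key: bytes, expected_tag: bytes) -> bool:
    tag = hmac.new(key, model_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)  # constant-time compare

key = b"device-unique-secret"   # hypothetical key provisioned at manufacture
model = b"...model weights..."  # placeholder for the protected asset
good_tag = hmac.new(key, model, hashlib.sha256).digest()
assert verify_model(model, key, good_tag)
```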

Powering The Edge


On-device machine learning (ML) is a phenomenon that has exploded in popularity. Smart devices that are able to make independent decisions, acting on locally generated data, are hailed as the future of compute for consumer devices: on-device processing slashes latency; increases reliability and safety; boosts privacy and security...all while saving on power and cost. Although ML in edge d... » read more

What Is DRAM’s Future?


Memory — and DRAM in particular — has moved into the spotlight as it finds itself in the critical path to greater system performance. This isn't the first time DRAM has been at the center of performance concerns. The problem is that not everything progresses at the same rate, creating serial bottlenecks in everything from processor performance to transistor design, and even the t... » read more

AI Requires Tailored DRAM Solutions


For over 30 years, DRAM has continuously adapted to the needs of each new wave of hardware spanning PCs, game consoles, mobile phones and cloud servers. Each generation of hardware required DRAM to hit new benchmarks in bandwidth, latency, power or capacity. Looking ahead, the 2020s will be the decade of artificial intelligence/machine learning (AI/ML) touching every industry and applicatio... » read more

More Multiply-Accumulate Operations Everywhere


Geoff Tate, CEO of Flex Logix, sat down with Semiconductor Engineering to talk about how to build programmable edge inferencing chips, embedded FPGAs, where the markets are developing for both, and how the picture will change over the next few years.

SE: What do you have to think about when you're designing a programmable inferencing chip?

Tate: With a traditional FPGA architecture you ha... » read more
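For context on the operation named in the title: the multiply-accumulate (MAC), acc += weight × activation, is the workhorse of neural-network inference. The Python sketch below is illustrative only, not from the interview; it shows a single dot product built from MACs, and an inferencing chip's job is to run vast numbers of these in parallel.

```python
# A dot product is just a chain of multiply-accumulate (MAC) operations.
def dot_mac(weights, activations):
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC per weight/activation pair
    return acc

print(dot_mac([1, -2, 3], [4, 5, 6]))  # 1*4 + (-2)*5 + 3*6 = 12
```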

HBM Issues In AI Systems


All systems face limitations, and as one limitation is removed, another is revealed that had remained hidden. It is highly likely that this game of Whac-A-Mole will play out in AI systems that employ high-bandwidth memory (HBM). Most systems are limited by memory bandwidth. Compute systems in general have seen memory interface performance increase at a rate that barely keeps pace with the gains in... » read more
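For a rough sense of the numbers involved, peak HBM bandwidth scales with interface width times per-pin data rate. The figures in this Python sketch are generic HBM2-class assumptions, not values from the article.

```python
# Back-of-envelope peak bandwidth for one HBM stack (assumed
# HBM2-class figures; rates vary by generation and vendor).
bus_width_bits = 1024   # HBM's wide per-stack interface
pin_rate_gbps = 2.4     # assumed per-pin data rate, Gb/s

peak_gb_per_s = bus_width_bits * pin_rate_gbps / 8
print(f"Peak bandwidth per stack: ~{peak_gb_per_s:.0f} GB/s")  # ~307 GB/s
```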

How Much Power Will AI Chips Use?


AI and machine learning have voracious appetites when it comes to power. On the training side, they will fully utilize every available processing element in a highly parallelized array of processors and accelerators. And on the inferencing side, they will continue to optimize algorithms to maximize performance for whatever task a system is designed to do. But as with cars, mileage varies gre... » read more
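One common back-of-envelope estimate multiplies sustained operations per second by energy per operation. Both inputs in the sketch below are illustrative assumptions, which is exactly why the mileage varies.

```python
# Rough compute-power estimate: P ≈ ops/s × energy per op.
# Both inputs are assumptions, not figures from the article.
throughput_tops = 10.0   # assumed sustained throughput, tera-ops/s
energy_per_op_pj = 1.0   # assumed energy per operation, picojoules

watts = (throughput_tops * 1e12) * (energy_per_op_pj * 1e-12)
print(f"Estimated compute power: {watts:.0f} W")  # 10 TOPS at 1 pJ/op ≈ 10 W
```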
