Week In Review: Design, Low Power


Tools & IP
Cadence unveiled its deep neural-network accelerator (DNA) AI processor IP, the Tensilica DNA 100, targeted at on-device neural network inference applications. The processor scales from 0.5 TMAC (tera multiply-accumulate operations) to 12 TMACs, or to hundreds of TMACs with multiple processors stacked, and the company claims it delivers up to 4.7X better performance and up to 2.3X more performance p... » read more
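The TMAC ratings above count tera multiply-accumulate operations, the basic unit of work in neural network inference. A minimal sketch of what one MAC is (illustrative only; the function name is hypothetical and not tied to any Tensilica API):

```python
# Illustrative sketch: the multiply-accumulate (MAC) operation that TMAC
# ratings count -- one multiply plus one add per weight/activation pair.
# Names here are hypothetical, not from any Cadence/Tensilica interface.

def mac_dot(weights, activations):
    """Accumulate weight * activation products, as a neural-network
    layer does for each output value."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC operation
    return acc

# A 0.5 TMAC engine performs 0.5e12 of these operations per second.
print(mac_dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

Dedicated accelerators gain their efficiency by executing thousands of these MACs in parallel in fixed-function hardware rather than in a general-purpose instruction loop.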

Is Your AI SoC Secure?


As artificial intelligence (AI) enters every application, from the IoT to automotive, it brings new waves of innovation and business models, along with the need for high-grade security. Hackers try to exploit vulnerabilities at every level of the system, from the system-on-chip (SoC) up. Therefore, security needs to be integral to the AI process. The protection of AI systems, their data, and th... » read more

Intel’s Next Move


Gadi Singer, vice president and general manager of Intel's Artificial Intelligence Products Group, sat down with Semiconductor Engineering to talk about Intel's vision for deep learning and why the company is looking well beyond the x86 architecture and one-chip solutions.

SE: What's changing on the processor side?

Singer: The biggest change is the addition of deep learning and neural ne... » read more

Flexible, Energy-Efficient Neural Network Processing At 16nm


At Hot Chips 30, held in August in Silicon Valley, Harvard University researchers (Paul Whatmough, SK Lee, S Xi, U Gupta, L Pentecost, M Donato, HC Hseuh, Professor Brooks, and Professor Gu) presented "SMIV: A 16nm SoC with Efficient and Flexible DNN Acceleration for Intelligent IOT Devices." (Their complete presentation is available now on the Hot Chips website for attendees and will be p... » read more

Huge Performance Gains Ahead


Rambus Chief Scientist Craig Hampel talks about what will drive the next big performance gains after Moore’s Law, from the data center to the edge. https://youtu.be/ItHCsei7YTc » read more

Big Changes For Mainstream Chip Architectures


Chipmakers are working on new architectures that significantly increase the amount of data that can be processed per watt and per clock cycle, setting the stage for one of the biggest shifts in chip architectures in decades. All of the major chipmakers and systems vendors are changing direction, setting off an architectural race that includes everything from how data is read and written in m... » read more

System Bits: Aug. 21


Two types of computers create faster, less energy-intensive image processor for autonomous cars, security cameras, medical devices
Stanford University researchers noted that the image recognition technology underlying today's autonomous cars and aerial drones depends on artificial intelligence: computers that essentially teach themselves to recognize objects like a dog, ... » read more

Power/Performance Bits: Aug. 21


Physical neural network
Engineers at UCLA built a physical artificial neural network capable of identifying objects as light passes through a series of 3D-printed polymer layers. Called a "diffractive deep neural network," it uses the light bouncing off the object itself to identify that object, a process that consumes no energy and is faster than traditional computer-based methods of imag... » read more

Impact Of IP On AI SoCs


The combination of mathematics and processing capability has set in motion a new generation of technology advancements and an entirely new world of possibilities related to artificial intelligence. AI mimics human behavior using deep learning algorithms. Neural networks are the basis of deep learning, which is a subset of machine learning, which in turn is a subset of AI, as shown in Figure 1. ... » read more

AI, ML Chip Choices


Flex Logix’s Cheng Wang talks about which types of chips work best for neural networks, AI and machine learning. https://youtu.be/k7OdP7B10o8 » read more
