AI, Performance, Power, Safety Shine Spotlight On Last-Level Cache


Memory limits on performance, always important in modern systems, have become an especially significant concern in automotive safety-critical applications that make use of AI methods. On one hand, detecting and reporting a potential collision or other safety problem has to be very fast. Any corrective action is constrained by physics and has to be taken well in advance to avoid the problem. ... » read more
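
A minimal back-of-the-envelope sketch of that physics constraint, using assumed speed and latency figures rather than measured values: every millisecond spent detecting and reporting is distance the vehicle has already covered before any corrective action can begin.

```python
# Back-of-the-envelope sketch: how detection/reporting latency eats into
# the distance available for a corrective action. The speed and latency
# figures below are illustrative assumptions, not measured values.

def distance_traveled(speed_mps: float, latency_s: float) -> float:
    """Distance covered while the system is still detecting and reporting."""
    return speed_mps * latency_s

speed = 30.0  # ~108 km/h, an assumed highway speed
for latency_ms in (10, 50, 100, 250):
    d = distance_traveled(speed, latency_ms / 1000.0)
    print(f"{latency_ms:4d} ms of latency -> {d:5.2f} m traveled before any response")
```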

Taking Self-Driving Safety Standards Beyond ISO 26262


I participated in a couple of sessions at Arm TechCon this year: the first on how safety is evolving for platform-based architectures with a mix of safety-aware IP, and the second on lessons learned in safety, particularly how the industry and standards are adapting to the larger challenges in self-driving, which obviously extend beyond the pure functional safety intent of ISO 26262. Here I w... » read more

Safety Islands In Safety-Critical Hardware


Safety and security have certain aspects in common, so it shouldn’t be surprising that some ideas evolving in one domain find echoes in the other. In hardware design, a significant trend has been to push security-critical functions into a hardware root-of-trust (HRoT) core, following a philosophy of putting all (or most) of those functions in one basket and watching that basket very carefully.... » read more

In-System Networks Are Front And Center


This year’s HotChips conference at Stanford was all about artificial intelligence (AI) and machine learning (ML), and what particularly struck me, naturally because we’re in this business too, was how big a role on-chip networks played in some of the leading talks. NVIDIA talked about their scalable mesh architecture, both on-chip and in-package, with meshes connecting NN processing el... » read more

Interconnect Prominence In Fail-Operational Architectures


When we in the semiconductor world think about safety, we think about ISO 26262, FMEDA and safety mechanisms like redundancy, ECC and lock-step operation. Once we have that covered, any other aspect of safety is somebody else’s problem, right? Sadly no, for us at least. As we push towards higher levels of autonomy, SAE levels 3 and above, we’re integrating more functionality into our SoCs, ... » read more
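
As a conceptual illustration of the lock-step operation mentioned above, the hypothetical sketch below runs the same computation on two redundant channels and flags any divergence; real lock-step compares cores cycle by cycle in hardware, so this only captures the compare-and-flag idea, not the mechanism.

```python
# Conceptual sketch of dual-channel lock-step: execute the same work on a
# primary and a checker channel and treat any mismatch as a detected fault.
# Real lock-step is implemented in hardware, cycle by cycle; this only
# illustrates the compare-and-flag idea.

def lockstep(compute, inputs):
    """Run 'compute' on two redundant channels and compare the results."""
    primary = compute(inputs)
    checker = compute(inputs)  # redundant channel
    if primary != checker:
        raise RuntimeError("Lock-step mismatch: fault detected")
    return primary

result = lockstep(sum, [1, 2, 3])
print("Lock-step channels agreed, result =", result)
```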

Memory Architectures In AI: One Size Doesn’t Fit All


In the world of regular computing, we are used to certain ways of architecting for memory access to meet latency, bandwidth and power goals. These have evolved over many years to give us the multiple layers of caching and hardware cache-coherency management schemes which are now so familiar. Machine learning (ML) has introduced new complications in this area for multiple reasons. AI/ML chips ca... » read more
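
As a reminder of what those layers of caching buy us, here is a small sketch of the textbook average-memory-access-time (AMAT) calculation; the latencies and miss rates are illustrative assumptions, not figures from any particular design.

```python
# Textbook average-memory-access-time (AMAT) estimate for a multi-level
# cache hierarchy: AMAT = hit_time + miss_rate * miss_penalty, applied
# level by level. Latencies (in cycles) and miss rates are assumptions.

levels = [
    # (name, hit latency in cycles, miss rate observed at this level)
    ("L1",   4, 0.10),
    ("L2",  12, 0.40),
    ("LLC", 40, 0.50),
]
dram_latency = 200  # assumed external-memory latency in cycles

def amat(levels, dram_latency):
    """Fold the hierarchy from DRAM upward into a single average latency."""
    penalty = dram_latency
    for _name, hit, miss in reversed(levels):
        penalty = hit + miss * penalty
    return penalty

print(f"Estimated AMAT: {amat(levels, dram_latency):.1f} cycles")
```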

AI: Where’s The Money?


A one-time technology outcast, artificial intelligence (AI) has come a long way. Now there’s a groundswell of interest and investment in products and technologies to deliver high-performance visual recognition, matching or besting human skills. Equally, speech and audio recognition are becoming more common and we’re even starting to see more specialized applications such as finding optimized... » read more

ISO 26262:2018, 2nd Edition: What Changes?


If you’re involved somehow in design for automotive electronics, you probably have more than a cursory understanding of the ISO 26262 standard. What your organization is working from is most likely the 2011 definition. The most recent update is formally known as ISO 26262:2018, less formally as ISO 26262 2nd Edition.

Figure 1. Overview of the ISO 26262:2018 series of standards (Source IS... » read more

AI Chips: NoC Interconnect IP Solves Three Design Challenges


New network-on-chip (NoC) interconnect IP is now available for artificial intelligence (AI) systems-on-chip (SoC). Arteris IP launched the fourth generation of the FlexNoC interconnect IP with a new optional AI package. These new NoC interconnect technologies solve many data flow problems in today’s AI designs. Innovative features address the requirements of the next generation of AI chips t... » read more

A Primer On Last-Level Cache Memory For SoC Designs


System-on-chip (SoC) architects have a new memory technology, last-level cache (LLC), to help overcome the design obstacles of bandwidth, latency and power consumption in megachips for advanced driver assistance systems (ADAS), machine learning, and data-center applications. An LLC is a standalone memory that inserts a cache between functional blocks and external memory to ease conflicting requireme... » read more
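
A rough sketch of why that helps, with an assumed hit rate and traffic figure rather than vendor data: every LLC hit is a transaction that never has to reach external memory.

```python
# Rough sketch of how a last-level cache eases external-memory pressure:
# every LLC hit is a transaction that never reaches DRAM. The hit rates
# and traffic figure are illustrative assumptions, not vendor data.

def external_bandwidth_gbps(total_traffic_gbps: float, llc_hit_rate: float) -> float:
    """Bandwidth still hitting external memory after the LLC filters out hits."""
    return total_traffic_gbps * (1.0 - llc_hit_rate)

total_traffic = 64.0  # assumed aggregate traffic from on-chip masters, in GB/s
for hit_rate in (0.0, 0.3, 0.6, 0.8):
    bw = external_bandwidth_gbps(total_traffic, hit_rate)
    print(f"LLC hit rate {hit_rate:.0%}: {bw:5.1f} GB/s reaches external memory")
```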
