Blog Review: Feb. 19

Efficient ML architectures; readable code; healthy supply chains.

Arm’s Urmish Thakker takes a look at TinyML and some of the challenges in developing efficient architectures for resource-constrained devices, and explains Kronecker product compression.
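The core idea behind Kronecker product compression can be sketched in a few lines: a large weight matrix is replaced by the Kronecker product of two much smaller factors, cutting the parameter count dramatically. The sizes below are illustrative choices, not figures from Thakker's post:

```python
import numpy as np

# Illustrative sizes: a dense 256x256 layer holds 65,536 weights.
# Represent it instead as the Kronecker product of two 16x16 factors.
A = np.random.randn(16, 16)
B = np.random.randn(16, 16)
W = np.kron(A, B)  # reconstructs a full 256x256 matrix on demand

dense_params = W.size            # 65,536 weights if stored densely
kron_params = A.size + B.size    # only 512 weights to store
print(W.shape, dense_params // kron_params)  # (256, 256) 128
```

In practice the factors are learned so that A ⊗ B approximates the original trained weights; the sketch only shows where the 128x storage saving comes from.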

Mentor’s Colin Walls considers whether it’s better to use a single return or multiple returns from a function when writing understandable, readable code.
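The trade-off can be illustrated with a toy search function (an illustrative sketch, not an example from Walls' post): multiple returns exit as soon as the answer is known, while a single return funnels every path through one exit point.

```python
def find_multiple(values, target):
    # Multiple returns: bail out the moment the answer is known.
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

def find_single(values, target):
    # Single return: one exit point; the result is carried in a variable.
    result = -1
    for i, v in enumerate(values):
        if v == target:
            result = i
            break
    return result
```

Both behave identically; the debate is about which form is easier to read, review, and instrument, especially in safety-conscious embedded codebases.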

Cadence’s Paul McLellan shares highlights from a presentation by Benedict Evans on the current state of the Internet and what can be expected in the near future.

Synopsys’ Matt Novak points to how heads-up displays in vehicles could help reduce crashes caused by distracted driving.

Rambus’ Steven Woo looks at the dynamic between increased compute capability, new AI methods and applications, and the abundance of digital data that needs processing.

VLSI Research’s Julian West warns of significant risks for buyers of critical subsystems for semiconductor manufacturing should their preferred supplier not be able to deliver.

Imagination’s Rys Sommefeldt contends that real-time ray tracing acceleration is the biggest upset to the unwritten rules of the GPU in the last 15 years.

Verification blogger Tudor Timi argues that trivializing compile times is a productivity-draining mistake when it comes to large projects, and dives into how SystemVerilog compilation works.

Silicon Labs’ Kevin Smith digs into parasitic PLLs and how to minimize injection sensitivity and reduce the risk of injection pulling or injection locking.

ANSYS’ Kaitlin Tyler looks at the three-to-seven-year process of designing new medical devices and preparing them for regulatory approval.

NXP’s Stuart Forbes predicts that highly connected smart factories and Industry 4.0 will come to rely on 5G for private wireless networks through features like Low Power Wide Area support and Ultra-Reliable Low-Latency Communications.

Western Digital’s Itzik Gilboa says that 5G will require fundamental architectural changes, as common hardware interfaces like e.MMC lack the capabilities to move data fast enough.

Plus, check out the highlighted blogs from last week’s Low Power-High Performance newsletter:

Editor in Chief Ed Sperling points to what’s needed to add more efficiency into complex systems.

Fraunhofer’s Dirk Mayer and Olaf Enge-Rosenblatt offer some perspective on integrating hardware and algorithms.

Rambus’ Niall Sorensen and Malini Narayanammoorthi describe how an IP vendor can help integrate SerDes IP in an ASIC design project and the subsequent production ramp-up.

Synopsys’ Ron Lowman takes a tour of what edge computing is and how AI will make edge computing and its hybrids pervasive.

Mentor’s Flint Yoder details how performing topological analysis on the schematic netlist quickly identifies latch-up sensitive scenarios.

Cadence’s Paul McLellan weaves chiplets, packaging and some interesting new challenges together.

Moortec’s Stephen Crosher explains why chip designers need more data to hear what chips are saying in real time.

Arm’s Chris Bergey reports that cloud AI won’t cope with the coming device data deluge on its own, and that is where the AI edge comes in.
