Blog Review: July 20

AI for cameras; acoustics in consumer electronics; HPC hardware configuration; RF filters; designing circuits with AI.


Synopsys’ Ron Lowman examines the various neural networks used in camera applications, the balancing act between camera lens choice and the neural networks implemented, and how IP and embedded vision processors help optimize the designs.

Siemens’ Katie Tormala considers the importance of acoustic performance in consumer electronics and why it’s important to understand the relationships between thermal, structural, vibration and acoustic performance starting at the ideation phase.

Cadence’s Vinod Khera looks at the role edge AI could play in the future of healthcare, including detecting links between genetic codes, using surgical robots, improving diagnosis, and maximizing hospital efficiency.

Ansys’ Wim Slagter points to some key considerations when selecting the right high-performance computing (HPC) processor and hardware configuration for engineering simulation workflows to ensure optimal performance.

Lam Research’s David Haynes, Daniel Shin, and Lidia Vereen explore how radio frequency filters for Wi-Fi 6 and 5G devices allow in-band signals to be separated, and explain a critical step in RF filter manufacturing.

Renesas’ Tsahi Tal considers the benefits of including a dedicated listening radio in a Wi-Fi Access Point, particularly in situations where there are multiple networks, as in a multi-family dwelling, or locations with a multi-AP mesh network.

SEMI’s Serena Brischetto chats with Antoine Amade of Entegris about bridging the gap between semiconductor design and process engineering, and about how collaboration between the semiconductor and automotive industries can help advance semiconductor manufacturing for automotive devices.

A Rambus writer checks out what’s new in DDR5 and how a smarter DIMM helps increase data rates and support high-capacity DRAM devices.

Arm’s Chloe Jian Ma notes some key trends in cloud and enterprise storage and disk drives, such as larger capacities and more specialized storage tiers, higher performance interfaces, and more compute near storage.

Nvidia’s Rajarshi Roy, Jonathan Raiman, and Saad Godil discuss using reinforcement learning to design arithmetic circuits for the company’s Hopper GPU architecture, particularly focusing on optimizing the tradeoff between circuit area and delay in prefix circuits.
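The area/delay tradeoff the Nvidia authors optimize can be seen in miniature by comparing classic parallel-prefix topologies for an adder's carry network. The sketch below is illustrative only, not Nvidia's reinforcement-learning method: it uses operator-node count as a rough area proxy and logic depth as a rough delay proxy for two textbook structures, a serial (ripple) prefix and a Sklansky parallel prefix.

```python
import math

def ripple_cost(n):
    """Serial prefix network: minimal area, linear delay.

    Returns (nodes, depth), where nodes counts prefix-operator
    cells (area proxy) and depth counts logic levels (delay proxy).
    """
    return n - 1, n - 1

def sklansky_cost(n):
    """Sklansky parallel prefix (n a power of 2): log depth, more area.

    Each of the log2(n) levels uses n/2 prefix-operator cells.
    """
    levels = int(math.log2(n))
    return (n // 2) * levels, levels

# Wider circuits pull the two designs further apart: the ripple
# network stays small but slow, Sklansky stays shallow but grows.
for n in (8, 32, 64):
    r_nodes, r_depth = ripple_cost(n)
    s_nodes, s_depth = sklansky_cost(n)
    print(f"n={n:3d}  ripple: {r_nodes} nodes / depth {r_depth}"
          f"   sklansky: {s_nodes} nodes / depth {s_depth}")
```

Real prefix-adder design spaces (Kogge-Stone, Brent-Kung, hybrids) fill in the frontier between these two extremes, which is exactly the space an optimizer such as an RL agent can search.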

Onsemi’s Majid Dadafshar looks inside a CMOS image sensor to identify some critical factors to be aware of when designing a power supply solution to support them.

Plus, check out the blogs featured in the latest Low Power-High Performance newsletter:

Fraunhofer IIS EAS’s André Lange shines a light on how individual transistors degrade during normal operation and how these changes affect the circuit’s overall behavior.

Synopsys’ Gary Ruggles shows how new SSD form factors can take advantage of higher data rates and more lines.

Arm’s Remy Pottier examines the driving forces behind the metaverse and which applications could emerge first.

Rambus’ Emma-Jane Crozier explains how to reduce latency in head-mounted displays with video compression.

Cadence’s Shyam Sharma looks at why multiple approaches are needed to deal with data errors in the latest high-speed memories.

Ansys’ Kelly Morgan warns that thermal fatigue can result in warpage, solder weakness, breaking or cracking, and eventually overall product failure.

Synopsys’ Ricardo Borges and Anand Thiruvengadam explain why optimizing memory at advanced nodes requires it to be designed in the context of other technology.
