Blog Review: Sept. 18

Hurricane forecasting for semiconductors; Display Stream Compression; growing the photonics ecosystem; container runtime for edge computing.


Siemens’ Kyle Fraunfelter draws parallels between hurricane forecasting and semiconductor manufacturing to argue that digital twin models of the fabrication process become far more valuable when they incorporate real-time wafer fab measurements, much as weather models are continuously updated with live observations.

Cadence’s Rohini Kollipara introduces Display Stream Compression (DSC), a visually lossless compression algorithm that reduces the bandwidth needed to carry video from source to display, enabling higher resolutions and refresh rates while maintaining high image quality.
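
To make the bandwidth savings concrete, here is a back-of-the-envelope sketch (not drawn from the Cadence post): the uncompressed pixel-data rate is just resolution times refresh rate times color depth, and roughly 3:1 is a commonly cited visually lossless DSC operating point.

```python
# Back-of-the-envelope DSC bandwidth estimate (illustrative numbers only).
# Pixel-data rate = width * height * refresh * bits per pixel; blanking
# intervals and protocol overhead are ignored for simplicity.

def video_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel):
    """Approximate pixel-data rate in Gb/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

# 4K @ 120 Hz with 30-bit color (10 bits per RGB channel).
raw = video_bandwidth_gbps(3840, 2160, 120, 30)

# ~3:1 compression is a commonly cited visually lossless DSC operating
# point (e.g., 24 bpp source pixels coded at 8 bpp).
compressed = raw / 3

print(f"uncompressed : {raw:.1f} Gb/s")        # ~29.9 Gb/s
print(f"with 3:1 DSC : {compressed:.1f} Gb/s") # ~10.0 Gb/s
```

At these assumed settings, a stream that would otherwise strain a link budget fits comfortably, which is where the higher-resolution and higher-refresh headroom comes from.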

Synopsys’ Twan Korthorst highlights PhotonDelta’s role in helping photonics startups grow and in building out a photonic IC ecosystem, spanning R&D, design, manufacturing, and packaging, to support new photonic chip applications.

Arm’s Chris Adeniyi-Jones explores heterogeneity in edge computing and describes the implementation of a container runtime that can deploy applications onto additional processors within a system, such as a microcontroller embedded alongside application-profile cores in an SoC.

Keysight’s Marie Hattar cautions that as automotive radar systems become more complex, testing strategies must address many more potential scenarios, conditions, and system interactions.

Ansys’ Sak Arumugam and Jeff Bernier suggest adding a dedicated simulation process and data management (SPDM) solution to existing product lifecycle management software to accelerate product development, apply simulation more effectively throughout the product life cycle, reduce communication errors, and maximize the return on investment in simulation.

SEMI’s Melissa Grupen-Shemansky, Pushkar Apte, and Mark da Silva look to a future of adaptive manufacturing that combines human creativity with AI-enabled robotic precision, with data acquisition, data integrity and relevance, and operational digital twins serving as stepping stones toward automation and data exchange at every step of manufacturing.

And don’t miss the blogs featured in the latest Low Power-High Performance newsletter:

Siemens’ Farhad Ahmed explains how a proper reset domain crossing sign-off methodology helps avoid metastability and other problems.

Synopsys’ Robert Ruiz offers simulation tips covering partial designs, UVM testbench debug, AIP validation, ATPG test vectors, and incorporating outside IP.

Fraunhofer IIS/EAS’ Jens Michael Warmuth outlines why ADAS makes it necessary to adopt new approaches for automotive overcurrent protection.

Quadric’s Steve Roddy digs into selecting AI acceleration IP and why companies evaluating it will likely need to port one or more new ML networks using each vendor’s toolset.

Rambus’ Nidish Kamath delves into HBM4 and why increased bandwidth, board space savings, and power efficiency make up for higher implementation and manufacturing costs.
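
For a rough feel of the bandwidth side of that trade-off, the sketch below (illustrative parameters, not Rambus’ figures) assumes the widely discussed doubling of the HBM interface from 1,024 to 2,048 bits per stack and holds the per-pin rate constant.

```python
# Per-stack HBM bandwidth = interface width (bits) * per-pin rate (Gb/s) / 8.
# Parameters are assumptions for illustration; real device speeds vary.

def stack_bandwidth_gbs(width_bits, pin_rate_gbps):
    """Per-stack bandwidth in GB/s."""
    return width_bits * pin_rate_gbps / 8

hbm3 = stack_bandwidth_gbs(1024, 6.4)  # ~819 GB/s
hbm4 = stack_bandwidth_gbs(2048, 6.4)  # ~1,638 GB/s at the same pin rate

print(f"HBM3 stack: {hbm3:,.0f} GB/s")
print(f"HBM4 stack: {hbm4:,.0f} GB/s from the wider interface alone")
```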

Cadence’s Kos Gitchev shows how next-gen memory module technology doubles the DRAM data rate and maintains the RAS capabilities of RDIMM modules.
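
For a sense of scale, here is a hedged sketch (assumed rates, not taken from the Cadence post) of how multiplexing two ranks doubles the data rate seen at the host interface, and what that does to per-channel bandwidth.

```python
# Multiplexed-rank module sketch: two ranks running at the base DRAM rate
# are interleaved so the host interface runs at twice that rate.
# Rates below are assumptions for illustration.

def channel_bandwidth_gbs(transfer_rate_mts, bus_width_bytes=8):
    """Channel bandwidth in GB/s for a 64-bit (8-byte) data bus."""
    return transfer_rate_mts * 1e6 * bus_width_bytes / 1e9

dram_rate = 4400           # MT/s per rank (assumed)
host_rate = 2 * dram_rate  # 8,800 MT/s seen by the host

print(f"per rank : {channel_bandwidth_gbs(dram_rate):.1f} GB/s")  # 35.2
print(f"host side: {channel_bandwidth_gbs(host_rate):.1f} GB/s")  # 70.4
```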

Ansys’ Wim Slagter runs the numbers and finds the initial purchase cost typically accounts for about 50% of total expenses incurred over an HPC or AI system’s useful life.
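
As a quick sanity check of what that 50% figure implies, rather than a reproduction of Ansys’ model: if the purchase price is about half of lifetime cost, the total cost of ownership is roughly double the sticker price.

```python
# If the initial purchase is ~50% of lifetime cost, TCO ~= purchase / 0.5.
# The purchase price below is a hypothetical figure.

purchase_price = 1_000_000  # USD (assumed system cost)
purchase_share = 0.50       # share of lifetime cost, per the post

tco = purchase_price / purchase_share
operating = tco - purchase_price  # power, cooling, staff, maintenance, etc.

print(f"estimated lifetime TCO: ${tco:,.0f}")        # $2,000,000
print(f"non-purchase expenses : ${operating:,.0f}")  # $1,000,000
```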

Arm Research’s Alexandre Peixoto Ferreira contends that reference cores and hybrid runtime deployments could play a useful role in mitigating the complexity of heterogeneous edge computing.

Power architect Barry Pangrle suggests that adopting liquid cooling technology could significantly reduce the electricity costs of data centers.
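
To ballpark that effect with assumed figures (not Pangrle’s numbers): facility electricity scales with power usage effectiveness (PUE), so shifting cooling load from air handlers to more efficient liquid loops lowers PUE, and the annual bill with it.

```python
# Annual electricity cost = IT load (kW) * PUE * hours/year * price ($/kWh).
# The PUE values and electricity price are assumptions for illustration.

HOURS_PER_YEAR = 8760

def annual_cost_usd(it_load_kw, pue, price_per_kwh=0.10):
    """Yearly facility electricity cost for a given IT load and PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * price_per_kwh

air    = annual_cost_usd(1000, pue=1.6)   # assumed air-cooled facility
liquid = annual_cost_usd(1000, pue=1.15)  # assumed liquid-cooled facility

print(f"air-cooled   : ${air:,.0f}/yr")           # ~$1,401,600
print(f"liquid-cooled: ${liquid:,.0f}/yr")        # ~$1,007,400
print(f"savings      : ${air - liquid:,.0f}/yr")  # ~$394,200
```

Even with these modest assumptions, a 1 MW IT load saves on the order of a few hundred thousand dollars per year, which is why the economics get more compelling as rack densities climb.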


