Week In Review: Design, Low Power

Smart-grid chips; Renesas acquires Celeno; EU HPC project; quantum error correction; sign-off for DARPA.


Deals
Utilidata and Nvidia are teaming up on a software-defined smart grid chip that can be embedded in smart meters with the aim of improving grid resiliency and integrating distributed energy resources (DERs) such as solar, storage, and electric vehicles. The U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) will test the software-defined smart grid chip as a way to scale and commercialize the lab’s Real-Time Optimal Power Flow (RT-OPF) technology. “To date, the scalability and commercial potential of technologies like RT-OPF have been limited by single-use hardware solutions,” said Santosh Veda, group manager for Grid Automation and Controls at NREL. “By developing a smart grid chip that can be embedded in one of the most ubiquitous utility assets – the smart meter – this approach will potentially enable wider adoption and commercialization of the technology and redefine the role of edge computing for DER integration and resiliency. Enhanced situational awareness and visibility from this approach will greatly benefit both the end customers and the utility.”
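The announcement doesn’t describe RT-OPF’s internals, but at its core an optimal power flow solver picks generator setpoints that minimize cost subject to power balance and network limits. A toy DC OPF sketch in Python (all numbers hypothetical, and no relation to NREL’s actual algorithm):

```python
# Toy DC optimal power flow: two generators serving one load over a
# capacity-limited line. Illustrative only -- not NREL's RT-OPF.
from scipy.optimize import linprog

demand = 90.0          # MW of load to serve (hypothetical)
costs = [20.0, 50.0]   # $/MWh for generators g1, g2 (hypothetical)
line_limit = 60.0      # MW cap on the line from g1 to the load

# Minimize cost subject to: g1 + g2 == demand,
# 0 <= g1 <= line_limit, 0 <= g2 <= 100 (generator capacity).
res = linprog(
    c=costs,
    A_eq=[[1.0, 1.0]], b_eq=[demand],
    bounds=[(0.0, line_limit), (0.0, 100.0)],
)
g1, g2 = res.x
print(f"g1={g1:.1f} MW, g2={g2:.1f} MW, cost=${res.fun:.0f}/h")
# The cheap generator runs up to its line limit (60 MW) and the
# expensive one covers the remaining 30 MW.
```

Running such an optimization continuously at the meter, rather than centrally, is the edge-computing angle the partnership is pitching.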

Renesas completed its acquisition of Celeno Communications, a provider of Wi-Fi chipsets and software solutions for high-performance home networks, smart buildings, enterprise, and industrial markets. Celeno also offers a Wi-Fi-based high-resolution imaging technology that tracks the motion, behavior, and location of people and objects. The deal is worth $315 million, to be paid in cash in stages as certain milestones are met.

Real Intent joined the DARPA Toolbox Initiative, a program to provide research teams access to commercial products via pre-negotiated, low-cost, non-production access frameworks and simplified legal terms. DARPA research teams will be granted access to Real Intent static sign-off software products for clock domain crossing, reset domain crossing, linting, and design for test sign-off. The products are certified for use in ISO 26262 functional safety compliant flows. “This agreement will streamline the availability of Real Intent’s high performance, multimode static sign-off products in the DARPA community,” said Prakash Narain, president and CEO of Real Intent. “Real Intent has supported military requirements for over a decade and this program deepens our engagement.”

Photonics
Tower Semiconductor and Juniper Networks announced a silicon photonics platform that monolithically co-integrates III-V lasers, semiconductor optical amplifiers (SOAs), electro-absorption modulators (EAMs), and photodetectors with silicon photonics devices on a single chip. Process design kits are expected to be available by year end, and the first open multi-project wafer (MPW) run is expected to be offered early next year. First samples of full 400Gb/s and 800Gb/s PIC reference designs with integrated lasers are expected to be available in the second quarter of 2022.

TriEye uncorked a VCSEL-powered electro-optic (EO) short-wave infrared (SWIR) system, integrating the company’s CMOS-based sensor with a vertical-cavity surface-emitting laser (VCSEL) as an illumination source. TriEye said the perception system will have longer range and better accuracy than previous NIR-based systems. It targets short-range applications such as mobile, biometrics, industrial automation, and medical.

Juniper Networks adopted Synopsys’ OptoCompiler platform, including the OptSim and PrimeSim HSPICE simulation solutions, for development of photonic-enabled chips for the next generation of optical communications. Juniper plans to use the Synopsys tools to design and optimize its hybrid silicon and InP optical platform for optical connectivity in data centers and telecom networks, as well as emerging applications in AI, lidar, and other sensors.

HPC
The European Processor Initiative, a collaboration among companies, universities, and research institutes to develop HPC chip technologies and infrastructure in the EU, shared the results of its first three years. In the General-Purpose Processor group, partners defined the architectural specifications of Rhea, the first generation of the EPI General-Purpose Processor (GPP). Rhea combines the Arm Neoverse V1 architecture with 29 RISC-V cores, seeking to offer a scalable and customizable solution for HPC applications. RTL is complete, and the full design implementation is currently in the validation stage using emulation. The Accelerator group is working on RISC-V vector architectures for HPC acceleration and developed a suite of technologies including a vector processing unit, a many-core stencil and tensor accelerator, and a variable-precision accelerator. It produced a test chip with multiple distributed banks of shared L2 cache and coherence home nodes optimized for the high-bandwidth requirements of the vector processing units and connected via a high-speed NoC. The Automotive group produced a proof of concept for an embedded high-performance compute (eHPC) platform and an associated software development kit tailored for automotive applications, demonstrated in a road-approved BMW X5.
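The stencil accelerator mentioned above targets a well-defined class of kernels: updates where every output point is a weighted sum of its grid neighbors. As a rough, hardware-agnostic sketch of that workload class (plain NumPy, not EPI’s design), a 5-point Jacobi stencil looks like this:

```python
# 5-point Jacobi stencil sweep -- the kind of regular, bandwidth-bound
# kernel a stencil accelerator targets. Illustrative workload only.
import numpy as np

def jacobi_step(u):
    """One stencil sweep over the interior of a 2D grid."""
    out = u.copy()                     # boundaries stay fixed
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return out

grid = np.zeros((64, 64))
grid[0, :] = 1.0                       # fixed boundary condition
for _ in range(100):                   # iterate toward steady state
    grid = jacobi_step(grid)
```

The regular access pattern and high data reuse are what make distributed L2 banks and a high-bandwidth NoC, as described in the test chip, pay off for such kernels.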

Lawrence Livermore National Laboratory (LLNL) established the AI Innovation Incubator (AI3) to encourage collaborations around hardware, software, tools, and utilities to accelerate AI for applied science, built upon LLNL’s cognitive simulation approach that combines state-of-the-art AI technologies with leading-edge HPC. Through the incubator, LLNL will work with Google, IBM, and Nvidia, as well as continue existing projects with Hewlett Packard Enterprise, AMD, SambaNova Systems, Cerebras Systems, and Aerotech. “The integration of AI with traditional high performance computing and data analysis methods that is the focus of AI3 will generate fundamentally new and transformative knowledge based computing capabilities for analysis, reasoning and decision making,” said IBM Future of Computing Systems Director James Sexton. Early research areas are expected to include advanced material design, 3D printing, predictive biology, energy systems, “self-driving” lasers, and fusion energy research.

Quantum computing
Sandia National Laboratories proposed a new benchmark for quantum computers to predict how likely it is that a quantum processor will run a specific program without errors. The researchers said that conventional benchmark tests underestimate many quantum computing errors, which can lead to unrealistic expectations of how powerful or useful a quantum machine is. “Our benchmarking experiments revealed that the performance of current quantum computers is much more variable on structured programs” than was previously known, said Timothy Proctor, a member of Sandia’s Quantum Performance Laboratory. “By applying our method to current quantum computers, we were able to learn a lot about the errors that these particular devices suffer — because different types of errors affect different programs a different amount.”
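The article doesn’t detail Sandia’s method, but the quoted observation, that different error types affect different programs by different amounts, can be illustrated with a toy model: incoherent (depolarizing-style) errors compound independently, while coherent over-rotations can add up constructively in a structured program. All numbers below are hypothetical:

```python
# Toy comparison of error models -- illustrative only, not Sandia's
# actual benchmark. Both models share the same per-gate error rate,
# yet predict very different success probabilities.
import numpy as np

n_gates = 20
eps = 0.005                           # per-gate error rate (assumed)

# Incoherent model: errors compound independently.
p_incoherent = (1 - eps) ** n_gates   # ~0.90

# Coherent model: each gate over-rotates by a small angle theta,
# with per-gate error ~ sin^2(theta/2); in a structured circuit the
# angles add, so after n gates the error is ~ sin^2(n*theta/2).
theta = 2 * np.arcsin(np.sqrt(eps))   # angle matching per-gate error
p_coherent = np.cos(n_gates * theta / 2) ** 2   # ~0.02

print(f"incoherent prediction: {p_incoherent:.3f}")
print(f"coherent worst case:   {p_coherent:.3f}")
```

A benchmark that averages over random circuits sees something like the first number; a structured program can experience the second, which is why conventional benchmarks can overstate a machine’s usefulness.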

QuTech researchers are working on improved error correction for quantum computers. The work builds on the principle that, by increasing redundancy and using more and more physical qubits to encode the data, the net error rate goes down. They designed a logical qubit consisting of seven physical qubits. “We show that we can do all the operations required for computation with the encoded information. This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a key step in quantum error correction,” said Prof. Barbara Terhal of QuTech. Prof. Leonardo DiCarlo of QuTech added, “Our grand goal is to show that as we increase encoding redundancy, the net error rate actually decreases exponentially. Our current focus is on 17 physical qubits and next up will be 49. All layers of our quantum computer’s architecture were designed to allow this scaling.”
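For a sense of the scaling DiCarlo describes, a common rule-of-thumb approximation for a distance-d code is p_L ≈ A(p/p_th)^((d+1)/2): below the threshold p_th, each step up in distance multiplies the suppression, which is the exponential decrease the team is aiming for. A quick sketch with assumed numbers (the mapping of 17 and 49 qubits to distances 3 and 5 assumes a surface-code layout):

```python
# Rule-of-thumb logical error scaling for a distance-d code:
#   p_L ~ A * (p / p_th) ** ((d + 1) // 2)
# All constants are illustrative assumptions, not QuTech's data.
p = 1e-3        # physical error rate per operation (assumed)
p_th = 1e-2     # code threshold (assumed)
A = 0.1         # prefactor (assumed)

for d, n_qubits in [(3, 17), (5, 49), (7, 97)]:
    p_logical = A * (p / p_th) ** ((d + 1) // 2)
    print(f"distance {d} ({n_qubits} qubits): p_L ~ {p_logical:.0e}")
# With p at a tenth of threshold, every distance step buys another
# factor of 10 in suppression: 1e-3 -> 1e-4 -> 1e-5.
```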


