CDNLive; generative AI collaborations; Microsoft’s AI chip; 24 Gb/s GDDR6 PHY; 3nm high-speed SerDes and interconnects; MIPI C-PHY/D-PHY IP; data center cooling solutions; RISC-V; quantum photonics.
Cadence rolled out a slew of new products at this week’s CDNLive Silicon Valley.
The debate about using AI in all professions hit a fever pitch this month, and the chip industry is no exception. This final article in a three-part series examines the checks, balances, and unknowns of AI/ML in semiconductor design.
Microsoft is developing its own AI chip, code-named Athena, to power large language models (LLMs), according to The Information. The first version is expected to use TSMC’s 5nm process and could reduce some of Microsoft’s dependence on NVIDIA.
Collaborations are increasingly tapping into generative AI, including one between Microsoft and Siemens. The venture integrates Siemens’ product lifecycle management (PLM) software with Microsoft’s Teams platform, the language models in the Azure OpenAI Service, and other Azure AI capabilities.
Big pharma company Moderna inked a deal with IBM to use generative AI and quantum computing to advance its mRNA science, the technology behind its COVID-19 vaccine.
Rambus achieved a new performance milestone for its GDDR6 memory interface, delivering up to 24 gigabits per second (Gb/s) per pin, which provides 96 gigabytes per second (GB/s) of bandwidth per GDDR6 memory device.
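The per-device bandwidth follows directly from the per-pin data rate. As a quick sanity check (assuming the standard 32-bit-wide GDDR6 device interface, which the article does not state explicitly):

```python
# Sanity-check the Rambus GDDR6 bandwidth figure.
# Assumption: a GDDR6 device exposes a 32-bit data interface
# (two 16-bit channels), per the common JEDEC configuration.
data_rate_gbps_per_pin = 24   # 24 Gb/s per data pin (from the article)
data_pins = 32                # 32-bit device interface (assumption)

bandwidth_gbps = data_rate_gbps_per_pin * data_pins  # total gigabits/s
bandwidth_gb_per_s = bandwidth_gbps / 8              # convert to gigabytes/s

print(bandwidth_gb_per_s)  # 96.0 GB/s per device, matching the article
```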
Mixel’s MIPI C-PHY/D-PHY IP is now integrated into Hercules Microelectronics’ HME-H3 FPGA, which is in mass production. This is the industry’s first FPGA to support MIPI C-PHY v2.0.
Ansys launched an all-in-one developer portal that offers access to enablement, documentation, and collaboration on Ansys simulation technologies.
Marvell demonstrated high-speed, ultra-high bandwidth silicon interconnects based on TSMC’s 3nm process, including 112G XSR SerDes (serializer/de-serializer), Long Reach SerDes, PCIe Gen 6 / CXL 3.0 SerDes, and a 240 Tbps parallel die-to-die interconnect. That speed is essential for chiplets, which Marvell has been selling since 2016 using a menu of options for its customers.
Intel is discontinuing its Blockscale 1000 Series ASIC, nearly a year after introducing the bitcoin mining chip series.
The amount of data is growing, and so is the need to process it closer to the source. The edge is a middle ground between the cloud and the endpoint: close enough to where data is generated to cut processing time, yet still powerful enough to analyze that data quickly and send it wherever it is needed. Making this work requires faster conduits for that data to reduce latency, and this is where 100G Ethernet is essential.
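The link-speed argument is easy to quantify. A rough illustration (transfer time only; it ignores protocol overhead, propagation delay, and the hypothetical 1 GB payload size is purely for scale):

```python
# Rough illustration of why link speed matters at the edge:
# raw time to move a payload at different Ethernet line rates.
def transfer_seconds(payload_bytes: float, link_gbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and propagation."""
    return payload_bytes * 8 / (link_gbps * 1e9)

payload = 1e9  # 1 GB of edge-generated data (illustrative figure)
print(transfer_seconds(payload, 10))   # ~0.8 s over 10G Ethernet
print(transfer_seconds(payload, 100))  # ~0.08 s over 100G Ethernet
```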
Intel’s quest for data center cooling solutions continues. In addition to immersion cooling, it is pursuing 3D vapor chambers embedded in coral-shaped heat sinks and AI-adjusted tiny jets that shoot cool water over hot spots in the chip to remove heat.
Fig. 1: 24 powered-on Intel Xeon-based servers in a tank filled with synthetic non-electrically conductive oil. Source: Intel
IBM and Siemens Digital Industries Software are developing a new combined systems engineering and asset management software solution to support traceability and sustainable product development.
Doing what has been done in the past only gets you so far, but RISC-V is causing some aspects of verification to be fundamentally rethought.
In a new paper, University of Bremen researchers propose a Polynomial Formal Verification (PFV) method based on Binary Decision Diagrams (BDDs) to fully verify a RISC-V processor.
Researchers at Georgia Tech and Cal Poly-SLO recently published a technical paper entitled, “Skybox: Open-Source Graphic Rendering on Programmable RISC-V GPUs”, presenting an “open-source hardware implementation and simulation framework of a RISC-V-based 3D graphics rendering accelerator that supports the Vulkan API,” including novel compiler and system optimization to support RISC-V.
A team of researchers from Leibniz University Hannover, the University of Twente, and the startup QuiX Quantum have built an entangled quantum light source fully integrated on a chip. “Our breakthrough allowed us to shrink the source size by a factor of more than 1,000, allowing reproducibility, stability over a longer time, scaling, and potentially mass-production. All these characteristics are required for real-world applications such as quantum processors,” stated Prof. Dr. Michael Kues of Leibniz University Hannover in a news release.
Photonics is poised for significant growth due to a rapid increase in data volumes and the need to move that data quickly and with minimal heat. Read how the industry is responding by incorporating more photonics into tools, and the testing and standards challenges that still need to be addressed.
Find upcoming chip industry events here.
Next week’s webinars include: multi-die data center chip design, system-level thermal signoff from chips through to racks, and the path to 1.6TbE with 224G Ethernet PHY IP.
Check out the latest Low Power-High Performance and Systems & Design newsletters for these highlights and more.
If you’d like to receive Semiconductor Engineering newsletters and alerts via email, subscribe here.