Choosing The Right Server Interface Architectures For High Performance Computing


The bulk of the cost of a modern high-performance computing (HPC) installation lies in acquiring or provisioning many identical systems, interconnected by one or more networks, typically Ethernet and/or InfiniBand. Most HPC experts know that there are many choices among server manufacturers, along with options for form factor, CPU, RAM configuration, out-of-band management... » read more

Automate Memory Test Through A Shared Bus Interface


The use of memory-heavy IP in SoCs for automotive, artificial intelligence (AI), and processor applications is steadily increasing. However, these memory-heavy IP often have only a single access point for testing the memories. A shared bus architecture allows testing and repairing memories within IP cores through a single access point referred to as a shared bus interface. Within this interface... » read more
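As a rough illustration of the idea described above, the sketch below shows how firmware might exercise several embedded memories through one memory-mapped access point. Every register name, address, and bit field here is hypothetical and not taken from the article or any particular shared-bus implementation.

/* Hypothetical sketch: testing and repairing memories behind a single
 * shared-bus access point. All names and addresses are illustrative only. */
#include <stdint.h>

#define SHARED_BUS_BASE   0x40010000u                /* assumed base address */
#define REG_MEM_SELECT    (SHARED_BUS_BASE + 0x00u)  /* choose target memory */
#define REG_BIST_CTRL     (SHARED_BUS_BASE + 0x04u)  /* start/repair control */
#define REG_BIST_STATUS   (SHARED_BUS_BASE + 0x08u)  /* done/fail flags */

#define BIST_START        (1u << 0)
#define BIST_REPAIR       (1u << 1)
#define STATUS_DONE       (1u << 0)
#define STATUS_FAIL       (1u << 1)

static inline void reg_write(uintptr_t addr, uint32_t val) {
    *(volatile uint32_t *)addr = val;
}
static inline uint32_t reg_read(uintptr_t addr) {
    return *(volatile uint32_t *)addr;
}

/* Test (and if needed repair) one memory instance behind the shared bus. */
int test_memory(uint32_t mem_id) {
    reg_write(REG_MEM_SELECT, mem_id);     /* address the memory via the single access point */
    reg_write(REG_BIST_CTRL, BIST_START);  /* launch built-in self-test */
    while (!(reg_read(REG_BIST_STATUS) & STATUS_DONE))
        ;                                  /* poll until the controller reports completion */
    if (reg_read(REG_BIST_STATUS) & STATUS_FAIL) {
        reg_write(REG_BIST_CTRL, BIST_START | BIST_REPAIR);  /* retry with repair enabled */
        while (!(reg_read(REG_BIST_STATUS) & STATUS_DONE))
            ;
        return (reg_read(REG_BIST_STATUS) & STATUS_FAIL) ? -1 : 0;
    }
    return 0;
}

The point of the single access point is visible here: the same three registers are reused for every memory instance, with only the selector value changing.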

Data Retention Performance Of 0.13-μm F-RAM Memory


F-RAM (ferroelectric random access memory) is a non-volatile memory that uses a ferroelectric capacitor to store data. It offers higher write speeds than flash/EEPROM. This white paper provides a brief overview of the data retention performance of F-RAM memory. » read more

CXL and OMI: Competing or Complementary?


System designers are looking at any ideas they can find to increase memory bandwidth and capacity, focusing on everything from improvements to existing memory to entirely new memory types. But higher-level architectural changes can help fulfill both needs, even as memory types are abstracted away from CPUs. Two new protocols are helping to make this possible, CXL and OMI. But there is a looming question... » read more

Improving Performance And Simplifying Coding With XY Memory’s Implicit Parallelism


Instruction-level parallelism (ILP) refers to design techniques that enable more than one RISC instruction to be executed simultaneously, boosting processor performance by increasing the amount of work done in a given time interval and thereby increasing throughput. This parallelism can be explicit, where each additional instruction is explicitly part of the instruc... » read more
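As a loose illustration of the distinction, consider a dot-product loop. On a scalar RISC core each iteration is a sequence of separate load, multiply, and accumulate instructions; on a DSP-style core with dual data memories (such as an XY memory arrangement), the hardware can perform the two operand fetches and the multiply-accumulate together from a single instruction. The C below is only the portable starting point; the parallel execution is an assumed property of the target core, not something expressed in the source code.

#include <stdint.h>

/* Plain C dot product. On a scalar RISC core each iteration compiles to
 * two loads, a multiply, an add, and loop overhead executed one after
 * another. A core with XY-style dual data memories can fetch x[i] and y[i]
 * and perform the multiply-accumulate in the same cycle, so the same
 * source runs with implicit parallelism and no code changes. */
int64_t dot_product(const int16_t *x, const int16_t *y, int n)
{
    int64_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)x[i] * y[i];
    return acc;
}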

2022 Chip Forecast: Mixed Signals


Jim Feldhan, president of Semico Research, sat down with Semiconductor Engineering to talk about the outlook for the semiconductor market. SE: What was your final 2021 semiconductor forecast? What is your 2022 semiconductor forecast? Feldhan: For 2021, world semiconductor revenues totaled $558 billion and unit shipments topped 1.1 trillion. In terms of growth rate, revenues increased 2... » read more

It’s Official: HBM3 Dons The Crown Of Bandwidth King


With the publication of the HBM3 update to the High Bandwidth Memory (HBM) standard, a new king of bandwidth has been crowned. The torrid performance demands of advanced workloads, with AI/ML training leading the pack, drive the need for ever faster delivery of bits. Memory bandwidth is a critical enabler of computing performance, hence the need for the accelerated evolution of the standard with HBM3 r... » read more
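To put a rough number on that claim, the back-of-the-envelope calculation below uses the headline HBM3 figures of 6.4 Gb/s per pin across a 1024-bit stack interface; treat the constants as illustrative rather than as values quoted from the article.

#include <stdio.h>

/* Back-of-the-envelope peak bandwidth for one HBM3 stack:
 * per-pin data rate (Gb/s) x interface width (bits) / 8 bits per byte. */
int main(void)
{
    const double gbps_per_pin = 6.4;   /* headline HBM3 per-pin rate */
    const int width_bits = 1024;       /* HBM stack interface width */
    double gbytes_per_s = gbps_per_pin * width_bits / 8.0;
    printf("Peak per-stack bandwidth: %.1f GB/s\n", gbytes_per_s);  /* ~819.2 GB/s */
    return 0;
}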

Thin Quad Die Package (QDP) Development


In the world of solid-state memory fabs, bits per mm² rule. In the memory packaging market, mm² of silicon within a given package thickness is the defining metric. Both the memory architecture of the wafer and the package technology take advantage of 3D structures to achieve best-in-class bit density. In the case of the wafer fab, 3D NAND and other technologies are pushing the envelope to meet ev... » read more

HyperRAM As A Low Pin-Count Expansion Memory For Embedded Systems


Rapid advances in microelectronics are driving megatrends across industries, creating a need for new technologies and optimized devices with better performance. With the growing electronics content of automotive, industrial, smart home, and IoT devices generating large volumes of data, there is a need to process and render that information seamlessly. Application platfo... » read more

Scaling DDR5 RDIMMs To 5600 MT/s


Looking forward to 2022, the first DDR5-based servers will hit the market with RDIMMs running at 4800 megatransfers per second (MT/s). This is a 50% increase in data rate over the top-end 3200 MT/s DDR4 RDIMMs in current high-performance servers. DDR5 memory incorporates a number of innovations, such as Decision Feedback Equalization (DFE), and a new DIMM architecture, which enable that speed... » read more
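The 50% figure follows directly from the data rates (4800 / 3200 = 1.5). As a quick illustration, the snippet below converts those rates to peak bandwidth per module, assuming the standard 64-bit DIMM data width, an assumption not stated in the excerpt.

#include <stdio.h>

/* Peak DIMM bandwidth = transfers per second x 8 bytes per 64-bit transfer. */
int main(void)
{
    const double ddr4_mts = 3200.0, ddr5_mts = 4800.0;  /* data rates in MT/s */
    const double bytes_per_transfer = 8.0;               /* 64-bit data path */
    printf("DDR4-3200: %.1f GB/s\n", ddr4_mts * bytes_per_transfer / 1000.0);  /* 25.6 GB/s */
    printf("DDR5-4800: %.1f GB/s\n", ddr5_mts * bytes_per_transfer / 1000.0);  /* 38.4 GB/s */
    printf("Increase: %.0f%%\n", (ddr5_mts / ddr4_mts - 1.0) * 100.0);         /* prints 50% */
    return 0;
}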
