Low-Overhead Fault-Tolerant Quantum Memory (IBM)


A new technical paper titled "High-threshold and low-overhead fault-tolerant quantum memory" was published by researchers at IBM Quantum. Abstract: "The accumulation of physical errors prevents the execution of large-scale algorithms in current quantum computers. Quantum error correction promises a solution by encoding k logical qubits onto a larger number n of physical qubits, such t... » read more
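The [[n, k]] encoding idea in the abstract can be illustrated with a purely classical analogy: a three-bit repetition code, where one logical bit (k = 1) is spread across three physical bits (n = 3) so that any single bit flip can be corrected by majority vote. This is a hypothetical sketch for intuition only; the IBM paper concerns quantum LDPC codes, not this toy code.

```python
# Classical analogy for the [[n, k]] idea: encode k = 1 logical bit
# into n = 3 physical bits; a single physical error is corrected by
# majority vote. (Illustrative only; not the paper's construction.)

def encode(bit):
    """Encode one logical bit into three physical bits."""
    return [bit, bit, bit]

def decode(bits):
    """Recover the logical bit by majority vote."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1               # a single physical-bit error
assert decode(codeword) == 1   # the logical bit survives
```

The overhead here is n/k = 3 physical bits per logical bit; the point of low-overhead codes is to shrink that ratio while keeping a high error threshold.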

Data Integrity For JEDEC DRAM Memories


With DRAM fabrication advancing from 1x to 1y to 1z, and further to the 1a, 1b, and 1c nodes, and with device speeds rising to 8533 Mbps for LPDDR5 and 8800 Mbps for DDR5, data integrity is becoming a critical issue. OEMs and other users must consider it as part of any system that relies on the correctness of data stored in DRAM to work as designed. I... » read more

ETH Zurich Introduces ProTRR, in-DRAM Rowhammer Mitigation


A new technical paper titled "PROTRR: Principled yet Optimal In-DRAM Target Row Refresh" was published by ETH Zurich. The paper was presented at the 43rd IEEE Symposium on Security and Privacy (SP 2022), San Francisco, CA, USA, May 22–26, 2022. It introduces ProTRR, an "in-DRAM Rowhammer mitigation that is secure against FEINTING, a novel Rowhammer attack." The related video presentation can... » read more

More Errors, More Correction in Memories


As memory bit cells of any type become smaller, bit error rates increase due to lower margins and process variation. Error correction can compensate by detecting and correcting bit errors, but more sophisticated error-correction codes (ECC) require more silicon area, which in turn drives up the cost. Given this trend, the looming question is whether the cost of ... » read more

What Designers Need to Know About Error Correction Code (ECC) In DDR Memories


As with any electronic system, errors in the memory subsystem are possible due to design failures/defects or electrical noise in any one of its components. These errors are classified as either hard errors (caused by design failures) or soft errors (caused by system noise or memory-array bit flips due to alpha particles, etc.). To handle these memory errors at runtime, the memory subsyst... » read more
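The single-error correction at the heart of memory ECC can be sketched with a Hamming(7,4) code, the classic building block behind the SEC-DED schemes used in DDR subsystems. This is an illustrative sketch only: real DDR ECC typically protects 64-bit words with 8 check bits, and the function names here are hypothetical.

```python
# Hamming(7,4) sketch: 4 data bits + 3 parity bits, able to locate
# and correct any single bit flip (e.g. a soft error from an alpha
# particle). Illustrative only; real DDR ECC uses wider SEC-DED codes.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct one flipped bit; returns (data_bits, error_position)."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]], syndrome

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a soft error at position 5
data, pos = hamming74_correct(word)
assert data == [1, 0, 1, 1] and pos == 5
```

The syndrome bits read off the error's binary position directly, which is why the parity bits sit at power-of-two positions; SEC-DED adds one more overall parity bit so double-bit errors can at least be detected.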

Taming Novel NVM Non-Determinism


New memory technologies may have non-deterministic characteristics that add calibration to the test burden, and they may require recalibration over their lifetime. Many of these memories are being developed in the search for a storage-class memory (SCM) technology that can bridge the gap between larger, slower memories like flash and faster DRAM. There are several approache... » read more