Incorporating quantum information storage units into quantum computing systems could enable devices with significantly fewer qubits in their processors.
Abstract:
“We analyze the performance of a quantum computer architecture combining a small processor and a storage unit. By focusing on integer factorization, we show a reduction by several orders of magnitude of the number of processing qubits compared with a standard architecture using a planar grid of qubits with nearest-neighbor connectivity. This is achieved by taking advantage of a temporally and spatially multiplexed memory to store the qubit states between processing steps. Concretely, for a characteristic physical gate error rate of 10⁻³ and a processor cycle time of 1 microsecond, factoring a 2048-bit RSA integer is shown to be possible in 177 days with 3D gauge color codes assuming a threshold of 0.75%, with a processor made with 13,436 physical qubits and a memory that can store 28 million spatial modes and 45 temporal modes with 2 hours’ storage time. By inserting additional error-correction steps, storage times of 1 second are shown to be sufficient at the cost of increasing the run-time by about 23%. Shorter run-times (and storage times) are achievable by increasing the number of qubits in the processing unit. We suggest realizing such an architecture using a microwave interface between a processor made with superconducting qubits and a multiplexed memory using the principle of photon echo in solids doped with rare-earth ions.”
View this technical paper here. Published 09/2021.
Gouzien, É., & Sangouard, N.
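As a rough sanity check on the figures quoted in the abstract, the memory capacity and run-time trade-off can be reproduced with simple arithmetic. The sketch below assumes the spatial and temporal modes multiplex independently (so total storage slots are their product), which is how such multiplexed memories are usually counted; it is an illustration, not a calculation from the paper itself.

```python
# Figures quoted in the abstract of Gouzien & Sangouard (2021).
spatial_modes = 28_000_000   # spatial modes in the multiplexed memory
temporal_modes = 45          # temporal modes per spatial mode

# Assumption: modes combine multiplicatively into total storage slots.
total_slots = spatial_modes * temporal_modes
print(f"Total multiplexed storage slots: {total_slots:,}")

# Run-time cost of the shorter (1 s) storage-time variant:
base_run_days = 177          # run-time with 2-hour storage
overhead = 0.23              # ~23% increase from extra error-correction steps
long_run_days = base_run_days * (1 + overhead)
print(f"Run-time with extra error correction: ~{long_run_days:.0f} days")
```

This puts the memory at roughly 1.26 billion mode slots, and the 1-second-storage variant at roughly 218 days instead of 177.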