
Tradeoffs In Archiving Data


If you’ve ever had to sort through old technical documents, wondering what still has value and what can be safely tossed, you can identify with the quandary of Thomas Levy, UCSD professor of anthropology and co-founder of the field of cyber-archaeology. Staring at thousands of pieces of pottery in a Jordanian desert, he erred on the side of keeping it all. “My personal perspective when ... » read more

PCIe 6.0, NVMe, And Emerging Form Factors For Storage Applications


PCIe 6.0 implementations are expandable and hierarchical with embedded switches or switch chips, allowing one root port to interface with multiple endpoints (such as storage devices, Ethernet cards, and display drivers). While the introduction of PCIe 6.0 at 64 GT/s helped to increase the bandwidth available for storage applications with minimal or no increase in latency, the lack of coherency s... » read more

How New Storage Technologies Enhance HPC Systems


High-performance computing (HPC) has historically been available primarily to governments, research institutions, and a few very large corporations for modeling, simulation, and forecasting applications. As HPC platforms are being deployed in the cloud for shared services, high-performance computing is becoming much more accessible, and its use is benefiting organizations of all sizes. Increasi... » read more

Improving Memory Efficiency And Performance


This is the second of two parts on CXL vs. OMI. Part one can be found here. Memory pooling and sharing are gaining traction as ways of optimizing existing resources to handle increasing data volumes. Using these approaches, memory can be accessed by a number of different machines or processing elements on an as-needed basis. Two protocols, CXL and OMI, are being leveraged to simplify thes... » read more

CXL and OMI: Competing or Complementary?


System designers are looking at any ideas they can find to increase memory bandwidth and capacity, focusing on everything from improvements in memory to new types of memory. But higher-level architectural changes can help to fulfill both needs, even as memory types are abstracted away from CPUs. Two new protocols are helping to make this possible, CXL and OMI. But there is a looming question... » read more

Power/Performance Bits: April 13


Speedy data transfer Researchers from MIT, Intel, and Raytheon developed a new data transfer system that both boosts speeds and reduces energy use by taking elements from both traditional copper cables and fiber optics. "There's an explosion in the amount of information being shared between computer chips -- cloud computing, the internet, big data. And a lot of this happens over conventiona... » read more

Network Interface Card Evolution


Longer chip lifetimes, more data to process and move, and a slowdown in the rate of processor improvements have created a series of constantly shifting bottlenecks. Kartik Srinivasan, director of data center marketing at Xilinx, looks at one of those bottlenecks, the network interface card, why continuous enhancements and changes will be required, and how to extend the life of NICs as the networ... » read more

Usage Models Driving Data Center Architecture Changes


Data center architectures are undergoing a significant change, fueled by more data and much greater usage from remote locations. Part of this shift involves the need to move some processing closer to the various memory hierarchies, from SRAM to DRAM to storage. There is more data to process, and it takes less energy and time to process that data in place. But workloads also are being distrib... » read more

Virtual Verification Of Computational Storage Devices


Over recent years, there has been a move to replace hard-disk drive (HDD) storage with solid-state drive (SSD) storage. SSDs are faster, contain no moving parts that can fail or be affected by environmental hazards, and their cost has been dropping each year. Unfortunately, the verification of an SSD is quite complex. In particular because of hyperscale datacenter enterprise and client-dr... » read more

Moving Data And Computing Closer Together


The speed of processors has increased to the point where they often are no longer the performance bottleneck for many systems. It's now about data access. Moving data around costs both time and power, and developers are looking for ways to reduce the distances that data has to move. That means bringing data and memory nearer to each other. “Hard drives didn't have enough data flow to cr... » read more
