Blog Review: April 10


Arm's Paul Whatmough discusses the growing use of real-time computer vision on mobile devices and proposes transfer learning as a way to enable neural network workloads on resource-constrained hardware. Cadence's Anton Klotz highlights a collaboration with Imec and TU Eindhoven on cell-aware test that reduces defect simulation time by filtering out defects with equivalent fault effects. M... » read more

GDDR6 – HBM2 Tradeoffs


Steven Woo, Rambus fellow and distinguished inventor, talks about why designers choose one memory type over another. Applications for each were clearly delineated in the past, but the lines are starting to blur. Nevertheless, tradeoffs remain around complexity, cost, performance, and power efficiency. Related video: Latency Under Load: HBM2 vs. GDDR6, on why data traffic and bandw... » read more
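
As a rough illustration of the bandwidth side of that tradeoff, the sketch below computes theoretical peak per-device bandwidth from interface width and per-pin data rate. The specific figures (a 1024-bit HBM2 stack at 2.4 Gb/s per pin, a 32-bit GDDR6 device at 16 Gb/s per pin) are typical published values chosen for illustration, not numbers from the article.

```python
# Back-of-the-envelope peak bandwidth: pins * per-pin rate / 8 bits per byte.
# Interface widths and per-pin rates are typical published values, not from the article.

def peak_bandwidth_gbs(interface_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a memory interface."""
    return interface_bits * gbps_per_pin / 8

configs = {
    "HBM2 stack (1024-bit @ 2.4 Gb/s/pin)": (1024, 2.4),
    "GDDR6 device (32-bit @ 16 Gb/s/pin)": (32, 16.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.1f} GB/s peak")

# HBM2 stack (1024-bit @ 2.4 Gb/s/pin): 307.2 GB/s peak
# GDDR6 device (32-bit @ 16 Gb/s/pin): 64.0 GB/s peak
```

The wide, slow interface is where much of HBM2's packaging cost and complexity lives, while GDDR6 reaches comparable system bandwidth only by running many narrow devices at much higher per-pin rates, which shifts the burden toward signal integrity and board design.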

3D NAND Metrology Challenges Growing


3D NAND vendors face several challenges to scale their devices to the next level, but one manufacturing technology stands out as much more difficult at each turn—metrology. Metrology, the art of measuring and characterizing structures, is used to pinpoint problems and ensure yields for all chip types. In the case of 3D NAND, the metrology tools are becoming more expensive at each iteration... » read more

Memory Architectures In AI: One Size Doesn’t Fit All


In the world of regular computing, we are used to certain ways of architecting for memory access to meet latency, bandwidth and power goals. These have evolved over many years to give us the multiple layers of caching and hardware cache-coherency management schemes which are now so familiar. Machine learning (ML) has introduced new complications in this area for multiple reasons. AI/ML chips ca... » read more
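
One way to make those complications concrete: a single convolutional layer's weights and activations can exceed the last-level cache that conventional hierarchies rely on, and the access pattern is largely streaming with little temporal reuse. The sketch below is a rough footprint estimate; the layer shape, fp16 storage, and 4 MB cache size are illustrative assumptions, not figures from the post.

```python
# Rough working-set estimate for a single 3x3 conv layer vs. a typical cache.
# Layer shape, fp16 storage, and the 4 MB cache size are illustrative assumptions.

def conv_layer_footprint_mb(h, w, c_in, c_out, k=3, bytes_per_value=2):
    """Approximate weight + activation storage for one KxK conv layer, in MB."""
    weights = k * k * c_in * c_out * bytes_per_value
    activations = h * w * (c_in + c_out) * bytes_per_value  # input + output feature maps
    return (weights + activations) / 2**20

footprint = conv_layer_footprint_mb(h=112, w=112, c_in=128, c_out=256)
typical_cache_mb = 4  # a plausible embedded last-level cache, for scale

print(f"One layer's working set: ~{footprint:.1f} MB vs. {typical_cache_mb} MB of cache")
# One layer's working set: ~9.8 MB vs. 4 MB of cache
```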

Week In Review: Manufacturing, Test


Chipmakers: TrendForce released its foundry rankings for the first quarter of 2019. TSMC is still the clear leader, followed in order by Samsung, GlobalFoundries and UMC, according to the firm. It was a tough quarter for all foundries. Samsung has rolled out its new High Bandwidth Memory (HBM2E) product. The new solution, called Flashbolt, is the industry’s first HBM2E to deliver a 3.2 Gbps... » read more

Slow And Cautious Start To 2019 For Memory Manufacturers


Both NAND and DRAM prices began dropping in the second half of 2018 after a couple of years at record highs. Product oversupply and excess inventories are signaling a bleak outlook for the memory market in the first half of 2019. With these conditions in mind, SK Hynix and Samsung have slowed or put on hold their capacity expansion plans for 2H18 and 2019. The chart below shows DRAM capacity... » read more

Memory Tradeoffs Intensify In AI, Automotive Applications


The push to do more processing at the edge is putting a strain on memory design, use models and configurations, leading to some complex tradeoffs in designs across a variety of markets. The problem is that these architectures are evolving alongside the new markets they serve, and it isn't always clear how data will move across these chips, between devices, and between systems. Chip architectures are becom... » read more
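
Part of what makes these tradeoffs so hard at the edge is that moving data costs far more energy than computing on it. The sketch below prints rough, frequently cited per-operation energy estimates (on the order of Mark Horowitz's ~45 nm survey figures) purely to show relative scale; none of these numbers come from the article.

```python
# Order-of-magnitude energy per 32-bit operation, rough ~45 nm estimates
# (frequently cited survey figures, used here only for relative scale).

ENERGY_PJ = {
    "32-bit integer add": 0.1,
    "32-bit float multiply": 3.7,
    "32-bit read from 8 KB SRAM": 5.0,
    "32-bit read from DRAM": 640.0,
}

baseline = ENERGY_PJ["32-bit integer add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:28s} {pj:7.1f} pJ  (~{pj / baseline:,.0f}x an integer add)")

# The last line is the point: one off-chip DRAM access costs thousands of times
# more energy than the arithmetic it feeds, so where data lives and how it moves
# dominates edge memory architecture decisions.
```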

The Importance Of Using The Right DDR SDRAM Memory


Selecting the right memory technology is often the most critical decision for achieving optimal system performance. Designers continue to add more cores and functionality to their SoCs; however, increasing performance while keeping power consumption low and silicon footprint small remains a vital goal. DDR SDRAMs (DRAMs for short) meet these memory requirements by offering a dense, high-perf... » read more
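
As a quick illustration of what "right" can mean in bandwidth terms, the sketch below computes theoretical peak bandwidth from data rate and bus width for a few common configurations; the parts and channel widths are illustrative choices, not recommendations from the article.

```python
# Theoretical peak bandwidth = data rate (MT/s) * bus width (bits) / 8 bits per byte.
# Parts and channel widths are illustrative, not recommendations from the article.

def peak_bw_gbs(mts: int, bus_bits: int) -> float:
    return mts * bus_bits / 8 / 1000  # MB/s -> GB/s

configs = [
    ("DDR3-1600, 64-bit channel", 1600, 64),
    ("DDR4-3200, 64-bit channel", 3200, 64),
    ("LPDDR4X-4266, 32-bit channel", 4266, 32),
]

for name, mts, bits in configs:
    print(f"{name}: {peak_bw_gbs(mts, bits):.1f} GB/s peak")

# DDR3-1600, 64-bit channel: 12.8 GB/s peak
# DDR4-3200, 64-bit channel: 25.6 GB/s peak
# LPDDR4X-4266, 32-bit channel: 17.1 GB/s peak
```

Raw data rate rarely settles the choice on its own; sustained bandwidth under real traffic, latency, power per bit, and package cost all pull in different directions.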

GDDR6: Signal Integrity Challenges For Automotive Systems


Signal integrity (SI) is at the forefront of SoC and system designers’ thinking as they plan for upcoming high-speed GDDR6 DRAM and PHY implementations for automotive and advanced driver assistance system (ADAS) applications. Rambus and its partners are looking closely at how GDDR6’s 16 Gb/s per-pin data rate affects signal integrity given the cost and system constraints for a... » read more
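
For a sense of scale, the sketch below works out the unit interval at GDDR6-class per-pin rates and how much of the eye a modest timing budget consumes; the 10 ps jitter-plus-skew figure is an illustrative assumption, not a number from Rambus.

```python
# Unit interval (one bit time) at GDDR6-class per-pin data rates, and how much
# of the eye a fixed timing budget consumes. The 10 ps figure is illustrative.

def unit_interval_ps(gbps: float) -> float:
    """One bit time in picoseconds at the given per-pin data rate."""
    return 1e12 / (gbps * 1e9)

timing_budget_ps = 10.0  # assumed total jitter + skew consumed, for scale
for rate in (12, 14, 16):
    ui = unit_interval_ps(rate)
    print(f"{rate} Gb/s: UI = {ui:.2f} ps, "
          f"{timing_budget_ps:.0f} ps of jitter/skew uses {100 * timing_budget_ps / ui:.0f}% of it")

# 12 Gb/s: UI = 83.33 ps, 10 ps of jitter/skew uses 12% of it
# 14 Gb/s: UI = 71.43 ps, 10 ps of jitter/skew uses 14% of it
# 16 Gb/s: UI = 62.50 ps, 10 ps of jitter/skew uses 16% of it
```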

Inference Acceleration: Follow The Memory


Much has been written about the computational complexity of inference acceleration: very large matrix multiplies for fully-connected layers and huge numbers of 3x3 convolutions across megapixel images, both of which require many thousands of MACs (multiplier-accumulators) to achieve high throughput for models like ResNet-50 and YOLOv3. The other side of the coin is managing the movement of d... » read more
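
To put "many thousands of MACs" in perspective, the sketch below counts the multiply-accumulates for a single 3x3 convolution layer over a megapixel-class feature map; the layer shape is an illustrative assumption, not a layer taken from ResNet-50 or YOLOv3.

```python
# MACs for one KxK convolution layer: each output pixel of each output channel
# needs K*K*C_in multiply-accumulates. Layer shape is an illustrative assumption.

def conv_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int = 3) -> int:
    return h_out * w_out * c_out * (k * k * c_in)

macs = conv_macs(h_out=1024, w_out=1024, c_in=64, c_out=64)
print(f"~{macs / 1e9:.1f} GMACs for one layer")  # ~38.7 GMACs

# Even at 1,000 MACs per cycle and 1 GHz that is ~39 ms for a single layer,
# which is why accelerators deploy thousands of parallel MACs, and why keeping
# those MACs fed with weights and activations becomes the real problem.
```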
