Why this approach is so interesting today, and what it really entails.
Gideon Intrater, CTO at Adesto Technologies, talks about why in-memory computing is being taken seriously again, years after it was first proposed. What's changed is an explosion in data, along with a recognition that it is too time- and energy-intensive to shuttle all of that data back and forth between memory and processors on the same chip, let alone to the cloud and back. One new approach involves analog storage, which is still in the research stage.
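To make the basic idea concrete, here is a minimal Python sketch of one common analog in-memory compute scheme, in which weights are stored as cell conductances and a matrix-vector multiply happens in place via column current summation. All names, values, and the differential-pair mapping are illustrative assumptions for this sketch, not Adesto's implementation.

```python
import numpy as np

# Toy model of an analog in-memory compute array. Weights live in the
# array as conductances; driving the rows with input voltages produces
# column currents that are the dot products, so the multiply-accumulate
# happens where the data is stored instead of moving weights to a CPU.
rng = np.random.default_rng(0)

weights = rng.uniform(-1.0, 1.0, size=(4, 8))  # hypothetical weight matrix
inputs = rng.uniform(0.0, 1.0, size=4)         # input activations (voltages)

# Map signed weights onto two non-negative conductance arrays (a common
# differential-pair trick), and add read noise to mimic the limited
# precision of analog storage.
g_pos = np.clip(weights, 0.0, None)
g_neg = np.clip(-weights, 0.0, None)
noise = rng.normal(0.0, 0.01, size=(2,) + weights.shape)

i_pos = inputs @ (g_pos + noise[0])  # column currents (Kirchhoff summation)
i_neg = inputs @ (g_neg + noise[1])
analog_out = i_pos - i_neg           # differential sensing

digital_out = inputs @ weights       # exact digital reference
print("analog :", np.round(analog_out, 3))
print("digital:", np.round(digital_out, 3))
print("max error:", np.abs(analog_out - digital_out).max())
```

The residual error from the noise term is the crux of why analog approaches remain in research: they avoid the energy cost of data movement, but at the price of reduced precision.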
Giddy & Ed,
Great job clearly showing how in-memory computing works and how it can be applied. Also a very clear explanation of the benefits.