What kinds of memories work best where and why.
Steven Woo, Rambus fellow and distinguished inventor, talks with Semiconductor Engineering about different memory options, why some are better than others for certain tasks, and what the tradeoffs are between the different memory types and architectures.
Related Articles/Videos
Memory Tradeoffs Intensify In AI, Automotive Applications
Why choosing memories and architecting them into systems is becoming much more difficult.
In-Memory Computing Challenges Come Into Focus
Researchers digging into ways around the von Neumann bottleneck.
In-Memory Vs. Near-Memory Computing
New approaches are competing for attention as scaling benefits diminish.
GDDR6 – HBM2 Tradeoffs
What type of DRAM works best where.
Latency Under Load: HBM2 Vs. GDDR6
Why choosing memory depends upon data traffic.
Target: 50% Reduction In Memory Power
Is it possible to reduce the power consumed by memory by 50%? Yes, but it requires work in the memory and at the architecture level.
Hybrid Memory
Tech Talk: How long can DRAM scaling continue?