Will In-Memory Processing Work?


The cost associated with moving data in and out of memory is becoming prohibitive, both in terms of performance and power, and it is being made worse by poor data locality in many algorithms, which limits the effectiveness of caches. The result is the first serious assault on the von Neumann architecture, which made the computer simple, scalable and modular. It separated the notion of a computatio... » read more
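
A quick way to see how data locality determines how much help a cache can give is to traverse the same array in a cache-friendly order and then in a cache-hostile one. The sketch below is not from the article; it is a minimal NumPy timing experiment with an assumed array size, and the exact ratio will vary by machine.

```python
import time
import numpy as np

N = 4000
a = np.random.rand(N, N)  # C-ordered (row-major) array, roughly 128 MB of float64

# Row-wise traversal: each a[i, :] is contiguous in memory, so cache lines
# fetched from DRAM are fully used before being evicted.
t0 = time.perf_counter()
total = 0.0
for i in range(N):
    total += a[i, :].sum()
t_rows = time.perf_counter() - t0

# Column-wise traversal: each a[:, j] strides across rows, so most of every
# cache line fetched from DRAM goes unused, meaning far more data movement
# per useful byte.
t0 = time.perf_counter()
total = 0.0
for j in range(N):
    total += a[:, j].sum()
t_cols = time.perf_counter() - t0

print(f"row-major traversal:    {t_rows:.3f} s")
print(f"column-major traversal: {t_cols:.3f} s")
```

The arithmetic in both loops is identical; only the order of memory accesses changes, which is exactly the locality effect that makes data movement so expensive.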

Enabling Practical Processing in and near Memory for Data-Intensive Computing


Source: ETH Zurich and Carnegie Mellon University Talk at DAC 2019. Technical Paper link » read more

Power, Reliability And Security In Packaging


Semiconductor Engineering sat down to discuss advanced packaging with Ajay Lalwani, vice president of global manufacturing operations at eSilicon; Vic Kulkarni, vice president and chief strategist in the office of the CTO at ANSYS; Calvin Cheung, vice president of engineering at ASE; Walter Ng, vice president of business management at UMC; and Tien Shiah, senior manager for memory at Samsun... » read more

HBM2 Vs. GDDR6: Tradeoffs In DRAM


Semiconductor Engineering sat down to talk about new DRAM options and considerations with Frank Ferro, senior director of product management at Rambus; Marc Greenberg, group director for product marketing at Cadence; Graham Allan, senior product marketing manager for DDR PHYs at Synopsys; and Tien Shiah, senior manager for memory marketing at Samsung Electronics. What follows are excerpts of th... » read more

DAC 2019: Day 2


Day two of DAC started off with a highly anticipated keynote given by Thomas Dolby, musician, producer and innovator. Dolby has always been fascinated with the convergence of music and technology. He opened with a fanfare, balancing a broom on his finger to demonstrate the type of control we have as human beings. He went on to expand the analogy to the hive mind of groups of individuals,... » read more

New Memory Options


Carlos Macián, eSilicon’s senior director of AI strategy and products, talks about how to utilize memory differently and reduce the movement of data in AI chips, and what impact that has on power and performance. https://youtu.be/wItp6wReVts » read more

In-Memory Vs. Near-Memory Computing


New memory-centric chip technologies are emerging that promise to solve the bandwidth bottleneck issues in today’s systems. The idea behind these technologies is to bring the memory closer to the processing tasks to speed up the system. This concept isn’t new, and previous versions of the technology fell short. Moreover, it’s unclear if the new approaches will live up to their billi... » read more
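
One way to make the bandwidth-bottleneck argument concrete is a back-of-envelope energy model for a simple kernel. The per-access energies below are illustrative order-of-magnitude assumptions, not figures from the article (off-chip DRAM accesses are commonly estimated at hundreds of times the energy of an arithmetic operation), so substitute numbers for a specific process and memory system.

```python
# Back-of-envelope sketch of why data movement dominates a simple kernel.
# The per-operation energies are illustrative order-of-magnitude assumptions,
# not measured values; replace them with figures for your own technology.
ENERGY_PJ = {
    "fp32_add": 1.0,           # one 32-bit floating-point operation
    "sram_access_32b": 5.0,    # one 32-bit read from a small on-chip SRAM
    "dram_access_32b": 640.0,  # one 32-bit read from off-chip DRAM
}

def dot_product_energy(n, operand_source):
    """Energy (pJ) for an n-element dot product that fetches both operands
    from `operand_source` and performs roughly 2n arithmetic operations."""
    fetch = 2 * n * ENERGY_PJ[operand_source]
    compute = 2 * n * ENERGY_PJ["fp32_add"]
    return fetch + compute, fetch / (fetch + compute)

for src in ("sram_access_32b", "dram_access_32b"):
    total_pj, movement_share = dot_product_energy(1_000_000, src)
    print(f"{src:16s}: {total_pj / 1e6:8.1f} uJ total, "
          f"{movement_share:.0%} spent moving data")
```

Under these assumptions nearly all of the energy for the DRAM case goes to moving operands rather than computing on them, which is the gap that in-memory and near-memory approaches aim to close.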

In-Memory Computing Challenges Come Into Focus


For the last several decades, gains in computing performance have come from processing larger volumes of data more quickly and with greater precision. Memory and storage space are measured in gigabytes and terabytes now, not kilobytes and megabytes. Processors operate on 64-bit rather than 8-bit chunks of data. And yet the semiconductor industry’s ability to create and collect high quality ... » read more

Unsticking Moore’s Law


Sanjay Natarajan, corporate vice president at Applied Materials with responsibility for transistor, interconnect and memory solutions, sat down with Semiconductor Engineering to talk about variation, Moore's Law, the impact of new materials such as cobalt, and different memory architectures and approaches. What follows are excerpts of that conversation. SE: Reliability is becoming more of an... » read more

Using Memory Differently


Chip architects are beginning to rewrite the rules on how to choose, configure and use different types of memory, particularly for AI chips and some advanced SoCs. Chipmakers now have a number of options and tradeoffs to consider when choosing memories, based on factors such as the application and the characteristics of the memory workload, because different memory types work better tha... » read more
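
As a purely hypothetical illustration of that kind of workload-driven selection (not a method described in the article), the sketch below ranks a few common memory types against coarse workload requirements; the relative attribute values, thresholds and weights are assumptions chosen only to show the shape of the tradeoff.

```python
# Hypothetical sketch: picking a memory type from coarse workload needs.
# The attribute values and scoring are illustrative assumptions, not data
# from the article; real selection also weighs cost, packaging and roadmap.
MEMORY_TYPES = {
    # relative, unitless figures of merit (higher = more of that attribute)
    "on-chip SRAM": dict(bandwidth=10, capacity=1,  power=1, cost=10),
    "HBM2":         dict(bandwidth=8,  capacity=4,  power=2, cost=8),
    "GDDR6":        dict(bandwidth=6,  capacity=4,  power=4, cost=4),
    "LPDDR4":       dict(bandwidth=2,  capacity=6,  power=2, cost=2),
    "DDR4":         dict(bandwidth=2,  capacity=10, power=3, cost=1),
}

def rank_memories(need_bandwidth, need_capacity, power_weight=1.0, cost_weight=1.0):
    """Return memory types that meet the bandwidth and capacity needs,
    ordered by a simple penalty that prefers lower power and cost."""
    candidates = []
    for name, attr in MEMORY_TYPES.items():
        if attr["bandwidth"] < need_bandwidth or attr["capacity"] < need_capacity:
            continue  # cannot meet the workload's basic requirements
        penalty = power_weight * attr["power"] + cost_weight * attr["cost"]
        candidates.append((penalty, name))
    return [name for _, name in sorted(candidates)]

# Example: a bandwidth-hungry inference accelerator vs. a capacity-driven,
# cost-sensitive workload.
print(rank_memories(need_bandwidth=6, need_capacity=3))                   # ['GDDR6', 'HBM2']
print(rank_memories(need_bandwidth=2, need_capacity=8, cost_weight=2.0))  # ['DDR4']
```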
