Exploring System Architectures For Data-Intensive Applications


The exponential growth of digital data is driven by several factors, including the burgeoning Internet of Things (IoT) and an increasing reliance on complex analytics extracted from extremely large data sets. Perhaps not surprisingly, IDC analysts see digital data doubling roughly every two years. This dramatic growth continues to challenge, and in some cases even outpace, industry cap... » read more

Which Memory Type Should You Use?


I continue to be besieged by statements that misuse memory “latency” and “bandwidth.” As I mentioned in my last blog, latency is how long the CPU must wait before the first data is available, while bandwidth is how fast additional data can be “streamed” after that first data point has arrived. Bandwidth becomes a bigger factor in performance when data is stor... » read more
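To make the distinction concrete, here is a minimal Python sketch (not from the blog itself) of the usual first-order model, total time = latency + size / bandwidth. The latency and bandwidth figures are hypothetical, chosen only to show that latency dominates small transfers while bandwidth dominates large, streamed ones.

```python
# Illustrative sketch: rough first-order transfer-time model.
# total_time = latency + bytes / bandwidth
# The figures below are hypothetical, DRAM-like numbers for illustration only.

def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Estimate the time to fetch size_bytes once a request is issued."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

latency = 100e-9    # 100 ns until the first data arrives (assumed)
bandwidth = 20e9    # 20 GB/s streaming rate after that (assumed)

for size in (64, 4 * 1024, 1024 * 1024):   # cache line, page, 1 MB block
    t = transfer_time(size, latency, bandwidth)
    print(f"{size:>8} bytes: {t * 1e6:8.3f} us "
          f"(latency share {latency / t:5.1%})")
```

Running this shows the latency term accounting for nearly all of a 64-byte fetch but a fraction of a percent of a 1 MB streamed transfer, which is why the two terms should not be used interchangeably.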

Inside The 5G Smartphone


Amid a slowdown in the cell phone business, the market is heating up for perhaps the next big thing in wireless—5th generation mobile networks, or 5G. In fact, major carriers, chipmakers and telecom equipment vendors are all rushing to get a piece of the action in 5G, the follow-on to the current wireless standard known as 4G or Long Term Evolution (LTE). Intel, Samsung and Qualcom... » read more

The Memory And Storage Hierarchy


The memory and storage hierarchy is a useful way of thinking about computer systems, and the dizzying array of memory options available to the system designer. Many different parameters characterize the memory solution. Among them are latency (how long the CPU needs to wait before the first data is available) and bandwidth (how fast additional data can be “streamed” after the first data poi... » read more
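As a rough illustration of how those parameters vary across the hierarchy, the Python sketch below lists generic levels with order-of-magnitude latency and bandwidth figures. The numbers are approximate assumptions for illustration, not measurements or vendor specifications.

```python
# Illustrative sketch: a toy view of the memory/storage hierarchy.
# Latency and bandwidth values are rough orders of magnitude (assumed),
# and the level names are generic rather than tied to any product.

hierarchy = [
    # (level,        latency to first data,  streaming bandwidth)
    ("L1 cache",     1e-9,                   1000e9),   # ~1 ns,   ~1 TB/s
    ("L2/L3 cache",  10e-9,                  500e9),    # ~10 ns
    ("DRAM",         100e-9,                 25e9),     # ~100 ns, ~25 GB/s
    ("NVMe SSD",     100e-6,                 3e9),      # ~100 us, ~3 GB/s
    ("Hard disk",    10e-3,                  0.2e9),    # ~10 ms,  ~200 MB/s
]

print(f"{'Level':<12} {'Latency (ns)':>14} {'Bandwidth (GB/s)':>18}")
for level, lat, bw in hierarchy:
    print(f"{level:<12} {lat * 1e9:>14,.0f} {bw / 1e9:>18.1f}")
```

The point of the table is the spread: each step away from the CPU costs roughly one to three orders of magnitude in latency, while bandwidth falls off more gradually.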

Optimizing Analog For Power At Advanced Nodes


As any engineering manager will tell you, analog and digital engineers seem like they could be from different planets. While this has changed somewhat over time, analog is still something of a mystery to many in SoC design teams. Throw power management into the mix and things really get interesting. Improvements in analog/mixed-signal tools... » read more

Memory Architectures Undergo Changes


By Ed Sperling
Memory architectures are taking some new twists. Fueled by multicore and multiprocessor designs, as well as some speed bumps with existing technology, SoC makers are beginning to rethink how to architect, model and assemble memory to improve speed, lower power and reduce cost. What’s unusual about all of this is that it doesn’t rely on new technology, although there certai... » read more

Experts At The Table: Latency


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss latency with Chris Rowen, CTO at Tensilica; Andrew Caples, senior product manager for Nucleus in Mentor Graphics’ Embedded Software Division; Drew Wingard, CTO at Sonics; Larry Hudepohl, vice president of hardware engineering at MIPS; and Barry Pangrle, senior power methodology engineer at Nvidia. What follows are exce... » read more

The Increasing Challenge Of Reducing Latency


By Ed Sperling
When the first mainframe computers were introduced, the big challenge was to improve performance by decreasing the latency between spinning reels of tape and the processor, while also increasing the speed at which the processor could crunch ones and zeroes. Fast-forward more than six decades, and the two issues are now blurred and often confused. Latency is still a drag on per... » read more
