What’s Missing From Machine Learning

Machine learning is everywhere. It's being used to optimize complex chips, balance power and performance inside data centers, program robots, and keep expensive electronics updated and operating. What's less obvious, though, is that there are no commercially available tools to validate, verify, and debug these systems once machines evolve beyond the final specification. The expectation is th...

System Bits: June 21

Faster-running parallel programs, one-tenth the code
MIT researchers noted that computer chips have stopped getting faster and that, for the past 10 years, performance improvements have come from adding more cores. In theory, they said, a program on a 64-core machine would run 64 times as fast as it would on a single-core machine, but it rarely works out that way. Most computer programs...
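As a rough illustration of why a 64-core machine rarely delivers a 64x speedup, here is a minimal sketch based on Amdahl's law; the law itself is not named in the article, and the parallel fractions used below are hypothetical.

```cpp
// Minimal sketch: Amdahl's law says speedup is limited by the serial fraction
// of a program, which is one common reason 64 cores rarely mean 64x.
#include <cstdio>

// Speedup on n cores for a program whose parallelizable fraction is p.
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const int cores = 64;
    for (double p : {0.50, 0.90, 0.99}) {
        std::printf("parallel fraction %.2f -> about %.1fx on %d cores\n",
                    p, amdahl_speedup(p, cores), cores);
    }
    // Even a program that is 99% parallel tops out near 39x on 64 cores.
    return 0;
}
```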

System Bits: April 1

“Lock-free” vs. “wait-free” parallel algorithms
Since computer chips have stopped getting faster, regular performance improvements now come from chipmakers adding more cores to their chips rather than increasing their clock speed. In theory, doubling the number of cores doubles the chip’s efficiency, but splitting up computations so that they run efficiently in parall...
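For readers unfamiliar with the terms, a brief sketch of the distinction: a lock-free algorithm guarantees that some thread always makes progress, while a wait-free algorithm bounds the work of every thread. The compare-and-swap counter below is a standard lock-free example; it is illustrative only and not drawn from the research described in the article.

```cpp
// Lock-free counter using compare-and-swap (CAS). Lock-free: the system as a
// whole always makes progress, but any single thread may retry indefinitely.
// A wait-free version would bound the number of steps for every thread.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<int> counter{0};

void add_n(int times) {
    for (int i = 0; i < times; ++i) {
        int expected = counter.load();
        // Retry until no other thread changed the value between load and store.
        while (!counter.compare_exchange_weak(expected, expected + 1)) {
            // On failure, `expected` is refreshed with the current value.
        }
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) workers.emplace_back(add_n, 100000);
    for (auto& w : workers) w.join();
    std::printf("counter = %d\n", counter.load());  // prints 400000
    return 0;
}
```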

Experts At The Table: Multi-Core And Many-Core

By Ed Sperling
Low-Power Engineering sat down with Naveed Sherwani, CEO of Open-Silicon; Amit Rohatgi, principal mobile architect at MIPS; Grant Martin, chief scientist at Tensilica; Bill Neifert, CTO at Carbon Design Systems; and Kevin McDermott, director of market development for ARM’s System Design Division. What follows are excerpts of that conversation.

LPE: How does cloud computing...