FPGA-Proven RISC-V System With Hardware-Accelerated Task Scheduling


A technical paper titled “Enabling HW-based Task Scheduling in Large Multicore Architectures” was published by researchers at Barcelona Supercomputing Center, University of Campinas, University of Sao Paulo, and Arteris Inc. Abstract: "Dynamic Task Scheduling is an enticing programming model aiming to ease the development of parallel programs with intrinsically irregular or data-dependent...
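To make the programming model concrete, here is a minimal sketch of data-dependent tasking in C++ using std::async and futures. This is an assumed illustration of the general model, not the hardware scheduler or runtime API described in the paper: a task that consumes another task's output simply cannot start computing until the producing task finishes.

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Task A produces a block of data.
    std::future<std::vector<int>> a = std::async(std::launch::async, [] {
        std::vector<int> v(1000);
        std::iota(v.begin(), v.end(), 1);   // 1, 2, ..., 1000
        return v;
    });

    // Task B is data-dependent on A: it blocks on fa.get() until A's
    // result is ready -- exactly the dependency a dynamic task
    // scheduler tracks at runtime.
    std::future<long long> b = std::async(std::launch::async,
        [fa = std::move(a)]() mutable {
            std::vector<int> v = fa.get();
            return std::accumulate(v.begin(), v.end(), 0LL);
        });

    std::cout << "sum = " << b.get() << "\n";   // prints sum = 500500
    return 0;
}

A software runtime discovers such dependencies as tasks are submitted and dispatches ready tasks to idle cores; the paper explores moving that bookkeeping into hardware.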

Have Processor Counts Stalled?


Survey data suggests that additional microprocessor cores are not being added to SoCs, but you have to dig into the numbers to find out what is really going on. The reasons are complicated. They include everything from software programming models to market shifts and new use cases. So while the survey numbers appear flat, market and technology dynamics could have a big impact in resh...

Kahn Process Network: Parallel Programming Without Races And Non-Determinism


Modern personal computing devices feature multiple cores. This is true not only for desktops, laptops, tablets, and smartphones, but also for small embedded devices like the Raspberry Pi. To exploit the computational power of those platforms, application programmers are forced to write their code in a parallel way. Most often, they use the threading approach. This means multiple parts o...
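As a concrete illustration of the model the article builds toward, here is a minimal Kahn-process-network sketch in C++; the Channel class and process structure are assumptions made for illustration, not code from the article. Processes run as threads, communicate only through FIFO channels, and reads block until data arrives, which is what makes the network's output deterministic regardless of how the threads are scheduled.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Unbounded FIFO channel with a blocking read, the communication
// primitive of a Kahn process network.
template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void write(T v) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(v));
        }
        cv_.notify_one();
    }
    T read() {  // blocks until a token is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

int main() {
    Channel<int> c1, c2;
    // Producer process: emits 1..5 into c1.
    std::thread producer([&] {
        for (int i = 1; i <= 5; ++i) c1.write(i);
    });
    // Transformer process: squares each token from c1 into c2.
    std::thread square([&] {
        for (int i = 0; i < 5; ++i) {
            int x = c1.read();
            c2.write(x * x);
        }
    });
    // Consumer: the output is always 1 4 9 16 25, no matter how the
    // threads interleave -- no races, no non-determinism.
    for (int i = 0; i < 5; ++i) std::cout << c2.read() << ' ';
    std::cout << '\n';
    producer.join();
    square.join();
    return 0;
}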

System Bits: Jan. 31


Optimizing code
Code explicitly written to take advantage of parallel computing usually loses the benefit of compilers’ optimization strategies. To address this, MIT Computer Science and Artificial Intelligence Laboratory researchers have devised a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution. Charles E. Lei...
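The problem the researchers are attacking can be seen in a small sketch; the names here (runtime_parallel_for, body) are invented for illustration and are not the compiler's actual interface. Once a parallel loop has been lowered to a runtime call with an opaque callback, a serial optimizer can no longer hoist the loop-invariant product scale * scale out of the body.

#include <cstdio>

// In source form, a compiler that still sees the loop could hoist the
// invariant product:
//     parallel_for (int i = 0; i < n; ++i)
//         out[i] = in[i] * scale * scale;

struct Args {
    const double* in;
    double* out;
    double scale;
};

// After early lowering, the body hides behind a function pointer, so
// scale * scale is recomputed on every iteration.
static void body(int i, void* p) {
    Args* a = static_cast<Args*>(p);
    a->out[i] = a->in[i] * a->scale * a->scale;
}

// Stand-in runtime; a real one would spread iterations across cores.
static void runtime_parallel_for(int n, void (*f)(int, void*), void* p) {
    for (int i = 0; i < n; ++i) f(i, p);
}

int main() {
    const double in[4] = {1, 2, 3, 4};
    double out[4];
    Args a{in, out, 2.0};
    runtime_parallel_for(4, body, &a);
    for (double v : out) std::printf("%g ", v);  // prints 4 8 12 16
    std::printf("\n");
    return 0;
}

Optimizing before the lowering step, as the excerpt describes, lets the compiler treat the parallel loop like an ordinary loop first.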

What’s Missing From Machine Learning


Machine learning is everywhere. It's being used to optimize complex chips, balance power and performance inside data centers, program robots, and keep expensive electronics updated and operating. What's less obvious, though, is that there are no commercially available tools to validate, verify, and debug these systems once machines evolve beyond the final specification. The expectation is th...

System Bits: June 21


Faster-running parallel programs, one-tenth the code
MIT researchers noted that computer chips have stopped getting faster and that for the past 10 years, performance improvements have come from the addition of cores. In theory, they said, a program on a 64-core machine would be 64 times as fast as it would be on a single-core machine, but it rarely works out that way. Most computer programs...
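The shortfall follows directly from Amdahl's law, a standard result rather than anything specific to this work: if a fraction p of a program's work can be parallelized, the speedup on n cores is S(n) = 1 / ((1 - p) + p/n). Even a program that is 95% parallel (p = 0.95) reaches only S(64) = 1 / (0.05 + 0.95/64) ≈ 15.4x on 64 cores, less than a quarter of the ideal, and communication costs and cache behavior typically push the real number lower still.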

System Bits: April 1


“Lock-free” vs. “wait-free” parallel algorithms
Since computer chips have stopped getting faster, regular performance improvements now come from chipmakers adding more cores to their chips rather than increasing their clock speed. In theory, doubling the number of cores doubles the chip’s efficiency, but splitting up computations so that they run efficiently in parall...
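To make the distinction concrete, here is a minimal lock-free counter in C++ (an assumed illustration, not code from the article). The compare-and-swap loop guarantees that some thread always makes progress, which is the lock-free property; a particular thread can still lose the race and retry indefinitely, which is why lock-free is weaker than wait-free.

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> counter{0};

// Lock-free increment: if the CAS fails, another thread must have
// succeeded in the meantime, so the system as a whole makes progress.
void increment_lock_free() {
    int cur = counter.load();
    while (!counter.compare_exchange_weak(cur, cur + 1)) {
        // cur has been reloaded with the current value; retry.
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([] {
            for (int i = 0; i < 10000; ++i) increment_lock_free();
        });
    for (auto& t : threads) t.join();
    std::cout << counter.load() << '\n';  // always prints 40000
    return 0;
}

A wait-free version must bound every individual thread's steps; on many machines a single atomic fetch_add satisfies that for a counter, but general wait-free algorithms are considerably harder to construct.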

Experts At The Table: Multi-Core And Many-Core


By Ed Sperling
Low-Power Engineering sat down with Naveed Sherwani, CEO of Open-Silicon; Amit Rohatgi, principal mobile architect at MIPS; Grant Martin, chief scientist at Tensilica; Bill Neifert, CTO at Carbon Design Systems; and Kevin McDermott, director of market development for ARM’s System Design Division. What follows are excerpts of that conversation.

LPE: How does cloud computing...