It depends on whether you can actually use the other cores. But even the definitions are vague.
By Frank Ferro
Two cores are better than one, right? It reminds me of those AT&T commercials where they ask the kids, “Who thinks two is better than one?” And of course the kids all yell, “Two!” In another version of the commercial they ask, “What’s better, doing two things at once or just one?” And again they all yell, “Two!” Well, this is a good summary of last week’s Multicore conference, where the conversation focused precisely on these questions: Are two cores (or multiple cores) better than one, and is doing two tasks (or multiple tasks) at once better than doing one? And if so, how much better?
To answer these questions, the conference brought together hardware and software companies to discuss the challenges of developing efficient multicore hardware and software. Before I get too far ahead of myself, I want to define (or try to define) what multicore means. During the closing panel discussion, I was surprised to learn that there is no Wikipedia entry for ‘Multicore.’ It just says, “May refer to: Multi-core computing.”
I was even more surprised at the debate that ensued after the opening question: “What does multicore (or many-core) mean?” There was agreement (at least) that it means more than one core, but that’s where the agreement ended. To some, multicore means homogeneous CPU cores; to others, multiple heterogeneous processor cores; and to still others, any mix of cores in the system. My working definition has been, for the most part, multiple heterogeneous processor cores, though I will admit I sometimes drift to the third definition: any system with ‘lots’ of cores. I will spare you the panel debate over the second question: “Define a core.”
There was agreement, however, that multicore is being “thrust on the masses.” Until now, companies chose whether to adopt multicore, but with Moore’s Law slowing, multiple processor cores are the only way to keep up with the performance demands of many consumer applications: the rate of increase in CPU clock speed has slowed to the point where frequency alone is no longer enough to stay on the performance curve. “The race to MHz has now become the race to core density,” said Tareq Bustami, vice president of product management at Freescale. According to Linley Gwennap of The Linley Group, about half of the smartphones produced next year will have dual-core processors, and the share of phones with quad-core processors will jump to about 40%.
So how do SoC designers harness the power of all these cores, and when does adding more cores reach the point of diminishing returns? Most existing software has been written for scalar processing, that is, doing one thing at a time in sequence. The challenge for programmers is porting this scalar code to a multicore environment where multiple CPU cores run concurrent tasks or act as a shared resource for any task. Add specialized heterogeneous processors such as a GPU or a DSP, and programming becomes even more complicated. OpenCL is a framework developed to help with the task of writing programs that execute across multiple heterogeneous cores such as GPUs and DSPs. Systems can also add a hypervisor software layer to abstract, or virtualize, the hardware from the software. This is all good progress, but there is still a lot of work to do on the software side to make the most efficient use of the hardware.
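To make the porting challenge concrete, here is a minimal sketch of my own (not an example from the conference) showing the same computation written scalar-style and then split across POSIX threads in C. The array size, thread count, and function names are all illustrative assumptions.

```c
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)   /* elements to process (assumed size)   */
#define NTHREADS 4           /* one worker per core (assumed count)  */

static float data[N];
static float partial[NTHREADS];

/* Scalar version: one core walks the whole array in sequence. */
static float sum_scalar(void) {
    float s = 0.0f;
    for (int i = 0; i < N; i++)
        s += data[i];
    return s;
}

/* Parallel version: each thread sums one contiguous slice and
 * writes its result to a private slot, avoiding shared-state races. */
static void *sum_slice(void *arg) {
    int t = (int)(long)arg;
    int chunk = N / NTHREADS;
    float s = 0.0f;
    for (int i = t * chunk; i < (t + 1) * chunk; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = 1.0f;

    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);

    /* Combining the partial results is a step the scalar code never needed. */
    float total = 0.0f;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }

    printf("scalar: %.0f  parallel: %.0f\n", sum_scalar(), total);
    return 0;
}
```

Even this toy case surfaces the issues the panel raised: the work must be partitioned, contention on shared data avoided, and partial results recombined. Targeting heterogeneous cores through a framework like OpenCL adds device discovery and kernel management on top of all that.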
In addition to software optimization, another unanimous conclusion of the panel discussion was that memory and the interconnect are also “challenges to be solved” for optimal multicore system performance. Having multiple cores will not do you any good if you can’t access data efficiently. In an attempt to solve the memory bandwidth problem, L2 and L3 cache sizes have been growing. Of course, this can be an expensive solution, because larger on-chip caches add to the die cost.
Although not yet mainstream, Wide I/O is specifically designed to address the memory bandwidth issue, and 3D memory is on the horizon. Even with an efficient memory design, moving data between multiple cores and memory requires an optimized interconnect. Flexible topologies that match the processor configuration, efficient protocols that maintain performance, virtual channels that support system concurrency, and quality-of-service mechanisms that manage competing data flows are all must-have features for maximizing multicore system performance.
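As a rough illustration of why quality-of-service matters on a shared interconnect, here is a toy C sketch of a strict-priority arbiter choosing among virtual channels. This is my own simplification, not how any particular fabric works; the channel count, priority values, and traffic labels are all assumptions.

```c
#include <stdio.h>

#define NUM_VC 4  /* virtual channels sharing one physical link (assumed) */

/* One pending request per virtual channel: a valid flag plus a
 * QoS priority, where higher wins. */
typedef struct {
    int valid;
    int priority;
} vc_request;

/* Strict-priority arbiter: grant the highest-priority valid channel.
 * Ties fall back to the lowest channel index. */
static int arbitrate(const vc_request req[NUM_VC]) {
    int grant = -1;
    for (int vc = 0; vc < NUM_VC; vc++) {
        if (req[vc].valid &&
            (grant < 0 || req[vc].priority > req[grant].priority))
            grant = vc;
    }
    return grant;  /* -1 means the link idles this cycle */
}

int main(void) {
    /* Bulk CPU traffic on VC0, latency-critical display traffic on VC2. */
    vc_request req[NUM_VC] = { {1, 1}, {0, 0}, {1, 3}, {0, 0} };
    printf("granted VC%d\n", arbitrate(req));  /* VC2: display wins */
    return 0;
}
```

A real on-chip fabric layers far more on top of this, but the sketch shows the basic idea: with QoS enforced at the arbitration point, bulk traffic cannot crowd out a latency-critical stream just by arriving first.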
So are two cores (or more) better than one? For most consumer applications, developers no longer really have a choice, because all of the new processors have multiple CPU cores. And beyond the CPUs, it is hard to imagine these multi-function applications not having specialized coprocessors. Dinyar Dastoor, vice president of product management at Wind River, offered a good analogy by asking whether having a third eye in the back of your head is better. The answer is yes only if you can process the additional information. The consensus was clear: without improvements in multicore software, a memory architecture built for increased bandwidth, and an efficient interconnect, two cores may not always be better than one.
—Frank Ferro is the Director of Product Marketing at Sonics