New Metrics For The Cloud

Data centers are beginning to adjust their definition of what makes one server better than another. Rather than comparing the benchmarked performance of general-purpose servers, they are adding a new level of granularity based on which kinds of chips work best for certain operations or applications. Those decisions increasingly include everything from the level of redundancy in compute operations, ...

Cloud 2.0

Corporate data centers are reluctant adopters of new technology. There is too much at stake to make quick changes, which accounts for a number of failed semiconductor startups over the past decade that had better ideas for more efficient processors, not to mention rapid consolidation in other areas. But as the amount of data increases, and the cost of processing that data decreases at a slower rate...

Do We Need A “Glue” Engineer?

Design and verification are so complex today, and so fraught with market risk, that they keep managers awake and sweating at night. So much of design is carved up into IP blocks and subsystems, each with its own verification issues and methodologies. To manage the complexity, the design is partitioned, and so too are the teams. But as software verification becomes more crucial to system-design success...