Time to mend the EE/CS divide

Otherwise, semiconductor customers will be tempted to build their own systems and bypass chipmakers.

There’s been a lot of news over the last few weeks about the future of our industry, and although these news flashes may seem unrelated, they are quite correlated.

First, there was the disturbing news in Mark LaPedus’ article here on Semiconductor Engineering, “EUV Suffers New Setback,” portending a rough ride for the commercialization of EUV lithography. EUV will be needed to create transistors that are smaller than currently possible and, if successful, will perhaps enable Moore’s Law to continue for a few more years. The news was so bad that The Register picked up Mark’s story and wrote an article about it with the quite cheeky title, “First ‘production-ready’ EUV scanner laser-fries its guts at TSMC. Intel seeks alternative tech.”

Then there was the SEMI ISS conference in early January, where Handel Jones updated us on his cost-per-gate analysis. Paul McLellan covered this thoroughly in his SemiWiki article, “Handel Jones Predicts Process Roadmap Slips.” Here’s one chart from SemiWiki and IBS to scare you:

[Chart: IBS cost-per-gate analysis by process node, via SemiWiki]

For the first time ever, the newer semiconductor processes are more expensive than their predecessors. I wonder what this will do to the adoption rates of these technologies.

And for the pièce de résistance, legendary VC and Google board member John Doerr confirmed at the ISSCC conference that Google is working on its own proprietary silicon, rather than purchasing commercial server chips for its custom servers.

What’s going on here?

These three news stories are another indication that the “free lunch” of ever cheaper, faster and more power-efficient chips, delivered simply by adopting the latest semiconductor manufacturing process, is over. We are going to have to pay more attention to what we put in and on our chips, and less to the manufacturing process itself.

What I mean is that further advances in chip cost, performance or power consumption will not be determined by the CMOS manufacturing process chosen. Rather, three other dimensions will drive success or failure:

  1. The chip hardware architecture: As time goes on, we are going to be forced to use the best processing core for the job within our SoCs. We will not have the luxury of throwing software loads at a sea of CPUs for convenience’s sake. We will evolve to architectures that take advantage of general-purpose GPU and DSP processing, in addition to traditional MPUs. This will complicate the on-chip scheduling and processing problem. How to tackle this? Think HSA Foundation.
  2. The software integration: Hardware and software engineers will no longer be able to exist in silos. Abstraction is great for time to market, but not so great for performance or power consumption. Apple has learned this, and Google has, too. Expect the EE/CS degree to become the EE&CS degree.
  3. The development tools: I’m talking primarily about software development tools and frameworks, not EDA tools. Using the best processing unit for the job requires us to write code that processes events in parallel, but humans can’t think in parallel. We need more innovation here than just pthreads and vectorizing compilers (see the sketch after this list).
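
To make that tooling gap concrete, here is a minimal sketch of what "just pthreads" looks like today: summing an array by hand-slicing it across worker threads in plain C. Everything in it (the worker function, thread count, array size) is illustrative only, not taken from the article; the point is how much partitioning and scheduling detail the programmer must write explicitly, with no help deciding whether a CPU, GPU or DSP should run the work.

```c
/* Minimal pthreads sketch (illustrative only): the programmer, not the
 * toolchain, slices the work, picks the thread count, and merges results.
 * Build with: cc -pthread parallel_sum.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N_ELEMENTS 1000000
#define N_THREADS  4

typedef struct {
    const double *data;   /* start of this thread's slice */
    size_t        count;  /* slice length */
    double        sum;    /* per-thread partial result */
} chunk_t;

/* Each worker sums only its own slice, so no locking is needed. */
static void *sum_chunk(void *arg)
{
    chunk_t *c = (chunk_t *)arg;
    double s = 0.0;
    for (size_t i = 0; i < c->count; i++)
        s += c->data[i];
    c->sum = s;
    return NULL;
}

int main(void)
{
    double *data = malloc(N_ELEMENTS * sizeof *data);
    for (size_t i = 0; i < N_ELEMENTS; i++)
        data[i] = 1.0;

    pthread_t threads[N_THREADS];
    chunk_t   chunks[N_THREADS];
    size_t    per = N_ELEMENTS / N_THREADS;

    /* Manual work partitioning: always mapped to CPU threads, because
     * nothing in this API knows about other processing units. */
    for (int t = 0; t < N_THREADS; t++) {
        chunks[t].data  = data + t * per;
        chunks[t].count = (t == N_THREADS - 1) ? N_ELEMENTS - t * per : per;
        chunks[t].sum   = 0.0;
        pthread_create(&threads[t], NULL, sum_chunk, &chunks[t]);
    }

    /* Manual join and reduction. */
    double total = 0.0;
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += chunks[t].sum;
    }

    printf("total = %f\n", total);
    free(data);
    return 0;
}
```

Nothing in that code knows whether a GPU or DSP on the same die would do the job better; that mapping decision, and the event-level parallelism described above, still fall entirely on the programmer, which is exactly the gap in today's development tools.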

What this means for us chip-heads is that we are going to have to become more knowledgeable about software, which is in essence the “customer” of our chips. And software developers are going to have to learn more about how the underlying hardware works. This is a big cultural change for people who have grown up thinking of themselves as either an EE or a CS. To be successful in the future, our design teams will have to be both. Otherwise, traditional semiconductor vendors’ customers will be tempted to design their own HW/SW systems, bypassing the semi vendor altogether.


