Look Who’s Crosstalking

As more university programs combine electrical engineering with computer science, interesting things will happen.


One of the common complaints among hardware engineers is that software engineers don’t understand how to really optimize their code to take advantage of the hardware. And software engineers complain that hardware engineers live in the past, hardwiring everything that can be done better in software.

Those debates will continue as long as there are distinct groupings for hardware and software engineers, but the newest crop of university graduates doesn't see itself as one or the other. EEs are now required to take classes in computer science, and vice versa. The result is a much deeper level of crossover than we've witnessed in the past.

This is good news, for sure. Since the millennium, EDA vendors and their customers have been preaching the need for systems engineers who understand more than just one piece of the puzzle. Schools responded with combined programs, and we’re about to see just how effectively these new graduates can put these combined skills to work.

Less encouraging is that new EE graduates have been hard to find. There aren't enough of them, because the hot areas for technology are in the software realm—Google, Facebook, and a long list of social media startups willing to pay big bucks for college graduates. But given a little time for those careers to sort themselves out, it will become obvious that making software run faster and with less power requires a much deeper understanding of how it runs on the hardware.

Perhaps even more interesting is the possibility that hardware will respond with changes that can optimize software, opening up new opportunities for improving both. The focus in both areas is better functionality and longer battery life or lower energy bills. But there also is an opportunity to reduce the cost of the overall design through optimization. What happens, for example, if a small piece of the code is eliminated? Will the design require less energy, and maybe less silicon, to run it? And if one feature of the code executes quickly while another doesn't respond as fast, can improvements be made on that side?

This kind of thinking is necessary for overall system progress. As the price of developing advanced, highly complex SoCs climbs, the savings needed to keep costs in check will have to come from everywhere. Understanding what's necessary and what isn't, and putting more very smart people on the task, certainly can't hurt—particularly when they're all speaking the same language.
