New Business Model: Flexible Silos

While silos continue to be the best way to eke out efficiency, they don’t always work for semiconductor design.

Operational silos within organizations have a long history of streamlining processes and maximizing efficiency. In fact, that approach has made enterprise resource planning applications a must-have for most companies, and cemented the fortunes of giants such as SAP and Oracle, as well as the large consulting firms that recommend them.

But those kinds of delineations don’t work so well for chipmakers—or at least not in all departments. The boundaries change too quickly, or in unexpected ways, to allow firm corporate structures to be established. An organization producing an advanced SoC needs a different structure, for example, than one building memory or MEMS chips. And frequently companies need different operational structures within the same organization, depending upon how a chip will be used, which markets they are targeting, how quickly it needs to be delivered, and at what price point.

To compound the issue, power and software run horizontally throughout the development process, while verification often runs vertically at specific intervals—and all of them vary depending on the chip, the process node, the intended customer, and external requirements such as reliability for a given amount of time. Automobile makers may require chips that last 20 years, while consumer electronics may have a two-year lifespan.

“The top challenge is thinking about the process differently,” said Satish Bagalkotkar, president and CEO of Synapse Design. “It used to be fairly straightforward, where you had design libraries and CAD tools, and with enough manpower it would take you three years to turn out a new chip. Now you have a hodgepodge of libraries, tools, and IP, and you have to get a product out in six to nine months. There are big risk factors with both tools and technology, so you list the categories and design your plan around them.”

He’s not alone in seeing that problem. It’s ubiquitous across the supply chain, which is a direct reflection of the complexity inside of SoCs.

“We live that story, both internally and externally,” said Mike Gianfagna, vice president of marketing at eSilicon. “We need to make sure the subject matter expertise is robust enough, but you also have to collaborate with other groups. In some cases that’s internal procedures. Externally, you have foundries, test vendors, and substrate vendors all by themselves, but everything also has to be working in harmony. I compare it to making a movie, where you have lots of experts that act as one corporation, then they get reconfigured for the next project. And we have to do that every day while insulating the customer by limiting the number of contact points.”

Some parts of the supply chain are more nimble than others, however. A multi-billion-dollar foundry, for example, needs to be extremely focused on cost reduction and process, while the companies working within the parameters set by those foundries need to act as a buffer that provides flexibility.

“Different members of the supply chain are important for different parts of the chip,” said Gianfagna. “Our value-add is automation and smart people who can smooth out the rough spots.”

Mileage may vary
Two factors need to be considered with any silos in chipmaking. The first is a clear understanding of what a chip will be used for, which in effect shifts the focus from operational silos to functional ones. That depends on the goal of a particular design project and the target market.

“There are synergies between what we do in verification, in performance analysis, and in power analysis, and those synergies are not well exploited,” said Drew Wingard, chief technology officer at Sonics. “So in many ways, the people doing the power analysis are pretty separate from the performance analysis and from the verification. They don’t share much, so they end up duplicating each other’s work. And even worse, they sometimes end up with what look to be conflicting results, and it takes a long time to figure out that they actually weren’t running compatible inputs. This was a ‘garbage in, garbage out’ problem. They’re really not conflicting once we dig down.”

Wingard said what’s often missing is a good description of what the end device is supposed to be able to do in the user’s hands. “The description of the use cases, the operating scenarios of the design, helps focus the verification effort. It’s also, of course, essentially what is used when we’re doing performance, and it’s what’s used to a large degree when we’re doing power analysis. If you take a look at the use cases that you need to look at from a performance perspective, they’re not completely overlapping the power ones, because the performance guy doesn’t care that much about the idle mode. How long does it take for the thing to wake up so the guy can answer the telephone call? That’s not a very important phase to the performance analysis guy because he knows it’s going to be fast enough. But how much power is it going to use when the phone is just sitting in your pocket? That’s incredibly important to the power analysis guy.”
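
As a rough sketch of what such a shared description might look like, the snippet below defines a hypothetical OperatingScenario record that power, performance, and verification teams all pull from a single catalog instead of maintaining separate copies that drift apart. The field names, values, and the scenarios_for() helper are illustrative assumptions, not any company’s actual methodology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingScenario:
    """One use case, described once and shared by every analysis team."""
    name: str
    traffic_profile: str    # e.g. "idle", "burst", "streaming"
    clock_mhz: int
    relevant_to: frozenset  # which analyses care: "power", "performance", "verification"

# A single shared catalog instead of per-team copies that drift apart.
SCENARIOS = [
    OperatingScenario("pocket_idle", "idle", 50, frozenset({"power"})),
    OperatingScenario("call_wakeup", "burst", 800, frozenset({"power", "performance"})),
    OperatingScenario("video_playback", "streaming", 1200,
                      frozenset({"power", "performance", "verification"})),
]

def scenarios_for(team: str) -> list:
    """Each team filters the same source rather than inventing its own inputs."""
    return [s for s in SCENARIOS if team in s.relevant_to]

print([s.name for s in scenarios_for("power")])        # includes pocket_idle
print([s.name for s in scenarios_for("performance")])  # skips pocket_idle
```

Because both teams start from the same records, a mismatch between power and performance results points to a real disagreement rather than to incompatible inputs.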

A second factor involves a far less glamorous side of engineering—setting up a highly detailed information exchange as a process so that different groups within an organization—or between organizations—have an understanding of what’s been done before and how to work with it.

“If you can document everything internally, that’s great,” said Kurt Shuler, vice president of marketing at Arteris. “But most people don’t do that, and that’s one of the big problems if you don’t have silos—coordination and handoff. What you really need is a handshake that proves you’ve got all the information available for users of hardware, software and IP. In effect, this is all the metadata of previous test results, and that needs to be in a database. And it has to be findable within that database.”

Shuler noted that for safety requirements this is fairly standard, particularly in mil/aero and automotive, because the requirements are well defined and suppliers have to be structured to meet those standards. For mobility, it’s much less structured. In fact, it’s possible to have silos form within organizational groups to get chips designed quickly, and then break apart when the chip is completed.
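
A minimal illustration of that kind of findable handoff store, assuming a made-up SQLite schema, block names, and tools rather than any flow described here, might look like this:

```python
import sqlite3

# In-memory database for illustration; a real flow would use a shared, persistent store.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE test_metadata (
        block     TEXT,  -- IP block or subsystem under test
        test_name TEXT,
        tool      TEXT,
        result    TEXT,  -- pass / fail / waived
        run_date  TEXT
    )
""")

db.executemany(
    "INSERT INTO test_metadata VALUES (?, ?, ?, ?, ?)",
    [
        ("interconnect", "latency_regression", "sim_tool", "pass", "2015-03-01"),
        ("interconnect", "power_gating_check", "power_tool", "fail", "2015-03-02"),
    ],
)

# The handoff question a downstream team needs answered: what was run, and what failed?
for test_name, result in db.execute(
    "SELECT test_name, result FROM test_metadata WHERE block = ? ORDER BY run_date",
    ("interconnect",),
):
    print(test_name, result)
```

The point is less the schema than the handshake: a downstream group can verify that required results exist before accepting the handoff, instead of relying on tribal knowledge.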

Efficiency, efficiency, efficiency
What makes silos so important is that they sit at the intersection of technology and business, which is often where most problems occur. For example, inside corporate data centers the separation between hardware and software purchasing on one side and facilities on the other led to runaway utility bills, but it took years to correct the problem because the cost centers were located in different silos. Cheap servers fit neatly into the CIO’s budget constraints, while facilities managers focused on buying electricity in bulk and cheaper cooling architectures rather than working together to reduce the number of servers—an area well beyond their expertise.

“The question is how quickly you can get the job done versus how much cost you can take out of the equation,” said Taher Madraswala, president of Open-Silicon. “That’s particularly difficult with performance and power. There are so many possible combinations you need to experiment with that it raises all sorts of questions about what is the most efficient approach and how to deal with different silos.”

At least part of the problem involves redundancy, which is where silos can squeeze out costs—again, provided there is communication between them. That applies to groups within companies, as well as to chip designs themselves and to how chipmakers and design companies are set up to address different markets and applications of technology.

“We believe there is about an 80% overlap if you look at the use cases in these operating scenarios—80% of them are normally useful for both purposes and there’s about 20% that’s custom,” Wingard said. “But if you can find a way of capturing those use cases in a consistent form where you can use them in all the contexts, then you get rid of the ‘garbage in’ problem. Different teams can compare the results. And it’s a whole lot easier for the chip architects because they don’t have to do this multiple times and try to argue with themselves or with anybody else about whether these were the right cases to run. There are just a whole lot of benefits there. Technologies like SystemVerilog, and more abstract things like UVM – people have pointed out the synergies between how we handle transaction modeling in UVM versus how it’s handled in OSCI TLM-2.0. By taking advantage of some of those things, it looks like there is an opportunity to try to build a common source, if you will, that different people could use.”
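
To make the overlap arithmetic concrete, here is a toy calculation with hypothetical use-case names; the proportions are made up and smaller than the 80/20 split described above, but the mechanism is the same: when both sets come from one catalog, the shared cases are defined once rather than twice.

```python
# Toy illustration of the shared-versus-custom split between two teams' use cases.
power_cases = {"pocket_idle", "call_wakeup", "video_playback", "camera_burst"}
perf_cases = {"call_wakeup", "video_playback", "camera_burst", "app_launch"}

shared = power_cases & perf_cases  # useful to both teams
union = power_cases | perf_cases   # everything either team runs

print("shared:", sorted(shared))
print(f"overlap: {len(shared) / len(union):.0%} of all use cases")
```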


