Experts at the table, part 2: The impact of security on architectures, what’s missing in software, and why EDA business models are so rigid.
Semiconductor Engineering sat down to discuss the increasing reliance on architectural choices for improvements in power, performance and area, with Sundari Mitra, CEO of NetSpeed Systems; Charlie Janac, chairman and CEO of Arteris; Simon Davidmann CEO of Imperas; John Koeter, vice president of marketing for IP and prototyping at Synopsys; and Chris Rowen, a consultant at Cadence. What follows are excerpts of that conversation. To view part one, click here.
SE: Along with the discussion about scaling and architectures, there is a concurrent discussion about how to make everything secure. Where are we with security?
Janac: Security is really complicated. You need a hardware root of trust. But you also need a lot of other things like security firewalls, which enforce that only certain kinds of data travel on a certain trace in the interconnect. You need differential power analysis resistance. You need key management. At the higher levels of the security stack you need digital rights management. And you also have to keep in mind that you may have guys who wrote their own instruction set, as well as guys who want midrange security. They don’t want the teenage hackers in their system, but they don’t need to be impervious to Chinese cyberwarfare or the NSA. There are different levels for security, depending on what you’re trying to do. You really need a lot of scale to be a security company because there are a lot of different areas.
Rowen: Security is fairly well understood. It’s hardware root of trust, it’s encryption, it’s isolated operating modes and physically unclonable functions. It’s protected key storage. Those are fairly well understood in hardware. But security is governed by the weakest link. In many cases, the weakest link is a little piece of software that wasn’t built properly. There is a hole in security methodology and security verification. If you put in all the right ingredients and you add the right software, how do you know how secure it will be? It’s a big problem, and it’s one that will persist for some time. There are different kinds of requirements, and people make different levels of investment.
Koeter: We acquired a security company called Elliptic Technologies. We learned three things. First, everyone is worried about security. Second, nobody knows what to do about it. And third, everyone wants to minimize the cost of adding security. It’s a very interesting set of discussions.
Janac: People who have been hacked are willing to pay much more for security than those who have not. So the CEO of a big retailer that has been hacked would be more willing to increase their investment now than before they were hacked. And people are willing to pay for security if they can compute what the losses would cost them, as with digital rights management, so they can show it’s cost-effective.
Davidmann: To deal with security you have to change the architecture. The margins are changing all the time and security has become very important.
Janac: But it’s still hard to make money with security. You need scale, and not everyone is going to pay for it. So it’s almost like it has to be an add-on for a bigger company rather than a pure-play effort.
Mitra: One of the key issues here is how you verify whether it is secure or not.
Rowen: You probably want to know that it works.
Mitra: Exactly. How do you prove it’s actually secure? If the only way to do that is to release the timing patterns to see if someone can break into it, that’s not a good solution. There has to be a better way.
Janac: At a system level that’s true. But unit-level security testing is not impossible. If you say only that type of data goes on that line in the interconnect, then you can test it.
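Janac’s point about unit-level testing of interconnect firewalls can be illustrated with a small sketch. The rule table, initiator names, and access types below are purely illustrative, not any vendor’s interface; the idea is simply that a firewall rule of the form “only this type of data goes on that line” is enumerable, so it can be checked exhaustively at the unit level.

```python
# Sketch of unit-level checking of an interconnect security firewall.
# Each rule says which kind of traffic is permitted on a given route;
# anything not explicitly permitted is denied. All names are illustrative.

RULES = {
    # (initiator, target): set of permitted access types
    ("cpu_secure", "key_store"): {"read", "write"},
    ("cpu_normal", "key_store"): set(),          # normal world: no access
    ("dma",        "dram"):      {"read", "write"},
}

def firewall_allows(initiator, target, access):
    """Return True only if the rule table explicitly permits the access."""
    return access in RULES.get((initiator, target), set())

# Unit-level tests: enumerate transactions and check each one against the rules.
assert firewall_allows("cpu_secure", "key_store", "read")
assert not firewall_allows("cpu_normal", "key_store", "read")   # blocked route
assert not firewall_allows("dma", "key_store", "write")         # no rule: denied
```

Because the routes and access types form a finite set, this kind of check is a natural target for formal property checking as well as simulation, which is what makes the hardware piece tractable even when system-level security is not.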
Mitra: For the hardware piece of that, yes, I agree.
Rowen: There are some very good formal methods that work in terms of accessibility and isolation. But it’s still that one piece of software that will cause problems.
Mitra: Yes, it’s the weakest link. But the problem is still how you identify the weakest link.
Davidmann: We’ve had several people take us in different directions on this. Years ago I was involved in fault simulation, where you would simulate a fault to see whether it could be detected. Today people are simulating in a controlled way so they can put the system into strange places to see if they can trip it up and find weaknesses.
Mitra: It’s random.
Davidmann: Yes, but it does find issues. In the verification area, we’re using constrained random and formal techniques. These are the early days of this approach.
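The constrained-random approach Davidmann describes can be sketched in a few lines: draw random stimulus under constraints that keep it legal, then check a property and collect the cases that trip the design up. The transaction format and the toy “design under test” below are assumptions for illustration only.

```python
import random

# Minimal sketch of constrained-random testing: random inputs drawn under
# constraints, plus a checker that records when the design is tripped up.
# The transaction fields and the toy DUT rule are illustrative, not real.

def constrained_txn(rng):
    """One random transaction, constrained to legal address/size combinations."""
    addr = rng.randrange(0, 0x1000, 4)    # word-aligned addresses only
    size = rng.choice([1, 2, 4, 8])       # legal burst sizes
    return addr, size

def dut_accepts(addr, size):
    """Toy 'design under test': rejects accesses that cross a 16-byte line."""
    return (addr % 16) + size <= 16

rng = random.Random(1234)                 # fixed seed for reproducible runs
failures = [t for t in (constrained_txn(rng) for _ in range(1000))
            if not dut_accepts(*t)]
print(f"{len(failures)} boundary-crossing cases found out of 1000")
```

The point of the constraints is that every generated case is legal stimulus, so any failure the checker flags is a genuine corner case rather than noise, which is why the technique scales to putting a system “into strange places.”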
SE: So now, instead of architecting a chip, you’re really architecting part of a system. It all has to fit together. We’ve never dealt with that as an industry. We’ve been working with very discrete parts. How does that change design and architectures?
Koeter: One thing that gets very interesting is the idea of programmable accelerators. They have a very low latency, cache coherent interface. We’re still thinking through all the different possibilities.
Mitra: Even at an SoC level, if you look at how the EDA industry is constructed today, it goes from the architecture down. The architecture piece is done using spreadsheets and back-annotated from whatever model you’re using to make sure your verification is correct. There’s going to be innovation in that space because it’s new and it’s different. You’re going to get local optimization if you’re using humans and spreadsheets to do this. It has to be algorithmic.
Davidmann: The system you’re designing isn’t the chip anymore. It includes, at the very least, all the hardware-dependent software. That’s a fundamental thing this industry isn’t addressing. There are very few people driving that from an EDA perspective. It’s not the applications at the App Store. It’s the hardware-dependent stuff. No silicon works without this software, and no one builds a chip without it.
Rowen: The central challenge there is that EDA has always focused on, ‘What’s common on chips? What is it that all chips need in terms of tools and IP building blocks? What is the common denominator?’ They all use the same fab processes, so everything involving fab processes is well covered by the EDA industry. They all use standard cells and place and route. They almost all use some significant analog design capability. So there are some very rich common denominators. But the big issue is that so much value from that silicon is associated with an end vertical market. Someone building a big server has a very different notion of what that system looks like—its applications, its tradeoffs between cost and power and performance, the nature of the software running on it. That’s very different from what goes into a smart lightbulb. The EDA industry historically has not been able to fracture itself to address those vertical capabilities. The big existential question for the EDA industry is how it moves up into a world that is more about applications and much more diverse than the common denominator of silicon design.
Mitra: We do need both.
Rowen: Yes, and that’s more valuable than when you just go to 7nm and 5nm. A lot of people in EDA want to go into automotive because it’s the hottest thing. It has unique requirements. There is some ISO 26262. But does that have a big impact on what you’re going to offer in mobile? No.
Mitra: It’s the same for data centers.
Rowen: And so you have unique investments that need to be made for this industry to live a fuller life.
Janac: The fundamental problem here is that the EDA model doesn’t work at the system level. If you look at a big semiconductor company, they will have 100 people in the RTL-to-layout area, 25 to 30 people in the RTL group, and they’ll have two to three architects. Selling to the architects is not very profitable unless you have an IP model where you have some way of participating in the revenue of the architecture. The architects are too few and the problems are too difficult to economically serve them with extremely complex solutions. That’s why the system-level design industry has always had so much promise and not so great results.
Davidmann: It’s not the architecture. It’s the software. There are lots of software engineers, and they need tools like fault simulation.
Janac: You’re absolutely correct. They are much more numerous. But the hardware guys have been trained that you pay $100,000 to $200,000 per seat. The software guys have not been trained in that.
Koeter: The software engineers have been trained that the cost should be near zero.
Davidmann: In software there is less of a structured methodology. From my point of view, no one builds a chip today without simulation. I don’t believe anyone should be building software without simulation, either. There is education required about methodologies and tools, and about what they need to spend.
Janac: Big EDA companies refuse to throw in other tools to make it available to the software guys. And the software guys view themselves as artists, so they use free programming tools. So how do you train this cadre of software people to become design automation software customers? You need to train the software engineers and management that for every software engineer there needs to be a capital budget to improve productivity.
Davidmann: You have to demonstrate success and predictability. That’s going to take time. How many chips fail because they didn’t do verification? Some of them need many spins.
Related Stories
Overcoming The Limits Of Scaling
Part 1: Complex tradeoffs involving performance, power, cost, packaging, security and reliability come into focus as new markets open for semiconductors and shrinking features becomes increasingly expensive.
Focus Shifts To Architectures
As chipmakers search for the biggest bang for new markets, the emphasis isn’t just on process nodes anymore. But these kinds of changes also add risks.
Stepping Back From Scaling
New architectures, business models and packaging are driving big changes in chip design and manufacturing.