In the EDA industry’s quest to enable more and more activities pre-silicon, a lot of technical challenges have been overcome. As it turns out, though, the organizational challenges within user organizations are sometimes proving just as difficult.
I am writing this at DATE in Dresden, Germany, while preparing for two panels on system-level trends later today and one on software-driven verification tomorrow. I am also visiting partners and customers to discuss our current and planned technologies. A while ago I augmented “Leibson’s Law,” which states that it takes 10 years for any disruptive technology to be adopted by designers, with my own “Schirrmeister’s Law”: the likelihood of a new design technology being adopted by a project team is inversely proportional to the number of changes you ask the team to make.
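Written as an informal formula (my notation, and strictly tongue in cheek):

```latex
% Schirrmeister's Law, informally: adoption likelihood falls as required changes grow.
P_{\text{adoption}} \;\propto\; \frac{1}{N_{\text{changes}}}
```

where N_changes is simply the number of changes you ask the team to make to their existing flow.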
As I meet with customers, I am finding more and more examples that bear this out. Some of them are exacerbated by organizational challenges within the companies in which those teams operate. Clayton Christensen famously remarked in every high-tech product manager’s bible, “The Innovator’s Dilemma,” that designs sometimes reflect more of the organizational structure of a company than actual customer needs. The same seems to be true for the design processes used to develop them.
The illustration below shows what we are trying to achieve, which can be viewed as the great shift of design technologies to the left.
It depicts a design flow from spec to silicon, including production and post-silicon validation. Hardware has to be developed as an integration of IP into sub-systems, sub-systems into systems on chip (SoCs), and SoCs into systems. Complex software stacks, from bare-metal software to operating systems to middleware and applications, have to be able to execute on the various processors in the system. A lot of the improvement currently happening in system design processes is focused on delivering engines that can execute hardware representations earlier in the design cycle. These engines must also be fast enough that more cycles can be executed on them, for both verification and software development.
Specifically, teams are attempting to run pre-silicon more and more of the tests traditionally done post-silicon. And, of course, software development has become so complex that some software bring-up, such as booting an operating system to the prompt, has become a requirement for tapeout. While there remains plenty of room for improvement, we have come a long way. Virtual prototyping in some cases allows software development to proceed in parallel with RTL development. Hybrid combinations of virtual prototypes and hardware-assisted engines, like emulation and FPGA-based prototyping, accelerate software execution 10X to 60X while maintaining RTL accuracy for key aspects of the design. Users report that bring-up of designs in processor-based emulation has become so efficient that they can swap in several RTL images per day even though the RTL is not yet mature, greatly accelerating their debug cycle.
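To make the “develop software against a model” idea concrete, here is a minimal, purely illustrative sketch. The register names and offsets are hypothetical, and this is not SystemC/TLM or any vendor’s virtual prototyping API; it simply shows how bare-metal-style driver code can be written and debugged against a functional model of a peripheral long before RTL or silicon exists.

```cpp
// Minimal sketch (hypothetical register map, not any vendor's API): the same
// driver logic the software team wants to debug pre-silicon runs against a
// simple functional model standing in for the not-yet-available hardware.
#include <cstdint>
#include <iostream>
#include <string>

// --- Functional model of a UART-like peripheral ------------------------------
struct UartModel {
    static constexpr std::uint32_t STATUS_TX_READY = 0x1;  // hypothetical bit
    std::uint32_t status = STATUS_TX_READY;

    // The model "transmits" by printing to the host console.
    void write_tx(std::uint32_t value) { std::cout << static_cast<char>(value); }
    std::uint32_t read_status() const { return status; }
};

// --- Thin register-access layer ----------------------------------------------
// On silicon or an emulator this would be volatile loads/stores to fixed
// addresses; on the virtual prototype it forwards to the model above.
struct UartRegs {
    UartModel& model;
    void write(std::uint32_t offset, std::uint32_t value) {
        if (offset == 0x0) model.write_tx(value);          // TX data register
    }
    std::uint32_t read(std::uint32_t offset) const {
        return (offset == 0x4) ? model.read_status() : 0;  // status register
    }
};

// --- "Bare-metal" driver code under development --------------------------------
void uart_puts(UartRegs& uart, const std::string& text) {
    for (char c : text) {
        while ((uart.read(0x4) & UartModel::STATUS_TX_READY) == 0) {
            // spin until the transmitter is ready (trivially true in the model)
        }
        uart.write(0x0, static_cast<std::uint32_t>(c));
    }
}

int main() {
    UartModel model;
    UartRegs uart{model};
    uart_puts(uart, "hello from pre-silicon software\n");
    return 0;
}
```

In a real flow the functional model would typically be a SystemC/TLM virtual prototype and the driver would perform volatile accesses to fixed addresses, so that the same logic can later be exercised on emulation, FPGA-based prototypes, and eventually silicon.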
So what does the dog do when it finally catches the tail it has been chasing? Apparently, in the design world, users may need to overcome some organizational challenges once the technical solution arrives! I have now seen cases in which a virtual prototype of a design under development became available in a timely fashion, only for it to turn out that the organization was not ready for it. In a pipelined project flow, the software developers, the very people who would have benefited from that early availability, were simply not available because they were still tied up in the previous project, i.e., the prior chip.
Similar effects are happening in the testing and post-silicon validation world. Looking at projects, it seems clear that pre-silicon verification and post-silicon testing and validation could benefit from each other: tests developed for post-silicon use could help pre-silicon, and some of the verification performed pre-silicon could inform post-silicon validation. We have seen some great examples at previous DACs, like Teledyne LeCroy showing, in a presentation called “Ubiquitous PCI Express Verification from Simulation through Post-Silicon Development,” how to use post-silicon PCI Express protocol test equipment together with Palladium® emulation. Another example is Rohde & Schwarz showing, in a presentation called “LTE-Advanced with Palladium XP,” how to connect wireless test equipment normally used to test silicon to designs running in Palladium emulation, well in advance of silicon availability. It turns out, though, that for adoption of these approaches some of the challenges are organizational in nature, with the design side and the production validation side of a development house using fundamentally different tools and methods.
Optimizing the adoption of new design technologies will remain an interesting challenge going forward, considering the organizational situations project teams find themselves in. I remain optimistic, though, as I am having more and more discussions with customers, jointly with partners that have in the past been seen primarily in the post-silicon validation space. And organizational challenges can be overcome after all, at least when the value of the results is clearly quantifiable and well understood.
Fun times ahead!