What Is A System Now?

As designs become part of connected networks, the requirements for making them work properly are changing, too.


Defining a system used to be relatively straightforward. But as systems move onto chips, and as those chips increasingly are connected with applications and security spanning multiple devices, the definition is changing.

This increases the complexity of the design process itself, and it raises questions about how chips and software will be designed and defined in the age of the Internet of Things/Everything. For example, will engineers be developing software-defined hardware, network-defined software and hardware, or application-driven connected systems?

Simon Davidmann, CEO of Imperas, asserts that the classical definition of an electronic ‘system’ still stands—a product composed of both hardware and software components. “But what constitutes a typical ‘system’ has evolved considerably, especially software content, so the technologies and methodologies that are used to develop, debug and test the system become critically important.”

To this point, he noted the dramatic increase in software content in systems over the last 10 years. “The ratio of software to hardware engineers is increasing across all markets, including consumer, communications, computing, automotive, medical, and IoT. For example, premium cars have 100 million lines of code. A passenger jet, 15 million. And VDC researchers found that the size of embedded code bases is growing at roughly twice the speed of the embedded developer community. With this increase in software content comes an increase in software and system complexity that is not linear with lines of code. In fact, it increases exponentially as the lines of code increase. This impacts software and system quality, schedule, cost and security/safety.”

Grant Pierce, president and CEO of Sonics, suggested that these changes extend or redefine what interoperability means, along lines that until now have never been considered for an interoperability standard, perhaps at the chip level. “You go back to when SoCs started. I think people have forgotten, because we’ve used the acronym for so long, that the ‘S’ stood for system, that it was a system on a chip. As a result, such a system was going to rely upon the idea that we could all agree on a standard, say a single way of integrating a chip together, along the lines of what emerged over time in the PC world — things like PCI and the rest, which gave a way of things plugging in and playing. We haven’t really gotten there. We’ve gotten pretty close with ARM and AMBA, but there are still multiple interfaces out there being supported by multiple different vendors. Now we’re just going to trump all that with a requirement to be interoperable at the level of software drivers, or at least how those are architected, and/or at the level of how we look at each transaction that takes place within a chip, and whether we can comply with it from the point of view of its protocol, as expressed in the format of what’s being transferred, the physical wires that are being connected, the security that’s being applied to it, or the means by which the information being exchanged within the integrated system is managed for quality of service, among other things. We will need to find ways to extend what we think of as interoperability standards, or we’re going to have to see the same kind of consolidation at the IP level that we’re seeing at the chip level with the acquisitions going on between the likes of Avago and Broadcom or Intel and Altera.”

What drives this is the logical outcome of the latest round of convergence, proposed Drew Wingard, CTO of Sonics. “The Internet has been an amazing system for a long time; it is an amazing system composed of other systems. What has changed is that, as communication technology has become so ubiquitous and so cheap, we now attach devices to the networks that are far less capable than anything we’ve attached before. That’s great, because it allows us to do things we couldn’t do before: every lightbulb has a motion sensor in it; every postage stamp has an RFID tag. But it drives us to do things on the computing side that we haven’t really done before. In the early days, the Internet was an internetworking of pretty capable computers. Every node on the Internet was a computer, it had some pretty substantial hardware associated with it, and it was a real peer-to-peer network. As we get to these low-end devices, thinking about the Internet of Things, the wearable subsets of that, and so on, everything has to run off a battery, so you’re really pressed, not just for cost reasons but for power reasons as well, to be very careful about what you do where in your system. You are forced to divide the processing load between the edge devices and the next hop up, which in many cases is the smartphone, or between the smartphone and the fog, or between the fog and the cloud — and that’s new.”

“We’ve always had systems of systems — the definition of a system was always dependent on where you were looking. To AT&T, the telephone system was the system. But to the guys designing the telephone, the telephone was the system. Those boundaries still exist, but we’re trying to optimize across those boundaries in ways we’ve never had to before, and that’s where subjects like security become that much more interesting, because I’m having to do really low-level, detailed sharing of private information across a public network so that I can distribute these tasks to the right place. That’s what causes us to rethink the design paradigm. Going back to the Internet analogy, the interfaces defined there — TCP/IP and the like — have been incredibly valuable things, and they won’t go away. The economics of building these systems without them is unthinkable. But what it means is that, first, we have to implement those things cheaply, and second, we have to be able to layer on top of them the appropriate levels of security to make this partitioning of a larger system — which, in all the models you see, generally has an incredibly valuable service component — work across a set of hardware devices of very different capability. That’s the new system design challenge,” he continued.

“What we see from an SoC perspective, then, is that it’s harder and harder to think about designing your SoC as an isolated system. What are the requirements that this new world puts upon the SoC developer? You have to be very careful about how you design your pieces…because there are things about that next-level system that you probably won’t have the luxury of knowing at the time you build your system. You have to build in more capabilities. The standards around all of the security stuff are evolving, so you’re going to have to build a little bit of extra stuff in there to make it more flexible,” Wingard added.

And then, there are the chipheads who still say the system is the silicon, observed Kurt Shuler, vice president of marketing at Arteris. “In companies that are vertically integrated, they have a different view. If you understand what your customer’s customers want, then you can design for what they want. There are a lot of times when your customer doesn’t know that. The big suppliers who create lots of parts for a system get it, but the discrete parts makers often don’t have a good view into that. Usually it’s the software guys who are your real customers. If there’s a problem with software that’s close to the hardware, it’s probably about power management and security. If chip vendors understand what the software guys need to do, it really helps from a system standpoint.”

The IoT adds a new dimension into this discussion. Davidmann noted that Kevin Ashton, cofounder and executive director of the Auto-ID Center at MIT, first mentioned the Internet of Things in a presentation he made to Procter & Gamble: “Today computers — and, therefore, the Internet — are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code. The problem is, people have limited time, attention and accuracy — all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.”

IoT is also about interconnecting subsystems, Davidmann observed, where some are large and some are small. “IoT systems are large interconnected networks of devices, and I would expect all the edge devices to still have some complexity, have software running on smallish processors/microcontrollers, and to be communicating with larger, more centralized processing (perhaps cloud-based) systems with a lot of complexity and large amounts of software.”

The key, he said, is that there will be large amounts of software running on many interconnected computing elements, with a lot of data being transferred, stored and processed. The biggest challenge will be making sure these systems run reliably, and testing them to confirm that they do, with many corner cases validated before deployment.

Everything gets connected. Then what?
Jack Harding, president and CEO of eSilicon, believes the new primetime player in system development is security. “It used to be an afterthought or a box that was checked along the way. They used to talk years ago about ‘pretty good security’ as a concept, and today, we’ve all recognized—not just because of the daily headlines, but with the reliance that the world is placing on electronic systems—that data security is going to be critical not only to the acceptance of future products but also to the adoption rate of the other 5 billion people in the world who have yet to use a computer. That’s the big news in system development.”

But security also carries a quantifiable price in system-level design. “How much security are you willing to accept at the cost of performance? Or how much security are you willing to accept at the cost of area? That calculus is changing, and the pendulum is swinging more toward the secure side. Those of us who’ve been in this business a long time have come to realize how critical security is to the long-term success of our development, and therefore the success of both the semiconductor and the system-level industries that we all support.”

He said security will be included as another element in what has been a PPA decision in the past. “And it won’t be an afterthought or a box checked. It will be an integral part of the business case for making that particular product. Said differently, to the extent that any supplier can contribute to a more secure level of hierarchy at any part of the food chain, while considering the importance of power, performance and area, their solution will win over those who do not. The financial incentives to solve this problem, alongside the typical measurements we are more familiar with, are going to be substantial, and sooner rather than later.”

Both Harding and Davidmann believe security will have to be implemented throughout a broad-based connected system at every level, a shift that will require both enormous collaboration and cooperation.

“Layered security is absolutely key for systems today, involving all levels of software as well as the underlying hardware,” Davidmann said. “While stimulating and tracking down bugs in embedded systems is a key part of development, we also need to consider the issues of vulnerabilities and security weaknesses in the developed software. These might not be bugs as such, but bad design decisions, missing capabilities, or misunderstood holes.”

This includes everything from encrypting data on wires to performing secure boots that run only trusted software, using containers to sandbox applications, adding firewalls to everything, validating inputs to stop code injection, and much more.
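To make the input-validation point concrete, the fragment below is a minimal sketch — the function name, the allow-list rule, and the sample inputs are invented for illustration, not drawn from any vendor quoted in this article — of rejecting a device-name parameter unless it matches a strict character allow-list, so it can never smuggle shell metacharacters or query fragments into a downstream command.

```c
/* Minimal sketch of allow-list input validation (hypothetical names). */
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Accept only non-empty names made of letters, digits, '_' or '-',
 * up to max_len characters. Everything else is rejected outright. */
static bool is_valid_device_name(const char *name, size_t max_len) {
    if (name == NULL || name[0] == '\0')
        return false;
    for (size_t i = 0; name[i] != '\0'; i++) {
        if (i >= max_len)
            return false;                          /* too long */
        unsigned char c = (unsigned char)name[i];
        if (!isalnum(c) && c != '_' && c != '-')
            return false;                          /* disallowed character */
    }
    return true;
}

int main(void) {
    const char *inputs[] = { "thermostat-12", "lock_3; rm -rf /", "" };
    for (size_t i = 0; i < 3; i++)
        printf("\"%s\" -> %s\n", inputs[i],
               is_valid_device_name(inputs[i], 32) ? "accepted" : "rejected");
    return 0;
}
```

The same allow-list-first mindset applies whether the input arrives over a debug UART, a web API, or a sensor payload.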

“Developers need to consider carefully the hardware design of the system,” Davidmann said. “Everyone is aware you need to use containers for the different software applications, such as hypervisors and virtual machines, but you also need to consider the hardware design to ensure there are separations, so that only trusted software can access certain parts of the hardware. For example, TrustZone from ARM provides a two-world system (secure and non-secure), and more recently OmniShield from Imagination allows not just the virtualization of parts of the CPU but also virtualization of the GPU and other components.”
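The hardware separation Davidmann describes can be pictured as a bus-level check that consults a partition table before letting a transaction through. The toy model below is only a conceptual sketch of that idea; it is not ARM’s TrustZone or Imagination’s OmniShield interface, and every name and the peripheral map are hypothetical.

```c
/* Toy model of a two-world hardware partition check (all names invented). */
#include <stdbool.h>
#include <stdio.h>

typedef enum { WORLD_NON_SECURE = 0, WORLD_SECURE = 1 } world_t;

typedef struct {
    const char *name;
    bool secure_only;   /* if true, only the secure world may touch it */
} peripheral_t;

static const peripheral_t peripherals[] = {
    { "uart0",      false },  /* general-purpose console */
    { "crypto_key", true  },  /* key storage: secure world only */
    { "boot_fuse",  true  },  /* boot configuration: secure world only */
};

/* Returns true if the requesting world may access the peripheral. */
static bool access_allowed(const peripheral_t *p, world_t requester) {
    return !p->secure_only || requester == WORLD_SECURE;
}

int main(void) {
    for (int i = 0; i < 3; i++)
        printf("%-10s non-secure:%-3s secure:%s\n", peripherals[i].name,
               access_allowed(&peripherals[i], WORLD_NON_SECURE) ? "yes" : "no",
               access_allowed(&peripherals[i], WORLD_SECURE) ? "yes" : "no");
    return 0;
}
```

In real silicon the equivalent check is enforced in hardware on every bus transaction, which is why it has to be designed in rather than patched on later.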

Systems are supposed to just work
All of this has to happen faster and on a bigger scale, too. Security and integration are just part of the picture. Srikanth Jadcherla, group director of R&D for low power verification at Synopsys, observed that it took 11 years for power-aware verification to reach a stable and mature state. But put that in the context of what’s ahead, and the problem looks daunting. Complex SoCs are now being connected to arbitrary sets of networking points, and users expect them to communicate with each other—hardware to hardware and software to software. At the same time, those SoCs control door locks, thermostats and lighting, and they are layered with security, with processing done both on and off the chip.

“You’re looking at connected devices with human interventions to actually work in a very stable and secure manner,” Jadcherla said. “And that is one more order of magnitude higher of a problem in terms of complexity. We not only verify our SoC, now we verify that it talks to other similar SoCs in what we call device chaining.”

To this point, Vic Kulkarni, senior vice president and general manager of the RTL power business at Ansys, noted that the anticipated vast growth in the number of IoT devices, and the large-scale deployment of potentially inefficient, insecure, and even defective devices, can lead to critical challenges that impair connectivity for all users, or worse, bring the network down completely.

The hierarchy of basic IoT functions is as follows, he said:

Collect. This consists of devices, sensor systems, and sensor networks. For example, a wearable IoT sensor system will include heterogeneous functions such as proximity or micro-fluidic sensors, WiFi, and a microcontroller to collect data and connect to the edge node.

Connect. This step carries the information onward, for example through a gateway to an edge node. Unfortunately, edge-node devices tend to be low-cost and open-source, and hence open to hardware attacks.

Correlate. This involves big data analytics, from descriptive (what happened and when), to predictive (something will happen, i.e., a potential failure exists), to prescriptive (how to improve the product or system to avoid the failure). A rough sketch of how these three layers compose follows.
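Here is that sketch, a minimal, self-contained illustration in which every type and function name is invented for this article rather than taken from any real IoT stack.

```c
/* Sketch of the collect -> connect -> correlate stack (hypothetical names). */
#include <stdio.h>

typedef struct {
    int sensor_id;
    double value;        /* e.g., temperature in Celsius */
} reading_t;

/* Collect: a sensor node samples its environment. */
static reading_t collect(int sensor_id) {
    reading_t r = { sensor_id, 31.5 };   /* stand-in for a real measurement */
    return r;
}

/* Connect: the gateway/edge node forwards the reading upstream.
 * A real edge node would authenticate and encrypt at this step. */
static void connect_upstream(const reading_t *r) {
    printf("forwarding sensor %d value %.1f to the cloud\n",
           r->sensor_id, r->value);
}

/* Correlate: descriptive/predictive analytics on the collected data. */
static void correlate(const reading_t *r) {
    if (r->value > 30.0)
        printf("predictive: sensor %d is trending toward a failure threshold\n",
               r->sensor_id);
}

int main(void) {
    reading_t r = collect(7);
    connect_upstream(&r);
    correlate(&r);
    return 0;
}
```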

Above this is the world of apps, which is a traditional target for attacks that disable security tools, Kulkarni said. “The ‘IoT system’ definition cannot be limited to only one function in this stack, but must include the hardware and software components needed to build the complete IoT hierarchy. Security must be built in at every stage of this stack.”

He pointed to some of the techniques that are emerging, such as adding randomness to cryptographic implementations in both hardware and software, pre-charging registers and buses to mask power-consumption signatures, using fixed-time algorithms to reduce data-dependent timing signatures, and camouflaging structures to resist reverse engineering.
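The fixed-time idea is the easiest of these to show in code. The sketch below compares two byte strings while touching every byte regardless of where a mismatch occurs, so execution time does not reveal the position of the first difference; it illustrates the principle only, and production code should rely on a vetted library routine and guard against compiler optimizations.

```c
/* Minimal sketch of a constant-time (fixed-time) comparison. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int constant_time_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences without branching */
    return diff == 0;
}

int main(void) {
    const uint8_t expected[] = "s3cr3t-token0000";
    const uint8_t supplied[] = "s3cr3t-token0001";
    printf("match: %d\n",
           constant_time_equal(expected, supplied,
                               strlen((const char *)expected)));
    return 0;
}
```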

In addition, Kulkarni said the industry must cooperate to support important standards such as 6LoWPAN, which carries IP version 6 (IPv6) over low-power wireless IEEE 802.15.4 networks. This allows each wireless sensor node to be assigned its own IP address for communication over the Internet. Security protocols can be built into both the “connect” and “collect” functions of an IoT system.
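One reason 6LoWPAN scales to very small nodes is that a device can derive a link-local IPv6 address from the EUI-64 of its 802.15.4 radio (per RFC 4291 and RFC 4944) by prepending the fe80::/64 prefix and flipping the universal/local bit. The sketch below shows just that address-derivation step, ignoring short addresses, header compression, and duplicate-address detection.

```c
/* Simplified derivation of a link-local IPv6 address from an EUI-64. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void link_local_from_eui64(const uint8_t eui64[8], uint8_t addr[16]) {
    memset(addr, 0, 16);
    addr[0] = 0xfe;                 /* fe80::/64 link-local prefix */
    addr[1] = 0x80;
    memcpy(&addr[8], eui64, 8);     /* interface identifier from EUI-64 */
    addr[8] ^= 0x02;                /* invert the universal/local bit */
}

int main(void) {
    const uint8_t eui64[8] = { 0x00, 0x12, 0x4b, 0x00, 0x01, 0x02, 0x03, 0x04 };
    uint8_t addr[16];
    link_local_from_eui64(eui64, addr);
    for (int i = 0; i < 16; i += 2)
        printf("%02x%02x%s", addr[i], addr[i + 1], i < 14 ? ":" : "\n");
    return 0;
}
```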

Still, there is a very bright side to all of this, said Chris Rowen, a Cadence fellow. “If we can figure out the business model to enable smaller teams with either smaller ASPs or smaller volumes — probably not both — to be able to really attack these things, there is this potential renaissance/explosion/big bang of design that can take place, driven particularly by systems companies who have some know-how or some brand or some data set that they are then able to exploit and expand. You’d have, hypothetically, a Johnson & Johnson saying, ‘You know what? I know more about what a smart Band-Aid should look like than anybody, and I need to design something that enables my smart Band-Aid.’ You still have the problem that they’re going to need to design very productively at relatively lower cost, and we have to go figure out what does IP mean, what do tools mean, what do fab services mean, what does back end mean in that environment? What are the kinds of platforms that people are going to build?”

And for all of this to work—for big systems to work seamlessly and securely across multiple nodes on a complex network—what is needed is collaboration throughout the embedded ecosystem to develop the new technologies and methodologies needed for the systems of today and tomorrow, according to Davidmann. That may be a tall order, but at least the definition of what’s needed is coming into focus.


