
HW/SW Design At The Intelligent Edge

Systems are highly application-specific and power-constrained, which makes design extremely complex.


Adding intelligence to the edge is a lot more difficult than it might first appear, because it requires an understanding of what gets processed where based on assumptions about what the edge actually will look like over time.

What exactly falls under the heading of Intelligent Edge varies from one person to the next, but all agree it goes well beyond yesterday's simple sensor-based IoT devices. Intelligent edge applications range from autonomous vehicles and drones to 5G base stations, smart lighting systems, and automated factories. The common denominators are connectivity, partitioned and optimized data processing, and a level of analytics oversight that until now has been almost entirely done in the data center.

As the amount of data generated by sensors continues to balloon, moving all of it from the endpoints where it is collected becomes inefficient and costly. The challenge now is to figure out how to partition processing, and that requires much more decision-making closer to the source of the data, in an area vaguely described as the intelligent edge.

“There’s a lot of uncertainty,” observed Max Odendahl, CEO of Silexica. “Everybody is struggling to understand what exactly the platform is, what the framework is, what the development tooling should be. Right now there are more open questions than solved ones.”

That impacts some of the fundamental design considerations, such as what middleware will need to be supported. “Is it ROS2 or Adaptive AUTOSAR, or is everything manual? Is it going to be Linux? Is it QNX? The answer is, ‘All of the above because we’re not sure yet.’”

The goal is to sharply reduce the amount of data from sensors that needs more intensive processing. But that creates problems at the edge, where power and processing performance are limited, particularly when AI/machine learning is added into the mix.

“When you build systems, there are multiple things you need to care for,” said Raik Brinkman, CEO of OneSpin Solutions. “It’s the same whether you do it in hardware or software, but the challenge is how you keep track of data. Nothing is fixed, and as you get new data, you may find you have gaps and have to retrain systems. There are multiple layers of data. And with machine learning, you need to recompute everything. This is a big management task. People are not aware of the complexity in all of this. They’re happy enough that it works.”

System-level challenges
What’s needed is a way to pick out critical data from data that may be useful over time, and a way to trash data that is considered irrelevant. So a camera on a car, for example, may produce enormous quantities of data, but there is no reason to keep the vast majority of it because it has no bearing on the operation of a vehicle.
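One common way to trash irrelevant data at the source is a dead-band filter, which only forwards samples that differ meaningfully from the last value sent upstream. A minimal sketch in C (the function name and threshold are illustrative, not from any particular product):

```c
#include <math.h>
#include <stddef.h>

/* Forward a sample only when it differs from the last transmitted value
 * by more than a threshold; everything else is dropped locally.
 * Returns the number of samples kept. */
size_t deadband_filter(const double *in, size_t n, double threshold,
                       double *out)
{
    if (n == 0)
        return 0;
    size_t kept = 0;
    double last_sent = in[0];
    out[kept++] = in[0];              /* always send the first sample */
    for (size_t i = 1; i < n; i++) {
        if (fabs(in[i] - last_sent) > threshold) {
            out[kept++] = in[i];
            last_sent = in[i];
        }
    }
    return kept;
}
```

For a slowly changing temperature signal, a filter like this can cut the transmitted volume by orders of magnitude while preserving every significant excursion.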

“If you can do sufficient pre-processing, then the data that you have to send back to the IoT cloud is not that large,” said Joao Geada, chief technologist at ANSYS. “So for any one process, that amount of data isn’t that large. If you think about speech recognition, that’s done on a single computer. So the design is narrow in, wide in the middle, and narrow back out. But to adapt the architecture, you have to rethink it from the outset.”

So where to start with designing these systems? According to Odendahl, it begins with the developer tooling, the operating system, the middleware, and the platform. “At the end, it goes back to the system level. What can be very useful is having insights for the concurrent software. And because at the edge it’s not even OpenCL or CUDA—it’s C and C++—we want to build a ‘Google Earth’ for source code, where there are a lot of different levels. You could start with a high-level overview, but be able to drill down and look for root causes with traceability back to the source code. This is really about execution analytics where there’s traceability back to the original source code. The higher the abstraction level, the bigger the difference between what you’ve written in your source code, and what you hopefully understand, versus what you would see in a profile or in a trace. That gap gets bigger and bigger.”

Even though intelligent edge applications are specific, they also must be adaptable over time. That requires development at the highest possible level.

“At the same time you want to squeeze out performance, so you must be specific to the target device,” he said. “This is an interesting dilemma because we have customers looking to the future, whereby they have hardware-specific implementations and will start from scratch over and over again, so they want to start at the super high level—C++, for example—and optimize on a system level or an architectural level. And then, only at the end, they optimize the parts that actually make sense. There are a number of companies that really want to move from the way they’re currently doing it, but it will take a while because the moment they need performance, they fall in the trap again. They go back to the original tack of, ‘Let’s figure out the system level and architectural level later.’ And they might have just done the wrong thing because they didn’t actually understand it on a system level.”

Making this shift requires a change in mindset, and increasingly it applies to technology that flew well under the radar in the past. For example, there is a lot of activity within engineering organizations to add intelligence into sensors at the extreme edge.

“It used to be there was a dedicated processor that would read through a dedicated analog-to-digital converter from the sensor, and would do all the processing there,” said Jeff Miller, a product marketing manager at Mentor, a Siemens Business. “Now in intelligent sensors, the analog-to-digital conversion, the analog filtering—all of those sorts of things are starting to happen on a companion die with the sensor, but still all inside that same package. Beyond that, embedded processors also are being placed there to run software for things like sensor fusion or algorithms to preprocess signal data that are actually running inside of what you would think of as the sensor chips, especially devices that might have multiple sensors built into the same chip.”
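Sensor fusion of the kind Miller describes can be as simple as a complementary filter, which blends a gyroscope's fast but drifting rate signal with an accelerometer's noisy but stable angle estimate. A hedged sketch in C (the function name and the 0.98 blend factor are illustrative assumptions):

```c
/* Complementary filter: blend the integrated gyro rate (accurate over
 * short intervals) with the accelerometer angle (stable over long ones).
 * alpha close to 1.0 trusts the gyro; (1 - alpha) corrects drift. */
double fuse_tilt_deg(double prev_angle_deg, double gyro_rate_dps,
                     double accel_angle_deg, double dt_s, double alpha)
{
    return alpha * (prev_angle_deg + gyro_rate_dps * dt_s)
         + (1.0 - alpha) * accel_angle_deg;
}
```

Running a loop like this on a companion die means only a fused, low-rate orientation value leaves the sensor package, rather than raw high-rate inertial samples.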

At the larger scale of embedded systems, there still is usually a processor performing processing and managing the communications interface. “Software is present at multiple levels within these devices—in the sensors, in individual microcontrollers, or in an application processor, and then of course at the cloud,” Miller said. “There is software existing all the way up and down the stack in a way that wasn’t the case not too long ago. There was software in one component of the embedded system, but now there are multiple of those, and the cloud tier, and the gateway tier, and all of these other areas. The complexity of managing all that software is a challenge, but there are some huge benefits to having that programmable capability.”

The biggest benefit is flexibility, which is critical in a nascent technology segment like the edge. “If you can load new software later, you can add new features, you can fix bugs. That aspect is especially critical for security because if you’re fixed function and somebody finds a vulnerability, you have to replace the hardware. If you’ve got programmability, then you can develop new code to do updates in the field and close those vulnerabilities. It’s absolutely critical to have that flexibility when you’re trying to build and maintain a secure system,” he said.

This programmability is built into the hardware, so instead of building a fixed-function processor to process the data and produce a result, a programmable element such as an eFPGA, DSP, or microcontroller core could be added. These off-the-shelf solutions can be built into the sensor chips themselves in order to gain programmability, which provides a lot more flexibility in designing the algorithms that are going to run. In addition, the software being written for the CPU can be tested on the bench, which allows for more flexibility and upgradability.
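At the software level, one pattern that makes such devices field-updatable is dispatching through a function pointer rather than calling a fixed routine, so an update only has to repoint the table entry. A purely illustrative sketch in C (real over-the-air updates also involve signed images and a bootloader, which are out of scope here):

```c
typedef int (*process_fn)(int sample);

static int process_v1(int s) { return s * 2; }        /* original firmware  */
static int process_v2(int s) { return s * 2 + 100; }  /* patched behavior   */

/* The device always dispatches through this entry, never directly. */
static process_fn active_process = process_v1;

/* "Applying an update" amounts to repointing the dispatch entry to
 * newly loaded code. */
static void apply_update(process_fn patched)
{
    active_process = patched;
}
```

The same indirection is what lets a vendor close a vulnerability in deployed units without replacing hardware, as Miller notes above.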

Engineering teams that have a lot of expertise in designing sensors probably have MEMS designers and physicists, and that’s really their focus, Miller noted. “Then they’ve got an analog team, and now they’re looking at how to step up to the next level with their product and make it into a smart sensor as opposed to just a high-quality sensor. Usually they’re looking at implementing digital logic on their chip and putting a processor on it. The key thing is combining IP, because it isn’t necessary to become an expert in processor implementation in order to make a smart sensor. There are great IP options out there. Just go get one, and assemble the necessary portfolio of IP to build this system. Understanding the costs and benefits here is also important. In trying to understand whether that equation makes sense for them, people are pretty surprised when they actually sit down and run the numbers. The number that surprises people the most is the power impact that integrating these things has. In one case study from the oil and gas industry, they took a design that was done using discrete components, assembled it all together into a custom SoC for their application, and cut their power budget by 70%.”

Different tools and approaches
Because this is a new area, with lots of possibilities, it’s also essential to do a lot more experimentation to figure out what works best. This requires a higher level of abstraction, which is the main reason high-level synthesis has received so much attention lately, after years of largely unfulfilled hype.

What makes HLS so useful is that both hardware and software can be specified in C, which allows hardware and software engineers to use the same tools.

“That’s the real trick here,” Miller said. “These are highly customized applications. You get those savings by customizing and tailoring your IoT edge device to the specific market that you’re trying to get in. You usually have very aggressive targets in terms of not just power and performance, but in terms of costs and the ability to field volumes of these things and security and all of these other aspects that come into these systems. So there are engineering teams being stretched to the limit trying to implement these wildly complex systems, and implement many of them in order to target different markets. So this idea of being able to harness all that power with a higher-level language, and especially a higher-level language that’s common to the software that you’re writing, is just really powerful.”
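The HLS workflow Miller describes rests on writing hardware as ordinary C with fixed, analyzable loop bounds, so the same source can run on a CPU for functional testing and be synthesized into gates. A minimal sketch (a 4-tap FIR multiply-accumulate; synthesis pragmas are tool-specific and omitted):

```c
#define TAPS 4

/* A fixed-trip-count multiply-accumulate loop. Written as plain C, it
 * can be unit-tested on a workstation, and an HLS tool can fully unroll
 * and pipeline the same source into a parallel hardware datapath. */
int fir4(const int coeff[TAPS], const int window[TAPS])
{
    int acc = 0;
    for (int i = 0; i < TAPS; i++)
        acc += coeff[i] * window[i];
    return acc;
}
```

Because the trip count is a compile-time constant and there are no data-dependent branches, the synthesized hardware can compute all four products in parallel—exactly the kind of sharing between software testing and hardware implementation that makes a common C source valuable.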

With 5G development activities coming on strong in conjunction with the intelligent edge, the challenge is a deeper understanding of the relationship, and interactions, between hardware and software. This has a direct impact on partitioning of data processing between the edge and the cloud.


Fig. 1: An increasingly complex design challenge at the edge. Source: Cadence

“There is the ultra-low latency side towards the devices, and there is the back-end processing of making sure that the data is actually available at the edge, and can be computed at the edge,” said Frank Schirrmeister, senior group director for product management and marketing at Cadence. “This balancing—how much compute do you do at the device versus in the data center, which sits somewhere at the Internet backbone—that’s really where the system-level challenge kicks in.”

This is a larger system-level problem, because it reaches well beyond just the edge device. “You have to have the interconnect with the low latency toward the devices, while maintaining all the data management,” Schirrmeister said. “This looks very similar to some of the challenges of video hosting for the best bandwidth, and from a hardware-software perspective, you need to take into account the system-level challenge of balancing the ultra-low latency toward the devices, the strong compute, or backend compute in the servers. That’s the key challenge, and that’s the system-level challenge to be sorted out. The rest is all about low power. How do you design the antennas, how do you make sure that you get the data there in the right way? The biggest change is really the system-level challenge to balance low latency and backbone.”
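The balancing act Schirrmeister describes can be framed as a first-order latency model: run a task locally when the estimated edge compute time beats the cost of shipping the raw data upstream. A simplified sketch in C (the parameters and the pure-latency objective are illustrative assumptions—a real partitioning decision also weighs power, cost, and privacy):

```c
typedef enum { RUN_AT_EDGE, RUN_IN_CLOUD } placement_t;

/* Compare estimated end-to-end latency of processing locally against
 * transmitting the payload and waiting on the data center. */
placement_t choose_placement(double payload_bytes,
                             double uplink_bytes_per_s,
                             double cloud_rtt_s,
                             double task_ops,
                             double edge_ops_per_s)
{
    double edge_s  = task_ops / edge_ops_per_s;
    double cloud_s = payload_bytes / uplink_bytes_per_s + cloud_rtt_s;
    return (edge_s <= cloud_s) ? RUN_AT_EDGE : RUN_IN_CLOUD;
}
```

Even this crude model shows why large raw payloads push compute to the edge, while compute-heavy tasks on small data favor the backbone.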

This also has a direct impact on the kinds of tools required to ensure both the hardware and software will work together properly. “Hardware is typically verified out of context with the software, and vice versa,” said Zibi Zalewski, general manager of the hardware division at Aldec. “For example, the software engineers verify on models of hardware, which can be very expensive, and which run slowly.”

This is why hybrid co-emulation has been attracting interest lately. It can solve major problems that hardware and software engineers face when collaborating on the verification of their respective design parts.
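Short of full co-emulation, software teams often start by exercising driver code against a behavioral C model of the hardware's register interface. A hedged sketch (the register names, layout, and polling scheme are invented for illustration, not from any real device):

```c
/* Behavioral model of a sensor's register map, so driver code can be
 * exercised on a workstation before RTL or silicon is available. */
enum { REG_STATUS = 0x0, REG_DATA = 0x4, STATUS_DATA_READY = 0x1 };

struct sensor_model {
    unsigned status;
    unsigned data;
};

static unsigned reg_read(struct sensor_model *m, unsigned addr)
{
    if (addr == REG_STATUS)
        return m->status;
    if (addr == REG_DATA) {
        m->status &= ~STATUS_DATA_READY;   /* reading data clears ready */
        return m->data;
    }
    return 0;
}

/* Driver-side routine under test: poll for data-ready, then read. */
static int driver_read_sample(struct sensor_model *m, unsigned *out)
{
    for (int tries = 0; tries < 100; tries++) {
        if (reg_read(m, REG_STATUS) & STATUS_DATA_READY) {
            *out = reg_read(m, REG_DATA);
            return 0;
        }
    }
    return -1;   /* timed out waiting for data */
}
```

In a hybrid co-emulation flow, the same driver code would later run against the actual RTL in an emulator, with the C model serving as the early, fast stand-in.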

In the rapidly evolving hardware/software development environment for the intelligent edge, many technology approaches are likely to coexist as segments of vertical development come to consensus on standards. And at the end of the day, getting a design completed, verified, and implemented in the field is what really counts.

—Ed Sperling contributed to this report.



