Options Widen For Optimizing IoT Designs

Trading off features, functions, and costs is an increasingly complex and ongoing balancing act.


Creating a successful IoT design requires a deep understanding of the use cases, along with a long list of tradeoffs among various components and technologies, to provide the best solution at the right price point.

Maximizing features and functions while minimizing costs is an ongoing balancing act, and the number of choices can be overwhelming. The menu includes SoC selection, OS and software protocols, wireless connectivity options and RF, network use, power and thermal management, battery life, and available IoT standards. Other factors affecting those choices are cybersecurity needs, operating frequency and data rates, real-time and latency requirements, payload size, ruggedness for both consumer and industrial grade, packaging and size, system reliability, AI, and overall cost constraints.

“IoT system designs are under constant pressure to reduce costs,” said Ron Lowman, strategic marketing manager at Synopsys. “Sometimes it is difficult to add a new feature if it is going to cost more. Take a hearing aid, for example, which integrates some very advanced power capabilities in the technology to enable much lower power for audio processing in a small device. While hearing aid manufacturers have been doing this for years, the rest of the IoT space has not adopted some of these techniques because there is an increased cost associated with these techniques. A tradeoff has always existed between power efficiency and low cost, such that there is always a question of how much customers are willing to pay for longer battery life. There are advanced technologies, techniques, and optimizations that take too long to develop or cost too much. Until solution providers come up with integrated solutions to outpace the common next generation incremental upgrades made by traditional semiconductor vendors, cost will continue to be the primary driver of IoT SoC design decisions, in particular for consumer and industrial applications.”

Design optimization's effect on the overall design
How an IoT design is optimized is application-specific, and it shapes the overall design parameters. For example, in a large industrial chemical plant, temperature monitoring is important. A malfunctioning process can be hazardous if the temperature climbs too high, resulting in dangerous chemical leaks or explosions. IoT devices can be used to monitor the temperature, as well as to detect leaks.

In this case, the overall design requirements are actually quite simple. An IoT system for monitoring temperature only needs to track whether the temperature exceeds a preset limit. In the case of leak detection, the system would just have to detect when there is a leak.

Here, the MCU in an IoT-based leak detection system needs to interact with the sensor only about every 30 seconds. When a leak is detected, the device sends an alarm to the server or to the manager in charge; otherwise it remains in sleep mode. Even when an alarm is raised, it requires only a small packet, with no need for high throughput. Many low-power wireless networks, such as Matter, LTE-M, and Wi-SUN, can be used.
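
As a rough illustration of that duty cycle, consider the minimal firmware loop below. It is a sketch only; the hal_* functions are hypothetical stand-ins for whatever sleep, sensor, and radio APIs a particular MCU vendor's SDK provides.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL calls -- substitute the sleep, sensor, and radio
 * APIs from your MCU vendor's SDK. */
extern void hal_deep_sleep_seconds(uint32_t seconds);
extern bool hal_leak_sensor_read(void);
extern void hal_radio_send_alarm(const uint8_t *payload, uint8_t len);

#define POLL_INTERVAL_S 30u   /* wake roughly every 30 seconds */

int main(void)
{
    for (;;) {
        /* The MCU spends nearly all of its life in deep sleep. */
        hal_deep_sleep_seconds(POLL_INTERVAL_S);

        if (hal_leak_sensor_read()) {
            /* A single small packet is enough -- an alarm needs
             * no high-throughput link. */
            static const uint8_t alarm[] = { 0x01 /* LEAK */ };
            hal_radio_send_alarm(alarm, sizeof alarm);
        }
    }
}
```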

Likewise, in a surveillance application where cameras are installed across a large plant or a multi-plant site, periodic video streaming over Wi-Fi or similar technologies is required. In this scenario, higher throughput is needed to support video, and the additional power that requires cuts into overall battery life. If the installation involves smart street or traffic lights, battery life is less of a concern because a power source is usually available.

“What’s driving some of this is that the nature of AI is changing things quite a bit when it comes to workload-specific chips,” said Sailesh Chittipeddi, executive vice president at Renesas. “It’s no longer the case that CPUs do X, Y, and Z functions with no overhead associated with any workload. That’s why you have all these companies becoming more vertical — to drive the solutions they need. And the interplay between electrical and mechanical features is becoming far more important, where the placement of a particular connector could make a difference. That’s why more CAD companies are getting into system-level support and system-level design.”

What do you want to accomplish?
Walt Maclay, president of IoT design consultancy Voler Systems, pointed to three important areas in optimizing IoT devices — battery life, cost, and size. “This is particularly important for wearable devices, but most IoT devices have these issues,” he said. “You never can have everything you want. It’s a matter of picking tradeoffs, and engineering design is all about tradeoffs.”

To achieve power efficiency, selecting low-power processors and sensors is essential, but there also must be sufficient processing power or speed for the task the device needs to do.

“Additionally, pick the lowest-power wireless communication that will work in the application,” Maclay said. “Bluetooth LE has the very lowest power consumption of standard wireless, but it only transmits 10 to 30 feet. If you need to transmit to the Internet without a cell phone, NB-IoT or LTE-M are low-power, low-speed wireless technologies that transmit for miles.”

Further, software must be carefully written and tested to ensure the device runs at the lowest power it can achieve. Power to sensors, transmitters, and other parts of the processor also needs to be shut off when they are not in use.
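
As a sketch of that idea, the fragment below powers each peripheral only for the moment it is actually needed. The hal_ power-control calls are illustrative assumptions, not APIs from any particular SDK.

```c
#include <stdint.h>

/* Hypothetical power-control HAL -- the names are illustrative. */
extern void     hal_sensor_power(int on);
extern void     hal_radio_power(int on);
extern uint16_t hal_sensor_sample(void);
extern void     hal_radio_send(uint16_t value);

/* Power each block only for the instant it is needed. */
static void sample_and_report(void)
{
    hal_sensor_power(1);
    uint16_t value = hal_sensor_sample();
    hal_sensor_power(0);   /* sensor off as soon as the read completes */

    hal_radio_power(1);
    hal_radio_send(value);
    hal_radio_power(0);    /* radio off the moment the packet is out */
}
```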


Fig. 1: Comparison of low-power wide area networks. Source: Voler Systems

Other power efficiency design considerations include choosing the best memory retention options, memory block size, compute and memory coupling, as well as using AI hardware accelerators if AI is needed. Depending on the application, the design may need to be optimized differently.

Maximizing IoT battery life
With most IoT devices running on batteries, optimizing battery life is key to staying within budget.

“One opportunity to extend battery life involves the device’s active versus inactive duty cycle,” said Prakash Madhvapathy, director of product marketing for Tensilica audio/voice DSPs at Cadence. “When a device works only part of the day, the processing element should not be active while producing no active sensor data. On the flip side, it is important for the processing to begin soon after the device is turned on, obviating manual intervention. Requiring operator oversight could mean missing opportunities for processing.”

However, if the processing element has a small, energy-frugal always-on (AON) portion that stays alive while the main processing block is turned off, battery charge is preserved when the device is inactive.

“The AON part can detect the device turn-on and wake up the main processing block,” Madhvapathy said. “While a few IIoT devices are using this technique to extend battery life, by and large they have not had access to architectures that make this possible. An example of a DSP combination that enables this use case could be a Cadence Tensilica HiFi 1 DSP performing sensor fusion in the AON domain coupled with a HiFi 5 DSP for the performance domain processing. The HiFi 1 DSP is designed to perform sensor fusion in ultra-low power mode while looking for meaningful sensor activity. It keeps the HiFi 5 DSP in power down mode until it senses the device turn-on. At this point it can power on the HiFi 5 DSP, and the HiFi 5 DSP can process as needed for active mode.”
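
The wake-up pattern Madhvapathy describes is independent of any particular DSP pairing. Below is a minimal sketch of the supervisor loop, again using hypothetical hal_ names for the two power domains.

```c
#include <stdbool.h>

/* Hypothetical HAL for a two-domain design: a tiny always-on (AON)
 * block plus a larger performance domain that is normally off. */
extern void hal_aon_wait_for_event(void);      /* ultra-low-power wait */
extern bool hal_aon_sensor_activity(void);     /* runs in the AON domain */
extern void hal_perf_domain_power_on(void);
extern void hal_perf_domain_run_active_mode(void);
extern void hal_perf_domain_power_off(void);

void aon_supervisor_loop(void)
{
    for (;;) {
        hal_aon_wait_for_event();               /* AON block sips power */

        if (hal_aon_sensor_activity()) {        /* meaningful activity? */
            hal_perf_domain_power_on();         /* wake the big core   */
            hal_perf_domain_run_active_mode();  /* process until done  */
            hal_perf_domain_power_off();        /* back to sleep       */
        }
    }
}
```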

Along the same lines, Arm’s Helium technology is aimed at designs that need AI and thus demand higher signal processing performance. “For example, the Helium technology in Arm’s Cortex-M55 and M85 accelerates signal processing and machine learning, which will be useful for use cases such as high-end vision applications that may require high-performance machine learning capability,” said Thomas Lorenser, director of general-purpose compute at Arm. “Use cases such as speech recognition have less-demanding workloads and may need less machine learning capability. Additionally, part of the chip may be inactive, influencing overall chip power consumption. Selecting the right IP for the application will help achieve higher energy efficiency in the design. If the design workflows require machine learning with signal processing acceleration, it will consume more power.”
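
For designers who want to try this, Arm's CMSIS-DSP library is one common route: the same source code picks up Helium (MVE) vector kernels when built for a Cortex-M55 or M85, and falls back to scalar code on older cores. The FIR filter below is a minimal sketch; the tap count, block size, and coefficient values are placeholders.

```c
#include "arm_math.h"   /* CMSIS-DSP; uses Helium (MVE) kernels when
                           compiled for Cortex-M55/M85 */

#define NUM_TAPS   32
#define BLOCK_SIZE 64

static float32_t fir_coeffs[NUM_TAPS];   /* placeholder; filter design omitted */
static float32_t fir_state[NUM_TAPS + BLOCK_SIZE - 1];
static arm_fir_instance_f32 fir;

void filter_init(void)
{
    arm_fir_init_f32(&fir, NUM_TAPS, fir_coeffs, fir_state, BLOCK_SIZE);
}

void filter_block(const float32_t *in, float32_t *out)
{
    /* On a Helium-capable core this call runs vectorized; on older
     * Cortex-M cores the library falls back to scalar code. */
    arm_fir_f32(&fir, in, out, BLOCK_SIZE);
}
```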

Energy-efficient IoT chips
To enable engineering teams to design energy-efficient products, new technologies are emerging, such as InnoPhase IoT’s Wi-Fi + Bluetooth chip. With transmit (Tx) current of 81 mA, receive (Rx) current of 37 mA, and idle current of around 150 µA (DTIM-3), it enables batteries to last 10 years or more in low-power, cloud-connected IoT sensors, and up to a year in power-intensive video cameras and doorbells.
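
To see how such currents translate into battery life, a simple duty-cycle average is enough. The program below is a back-of-the-envelope estimate using assumed numbers (a 100 ms transmit burst once per minute and a 3,000 mAh battery), not an InnoPhase specification.

```c
#include <stdio.h>

/* Back-of-the-envelope battery-life estimate. All numbers are
 * illustrative assumptions, not vendor specifications. */
int main(void)
{
    const double i_tx_ma   = 81.0;     /* transmit current, mA        */
    const double i_idle_ma = 0.150;    /* idle (DTIM-3) current, mA   */
    const double tx_s      = 0.1;      /* 100 ms transmit burst       */
    const double period_s  = 60.0;     /* one report per minute       */
    const double batt_mah  = 3000.0;   /* assumed battery capacity    */

    double avg_ma = (i_tx_ma * tx_s + i_idle_ma * (period_s - tx_s))
                    / period_s;
    double hours  = batt_mah / avg_ma;

    printf("average current: %.3f mA\n", avg_ma);        /* ~0.285 mA */
    printf("battery life:    %.1f days\n", hours / 24.0); /* ~439 days */
    return 0;
}
```

At these assumptions the idle current dominates, which is why reaching the decade-scale lifetimes cited above depends on reporting far less often and sleeping at currents well below the idle figure between reports.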


Fig. 2: Energy-efficient battery-operated Wi-Fi video camera usage assumptions. Source: InnoPhase IoT

Wi-SUN technology, designed for smart cities and applications such as smart power grids, has a design specification of battery life up to 20 years. Multiple chip suppliers support Wi-SUN, including Texas Instruments and Silicon Labs. The Wi-SUN chip from Silicon Labs, which includes the “Secure Vault” security feature, consumes only 2.6 μA in deep sleep mode. By comparison, Texas Instruments’ Wi-SUN chip consumes 0.85 μA in sleep mode with full memory retention and clocks running.

“For even longer battery life, by incorporating an energy harvesting block in the IoT SoC, it is possible to implement battery-free IoT devices or install a battery that will last a lifetime,” said Nick Dutton, senior director of product marketing at Atmosic. “The SoC can harvest energy not only from light, but also from RF energy generated by nearby mobile phones and the like. This would be the perfect application in retail stores, where electronic price tags can be updated wirelessly.”

An important consideration is that consumer-grade batteries have a shorter shelf life and may leak chemicals over time. For IoT applications, industrial-grade batteries are needed.


Fig. 3: Battery-free SoC incorporates an energy harvesting block. Source: Atmosic

While more and more IoT devices incorporate AI, how much is really needed?

As illustrated in the examples above, a leak detection application has a simple, predictable pattern (there is either a leak or there is not), so the design does not require a high-performance MCU. But in applications such as surveillance and plant security, AI may be required. To keep unauthorized personnel from entering, high-resolution cameras are installed to authenticate identities, which may require an ID card along with facial or fingerprint recognition. At an outdoor plant, workers may be wearing gloves, ruling out fingerprints; in that case facial recognition, and with it AI, is required. But not every application requires AI.

Synopsys’ Lowman explained there are a lot of innovations in the IoT space. “Traditionally, IC developers keep pushing for higher performance and cost reductions as new products or releases enter the market,” he said. “Higher frequencies and next-generation process node shrinks help bring about improved performance for the price. Several years ago, there was a push to add IoT protocols, including cellular technologies such as LTE-M, narrowband IoT, and LoRaWAN. We continue to see change with new protocols, the latest being Matter and upgrades to Bluetooth.”

Today more thought is being put into the applications, and specifically the AI workloads. “One of the biggest drivers in the market today is the accommodation of AI workloads, which are driving next-generation designs,” said Lowman. “This can be challenging because IoT devices have very little memory and compute resources. When implementing AI workloads, you can never get enough on-chip SRAM, so we’re seeing more and more companies adopt high-density memories — from the data center all the way down to much smaller IoT devices. AI workloads are math-intensive functions. Thus, IoT devices draw more power. So there’s a difficult design challenge — reduce power and cost, but accommodate a function needed for ‘killer apps.’”

Adding to the challenge is that, in most cases, the very latest apps are not fully mature or are always changing due to the speed of innovation. “This means the design goals can only be accomplished by accommodating specific AI workloads and maximizing available compute and memory efficiency within the desired power and cost budgets,” Lowman said. “As a result, IoT-based solutions must continue to reduce cost and power, while trying to accommodate an intense processing function. This effort is speeding incremental changes in the IoT space generation over generation rather than highly disruptive hardware shifts. We expect to continue to see incremental SoC upgrades to keep up with the algorithm and optimization challenges.”

When adding AI to IoT design, it is important to get the most performance out of the least hardware.

“Using flexible hardware such as eFPGAs, which can be reconfigured for optimal use of the hardware with each different inference operator, is ideal,” noted Geoff Tate, CEO of Flex Logix. “Each application is different, and having the right amount of AI by licensing the right number of tiles will optimize performance and power efficiency at the same time.”

Security is becoming a key feature
The list of malware attacks on the IoT is almost limitless. In a new report entitled “Internet-Connected Technologies Can Improve Services, but Face Risks of Cyberattacks,” the U.S. Government Accountability Office pointed to a long list of attack types, including botnets, data breaches, distributed denial of service (DDoS), malware, man-in-the-middle, ransomware, structured query language (SQL) injection, and zero-day exploits. And that is just the beginning, as new cyberattacks keep cropping up.

A smart sprinkler system in a large industrial complex can detect rain and shut off valves to save water. If the system is hacked, too much or too little water may be applied, resulting in thirsty grass with brown patches or an excessive water bill. But hackers also may be able to use that sprinkler system to attack the server if the gateway is not secured. In a smart city application, an IoT hack may cause malfunctioning traffic lights or ambulance rerouting, either of which may have serious or even deadly consequences.

It has been demonstrated that hackers can use a $170 gadget such as the Flipper Zero to control traffic lights, turning them green. The potential damage from controlling traffic lights goes far beyond that of a sprinkler hack. Securing an IoT design is analogous to securing a front door. How many deadbolts do you want to put on the door? In IoT designs, how many layers of protection do you want to implement?

Many of the common IoT standards, including Matter, Wi-SUN, LoRaWAN, and even 5G, already have security built into their specifications. These are sufficient for most IoT applications. To protect against highly sophisticated hackers, additional layers of security may need to be deployed.

Maarten Bron, managing director at Riscure, observed that IoT attacks have been on the rise and that additional efforts may be required to increase cyber protection. “While an IoT application can be as simple as a voice-controlled smart light, it also can be as complicated as a monitoring system used in preventive maintenance in a smart factory. In a more complicated IoT system, you want to increase cybersecurity by subjecting the designs to malware attacks in the lab to see if the designs can guard against such attacks before real deployments.”

Given that IoT designs range from very simple to highly sophisticated, with edge computing, AI, and analytics capabilities, how much security is needed depends on the use case. For applications with cost constraints, basic security, including secure boot, debug protection, and secure firmware updates, is sufficient. “However, for sophisticated designs needing serious cybersecurity protection, additional security hardware and software countermeasures would be needed,” noted Erik Wood, senior director of IoT product security for the IoT, Compute and Wireless business in Connected Secure Systems at Infineon Technologies. “This would add processing time and power consumption. Developers will need to prioritize design choices, and it is important to optimize the security design to increase energy efficiency.”

While adding security and minimizing cost is a constant tradeoff, there may be a workaround.

“Low-end IoT devices have the least amount of security because this is primarily a cost-driven decision,” said Bart Stevens, senior director of product marketing for security IP at Rambus. “Though adding security to such devices protects customers, it ultimately adds development cost and detracts from the performance and power efficiency of the device itself. However, when using dedicated cryptographic accelerators, these last two drawbacks are eliminated. Compared to the CPU handling cryptographic computations itself, dedicated cryptographic hardware implementations use 90% less energy for such tasks, while also drastically improving security with no reduction in compute performance.”
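
What such an offload can look like at the firmware level is sketched below. The driver interface is hypothetical (hw_aes_* and cpu_enter_light_sleep are illustrative names, not a real SDK): the CPU hands bulk encryption to a DMA-fed AES block and sleeps until the completion interrupt, which is where the energy saving comes from.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical driver interface for an AES hardware block -- the
 * function names are illustrative, not from any real SDK. */
extern int  hw_aes_init(const uint8_t key[16]);
extern int  hw_aes_encrypt_dma(const uint8_t *in, uint8_t *out, size_t len);
extern void cpu_enter_light_sleep(void);   /* wakes on completion IRQ */

/* Offload bulk encryption: the CPU programs the accelerator, then
 * sleeps while the DMA-driven AES block does the work. */
int encrypt_payload(const uint8_t key[16],
                    const uint8_t *plain, uint8_t *cipher, size_t len)
{
    if (hw_aes_init(key) != 0)
        return -1;

    int rc = hw_aes_encrypt_dma(plain, cipher, len);
    if (rc == 0)
        cpu_enter_light_sleep();   /* idle until the hardware finishes */
    return rc;
}
```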

Conclusion
IoT chips and technologies will keep evolving, becoming more energy-efficient and more scalable for AI, with better security and integration. Homing in on the right optimizations will continue to be a challenge, and engineering teams will need to focus on use cases and on how to apply new technologies and approaches to achieve their design requirements.


