Preparing For The IoT Data Tsunami

As the number of interconnected devices increases, technology challenges grow.


Engineering teams are facing a flood of data that will be generated by the IoT, both from the chip design side and from the infrastructure required to handle that data.

There are several factors that make this problem particularly difficult to deal with. First, there is no single data type, which means data has to be translated somehow into a usable format. Second, the amount of power required to handle all of this data will be huge if it is not intelligently managed. And third, the infrastructure required to handle this data is being built or upgraded at a much slower rate than the technology producing it.

“The amount of data is very large and you’ve got to do it in a power envelope that doesn’t suck the planet dry,” said Sundari Mitra, CEO and co-founder of NetSpeed Systems. “It is very heterogeneous. So, if you have a very general purpose computer, your power is going to explode. You cannot have power-hungry things chugging away when they are not in use just because they are general purpose.”

This is why engineering teams are finessing these designs so they are application-centric, she said.

Given that the communications infrastructure is the foundational underpinning of the IoT, it’s no surprise that one of the main topics at this year’s Mobile World Congress in Barcelona was 5G, and in particular how to layer 5G on a 4G infrastructure.

“5G is a portfolio of technology ideas to get bandwidth up and latency down,” said Chris Rowen, a Cadence fellow. “It requires you to use tightly coordinated signals across multiple towers. So with 4G, if you’re standing under a tower you get great reception, but if you stand at the edge of coverage for that tower, reception goes down by one or two orders of magnitude.”

Rowen explained that 5G, in contrast, coordinates signals across multiple towers. So rather than a signal being transferred from Tower A to Tower B as a person drives in that direction, the signal is split between the towers for maximum reception. “This is a mix of signals, not a binary handoff,” he said. “The infrastructure tends to be more programmable, with tighter integration between towers. But that requires significantly fatter pipes and tighter timing synchronization.”
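To make that contrast concrete, here is a toy numerical sketch in Python of a binary handoff versus a coordinated mix of signals from two towers. The path-loss model, exponent, and distances are made up purely for illustration and do not represent any actual 4G or 5G scheme.

```python
import math

def path_gain(distance_m, exponent=3.5):
    """Toy path-loss model: received power falls off with distance (illustrative only)."""
    return 1.0 / (distance_m ** exponent)

def hard_handoff(d_a, d_b):
    """Binary handoff sketch: the device listens to whichever tower is stronger."""
    return max(path_gain(d_a), path_gain(d_b))

def coordinated(d_a, d_b):
    """Coordinated sketch: signal amplitudes from both towers combine, then power is the square."""
    return (math.sqrt(path_gain(d_a)) + math.sqrt(path_gain(d_b))) ** 2

# A device near the cell edge, roughly equidistant from two towers.
print(hard_handoff(500, 520))   # best single-tower signal
print(coordinated(500, 520))    # combined signal, roughly 4x stronger at the edge
```

The point of the sketch is simply that at the cell edge, where either tower alone is weak, mixing the two signals rather than picking one recovers a noticeably stronger effective signal.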

As part of this, designers must understand how to handle designs that are heterogeneous in every respect, and be able to create them much more quickly. That could involve everything from different methodologies to advanced packaging approaches, with a heavy reliance on re-use and subsystems or platforms.

“You cannot design the mother of all chips that stays in play for a few months,” said NetSpeed’s Mitra. “If you look at a networking infrastructure design, people don’t change it for 10 or 15 years. [In that case,] you can put in a billion-dollar investment in designing a chip. In a mobile phone, that is not going to happen. You need to be able to make some very quick decisions, do a bunch of quick tape outs to satisfy what appears to be the next 6- to 12-month window. It is a very different way of thinking about semiconductors. It is quick-turn. It means automating as much of the process as we can to get these done correctly.”

Mike Gianfagna, VP of marketing at eSilicon, agreed that the need for massive compute and network infrastructure build-out is very real, and that servicing this demand requires very high throughput at very low power.

“Achieving these goals requires tight integration of substantial processing power with massive amounts of memory. You can’t get this done in a monolithic way anymore – the chip will simply be too big. Instead, 2.5D integration is fast becoming the way forward to address this kind of design challenge,” he said.

But handling the data that comes in from all the sensors and other IoT devices is a huge task, as there are edge devices collecting data at an unprecedented scale. “What happens in the datacenter? How do we deal with this incredible quantity of data from the perspective of the companies that are coming from a position of traditionally building sensors or MEMS or analog/mixed-signal chips that interface with the real world?” asked Jeff Miller, product marketing manager at Mentor Graphics. “The big challenges here are going from just being a sensor to being part of the Internet, which requires some data processing, so they need some digital capability, and they need some communication interface to get it up to the Internet. They are also dealing with power and size requirements.”
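As a simple illustration of that “sensor to Internet” step, the sketch below shows a node doing a small amount of local processing before shipping a compact payload upstream. It is a minimal sketch only: the gateway address, payload format, and averaging step are placeholders, not any particular product’s protocol.

```python
import json
import random
import socket
import time

GATEWAY = ("192.0.2.10", 9000)   # placeholder address (documentation range), not a real endpoint

def read_sensor():
    """Stand-in for an ADC read from a real sensor."""
    return 20.0 + random.random()

def collect_and_reduce(n=100):
    """Do a little digital processing locally: send one averaged reading instead of n raw samples."""
    samples = [read_sensor() for _ in range(n)]
    return {"t": time.time(), "avg": sum(samples) / len(samples)}

def publish(report):
    """Push the compact payload up to the Internet over a plain TCP connection."""
    with socket.create_connection(GATEWAY, timeout=5) as s:
        s.sendall(json.dumps(report).encode())

if __name__ == "__main__":
    publish(collect_and_reduce())
```

Even this toy example shows the shift Miller describes: the device needs some digital capability (the averaging), a communication interface (the socket), and a power and size budget that allows both.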

As far as integration goes, thanks to standards like High Bandwidth Memory (HBM), it is now possible to tightly integrate large amounts of memory with a processor using a silicon interposer, Gianfagna continued. “This achieves very high bandwidth at reasonable power. Understanding how to perform this integration is a new learning curve for design teams.”
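A rough back-of-the-envelope calculation shows why in-package memory changes the bandwidth picture. The figures below assume HBM2-class numbers (a 1024-bit interface per stack at roughly 2 Gb/s per pin) and are illustrative rather than taken from any specific design.

```python
# Back-of-the-envelope HBM bandwidth estimate (assumed HBM2-class figures).
bus_width_bits = 1024        # interface width per stack
pin_rate_gbps  = 2.0         # data rate per pin, Gb/s

stack_bw_gbytes = bus_width_bits * pin_rate_gbps / 8
print(f"~{stack_bw_gbytes:.0f} GB/s per stack")        # ~256 GB/s

# Several stacks on a silicon interposer next to the processor:
print(f"~{4 * stack_bw_gbytes:.0f} GB/s aggregate")    # ~1 TB/s with four stacks
```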

Still, costs are going up and must be managed somehow. NetSpeed’s Mitra said much of this is because people are moving to the advanced geometries. “For people who want to do the quick-turn projects that don’t have a shelf life, whether they can ever afford to go [to the most advanced nodes] and spin their chips I don’t know.”

One methodology does not fit all
There are two aspects to managing big data in these systems. One involves the amount of data generated by IoT devices once they are up and running. The second involves the volume of data used to create them in the first place. Each needs to be brought under control for these systems to proliferate.

On the design side, the general consensus is that methodologies need to be developed to tame the volume of data that needs to be analyzed and fixed, either through more re-use or through bigger, faster verification, debug and prototyping tools. What’s different now, compared with the first generation of SoCs, is that today’s IoT chips have non-standard interfaces, which puts more stress on verification to get it right.

“Internet of Things is kind of the new buzzword of our industry, which means everything and nothing, but SoCs being labeled for the IoT are really just the next generation of SoC,” said Jean-Marie Brunet, marketing director for Mentor Graphics’ Emulation Division. “We have worked with many customers on IoT, and what’s always interesting is that you look at the chips, you look at the block diagram and how the chip is interfacing to a system, and you notice there is one block that is proprietary. That is the problem with the Internet of Things. To accelerate the ramp-up is also accelerating the overall de-risking of the verification. For everything that is standard in an SoC, no problem. They’ll have either a physical device target, or a virtual model, or an RTL, or a transactor, and everything plugs and plays. But for the only block in the interface diagram that is not standard, then you have no such thing as an RTL that is standard. You have no such thing as a transactor. You don’t have a model. So you are stuck with a physical device connected. Where we see engineering teams struggling the most in their ramp up to production is the verification of things that are external to the Internet of Things device that is non-standard.”

In a very simple example, he explained that with an application that collects a set of data and captures it on a simplified operating system, that operating system is completely unique to that IoT device. This means there is no standard operating system driver, so the device must be plugged into an external setup with a set of memory devices and a simple CPU or MCU that boots the OS and runs the driver. While this is standard to that IoT company, it’s not standard to the market. The problem is how to verify this, so engineering teams spend most of their time verifying that interface to what is probably the only differentiating factor of their IoT solution. For this, a physical target or In-Circuit Emulation (ICE) is used, even though the problem with ICE is that, being physical, it is non-deterministic and very difficult to debug.
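The appeal of a software model or transactor for that proprietary block is that the rest of the verification environment can drive it deterministically instead of relying on a physical device. The Python sketch below is purely illustrative: the register names, the canned data value, and the driver routine are hypothetical stand-ins for whatever the real non-standard interface looks like.

```python
class SensorHubModel:
    """Transaction-level stand-in for a non-standard, proprietary block (hypothetical registers)."""
    def __init__(self):
        self.registers = {"CTRL": 0, "STATUS": 0, "DATA": 0}

    def write(self, reg, value):
        self.registers[reg] = value
        if reg == "CTRL" and value & 0x1:      # start bit set
            self.registers["DATA"] = 0x5A      # canned sample for the test
            self.registers["STATUS"] = 0x1     # data-ready flag

    def read(self, reg):
        return self.registers[reg]

def driver_read_sample(dev):
    """The 'driver' under test talks to the model exactly as it would to silicon."""
    dev.write("CTRL", 0x1)                     # kick off a conversion
    while (dev.read("STATUS") & 0x1) == 0:
        pass                                   # poll for data-ready
    return dev.read("DATA")

assert driver_read_sample(SensorHubModel()) == 0x5A
```

Because the model responds the same way every run, the proprietary interface and its driver can be debugged repeatably, which is exactly what a physical ICE connection makes difficult.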

And justifying the cost of an emulation system is no laughing matter, particularly in lower-volume chips such as those developed for the IoT. But the cost of getting something wrong is higher. In fact, almost every chipmaker has at least one emulator these days, and as the market shifts more toward vertical solutions, they are using a mix of hardware. While this is all about speeding up the design process, the underlying problem being solved is massive amounts of data processing.

“Emulators are important on the hardware side, but now that we have more software being bundled with chips we’re seeing a big push toward hybrid emulation,” said Tom De Schutter, director of product marketing for virtual prototyping at Synopsys. “We’re seeing FPGA prototyping of IP, where companies are moving to much bigger subsystems. Those are being prototyped individually, but you need software to validate them.”

There seems to be broad agreement on that from the EDA players. “There are a tough set of pain points and challenges that need to be addressed,” said Frank Schirrmeister, senior group director for product marketing of the System Development Suite at Cadence. “Overarching this is the connectedness of different engines. So you’ve got a verification environment where you move between different platforms and all of the challenges have to be worked through. The challenges span horizontally and vertically, from IP to subsystems. At the end of all this, if you switch on a chip it may not work and then you have to do on-chip debug.”

Looking ahead, the challenges to meet the processing requirements of the exploding mobile and cloud infrastructure are numerous, but these will be conquered as engineering teams discern their role in the IoT ecosystem. Whether it is in the communications infrastructure, the datacenter or the edge devices themselves, tomorrow’s SoCs will leverage the most appropriate process nodes, design techniques, and verification methodologies to meet the task at hand.


