Winners And Losers At The Edge

No company owns this market yet, and no one will for a very long time.

The edge is a vast collection of niches tied to narrow vertical markets, and it is likely to stay that way for years to come. This is both good and bad for semiconductor companies, depending upon where they sit in the ecosystem and their ability to adapt to a constantly shifting landscape.

Some segments will see continued or new growth, including EDA, manufacturing equipment, IP, security and data analytics. Others will likely see their market share erode as the cost of designing advanced-node chips skyrockets and the end markets beneath them continue to fragment. So rather than selling hundreds of millions or even billions of units to turn a profit, they will have to become far more nimble and compete for smaller volumes at the edge, where, so far, there are no clear winners.

This already is causing churn across multiple markets. Among the examples:

  • The biggest prizes for chipmakers have been designs for servers, smartphones and, increasingly, automotive applications. But systems companies such as Google, Facebook and Apple, and giant automotive OEMs such as Volkswagen, Daimler and BMW, are now designing their own chips to take advantage of their proprietary AI/ML/DL algorithms or software. That leaves standalone chipmakers vying for accelerator and control logic designs, which carry significantly lower average selling prices (ASPs).
  • Edge markets are becoming more narrowly focused, particularly as intelligence is added into devices to provide specific solutions. To maximize performance and power efficiency, hardware needs to be co-designed with the software, which makes it difficult to develop one extremely complex SoC that plays across multiple markets.
  • The rollout of the edge coincides with the slowdown in Moore’s Law and the rising cost of developing highly customized SoCs. This makes development of base platforms that work across multiple segments much more important, but it also requires chiplets and other IP developed by multiple vendors. That tends to dilute profits and change the business model.

Taken as a whole, the edge represents an evolutionary inflection point for the chip industry, driven by an explosion in data, the adoption of AI everywhere and the high cost — monetarily and from a memory/bandwidth/power perspective — of sending all of that data to the cloud for processing. For smaller companies struggling to get a toehold, this opens the door to new opportunities. But those market windows will open and close more quickly than in the past, and the competition, price pressures and time-to-market demands are expected to be intense.

The chip market breaks down into four general areas: cloud, edge computing, edge devices, and on-device accelerators. Each of these is very different, even though all of them are connected by the flow of data.

“The IP selections our customers are making are vastly different,” said Ron Lowman, strategic marketing manager for IP at Synopsys. “We have AI accelerators for cloud, for edge computing, for edge devices and for on-device acceleration, which may be a mobile device with AI added to its mobile apps processor. The same is happening in automotive. It’s an application processor that can do AI. The IP selections are different for those. The AI accelerators are using different innovations just to accelerate those.”

Each of these has very different concerns. “They’re completely different markets,” said Lowman. “The automotive companies are trying to do their own neural networks. Those focused on voice are trying to do RNNs and LPSNs (low-power sensor nodes). The edge device companies are playing around with more advanced technology, like spiking neural networks that are highly compressed and act more like the brain. Camera developers are doing face recognition, and that’s good enough so they just want to reduce the cost. A license plate reader is pretty simple now. It’s about power consumption and cost, rather than trying to improve the algorithms.”

Defining the edge
Putting a single label on all of these developments may be impossible, particularly because the edge is still taking shape and definitions continue to shift. But in general, the edge builds on four earlier waves of computing.

“The first wave was the personal computer, where the focus was on computational performance,” said Zeev Collin, vice president of communications products at Adesto Technologies. “There were millions of units with big, expensive processors in them. The second wave was connectivity, and that opened up an order of magnitude of sales with new ISAs. Then we had the mobility revolution, and that added another two orders of magnitude in the number of units and, once again, miniaturization and cost pressure on the device. The fourth wave, with the IoT, will add tens of billions of devices, but it’s all about a combination of size, cost and power. Unlike the previous wave, the focus is on how much data can be streamed and how far. If you don’t need that much data, you don’t have to push it very far, but it has to be available with low latency. It’s very different traffic problems.”

The edge is the fifth wave. Simon Segars, CEO of Arm, pointed to the convergence of AI, 5G and the IoT as the defining technologies in this new world order. Just building tens of billions of inexpensive devices and sending everything to the cloud doesn’t work in many cases. Moving large amounts of data adds too much latency for many applications, even with the fastest communications and processing infrastructure, and it raises privacy concerns about sharing personal data.

That, in turn, has prompted a rush to do more processing closer to the data source. But achieving the kinds of performance improvements required, often on a battery or within a limited power budget, requires a fundamental rethinking of the entire design process. And because data now resides on these devices, where it also must be cleaned and structured, they require more security than simple IoT devices.


Fig. 1: A middle ground, but one that is still being defined and developed. Source: Rambus

Challenges in design
One of the big initial challenges for semiconductor companies competing in this space will be convincing potential customers that their solution is better than those of competitors. Typically that requires apples-to-apples performance and power comparisons, but metrics only work when the same software can be run across different systems. For example, to determine the speed of a PC, the same compute-intensive applications are used. That works for comparing GPUs and MCUs and FPGAs, as well. But in highly targeted applications, use cases may be radically different, and metrics need to be customized to those use cases.

“Picojoules per MAC just says how well you’re doing a MAC computation,” said Kris Ardis, executive director at Maxim Integrated. “There’s much more going on. A lot of the power is spent moving data from place to place. That really supports an architecture where you don’t move things around but operate on things where they are.”
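To put rough numbers on that point, the back-of-the-envelope sketch below compares compute energy with data-movement energy for a small hypothetical workload. The per-operation costs and the workload size are generic, order-of-magnitude assumptions for illustration only, not figures from Maxim or any other vendor.

```c
/* Back-of-the-envelope comparison of compute vs. data-movement energy
 * for one inference pass. All constants are illustrative assumptions,
 * not vendor-published figures. Compile: cc energy.c -o energy */
#include <stdio.h>

int main(void) {
    /* Assumed per-operation energy costs (order of magnitude only). */
    const double pj_per_mac       = 1.0;    /* 8-bit MAC, on-chip   */
    const double pj_per_sram_byte = 10.0;   /* on-chip SRAM access  */
    const double pj_per_dram_byte = 640.0;  /* off-chip DRAM access */

    /* Hypothetical small CNN layer: 300k MACs touching 50 kB of
     * weights and activations. */
    const double macs  = 300e3;
    const double bytes = 50e3;

    double compute_pj = macs  * pj_per_mac;
    double sram_pj    = bytes * pj_per_sram_byte;  /* data kept on-chip      */
    double dram_pj    = bytes * pj_per_dram_byte;  /* data fetched off-chip  */

    printf("Compute energy:        %8.1f nJ\n", compute_pj / 1e3);
    printf("Data movement (SRAM):  %8.1f nJ\n", sram_pj    / 1e3);
    printf("Data movement (DRAM):  %8.1f nJ\n", dram_pj    / 1e3);
    return 0;
}
```

Under these assumptions the arithmetic itself costs a few hundred nanojoules, while shuttling the same data off-chip costs two orders of magnitude more, which is the argument for operating on data where it sits.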

Case in point: Maxim Integrated’s convolutional neural network accelerator, which was architected specifically around that principle and built in a 40nm process to keep costs down.

“It’s designed from the ground up to be a CNN accelerator, as opposed to what we see in the market, where there are neural assists to a microcontroller — or where maybe you do a little more parallel math, but you’re still fetching weights and data and storing your intermediate results,” said Ardis. “This is a ground-up architecture. While the silicon we’re building contains an Arm Cortex M4 with flash and SRAM, those are there mostly for system management. Most of the chip is this huge peripheral: the neural network accelerator. So you boot up, you copy your weights and other configuration out of flash, and then you put that into the weight memory that’s embedded in the neural network accelerator.”
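The sketch below shows the general shape of that boot-time sequence (copy weights out of flash into dedicated weight memory, then let the accelerator run), using hypothetical register addresses and helper names rather than any actual vendor SDK.

```c
/* Minimal sketch of the boot-time flow described above: copy CNN
 * weights from flash into the accelerator's embedded weight memory,
 * then kick off inference. Addresses, registers and symbols are
 * hypothetical placeholders, not a real device map. */
#include <stdint.h>
#include <string.h>

#define CNN_WEIGHT_MEM_BASE  ((volatile uint8_t *)0x50180000u)  /* assumed */
#define CNN_CTRL_REG         ((volatile uint32_t *)0x50100000u) /* assumed */
#define CNN_CTRL_START       (1u << 0)
#define CNN_CTRL_DONE        (1u << 1)

extern const uint8_t  weights_in_flash[];  /* emitted by the NN tool flow */
extern const uint32_t weights_size;

void cnn_boot_load(void)
{
    /* One-time copy: flash -> embedded weight SRAM. After this the
     * host MCU core mostly does system management, per the quote. */
    memcpy((void *)CNN_WEIGHT_MEM_BASE, weights_in_flash, weights_size);
}

void cnn_run(void)
{
    /* Start the accelerator and wait, instead of fetching weights and
     * storing intermediate results per layer from the MCU side. */
    *CNN_CTRL_REG = CNN_CTRL_START;
    while (!(*CNN_CTRL_REG & CNN_CTRL_DONE))
        ;  /* or sleep until an interrupt fires */
}
```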

The result is orders of magnitude better performance and lower power than a general-purpose MCU, but the use case is narrower. That’s a big tradeoff, and it represents a different way of approaching edge design. CNNs are used primarily for image and video recognition and natural language processing. But even that is much broader than some of the other applications in the edge, and this is where the economics of designing chips for the edge start becoming challenging — and potentially much more interesting from a design standpoint.


Fig. 2: ML applications such as object identification benefit from edge processors. Source: Maxim Integrated

“If you look at what you have to do to optimize a CUDA implementation for a graphics accelerator, a lot of that involves increasing the throughput to make it faster,” said Raik Brinkmann, CEO of OneSpin Solutions. “This is very similar to what you do to make architectural changes to hardware design. So, on an algorithmic level, what if I change the order in which I compute things? What if I had a way of interleaving operations? The typical things that you do when you parallelize and map things to hardware, people already do because GPUs are big and inherently parallel.”
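A simple software analogy of that kind of restructuring is shown below: two functionally identical matrix multiplies, where the second merely reorders and tiles the loops so data is reused while it is still in a local buffer or cache. The same transformation, applied at the architectural level, is what maps well onto parallel hardware. The code is a generic illustration, not tied to any of the tools or flows mentioned here.

```c
/* Two functionally identical matrix multiplies (C = A * B, NxN,
 * row-major). The second only changes the order of computation by
 * tiling the loops, so each block of B is reused from fast local
 * storage instead of being refetched for every output element. */
#define N    256
#define TILE 32   /* N must be a multiple of TILE */

void matmul_naive(const float *A, const float *B, float *C)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            float acc = 0.0f;
            for (int k = 0; k < N; k++)
                acc += A[i * N + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
}

void matmul_tiled(const float *A, const float *B, float *C)
{
    for (int i = 0; i < N * N; i++)
        C[i] = 0.0f;
    for (int ii = 0; ii < N; ii += TILE)
        for (int kk = 0; kk < N; kk += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                /* Small, independent tiles also map naturally onto
                 * parallel execution units. */
                for (int i = ii; i < ii + TILE; i++)
                    for (int k = kk; k < kk + TILE; k++) {
                        float a = A[i * N + k];
                        for (int j = jj; j < jj + TILE; j++)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```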

But this also requires understanding of both hardware and software design tools, as well as a smattering of computer science. “The question is, ‘Will they learn all of this, or will the tools they use become so powerful that they will be able to do it for them?’ The hope for high-level synthesis is that you can actually take an algorithm and map it to the platform pretty efficiently without much thinking,” Brinkmann said. “But I don’t believe that’s going to happen anytime soon.”

This also has prompted much more focus on programmability, and it has boosted interest in customizable instruction set architectures, particularly RISC-V.

Embedded FPGAs are one such approach that is starting to gain traction after years of sitting on the sidelines. “We’re over the experimentation period,” said Geoff Tate, CEO of Flex Logix. “We’ve had some early pioneers. They’ve had successes. The value of embedded FPGAs was always clear. The question in the market was whether it would work. Now that people see it works, we’re getting growing adoption. We have more than a dozen working chips with customers, and we have dozens more in the process of design and in evaluation.”

The advantage of an eFPGA is that it can be incorporated into a chip at any process node to add programmability. Because it can be reprogrammed in the field, it helps both to customize an SoC and to keep an existing chip up to date as algorithms and protocols change.

RISC-V provides a different approach. “In the past, customers wanted to have embedded cores that would do just simple tasks,” said Zdenek Prikryl, CTO at Codasip. “Now they want to have a video-capable core or a much more complex core. You can see a lot of activity in the vectors in AI. If you look at the European Processor Initiative, you will find they use co-processors a lot for vector processing. RISC-V will be used in processors, and in a couple of years I believe you will start seeing Android ported to RISC-V.”

Arm likewise announced at TechCon last fall that it would open its MCU ISA, allowing licensees to build their own custom instructions, and it developed its own tools to ensure compatibility with TrustZone, the company’s secure infrastructure. The move goes a long way toward allowing Arm’s embedded processor architectures to be adapted to the edge.
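As a simplified illustration of what a custom instruction looks like from the software side, the C sketch below uses the GNU assembler’s .insn directive to emit an encoding in one of RISC-V’s reserved custom opcode spaces. The opcode and function fields are placeholders rather than an instruction from any shipping core, and a real design would also have to extend the compiler support, simulator and verification environment.

```c
/* Illustrative only: invoking a made-up R-type custom instruction in
 * RISC-V's custom-0 opcode space (0x0B) via the GNU assembler's .insn
 * directive. A real core would have to implement this exact encoding;
 * the funct3/funct7 values here are placeholders. Builds only with a
 * riscv gcc/binutils toolchain. */
#include <stdint.h>

static inline uint32_t custom_dotstep(uint32_t a, uint32_t b)
{
    uint32_t rd;
    __asm__ volatile (".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
                      : "=r"(rd)
                      : "r"(a), "r"(b));
    return rd;  /* whatever the custom datapath computes from a, b */
}
```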

Different security
Security plays an increasingly important role in edge designs, and security vendors have a potentially huge upside in this market. While the designs are highly vertically oriented, security needs to be implemented horizontally. That can include everything from anti-tamper technology to roots of trust and software/firmware security, as well as security services sold alongside those technologies.

“What’s driving a lot of this is that data movement is impacting system architectures and how we think about security,” said Steven Woo, fellow and distinguished inventor at Rambus. “It’s not feasible to drag all of that data to a data center, so you process it directly on your devices and ship a smaller amount back to the data center. So now you’re keeping the raw data closer to the end devices, and that adds interesting new requirements on security. If all you are doing is sending data, you can do things to obscure the data. But if your attack surface is greater — more doors or access points — then you need to think a lot more about the architecture.”

That also means that if the device gets compromised, a backup may not be sitting in the cloud, as it would be with “traditional” IoT devices. One option for handling this is to focus on resilience in addition to sealing off the perimeter and the data. If a device is compromised, it needs to be able to recover, particularly in edge applications such as automotive and robotics, where safety may be involved.
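One generic pattern for that kind of resilience, and not something any of the companies quoted here described in detail, is dual-image firmware with verified boot and fallback: if the active image fails authentication or its health check, the device reverts to a known-good image or drops into a minimal recovery mode. A sketch, with hypothetical platform hooks:

```c
/* Minimal sketch of a resilience pattern for edge devices: keep two
 * firmware slots, verify each one against a root of trust before
 * booting, and fall back if verification or the post-boot health
 * check fails. verify_signature(), boot_slot() and
 * health_check_failed() are hypothetical hooks, not a vendor API. */
#include <stdbool.h>

typedef enum { SLOT_A = 0, SLOT_B = 1 } slot_t;

extern bool verify_signature(slot_t slot);     /* root-of-trust check   */
extern bool health_check_failed(slot_t slot);  /* watchdog/attestation  */
extern void boot_slot(slot_t slot);            /* does not return       */

void secure_boot(void)
{
    const slot_t order[2] = { SLOT_A, SLOT_B };

    for (int i = 0; i < 2; i++) {
        slot_t s = order[i];
        if (!verify_signature(s))
            continue;               /* image tampered with or corrupt   */
        if (health_check_failed(s))
            continue;               /* previously booted and misbehaved */
        boot_slot(s);               /* hand over control                */
    }

    /* Neither image is trustworthy: stay in a minimal recovery loop
     * that only accepts an authenticated update. */
    for (;;)
        ;
}
```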

“If you look at the DoD, their approach is to have a secure network and a non-secure network, and those two are isolated from each other,” said Jason Oberg, CEO of Tortuga Logic. “That approach doesn’t work in modern systems because they’re too complex and intertwined and there are too many types of data. So you need to have some sort of sharing between things like context. You need a way of finding issues and adapting and responding appropriately. But you also want to build that infrastructure. You need to build walls, but if those walls are broken, you need a way of responding to that. There’s a way of assessing that risk. So ‘this’ wall may be broken, ‘this one’ is highly unlikely to be broken.”


Fig. 3: Security layers and steps. Source: Tortuga Logic

“Transparency is important,” Oberg said. “You want to get market feedback about how secure your system is. So you need to design a chip based upon basic security principles. You need a balance of, ‘I built it, but I’m also being transparent about what I built.’ Being open about that is really essential.”

In addition to security, automotive and other safety-critical applications add a couple of other key requirements, namely safety and reliability. All of these elements have been treated as separate concerns in the past, but in applications such as automotive, each has a bearing on the others.

“There is a lot of uniqueness and a lot of commonality in edge chips,” said Mike Borza, principal security technologist at Synopsys. “And there is a lot of commonality in AI. The trouble is that AI cuts both ways. It can be applied for good and for bad.”

Other issues
There are numerous other challenges, which is not uncommon with new markets. The problem with the edge is that it’s difficult to get critical mass to solve some of these issues because there is so much fragmentation.

“A lot of designs in automotive are highly configurable, and they’re configurable even on the fly based on the data they’re getting from sensors,” said Simon Rance, vice president of marketing at ClioSoft. “The data is going from those sensors back to processors. The sheer amount of data that’s running from the vehicle to the data center and back to the vehicle, all of that has to be traced. If something goes wrong, they’ve got to trace it and figure out what the root cause is. That’s where there’s a need to be filled.”

In addition, much of the data is unique, in part because each OEM is developing its own technology, and in part because of the enormous number of possible use cases and corner cases in automotive. “There needs to be more standardization in the automotive industry and a determination of what kind of data doesn’t need to be unique,” said Rance.

Anything that provides some level of commonality will help. “Everything is use-case based when designing the NoC,” said Kurt Shuler, vice president of marketing at Arteris IP. “You’ve got to understand what the use case is to be able to size up that NoC. There are two aspects of this. One is in the creation of that network on chip and the configuration of it, and what gets burned into the chip. The other step is, once you’ve created all the roads — they’re this long or this wide — that’s it. That determines quality of service. That’s where you have the metering lights on the on-ramp and construction and stoplights. That’s the quality of service characteristics, and that’s dynamic.”

But if algorithms change, there need to be workarounds. “Our chip architects have to balance that,” said Shuler. “If you’re always running three or five use cases and nothing will change much, then they will tweak that configuration of the NoC very finely. That chip will be able to do that and nothing else. If they change that use case, it’s not going to perform well. You’ve really locked that performance scheme. That’s one extreme. Another extreme is someone who’s doing a general-purpose AI chip. They’re going to put extra everything, extra bit width, so you can put in control and status registers, change the priorities of things. In the first case, it’s static. In most cases, except for things that are deeply embedded and will only do a few functions, the quality of service capabilities are super-important. Especially for merchant chip vendors, they’re designing a platform that may be going into multiple markets.”
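The split Shuler describes can be pictured as a small configuration sketch: the topology and link widths are frozen at design time, while per-initiator priorities and bandwidth limits sit in control and status registers that can be retuned whenever the use case changes. The register map below is hypothetical and not taken from any particular interconnect IP.

```c
/* Sketch of the static-vs-dynamic split: the NoC topology and link
 * widths are fixed at tape-out, while per-initiator QoS settings live
 * in control/status registers and can be reprogrammed in the field.
 * Addresses, fields and initiator names are hypothetical. */
#include <stdint.h>

/* Fixed at design time ("the roads"). */
#define NOC_LINK_WIDTH_BITS  128
#define NOC_NUM_INITIATORS   4

/* Assumed CSR block for per-initiator QoS ("the metering lights"). */
#define NOC_QOS_BASE         0x40020000u
#define NOC_QOS_PRIO(i)      ((volatile uint32_t *)(NOC_QOS_BASE + 0x10u * (i)))
#define NOC_QOS_BW_LIMIT(i)  ((volatile uint32_t *)(NOC_QOS_BASE + 0x10u * (i) + 4u))

enum { INIT_CPU = 0, INIT_NPU = 1, INIT_ISP = 2, INIT_DMA = 3 };

void noc_set_use_case_vision(void)
{
    /* Camera-plus-NPU use case: favor ISP and NPU traffic, cap the rest. */
    *NOC_QOS_PRIO(INIT_ISP) = 7;  *NOC_QOS_BW_LIMIT(INIT_ISP) = 0;    /* no cap */
    *NOC_QOS_PRIO(INIT_NPU) = 6;  *NOC_QOS_BW_LIMIT(INIT_NPU) = 0;
    *NOC_QOS_PRIO(INIT_CPU) = 3;  *NOC_QOS_BW_LIMIT(INIT_CPU) = 200;  /* MB/s  */
    *NOC_QOS_PRIO(INIT_DMA) = 1;  *NOC_QOS_BW_LIMIT(INIT_DMA) = 100;
}
```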

Conclusion
The edge remains a largely untapped opportunity, driven by splintering end markets and the need to customize AI solutions for specific use cases. But it also is an area that will be difficult for any single player to dominate because the only way to effectively tap these opportunities is by cobbling together various pieces quickly and inexpensively.

This points to new business models that are flexible and low-cost, and that will require some adjustments in go-to-market strategies for chipmakers. But, at least so far, the landscape at the edge appears to be vast, if a bit fuzzy. Companies that miss an opportunity may be able to latch onto a new one fairly quickly. Nevertheless, this sector is likely to see a lot of turbulence for quite some time.

Related Material
Edge Computing Knowledge Center
Top stories, special reports, videos, white papers, and blogs on edge computing
Challenges In Building Smarter Systems
Experts at the Table: A look at the intelligence in devices today, and how that needs to evolve.
Conflicting Demands At The Edge
Cost, power and security clash with the need for intelligence, localized processing and customization.
Revving Up For Edge Computing
New approaches to managing and processing data emerge, along with standard ways to compare them.
Memory Issues For AI Edge Chips
In-memory computing becomes critical, but which memory and at what process node?


