Making sense of technology directions requires a different starting point.
Autonomous vehicles, 5G, a security breach at Marriott hotels, and AI may seem unrelated, but they’re all linked by one common thread: data.
Data creation, management and processing have always been a winning business formula. In 2004, IBM sold off its PC business on the assumption that it could still achieve significant growth by managing its customers’ data. The rapid rise of companies like Amazon, Google, Microsoft, Alibaba, Apple, Netflix and Facebook is all about understanding customer preferences based on data.
Underlying all of this is a subtle but significant shift in technology. Hardware and software are no longer the starting points for technology design. It’s now about data processing, flow and throughput, and there are three basic considerations for both semiconductor and system design.
Volume. The amount of data being generated across the industry is exploding. Trying to move all of that to a central processing facility is grossly inefficient. In fact, only China is looking to process data from autonomous vehicles centrally over 5G. The rest of the world is focused on doing most of that processing inside the vehicle, and some of it may even be done at the sensor or sensor hub level.
This has a big impact on chip design. Rather than focusing on very fast CPU cores, the emphasis is shifting to partitioning processing tasks and data throughput across a chip. This typically requires more and different types of processors, each with its own small memory to reduce wait time. The emphasis in these designs is on fusing data together in a customizable way after it is processed, and on keeping all of the processing elements busy so that none of them sits idle for long.
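As a rough software analogy, the sketch below splits a toy data stream across two specialized stages, each draining its own small buffer so that neither sits idle while the other works. The stage names, buffer sizes and “work” functions are illustrative assumptions, not anything drawn from a real chip design.

```python
# A toy two-stage pipeline: each stage has its own small queue (standing in
# for local memory) and processes items as soon as they arrive, so neither
# stage waits on the other for long. All names and sizes are assumptions.
from queue import Queue
from threading import Thread

SENTINEL = object()  # marks the end of the stream

def run_stage(work, in_q, out_q):
    # Drain this stage's local buffer until the stream ends.
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)
            break
        out_q.put(work(item))

sensor_q = Queue(maxsize=8)   # small per-stage buffers hide wait time
clean_q = Queue(maxsize=8)
fused_q = Queue()

stages = [
    Thread(target=run_stage, args=(lambda x: x * 0.5, sensor_q, clean_q)),    # "preprocess"
    Thread(target=run_stage, args=(lambda x: round(x, 1), clean_q, fused_q)), # "fuse/format"
]
for t in stages:
    t.start()

for sample in range(10):      # a toy sensor stream
    sensor_q.put(sample)
sensor_q.put(SENTINEL)

results = []
while (r := fused_q.get()) is not SENTINEL:
    results.append(r)
print(results)
```

A real design would fuse multiple sensor streams and balance many more processing elements, but the principle is the same: size each local buffer so that its processing element stays fed.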
Value. Not all data is good. Some of it is even wrong. And not all of it has the same value at the same time. For example, data indicating that a person is crossing the road in front of a car moving at 60 miles per hour is extremely valuable. So is data indicating that someone is making a purchase and may be interested in buying a related item. Meanwhile, broad-based information about customers’ purchasing habits still has significant value, but not with the same level of immediacy.
This determines what needs to be processed locally and what can be processed centrally. But in either case, the data needs to be cleaned up and prioritized, which is an enormously complex task in its own right. There are many different types of data, and specialized processing elements may be required when that data has high immediate value.
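A minimal sketch of that routing decision, assuming made-up thresholds, field names and example samples purely for illustration: data that is both high-value and time-critical is handled locally, while everything else is deferred to central processing.

```python
# Route each data sample by value and immediacy. The thresholds, field
# names and example samples are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Sample:
    kind: str         # what the data describes
    value: float      # how valuable it is right now (0.0 - 1.0)
    deadline_ms: int  # how soon it must be acted on

def route(s: Sample) -> str:
    if s.value >= 0.8 and s.deadline_ms <= 100:
        return "process locally"           # e.g. a braking decision in the car
    if s.value >= 0.5:
        return "send upstream promptly"    # e.g. a live purchase recommendation
    return "batch for central analytics"   # e.g. broad purchasing habits

for s in (Sample("pedestrian_ahead", 0.99, 50),
          Sample("related_item_offer", 0.6, 2_000),
          Sample("purchase_history", 0.3, 86_400_000)):
    print(f"{s.kind}: {route(s)}")
```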
Access. Not everyone needs access to all data all the time. Security breaches are a function of both value and volume of data. The more valuable the data, the more it needs to be secured, regardless of where it is processed, moved or stored. The greater the volume of data, the harder it is to manage and secure, and the more likely that breaches will occur.
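One way to picture the access side, under the assumption of a simple tiered policy (the tier names and ordering below are invented for illustration): a request is granted only when the requester’s clearance covers the sensitivity of the data.

```python
# Least-privilege check: deny unless the requester's clearance tier is at
# least as high as the data's sensitivity tier. Tiers are assumptions.
SENSITIVITY = {"public": 0, "internal": 1, "personal": 2, "financial": 3}

def can_access(clearance: str, data_class: str) -> bool:
    return SENSITIVITY[clearance] >= SENSITIVITY[data_class]

print(can_access("internal", "personal"))   # False - blocked
print(can_access("financial", "personal"))  # True  - allowed
```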
The recent breach at Marriott, which exposed the personal information of as many as 500 million people, is only one of a series of major breaches. Even Equifax, which was created to safeguard financial data, was hacked last year. That one exposed sensitive personal information of 143 million people, according to the U.S. Federal Trade Commission.
Until the entire data flow is secured, these kinds of breaches will continue. This isn’t just a chip problem. The hardware and software need to be secure, but none of that matters unless the data path itself is understood and locked down.
Data has become the new starting point for efficient, effective and secure computing. Increasingly this will mean less attention paid to the clock speed of a processor core or the amount of memory. It’s all about how to process certain types of data most efficiently, quickly and securely, and this will vary greatly depending on the type and value of that data at any point in time.