The next wave of data analytics will be less about infrastructure and more about leveraging patterns in data.
Big data is undergoing some big changes. For years, the challenge was getting enough good data to create models for everything from Wall Street trends to traffic routing. But with an influx of data from billions of sensors and electronic transactions, data is no longer in short supply.
In fact, there is so much data pouring in that companies need to figure out what to do with it. That requires a different skill set, and the people being called in to get the ball rolling are economists.
For the past couple of decades, much of this was in the hands of people whose primary skill set was data science. They had to build these systems and figure out ways to mine and leverage data. The quants of Wall Street were known for piecing together market data that could model market fluctuations. And while there were some economics, finance and math experts in that mix, the real emphasis in that world was, and still is, applied software literacy. Building that infrastructure is now, more or less, a solved problem, even though quantitative analysis always will be necessary.
The next step builds on the infrastructure data scientists have created, and this is the real entry point for economists. Economic trends begin at a higher level of abstraction, which will be particularly important anywhere that AI is involved. Rather than developing models based upon a limited data set, economists tend to look at broad sets of data to systematically map out empirical relationships. In effect, they are searching for all possible patterns in the data, rather than trying to build a model around a few highly defined patterns. (There is a great article about the impact of economists in the Harvard Business Review.)
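As a rough sketch of what mapping empirical relationships can look like in practice, the example below scans a table for every strong pairwise correlation instead of fitting one pre-specified model. The column names, the 0.6 threshold and the data are hypothetical placeholders, not anyone's actual workflow.

```python
# A minimal sketch of a broad, model-free pattern scan: rank every pairwise
# relationship in a table instead of fitting one pre-specified model.
# Column names and the 0.6 threshold are hypothetical, for illustration only.
import numpy as np
import pandas as pd

def strong_relationships(df: pd.DataFrame, threshold: float = 0.6) -> pd.DataFrame:
    """List pairs of numeric columns whose absolute correlation exceeds the threshold."""
    corr = df.corr(numeric_only=True)
    # Keep only the upper triangle so each pair is reported once.
    mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
    pairs = corr.where(mask).stack().rename("correlation").reset_index()
    pairs.columns = ["variable_a", "variable_b", "correlation"]
    return pairs[pairs["correlation"].abs() > threshold].sort_values(
        "correlation", key=abs, ascending=False
    )

# Hypothetical example: one engineered relationship hidden among noise.
rng = np.random.default_rng(0)
temp = rng.normal(size=200)
df = pd.DataFrame({
    "sensor_temp": temp,
    "fab_yield": 0.8 * temp + rng.normal(scale=0.3, size=200),
    "order_volume": rng.normal(size=200),
})
print(strong_relationships(df))
```

The point is the ordering of the work: survey everything first, then decide which relationships deserve a real model.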
This is more of a top-down approach to data, rather than a bottom-up look at how to build the models, and it is particularly important in AI for a couple of reasons. First, it is a way of cross-checking for bias in training sets by comparing results in different applications or markets. Economists are very good at this kind of thing, which is why data-driven companies like Amazon, Google, Microsoft and Uber are hiring droves of graduates with PhDs in economics.
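To illustrate that cross-check in the simplest possible terms (this is a hedged sketch, not any particular company's method), the snippet below compares a model's positive-prediction rate across hypothetical market segments and surfaces the gaps, which is the kind of disparity that would prompt a closer look at the training data.

```python
# A minimal sketch of cross-checking for bias: compare a model's
# positive-prediction rate across market segments and flag the gaps.
# The "market" and "predicted_positive" columns are hypothetical.
import pandas as pd

def outcome_rates_by_segment(results: pd.DataFrame,
                             segment_col: str = "market",
                             outcome_col: str = "predicted_positive") -> pd.DataFrame:
    """Per-segment positive rates and their gap versus the overall rate."""
    overall = results[outcome_col].mean()
    rates = results.groupby(segment_col)[outcome_col].mean().rename("segment_rate").to_frame()
    rates["gap_vs_overall"] = rates["segment_rate"] - overall
    return rates.sort_values("gap_vs_overall")

# Hypothetical example: the same model scored in three markets.
results = pd.DataFrame({
    "market": ["US", "EU", "APAC"] * 4,
    "predicted_positive": [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
})
print(outcome_rates_by_segment(results))
```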
The second reason this is important is that economists tend to start from a neutral position, allowing the data to tell the story rather than trying to fit the data to a preconceived story line. So rather than focusing on churning out results faster using an existing set of data, economists will dig into that data to find the trends and the anomalies—and then they will go deeper to understand what drives both.
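One simple, hedged version of letting the data tell the story is to separate a series into a trend and the points that deviate sharply from it, without assuming in advance what the series should look like. The window size, threshold and synthetic data below are arbitrary placeholders.

```python
# A minimal sketch of finding trends and anomalies without a preconceived
# model: fit a rolling trend, then flag points that sit far from it.
# The window size and z-score threshold are arbitrary placeholders.
import numpy as np
import pandas as pd

def trend_and_anomalies(series: pd.Series, window: int = 30, z_threshold: float = 3.0):
    """Return the rolling-mean trend and a boolean mask of anomalous points."""
    trend = series.rolling(window, min_periods=1).mean()
    residual = series - trend
    z_scores = (residual - residual.mean()) / residual.std()
    return trend, z_scores.abs() > z_threshold

# Hypothetical example: a smooth signal with one injected spike.
rng = np.random.default_rng(1)
values = pd.Series(np.sin(np.linspace(0, 10, 200)) + rng.normal(scale=0.1, size=200))
values.iloc[120] += 5  # the kind of outlier worth digging into
trend, anomalies = trend_and_anomalies(values)
print(values[anomalies])
```

Finding the anomaly is the easy part; the economist's job is the follow-up question of what drives it.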
The challenge is no longer finding a way to collect the data. The focus is now on how to use the data more effectively, and that can include everything from broad trends to extremely narrow niches that can be tapped, leveraged or combined with other niches to create something entirely different.
This bodes well for the chip world, because all of this will require massive data-crunching engines and far more intelligence in sensors. It is a brand new way of using that processing power and data generation, and the impact will be significant over time, both in existing markets and in new ones that are still buried somewhere in that massive quantity of data.