
What Is ‘Digital’?

Just because logic shows something to be 100% right doesn’t mean it’s the correct answer.


I saw a LinkedIn article with this title a couple of weeks ago and was curious. Do we not know what digital is, and do we need to question it? When I read the first line I was very surprised and somewhat confused. Ved Sen, the author, said, “Despite working in the digital space for years now, I was quite stumped a few weeks ago when I was asked to define it.”

Why would digital be so difficult to define? I continued reading until I came to his definition. “Digital means: exploiting emerging technologies to create user-/customer-centric interfaces and data-driven business models, leading to more agile, responsive and competitive business models.”

Basically, he sees it as the pace at which technology is changing the business world and the software developments that make it accessible to a greater number of people. He equates digital with data and with the growing mobility of that data.

While the definition does nothing to help us in the semiconductor world, it does make one pause and reflect on the way in which what we create is viewed. As the old expression about the Americans and the English goes, “We are two countries divided by a common language.”

Within the semiconductor space, we see digital as the abstraction of a signal into one of two possible states, a ‘1’ or a ‘0’. That abstraction has been the foundation of logic design, synthesis, and much of the tool chain with which semiconductors are designed.
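To make that abstraction concrete, here is a minimal sketch in Python. The threshold voltages are purely illustrative assumptions; real values depend on the logic family and the supply.

```python
# Hypothetical input thresholds for a 1.0V logic family (illustrative only).
V_IL = 0.3   # at or below this, the signal is treated as logic '0'
V_IH = 0.7   # at or above this, the signal is treated as logic '1'

def to_logic(voltage: float) -> str:
    """Map an analog voltage onto the digital abstraction."""
    if voltage <= V_IL:
        return '0'
    if voltage >= V_IH:
        return '1'
    return 'X'  # the undefined region between the thresholds

print(to_logic(0.1), to_logic(0.5), to_logic(0.9))  # prints: 0 X 1
```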

We have to work a lot harder these days to keep that abstraction in place. There has always been that annoying transition between states, the no-man’s land where the signal is neither a one nor a zero. Logic design avoids it by making sure that all values are given time to settle before being sampled; any violation of that rule is considered a timing error.
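In the spirit of a static timing setup check, the toy sketch below (with made-up numbers) shows the rule in its simplest form: the data must settle before the clock edge samples it, or the flop may capture a value from that no-man’s land.

```python
# Hypothetical numbers for a single flop-to-flop path (illustrative only).
CLOCK_PERIOD_NS = 1.0
SETUP_TIME_NS = 0.10      # data must be stable this long before the clock edge

def meets_setup(data_arrival_ns: float) -> bool:
    """The flop only samples a clean '0' or '1' if the data settled in time."""
    return data_arrival_ns <= CLOCK_PERIOD_NS - SETUP_TIME_NS

print(meets_setup(0.85))   # True: the value has settled, the abstraction holds
print(meets_setup(0.95))   # False: sampled mid-transition -> timing error
```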

But the introduction of many low-power technologies has muddied things a lot more. What happens when the voltage supplied to a circuit is modified? The definition of the abstraction becomes variable, and level shifters have to try to keep some semblance of order. That becomes even more difficult when the voltages on both sides of the level shifter are being varied simultaneously.
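One rough way to see the problem, assuming for illustration that the thresholds scale with the supply: a voltage that is a solid ‘1’ in a low-voltage domain can land in the undefined region of a higher-voltage domain, which is exactly the gap a level shifter has to bridge.

```python
def to_logic(voltage: float, vdd: float) -> str:
    """Interpret a voltage against thresholds assumed to scale with the supply (30%/70% of VDD)."""
    if voltage <= 0.3 * vdd:
        return '0'
    if voltage >= 0.7 * vdd:
        return '1'
    return 'X'

v = 0.55                       # output driven high by a 0.6V domain
print(to_logic(v, vdd=0.6))    # '1' in its own domain
print(to_logic(v, vdd=1.0))    # 'X' when sampled by a 1.0V domain -> a level shifter is needed
```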

At the same time, the power drawn by an increasing array of devices makes it difficult to design a stable power supply. Voltage droop alters the operating characteristics of every device the supply feeds.

We get around these problems by guard-banding, by looking at statistical variability in the system, and by attempting to reject common-mode effects that would otherwise cause the industry to overdesign. It is all a balance between ensuring safety, increasing yield, decreasing risk, and minimizing cost.
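A hedged sketch of that trade-off, using entirely made-up numbers: Monte Carlo sampling of a path delay shows how adding guard-band to the clock period buys timing yield at the expense of performance.

```python
import random

random.seed(1)
NOMINAL_DELAY_NS = 1.00   # hypothetical critical-path delay
SIGMA_NS = 0.05           # assumed process/voltage/temperature variation

samples = [random.gauss(NOMINAL_DELAY_NS, SIGMA_NS) for _ in range(100_000)]

for guard_band in (0.00, 0.05, 0.10, 0.15):      # extra margin added to the clock period
    period = NOMINAL_DELAY_NS + guard_band
    yield_pct = 100 * sum(d <= period for d in samples) / len(samples)
    print(f"guard-band {guard_band:.2f} ns -> period {period:.2f} ns, timing yield ~{yield_pct:.1f}%")
```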

We have created a reliable abstraction for data that is enabling additional layers to be built on top of it, including the business models that allow that abstraction to be exploited.

One only has to wonder what would happen if we were to shift to an abstraction more like the one found in the human brain, one where we would speak of the probability that a signal has a given value, that it is 80% probable that it is a ‘1’. In such a paradigm you can rarely be 100% certain of knowing the answer. IBM’s Watson has shown that this is a viable way to reason about subjects far more complex than working out what someone who buys one product may also be interested in buying. Even Watson is built on the existing abstraction. But what if, just for a moment, we think about logic built upon probabilities, designed more like neural networks, able to come up with approximations very quickly, and striving for greater levels of confidence only when the confidence interval is not good enough?
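To make the idea concrete, here is my own toy sketch, not anything Watson or a real design flow does: treat each signal as the probability that it is a ‘1’, combine those probabilities through gates, and escalate to a more expensive, exact evaluation only when the result is too close to 50/50.

```python
def p_and(p_a: float, p_b: float) -> float:
    """Probability that the AND of two independent signals is '1'."""
    return p_a * p_b

def p_or(p_a: float, p_b: float) -> float:
    """Probability that the OR of two independent signals is '1'."""
    return p_a + p_b - p_a * p_b

def decide(p_one: float, confidence: float = 0.8) -> str:
    """Return '1' or '0' if confident enough, otherwise ask for an exact answer."""
    if p_one >= confidence:
        return '1'
    if p_one <= 1 - confidence:
        return '0'
    return 'refine'   # fall back to conventional, exact evaluation

a, b = 0.8, 0.9                  # each input is probably a '1'
print(decide(p_and(a, b)))       # 0.72 -> 'refine': not confident enough yet
print(decide(p_or(a, b)))        # 0.98 -> '1'
```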

We all know that those “shopping experts” rarely predict what we might want to buy in the future; they are usually better at telling us what we bought in the past. It is clear that logic with a 100% probability of being right often produces the wrong result. Could we do better by relaxing our definition of logic, and would we then be more likely to be right?



1 comment

Kev says:

Memristor technology is something that could be used for neural networks – programmable resistors with analog multipliers. Analog circuits are a lot faster than digital ones; usually less precise, but the imprecision doesn’t matter too much in systems with continuous feedback.
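[Editor’s note: to illustrate the commenter’s point with a simplified, idealized model that ignores device non-idealities, a memristor crossbar performs a multiply-accumulate in the analog domain, with each column current being the sum of input voltages times the programmed conductances.]

```python
# Idealized memristor crossbar: conductances G (siemens) act as the "weights",
# input voltages V drive the rows, and each column current I_j = sum_i V_i * G[i][j].
V = [0.2, 0.5, 0.1]                       # row voltages (inputs)
G = [[1e-3, 2e-3],                        # programmed conductances (weights)
     [0.5e-3, 1e-3],
     [2e-3, 0.1e-3]]

currents = [sum(v * G[i][j] for i, v in enumerate(V)) for j in range(len(G[0]))]
print(currents)   # column currents = analog dot products, imprecise but computed in one step
```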
