Transitioning States

Understanding the role of finite state machines, and how to verify them, is fundamental to managing power.

By Ann Steffora Mutschler
While the concept of finite state machines is mature, understanding their role in design, the transitions between states, and how to verify them is fundamental to managing power in today’s large SoCs.

In essence, a finite state machine is a set of inputs, outputs, and state bits that describes the operation of a system.

“Transitions happen from one state to another to describe a change of how computation is going to take place, and the reason it’s popular is because it’s kind of the building block for designing the control or interactions between multiple other blocks,” said Maher Mneimneh, product director for Atrenta’s advanced Lint and formal technology. “For example, the way you’d interact with memory, the way you’d interact with the processor—this requires some type of control, and finite state machines are the best way we know of to systematically design those complex digital circuits.”
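As a rough illustration of that building-block role, the sketch below models a trivial memory-read handshake as a transition table in Python. The states, events, and protocol are hypothetical, chosen only to show the structure, not taken from any real interface.

```python
# A minimal FSM sketch: a controller sequencing a simple (hypothetical)
# memory read. Each entry maps (current state, event) to the next state.
TRANSITIONS = {
    ("IDLE", "req"):  "ADDR",   # a request arrives; drive the address
    ("ADDR", "ack"):  "DATA",   # memory acknowledges; capture the data
    ("DATA", "done"): "IDLE",   # transfer complete; return to idle
}

def step(state, event):
    """Return the next state; stay put on events this state does not handle."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ("req", "ack", "done"):
    state = step(state, event)
    print(event, "->", state)
```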

Arturo Salz, a Synopsys scientist, pointed out that today’s designs are essentially the same designs of 5 or 10 years ago, now being retrofitted for low power. “We can’t make the memory use less power. We can’t make the processor itself use less power. But what we can do is shut them down when we don’t need them, or clock them at a lower frequency when they are not critical. There’s the whole notion of quality of service per unit of power.”

Given the complexity of designs, a million cycles can elapse inside the chip between a power-down event, the power controller’s response, and the subsequent power-up. “If you consider this as a state machine, you’re not going to be able to do coverage analysis and validate these scenarios—we need some abstractions for that. That’s why UPF has added constructs for power state entry, and hopefully we can add tools to validate them,” Salz said.
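One way to picture the abstraction Salz describes is to collapse a simulation trace into a handful of abstract power states and measure coverage over the legal transitions rather than over millions of raw cycles. The sketch below assumes an illustrative three-state model (ON, RETENTION, OFF); it is not UPF syntax, just the counting idea.

```python
# Hedged sketch: transition coverage over abstract power states.
# The state names and legal-transition set are invented for illustration.
LEGAL = {("ON", "RETENTION"), ("RETENTION", "ON"),
         ("ON", "OFF"), ("OFF", "ON")}

def transition_coverage(trace):
    """trace: sequence of abstract power states observed in simulation."""
    seen = {(a, b) for a, b in zip(trace, trace[1:]) if a != b}
    illegal = seen - LEGAL          # transitions the power intent forbids
    return len(seen & LEGAL) / len(LEGAL), illegal

cov, illegal = transition_coverage(["ON", "RETENTION", "ON", "OFF", "ON"])
print(f"legal-transition coverage: {cov:.0%}, illegal transitions: {illegal}")
```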

There are multiple design styles for FSMs, some of which are strongly recommended for low power because they achieve it with less effort from the designer. The key point is the encoding that is used.

“The idea is you can encode your state or your computation in various ways. For example, you can do ‘one-hot’ encoding, which is very natural. This is the default way a designer would think about implementing because it’s intuitive, easy to understand. But on the other hand, it’s not good for low power because it results in a lot of transitions in the states and by definition a lot of transitions imply high power,” Mneimneh explained.

He said the recommended style for low-power FSMs is Gray encoding, because it guarantees that the fewest possible bits change when transitioning between states or computations. “If we guarantee a minimum number of changes in the bits, we guarantee lower power…If designers just go about writing FSMs the way they are used to without thinking about low power, then they’re most probably not going to meet their power constraints.”
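The difference between the two encodings is easy to quantify. The Python sketch below steps an 8-state FSM through its states in order and counts how many flip-flops toggle under each encoding. Toggle count is only a first-order proxy for dynamic power, and the one-bit-per-transition property of Gray code holds for this kind of sequential stepping, not for arbitrary transition graphs.

```python
def gray(n):
    return n ^ (n >> 1)              # standard binary-reflected Gray code

def toggles(a, b):
    return bin(a ^ b).count("1")     # bits that switch on the transition

seq = list(range(8)) + [0]           # an 8-state FSM cycling through its states
gray_flips = sum(toggles(gray(a), gray(b)) for a, b in zip(seq, seq[1:]))
hot_flips  = sum(toggles(1 << a, 1 << b) for a, b in zip(seq, seq[1:]))
print(f"Gray encoding:    {gray_flips} bit flips per loop (3 flops)")
print(f"One-hot encoding: {hot_flips} bit flips per loop (8 flops)")
```

Gray encoding flips exactly one bit per transition (8 over the loop) and needs only 3 flops, while one-hot flips two bits per transition (16 over the loop) and needs one flop per state.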

Parallel to this, he said, “While you’re writing those FSMs, you can also have a lot of dead transitions in your designs which is a little bit outside the FSM area but is as critical. Like we have dead states in FSM, you can have dead code somewhere else outside an FSM and it’s as important to also identify those because whatever effects they have on an FSM, they can have on other parts of your design. Being FSM certified is very good but also you have to make sure this is done for other parts of your chip – making sure there is no dead code.”
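Identifying dead states is, at bottom, a reachability problem. The sketch below runs a breadth-first search from the reset state of a hypothetical transition graph; any state never reached is dead. Lint and formal tools perform the equivalent analysis directly on RTL.

```python
from collections import deque

def dead_states(transitions, reset):
    """transitions: {state: iterable of successor states}. Returns the
    states that can never be reached from reset (i.e., dead states)."""
    reached, frontier = {reset}, deque([reset])
    while frontier:
        for nxt in transitions.get(frontier.popleft(), ()):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return set(transitions) - reached

# Hypothetical FSM: DEBUG has an exit but no entry, so it is dead.
fsm = {"IDLE": ["RUN"], "RUN": ["IDLE"], "DEBUG": ["IDLE"]}
print(dead_states(fsm, "IDLE"))   # -> {'DEBUG'}
```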

Moving to a higher level of abstraction does seem beneficial, Mneimneh said, but he acknowledged that it brings its own complexities and challenges in verifying that the higher-level model really matches the lower-level one.

Mike Meyer, a Cadence fellow, said that in many situations, rather than taking a C or C++ model and then hand-building the FSM and datapath that implement it, engineering teams are looking to use high-level synthesis (HLS) so they don’t have to spend the time developing the FSM. Especially for deeply pipelined designs, HLS can be a tremendous timesaver because it can be used to experiment with different microarchitectures for an algorithm. Low-power designers also benefit, because HLS can be used to explore power/performance tradeoffs, he said.

“Do you want 10 stages in your pipeline loop, or 12? That’s something people end up being cautious about when they’re designing at the RTL. You do some experiments to see what you can hit, but a lot of margin gets left in, because if you have to change it, you want to minimize how much you have to go back, tweak your state machine, and check whether you broke the associated datapath,” he explained. “It’s changing the level of abstraction, and in a lot of cases simplifying it to the point where it’s easier for people to understand.”
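A first-order model makes the “10 stages or 12” question concrete. In the sketch below, all the numbers are assumptions invented for illustration: splitting a combinational path of delay D across S stages shortens the achievable clock period to roughly D/S plus per-stage register overhead, while the pipeline-register count, a rough proxy for clock and register power, grows linearly with S.

```python
# Illustrative pipeline-depth exploration; every constant here is assumed.
D = 10.0              # total combinational path delay, ns (assumed)
REG_OVERHEAD = 0.3    # per-stage register setup/clk-to-q overhead, ns (assumed)
REGS_PER_STAGE = 64   # pipeline flops added per stage (assumed)

for stages in (8, 10, 12):
    period = D / stages + REG_OVERHEAD   # achievable clock period
    flops = stages * REGS_PER_STAGE      # register cost, a power proxy
    print(f"{stages} stages: {period:.2f} ns period, {flops} pipeline flops")
```

Even this toy model shows the tradeoff Meyer points to: going from 10 to 12 stages buys a modest period improvement at the cost of more registers, and HLS makes such sweeps cheap to run compared with reworking a hand-coded state machine.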

Meyer asserted that the state of high level synthesis is rapidly advancing. “It’s not to say that we’ve got all the answers right now, but I think there’s been tremendous progress made in the last several years and it is something that’s enabling people to go this route and cut their design time by several months in the process. It is creating some interesting issues from a verification perspective but the fact that you can do so much of the verification at a higher level I think is one of the big opportunities here.”


