System Models Are Changing

Power management is the new 800-pound gorilla, and it’s having a huge effect on every aspect of the architecture and design.

By Pallab Chatterjee
Historically system-level modeling was based on making sure there were no timing crashes on the main data bus. After that it was multi-core conflict resolution, distributed memory routing and, most recently, verifying the correct core actually has access to the correct memory with the data that is relevant being available.

All of these areas are now subject to an additional constraint: power management. IP-level designers have been fairly lucky so far, in that the functionality of their blocks was relatively agnostic to data and mode. The system designer had to verify the blocks would work in those modes, but the IP designer could get away with merely demonstrating that behavior in the “non-normal” modes existed, rather than optimizing its characteristics.

Designs for the new technology directions do not have this luxury. The new technology is a battery-operated device that communicates over RF to a network-connected appliance, which forwards the information to a high-speed server with high-speed storage; the server interprets the data and then sends back some sort of visual/video/multimedia response indicating what the data meant or did. In this scenario, all of the applications need to be aware of how the data is managed across this series of steps to ensure interoperability. A recent example is the launch of the IPv6 addressing format, which expands the address space to 128 bits from IPv4’s 32 bits. Unless devices are tested and verified for compatibility, and memory structures are reconfigured for the 4X-wider address cycle and pattern, there may be issues transferring data to and from the compute processing machines.
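
As a rough illustration of that 4X change, the Python sketch below uses only the standard-library ipaddress module to contrast the two address widths; the table-sizing comment is our framing, not part of any spec.

```python
# A minimal sketch of the IPv4-to-IPv6 address-width change, using
# Python's standard ipaddress module. The addresses are documentation
# examples (RFC 5737 / RFC 3849), chosen purely for illustration.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")      # 32-bit address
v6 = ipaddress.ip_address("2001:db8::1")    # 128-bit address

print(v4.version, v4.max_prefixlen)   # 4 32
print(v6.version, v6.max_prefixlen)   # 6 128

# Any table keyed on addresses must now budget 16 bytes per entry
# instead of 4 -- the 4X memory cycle and pattern noted above.
print(len(v4.packed), len(v6.packed))  # 4 16
```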

Other examples include 802.3az, the Energy Efficient Ethernet (EEE) protocol. It allows multiple power-down and idle-state modes for the MAC and PHY of Ethernet connections from 1G to 100G. In system testing, all of these new modes, plus the mix-and-match (backward-compatibility) modes of these interfaces, have to be checked for timing. While finite in number, the variability and the combinations are large. The result is not only a cost impact to perform the simulations, but a real challenge in interpreting the output.
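
To see how quickly those combinations grow, here is a minimal enumeration sketch; the speed grades, link states and partner modes listed are illustrative stand-ins, not a literal reading of the 802.3az state machine.

```python
# Sketch: how EEE mode combinations multiply into simulation load.
# The lists below are simplified placeholders for illustration only.
from itertools import product

speeds = ["1G", "10G", "40G", "100G"]
link_states = ["active", "low_power_idle", "refresh", "wake"]
partner_modes = ["eee_capable", "legacy"]   # backward compatibility

# Each pairing of local/remote speed and state, times partner type.
combos = list(product(speeds, link_states, speeds, link_states, partner_modes))
print(f"{len(combos)} link/state pairings to check for timing")  # 512

# Even after pruning symmetric or illegal pairings, hundreds of timing
# simulations remain, each producing output that must be interpreted.
```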

There are two major impacts in this area. First, the PHY models are primarily analog. They are RF descriptions that do not translate one-to-one into logic simulation models, and hence run into capacity and interpretation limits. Second, at very high data rates, and in combination with the EEE spec, the blocks’ responses depend on the pattern of the data being sent.
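
One way teams work around the first problem is to substitute a behavioral stand-in for the analog PHY description. The toy model below captures only the second problem, the data-pattern dependency, by counting switching activity; the energy coefficients are made-up placeholders, not characterized values.

```python
# Sketch: a toy behavioral stand-in for an analog PHY model, capturing
# only the data-pattern dependency (switching activity) that a pure
# logic model misses. Coefficients are illustrative placeholders.
def transitions(data: bytes) -> int:
    """Count bit transitions across a serialized byte stream."""
    bits = "".join(f"{b:08b}" for b in data)
    return sum(a != b for a, b in zip(bits, bits[1:]))

E_PER_TRANSITION_PJ = 0.8   # assumed switching energy, not measured
E_STATIC_PJ_PER_BIT = 0.1   # assumed bias/leakage term, not measured

def phy_energy_pj(data: bytes) -> float:
    n_bits = len(data) * 8
    return transitions(data) * E_PER_TRANSITION_PJ + n_bits * E_STATIC_PJ_PER_BIT

print(phy_energy_pj(b"\x00" * 64))   # idle-like pattern: minimal switching
print(phy_energy_pj(b"\x55" * 64))   # 0101... pattern: worst-case switching
```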

At the 100G level, configuration options (4 lanes of 25G or 10 lanes of 10G) significantly affect which power-down and idle modes exist in the system. In a multi-lane configuration, moreover, the adjacency effects of noise, IR drop, ground bounce and other switching characteristics depend directly on the data passed through the load balancer. To validate the data, the switch simulation therefore has to include a complete model of a high-level system block inside the sub-system simulation. These are aspects not normally found in today’s system models and verification flows.
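
A back-of-the-envelope sketch of how the lane configuration reshapes that verification problem, assuming a simple linear lane arrangement; only the 4x25G versus 10x10G split comes from the text above, the combinatorics are ours.

```python
# Sketch: lane count drives both the adjacency checks (crosstalk, IR
# drop, ground bounce between neighbors) and the number of lane
# power-state combinations. Linear lane adjacency is an assumption.
CONFIGS = {"4x25G": 4, "10x10G": 10}

for name, lanes in CONFIGS.items():
    adjacent_pairs = lanes - 1    # neighboring-lane couplings to check
    idle_combos = 2 ** lanes      # each lane independently active or idle
    print(f"{name}: {adjacent_pairs} adjacency pairs, "
          f"{idle_combos} lane power-state combinations")
```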

These issues are only getting more acute as data patterns shift to application-specific content, which means the data stream used historically (512-byte blocks in a continuous stream) is no longer the standard. Advanced Format (AF) drives are forcing 4K blocks, while video data and app data (such as Java applets) are pushing both long data streams and very short (under 1KB) data sets through the network. According to Cisco, more than 60% of traffic is long-string video, making it the new default data traffic. System verification models need to be updated to reflect these realities before power optimization can be performed.
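
A sketch of what an updated stimulus generator might look like: the 60% video share echoes the Cisco figure above, while the remaining ratios and size ranges are illustrative assumptions, not measured traffic data.

```python
# Sketch: a traffic-mix stimulus generator reflecting the new profile
# (long video streams, 4K AF blocks, sub-1KB app payloads) instead of
# the historical uniform 512-byte stream. Shares and sizes are assumed.
import random

PROFILE = [
    ("video_stream",  0.60, lambda: random.randint(256_000, 4_000_000)),
    ("af_disk_block", 0.25, lambda: 4096),                      # AF 4K blocks
    ("app_payload",   0.15, lambda: random.randint(64, 1024)),  # sub-1KB sets
]

def next_transfer():
    """Draw one transfer (kind, size in bytes) from the weighted mix."""
    r, acc = random.random(), 0.0
    for kind, share, size in PROFILE:
        acc += share
        if r <= acc:
            return kind, size()
    return PROFILE[-1][0], PROFILE[-1][2]()

random.seed(0)
for _ in range(5):
    print(next_transfer())
```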


