In a battle of robots, picking winners isn’t always so obvious.
Lately, my children and I have been closely following a new show on ABC called “BattleBots”. The concept is as simple as it is cool—a massive bulletproof arena where two remote-controlled robots battle it out until one is knocked out or time runs out (and a jury decides the winner). The battles are all about making physical contact with the other robot, either to deal damage directly or to push it into the arena’s hazards.
With names like Stinger, Captain Shrederator, Ice Wave and Tombstone, it is clear that the show is all about intimidation and carrying the biggest “stick”. That “stick” can be a rotating blade, a flipper or a flame thrower. No wonder we are on the edge of our seats whenever a battle is on. Or, as my wife says: men will always be boys.
During the battles, the perception of each robot’s abilities often changes quickly. It turns out that some robot designs that looked great on paper, and even after being built, simply don’t match up against other robots. Some cases are harder to predict. Who would have thought that a small pusher bot could beat a massive, reptile-like spinner bot? In other cases, total destruction is pretty predictable. It just isn’t a good idea to send a robot made mostly of plastic up against another robot with a massive blade spinning at more than 300 miles per hour.
Which raises the question: how can you build something and make sure it will actually perform the way you envisioned?
While less spectacular to watch, the financial damage from an underperforming smartphone can be bigger than anything the robots suffer in BattleBots. If, as a semiconductor company, the SoC you supply makes a new smartphone perform weakly in a benchmark like AnTuTu, it will be hard to make any money on that phone.
With so much at stake, how do you minimize your risks? Reducing your time-to-market gives you a better chance of being first to reach a given benchmark score with the latest processor and implementation technology. It also helps if you can optimize your design early on to perform as well as possible against the benchmark.
Virtual prototyping helps you do both. In a previous blog post I talked about the value of virtual prototyping for parallelizing software development alongside the hardware schedule and shifting left the entire product schedule. In the context of winning the battle to outperform key benchmarks against competing products, I would like to zoom in on the value of virtual prototyping for early power and performance optimization.
While spreadsheets are good for aggregating data, static spreadsheet calculations are not accurate enough to estimate performance and power or to drive design decisions. Dynamic simulation is needed. Traditional RTL simulation is too slow and lacks the configurability and visibility needed to analyze performance—and besides, the RTL may simply not be available yet. The resulting risks include over-design, under-design, cost increases, schedule delays and re-spins, any of which can leave you late with a new smartphone SoC or underperforming in a key benchmark.
With SystemC-based virtual prototypes you can capture, simulate and analyze the system-level performance and power of multicore systems early in the design cycle. This enables system designers to explore and optimize the hardware-software partitioning and the configuration of the SoC infrastructure—specifically the global interconnect and memory subsystem—to achieve the right system performance, power and cost.
By doing this early on, it is much easier to tune the SoC for specific workloads and scenarios, hence preparing the design for realistic usage. So rather than hoping for the best, you actually design to get the best.