Feedback loops and cost assessment apply to both ticket sales and verification.
During a recent trip to New York, I managed to see “Hamilton, An American Musical”—despite the running joke about how hard it is to get tickets. The sale of “Hamilton” tickets teaches an interesting lesson about what I would call an “automatic feedback loop of value adjustment”. And believe it or not, it bears some resemblance to how verification users actually choose what engine to apply to which verification task. Bear with me for a bit… here we go.
Besides loving my job leading a product management team, I am a huge fan of stage events—opera, theatre, musicals, concerts from classical to rock, you name it. I try to be in New York twice a year and to fit in three or four shows over a weekend. Over time, I have figured out how to get tickets even if the venue is sold out. StubHub is a ticket reseller, and clearly some people earn a living re-selling tickets there. As I am writing this, it is about 3 hours and 20 minutes to the Saturday evening performance of “Hamilton” and there are 12 tickets left, starting at $820 and going up to $4,412.
Watching this for a couple of weeks while preparing my trip, I saw that every Friday night there were anywhere between 50 and 75 tickets available, most of them well above $1,000 per ticket. In the three hours running up to the curtain, prices would drop every couple of minutes as the online scalpers tried to bait people who wanted to see the show. They would bottom out between $400 and $500 before being bought up. This in itself is a fascinating example of how the simple insertion of a re-seller guards access to a theatre performance, identifying the true market value of a seat in real time. Capitalism at its best.

Interestingly, the feedback loop goes even further. The theatre understandably did not consider it fair that StubHub and its resellers would make a bigger profit than the theatre itself by selling a $198 ticket for $750. Buying from the venue for November 2017, a single ticket in premium orchestra seating is $849. Ouch. The theatre adjusted its pricing now that the real market value is understood!
Are you still with me? Ah, back to emulation and verification. In verification, when asking a specific question, users face choices. Is this best done in formal verification? Do I run this in simulation? Has the focus shifted to software and its interaction with hardware, so that I should switch to emulation or even FPGA because I need to execute longer cycle counts? Users then weigh the queue of tasks they need to run against the available options, their capabilities, and their value. Do I need debug while running fast in hardware, which points to emulation? Or do I need to fast-forward to a specific point of interest, which I could do with a hybrid of emulation and virtual platforms using ARM Fast Models? Or do I consider the hardware mature enough that I only care about software debug, which would point to FPGA-based prototyping?
Not unlike StubHub for theatre access in the run-up to curtain, users implicitly build a “cost function” that, at any given point in time, lets them decide which engine is best for which task based on its cost, value, capabilities, and scheduled availability. An ideal scenario to strive towards would be a universal job scheduler that understands all engines, pushes tasks to them accordingly, and delivers the resulting metrics to a central collection point where verification teams can decide how they are converging towards their verification goals.
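To make that cost function a bit more concrete, here is a minimal sketch in Python of how such a scheduler might score engines for a given task. Everything in it is an illustrative assumption: the engine list, the attributes, and the weighting are made up for this post, not an actual Cadence scheduler or API.

```python
from dataclasses import dataclass

# Illustrative sketch only: engines, attributes, and weights are
# hypothetical, not an actual scheduler or Cadence API.

@dataclass
class Engine:
    name: str
    cycles_per_sec: float    # raw execution throughput
    cost_per_hour: float     # licensing/hardware cost
    debug_visibility: float  # 0..1, hardware debug capability
    hours_until_free: float  # scheduled availability

@dataclass
class Task:
    cycles_needed: float
    needs_debug: bool
    deadline_hours: float

def engine_cost(engine: Engine, task: Task) -> float:
    """Lower score = better fit; combines runtime, cost, debug, availability."""
    runtime_hours = task.cycles_needed / engine.cycles_per_sec / 3600
    if engine.hours_until_free + runtime_hours > task.deadline_hours:
        return float("inf")  # cannot meet the deadline at all
    if task.needs_debug and engine.debug_visibility < 0.5:
        return float("inf")  # insufficient debug visibility for this task
    score = runtime_hours * engine.cost_per_hour  # dollars spent executing
    if task.needs_debug:
        # soft penalty for engines with weaker debug visibility
        score *= 1.0 + (1.0 - engine.debug_visibility)
    return score

engines = [
    Engine("simulation", 1e3, 5, 1.0, 0.0),
    Engine("emulation", 1e6, 200, 0.7, 4.0),
    Engine("fpga_prototype", 1e7, 50, 0.2, 24.0),
]

task = Task(cycles_needed=5e9, needs_debug=True, deadline_hours=48)
best = min(engines, key=lambda e: engine_cost(e, task))
print(f"Best engine for this task: {best.name}")  # emulation, in this example
```

In a real flow, the numbers would shift continuously as license queues fill up and deadlines approach, which is exactly the real-time feedback loop of the StubHub analogy.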
For the latter, we at Cadence already have a great solution with our vManager environment; for instance, coverage from simulation and emulation can be merged there to track progress. For the former, users can, within emulation, schedule their verification jobs flexibly according to the priorities assigned to them. Our flexible job-allocation capability in the Palladium Z1 Enterprise Emulation System, shown above (hey, it even looks like theatre seating) and explained in more detail in Raj Mathur’s blog and a video blog, allows jobs to be mapped into emulation most efficiently.
In the future, I am sure the cost function will be extended to include simulation, formal, and FPGA to understand where a verification job is best executed. Combine that with machine learning and big-data analytics, and verification engineers will be able to significantly improve verification productivity in the not-too-distant future, addressing ever-changing priorities and choosing the best engine available for their projects, just as StubHub prices theatre access in real time.
It’s about an hour later now, 2 hours and 20 minutes to curtain, and the lowest price is down to $770, but 12 tickets are still left. Has the feedback loop found the bottom? Not yet! Will emulation capacity one day be sold off using market-value pricing that way? Probably not either… but I thought this was a fun comparison, and verification teams definitely assess the value of each engine more closely as they map out their verification tasks.