Why is so little research being conducted in the field of EDA? We have only ourselves to blame.
There is very little EDA research being done in universities today, except in narrow fields such as formal verification, and the decline has been steady over a long period of time.
There are several reasons for this. The first is money. Money has to flow into the universities to pay for the research, and it has to lead to some form of prestige for the institution, such as a patent portfolio that can be sold or a company that productizes the work. The EDA industry has become unwilling to buy technology, preferring instead to acquire mature companies. All-you-can-eat licensing also makes it very hard for startups to gain enough traction to become sizable, so the funnel of growing companies has dwindled.
You also have to think about the researchers themselves. They have to be interested in the subject, they have to know there will be good employment opportunities for them when they graduate, and they must be able to publish their work and have it recognized by the community. This becomes more difficult in a maturing industry because the easy problems have already been solved. The problems that remain involve large, complex designs or highly sophisticated algorithms, both of which raise the difficulty of the task. What can an individual researcher do that would influence the state of EDA today? It is difficult to carve off pieces small enough to produce tangible results for each researcher over a span of a few years.
All of those factors have been in decline for EDA. Some of the decline was caused by newer, sexier fields such as search, social media and other Internet-related applications. Even those areas are now under pressure from still newer fields such as neural networks, artificial intelligence and machine learning.
The leading researchers in the field of AI and machine learning have learned a thing or two from past mistakes, and that is making the field even more interesting and enticing. Aspects of this were made very clear at the recent Synopsys Users Group event. On the second day, they invited Dr. Peter Stone, founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, to give the keynote. His interests are in robots that operate in dynamic, uncertain environments, which require computer vision, tactile sensing, compliant motion, manipulation, locomotion, high-level decision-making and many other areas to be unified.
If this sounds like too big a problem for students to be tackling…well, it is. However, it is being worked on in a way that allows the problem to be carved into small pieces, so that each student can incrementally add to the overall knowledge base. Probably even more important, they have turned it into a competition, because the application is autonomous robots playing soccer.
The keynote looked at examples of the progress that has been made and how each piece builds on others. For example, while every team gets to use the same robots, his team decided to let the robots learn, on their own, the best way to run. Up until that point, the gait had been programmed into them. The result, after the robots had stumbled around for a while, was that they could run twice as fast as the other team. That alone could have won the match, except for one small problem: they now fell over when trying to kick the ball because they could not offset their momentum. That required them to slow from a fast run to a slower one before kicking the ball.
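To make the idea of "learning to run" more concrete, here is a minimal sketch of one way gait learning can be framed: as a search over a handful of gait parameters that is scored by how fast a trial run goes. This is purely illustrative and not the actual method used by the UT Austin team; walk_speed() is an assumed placeholder standing in for a trial on the robot or in simulation, and the parameters and penalties are made up for the example.

```python
import random

def walk_speed(params):
    """Hypothetical stand-in for a trial run; returns how fast the gait walks.

    Toy objective: longer, faster steps help, but over-striding, very high
    step rates and excessive forward lean make the robot fall (penalties).
    """
    step, freq, lean = params
    speed = step * freq
    speed -= 10.0 * max(0.0, step - 0.6) ** 2   # penalize over-striding
    speed -= 2.0 * max(0.0, freq - 2.0) ** 2    # penalize unrealistic step rates
    speed -= 5.0 * abs(lean - 0.1)              # penalize leaning too far forward
    return speed

def hill_climb(params, iterations=1000, noise=0.05):
    """Randomly perturb the gait and keep any change that walks faster."""
    best_speed = walk_speed(params)
    for _ in range(iterations):
        candidate = [p + random.gauss(0, noise) for p in params]
        speed = walk_speed(candidate)
        if speed > best_speed:
            params, best_speed = candidate, speed
    return params, best_speed

if __name__ == "__main__":
    hand_tuned = [0.3, 1.0, 0.2]   # starting gait, as if programmed by hand
    learned, speed = hill_climb(hand_tuned)
    print("learned gait:", learned, "speed:", speed)
```

The point of the sketch is the structure, not the numbers: each student or team member can work on one piece (a better objective, a better search strategy, a better simulator) and the overall system still improves.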
Another example was that they decided to treat the kickoff like an indirect free kick, where one player taps the ball to another standing very close by, who is ready to kick it as hard and fast as possible. This enabled them to score a goal right from the kickoff, a strategy the opposing team was not programmed for and did not know how to react to. Of course, this also points out the limitation of off-line learning: they can keep doing this over and over, and the defending team never learns how to defend against it.
As with many advances, their initial attempts to program this move ran into problems. Having two robots so close to each other, each attempting to kick the ball in essentially the same location, resulted in the robots effectively getting into a fight and kicking each other.
The keynote was a lot of fun, but it also showed how a complex problem was broken down into pieces, each of which could be solved cooperatively. At the end of each “World Cup,” all technology is shared with the other teams, and then they prepare for the next round of matches. Their ultimate goal is for the robots to beat a world-class human soccer team, and they have set their sights on doing that by 2050.
Within EDA, we have competitions, but these are in the areas of design and the use of existing tools. While that promotes interest in the field of electronics, it does not promote interest in the creation of EDA tools. We have also done everything we can to exclude startups.