Architecture-aware design tools, reinforcement learning, and more trends for the next year in AI.
By Arun Venkatachar and Stelios Diamantidis
Artificial intelligence (AI) has emerged as one of the most important watchwords in all of technology. The once-utopian vision of machines that can think and behave like humans is becoming more of a reality. Engineering innovations now deliver the performance needed to process and interpret previously unimaginable amounts of data efficiently and at practical power consumption levels, though still orders of magnitude higher than the human brain.
We are already seeing AI applications impact our lives in meaningful ways, from the data centers that run systems for communications, transportation, banking, and healthcare, right down to our living rooms where a simple voice command aimed at a home entertainment device can produce a coherent response.
But if AI were a baseball game, we might say we are still in the early innings. Much progress has been made, yet much of that innovation has revealed just how deep and complex the challenges of implementing AI can be. We continue to peel the onion. There’s hardly a Big Tech company (or little tech company, for that matter) that doesn’t have significant AI initiatives underway, and all of them are rapidly exploring an incredible span of opportunities to which AI can be applied. While the AI transformation is clearly in progress across all aspects of the digital economy, there is still much work to do to unlock even more potential from machine learning, neural networks, AI accelerators, and the increasingly large datasets companies are working with.
Synopsys is actively engaged in dozens of AI chip design projects with some of the world’s leading companies, and many pioneers that are as yet unknown. Our work with companies like IBM, where we are collaborating on how to achieve 1,000x better performance in AI chips over the next ten years, helps us refine the design tools, methodologies, and IP that enable the all-important silicon that will power the AI systems of the future.
Our experience and customer interactions give us a good view into where AI is headed. Since this is the season when people like to make predictions for the coming year, we wanted to share a few of ours on the topic of AI in 2021, mainly from a chip-level point of view.
The AI era has placed renewed emphasis on developing innovative hardware architectures of unprecedented complexity and scale, without losing sight of the fact that performance per watt is critical to enabling real-world applications. The AI “recipe” includes a new computing paradigm, domain-specific architectures, and configurable silicon devices designed and optimized specifically for AI computation. It also demands broad expertise in algorithms, software, system integration, and applications.
It is clear that, for AI chips to proliferate, the industry needs a re-tooled, end-to-end approach to hardware development. We see existing tools becoming more architecture-aware, with new capabilities and approaches that dramatically accelerate the implementation of new compute paradigms.
Compared with the CPU- and GPU-driven architectures of the past, AI chips will bring some of the most transformative changes to IC design that we have ever witnessed.
Synopsys has been investing for several years in new approaches that apply AI within our EDA products and IP. The synergy between AI and EDA is close to home for us, given our long history of applying statistical and heuristic approaches to design problems.
We expect this trend to expand as we research and learn how AI can be applied across more of the IC design process. In general, we see great potential for AI to improve efficiency in design, verification, and manufacturing areas with the right characteristics.
Synopsys introduced DSO.ai in 2020. DSO stands for design space optimization, and it’s the EDA industry’s first product foray into applying AI to very complex design tasks: in this case, searching the vast combined space of design and silicon technology choices to identify the combination that delivers the best power, performance, and area (PPA). This innovative platform uses AI to ingest the large data streams generated by design tools such as place-and-route and floorplanning and to explore the search space. DSO.ai applies reinforcement learning to observe how a design evolves over time and to adjust design choices, technology parameters, and workflows, guiding the exploration toward multi-dimensional optimization objectives. Companies like DeepMind have successfully used reinforcement learning to deliver breathtaking solutions to ‘unsolvable’ problems, from defeating the world champion in Go (2016) to solving the protein-folding challenge earlier this year.
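To make the idea concrete, here is a minimal sketch of design space exploration in the spirit described above. It is not DSO.ai’s actual algorithm: the learning loop is reduced to a simple epsilon-greedy search, and the tool knobs and the evaluate_ppa function are hypothetical stand-ins for a real place-and-route flow and its reported metrics.

import random

# Hypothetical discrete design space: each knob has a few candidate settings.
DESIGN_SPACE = {
    "target_clock_ns": [0.8, 1.0, 1.2],
    "utilization":     [0.60, 0.70, 0.80],
    "vt_mix":          ["lvt_heavy", "balanced", "hvt_heavy"],
}

def evaluate_ppa(config):
    """Stand-in for running place-and-route and scoring power, performance,
    and area. A real flow would launch the tools and parse their reports."""
    # Toy model: shorter clock period and higher utilization score better,
    # with a power penalty for leakier Vt mixes, plus measurement noise.
    perf = 1.0 / config["target_clock_ns"]
    area = config["utilization"]
    power_penalty = {"lvt_heavy": 0.2, "balanced": 0.1, "hvt_heavy": 0.0}[config["vt_mix"]]
    return perf + area - power_penalty + random.gauss(0, 0.05)

def explore(episodes=50, epsilon=0.3):
    """Epsilon-greedy search: usually tweak one knob of the best-known
    configuration, occasionally try a completely random one."""
    best_cfg = {k: random.choice(v) for k, v in DESIGN_SPACE.items()}
    best_score = evaluate_ppa(best_cfg)
    for _ in range(episodes):
        if random.random() < epsilon:
            cfg = {k: random.choice(v) for k, v in DESIGN_SPACE.items()}
        else:
            cfg = dict(best_cfg)
            knob = random.choice(list(DESIGN_SPACE))
            cfg[knob] = random.choice(DESIGN_SPACE[knob])
        score = evaluate_ppa(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = explore()
    print("best config:", cfg, "score:", round(score, 3))

A production system replaces the toy scoring function with full tool runs and a learned policy, but the shape of the loop is the same: propose, evaluate, and steer the next proposal toward better PPA.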
In 2020, early adopters began using DSO.ai in production. In dozens of design projects, DSO.ai has identified better design solutions in a fraction of the time typically required for such complex tasks. As this technology goes mainstream in 2021, we are starting to see a leap in productivity across entire design teams that can only be compared with the early days of EDA and the disruptive introduction of RTL synthesis.
In October 2020, DSO.ai was awarded the ASPENCORE “Innovative Product of the Year” award in electronics.
AI holds much potential to enable a leap forward in designer productivity and design team efficiency; we see this as a major area of innovation in EDA for years to come.
Trusted AI is a term we hear more and more, and it has broad implications. We will see more attention focused on all aspects of it as AI progresses.
First and foremost is security of the data as it makes its way through the AI journey, from collection to processing and storage. Companies need a trusted chain throughout the workflow. This places requirements on every aspect of the computing environment: hardware, software, connectivity, and data encryption. AI places a premium not just on the quantity of data (which is essential), but also on its quality: where did it come from, is it credible and clean, and how did it get to where it is going? We believe this will be a focus of much attention in the coming years as the value of data increases.
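As one small, illustrative link in such a trusted chain (a sketch, not a description of any Synopsys product), the snippet below records a SHA-256 fingerprint for every file in a training dataset so that later stages of the workflow can verify that nothing has been altered; the directory and manifest names are hypothetical.

import hashlib
import json
from pathlib import Path

def fingerprint(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir, manifest_path="manifest.json"):
    """Record a digest for every file under the dataset directory."""
    manifest = {str(p): fingerprint(p) for p in Path(data_dir).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify(manifest_path="manifest.json"):
    """Re-hash each file and return the paths that no longer match."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [p for p, digest in manifest.items() if fingerprint(p) != digest]

# Example usage (hypothetical directory name):
# build_manifest("training_data/")
# tampered = verify()   # an empty list means the data is unchanged

In practice an integrity check like this sits alongside encryption, access control, and provenance metadata rather than replacing them.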
Related to security is safety. We need to ensure that an AI-enabled system far exceeds a minimal “better than a human” capability in order to reduce risks to humans. This is particularly true as we rely more on AI systems in autonomous transportation, robotics, and industrial automation. More, and more robust, datasets for training machine learning algorithms will help here: the more information we can capture, the more relevant the data becomes. Models can also be trained to run faster, addressing the all-important latency requirement so that AI systems can deliver the fastest possible response times. We see that continuing to be a major requirement as we move forward with AI.
Finally, related to trust is reliability. The system must be able to make accurate and fast decisions under whatever conditions it is operating in, often in real time (autonomous navigation, for example, imposes a computational response latency limit of 20 ms). This introduces new levels of testing, both for durability under environmental extremes and for secure data practices.
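To illustrate what such a latency constraint means in practice, here is a simple sketch that times a stand-in inference function against the 20 ms budget mentioned above. The model and input are placeholders; a production system would measure latency on the target hardware under realistic load.

import time

LATENCY_BUDGET_S = 0.020  # the roughly 20 ms response limit cited above

def within_budget(infer, frame, runs=100):
    """Time an inference callable on one input several times and report the
    worst case against the latency budget. `infer` and `frame` are
    placeholders for a real model and sensor frame."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        infer(frame)
        worst = max(worst, time.perf_counter() - start)
    return worst <= LATENCY_BUDGET_S, worst

# Example with a dummy workload standing in for real inference:
# ok, worst = within_budget(lambda f: sum(f), list(range(10_000)))
# print("meets 20 ms budget:", ok, "worst case:", round(worst * 1000, 2), "ms")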
Many companies have developed dedicated platforms for processing huge amounts of data. This initial success has come primarily in powerful high-performance computing systems and data centers.
But we are seeing more signs of AI being used in far less powerful and less expensive machines, including those that touch us directly every day. Examples include our mobile devices and home entertainment systems, as well as industrial settings that require flexible, AI-enriched capabilities. These “smart edge” systems require very different solutions than chips designed for the data center: they must cost much less to develop, and they tend to be highly specialized for a particular function. We predict a great deal of experimentation, both technical and in business models, as companies seek more ways to deploy AI.
It is possible to develop application-specific chips without the huge investment required to build a highly complex IC from scratch on a leading-edge process. A big part of this is our broad IP portfolio, which allows designers to leverage proven blocks of functionality and concentrate on their unique value-add. Our DesignWare IP provides the specialized processing capabilities, high-bandwidth memory throughput, and reliable high-performance connectivity that AI chips demand in application areas including mobile, IoT, automotive, data center, and digital home. We see this as an important way to proliferate AI more widely and multiply its impact, bringing the capability to teams that don’t have all the assets of the larger chip powerhouses.
AI has initially proven itself in some very specific realms. Generally, it helps crunch big data with high-performance computers. It has gained acceptance in areas such as predictive maintenance applications, health sciences research, sophisticated financial use cases, and even in highly complex technical tasks such as chip design, as our application of AI in DSO.ai shows. Much of the effectiveness of AI in these defined spaces is a result of extensive dataset development, an expensive and time-consuming proposition that not all companies or markets can bear.
We believe the focus on narrow AI will continue for some time as companies try to fine-tune business models that can support AI in broader markets. There are lessons to be learned from this initial wave of AI adoption by mostly larger companies, and we expect to see a democratization of sorts happening over time. It’s inevitable that as more general-purpose algorithms for technologies like natural language processing and face detection become better and cheaper to implement, we’ll see wider proliferation of AI, particularly in more consumer products.
In summary, we are bullish on the continued expansion of AI into all sorts of applications and products. The challenges facing AI innovators are new, significant, and require much experimentation. We believe that with the right technology, expertise, and data, we can play a large role in continuing to make AI a meaningful part of our lives.