Five DAC Keynotes

Thought-provoking talks about the future of technology, how to improve it, and what it means for design engineers.


The end of Moore’s Law may be about to create a new golden age for design, one fueled in particular by artificial intelligence and machine learning. But design will become task-, application- and domain-specific, and it will require us to think about product lifecycles in a different way.

In the future, we will also have to design for augmentation of experience, not just automation of tasks. These are just some of the conclusions coming out of the keynotes from the Design Automation Conference (DAC) this year.

There were five keynotes this year, although technically one was called a visionary talk. That just means it was a little shorter than the others.

A New Golden Age for Computer Architecture

David Patterson, professor emeritus at UC Berkeley and vice-chair of the RISC-V board of directors.

Patterson’s talk was divided into three sections: 50 years of computer architecture in 15 minutes, the challenges we are facing today, and the opportunities going forward. In the first section he traced the ascent of the x86 architecture and the subsequent rise and dominance of RISC. But as he reached the present day, he added, “We have run out of ideas that are efficient. High-level languages such as Python make it much easier for the programmer and make them much more productive, but they are pretty inefficient. On the hardware side, general-purpose seems too hard. The only thing left is domain-specific architectures. They don’t have to do everything, but what they do, they need to do well.”

Patterson went through an exercise for matrix multiplication and showed a series of improvements that could produce a 63,000X speedup. While he admitted that example was cherry-picked, he asked whether it was reasonable to set 1,000X as a goal. “What everyone talks about is neural networks, but it could be graphics, virtual reality or programmable networks. What is the magic that makes it work? You can use the memory bandwidth more efficiently, and you can dedicate memory to specific functions—the right size in the right location. You don’t need the accuracy that you need for general-purpose computing. 64-bit IEEE floating point is unnecessary. Even 64-bit integers are unnecessary.”
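
A minimal sketch of the first, and biggest, jump in that kind of exercise: moving from an interpreted triple loop in Python to an optimized, vectorized library routine (NumPy’s BLAS-backed matrix multiply). This is only illustrative of the idea; the remaining orders of magnitude in Patterson’s 63,000X figure came from further steps not reproduced here, such as parallelization, cache blocking and, ultimately, domain-specific hardware.

```python
import time
import numpy as np

N = 256
A = np.random.rand(N, N)
B = np.random.rand(N, N)

def matmul_naive(a, b):
    """Interpreted triple-loop matrix multiply -- the slow baseline."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

t0 = time.perf_counter()
matmul_naive(A.tolist(), B.tolist())
t_python = time.perf_counter() - t0

t0 = time.perf_counter()
A @ B  # BLAS-backed: vectorized, cache-blocked and often multi-threaded
t_blas = time.perf_counter() - t0

print(f"pure Python: {t_python:.2f}s  NumPy/BLAS: {t_blas:.5f}s  "
      f"speedup: ~{t_python / t_blas:,.0f}x")
```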

The industry has always been able to create unique hardware, but where does the software come from? “Domain-specific languages are intended to make the programmer more productive in narrow areas by raising the level of abstraction.” He talked about the creation of the Google TPU and how it has attracted a lot of attention. “It used to be hard to get VCs to invest in hardware startups. That is not a problem anymore. There are at least 45 hardware startups around machine learning.”

Patterson’s talk then turned to RISC-V and the importance of it being open source. “Security and open architectures go together. You do not want security through obscurity.” He also talked about the rise of Agile design and how universities are using it to create chips. “We can build small chips, and it is only $14,000 for 100 chips. Building a big chip costs more money, but you can go a long way with small chips. At Berkeley we did 10 chips in 5 years using Agile. With Agile there is no excuse not to make chips anymore. Everyone can afford it.”

He concluded that using domain-specific languages is the new opportunity, with domain-specific architectures beneath them, and that now is a great time to be in the hardware business again.

The Future of Computing

Dario Gil, vice president of AI and IBM Q at IBM.

Gil’s talk focused on the current state of affairs in AI and where he believes it needs to go. He looked at different architectures, including analog computing and quantum computing as applied to AI. “There is a lot of hype associated with AI, but it has arguably become the most important trend in the technology world today. One indication is what is happening with students. Look at enrollment at MIT and Stanford for courses in machine learning. A class that a decade ago may have had 30 or 40 students now has over a thousand students enrolled at Stanford and over 700 at MIT.”

After discussing the early years of AI and neural networks, he looked at how it applies to circuit design. “We created a system called the synthesis tuning system (SynTunSys) to be able to run many parallel synthesis jobs so that we could automatically learn about the parameters that experts were implementing. On a recent chip, after experts had performed traditional optimization, we applied machine learning to configure these parameters, and there was a 36% improvement in total negative slack, a 60% improvement in latch-to-latch slack and a 7% power reduction. These numbers are significant in the context that this was after experts had already done the best they could with what they knew.”
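
The talk did not detail SynTunSys’s internals, but the core idea, searching a space of synthesis tool settings and scoring each run on timing and power, can be sketched generically. Everything below is hypothetical: the knob names, the stubbed synthesis call and the cost weights all stand in for a real flow, which would launch actual tool runs, many in parallel.

```python
import random

# Hypothetical knob space standing in for real synthesis parameters.
KNOBS = {
    "effort":        ["medium", "high", "ultra"],
    "max_fanout":    [16, 32, 64],
    "vt_mix":        ["lvt_heavy", "balanced", "hvt_heavy"],
    "restructuring": [True, False],
}

def run_synthesis(config):
    """Stub for a synthesis run; returns (total negative slack, power).
    A real version would launch the tool and parse its reports."""
    rng = random.Random(str(sorted(config.items())))  # fake, repeatable results
    return rng.uniform(-5.0, 0.0), rng.uniform(1.0, 2.0)

def cost(tns, power, w_timing=1.0, w_power=0.5):
    # Lower is better: penalize negative slack and power together.
    return w_timing * -tns + w_power * power

def tune(n_trials=50, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in KNOBS.items()}
        tns, power = run_synthesis(config)  # these jobs can run in parallel
        c = cost(tns, power)
        if best is None or c < best[0]:
            best = (c, config, tns, power)
    return best

print(tune())
```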

His talk then progressed to problems with AI. “We must create AI that is less of a black box. It must be more explainable. We must have a better understanding of what is happening in the neural networks, so that we have debuggers and can deal with errors in those networks. Explainability is foundational for many industries. Second, while it is impressive what neural networks can do, they are fragile. With the emergence of adversarial networks, you can inject noise into the system to fool it into falsely classifying an image. This tells you the fragility that is inherent in these systems. Ethics is a big topic, and we have to look at bias. If you create a system that is trained by example, and the examples you use are biased from the past, then the lessons those systems learn would be terrible.”
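
The fragility Gil describes is usually demonstrated with adversarial examples: tiny, deliberately chosen input perturbations that flip a classifier’s decision. A minimal sketch on a toy linear model (everything here is illustrative, not IBM’s work) shows how little change is needed when the perturbation follows the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: score = w.x + b, predicted class = sign(score).
w = rng.normal(size=100)
b = 0.1
x = rng.normal(size=100)

score = float(w @ x + b)

# FGSM-style attack: for a linear model the gradient of the score with
# respect to the input is just w, so step each feature against sign(w),
# with epsilon chosen just large enough to cross the decision boundary.
epsilon = (abs(score) + 0.01) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original class:", np.sign(score),
      " adversarial class:", np.sign(w @ x_adv + b))
print(f"per-feature perturbation: {epsilon:.3f} (features have std ~1.0)")
```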

In order to move from where we are today into the future, another step has to be added. “Learning has made a lot of progress, but reasoning less so. Reasoning has to be part of the future of where AI needs to go. Can we really make a difference and crack comprehension? Not just language processing, but can we build a machine that can comprehend a text?” He extended this notion to writing and automated experimentation.

Gil talked about the inner workings of neural networks and the advantages that would come from bringing computation and memory closer together. “We have been working with phase-change memory (PCM) and have built chips with over a million PCM elements. We demonstrated that you can implement deep learning training with a 500X improvement over traditional GPUs at similar levels of accuracy.”
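
The efficiency argument for analog in-memory computing is that a resistive crossbar performs an entire matrix-vector product in one physical step: Ohm’s law does the multiplies, Kirchhoff’s current law does the sums, and no weights move between memory and a processor. A hedged sketch (illustrative numbers, not IBM’s PCM chip) simulates the principle, including the device noise that makes reduced precision part of the trade-off:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights stored as device conductances (normalized, illustrative units).
W = rng.uniform(-1, 1, size=(64, 128))  # one PCM cell per weight
x = rng.uniform(-1, 1, size=128)        # inputs applied as word-line voltages

# Ideal crossbar: currents I = G*V do the multiplies, and summing along
# each bit line accumulates, so the whole product is one analog operation.
y_ideal = W @ x

# Real devices are imperfect: model programming error and read noise.
W_programmed = W + rng.normal(scale=0.02, size=W.shape)
y_analog = W_programmed @ x + rng.normal(scale=0.01, size=64)

rel_err = np.linalg.norm(y_analog - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative error from analog non-idealities: {rel_err:.2%}")
```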

The last section of his talk looked at quantum computing. He spent his time explaining the technology rather than specifically looking at how quantum computers would be used in machine learning, except to say they may be able to solve problems that are beyond conventional means.

Challenges to Enable 5G System Scaling

Chidi Chidambaram, vice president of engineering for the process technology and foundry engineering team at Qualcomm

Chidambaram started out by addressing some of the changes Qualcomm is seeing. “The market is moving toward long-term products. It used to be that people would replace their phone every two years, but the upcoming markets are more in automobiles and industrial IoT, where products will be with people for 10 years. We have to start thinking about design for durability.”

Much of the talk concentrated on the challenges within a cellphone, such as supporting increasing numbers of RF bands, the increased computation required for graphics, and better cameras, all while extending battery life. He also covered some of the technology challenges at emerging process nodes, asking for better modeling and predictability in the back end of line (BEOL). “Vdd has to go down to enable power scaling. That has happened only when critical architecture changes were implemented in the process, such as when we went from planar to HKMG or to the fin. But in the last three generations we have stayed on the fin, and there is no real way to scale the Vdd in the fin itself. The way to reduce power is by depopulating the fin. The fins are three-dimensional structures that suck up a lot of capacitance and charge, so if you reduce the number of fins, the total charge decreases and CV²f decreases correspondingly. But the challenge is that we are already pretty close to one fin, and you can’t go beyond one fin, so upcoming generations have to face the challenge of how to reduce power.”
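
The arithmetic behind fin depopulation follows directly from the dynamic power equation P = αCV²f: with Vdd and frequency fixed, power scales linearly with switched capacitance, so removing a fin removes its share of C. A back-of-the-envelope sketch, with purely illustrative numbers rather than Qualcomm’s:

```python
# Dynamic power: P = alpha * C * V^2 * f. All numbers are illustrative.

def dynamic_power(alpha, C, V, f):
    return alpha * C * V**2 * f

alpha, f = 0.1, 2e9            # activity factor, 2 GHz clock
C_two_fin, V = 1.0e-15, 0.75   # per-gate capacitance (F), supply (V)

p2 = dynamic_power(alpha, C_two_fin, V, f)
p1 = dynamic_power(alpha, 0.55 * C_two_fin, V, f)  # one fin: a bit over half the C

print(f"two-fin gate: {p2*1e9:.1f} nW, one-fin gate: {p1*1e9:.1f} nW "
      f"({(1 - p1/p2):.0%} lower)")
```

The same equation shows why Vdd scaling is so valuable: power falls with the square of the supply, which is why the inability to lower Vdd on finFETs pushes designers toward capacitance reduction instead.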

Chidambaram talked about what the DAC community can do to help. “Help with design for durability, reduce the ppms, provide better thermal and stress modeling, and get the unpredictability error down in the back end. The front end, with SPICE and TCAD, is pretty good, but the variation reduction to be achieved by better predictability is real. Any prediction error stays. When there is variation in the part, we can improve the process and bring it down, but the unpredictability error will stay through the life of the technology. So decreasing that prediction error is very valuable. Lastly, help us with 3D integration.”

Living Products

Sarah Cooper, general manager for IoT Analytics and Applications within Amazon Web Services.

Cooper had a very simple message. “The principle of IoT is to be able to get data about the physical world and use it to build more efficiency into manufacturing platforms or to build consumer devices that delight customers.” She talked about the accelerating rate at which technology is being adopted, and about how technical obsolescence is being built into products. “We have to be able to make products adapt and change even after they have been installed. In situ, these devices have to improve.”

Cooper looked at how difficult it is today to make devices work together. “Why can’t machines work out the interaction, especially when it is basically the same thing over and over again? Systems not only have to figure out their context and your context, but also the broader system context.”

That leads to the importance of machine learning. “Pulling all of this into the cloud is not practical. There are lots of things that can make use of the cheap compute and memory of the cloud, but there is an incredible amount that will continue to get pushed farther and farther down to the device. ML is moving closer to the end device.”

Perhaps her most surprising statement: “We are used to focusing on power and cost in our devices, but what happens if we can convince corporations that putting in a couple of Easter eggs is good for business?”

That might not sit well with the mil/aero, medical or automotive industries.

Automation vs. Augmentation: Socially Assistive Robotics and the Future of Work

Maja Matarić, professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center, and vice dean for research in the Viterbi School of Engineering.

“If we are looking at a future, possibly a near future, in which machines of all kinds, and in particular robots, are doing physical work, then what are people doing? It is impossible to work in any kind of automation and not think about the implications. It is irresponsible to just do something because you can, or because it is cool. You should think about the kind of future it is bringing about.”

Matarić talked about finding a balance between automation and augmentation. “Today, with these amazing technologies, we see a lot of push for automation. Automation is about optimization of any kind of process, and people are very inefficient. So generally, optimization means taking people out of the loop. This is not the only way to think about using intelligent machines. Another way is to think about augmentation or enhancement of human ability. There are interesting ways that machines can enhance what we do without making us obsolete.”

One possible step is to think about how to enable technologies to fill the gap between what humans can do and what humans need. “What people need, based on scientific research, is to improve our own motivation to do work. For us to survive, to have extended lifespans and to stay healthy, we need a purpose. We also need to encourage socialization. It turns out that one thing that reliably makes you healthier and live longer is to have friends — but not social media friends. Facebook is just not doing anything to make us healthier and live longer. There are now several lines of evidence to show it is doing the opposite.”

Matarić’s presentation focused on socially assistive robotics (SAR), providing several examples of how robots are being used to help people with special needs, such as those recovering from strokes, suffering from Alzheimer’s, or on the autism spectrum.

She looked at some of the crossovers between engineering and social science, offering suggestions about how to design robots. “How large should a robot be? What shape should it have? What is its implied gender? How does it use space? How does it point, and what body language does it use? How does it affect emotion? Should it have emotion? Most of our collected intuition about robots is essentially wrong. We have to try things, and that is social science.”

Matarić also touched on the problem of bias. “Most of the data we have today is biased. We can detect speech, but not if it is highly accented. We are not very good with kids’ speech. What about someone who speaks with a lisp, or the elderly, who may slur speech? The world is not convenient and available. We have yet to figure out how to deal with partial data and multimodal data.”

Related Stories
Wednesday At DAC 2018
What does the future look like and how are we going to get there? Many problems remain to be solved.
Tuesday At DAC 2018
Where AI can go and the problems in getting there, plus how finance sees EDA.
Monday At DAC 2018
What is drawing the industry’s interest? Major themes begin to emerge.


