Pushing AI Into The Mainstream

Why data scrubbing and social issues could limit the speed of adoption and the usefulness of this technology.


Artificial intelligence is emerging as the driving force behind many advancements in technology, even though the industry has merely scratched the surface of what may be possible.

But how deeply AI penetrates different market segments and technologies, and how quickly it pushes into the mainstream, depend on a variety of issues that still must be resolved. In addition to a plethora of technical issues, there needs to be progress in sanitizing data sets, resolving political, legal and ethical issues, and instilling trust in machines.

None of these challenges is insurmountable, but a failure to deal with any of them could delay the adoption of AI and slow or prevent it from reaching its full potential.

Data cleaning
AI applications start with large data sets. If the data is bad, the application is bad. Data may contain bias that can steer results towards prejudicial solutions or treat certain situations unfairly.

“AI is about data, and predicting the next result is based on the previous pattern or algorithm,” explains Steve Roddy, vice president of special projects in the Machine Learning Group of Arm. “If the data set has an unconscious bias in it, then that bias will continue to be part of the model and will propagate biased results. Technologists have ethical responsibilities to monitor the algorithms they create to ensure that they are fair and unbiased.”
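
As a minimal illustration of the kind of monitoring Roddy describes, the sketch below checks a training set for label imbalance across groups before any model is built. The column names and data are hypothetical, and a real bias audit would go much further:

```python
# Minimal sketch of a pre-training bias check (column names are hypothetical).
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return the fraction of positive labels within each group."""
    return df.groupby(group_col)[label_col].mean()

# Illustrative data: loan approvals split by an (assumed) protected attribute.
df = pd.DataFrame({
    "region":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   0],
})

rates = positive_rate_by_group(df, "region", "approved")
print(rates)                                     # region A: 0.67, region B: 0.20
print("disparity:", rates.max() - rates.min())   # flag large gaps for human review
```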

Bias can remain hidden. “A lot of AI companies are in China,” points out Marc Naddell, vice president of marketing for Gyrfalcon Technologies. “Can you imagine the differences in data sets that companies there will create when they try to bring their models to the rest of the world? It is not just about facial and physical aspects, but also behavioral.”

Many companies are in such a rush to release products that they do not properly examine the data set. “Take AI applications for health care as an example,” says Andrew Grant, senior business development director for Vision & AI at Imagination Technologies. “It’s an area of huge potential, yet up to 80% of any AI project can be in finding, cleaning and prepping data so that it can be used in an AI system. In a sense, AI is being held back by such barriers to progress. Training, error reduction and getting access to the right data can be a problem. Although some longitudinal data sets are increasingly being made available in anonymised form, this is an example where we might see a market in data-as-a-service, or data-prep-as-a-service, to sit alongside the more familiar areas of AI.”
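
Grant’s 80% figure covers mundane preparation work: deduplication, removal of unusable records, range checks and pseudonymization. A hedged sketch of such a pass, with hypothetical field names, might look like this:

```python
# Sketch of a typical cleaning/prep pass (field names are hypothetical).
import hashlib
import pandas as pd

def prep(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                            # remove repeated records
    df = df.dropna(subset=["measurement"])               # drop rows missing the value of interest
    df["measurement"] = df["measurement"].clip(lower=0)  # remove impossible negatives
    # Hashing pseudonymizes identifiers so records can be linked without
    # exposing names. Note this is weaker than true anonymization.
    df["patient_id"] = df["patient_id"].apply(
        lambda s: hashlib.sha256(str(s).encode()).hexdigest()[:12]
    )
    return df

raw = pd.DataFrame({
    "patient_id":  ["p1", "p1", "p2"],
    "measurement": [12.5, 12.5, None],
})
print(prep(raw))      # one deduplicated, pseudonymized row survives
```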

Today, we have started to realize how much bias is contained within data sets used for early AI applications. “The more that AI gets used in different domains, the faster issues such as bias will be identified and best practices will emerge to avoid them,” adds Naddell. “It is the Wild West, and many use cases are being tried for the first time. You have to think about all of the environments and all of the scenarios around which you need to collect your sample data to make sure it is robust and comprehensive.”

Extrapolating from smaller data sets presents other challenges. This is seen with EDA applications where data sets are often smaller. “We often quickly see benefits of ML [machine learning] in experimental environments, where we limit scope to a small set of chip designs, manually select and clean input data, manually tune our ML algorithms to suit our needs, and can privately fail many times before we produce results for public consumption,” says Jeff Dyck, director of engineering for Mentor, a Siemens Business. “What tends to take way more time and effort than expected is productizing ML techniques. It takes a force of nature to make ML methods generally applicable to a wide range of chips, processes and design flows, to make them dependable and reliable for production, and to make them easy to use for designers. Unanticipated productization effort is why many promising ML techniques will fail to make it into production tools.”
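
One reason small data sets are treacherous is that a model can look good on exactly the designs it was tuned on. A common guard, sketched here on synthetic data with scikit-learn, is cross-validation, which exposes how much scores swing across held-out folds:

```python
# Sketch: cross-validation as a guard against overfitting small data sets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))            # small, experiment-sized sample (synthetic)
y = X[:, 0] * 2.0 + rng.normal(scale=0.5, size=60)

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("per-fold R^2:", np.round(scores, 2))
print("mean +/- std: %.2f +/- %.2f" % (scores.mean(), scores.std()))
# High training scores paired with low, high-variance fold scores signal a
# model that will not survive productization on unseen designs.
```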

Politics
AI is starting to garner some political attention, as well.

“Legal, political and regulatory issues surrounding AI haven’t been properly addressed in governments yet,” says Sharad Singh, digital marketing engineer for Allied Analytics. “Therein lie the greatest hidden dangers of AI.”

The tech industry always has had a shaky relationship with government intervention, but AI adds some new elements into the mix. On one hand, there is concern about the impact of this technology, which has been portrayed as the evil force in killer robots, a threat to employment, and a hazard to human life in autonomous vehicles. On the other hand, the technology also can be harnessed to track large numbers of people and ferret out patterns and actions that are not favorable to a government.

“Governments that take a partisan interest in AI are a danger to AI,” says Imagination’s Grant. “What’s great right now is that there is a global community with lots of knowledge sharing, open standards and open source to enable developers to access the technologies. There’s always the danger that a government will seek to take control and divert its focus. While there’s plenty of hyperbole, it would be a great shame if a technology that could help us tackle the massive problems in today’s world were subverted to inappropriate, selfish ends.”

A subversion could happen in different ways. “The rise of AI has kick-started the global autonomous weapons race among nations,” says Allied Analytics’ Singh. “Data analysis firms used AI-enabled targeted marketing to influence the 2016 U.S. presidential election. Meanwhile, China has implemented AI-based facial recognition to track citizens’ behavior in order to determine their social credit score. Implementing rules and regulations concerning the use of AI is mandatory to mitigate the risks and dangers it poses to society.”

“We have to get ready,” says Raymond Nijssen, vice president and chief technologist for Achronix. “Today, it is just an oncoming train. In the 1970s, in my home country, a minister pushed to have a high tax on microprocessors because they would take away jobs from blue-collar workers. This would have been a really bad thing. You want to be the society that is best prepared for these changes. It is again blue-collar workers who are most threatened by this. Productivity gains from AI are what will drive GDP growth over the next decades. Economic growth is almost entirely driven by productivity gains. The speed at which change happens is a concern. The transition from manual to automated tasks used to take several decades, so workers had a chance to adapt over a generation.”

Trust
But can people trust the corporations attempting to deploy AI?

“Ethics is a concern, and although a lot of good work is going on, it really is time for these efforts to be publicly stated,” says Grant. “Otherwise it will be too late to apply pressure to rogue actors. Similarly, there need to be clear guidelines for the use of information and for making sure that training is not done on unrepresentative data. There’s no excuse to get this wrong. Yet even the majors have embarrassing moments where they have been visibly unfair to parts of society. We need to keep our eyes open to potential abuse, even accidental abuse, and to unintended consequences. We need to make sure that hiccups in development don’t result in a consumer response that closes down avenues of thinking in a short-sighted manner, leading to over-regulation and the stifling of the nascent industries of the future.”

Trust often comes from understanding. “Explainable AI is necessary for large-scale decision systems, where the question of how an answer is derived is critical to its acceptance,” says Dave White, senior group director for R&D in the Custom IC & PCB Group at Cadence. “This includes systems that are based on observe-orient-decide-act (OODA) architectures, where optimization-fueled AI will make recommendations to humans who are then asked to make a crucial final decision and act. Explainable AI will be required in OODA applications as a first step toward accepting more autonomous decision systems.”
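
One widely used, model-agnostic way to approximate the kind of explanation White describes is permutation importance: shuffle one input at a time and measure how much the model’s score drops. A minimal sketch with scikit-learn, where the data set is simply a stand-in:

```python
# Sketch: permutation importance as a simple form of explainable AI.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)

# Report the features whose shuffling hurts accuracy the most; these become
# the basis of a human-readable explanation of the model's behavior.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")
```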

Some of this will depend on who owns the data.

“Traditionally, data has been aggregated by large companies, and with hacking and leaks, private data can fall into the wrong hands,” warns Naddell. “We see AI on the edge as presenting a new set of opportunities for the industry because it addresses the privacy issue and also reduces the latency that can impact the user experience. In addition, when you have to send data back and forth between the edge and the network, you can have variability based on when you are trying to use the system.”

And when they cannot connect at all, are we willing to let AI make the decisions? “There will be disasters that will happen because of this,” predicts Achronix’s Nijssen. “Machines will at some point be placed above the human where the human cannot override, or does not know how to override, the machine even though it may be obvious that the machine behavior is wrong. Who is in charge? Is it the human or the machine? If planes become completely autonomous, it would prevent hijacking because a human could not override it. People would say this is a good thing. But if you ask them about a machine overriding a human in general, you will get a lot of push-back. It seems scary when the machine is smarter. Humans have no tolerance for machines killing people.”

Technical challenges
As with reliability in any technology, verification is a critical step. But verification in developing AI systems has a long way to go.

“We need extremely robust ML methods that reliably deliver a required level of accuracy, that reveal their accuracy to the designer, and that are fully verifiable,” says Mentor’s Dyck.

Cadence’s White agrees. “The lack of sufficient verification methodologies presents a great threat. The pace of development may be outpacing the level of verification needed to ensure stability and safety as AI impacts machines. We see huge investments in the development of training and inference methods. However, there is a much smaller focus on verification of AI-enabled systems in real systems and environments. It’s probably not a huge problem for online marketing applications or image processing benchmarks, but as we integrate AI and deep learning into transportation, manufacturing or other safety-critical environments, it becomes more of a concern.”
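
Part of that verification gap can be narrowed today with regression tests that gate a model release on metrics over frozen hold-out data. The sketch below, in pytest style, uses a stand-in model and assumed thresholds; a production suite would load frozen artifacts instead:

```python
# Sketch: release-gating regression tests for a trained model (pytest style).
# The model and data here are stand-ins; thresholds are assumed requirements.
import numpy as np
from sklearn.linear_model import LogisticRegression

ACCURACY_FLOOR = 0.90          # assumed requirement for this application

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X[:400], y[:400])
X_hold, y_hold = X[400:], y[400:]        # frozen hold-out, never trained on

def test_model_meets_accuracy_floor():
    acc = (model.predict(X_hold) == y_hold).mean()
    assert acc >= ACCURACY_FLOOR, f"accuracy {acc:.3f} below floor"

def test_model_is_stable_under_noise():
    # Tiny input perturbations should not flip many predictions.
    X_noisy = X_hold + rng.normal(scale=0.01, size=X_hold.shape)
    flips = (model.predict(X_hold) != model.predict(X_noisy)).mean()
    assert flips < 0.05, "predictions too sensitive to small perturbations"
```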

But, exactly what does verification mean for an AI system? “Hardware platforms for AI and ML applications need to be verified twice, before and after the application is mapped,” says Sergio Marchese, technical marketing manager for OneSpin Solutions. “Moreover, the size of these chips or multi-chip modules is huge and may include hundreds of thousands of module and IP instances. Even a basic task such as verification of top-level connections to ensure correct integration of IP and subsystems may become intractable not only in simulation but also with traditional formal connectivity checking apps.”

Creating optimized hardware is also a challenge. “From an ASIC perspective, the biggest challenge is that AI algorithms are continually changing,” says Mike Gianfagna, vice president of marketing at eSilicon. “That means hardware acceleration is typically done with GPUs or FPGAs, which allow for field updates as algorithms evolve. This trend has its limitations, however. A custom AI accelerator in the form of an ASIC will always deliver superior power, performance and total cost of ownership when compared with more general and programmable approaches.”

It is easy to get carried away and assume that AI is the best technology for every problem. “There are still open issues related to the application of AI, especially in chip design,” warns Christoph Sohrmann, a member of the advanced physical verification group at Fraunhofer EAS. “For instance, how do we combine AI algorithms with established rule-based methods? What is really needed is a merger of symbolic and neural concepts. In fact, that has been the formula for successful human-machine interaction to date.”
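
The merger Sohrmann describes can be approximated even at the software level by making rules authoritative and letting a learned model only add suspicion on top. The sketch below is purely illustrative; the rule constant and risk model are hypothetical stand-ins:

```python
# Sketch: merging rule-based and learned ("neural") methods in a design check.
# The rule is authoritative; the model adds suspicion but never overrides it.
MIN_SPACING_NM = 40          # hypothetical design-rule constant

def predict_risk(features: dict) -> float:
    """Stand-in for a trained model scoring downstream failure risk."""
    return 0.9 if features["density"] > 0.7 else 0.1

def classify_region(features: dict) -> str:
    # Rule-based method: established design rules are checked first and
    # cannot be overridden by the model.
    if features["min_spacing_nm"] < MIN_SPACING_NM:
        return "violation (rule)"
    # Learned method: flag regions the rules pass but the model distrusts.
    return "suspect (model)" if predict_risk(features) > 0.8 else "clean"

print(classify_region({"min_spacing_nm": 35, "density": 0.5}))  # violation (rule)
print(classify_region({"min_spacing_nm": 50, "density": 0.8}))  # suspect (model)
```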

Emerging technologies may also suggest new architectures. “Consider different memory technologies being integrated into AI accelerators,” says Gyrfalcon’s Naddell. “We are looking at MRAM instead of SRAM. That brings new capabilities to the edge that can open up new use cases. Because it is non-volatile, it means that when you cycle the power, it is immediately using the software and acting on the data. It does not have to reload software. That is critical for many IoT devices where you may be operating gates or coupled to security cameras. Latency is a big issue for these, and power management is also important.”


Fig 1. Multiple places where inferencing and learning could happen. Source: Rambus.

Some applications may rely on learning on the edge, as well. “The human mind doesn’t impose a rigid distinction between learning and inference, like today’s machine-learning models do,” points out Peter Glaskowsky, computer system architect at Esperanto Technologies. “Humans can learn new faces and recognize them again minutes or years later. With continuous learning—a style of AI that requires chips flexible enough to perform both learning and inference—a computer can become a ‘super recognizer’ able to learn the faces of regular customers, or criminals who visit a shop several times before committing a robbery.”

Nijssen also sees that as an issue. “The strict separation between training and inferencing is an artifact of the limitations of the existing technology. That separation has to disappear. That will bring about new ways to do computations and it has to be power efficient. Learning must be instantaneous and continuous. We cannot live with the dichotomy of training here and inference somewhere else.”
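
In software, that dichotomy is already being blurred through incremental (online) learning, where a model predicts on each new example and then immediately updates from it. A minimal sketch using scikit-learn’s partial_fit on synthetic data:

```python
# Sketch: blurring the training/inference split with incremental learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])        # label set must be declared up front

correct = 0
for step in range(1000):
    x = rng.normal(size=(1, 4))           # a new observation arrives...
    y = np.array([int(x[0, 0] > 0)])      # ...with its eventual label
    if step > 0:
        correct += int(model.predict(x)[0] == y[0])  # inference first...
    model.partial_fit(x, y, classes=classes)         # ...then learn from it

print("online accuracy: %.2f" % (correct / 999))
```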

The industry needs to be adaptive. “AI growth will continue to accelerate, and there will be different demands for the hardware and the software,” says Arm’s Roddy. “While hardware seeks performance and energy efficiency, at the other end of the stack, app developers want an open-source, common software framework that will accelerate deployment of AI. New technologies are always tricky due to the interlock of hardware and software. App developers won’t use a feature until it’s in the majority of devices in the market, and device manufacturers are reluctant to invest in capabilities that software does not yet use.”

Taking AI to the next level
Today, AI consumes too much power. “One piece that needs to get solved to take it to the next level is miniaturizing ML, both in resource requirements and power consumption,” says Markus Levy, head of AI at NXP. “The future of ML is as much at the edge of the network as it is in the data-heavy cloud. Billions of connected devices must be able to make decisions autonomously without always seeking help from the big brother cloud, and once the decisions are made, they also need to share the knowledge they have acquired with other devices. That is when collective data leads to intelligence, and collective intelligence leads to knowledge that is shared by all.”
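
A common first step toward the miniaturization Levy describes is quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting memory and bandwidth by 4X at a small accuracy cost. A self-contained sketch of symmetric post-training quantization:

```python
# Sketch: symmetric 8-bit post-training quantization of a weight tensor.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights onto int8 with a single scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

print("memory: %d KB -> %d KB" % (w.nbytes // 1024, q.nbytes // 1024))  # 256 -> 64
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```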

This may require change. “How can we break away and put different kinds of tools in the hands of developers?” asks Naddell. “There are a lot of developers who are very good at making software applications or designing devices, but AI is new to them. How can we put tools in their hands, so they do not have to spend years learning how to work in the Cloud, but to create the models they specifically need for their applications?”

There is a desperate lack of human resources to make all of this happen. “We need to inspire the next generations of data scientists as there is a massive shortfall in the global demand for smart AI practitioners of all types,” says Grant. “Perhaps we can democratize recruitment and reduce the gender imbalance too, and recruit from all sectors of society. AI is a fascinating opportunity to promote change based on data-driven decision-making, and this is in harmony with many of the post-millennial memes. We live in the most exciting of times. Let’s keep up the momentum.”

Related Stories
AI Chip Architectures Race To The Edge
Companies battle it out to get artificial intelligence to the edge using various chip architectures as their weapons of choice.
What’s Next For AI, Quantum Chips
Leaders of three R&D organizations, Imec, Leti and SRC, discuss the latest chip trends in AI, packaging and quantum computing.
AI’s Growing Impact On Chip Design
Synopsys’ chairman looks at what really got the AI explosion rolling and what it means for hardware and software.


