From voice assistants to face recognition, and from defeating Go masters to beating professional players of the strategy game StarCraft, the world has witnessed exciting progress in artificial intelligence (AI).
As AI is applied to higher-stakes functions – self-driving cars, automated surgical assistants, hedge fund management and power grid control – how can we ensure it is trustworthy?
China’s prestigious Tsinghua University has announced it will step up basic research on third-generation artificial intelligence, in the hope of building trust and preventing the abuse and malicious use of AI models.
Zhang Bo, director of the Tsinghua Institute for Artificial Intelligence and an academician of the Chinese Academy of Sciences, unveiled the plan on Monday at the opening of the Center for Fundamental Theories, part of the Institute for Artificial Intelligence.
Tsinghua researchers have been discussing the future of artificial intelligence since 2014 and expect it to enter the third stage of its development in the coming years, said Zhang.
First-generation artificial intelligence was driven by the knowledge researchers themselves possessed: they tried to give AI models explicit logical rules. These systems could solve well-defined problems, but they could not learn.
In the second generation, AI started to learn. Machines learn by training a system on one data set and then testing it on another; over time, the system becomes more accurate and efficient.
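The train-then-test loop described above can be sketched in a few lines. The example below is a minimal, illustrative toy – a nearest-class-mean classifier on synthetic two-class data, not any system mentioned in the article – but it shows the key second-generation idea: the model is given no rules, only data, and its quality is measured on a held-out split it never saw during training.

```python
import numpy as np

# Synthetic two-class data: class 0 centered at (-1, -1), class 1 at (+1, +1).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-1.0, 0.5, (100, 2)),
                    rng.normal(+1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Shuffle, then split: 70% for training, 30% held out for testing.
idx = rng.permutation(len(X))
split = int(0.7 * len(X))
train, test = idx[:split], idx[split:]

# "Training": estimate each class mean from the training split only.
mean0 = X[train][y[train] == 0].mean(axis=0)
mean1 = X[train][y[train] == 1].mean(axis=0)

# "Testing": classify each held-out point by its nearest class mean.
d0 = np.linalg.norm(X[test] - mean0, axis=1)
d1 = np.linalg.norm(X[test] - mean1, axis=1)
pred = (d1 < d0).astype(float)

accuracy = (pred == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

No rule about what separates the two classes appears anywhere in the code; the decision boundary is inferred entirely from the training data, which is what distinguishes this generation from the rule-driven first one.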
Zhang said the weakness of the second generation lies in its lack of explainability and robustness.
AI robustness refers to maintaining acceptably high performance even in worst-case scenarios.
Although artificial intelligence has already outperformed humans in certain areas like image recognition, nobody understands why these systems are doing so well.
Machine learning and deep learning, the most common AI branches of recent years, suffer from the so-called “AI black box” problem: people find it hard to interpret AI-based decisions and cannot predict when, or how, a model will fail.
Meanwhile, even accurate AI models can be vulnerable to “adversarial attacks” in which subtle differences are introduced to input data to manipulate AI “reasoning”.
For instance, an artificial intelligence system might mistake a sloth for a racing car if imperceptible changes are made to a photo of a sloth.
Researchers therefore need to improve and verify the robustness of artificial intelligence models, leaving no room for adversarial examples, or outright attacks, to manipulate results.
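To make the sloth example concrete, the sketch below crafts an adversarial perturbation against a toy linear classifier, in the style of the fast gradient sign method. Everything here is illustrative – the weights, the labels and the epsilon are invented for the demonstration, not drawn from any real image model – but the mechanism is the one the article describes: every pixel is changed by a tiny, bounded amount, yet the model's answer flips.

```python
import numpy as np

# A toy linear classifier standing in for an image model: score = w . x,
# and a positive score means the model answers "sloth".
rng = np.random.default_rng(0)
w = rng.normal(size=784)          # weights for a 28x28 "image"

# An input the model labels "sloth": every pixel sits slightly on the
# "sloth" side of the decision boundary.
x = 0.1 * np.sign(w)

def label(v):
    return "sloth" if w @ v > 0 else "racing car"

# Adversarial perturbation: shift each pixel by at most epsilon in the
# direction that most lowers the score (the sign of the gradient, which
# for a linear model is simply sign(w)).
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(label(x), "->", label(x_adv))      # sloth -> racing car

# The per-pixel change never exceeds epsilon, so the two inputs would
# look essentially identical to a human.
assert np.max(np.abs(x_adv - x)) <= epsilon
```

The attack works because thousands of individually negligible per-pixel changes all push the score in the same direction, so their effect on the model's decision adds up – which is why verifying robustness, not just average accuracy, matters in safety-critical deployments.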
If AI technologies are deployed in security-sensitive or safety-critical scenarios, the next generation needs to be comprehensible and more robust, said Zhang.
Zhu Jun, director of the new center, said it will carry out interdisciplinary studies and expects to attract talent from around the world, providing them with a relaxed academic environment.
He said Tsinghua University plans to host a high-level, fully open artificial intelligence meeting every year.
“If anything helps innovation, we’ll give it a try,” said Zhu.
“It’s hard to predict the progress of research on fundamental theories. It could be explosive and trail-blazing.”