A University of Central Florida researcher has received funding from the U.S. Department of Energy (DOE) to enhance the current understanding of artificial intelligence (AI) reasoning.

The project focuses on developing algorithms to create robust multi-modal explanations for foundation, or large, AI models through the exploration of several novel explainable AI methods. The DOE recently awarded $400,000 to fund the project.

The project was one of 22 proposals selected for the DOE’s 2022 Exploratory Research for Extreme-Scale Science (EXPRESS) grant, which promotes the study of innovative, high-impact ideas for advancing scientific discovery.

Unlike task-specific models, foundation models are trained on large, broad datasets and can be adapted to a wide range of tasks.

These models outperform humans on many challenging tasks and are already used in real-world applications such as autonomous vehicles and scientific research. However, few methods exist for explaining their decisions to humans, which blocks the wide adoption of AI in fields that ultimately require human trust, such as science.

The researchers say that algorithms providing meaningful explanations of a model's decision-making will allow AI systems to be deployed with higher levels of human trust and understanding.

Rickard Ewetz, lead researcher of the project and an associate professor in UCF’s Department of Electrical and Computer Engineering, says AI models need to be transparent in order to be trusted by humans.

“It’s not just a black box that takes an input and gives an output. You need to be able to explain how the neural network reasons,” Ewetz says.

Many explainable AI efforts over the past decade have focused on examining model gradients. Instead, the project aims to provide meaningful explanations of AI models through innovations such as symbolic reasoning, describing a model's decision process with trees, graphs, automata and equations.
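As a rough illustration of the general idea behind tree-based symbolic explanations (not the project's actual algorithms), the sketch below fits a shallow decision tree as a surrogate for a trained neural network and prints the resulting rules; the dataset, model and library choices are assumptions made purely for demonstration.

```python
# Hypothetical sketch: describe a neural network's behavior with a
# decision-tree surrogate, one simple form of symbolic explanation.
# Dataset, model and hyperparameters are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# "Black box" model whose decisions we want to explain.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# Fit a shallow tree to mimic the network's predictions rather than the raw
# labels, so the tree's rules approximate how the network actually behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable, rule-based description of the network's decision boundary.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))

# Fidelity: how often the symbolic explanation agrees with the black box.
print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
```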

The researchers aim not only to provide the needed explanations for a model's decision-making but also to estimate the accuracy of those explanations and the model's knowledge limits.

Sumit Jha, co-researcher of the project and a computer science professor at the University of Texas at San Antonio, says that explainable AI is especially necessary with the rapid deployment of AI models.

“In general, AI will not tell you why it made a mistake or provide explanations for what it is doing,” Jha says. “People are accepting AI with a sort of blind trust that it is going to work. This is very worrying because eventually there will be good AI and bad AI.”

Ewetz received his doctorate in electrical and computer engineering from Purdue University and joined UCF’s College of Engineering and Computer Science in 2016. His primary research areas include AI and machine learning, emerging computing paradigms and future computing systems, and computer-aided design for very large-scale integration.