
Zhou Yu and Ian Davidson Combine for Nine Accepted AAAI Papers

Computer science at UC Davis will be well represented at the Thirty-Fourth AAAI Conference on Artificial Intelligence this February, as assistant professor Zhou Yu and professor Ian Davidson had a combined nine papers accepted.

The Association for the Advancement of Artificial Intelligence (AAAI) conference is one of the largest and most selective conferences in computer science. Of almost 9,000 submissions this year, only about 1,500 papers were accepted, including five from Yu’s group and four from Davidson’s.

“It’s very unusual for any faculty member to get that many in, especially for a junior faculty member like Zhou,” said Davidson. “That is a rare achievement.”

Yu’s research aims to advance seamless communication between humans and machines. Three of her papers focus on dialogue systems, computer programs that talk to humans, and build on her group’s success in the Amazon Alexa Challenge last year. Her current research looks at building dialogue systems that need less data and fewer annotations so they can be deployed more easily. The work has many applications, including a noncollaborative chatbot to combat spam calls.

A fourth paper aims to help machines understand incomplete sentences in human speech, and a fifth looks at training a machine on data from real social media editors to recommend changes to story headlines. Notably, each of Yu’s papers has a different student as first author, and none of these students had a paper accepted at AAAI before.

“It’s very exciting for all these students,” said Yu. “Having papers accepted by top conferences not only gives students positive feedback, but it’s also validation that our group is doing things people agree are interesting, impactful and worth introducing to the field.”

Davidson’s papers focus on fairness and explainability in AI. His goal is to ensure that AI isn’t biased while giving it the means to explain its decisions. This is especially important as AI becomes more ubiquitous and begins replacing humans in high-stakes domains like education, hiring and the judiciary. The bigger and more controversial the decisions a machine makes, the more important it is that the machine can explain why it made them. Explainable AI (XAI) will help researchers better understand a model’s reasoning and identify any potential bias.

“Explanation and fairness are critical for AI as it becomes more pervasive in society and starts replacing humans,” said Davidson. “We’re not just making predictions, we’re explaining why we’re making them and ensuring that they’re fair.”

Yu and Davidson will present their work at the Thirty-Fourth AAAI Conference on Artificial Intelligence, which will be held February 7–12, 2020, in New York City.

