"Improving Explanations for Model Predictions"

by George Chrysostomou, University of Sheffield

Update: the event has now finished (Oct 25th 2021).

Abstract

Large neural models dominate natural language understanding benchmarks. Their success has led to increasing adoption in critical areas such as health and law. A significant drawback of these models is their highly parameterized architecture, which makes their predictions hard to interpret. Previous work has introduced approaches for generating rationales for model predictions (e.g. using feature attribution). However, how accurately these approaches explain the reasoning behind a model’s prediction has only recently been studied. This seminar will introduce three studies that aim to improve explanations for model predictions: (1) Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification (published at ACL 2021); (2) Towards Better Transformer-based Faithful Explanations with Word Salience (published at EMNLP 2021); and (3) Instance-level Rationalization of Model Predictions (under review at AAAI 2021).
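To illustrate the kind of feature attribution mentioned above (not the specific methods of the papers listed), a minimal input-times-gradient saliency sketch in PyTorch might look like the following; TinyClassifier and the token ids are hypothetical placeholders standing in for a real model and tokenized input.

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    # Hypothetical toy text classifier, used only to illustrate attribution.
    def __init__(self, vocab_size=1000, emb_dim=32, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.fc = nn.Linear(emb_dim, n_classes)

model = TinyClassifier()
token_ids = torch.tensor([4, 27, 311, 9])  # toy token ids for one sentence

# Re-embed the tokens as a leaf tensor so gradients can be taken w.r.t. the embeddings.
embeddings = model.emb(token_ids).detach().requires_grad_(True)
logits = model.fc(embeddings.mean(dim=0))   # (n_classes,)
logits[logits.argmax()].backward()          # gradient of the predicted class score

# Input x gradient: one attribution score per token (larger magnitude = more influential),
# which can be read as a rationale for the prediction.
attributions = (embeddings * embeddings.grad).sum(dim=-1)
print(attributions)

Whether such scores faithfully reflect the model’s reasoning is exactly the question the studies above address.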

Speaker’s bio

George Chrysostomou is a PhD student at the University of Sheffield, supervised by Dr. Nikolaos Aletras and Dr. Mauricio Alvarez. His research interests lie in improving explanations for model predictions in Natural Language Processing. Before pursuing his doctoral studies, he completed a master’s degree in Data Analytics at the University of Sheffield.
