Why do these match? Explaining the behavior of image similarity models
Files
Accepted manuscript
Date
2020
Authors
Plummer, Bryan A.
Vasileva, Mariya I.
Petsiuk, Vitali
Saenko, Kate
Forsyth, David A.
OA Version
Published version
Citation
Bryan A. Plummer, Mariya I. Vasileva, Vitali Petsiuk, Kate Saenko, David A. Forsyth. 2020. "Why Do These Match? Explaining the Behavior of Image Similarity Models." ECCV (11), Volume 12356, pp. 652-669. https://doi.org/10.1007/978-3-030-58621-8_38
Abstract
Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce Salient Attributes for Network Explanation (SANE) to explain image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification score. In this task, an explanation depends on both input images, so standard methods do not apply. Our SANE explanations pair a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2. Code available at: https://github.com/VisionLearningGroup/SANE
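For readers curious how an explanation of this form might be computed, the following is a minimal sketch in Python, assuming an occlusion-style saliency procedure for a two-input similarity model and a set of precomputed attribute activation maps. The function names, parameters, and the toy similarity function are illustrative assumptions, not the authors' implementation; see the linked repository for the actual SANE code.

import numpy as np

def occlusion_saliency(sim_fn, query, reference, patch=16, stride=8, fill=0.0):
    # Slide an occluding patch over the query image; regions whose removal
    # most lowers the similarity score are treated as most important to the match.
    h, w = query.shape[:2]
    base = sim_fn(query, reference)
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = query.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = base - sim_fn(occluded, reference)
            saliency[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1.0)

def best_attribute(saliency, attribute_maps, attribute_confidences):
    # Rank candidate attributes by how well their spatial activation map
    # overlaps the saliency map, weighted by the attribute classifier's confidence.
    def score(name):
        m = attribute_maps[name]
        return attribute_confidences[name] * float((m * saliency).sum() / (m.sum() + 1e-8))
    return max(attribute_maps, key=score)

# Toy usage with random images and a stand-in similarity function.
rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
sim = lambda a, b: -float(((a - b) ** 2).mean())  # placeholder, not a learned model
sal = occlusion_saliency(sim, img_a, img_b)
attr_maps = {"striped": rng.random((64, 64)), "floral": rng.random((64, 64))}
attr_conf = {"striped": 0.9, "floral": 0.4}
print(best_attribute(sal, attr_maps, attr_conf))

The occlusion loop above is only one common way to obtain a saliency map when the model's output is a similarity score rather than a class probability; the paper's SANE approach pairs such saliency maps with a learned attribute explanation model rather than the simple overlap heuristic shown here.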