Learning translations via images with a massively multilingual image dataset
Citation (published version): John Hewitt, Daphne Ippolito, Brendan Callahan, Reno Kriz, Derry Wijaya, and Chris Callison-Burch. 2018. "Learning Translations via Images with a Massively Multilingual Image Dataset." Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia, 2018-07-15 to 2018-07-20. https://doi.org/10.18653/v1/P18-1239
We conduct the most comprehensive study to date into translating words via images. To facilitate research on the task, we introduce a large-scale multilingual corpus of images, each labeled with the word it represents. Past datasets have been limited to only a few high-resource languages and unrealistically easy translation settings. In contrast, we have collected by far the largest available dataset for this task, with images for approximately 10,000 words in each of 100 languages. We run experiments on a dozen high-resource languages and 20 low-resource languages, demonstrating the effect of word concreteness and part-of-speech on translation quality. We find that while image features work best for concrete nouns, they are sometimes effective on other parts of speech. To improve image-based translation, we introduce a novel method of predicting word concreteness from images, which improves on a previous state-of-the-art unsupervised technique. This allows us to predict when image-based translation may be effective, enabling consistent improvements to a state-of-the-art text-based word translation system. Our code and the Massively Multilingual Image Dataset (MMID) are available at http://multilingual-images.org/.
Rights: Copyright 2018 Association for Computational Linguistics. Creative Commons Attribution 4.0 International License.