Boston University Libraries OpenBU

    Learning to scale multilingual representations for vision-language tasks

    Date Issued
    2020
    Author(s)
    Burns, Andrea
    Kim, Donghyun
    Wijaya, Derry
    Saenko, Kate
    Plummer, Bryan A.
    Permanent Link
    https://hdl.handle.net/2144/43499
    Version
    Published version
    Citation (published version)
    A. Burns, D. Kim, D. Wijaya, K. Saenko, B.A. Plummer. 2020. "Learning to Scale Multilingual Representations for Vision-Language Tasks." CoRR, Volume abs/2004.04312. https://arxiv.org/abs/2004.04312
    Abstract
    Current multilingual vision-language models either require a large number of additional parameters for each supported language or suffer performance degradation as languages are added. In this paper, we propose a Scalable Multilingual Aligned Language Representation (SMALR) that supports many languages with few model parameters without sacrificing downstream task performance. SMALR learns a fixed-size language-agnostic representation for most words in a multilingual vocabulary, keeping language-specific features for just a few. We use a masked cross-language modeling loss to align features with context from other languages. Additionally, we propose a cross-lingual consistency module that ensures predictions made for a query and its machine translation are comparable. The effectiveness of SMALR is demonstrated with ten diverse languages, over twice the number supported in vision-language tasks to date. We evaluate on multilingual image-sentence retrieval and outperform prior work by 3-4% using less than one-fifth of the training parameters of other word embedding methods.
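    The abstract's central idea, a single fixed-size language-agnostic embedding table for most of the multilingual vocabulary plus small per-language tables for the few words kept language-specific, can be sketched in a few lines of PyTorch. This is a minimal illustration of that design, not the authors' implementation: all class names, vocabulary sizes, and the boolean-mask lookup scheme below are hypothetical choices, and the masked cross-language modeling loss and the cross-lingual consistency module are omitted.

    import torch
    import torch.nn as nn

    class HybridMultilingualEmbedding(nn.Module):
        """Hypothetical sketch: one shared, language-agnostic table for most
        words, plus tiny language-specific tables for the remainder."""

        def __init__(self, shared_vocab_size: int, lang_vocab_sizes: dict, dim: int):
            super().__init__()
            # One fixed-size table covers the bulk of the vocabulary, so
            # adding a new language adds almost no parameters.
            self.shared = nn.Embedding(shared_vocab_size, dim)
            # Small per-language tables hold only language-specific words.
            self.lang_specific = nn.ModuleDict(
                {lang: nn.Embedding(size, dim) for lang, size in lang_vocab_sizes.items()}
            )

        def forward(self, token_ids: torch.Tensor, lang: str, is_shared: torch.Tensor):
            # is_shared marks positions looked up in the shared table; the
            # rest use this language's private table. Multiplying ids by the
            # mask zeroes out the other positions, keeping every lookup in
            # range for its table (index 0 is valid in both).
            shared_vecs = self.shared(token_ids * is_shared)
            private_vecs = self.lang_specific[lang](token_ids * ~is_shared)
            return torch.where(is_shared.unsqueeze(-1), shared_vecs, private_vecs)

    A short usage example under the same assumptions: a sentence whose last token is German-specific draws two vectors from the shared table and one from the small German table, so most parameters are shared across all ten languages.

    emb = HybridMultilingualEmbedding(30000, {"en": 500, "de": 500}, dim=300)
    ids = torch.tensor([[3, 17, 2]])            # token ids for one sentence
    mask = torch.tensor([[True, True, False]])  # last token is German-specific
    vecs = emb(ids, lang="de", is_shared=mask)  # -> shape (1, 3, 300)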
    Collections
    • CAS: Computer Science: Scholarly Papers [257]
    • BU Open Access Articles [4751]

