CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation

1Boston College 2BITS Pilani 3Microsoft, India 4Harvard University
Published in the International Conference on Computer Vision (ICCV), 2023

Problem Setup

teaser image

We aim to enhance machine translation by using images during training. A transfer-learning approach would initialise the weights with a pre-trained multilingual multimodal model; however, existing pre-trained models are either multilingual or multimodal, but not both.

Overview

gif showing training pipeline

We thus propose CLIPTrans, which combines these independently pre-trained models in a data-efficient manner. The models are first aligned with a lightweight mapping network on an image-captioning task, and then trained on the actual translation task. We find that this beats finetuning a text-only machine translation model, and advances the state-of-the-art on multimodal machine translation.

qualitative examples: captioning vs. translation outputs

Does captioning really help? We show this qualitatively. Inference is run with two different inputs: images (whose output is a "caption") and German source texts (whose output is a "translation"). The captions capture the gist needed for translation; however, the exact wording can only be learned by finetuning on source texts. Notably, even without this finetuning, CLIPTrans can translate some of the semantics of the German text without ever seeing paired texts. This also motivates the second-stage translation task.
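To make these two inference modes concrete, here is a minimal, hypothetical sketch rather than the released code: it assumes M-CLIP features are precomputed tensors, uses an illustrative single-linear-layer mapper with made-up dimensions and prefix length, and relies on recent Hugging Face transformers versions that accept inputs_embeds in generate for encoder-decoder models.

import torch
import torch.nn as nn
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

mbart = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="de_DE", tgt_lang="en_XX")

# Hypothetical mapper: one M-CLIP vector (512-d) -> a prefix of 10 soft tokens (1024-d each).
PREFIX_LEN, CLIP_DIM, MBART_DIM = 10, 512, 1024
mapper = nn.Linear(CLIP_DIM, PREFIX_LEN * MBART_DIM)

@torch.no_grad()
def generate_from_prefix(clip_feats, src_text=None, max_new_tokens=60):
    # clip_feats: (1, CLIP_DIM) M-CLIP embedding of either an image or the source sentence.
    prefix = mapper(clip_feats).view(1, PREFIX_LEN, MBART_DIM)
    if src_text is not None:
        # Translation mode: prefix soft tokens followed by the tokenized German source.
        enc = tokenizer(src_text, return_tensors="pt")
        # mBART scales token embeddings internally when given input_ids; mirror that scaling.
        tok = mbart.get_input_embeddings()(enc.input_ids) * (mbart.config.d_model ** 0.5)
        inputs_embeds = torch.cat([prefix, tok], dim=1)
        attn = torch.cat([torch.ones(1, PREFIX_LEN, dtype=torch.long), enc.attention_mask], dim=1)
    else:
        # Captioning mode: the prefix alone conditions the decoder.
        inputs_embeds, attn = prefix, torch.ones(1, PREFIX_LEN, dtype=torch.long)
    out = mbart.generate(inputs_embeds=inputs_embeds, attention_mask=attn,
                         forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
                         max_new_tokens=max_new_tokens)
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]

# "Caption" from an image embedding vs. "translation" from a German-text embedding:
# caption     = generate_from_prefix(clip_image_feats)
# translation = generate_from_prefix(clip_text_feats, src_text="ein Hund spielt im Park")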

Abstract

There has been a growing interest in developing multimodal machine translation (MMT) systems that enhance neural machine translation (NMT) with visual knowledge. This problem setup involves using images as auxiliary information during training, and more recently, eliminating their use during inference. Towards this end, previous works face a challenge in training powerful MMT models from scratch due to the scarcity of annotated multilingual vision-language data, especially for low-resource languages.

Simultaneously, there has been an influx of multilingual pre-trained models for NMT and multimodal pre-trained models for vision-language tasks, primarily in English, which have shown exceptional generalisation ability. However, these are not directly applicable to MMT since they do not provide aligned multimodal multilingual features for generative tasks.

To alleviate this issue, instead of designing complex modules for MMT, we propose CLIPTrans, which simply adapts the independently pre-trained multimodal M-CLIP and the multilingual mBART. In order to align their embedding spaces, mBART is conditioned on the M-CLIP features by a prefix sequence generated through a lightweight mapping network. We train this in a two-stage pipeline which warms up the model with image captioning before the actual translation task. Through experiments, we demonstrate the merits of this framework and consequently push forward the state-of-the-art across standard benchmarks by an average of +2.67 BLEU.
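As a rough illustration of this two-stage recipe, the sketch below (a hypothetical reimplementation, not the authors' release) uses a single mapping network for both stages: stage 1 maps M-CLIP image features to a prefix and supervises mBART with the caption, while stage 2 maps M-CLIP source-text features to the same kind of prefix, prepends it to the tokenized German sentence, and supervises with the English translation. The feature tensors, dimensions, mapper architecture, and prefix length are all assumptions.

import torch
import torch.nn as nn
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

class MappingNetwork(nn.Module):
    # Maps one M-CLIP embedding to a fixed-length prefix in mBART's embedding space.
    def __init__(self, clip_dim=512, mbart_dim=1024, prefix_len=10):
        super().__init__()
        self.prefix_len, self.mbart_dim = prefix_len, mbart_dim
        hidden = mbart_dim * prefix_len // 2
        self.net = nn.Sequential(nn.Linear(clip_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, mbart_dim * prefix_len))

    def forward(self, clip_feats):                                  # (B, clip_dim)
        return self.net(clip_feats).view(-1, self.prefix_len, self.mbart_dim)

mbart = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="de_DE", tgt_lang="en_XX")
mapper = MappingNetwork()

def step_loss(clip_feats, tgt_texts, src_texts=None):
    # Stage 1 (warm-up): clip_feats = image embeddings, tgt_texts = captions, src_texts = None.
    # Stage 2 (translation): clip_feats = source-text embeddings, src_texts = German, tgt_texts = English.
    labels = tokenizer(text_target=tgt_texts, return_tensors="pt", padding=True).input_ids
    labels[labels == tokenizer.pad_token_id] = -100                 # ignore padding in the loss
    prefix = mapper(clip_feats)                                     # (B, P, D)
    if src_texts is not None:
        enc = tokenizer(src_texts, return_tensors="pt", padding=True)
        # mBART scales token embeddings internally when given input_ids; mirror that scaling here.
        tok = mbart.get_input_embeddings()(enc.input_ids) * (mbart.config.d_model ** 0.5)
        inputs_embeds = torch.cat([prefix, tok], dim=1)
        attn = torch.cat([torch.ones(prefix.shape[:2], dtype=torch.long), enc.attention_mask], dim=1)
    else:
        inputs_embeds = prefix
        attn = torch.ones(prefix.shape[:2], dtype=torch.long)
    return mbart(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels).loss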

Video

Poster

CLIPTrans poster

BibTeX

@inproceedings{gupta2023cliptrans,
    title={CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation},
    author={Gupta, Devaansh and Kharbanda, Siddhant and Zhou, Jiawei and Li, Wanhua and Pfister, Hanspeter and Wei, Donglai},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
    year={2023}
}