MoMo: A shared encoder Model for text, image and multi-Modal representations (2304.05523v1)

Published 11 Apr 2023 in cs.CV, cs.AI, and cs.CL

Abstract: We propose a self-supervised shared encoder model that achieves strong results on several visual, language and multimodal benchmarks while being data, memory and run-time efficient. We make three key contributions. First, in contrast to most existing works, we use a single transformer with all the encoder layers processing both the text and the image modalities. Second, we propose a stage-wise training strategy where the model is first trained on images, then jointly with unimodal text and image datasets and finally jointly with text and text-image datasets. Third, to preserve information across both the modalities, we propose a training pipeline that learns simultaneously from gradient updates of different modalities at each training update step. The results on downstream text-only, image-only and multimodal tasks show that our model is competitive with several strong models while using fewer parameters and lesser pre-training data. For example, MoMo performs competitively with FLAVA on multimodal (+3.1), image-only (+1.1) and text-only (-0.1) tasks despite having 2/5th the number of parameters and using 1/3rd the image-text training pairs. Finally, we ablate various design choices and further show that increasing model size produces significant performance gains indicating potential for substantial improvements with larger models using our approach.

Summary

  • The paper presents a novel shared encoder model that unifies text, image, and multimodal tasks, reducing complexity and resource usage.
  • It employs a stage-wise training methodology, starting with image pretraining and advancing to text and combined modalities, improving overall learning efficiency.
  • Experimental results demonstrate competitive performance, including up to 35% faster inference on Visual Question Answering compared to models such as FLAVA.

MoMo: A Shared Encoder Model for Multimodal Representations

MoMo introduces a novel approach to multimodal representation learning: a shared encoder model that handles text-only, image-only, and image-text tasks within a single unified architecture. By consolidating all modalities into one encoder, it achieves competitive performance across diverse benchmarks while reducing parameter count, memory usage, and runtime.

Model Architecture and Training Strategy

Shared Transformer Encoder

MoMo's architecture revolves around a single transformer encoder capable of processing unimodal inputs (either text or image) and cross-modal inputs (text and image combined). This shared encoder framework contrasts with traditional architectures that typically separate encoders by modality. The advantage of MoMo's design lies in its efficiency: it reduces the need for multiple specialized models while decreasing runtime and memory demands (Figure 1).

Figure 1: Illustrations of model architectures processing various modalities, highlighting MoMo's unified transformer approach.
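
The unified design described above can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration rather than the authors' implementation: the class and hyperparameter names (SharedEncoder, d_model, patch_size, and so on) are assumptions, and details such as special tokens, attention masks, and normalization are omitted.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One transformer stack whose layers process text, image, or both."""

    def __init__(self, vocab_size=30522, d_model=768, n_layers=12, n_heads=12,
                 img_size=224, patch_size=16, max_text_len=512):
        super().__init__()
        # Modality-specific embedders feed the SAME transformer layers.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.text_pos = nn.Parameter(torch.zeros(1, max_text_len, d_model))
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.img_pos = nn.Parameter(torch.zeros(1, num_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # shared across modalities

    def forward(self, text_ids=None, image=None):
        tokens = []
        if text_ids is not None:  # text-only input, or the text half of a pair
            tokens.append(self.text_embed(text_ids) + self.text_pos[:, : text_ids.size(1)])
        if image is not None:     # image-only input, or the image half of a pair
            patches = self.patch_embed(image).flatten(2).transpose(1, 2)
            tokens.append(patches + self.img_pos)
        return self.encoder(torch.cat(tokens, dim=1))

# One encoder serves all three input types.
enc = SharedEncoder(n_layers=2)                      # small config for illustration
text = torch.randint(0, 30522, (2, 16))
image = torch.randn(2, 3, 224, 224)
t_out = enc(text_ids=text)                           # text-only
i_out = enc(image=image)                             # image-only
m_out = enc(text_ids=text, image=image)              # multimodal (concatenated sequence)
```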

Stage-wise Training Methodology

The training process for MoMo is structured into three stages. First, MoMo is pretrained on image datasets with a masked image modeling (MIM) objective to learn visual structure. The second stage trains jointly on unimodal text and image datasets, and the third adds text-image pairs alongside text data. At every update step, gradients from the different modalities are combined into a single parameter update, which helps preserve information across modalities that is often lost when they are processed independently (Figure 2).

Figure 2: MoMo's sequential training stages, illustrating progressive unified learning across modalities.
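
One plausible reading of the joint-update step described above is a single optimizer step that accumulates gradients from several modality-specific losses. The sketch below assumes the SharedEncoder interface from the previous snippet; the heads, loss functions, and batch layout are placeholders, not the paper's actual training code.

```python
import torch

def joint_training_step(encoder, heads, losses, batches, optimizer):
    """One update that combines gradients from several modality objectives.

    batches: dict like {"image": ..., "text": ..., "image_text": ...}, each value
             holding {"inputs": kwargs for the encoder, "targets": labels}
    heads:   dict of task-specific heads/decoders keyed by modality (assumed)
    losses:  dict of callables loss_fn(predictions, targets) keyed by modality (assumed)
    """
    optimizer.zero_grad()
    total = 0.0
    for modality, batch in batches.items():
        features = encoder(**batch["inputs"])        # forward pass through the shared encoder
        loss = losses[modality](heads[modality](features), batch["targets"])
        loss.backward()                              # gradients accumulate on the shared weights
        total += loss.item()
    optimizer.step()                                 # one update from the combined gradients
    return total

# Stage-wise schedule (sketch): stage 1 feeds only image batches (MIM),
# stage 2 adds unimodal text batches, stage 3 adds text-image pair batches.
```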

Experimental Results and Discussion

Performance Benchmarks

MoMo is evaluated across a spectrum of unimodal (language, image) and multimodal tasks, demonstrating its competitive edge. In direct comparisons with models such as FLAVA and CLIP, MoMo reaches comparable or better macro-average accuracy with fewer parameters and less pre-training data: relative to FLAVA, it improves on multimodal (+3.1) and image-only (+1.1) benchmarks and is nearly on par on text-only tasks (-0.1), while using roughly 2/5 of the parameters and 1/3 of the image-text training pairs (Figure 3).

Figure 3: Comparison with FLAVA, indicating MoMo's superior macro accuracy with reduced parameters and training data.

Model Efficiency

The model's shared encoder architecture lowers computational costs considerably, delivering up to a 35% speedup on inference tasks such as Visual Question Answering. Shared parameters and an efficient training pipeline reduce operational overhead, making MoMo a pragmatic choice for deployments that require versatility across text, image, and multimodal tasks.
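
As an illustration only (not a measurement from the paper), the latency of a VQA-style forward pass through a shared encoder could be checked roughly as follows, reusing the hypothetical SharedEncoder sketch above; batch size, sequence length, and depth are arbitrary, and any observed speedup depends on the baseline and hardware.

```python
import time
import torch

enc = SharedEncoder(n_layers=2).eval()      # small config; real models are deeper
text = torch.randint(0, 30522, (8, 32))     # question tokens
image = torch.randn(8, 3, 224, 224)         # corresponding images

with torch.no_grad():
    for _ in range(3):                      # warm-up
        enc(text_ids=text, image=image)
    start = time.perf_counter()
    for _ in range(20):
        enc(text_ids=text, image=image)     # one pass covers both modalities
    avg_ms = (time.perf_counter() - start) / 20 * 1e3
print(f"avg multimodal forward latency: {avg_ms:.1f} ms")
```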

Ablation Studies

A thorough ablation study reveals critical insights into MoMo's architecture and training decisions. Separating the training stages for unimodal and multimodal objectives validates the efficacy of the staged approach, and maintaining separate decoders for each training stage improves representation learning for the distinct modalities, confirming the impact of these design choices on downstream performance.

Conclusion

MoMo represents a significant step forward in multimodal representation learning, offering a unified model that balances efficiency with competitive performance. Its shared encoder model reduces complexity and enhances deployment flexibility, fostering advancements in multimodal AI applications. Future research prospects include scaling model architectures and datasets to further harness MoMo's potential in expansive multimodal tasks. As AI technologies gravitate towards integrating varied data types, MoMo's contributions underscore the critical importance of adaptable, resource-efficient models in advancing the field.
