
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series

(2405.19327)
Published May 29, 2024 in cs.CL, cs.AI, and cs.LG

Abstract

LLMs have made great strides in recent years, achieving unprecedented performance across different tasks. However, due to commercial interests, the most competitive models such as GPT, Gemini, and Claude have been gated behind proprietary interfaces without disclosing their training details. Recently, many institutions have open-sourced several strong LLMs, such as LLaMA-3, that are comparable to existing closed-source LLMs. However, only the models' weights are provided, with most details (e.g., intermediate checkpoints, pre-training corpus, and training code) left undisclosed. To improve the transparency of LLMs, the research community has moved toward truly open LLMs (e.g., Pythia, Amber, OLMo), for which more details (e.g., pre-training corpus and training code) are provided. These models have greatly advanced the scientific study of large models, including their strengths, weaknesses, biases, and risks. However, we observe that existing truly open LLMs are still inferior to state-of-the-art LLMs of similar size on reasoning, knowledge, and coding tasks. To this end, we open-source MAP-Neo, a highly capable and transparent bilingual language model with 7B parameters trained from scratch on 4.5T high-quality tokens. MAP-Neo is the first fully open-sourced bilingual LLM with performance comparable to existing state-of-the-art LLMs. Moreover, we open-source all details needed to reproduce MAP-Neo, including the cleaned pre-training corpus, data cleaning pipeline, checkpoints, and a well-optimized training/evaluation framework. Finally, we hope MAP-Neo will enhance and strengthen the open research community and inspire more innovation and creativity to facilitate further improvements of LLMs.

MAP-Neo outperforms both popular open-weight models and recent transparent large language models, in both base and chat variants.

Overview

  • MAP-Neo is a highly transparent, fully open-sourced bilingual large language model with 7 billion parameters, offering industry-level performance.

  • The model features a comprehensive development pipeline with detailed data curation processes, access to pre-training corpus, intermediate checkpoints, and an evaluation framework.

  • MAP-Neo's transparency and high-quality pre-training data result in strong performance across multiple benchmarks, promoting reproducibility and independent validation in NLP.

An Academic Analysis of MAP-Neo: A Fully Transparent Bilingual Large Language Model

The paper under review introduces MAP-Neo, a novel, fully open-sourced and transparent bilingual language model with 7 billion parameters. The authors present a comprehensive overview of the entire pipeline used in developing MAP-Neo, ranging from data curation and training processes to the disclosure of model checkpoints and training frameworks. This essay provides an expert appraisal of the research findings and their implications for the field of NLP.

Overview of MAP-Neo

MAP-Neo stands out in the current landscape of LLMs due to its emphasis on transparency and full open-sourcing. The model addresses several critical gaps in the open-source community, particularly the need for high-performance models that are on par with proprietary solutions. Notably, the paper reports that MAP-Neo achieves competitive performance through a transparent development process that includes access to the pre-training corpus (Matrix Data Pile), detailed data curation pipelines, checkpoints, and an optimized training and evaluation framework.

Transparency and Open Source Commitment

One of the significant contributions of MAP-Neo, as discussed in the paper, is its unmatched level of transparency. Unlike many open-weight models such as LLaMA-3 and BLOOM, which often lack comprehensive details about their pre-training data and intermediate checkpoints, MAP-Neo releases the cleaned pre-training corpus, data cleaning pipeline, training code, intermediate checkpoints, and evaluation framework, making it a highly reproducible model.
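
Because the weights and intermediate checkpoints are published, the model can be inspected directly. The sketch below shows how such artifacts are typically loaded with the Hugging Face transformers library; the repository id and checkpoint revision name are assumptions for illustration, not confirmed by the paper, and should be checked against the project's official release page.

```python
# Minimal sketch: loading released MAP-Neo weights with Hugging Face transformers.
# The repository id and revision tag below are assumed for illustration;
# consult the official release for the exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "m-a-p/neo_7b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",
    # trust_remote_code=True,  # may be needed if the release ships custom modeling code
)

# If intermediate checkpoints are published as separate revisions/branches,
# they could be selected with the `revision` argument (branch name assumed):
# model = AutoModelForCausalLM.from_pretrained(repo_id, revision="checkpoint-step-100000")

prompt = "Explain why transparent training pipelines matter for LLM research."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```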

Data Curation and Pre-Processing

The authors introduce the Matrix Data Pile, a large-scale pre-training corpus comprising 4.5 trillion tokens. The data curation process combines sophisticated data filtering, deduplication, and a robust document conversion pipeline. Given the critical role of high-quality data in LLM development, the paper's comprehensive data processing and cleaning methodologies help ensure the reliability and effectiveness of the model. The authors also provide a detailed breakdown of the corpus composition, underscoring the rigorous multi-stage data cleaning and quality assurance techniques employed.
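
To make the deduplication step concrete, the following is a minimal, self-contained sketch of near-duplicate detection with MinHash, a common building block of corpus deduplication pipelines. It is not the exact recipe used for the Matrix Data Pile; the shingle size, number of hash seeds, and decision threshold are assumptions for illustration.

```python
# Near-duplicate detection with MinHash signatures (illustrative sketch only;
# parameters are assumptions, not the paper's actual deduplication settings).
import hashlib
from typing import List, Set


def shingles(text: str, n: int = 5) -> Set[str]:
    """Character n-grams used as the document's feature set."""
    text = " ".join(text.split()).lower()
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}


def minhash_signature(features: Set[str], num_perm: int = 64) -> List[int]:
    """Approximate permutation-based MinHash with seeded MD5 hashes."""
    return [
        min(int(hashlib.md5(f"{seed}:{f}".encode()).hexdigest(), 16) for f in features)
        for seed in range(num_perm)
    ]


def estimated_jaccard(sig_a: List[int], sig_b: List[int]) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


docs = [
    "MAP-Neo is a transparent bilingual language model.",
    "MAP-Neo is a transparent bilingual language model!",  # near duplicate
    "The quick brown fox jumps over the lazy dog.",
]
sigs = [minhash_signature(shingles(d)) for d in docs]
print(estimated_jaccard(sigs[0], sigs[1]))  # close to 1.0 -> drop one copy
print(estimated_jaccard(sigs[0], sigs[2]))  # close to 0.0 -> keep both
```

In a production pipeline, the pairwise comparison would be replaced by locality-sensitive hashing so that candidate duplicates can be found without comparing every pair of documents.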

Numerical Results and Model Performance

MAP-Neo demonstrates strong performance across multiple benchmarks, particularly in code generation, mathematical reasoning, and multilingual understanding. Key numerical results highlighted in the paper include a HumanEval score of 23.8 and a GSM8K score of 53.68, placing MAP-Neo close to industry-level models such as LLaMA3-8B and Mistral-7B. This robust performance is attributed to the model's high-quality pre-training data and optimized training framework.
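
For context, HumanEval results are conventionally reported as pass@k. The sketch below shows the standard unbiased pass@k estimator (Chen et al., 2021); it is not the paper's own evaluation harness, only an illustration of how such a score is derived from n sampled completions per problem, of which c pass the unit tests.

```python
# Unbiased pass@k estimator used in HumanEval-style code benchmarks.
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Illustrative numbers only: 200 samples for one problem, 40 of which pass.
print(round(pass_at_k(n=200, c=40, k=1), 3))   # 0.2 -> this problem's pass@1
print(round(pass_at_k(n=200, c=40, k=10), 3))  # higher for larger k
```

The benchmark score is the mean of this per-problem estimate over all problems in the suite.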

Implications and Future Directions

The introduction of MAP-Neo has several implications for both practical applications and future research. From a practical standpoint, the full transparency offered by MAP-Neo lowers the barrier for organizations and researchers to understand and leverage advanced LLM technologies without being constrained by proprietary limitations. The detailed disclosure of the model's training process and data curation paves the way for enhanced reproducibility and independent validation in the research community.

Theoretically, MAP-Neo sets a new standard for developing high-performance, transparent LLMs. This transparency can drive further innovations in NLP by enabling unbiased analyses of model behavior, identification of biases, and understanding of potential risks. The comprehensive release of the pre-training corpus and frameworks can also inspire new methodologies and optimizations in the field.

Conclusion

In conclusion, MAP-Neo represents a significant advancement in the development of open-source, transparent LLMs. Its bilingual capabilities, combined with fully disclosed training and evaluation pipelines, provide a valuable asset for the research community. The model not only demonstrates strong performance across various tasks but also highlights the importance of transparency and reproducibility in advancing NLP research. As the field continues to evolve, models like MAP-Neo will play a crucial role in democratizing access to LLM technologies and driving forward innovative research in artificial intelligence.
