We introduce the Falcon series: 7B, 40B, and 180B parameter causal decoder-only models trained on diverse, high-quality corpora predominantly assembled from web data. The largest model, Falcon-180B, has been trained on over 3.5 trillion tokens of text, the largest openly documented pretraining run. Falcon-180B significantly outperforms models such as PaLM or Chinchilla, and improves upon concurrently developed models such as LLaMA 2 or Inflection-1. It nears the performance of PaLM-2-Large at a reduced pretraining and inference cost, making it, to our knowledge, one of the three best language models in the world, along with GPT-4 and PaLM-2-Large. We report detailed evaluations, as well as a deep dive into the methods and custom tooling employed to pretrain Falcon. Notably, we report on our custom distributed training codebase, which allowed us to efficiently pretrain these models on up to 4,096 A100s on AWS cloud infrastructure with limited interconnect. We release a 600B-token extract of our web dataset, as well as the Falcon-7/40/180B models, under a permissive license to foster open science and accelerate the development of an open ecosystem of LLMs.
The Falcon series includes three open language models, Falcon-7B, Falcon-40B, and Falcon-180B, with the largest being trained on 3.5 trillion tokens.
The Falcon models challenge the prevailing preference for curated corpora by relying on high-quality, filtered web data, and introduce multigroup attention for efficient inference.
The models were trained on cloud infrastructure utilizing A100-40GB GPUs, 3D parallelism, ZeRO sharding, and FlashAttention kernels for efficiency.
Falcon-180B exhibits strong performance across various NLP tasks, showing promise for specialization in chatbot and code-related tasks.
The release under open licenses promotes AI research democratization and responsible use of LLMs.
The Falcon series, introduced by the Technology Innovation Institute, comprises three models, Falcon-7B, Falcon-40B, and Falcon-180B, spanning increasing scales of parameters and training compute. The largest, Falcon-180B, is notable for its training on an unprecedented 3,500 billion tokens of text data. These models are presented as significant contributions to the field of open language models, with the 180B variant released under a responsible AI license while the smaller models are under the Apache 2.0 license.
The research leading to the development of the Falcon models involved extensive experimentation to refine the architecture and pretraining datasets. The team took an innovative approach by relying heavily on high-quality web data, carefully filtered and deduplicated, challenging the belief that curated datasets are superior for training language models. This led to the decision not to repeat data during training, avoiding issues with data memorization and degradation. For the architecture, the team incorporated a variant of multiquery attention, known as multigroup attention, to improve inference efficiency, particularly by reducing the size of the required key-value memory cache.
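To make the cache saving concrete, here is a minimal NumPy sketch of grouped (multigroup) attention, in which several query heads share each key/value head. The function name, shapes, and masking details are illustrative assumptions, not Falcon's actual implementation:

```python
import numpy as np

def multigroup_attention(q, k, v, n_kv_groups):
    """Grouped attention sketch (illustrative, not Falcon's code).

    q: (n_heads, seq, head_dim) query projections
    k, v: (n_kv_groups, seq, head_dim) shared key/value projections
    The inference-time KV cache stores only n_kv_groups head pairs,
    shrinking it by a factor of n_heads / n_kv_groups versus multi-head.
    """
    n_heads, seq, d = q.shape
    assert k.shape[0] == n_kv_groups and n_heads % n_kv_groups == 0
    heads_per_group = n_heads // n_kv_groups
    out = np.empty_like(q)
    # causal mask: each position attends only to itself and the past
    mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
    for h in range(n_heads):
        g = h // heads_per_group          # KV group this head reads from
        scores = q[h] @ k[g].T / np.sqrt(d)
        scores[mask] = -np.inf
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[h] = w @ v[g]
    return out
```

Setting `n_kv_groups` to 1 recovers multiquery attention, and setting it equal to the number of heads recovers standard multi-head attention; the cache held during autoregressive decoding scales with the number of groups rather than the number of heads.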
Implementation-wise, the Falcon models are trained on cloud infrastructure, using cost-efficient methods and hardware like A100-40GB GPUs. This is enabled by a custom distributed training framework, Gigatron, which utilizes 3D parallelism and ZeRO optimizer sharding to optimize for memory and computational efficiency. Additionally, FlashAttention kernels are used to expedite training further.
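The memory saving behind ZeRO optimizer-state sharding can be sketched in a few lines of plain Python. This is a conceptual illustration under assumed sizes, not Gigatron's actual code:

```python
def shard_bounds(n_params: int, world_size: int, rank: int) -> tuple[int, int]:
    """Half-open slice [lo, hi) of the flat parameter vector whose
    optimizer state (e.g. Adam's two fp32 moment buffers) is owned
    by `rank` under ZeRO-style sharding."""
    per_rank = (n_params + world_size - 1) // world_size  # ceiling division
    lo = min(rank * per_rank, n_params)
    hi = min(lo + per_rank, n_params)
    return lo, hi

# Illustrative sizes: every rank keeps the full parameters for the
# forward/backward pass, but only its shard of the optimizer state.
n_params, world_size = 40_000_000_000, 64
lo, hi = shard_bounds(n_params, world_size, rank=0)
full_state_bytes = 2 * n_params * 4        # two fp32 moments, unsharded
shard_state_bytes = 2 * (hi - lo) * 4      # roughly full_state / world_size
```

In a ZeRO-style scheme, gradients are reduce-scattered so each rank updates only the parameters in its own shard, after which the updated parameters are all-gathered; per-rank optimizer memory drops by roughly a factor of the data-parallel world size.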
Upon evaluation, Falcon-180B demonstrates competitive performance on a variety of natural language processing tasks, positioning it among top language models such as those from OpenAI's GPT series and Google's PaLM family. Through evaluations using the EleutherAI Evaluation Harness, the Falcon models not only exhibit strong performance on NLP benchmarks but also demonstrate potential for specialization in areas like chatbot development and code-related tasks.
The authors acknowledge limitations in their research, including the potential for different results at larger scales and the possible need to decouple training from inference compute to manage downstream deployment costs. Moreover, Falcon models, predominantly trained on English web data, may struggle with out-of-scope languages and domains.
The release of Falcon models and a portion of the RefinedWeb dataset under open licenses represents a push towards democratization of AI research, fostering collaboration, and ensuring responsible use of LLMs. The models and accompanying research documentation have been made publicly available with the intention of contributing to collective advancement in AI technology.