Emergent Mind

Dual Operating Modes of In-Context Learning

(2402.18819)
Published Feb 29, 2024 in cs.LG

Abstract

In-context learning (ICL) exhibits dual operating modes: task learning, i.e., acquiring a new skill from in-context samples, and task retrieval, i.e., locating and activating a relevant pretrained skill. Recent theoretical work investigates various mathematical models to analyze ICL, but existing models explain only one operating mode at a time. We introduce a probabilistic model, with which one can explain the dual operating modes of ICL simultaneously. Focusing on in-context learning of linear functions, we extend existing models for pretraining data by introducing multiple task groups and task-dependent input distributions. We then analyze the behavior of the optimally pretrained model under the squared loss, i.e., the MMSE estimator of the label given in-context examples. Regarding pretraining task distribution as prior and in-context examples as the observation, we derive the closed-form expression of the task posterior distribution. With the closed-form expression, we obtain a quantitative understanding of the two operating modes of ICL. Furthermore, we shed light on an unexplained phenomenon observed in practice: under certain settings, the ICL risk initially increases and then decreases with more in-context examples. Our model offers a plausible explanation for this "early ascent" phenomenon: a limited number of in-context samples may lead to the retrieval of an incorrect skill, thereby increasing the risk, which will eventually diminish as task learning takes effect with more in-context samples. We also theoretically analyze ICL with biased labels, e.g., zero-shot ICL, where in-context examples are assigned random labels. Lastly, we validate our findings and predictions via experiments involving Transformers and LLMs.

Proposed probabilistic model for pretraining data, explaining ICL's dual modes and phenomena in LLMs.

Overview

  • The paper explores the dual operating modes of in-context learning (ICL) in LLMs, focusing on task learning and task retrieval.

  • It introduces a probabilistic model of pretraining data based on a Gaussian mixture, reflecting the clustered nature of real-world data more accurately than a single Gaussian distribution.

  • The study provides a quantitative analysis of how LLMs leverage in-context examples for task adaptation, highlighting phenomena like Component Shifting and Component Re-weighting.

  • It explains the 'early ascent' phenomenon in ICL risk and predicts a 'bounded efficacy' for ICL with biased labels, offering insights into practical applications and future research directions.

Understanding the Dual Operating Modes of In-Context Learning Through Probabilistic Modeling

In-context learning (ICL) has shown remarkable capabilities in adapting pretrained LLMs to new tasks from only a few examples. This paradigm lets a model either acquire a new skill from the provided in-context samples or retrieve and apply a relevant pretrained skill. Such flexibility in leveraging prior knowledge while adapting to new tasks underscores the dual operating modes of ICL: task learning and task retrieval.

The Study on Dual Operating Modes of ICL

The paper explores the dynamics of these dual modes by proposing a probabilistic model tailored to analyzing in-context learning of linear functions. Central to the approach is modeling pretraining tasks as drawn from a Gaussian mixture, a choice that reflects the clustered nature of real-world data more accurately than the single-Gaussian assumption of earlier models. Under this model, the optimally pretrained next-token predictor can be shown to perform Bayesian inference, combining the pretraining prior with the in-context examples to predict the next label.
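To make the setup concrete, here is a minimal numpy sketch of such a pretraining data model. All parameter values and names (e.g., `task_means`, `input_means`) are illustrative choices for a two-group example, not the paper's configuration: linear tasks are drawn from a Gaussian mixture, and each mixture component ("task group") has its own input distribution.

```python
# Illustrative pretraining data model: a Gaussian mixture over linear tasks,
# with group-dependent input distributions. Parameters are made up.
import numpy as np

rng = np.random.default_rng(0)

d = 2                                       # input dimension
pi = np.array([0.5, 0.5])                   # prior weights over task groups
task_means = np.array([[0.0, 1.0],          # mean task (weight vector) of group 0
                       [0.0, -1.0]])        # mean task of group 1
task_cov = 0.05 * np.eye(d)                 # within-group spread of tasks
input_means = np.array([[3.0, 0.0],         # group-dependent input means
                        [-3.0, 0.0]])
noise_std = 1.0                             # label noise

def sample_pretraining_sequence(n):
    """Pick a task group, draw a task from it, then draw n (x, y) pairs."""
    k = rng.choice(len(pi), p=pi)                            # task group
    w = rng.multivariate_normal(task_means[k], task_cov)     # task vector
    x = rng.normal(loc=input_means[k], scale=1.0, size=(n, d))
    y = x @ w + noise_std * rng.normal(size=n)               # linear labels + noise
    return k, w, x, y

k, w, x, y = sample_pretraining_sequence(8)
print(f"group={k}, task={np.round(w, 2)}, first pair: x={np.round(x[0], 2)}, y={y[0]:.2f}")
```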

Key Insights and Contributions

Quantitative Understanding of Dual Modes

By rigorously modeling pretraining data and analyzing the behavior of the optimally pretrained model under squared loss, the paper presents a quantitative understanding of the task learning and task retrieval modes of ICL. The analysis characterizes how in-context examples reshape the task posterior distribution, introducing two key phenomena: Component Shifting and Component Re-weighting.
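The sketch below shows, under simplifying assumptions, the Bayesian computation this analysis rests on: a shared within-group task covariance, Gaussian label noise, and an optional input-likelihood term standing in for the task-dependent input distributions. The function name and parameterization are illustrative, not the paper's notation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_posterior_predict(x_ctx, y_ctx, x_query, pi, task_means, task_cov,
                          noise_var, input_means=None, input_cov=None):
    """MMSE (Bayes-optimal) prediction under a Gaussian-mixture prior over
    linear tasks, given in-context examples (x_ctx, y_ctx).
    Returns the prediction at x_query and the posterior group weights."""
    n, d = x_ctx.shape
    prior_prec = np.linalg.inv(task_cov)             # within-group prior precision
    log_w, post_means = [], []
    for k in range(len(pi)):
        # Component Shifting: the group-k posterior over tasks moves from the
        # pretrained mean task_means[k] toward the least-squares fit of the context.
        S_k = np.linalg.inv(prior_prec + x_ctx.T @ x_ctx / noise_var)
        m_k = S_k @ (prior_prec @ task_means[k] + x_ctx.T @ y_ctx / noise_var)
        # Component Re-weighting: groups that explain the context labels
        # (and, if modeled, the context inputs) gain posterior probability.
        marg_cov = x_ctx @ task_cov @ x_ctx.T + noise_var * np.eye(n)
        lw = np.log(pi[k]) + multivariate_normal.logpdf(y_ctx, x_ctx @ task_means[k], marg_cov)
        if input_means is not None:                  # task-dependent input distributions
            lw += multivariate_normal.logpdf(x_ctx, input_means[k], input_cov).sum()
        log_w.append(lw)
        post_means.append(m_k)
    log_w = np.asarray(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # MMSE prediction: posterior-weighted average of the per-group predictions.
    return sum(wk * (mk @ x_query) for wk, mk in zip(w, post_means)), w
```

In this sketch, Component Shifting corresponds to each posterior mean m_k drifting from its pretrained center toward the least-squares solution, and Component Re-weighting to the posterior weights w concentrating on the groups that best explain the in-context examples.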

Explaining the Early Ascent Phenomenon

The study sheds light on the puzzling "early ascent" phenomenon observed with LLMs, where ICL risk initially rises with the number of in-context samples before decreasing. The paper offers a plausible explanation: a small number of in-context samples may cause the model to retrieve an incorrect skill, raising the risk; as more in-context examples are included, task learning becomes dominant and the risk eventually diminishes.
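The following toy experiment illustrates this mechanism within the sketched model. All parameters are invented for illustration, and it reuses gmm_posterior_predict from the sketch above: the context inputs follow group 0's input distribution while the labels come from a group-1 task, so the retrieval-driven predictions made from the first few examples can be worse than the zero-shot baseline before task learning pulls the risk back down; the exact numbers depend on the chosen parameters and seed.

```python
# Hypothetical early-ascent probe (illustrative parameters, reusing
# gmm_posterior_predict defined above).
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])
task_means = np.array([[0.0, 1.0], [0.0, -1.0]])   # groups disagree on coordinate 2
task_cov, noise_var = 0.05 * np.eye(2), 1.0
input_means, input_cov = np.array([[3.0, 0.0], [-3.0, 0.0]]), np.eye(2)
w_true = task_means[1]                              # a group-1 task ...

# With no context, the symmetric prior predicts 0, so the baseline risk here
# is E[(w_true . x_q)^2] = 1 under group 0's input distribution.
for n in [1, 2, 4, 8, 16, 32, 64, 128]:
    errs = []
    for _ in range(200):
        x = rng.multivariate_normal(input_means[0], input_cov, size=n)  # ... seen with group-0 inputs
        y = x @ w_true + np.sqrt(noise_var) * rng.normal(size=n)
        x_q = rng.multivariate_normal(input_means[0], input_cov)
        pred, _ = gmm_posterior_predict(x, y, x_q, pi, task_means, task_cov,
                                        noise_var, input_means, input_cov)
        errs.append((pred - x_q @ w_true) ** 2)
    print(f"n={n:4d}   ICL risk (MC estimate) ~ {np.mean(errs):.2f}")
```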

Predicted Bounded Efficacy of Biased-Label ICL

The analysis also predicts a "bounded efficacy" phenomenon for ICL with biased labels, such as zero-shot ICL, where in-context examples are assigned random labels. While initially effective thanks to task retrieval, the model's performance is predicted to degrade once the number of in-context examples passes a threshold and the task learning mode becomes dominant.
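In the same toy model, one can probe biased-label ICL by pairing inputs from the correct group with purely random labels. This is an illustrative setup reusing the definitions from the previous sketch, not the paper's experiment:

```python
# Hypothetical biased-label probe: correct-group inputs, random labels,
# reusing pi, task_means, task_cov, noise_var, input_means, input_cov, rng
# and gmm_posterior_predict from the sketches above.
w_true = task_means[0]                                # target task is group 0's mean task
for n in [1, 4, 16, 64, 256]:
    errs = []
    for _ in range(200):
        x = rng.multivariate_normal(input_means[0], input_cov, size=n)
        y_random = rng.normal(size=n)                 # labels carry no task signal
        x_q = rng.multivariate_normal(input_means[0], input_cov)
        pred, _ = gmm_posterior_predict(x, y_random, x_q, pi, task_means, task_cov,
                                        noise_var, input_means, input_cov)
        errs.append((pred - x_q @ w_true) ** 2)
    print(f"n={n:4d}   risk with random labels ~ {np.mean(errs):.2f}")
```

Here the early predictions benefit from retrieving the right group through the inputs alone, while the growing influence of the uninformative labels gradually drags the prediction away from the task, mirroring the predicted bounded-efficacy behavior in this toy setting.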

Practical Implications and Future Directions

This research provides a robust foundation for understanding and predicting the behavior of ICL under various settings. By explaining existing phenomena and predicting new ones, it not only enriches our theoretical understanding but also guides practical applications of ICL in leveraging LLMs. Future research could explore extending these insights to non-linear models and considering more complex in-context example distributions, further bridging the gap between theoretical models and real-world applications.
