Emergent Mind

Revisiting Mahalanobis Distance for Transformer-Based Out-of-Domain Detection

(2101.03778)
Published Jan 11, 2021 in cs.CL and cs.LG

Abstract

Real-life applications that rely heavily on machine learning, such as dialog systems, demand out-of-domain detection methods. Intent classification models should be equipped with a mechanism to distinguish seen intents from unseen ones so that the dialog agent can reject the latter and avoid undesired behavior. However, despite increasing attention paid to the task, the best practices for out-of-domain intent detection have not yet been fully established. This paper conducts a thorough comparison of out-of-domain intent detection methods. We prioritize methods that do not require access to out-of-domain data during training, since gathering such data is extremely time- and labor-consuming due to the lexical and stylistic variation of user utterances. We evaluate multiple contextual encoders and methods, proven to be efficient, on three standard intent classification datasets expanded with out-of-domain utterances. Our main findings show that fine-tuning Transformer-based encoders on in-domain data leads to superior results. Mahalanobis distance, applied to utterance representations derived from Transformer-based encoders, outperforms other methods by a wide margin and establishes new state-of-the-art results on all datasets. The broader analysis shows that the reason for this success is that the fine-tuned Transformer constructs homogeneous representations of in-domain utterances that are geometrically distinct from out-of-domain utterances; the Mahalanobis distance captures this disparity easily. The code is available in our GitHub repo: https://github.com/huawei-noah/noah-research/tree/master/Maha_OOD

Mahalanobis distance variants achieve OOD detection with less training data, as shown on the CLINC150 dataset.

Overview

  • This research compares various out-of-domain (OOD) detection methods in dialog systems, emphasizing those not requiring OOD data during training.

  • It specifically highlights the superior performance of Transformer-based models fine-tuned on in-domain data in conjunction with Mahalanobis distance for OOD detection.

  • The study evaluates these methods across three established datasets augmented with out-of-domain utterances, establishing new benchmarks in OOD detection.

  • Findings suggest that the geometric distinctiveness of in-domain vs. out-of-domain utterances captured by Mahalanobis distance is crucial for effective OOD detection.

Comprehensive Evaluation of Out-of-Domain Intent Detection Methods Using Transformer-Based Models

Introduction

The increasing deployment of dialog systems in practical applications necessitates the development of robust out-of-domain (OOD) detection mechanisms. These mechanisms enable dialog agents to distinguish between intents they are trained to handle (in-domain) and those they are not (out-of-domain), thus preventing inappropriate responses. Despite significant interest in this area, a consensus on best practices for OOD detection in dialog systems remains elusive. This research seeks to fill that gap by presenting a detailed comparison of various OOD detection methods, focusing on those that do not require access to OOD data during training—a process often constrained by time and labor due to the diverse nature of user utterances.

Out-of-Domain Detection Methods

OOD detection can be broadly categorized based on the requirement of OOD data for training and the utilization of in-domain (ID) labels. The methods analyzed include classification approaches that necessitate OOD data for supervision and several unsupervised techniques:

  • Classifier Outputs: Utilizing the maximum softmax probability (MSP) from a trained classifier as an OOD score, potentially modified by temperature scaling to adjust confidence levels.
  • Generative Methods: Leveraging the natural ability of generative models to estimate input likelihoods, with adaptations for creating pseudo-OOD utterances.
  • Distance-Based Methods: Employing distance measurements, such as Mahalanobis distance, to estimate the divergence of a given utterance from the in-domain space.
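To make the first of these concrete, here is a minimal sketch (not the paper's code) of the maximum-softmax-probability score with temperature scaling; the function name and toy logits are illustrative:

```python
import math

def msp_ood_score(logits, temperature=1.0):
    """OOD score from maximum softmax probability; higher = more likely OOD.

    logits: classifier logits for one utterance (one value per intent).
    temperature: T > 1 softens the softmax before taking the max,
    which often improves ID/OOD separation.
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    return 1.0 - max(exps) / sum(exps)

# A confident (peaked) prediction yields a low OOD score;
# a near-uniform one yields a high OOD score.
confident = msp_ood_score([9.0, 0.5, 0.2])
uncertain = msp_ood_score([1.1, 1.0, 0.9])
assert confident < uncertain
```

Thresholding this score then decides whether an utterance is rejected as out-of-domain.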

Evaluation Framework

This study undertakes a rigorous evaluation of various contextual encoders and methods across three established datasets for intent classification (CLINC150, ROSTD, and SNIPS) that have been augmented with out-of-domain utterances. Key findings illustrate that Transformer-based models, particularly when fine-tuned on in-domain data and coupled with Mahalanobis distance for OOD scoring, significantly outperform alternative approaches.
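OOD detection is typically evaluated as threshold-free binary discrimination between in-domain and out-of-domain utterances, e.g. with AUROC. A minimal sketch of that metric, using its rank-statistic form with illustrative scores rather than any numbers from the paper:

```python
def auroc(id_scores, ood_scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen OOD utterance receives a higher OOD score than a
    randomly chosen in-domain one (ties count one half)."""
    wins = 0.0
    for o in ood_scores:
        for i in id_scores:
            if o > i:
                wins += 1.0
            elif o == i:
                wins += 0.5
    return wins / (len(id_scores) * len(ood_scores))

# Perfect separation: every OOD score exceeds every ID score.
assert auroc([0.1, 0.2], [0.8, 0.9]) == 1.0
```

A detector that ranks all OOD utterances above all ID utterances scores 1.0; random scoring gives 0.5.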

Key Results

The utilization of Transformer-based models fine-tuned on in-domain data consistently yields superior OOD detection, with the combination of fine-tuned Transformers and Mahalanobis distance establishing new state-of-the-art results across all tested datasets. This success is attributed to the ability of these models to create homogeneous representations of in-domain utterances that are geometrically distinct from out-of-domain representations. Such distinctiveness is efficiently captured by the Mahalanobis distance, suggesting its effectiveness as a metric for OOD detection in this context.
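The Mahalanobis approach fits class-conditional Gaussians with a shared covariance to the encoder's utterance embeddings and scores an utterance by its distance to the nearest intent centroid. A hedged sketch of this scheme, with toy two-dimensional embeddings standing in for fine-tuned Transformer representations (function names and the ridge term are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fit_mahalanobis(embeddings, labels):
    """Fit per-intent means and a shared covariance over ID embeddings.

    embeddings: (n, d) utterance vectors (e.g. from a fine-tuned encoder).
    labels:     (n,) in-domain intent labels.
    Returns the class means and the inverse shared covariance matrix.
    """
    classes = np.unique(labels)
    means = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([embeddings[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(embeddings)
    # A small ridge keeps the covariance invertible for tiny samples.
    inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, inv_cov

def mahalanobis_ood_score(x, means, inv_cov):
    """Squared Mahalanobis distance to the closest intent centroid;
    higher = more likely out-of-domain."""
    return min(float((x - mu) @ inv_cov @ (x - mu)) for mu in means.values())

# Toy demo: two tight in-domain intent clusters and one distant OOD point.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(50, 2)),
                 rng.normal([5.0, 5.0], 0.1, size=(50, 2))])
labels = np.array([0] * 50 + [1] * 50)
means, inv_cov = fit_mahalanobis(emb, labels)
in_domain = mahalanobis_ood_score(np.array([0.02, -0.01]), means, inv_cov)
ood = mahalanobis_ood_score(np.array([10.0, -10.0]), means, inv_cov)
assert ood > in_domain
```

Because in-domain embeddings from the fine-tuned encoder form compact, homogeneous clusters, even a simple covariance-aware distance like this separates them cleanly from out-of-domain inputs.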

Theoretical and Practical Implications

The significant performance of the Transformer-based models in OOD detection tasks underscores the adaptability of these models beyond conventional natural language understanding tasks. From a theoretical perspective, this work highlights the importance of embedding space geometry in the effectiveness of OOD detection methods. Practically, the study offers a clear direction for developing robust OOD detection mechanisms for dialog systems, emphasizing the utilization of fine-tuned Transformer models and Mahalanobis distance as a potent combination for current and future systems. Furthermore, the research opens avenues for exploring the limitations of these approaches, especially in scenarios involving semantically similar in-domain and out-of-domain utterances.

Conclusion

This paper contributes to the body of knowledge in OOD detection by offering a comprehensive comparison of existing methods, highlighting the effectiveness of Transformer-based models coupled with Mahalanobis distance. By establishing new benchmarks in the field, it sets a foundation for future research directed towards refining and enhancing OOD detection mechanisms in dialog systems. This progression has the potential not only to improve the quality and reliability of dialog systems but also to broaden their applicability in various real-world settings.
