An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction (2005.00652v3)

Published 1 May 2020 in cs.CL and cs.LG

Abstract: Decisions of complex language understanding models can be rationalized by limiting their inputs to a relevant subsequence of the original text. A rationale should be as concise as possible without significantly degrading task performance, but this balance can be difficult to achieve in practice. In this paper, we show that it is possible to better manage this trade-off by optimizing a bound on the Information Bottleneck (IB) objective. Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale. Using IB, we derive a learning objective that allows direct control of mask sparsity levels through a tunable sparse prior. Experiments on ERASER benchmark tasks demonstrate significant gains over norm-minimization techniques for both task performance and agreement with human rationales. Furthermore, we find that in the semi-supervised setting, a modest amount of gold rationales (25% of training examples) closes the gap with a model that uses the full input.
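As a rough illustration of the kind of objective the abstract describes, the sketch below combines a predictor cross-entropy term with a KL term that pulls per-sentence mask probabilities toward a tunable sparse Bernoulli prior, so that lowering the prior keep-probability enforces more concise rationales. This is a minimal sketch under assumed simplifications: the names (`ib_rationale_loss`, `sentence_logits`, `prior_pi`, `beta`) are illustrative rather than taken from the paper, and the paper's exact variational bound and mask relaxation are not reproduced here.

```python
import torch
import torch.nn.functional as F

def ib_rationale_loss(sentence_logits, task_logits, labels, prior_pi=0.3, beta=1.0):
    """IB-style rationale-extraction objective (illustrative sketch).

    sentence_logits: (batch, num_sentences) explainer scores for keeping each sentence.
    task_logits:     (batch, num_classes) predictions made from the masked input.
    labels:          (batch,) gold task labels.
    prior_pi:        prior keep-probability; smaller values encourage sparser masks.
    beta:            weight on the KL (compression) term.
    """
    # Task term: -E[log q(y | z)], i.e. cross-entropy of the predictor
    # that only sees the extracted rationale.
    task_loss = F.cross_entropy(task_logits, labels)

    # Compression term: KL between per-sentence Bernoulli posteriors p(z|x)
    # and a fixed sparse Bernoulli prior r(z) with keep-probability prior_pi.
    p = torch.sigmoid(sentence_logits).clamp(1e-6, 1 - 1e-6)
    prior = torch.full_like(p, prior_pi)
    kl = p * torch.log(p / prior) + (1 - p) * torch.log((1 - p) / (1 - prior))
    kl_loss = kl.sum(dim=-1).mean()

    return task_loss + beta * kl_loss
```

In this formulation, `prior_pi` plays the role of the tunable sparse prior mentioned in the abstract: it gives direct control over the expected fraction of sentences kept, rather than relying on a norm penalty whose effective sparsity level must be found by trial and error.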

Authors (5)
  1. Bhargavi Paranjape (18 papers)
  2. Mandar Joshi (24 papers)
  3. John Thickstun (21 papers)
  4. Hannaneh Hajishirzi (176 papers)
  5. Luke Zettlemoyer (225 papers)
Citations (93)
