Explanation-Based Human Debugging of NLP Models: A Survey (2104.15135v3)

Published 30 Apr 2021 in cs.CL, cs.AI, cs.HC, and cs.LG

Abstract: Debugging a machine learning model is hard since the bug usually involves the training data and the learning process. This becomes even harder for an opaque deep learning model if we have no clue about how the model actually works. In this survey, we review papers that exploit explanations to enable humans to give feedback and debug NLP models. We call this problem explanation-based human debugging (EBHD). In particular, we categorize and discuss existing work along three dimensions of EBHD (the bug context, the workflow, and the experimental setting), compile findings on how EBHD components affect the feedback providers, and highlight open problems that could be future research directions.

Authors (2)
  1. Piyawat Lertvittayakumjorn (14 papers)
  2. Francesca Toni (96 papers)
Citations (78)
