
Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation (2104.04148v2)

Published 9 Apr 2021 in cs.LG, cs.AI, and stat.AP

Abstract: Machine learning methods are increasingly applied in sensitive societal contexts, where decisions impact human lives. Hence it has become necessary to build capabilities for providing easily interpretable explanations of models' predictions. Recently, a vast number of explanation methods have been proposed in the academic literature. Unfortunately, to our knowledge, little has been documented about the challenges machine learning practitioners most often face when applying them in real-world scenarios. For example, a typical procedure such as feature engineering can render some methodologies no longer applicable. The present case study has two main objectives: first, to expose these challenges and how they affect the use of relevant and novel explanation methods; and second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain: poverty estimation and its use for prioritizing access to social policies.
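
To make the feature-engineering obstacle concrete, below is a minimal, self-contained sketch (Python with scikit-learn) of the kind of situation the abstract alludes to. The variable names (household_income, household_size, income_per_capita) and the choice of permutation importance as the explainer are illustrative assumptions for this sketch, not the data or methods used in the paper.

```python
# Minimal sketch (not the paper's method): when a model is trained on an
# engineered feature, per-feature explanations attach to that engineered
# feature, and splitting the credit back to the raw variables is ambiguous.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000

# Hypothetical raw survey variables.
household_income = rng.lognormal(8, 1, n)
household_size = rng.integers(1, 9, n).astype(float)

# Feature engineering replaces the two raw variables with a derived one.
income_per_capita = household_income / household_size
X = np.column_stack([income_per_capita])

# Toy poverty label: bottom 30% of per-capita income.
y = (income_per_capita < np.percentile(income_per_capita, 30)).astype(float)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# All importance lands on "income_per_capita"; there is no principled way
# to decompose it into household_income vs. household_size contributions,
# which is the practical obstacle the abstract points to.
print("importance of income_per_capita:", imp.importances_mean[0])
```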

Authors (4)
  1. Alfredo Carrillo (3 papers)
  2. Luis F. Cantú (2 papers)
  3. Luis Tejerina (1 paper)
  4. Alejandro Noriega (3 papers)
Citations (2)
