Explaining black box decisions by Shapley cohort refinement (1911.00467v2)

Published 1 Nov 2019 in cs.LG, cs.AI, econ.EM, and stat.ML

Abstract: We introduce a variable importance measure to quantify the impact of individual input variables to a black box function. Our measure is based on the Shapley value from cooperative game theory. Many measures of variable importance operate by changing some predictor values with others held fixed, potentially creating unlikely or even logically impossible combinations. Our cohort Shapley measure uses only observed data points. Instead of changing the value of a predictor we include or exclude subjects similar to the target subject on that predictor to form a similarity cohort. Then we apply Shapley value to the cohort averages. We connect variable importance measures from explainable AI to function decompositions from global sensitivity analysis. We introduce a squared cohort Shapley value that splits previously studied Shapley effects over subjects, consistent with a Shapley axiom.

Authors (3)
  1. Masayoshi Mase (6 papers)
  2. Art B. Owen (75 papers)
  3. Benjamin Seiler (1 paper)
Citations (46)
