
Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data (2111.04665v2)

Published 8 Nov 2021 in cs.LG and cs.AI

Abstract: Most machine learning models operate under the assumption that the training, testing, and deployment data are independent and identically distributed (i.i.d.). This assumption rarely holds in natural settings: deployment data is usually subject to various kinds of distributional shift, and the degradation in a model's performance is proportional to the magnitude of that shift. It is therefore necessary to evaluate a model's uncertainty and robustness to distributional shift in order to obtain a realistic estimate of its expected performance on real-world data. Existing methods for evaluating uncertainty and model robustness are lacking and often fail to paint the full picture. Moreover, most analyses so far have focused primarily on classification tasks. In this paper, we propose more insightful metrics for general regression tasks using the Shifts Weather Prediction Dataset, and we present an evaluation of the baseline methods using these metrics.
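
The abstract does not spell out the proposed metrics, but uncertainty evaluation on the Shifts Weather Prediction Dataset is commonly framed in terms of error-retention curves: predictions are rejected in order of decreasing uncertainty, and the error over the retained fraction is tracked, so a model whose uncertainty estimates correlate with its errors sheds its worst predictions first. The sketch below is a minimal illustration of that idea, assuming mean squared error as the regression error and the area under the retention curve (R-AUC) as the summary score; the helper names and synthetic data are illustrative, not the paper's implementation.

```python
import numpy as np

def mse_retention_curve(y_true, y_pred, uncertainty):
    """Error-retention curve: keep predictions in order of ascending
    uncertainty and compute the MSE over each retained fraction."""
    order = np.argsort(uncertainty)                 # most confident first
    sq_err = (y_true[order] - y_pred[order]) ** 2
    counts = np.arange(1, len(sq_err) + 1)
    cum_mse = np.cumsum(sq_err) / counts            # MSE over retained points
    retention = counts / len(sq_err)                # fraction of data retained
    return retention, cum_mse

def r_auc(retention, cum_mse):
    """Area under the error-retention curve; lower is better, since good
    uncertainty estimates reject high-error points before low-error ones."""
    return np.trapz(cum_mse, retention)

# Toy usage on synthetic data (illustrative only): the per-point noise
# scale doubles as an oracle uncertainty estimate.
rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
noise_scale = rng.uniform(0.1, 2.0, size=1000)
y_pred = y_true + rng.normal(scale=noise_scale)
retention, curve = mse_retention_curve(y_true, y_pred, noise_scale)
print(f"R-AUC: {r_auc(retention, curve):.4f}")
```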

Authors (4)
  1. Kumud Lakara (5 papers)
  2. Akshat Bhandari (4 papers)
  3. Pratinav Seth (16 papers)
  4. Ujjwal Verma (16 papers)
Citations (3)
