Abstract

Computer vision models are susceptible to unexpected changes in input images, known as common corruptions (e.g., noise, blur, illumination changes), that can hinder their reliability when deployed in real-world scenarios. These corruptions are not always considered when testing model generalization and robustness. In this survey, we present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions. We categorize methods into four groups based on the model part and training method addressed: data augmentation, representation learning, knowledge distillation, and network components. We also cover indirect methods for generalization and for mitigating shortcut learning, which are potentially useful for corruption robustness. We release a unified benchmark framework to compare robustness performance on several datasets and to address inconsistencies in how robustness is evaluated in the literature. We provide an experimental overview of the baseline corruption robustness of popular vision backbones, and show that corruption robustness does not necessarily scale with model size. Very large models (above 100M parameters) gain negligible robustness relative to their increased computational requirements. To achieve generalizable and robust computer vision models, we foresee the need to develop new learning strategies that efficiently exploit limited data and mitigate unwanted or unreliable learning behaviors.
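To make the evaluation setting concrete, below is a minimal sketch of a corruption-robustness check in the style of common benchmarks such as ImageNet-C: a classifier is evaluated on clean images and on images corrupted at increasing severities, and the accuracy drop is reported. This is an illustrative assumption of the protocol, not the survey's released benchmark code; the model, data loader, and the per-severity noise scales are hypothetical.

```python
# Illustrative corruption-robustness evaluation (not the survey's benchmark
# framework): measure top-1 accuracy on clean inputs, then under Gaussian
# noise at five severities, and report the accuracy drop per severity.
import torch


@torch.no_grad()
def accuracy(model, loader, corrupt=None, device="cpu"):
    """Top-1 accuracy over a loader, optionally corrupting each batch first."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        if corrupt is not None:
            images = corrupt(images)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


def gaussian_noise(severity):
    """Additive Gaussian noise; the sigma per severity is an assumed scale,
    chosen only for illustration (images assumed normalized to [0, 1])."""
    sigma = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]

    def corrupt(x):
        return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

    return corrupt


def robustness_report(model, loader, device="cpu"):
    """Print clean accuracy and the degradation at each corruption severity."""
    clean = accuracy(model, loader, device=device)
    print(f"clean accuracy: {clean:.3f}")
    for s in range(1, 6):
        acc = accuracy(model, loader, corrupt=gaussian_noise(s), device=device)
        print(f"gaussian noise, severity {s}: {acc:.3f} (drop {clean - acc:.3f})")
```

A full benchmark would average such drops over many corruption types (noise, blur, weather, digital) rather than Gaussian noise alone; this sketch shows only the per-severity evaluation loop that such protocols share.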
