
Scale Calibrated Training: Improving Generalization of Deep Networks via Scale-Specific Normalization (1909.00182v2)

Published 31 Aug 2019 in cs.CV and cs.LG

Abstract: Standard convolutional neural networks (CNNs) require consistent image resolutions in the training and testing phases. In practice, however, testing with smaller image sizes is necessary for fast inference. We show that naively evaluating low-resolution images on networks trained with high-resolution images causes a catastrophic accuracy drop in standard CNN architectures. We propose a novel training regime called Scale Calibrated Training (SCT), which allows a network to learn from multiple input scales simultaneously. By taking advantage of SCT, a single network can provide decent accuracy at test time across multiple test scales. In our analysis, we surprisingly find that vanilla batch normalization can lead to sub-optimal performance under SCT. We therefore equip SCT with a novel normalization scheme, Scale-Specific Batch Normalization, in place of standard batch normalization. Experimental results show that SCT improves the accuracy of a single ResNet-50 on ImageNet by 1.7% and 11.5% when testing at image sizes of 224 and 128, respectively.
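
The abstract does not specify the layer interface, but the core idea of Scale-Specific Batch Normalization, keeping an independent set of normalization statistics for each training scale, can be sketched as follows. This is a minimal, hypothetical PyTorch sketch: the class name `ScaleSpecificBatchNorm2d`, the candidate scale set, and the routing-by-resolution forward signature are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class ScaleSpecificBatchNorm2d(nn.Module):
    """Maintains an independent BatchNorm2d per input scale (sketch).

    Assumption: during Scale Calibrated Training, each mini-batch is
    resized to one of a fixed set of candidate scales, and activations
    are normalized with that scale's own running statistics.
    """

    def __init__(self, num_features, scales=(224, 192, 160, 128)):
        super().__init__()
        self.scales = tuple(scales)
        # One set of BN statistics and affine parameters per scale.
        self.bns = nn.ModuleDict({
            str(s): nn.BatchNorm2d(num_features) for s in self.scales
        })

    def forward(self, x, scale):
        # Route through the BN branch matching the resolution the
        # current mini-batch was resized to.
        return self.bns[str(scale)](x)


if __name__ == "__main__":
    bn = ScaleSpecificBatchNorm2d(num_features=64)
    feats = torch.randn(8, 64, 56, 56)  # features from a 224px batch
    out = bn(feats, scale=224)
    print(out.shape)  # torch.Size([8, 64, 56, 56])
```

At test time, the same routing would pick the statistics matching the evaluation resolution, which is consistent with the paper's claim that a single network can serve multiple test scales.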

