Sample Efficiency of Data Augmentation Consistency Regularization (2202.12230v2)

Published 24 Feb 2022 in cs.LG

Abstract: Data augmentation is popular in the training of large neural networks; currently, however, there is no clear theoretical comparison between different algorithmic choices on how to use augmented data. In this paper, we take a step in this direction - we first present a simple and novel analysis for linear regression with label invariant augmentations, demonstrating that data augmentation consistency (DAC) is intrinsically more efficient than empirical risk minimization on augmented data (DA-ERM). The analysis is then extended to misspecified augmentations (i.e., augmentations that change the labels), which again demonstrates the merit of DAC over DA-ERM. Further, we extend our analysis to non-linear models (e.g., neural networks) and present generalization bounds. Finally, we perform experiments that make a clean and apples-to-apples comparison (i.e., with no extra modeling or data tweaks) between DAC and DA-ERM using CIFAR-100 and WideResNet; these together demonstrate the superior efficacy of DAC.
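The abstract's central contrast, DAC regularization versus DA-ERM, can be illustrated in the paper's simplest setting: linear regression with label-invariant augmentations. The sketch below is a hypothetical illustration and not the authors' code; the feature split, noise level, and penalty weight `lam` are illustrative choices. DA-ERM is written as ordinary least squares on the stacked original-plus-augmented data, while the DAC estimator penalizes prediction differences between each sample and its augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 50, 40, 0.5

# Ground-truth regressor uses only the first d // 2 coordinates, so perturbing
# the remaining coordinates is a label-invariant augmentation.
theta_star = np.concatenate([rng.normal(size=d // 2), np.zeros(d - d // 2)])

X = rng.normal(size=(n, d))
y = X @ theta_star + sigma * rng.normal(size=n)

# One augmented copy per sample: perturb only the label-irrelevant coordinates.
perturb = np.zeros((n, d))
perturb[:, d // 2:] = rng.normal(size=(n, d - d // 2))
X_aug = X + perturb

# DA-ERM: ordinary least squares on the stacked (original + augmented) data.
X_stack = np.vstack([X, X_aug])
y_stack = np.concatenate([y, y])
theta_da_erm, *_ = np.linalg.lstsq(X_stack, y_stack, rcond=None)

# DAC: fit on the original data while penalizing prediction differences
# between each sample and its augmentation (consistency regularization).
lam = 10.0  # illustrative penalty weight, not tuned as in the paper
D = X_aug - X
theta_dac = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

print("||theta_DA-ERM - theta*|| =", np.linalg.norm(theta_da_erm - theta_star))
print("||theta_DAC    - theta*|| =", np.linalg.norm(theta_dac - theta_star))
```

In this toy setup the consistency penalty suppresses estimation error along the label-irrelevant directions directly, which is the intuition behind the paper's claim that DAC is more sample-efficient than DA-ERM.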

Authors (6)
  1. Shuo Yang (244 papers)
  2. Yijun Dong (11 papers)
  3. Rachel Ward (80 papers)
  4. Inderjit S. Dhillon (62 papers)
  5. Sujay Sanghavi (97 papers)
  6. Qi Lei (55 papers)
Citations (15)
