
Model soups to increase inference without increasing compute time (2301.10092v1)

Published 24 Jan 2023 in cs.CV and cs.AI

Abstract: In this paper, we compare the performance of Model Soups on three different models (ResNet, ViT and EfficientNet) using three soup recipes (Greedy Soup Sorted, Greedy Soup Random and Uniform Soup) from arXiv:2203.05482, and reproduce the results of the authors. We then introduce a new soup recipe called Pruned Soup. Results from the soups were better than the best individual model for the pre-trained vision transformer, but were much worse for the ResNet and the EfficientNet. Our pruned soup performed better than the uniform and greedy soups presented in the original paper. We also discuss the limitations of weight-averaging that were found during the experiments. The code for our model soup library and the experiments with different models can be found here: https://github.com/milo-sobral/ModelSoup
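The two baseline recipes the abstract compares can be sketched as follows. This is a minimal illustration using plain dicts of floats in place of real checkpoint tensors; the function names and the toy evaluation callback are assumptions for illustration, not the authors' library API.

```python
def uniform_soup(state_dicts):
    """Uniform soup: element-wise average of all checkpoints' parameters."""
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in state_dicts[0]}

def greedy_soup(state_dicts, evaluate):
    """Greedy soup: visit checkpoints in order (e.g. sorted or shuffled, as in
    Greedy Soup Sorted / Random) and keep each one only if adding it to the
    running average does not hurt the held-out evaluation score."""
    soup = [state_dicts[0]]
    best = evaluate(uniform_soup(soup))
    for sd in state_dicts[1:]:
        candidate = uniform_soup(soup + [sd])
        score = evaluate(candidate)
        if score >= best:          # keep the ingredient only if it helps
            soup.append(sd)
            best = score
    return uniform_soup(soup)
```

For example, with checkpoints `{"w": 1.0}` and `{"w": 3.0}`, `uniform_soup` returns `{"w": 2.0}`; a greedy pass with a validation metric would additionally reject any checkpoint whose inclusion lowers that metric.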

Authors (4)
  1. Charles Dansereau (1 paper)
  2. Milo Sobral (2 papers)
  3. Maninder Bhogal (1 paper)
  4. Mehdi Zalai (2 papers)
Citations (2)