Analyzing Search Techniques for Autotuning Image-based GPU Kernels: The Impact of Sample Sizes (2203.13577v1)

Published 25 Mar 2022 in cs.DC

Abstract: Modern computing systems are increasingly complex, with their multicore CPUs and GPU accelerators changing yearly, if not more often. It has thus become very challenging to write programs that efficiently use the associated complex memory systems and take advantage of the available parallelism. Autotuning addresses this by optimizing parameterized code for the target hardware, searching for the optimal set of parameters. Empirical autotuning has therefore gained interest over the past decades. While new autotuning algorithms are regularly presented and published, we will show why comparing these autotuning algorithms is a deceptively difficult task. In this paper, we describe our empirical study of state-of-the-art search techniques for autotuning, comparing them across a range of sample sizes, benchmarks and architectures. We optimize 6 tunable parameters over a search space of more than 2 million configurations. The algorithms studied include Random Search (RS), Random Forest Regression (RF), Genetic Algorithms (GA), Bayesian Optimization with Gaussian Processes (BO GP) and Bayesian Optimization with Tree-Parzen Estimators (BO TPE). Our results on the ImageCL benchmark suite suggest that the ideal autotuning algorithm heavily depends on the sample size. In our study, BO GP and BO TPE outperform the other algorithms in most scenarios with sample sizes from 25 to 100. However, GA usually outperforms the others for sample sizes of 200 and beyond. We generally see the most speedup over RS in the lower range of sample sizes (25-100), while the algorithms outperform RS more consistently at higher sample sizes (200-400). Hence, no single state-of-the-art algorithm outperforms the rest for all sample sizes. Some suggestions for future work are also included.
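
To make the role of the sample size (search budget) concrete, below is a minimal sketch of budget-limited random search, the RS baseline in the study, over a small discrete tuning space. The parameter names and the synthetic cost function are illustrative assumptions, not taken from the paper; a real autotuner would compile and time the GPU kernel for each sampled configuration.

import random

# Hypothetical tunable parameters (illustrative only; the paper tunes 6
# parameters of image-processing kernels with over 2 million combinations).
PARAM_SPACE = {
    "block_size_x": [8, 16, 32, 64, 128],
    "block_size_y": [1, 2, 4, 8],
    "tile_size": [1, 2, 4, 8],
    "use_shared_mem": [0, 1],
}

def benchmark(config):
    # Placeholder cost model standing in for "compile and time the kernel".
    # Lower is better; the shape of this function is invented for the example.
    cost = abs(config["block_size_x"] - 32) + abs(config["block_size_y"] - 4)
    cost += abs(config["tile_size"] - 2) + 3 * (1 - config["use_shared_mem"])
    return cost

def random_search(space, sample_size, seed=0):
    # Draw `sample_size` random configurations and keep the fastest one.
    rng = random.Random(seed)
    best_config, best_cost = None, float("inf")
    for _ in range(sample_size):
        config = {name: rng.choice(values) for name, values in space.items()}
        cost = benchmark(config)
        if cost < best_cost:
            best_config, best_cost = config, cost
    return best_config, best_cost

if __name__ == "__main__":
    # The study compares budgets of 25-400 samples; with a shared seed,
    # a larger budget can only match or improve on a smaller one.
    for budget in (25, 100, 400):
        config, cost = random_search(PARAM_SPACE, budget)
        print(f"budget={budget:4d}  best_cost={cost}  config={config}")

Model-based methods such as BO GP, BO TPE and RF replace the uniform sampling above with a surrogate that proposes promising configurations, which is why their relative advantage over RS shifts as the budget grows.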

Citations (2)
