If-Conversion Optimization using Neuro Evolution of Augmenting Topologies (1603.01112v1)

Published 3 Mar 2016 in cs.DC

Abstract: Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has therefore become pivotal for extracting parallelism from serial programs. While many if-conversion heuristics have been proposed in the literature, most of them apply rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike prior work that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments whose results illustrate that programs can be accelerated on the same architecture without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent to the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6% over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
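
For readers unfamiliar with the transformation itself: if-conversion replaces a short control-dependent branch with predicated or select-style code, so that instruction-level parallel hardware can execute the result without a branch and its misprediction cost. The C sketch below is illustrative only and is not taken from the paper; the function names (clamp_branchy, clamp_ifconverted) are made up for this example. It shows, at source level, the kind of rewrite a compiler such as LLVM performs internally, and why profitability depends on the branch and the hardware, which is precisely the per-branch decision the paper's NEAT-based scheme learns rather than fixing by a rigid heuristic.

#include <stddef.h>

/* Branchy form: the loop body contains a control-flow dependence
 * that limits instruction-level parallelism and may mispredict. */
void clamp_branchy(int *a, size_t n, int limit) {
    for (size_t i = 0; i < n; i++) {
        if (a[i] > limit)
            a[i] = limit;
    }
}

/* If-converted form: the branch is replaced by a conditional select,
 * which the compiler can lower to predicated or cmov-style instructions.
 * Both arms are effectively evaluated, so this only pays off when the
 * branch is hard to predict or the arms are cheap. */
void clamp_ifconverted(int *a, size_t n, int limit) {
    for (size_t i = 0; i < n; i++) {
        int x = a[i];
        a[i] = (x > limit) ? limit : x;
    }
}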
