If-Conversion Optimization using Neuro Evolution of Augmenting Topologies (1603.01112v1)

Published 3 Mar 2016 in cs.DC

Abstract: Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has therefore become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them apply rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike prior work that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results, which illustrate that programs can be accelerated on the same architecture without modifying the original code. Our technique applies to general-purpose programming languages (e.g., C/C++) and is transparent to the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6% over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
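
As a rough illustration of the transformation the abstract discusses (this is a sketch, not the authors' NEAT-driven LLVM pass), the C fragment below shows a hypothetical loop before and after if-conversion: the branch inside the loop body is replaced by a predicate and a select, trading a control dependence for a data dependence that an instruction-level parallel core can execute without branch mispredictions. Function and variable names are illustrative assumptions.

    /* Illustrative sketch only: not the authors' implementation.
     * Shows the effect of if-conversion: a control dependence is
     * rewritten as a data dependence. Names are hypothetical. */
    #include <stddef.h>

    /* Branchy form: each iteration carries a control-flow dependence. */
    void clamp_branchy(int *a, size_t n, int limit) {
        for (size_t i = 0; i < n; i++) {
            if (a[i] > limit)           /* candidate branch for if-conversion */
                a[i] = limit;
        }
    }

    /* If-converted form: the branch becomes a predicate plus a select,
     * which a backend can lower to cmov/select instructions, exposing
     * instruction-level parallelism across iterations. */
    void clamp_ifconverted(int *a, size_t n, int limit) {
        for (size_t i = 0; i < n; i++) {
            int over = a[i] > limit;     /* predicate */
            a[i] = over ? limit : a[i];  /* select instead of a jump */
        }
    }

Whether converting a given branch pays off depends on factors such as misprediction behavior and the cost of executing both paths; per the abstract, the paper delegates this per-branch decision to a NEAT-evolved model rather than a fixed heuristic.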
