Abstract

The introduction of Intel(R) Xeon Phi(TM) coprocessors opened up new possibilities in the development of highly parallel applications. The familiarity and flexibility of the architecture, together with the compiler support integrated into Intel C++ Composer XE, allow developers to use familiar programming paradigms and techniques that are usually not suitable for other accelerated systems. It is now easy to run complex, template-heavy C++ codes on the coprocessor, including, for example, the Intel Threading Building Blocks (TBB) parallelization library. These techniques are not only possible but usually efficient as well, since the host and the coprocessor belong to the same architectural family, so optimization techniques designed for the Xeon CPU are also beneficial on the Xeon Phi. As a result, highly optimized Xeon codes (such as the TBB library) work well on both. In this paper we present a new parallel library construct that makes it easy to apply a function to every member of an array in parallel, dynamically distributing the work between the host CPUs and one or more coprocessor cards. We describe the associated runtime support and use a physical simulation example to demonstrate that our library construct can be used to quickly create a C++ application that benefits significantly from hybrid execution, exploiting CPU cores and coprocessor cores simultaneously. Experimental results show that a single optimized source code is sufficient to run efficiently on both the host and the coprocessors.
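The construct described in the abstract can be pictured roughly as follows. This is a minimal illustrative sketch, not the paper's actual API: the name hybrid_for_each and the host_fraction parameter are invented here, the real runtime distributes work between host and coprocessor dynamically rather than with a fixed split, and the coprocessor half would be executed through Intel's offload support rather than the host-side fallback used below.

#include <cstddef>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <tbb/parallel_invoke.h>

// Hypothetical hybrid_for_each: apply f to every element of a[0..n), giving
// the first host_fraction of the range to the host CPUs and the rest to the
// coprocessor. The paper's runtime distributes work dynamically; a fixed
// split is used here only to keep the sketch short.
template <typename T, typename Func>
void hybrid_for_each(T* a, std::size_t n, Func f, double host_fraction = 0.5)
{
    const std::size_t split = static_cast<std::size_t>(n * host_fraction);

    auto run = [&](std::size_t lo, std::size_t hi) {
        tbb::parallel_for(tbb::blocked_range<std::size_t>(lo, hi),
                          [&](const tbb::blocked_range<std::size_t>& r) {
                              for (std::size_t i = r.begin(); i != r.end(); ++i)
                                  f(a[i]);
                          });
    };

    tbb::parallel_invoke(
        // Host part: an ordinary TBB parallel loop over the first chunk.
        [&] { run(0, split); },
        // Coprocessor part: with Intel C++ Composer XE this half would be
        // shipped to the Xeon Phi (e.g. via "#pragma offload target(mic)")
        // and processed there with TBB as well; a host-side fallback is
        // shown so the sketch stays portable and compilable.
        [&] { run(split, n); });
}

int main()
{
    std::vector<double> data(1 << 20, 3.0);
    // Example use: square every element, splitting the range evenly
    // between the two parts.
    hybrid_for_each(data.data(), data.size(), [](double& x) { x *= x; });
    return 0;
}

Because the host and the coprocessor share the same architectural family, the same TBB-based loop body can serve both halves; only the offload mechanics differ, which is what makes a single optimized source code sufficient for both targets.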
