To parallelize or not to parallelize, control and data flow issue

(1311.0731)
Published Nov 4, 2013 in cs.PL and cs.DC

Abstract

New trends towards multi-core processors imply using standard programming models to develop efficient, reliable, and portable programs for distributed-memory multiprocessors and PC workstation clusters. Message passing using MPI is widely used to write such efficient, reliable, and portable applications. Control and data flow analysis concepts, techniques, and tools are needed to understand and analyze MPI programs. If our point of interest is control and data flow analysis, used to decide whether or not to parallelize our applications, a question must be answered: "Can the existing concepts, techniques, and tools used to analyze sequential programs also be used to analyze parallel programs written in MPI?" In this paper we try to answer this question.
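To make the abstract's question concrete, here is a minimal Python sketch (our own illustration, not taken from the paper) of why a purely sequential control-flow analysis falls short for MPI: each process's control-flow graph contains only intra-process edges, while an analysis of the parallel program also needs "communication edges" linking a matching send/receive pair across processes. All names and the toy statement encoding below are assumptions made for illustration.

```python
# Illustrative sketch: sequential CFG edges vs. the inter-process
# communication edges an MPI-aware analysis must add.
# Statements are toy tuples, not real MPI calls.

def build_cfg(stmts):
    """Sequential CFG: an edge from each statement to the next one."""
    return [(i, i + 1) for i in range(len(stmts) - 1)]

def communication_edges(programs):
    """Pair each (rank, send-to-dst) with a matching recv on rank dst.

    programs maps a process rank to its statement list; a send is
    ("send", dst) and a receive is ("recv", src).
    """
    edges = []
    for rank, stmts in programs.items():
        for i, s in enumerate(stmts):
            if s[0] == "send":
                dst = s[1]
                for j, t in enumerate(programs[dst]):
                    if t[0] == "recv" and t[1] == rank:
                        edges.append(((rank, i), (dst, j)))
    return edges

# Two toy processes: rank 0 sends a message that rank 1 receives.
programs = {
    0: [("compute",), ("send", 1), ("compute",)],
    1: [("compute",), ("recv", 0), ("compute",)],
}

seq_edges = {rank: build_cfg(stmts) for rank, stmts in programs.items()}
comm_edges = communication_edges(programs)
print(seq_edges[0])  # intra-process flow only: [(0, 1), (1, 2)]
print(comm_edges)    # the cross-process edge: [((0, 1), (1, 1))]
```

A sequential analyzer sees only `seq_edges` and therefore cannot track a value flowing from rank 0's send into rank 1's receive; that missing edge is one instance of the gap the paper's question is about.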
