
Abstract

A myriad of applications, ranging from engineering and scientific simulations to image and signal processing and highly sensitive data retrieval, demand processing power reaching up to teraflops for efficient execution. A standard serial computer would need a clock cycle shorter than one picosecond to sustain such throughput, so parallel computing is the viable alternative. To exploit parallelism, several architectural models, such as the PRAM, BSP, and dataflow models, have been proposed and implemented, each with limitations arising from a number of factors. Perhaps the predominant cause is that these models still retain some degree of sequential execution. This situation has triggered the need for improved alternatives. Hence, the Arithmetic Deduction Model has been introduced; its peculiarity lies in its use of natural arithmetic concepts to perform computation and in its substitution or elimination of the dependency on variables and states in distributed data processing. Although some initial results on its performance have been published, it is important to contextualize its genesis. In this paper we therefore examine the importance of high-performance computing, conduct a comparative study of several models of computation in terms of their strengths and limitations, and accordingly highlight the need for a new model of computation.
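The teraflop claim above can be checked with back-of-the-envelope arithmetic. The sketch below assumes an idealized serial machine that completes one floating-point operation per clock cycle; the variable names are illustrative, not from the paper.

```python
# Assumption: an idealized serial processor retiring exactly one
# floating-point operation per clock cycle.
target_flops = 1e12                  # one teraflop = 10^12 operations/second
cycle_time_s = 1.0 / target_flops    # seconds per cycle needed to keep up

# 1e-12 seconds: the clock period would have to be one picosecond,
# far beyond what a single serial processor can achieve.
print(f"required cycle time: {cycle_time_s:.0e} s")
```

This is why the abstract argues that sustaining teraflop-scale throughput on a single serial processor is infeasible and parallelism is required.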
