
Abstract

Although reliable long precision floating-point arithmetic libraries such as QD and MPFR/GMP are necessary to solve ill-conditioned problems in numerical simulation, long precision BLAS-level computation such as matrix multiplication has not been fully optimized, because tuning costs are very high compared with IEEE float and double precision arithmetic. In this study, we develop a technique to shorten this tuning time by predicting the computational time of the blocking algorithm for several block sizes and then selecting the fastest matrix multiplication method, in order to tune multiple precision dense real matrix multiplication across various precisions, matrix sizes, and degrees of parallelization.
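The blocking-and-selection idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `Real` is a placeholder for any multiple precision type (e.g., QD's dd_real/qd_real or an MPFR C++ wrapper), the candidate block sizes are arbitrary, and the trial-timing loop in `pick_block_size` is a simplification — the paper predicts full-size computation times rather than exhaustively benchmarking every candidate.

```cpp
// Hedged sketch: blocked (tiled) matrix multiplication C += A*B plus a
// simple block-size selection step. All names and parameters here are
// illustrative assumptions, not the paper's actual tuning framework.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <vector>

template <typename Real>
void blocked_matmul(const std::vector<Real>& A, const std::vector<Real>& B,
                    std::vector<Real>& C, std::size_t n, std::size_t bs) {
    // Row-major n x n matrices; bs is the square tile size.
    for (std::size_t ii = 0; ii < n; ii += bs)
        for (std::size_t kk = 0; kk < n; kk += bs)
            for (std::size_t jj = 0; jj < n; jj += bs)
                for (std::size_t i = ii; i < std::min(ii + bs, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + bs, n); ++k) {
                        const Real aik = A[i * n + k];
                        for (std::size_t j = jj; j < std::min(jj + bs, n); ++j)
                            C[i * n + j] += aik * B[k * n + j];
                    }
}

template <typename Real>
std::size_t pick_block_size(std::size_t n,
                            const std::vector<std::size_t>& candidates) {
    // Time a reduced-size trial multiplication for each candidate block
    // size and keep the fastest; a predictive auto-tuner would instead
    // extrapolate full-size times from such samples.
    const std::size_t trial_n = std::min<std::size_t>(n, 128);
    std::vector<Real> A(trial_n * trial_n, Real(1));
    std::vector<Real> B(trial_n * trial_n, Real(1));
    std::vector<Real> C(trial_n * trial_n, Real(0));
    std::size_t best = candidates.front();
    double best_time = 1e300;
    for (std::size_t bs : candidates) {
        std::fill(C.begin(), C.end(), Real(0));
        auto t0 = std::chrono::steady_clock::now();
        blocked_matmul(A, B, C, trial_n, bs);
        double t = std::chrono::duration<double>(
                       std::chrono::steady_clock::now() - t0).count();
        if (t < best_time) { best_time = t; best = bs; }
    }
    return best;
}
```

With a multiple precision `Real` substituted in, `pick_block_size<Real>(n, {16, 32, 64, 128})` would return the tile size to pass to `blocked_matmul` for the full problem; the key point of the paper is that this selection can be made cheaply via time prediction instead of full benchmarking.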
