Emergent Mind

A readahead prefetcher for GPU file system layer

(2109.05366)
Published Sep 11, 2021 in cs.DC

Abstract

GPUs are broadly used in I/O-intensive big data applications. Prior work demonstrates the benefits of using a GPU-side file system layer, GPUfs, to improve GPU performance and programmability in such workloads. However, GPUfs fails to provide high performance for a common I/O pattern in which a GPU processes a whole data set sequentially. In this work, we propose a number of system-level optimizations to improve the performance of GPUfs for such workloads. We perform an in-depth analysis of the interplay between the GPU I/O access pattern, CPU-GPU PCIe transfers, and SSD storage, and identify the main bottlenecks. To resolve them, we propose a new GPU I/O readahead prefetcher and a GPU page cache replacement mechanism. The GPU I/O readahead prefetcher achieves more than $2\times$ (geometric mean) higher bandwidth in a series of microbenchmarks compared to the original GPUfs. Furthermore, we evaluate the system on 14 applications derived from the RODINIA, PARBOIL and POLYBENCH benchmark suites. Our prefetching mechanism improves their execution time by up to 50% and their I/O bandwidth by up to 82% compared to traditional CPU-only data transfer techniques.
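To make the readahead idea concrete, here is a minimal sketch of a sequential-readahead policy of the kind the abstract describes: on a page request that continues a sequential run, prefetch a window of upcoming pages and grow the window; a non-sequential access resets it. This is an illustrative host-side model, not the paper's actual GPUfs implementation; the class name, window parameters, and `_fetch` stand-in are all hypothetical.

```python
# Illustrative sketch (NOT the paper's GPUfs code): a simple sequential
# readahead policy. Sequential accesses trigger prefetching of a growing
# window of pages; a random access resets the window. All names and
# parameter values here are hypothetical.

class ReadaheadPrefetcher:
    def __init__(self, init_window=4, max_window=64):
        self.init_window = init_window
        self.max_window = max_window
        self.window = init_window
        self.next_expected = None   # page we expect if the stream is sequential
        self.cache = set()          # pages resident (stand-in for a GPU page cache)

    def _fetch(self, page):
        # Stand-in for an SSD -> GPU page transfer over PCIe.
        self.cache.add(page)

    def access(self, page):
        """Handle a page request; return True on a cache hit."""
        hit = page in self.cache
        if page == self.next_expected:
            # Sequential run continues: prefetch ahead and grow the window.
            for p in range(page, page + self.window):
                if p not in self.cache:
                    self._fetch(p)
            self.window = min(self.window * 2, self.max_window)
        else:
            # Non-sequential access: fetch only this page, reset the window.
            self._fetch(page)
            self.window = self.init_window
        self.next_expected = page + 1
        return hit
```

Under this policy, a sequential scan pays only a few cold misses before the growing window hides the rest of the transfers, which is the behavior the paper targets for whole-data-set GPU workloads.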
