Abstract

Matrix multiplication is fundamental to the backpropagation algorithm used to train deep neural network models. Libraries such as Intel's MKL and NVIDIA's cuBLAS implement optimized matrix multiplication routines that increase performance and reduce computational cost. These techniques can also be implemented in CUDA and SYCL, as well as in functions using AVX2 and AVX512 instructions, which offer lower performance but better precision. This study compares execution times and power consumption, measured with PAPI and PERF, along with numerical accuracy, across different matrix sizes. Comparisons were made on architectures including third- and fourth-generation Intel CPUs and NVIDIA V100 and A100 GPUs. The MKL library showed the best performance with a slight loss of precision, while the OpenMP and SYCL CPU implementations showed the best accuracy but lower performance. On the GPU, cuBLAS with tensor cores achieved the best performance, though at a cost in accuracy; without these specialized cores, cuBLAS showed minimal performance loss and much higher accuracy. The data obtained on the different architectures showed that the CPU can achieve performance close to that of the GPU, at the cost of increased power consumption. These results depend on hardware characteristics such as the number of cores, clock frequency, and processor generation for the CPU, and the speed and bandwidth of the PCI bus and the device architecture (compute capability) for the GPU.
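
As a minimal sketch of the GPU performance/accuracy trade-off the abstract describes, the code below calls cuBLAS's generic GEMM entry point twice on the same FP32 data: once with the plain FP32 compute type and once allowing TF32 tensor-core math. The matrix size, the all-ones fill pattern, and the choice to contrast CUBLAS_COMPUTE_32F with CUBLAS_COMPUTE_32F_FAST_TF32 are illustrative assumptions, not details taken from the paper.

```cpp
// Sketch: FP32 GEMM vs. TF32 tensor-core GEMM via cublasGemmEx.
// Error checking is omitted for brevity.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1024;                      // illustrative square-matrix size
    const float alpha = 1.0f, beta = 0.0f;

    // Host matrices filled with ones, so every entry of C should equal n.
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 1.0f), hC(n * n, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // Plain FP32 GEMM: no tensor cores, full single-precision accuracy.
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                 &alpha, dA, CUDA_R_32F, n, dB, CUDA_R_32F, n,
                 &beta, dC, CUDA_R_32F, n,
                 CUBLAS_COMPUTE_32F, CUBLAS_GEMM_DEFAULT);

    // TF32 tensor-core GEMM: faster on A100-class GPUs, but inputs are
    // rounded to TF32's 10-bit mantissa, which reduces accuracy.
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                 &alpha, dA, CUDA_R_32F, n, dB, CUDA_R_32F, n,
                 &beta, dC, CUDA_R_32F, n,
                 CUBLAS_COMPUTE_32F_FAST_TF32, CUBLAS_GEMM_DEFAULT);

    cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected %d)\n", hC[0], n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Timing these two calls (and the corresponding CPU routines, e.g. MKL's cblas_sgemm) against a reference result is one plausible way to reproduce the kind of performance-versus-accuracy comparison the study reports.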
