
Generation of the Single Precision BLAS library for the Parallella platform, with Epiphany co-processor acceleration, using the BLIS framework (1608.05265v1)

Published 18 Aug 2016 in cs.DC

Abstract: The Parallella is a hybrid computing platform that came into existence as the result of a Kickstarter project by Adapteva. It combines the high-performance, energy-efficient, manycore Epiphany chip (used as a co-processor) with a Zynq-7000 series chip, which runs a regular Linux OS, serves as the main processor, and implements "glue logic" in its internal FPGA to communicate with the Parallella's many interfaces. In this paper an Epiphany-accelerated BLAS library for the Parallella platform was created (which could also be suitable for similar hybrid platforms that include the Epiphany chip as a co-processor). For the actual instantiation of the BLAS, the BLIS framework was used. Previous implementations of matrix-matrix multiplication on this platform achieved very good performance inside the Epiphany chip (up to 85% of peak), but not for the complete Parallella platform, due to inter-chip data-transfer bandwidth limitations. The main purpose of this work was to get closer to practical linear algebra applications for the entire Parallella platform, with scientific computing in view.

Authors (1)
  1. Miguel Tasende (1 paper)
Citations (3)
