Abstract

Spiking neural networks (SNNs), the third generation of artificial neural networks, have been widely adopted in vision and audio tasks. Many neuromorphic platforms now support SNN simulation and adopt a Network-on-Chip (NoC) architecture to interconnect multiple cores. However, the interconnect introduces a large area overhead, and run-time spike communication over it significantly affects the platform's total power consumption and performance. In this paper, we propose SNEAP, a toolchain for mapping SNNs onto multi-core neuromorphic platforms that aims to reduce the energy and latency incurred by spike communication over the interconnect. SNEAP consists of two key steps: partitioning the SNN to minimize the number of spikes communicated between partitions, and mapping the partitions onto the NoC to minimize the average hop count of spikes under hardware resource constraints. Compared with other toolchains, SNEAP further reduces the spike traffic on the NoC interconnect while spending less time in the partitioning phase, and it achieves a lower average hop count over a given time period, which effectively reduces energy and latency on the NoC-based neuromorphic platform. Experimental results show that SNEAP achieves a 418x reduction in end-to-end execution time, and reduces energy consumption and spike latency by 23% and 51% on average, respectively, compared with SpiNeMap.
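The abstract does not spell out SNEAP's algorithms, so the following is only a minimal sketch of the two-step idea: a simple hill-climbing partitioner that minimizes the spike cut between partitions, followed by an exhaustive placement on a small 2D mesh that minimizes traffic-weighted hop count. The function names (`partition_snn`, `map_to_mesh`), the traffic-dictionary format, and the search strategies are illustrative assumptions, not SNEAP's actual implementation; a real toolchain would use scalable heuristics (e.g., multilevel graph partitioning) instead.

```python
import itertools
import random

def partition_snn(traffic, n_parts, iters=5000, seed=0):
    """Assign neurons to n_parts partitions, minimizing the number of
    spikes communicated between partitions (the 'cut').
    traffic[(i, j)] = spike count sent from neuron i to neuron j.
    Illustrative hill climbing, not SNEAP's partitioner."""
    rng = random.Random(seed)
    neurons = sorted({n for edge in traffic for n in edge})
    # Start from a round-robin (balanced) assignment.
    assign = {n: idx % n_parts for idx, n in enumerate(neurons)}

    def cut(a):
        return sum(s for (i, j), s in traffic.items() if a[i] != a[j])

    best = cut(assign)
    for _ in range(iters):  # single-neuron moves, keep only improvements
        n = rng.choice(neurons)
        old = assign[n]
        assign[n] = rng.randrange(n_parts)
        new = cut(assign)
        if new <= best:
            best = new
        else:
            assign[n] = old  # revert moves that worsen the cut
    return assign, best

def map_to_mesh(part_traffic, mesh_w, mesh_h):
    """Place partitions on a mesh_w x mesh_h NoC, minimizing total
    (spikes x Manhattan hops). Exhaustive search, so only viable for
    tiny meshes; a real toolchain would use a heuristic here."""
    tiles = [(x, y) for x in range(mesh_w) for y in range(mesh_h)]
    parts = sorted({p for edge in part_traffic for p in edge})
    best_cost, best_map = float("inf"), None
    for perm in itertools.permutations(tiles, len(parts)):
        placement = dict(zip(parts, perm))
        cost = sum(s * (abs(placement[i][0] - placement[j][0]) +
                        abs(placement[i][1] - placement[j][1]))
                   for (i, j), s in part_traffic.items())
        if cost < best_cost:
            best_cost, best_map = cost, placement
    return best_map, best_cost

# Tiny usage example: 6 neurons, 3 partitions, 2x2 mesh (numbers made up).
traffic = {(0, 1): 50, (1, 2): 40, (2, 3): 5, (3, 4): 60, (4, 5): 30, (0, 3): 8}
assign, cut_spikes = partition_snn(traffic, n_parts=3)
# Aggregate the remaining cross-partition spikes into partition-level traffic.
part_traffic = {}
for (i, j), s in traffic.items():
    if assign[i] != assign[j]:
        key = (assign[i], assign[j])
        part_traffic[key] = part_traffic.get(key, 0) + s
placement, hop_cost = map_to_mesh(part_traffic, 2, 2)
print("cut spikes:", cut_spikes, "| traffic-weighted hops:", hop_cost)
```

The two objectives mirror the abstract's pipeline: the partitioning stage reduces how many spikes cross partition boundaries at all, and the mapping stage reduces how far the remaining spikes travel on the NoC.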
