
Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection (2407.20708v3)

Published 30 Jul 2024 in cs.AI

Abstract: Brain-inspired Spiking Neural Networks (SNNs) have bio-plausibility and low-power advantages over Artificial Neural Networks (ANNs). Applications of SNNs are currently limited to simple classification tasks because of their poor performance. In this work, we focus on bridging the performance gap between ANNs and SNNs on object detection. Our design revolves around network architecture and spiking neuron. First, the overly complex module design causes spike degradation when the YOLO series is converted to the corresponding spiking version. We design a SpikeYOLO architecture to solve this problem by simplifying the vanilla YOLO and incorporating meta SNN blocks. Second, object detection is more sensitive to quantization errors in the conversion of membrane potentials into binary spikes by spiking neurons. To address this challenge, we design a new spiking neuron that activates Integer values during training while maintaining spike-driven by extending virtual timesteps during inference. The proposed method is validated on both static and neuromorphic object detection datasets. On the static COCO dataset, we obtain 66.2% mAP@50 and 48.9% mAP@50:95, which is +15.0% and +18.7% higher than the prior state-of-the-art SNN, respectively. On the neuromorphic Gen1 dataset, we achieve 67.2% mAP@50, which is +2.5% greater than the ANN with equivalent architecture, and the energy efficiency is improved by 5.7×. Code: https://github.com/BICLab/SpikeYOLO

Authors (5)
  1. Xinhao Luo (4 papers)
  2. Man Yao (18 papers)
  3. Yuhong Chou (10 papers)
  4. Bo Xu (212 papers)
  5. Guoqi Li (90 papers)
Citations (3)

Summary

  • The paper introduces SpikeYOLO and the novel I-LIF neuron model to reduce quantization error while preserving energy efficiency.
  • The paper demonstrates enhanced object detection with 66.2% mAP@50 on COCO and a 5.7× improvement in energy efficiency on event-based datasets.
  • The paper shows that simplifying ANN-derived architectures for SNNs bridges performance gaps, paving the way for practical neuromorphic computing.

Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection

The paper presents an approach that leverages Spiking Neural Networks (SNNs) for object detection through a proposed SpikeYOLO architecture. SNNs are attractive for their low power consumption and biological plausibility, but they have struggled to match the performance of traditional Artificial Neural Networks (ANNs), particularly on tasks more complex than simple classification. This work proposes architectural and methodological advances to close that performance gap in object detection.

Contributions

  1. SpikeYOLO Architecture: The authors develop SpikeYOLO, an architecture that combines the YOLO object detection framework with design elements suited to SNNs. Conventional YOLO modules are too complex for direct conversion into SNNs, causing spike degradation in deeper layers. To mitigate this, the authors introduce a simplified design that retains the macro structure of YOLOv8 but replaces its modules with meta spiking neural network blocks for better compatibility with SNN dynamics.
  2. Integer Leaky Integrate-and-Fire (I-LIF) Neuron: The paper introduces the I-LIF spiking neuron model, which emits integer-valued activations during training to alleviate quantization error, then converts these into binary spikes at inference by extending virtual timesteps. This dual approach preserves the energy-efficient, spike-driven nature of SNNs while improving training accuracy by sidestepping quantization errors.
  3. Practical and Theoretical Implications: The method is validated on both static and event-based datasets, demonstrating significant improvements over previous SNN-based models. On the widely used COCO dataset, it achieves 66.2% mAP@50 and 48.9% mAP@50:95, improvements of +15.0% and +18.7% over the prior state-of-the-art SNN, respectively. On the neuromorphic Gen1 dataset, SpikeYOLO surpasses an ANN with equivalent architecture by +2.5% mAP@50 while improving energy efficiency by 5.7×.
  4. Impact of Quantization and Architecture Design: Ablation results indicate the crucial role of integer-valued training in reducing quantization errors. These design choices keep spiking activity sparse, so energy consumption stays low even as performance metrics improve. Architectural simplifications aligned with spiking dynamics also reveal a trade-off: simplified YOLO-inspired models built from spike-driven blocks outperform more complex ANN architectures that are directly converted to SNNs.
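The I-LIF idea in item 2 can be sketched as follows. This is a minimal, hedged illustration of the training-versus-inference equivalence only, not the paper's implementation: the membrane update, the decay factor `tau`, the threshold `v_th`, and the integer bound `d_max` are all illustrative assumptions. The key property shown is that an integer activation emitted during training can be unrolled into the same number of binary spikes across virtual timesteps at inference.

```python
import numpy as np

def ilif_train_step(x, v, d_max=4, tau=0.25, v_th=1.0):
    """Training mode: integrate input and emit an integer spike count
    in [0, d_max] instead of a single binary spike.

    All parameter names and the soft-reset/leak rule are illustrative
    assumptions, not taken from the paper.
    """
    v = v + x                                   # integrate input current
    s = np.clip(np.round(v / v_th), 0, d_max)   # integer-valued activation
    v = tau * (v - s * v_th)                    # soft reset, then leak
    return s, v

def expand_to_spikes(s, d_max=4):
    """Inference mode: unroll an integer activation s into d_max virtual
    timesteps of binary (0/1) spikes whose sum equals s, so downstream
    computation stays spike-driven (additions only)."""
    return np.array([1.0 if t < s else 0.0 for t in range(d_max)])

# An input of 2.6 yields the integer activation 3 during training ...
s, _ = ilif_train_step(np.array(2.6), np.array(0.0))
# ... which at inference becomes three binary spikes over four virtual steps.
spikes = expand_to_spikes(float(s))
assert spikes.sum() == float(s)  # the two views carry the same value
```

The point of the unrolling is that each virtual timestep transmits only 0 or 1, so inference hardware never multiplies by a multi-bit activation; the integer values exist only during training, where they shrink the rounding error relative to binary quantization.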

Future Directions

The implications of this work are multifaceted, indicating promising directions for real-time neuromorphic computing and energy-efficient AI applications. Future research can further refine the I-LIF neuron model for additional spatio-temporal tasks and integrate similar integer-based methodologies across varying neuromorphic structures. Additionally, exploring the application of SpikeYOLO architecture to other domains within AI and computational neuroscience may unravel further insights into bio-inspired computing paradigms, potentially advancing the design of hybrid models that judiciously balance ANN's computational efficacy with SNN's energy frugality.

In conclusion, this research significantly raises the benchmark for SNN-based object detection, driving the field closer to viable applications in energy-constrained environments and scenarios that benefit from high temporal resolution. The strategic fusion of simplified architecture and advanced spiking neuron models reported herein marks a substantial step toward practical, energy-efficient neuromorphic AI systems.
