DeepBurning-MixQ: An Open Source Mixed-Precision Neural Network Accelerator Design Framework for FPGAs (2308.11334v1)

Published 22 Aug 2023 in cs.AR

Abstract: Mixed-precision neural networks (MPNNs), which use just enough data width for a deep learning task, promise significant advantages in both inference accuracy and computing overhead. FPGAs, with their fine-grained reconfiguration capability, can adapt processing to distinct data widths and models, and hence can theoretically unleash the potential of MPNNs. Nevertheless, commodity DPUs on FPGAs mostly emphasize generality and have limited support for MPNNs, especially those with lower data widths. In addition, the primitive DSPs in FPGAs usually have a much larger data width than MPNNs require and have not yet been sufficiently co-explored with MPNNs. To this end, we propose an open-source MPNN accelerator design framework specifically tailored for FPGAs. The framework provides a systematic DSP-packing algorithm that packs multiple lower-data-width MACs into a single primitive DSP, enabling efficient implementation of MPNNs. Meanwhile, we take DSP-packing efficiency into account during MPNN quantization within a unified neural network architecture search (NAS) framework, so that quantization is aware of the DSP overhead and optimizes MPNN performance and accuracy concurrently. Finally, the optimized MPNN is mapped onto a fully pipelined neural network accelerator template based on HLS that makes the best use of available resources for higher performance. Our experiments reveal that the accelerators produced by the proposed framework achieve substantial advantages in performance, resource utilization, and inference accuracy for MPNNs compared with both handcrafted counterparts and prior hardware-aware neural network accelerators on FPGAs.

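The DSP-packing idea described in the abstract can be illustrated with a small, self-contained C++ sketch. This is not code from the paper: the function name packed_mul, the unsigned 4-bit operand widths, and the 16-bit field offset are illustrative assumptions. The sketch shows how two low-bit-width multiplications that share one weight can be mapped onto a single wide multiplication, the way multiple MACs can share one primitive DSP, by placing the two activations in non-overlapping bit fields of one operand.

#include <cstdint>
#include <cassert>
#include <iostream>

// Emulate packing two unsigned 4-bit multiplications, a0*w and a1*w,
// into one wide multiply. The 16-bit offset keeps the two partial
// products in non-overlapping bit fields of the wide result, so both
// can be recovered by masking and shifting.
static void packed_mul(uint8_t a0, uint8_t a1, uint8_t w,
                       uint16_t &p0, uint16_t &p1) {
    assert(a0 < 16 && a1 < 16 && w < 16);           // 4-bit operands (assumed widths)
    uint32_t packed  = (uint32_t(a1) << 16) | a0;   // both activations in one operand
    uint64_t product = uint64_t(packed) * w;        // single wide multiply (one DSP)
    p0 = uint16_t(product & 0xFFFF);                // low field  -> a0 * w
    p1 = uint16_t(product >> 16);                   // high field -> a1 * w
}

int main() {
    uint16_t p0, p1;
    packed_mul(7, 11, 9, p0, p1);
    std::cout << p0 << " " << p1 << "\n";           // prints 63 99
    return 0;
}

The field offset must exceed the bit width of each partial product (here 4 + 4 = 8 bits, so a 16-bit field is ample). Signed operands and accumulation across channels require additional correction logic; handling such cases systematically, and choosing bit widths so that MACs pack efficiently into DSPs, is what the paper's packing algorithm and DSP-aware quantization address.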
Citations (4)