Combined Scheduling, Memory Allocation and Tensor Replacement for Minimizing Off-Chip Data Accesses of DNN Accelerators (2311.18246v1)

Published 30 Nov 2023 in cs.LG and cs.AR

Abstract: Specialized hardware accelerators have been extensively used for Deep Neural Networks (DNNs) to provide power/performance benefits. These accelerators contain specialized hardware that supports DNN operators, and scratchpad memory for storing the tensor operands. Often, the size of the scratchpad is insufficient to store all the tensors needed for the computation, and additional data accesses are needed to move tensors back and forth from host memory during the computation, with significant power/performance overhead. The volume of these additional data accesses depends on the operator schedule and the memory allocation (the specific locations selected for the tensors in the scratchpad). We propose an optimization framework, named COSMA, for mapping DNNs to an accelerator that finds the operator schedule, memory allocation, and tensor replacement that together minimize the additional data accesses. COSMA provides an Integer Linear Programming (ILP) formulation that generates the optimal mapping of a DNN to the accelerator for a given scratchpad size. We demonstrate that, using an off-the-shelf ILP solver, COSMA obtains the optimal solution in seconds for a wide range of state-of-the-art DNNs across different applications, and outperforms existing methods by reducing non-compulsory data accesses by 84% on average. We further propose a divide-and-conquer heuristic to scale to certain complex DNNs generated by Neural Architecture Search; this heuristic reduces data accesses by 85% on average compared with prior work.
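
The abstract describes COSMA's approach but not the formulation itself. As a rough illustration of how a joint scheduling/allocation/replacement problem of this kind can be encoded as an ILP and handed to an off-the-shelf solver, the following is a minimal sketch in Python using the PuLP modeling library. The toy three-operator graph, tensor sizes, capacity, variable names, and the simplified cost model (per-step residency rather than exact scratchpad addresses; write-backs ignored) are all assumptions made for illustration, and this is far simpler than the paper's actual formulation.

```python
# Minimal, illustrative ILP sketch (NOT the paper's actual COSMA formulation).
# Jointly chooses an operator schedule and per-step scratchpad residency,
# minimizing reloaded bytes (a stand-in for non-compulsory off-chip accesses).
# Requires: pip install pulp
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Toy dataflow graph (assumed): op -> (output tensor, input tensors); sizes in KB.
ops = {"conv1": ("t1", ["in"]), "conv2": ("t2", ["t1"]), "add": ("t3", ["t1", "t2"])}
size = {"in": 64, "t1": 64, "t2": 64, "t3": 64}
CAP = 200                      # scratchpad capacity in KB (assumed)
T = len(ops)                   # one operator per time step
tensors = list(size)
producer = {out: o for o, (out, _) in ops.items()}

prob = LpProblem("toy_cosma_sketch", LpMinimize)
# x[o][t] = 1 iff operator o executes at step t.
x = {o: [LpVariable(f"x_{o}_{t}", cat=LpBinary) for t in range(T)] for o in ops}
# r[v][t] = 1 iff tensor v is resident in the scratchpad during step t.
r = {v: [LpVariable(f"r_{v}_{t}", cat=LpBinary) for t in range(T)] for v in tensors}
# load[v][t] = 1 iff tensor v is re-fetched from host memory at step t.
load = {v: [LpVariable(f"l_{v}_{t}", cat=LpBinary) for t in range(T)] for v in tensors}

for o in ops:                  # every operator runs exactly once ...
    prob += lpSum(x[o]) == 1
for t in range(T):             # ... and every step runs exactly one operator.
    prob += lpSum(x[o][t] for o in ops) == 1

# Precedence: a consumer must be scheduled after the producers of its inputs.
for o, (_, ins) in ops.items():
    for v in ins:
        if v in producer:
            prob += (lpSum(t * x[producer[v]][t] for t in range(T)) + 1
                     <= lpSum(t * x[o][t] for t in range(T)))

for o, (out, ins) in ops.items():   # operands must be resident when o runs.
    for t in range(T):
        for v in ins + [out]:
            prob += r[v][t] >= x[o][t]

for t in range(T):             # resident tensors must fit in the scratchpad.
    prob += lpSum(size[v] * r[v][t] for v in tensors) <= CAP

# A tensor that becomes resident at step t without being resident at t-1,
# and is not produced at t, must be loaded. The first fetch of a graph input
# is compulsory, so it is treated as free here; write-backs are ignored.
for v in tensors:
    for t in range(T):
        prev = r[v][t - 1] if t > 0 else (0 if v in producer else 1)
        produced = x[producer[v]][t] if v in producer else 0
        prob += load[v][t] >= r[v][t] - prev - produced

# Objective: total reloaded bytes (the non-compulsory accesses).
prob += lpSum(size[v] * load[v][t] for v in tensors for t in range(T))

prob.solve()
order = sorted(ops, key=lambda o: next(t for t in range(T) if value(x[o][t]) > 0.5))
print("schedule:", order, "| reloaded KB:", value(prob.objective))
```

On this toy instance the solver recovers the natural schedule conv1 → conv2 → add with zero reloads; the interesting interplay between schedule, allocation, and replacement only appears on larger graphs where the capacity constraint forces evictions between a tensor's uses.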
