Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using Stochastic Scale (2311.15816v2)

Published 27 Nov 2023 in cs.LG, cs.AI, and cs.ET

Abstract: Uncertainty estimation in Neural Networks (NNs) is vital in improving reliability and confidence in predictions, particularly in safety-critical applications. Bayesian Neural Networks (BayNNs) with Dropout as an approximation offer a systematic approach to quantifying uncertainty, but they inherently suffer from high hardware overhead in terms of power, memory, and computation. Thus, the applicability of BayNNs to edge devices with limited resources or to high-performance applications is challenging. Some of the inherent costs of BayNNs can be reduced by accelerating them in hardware on a Computation-In-Memory (CIM) architecture with spintronic memories and binarizing their parameters. However, numerous stochastic units are required to implement conventional dropout-based BayNN. In this paper, we propose the Scale Dropout, a novel regularization technique for Binary Neural Networks (BNNs), and Monte Carlo-Scale Dropout (MC-Scale Dropout)-based BayNNs for efficient uncertainty estimation. Our approach requires only one stochastic unit for the entire model, irrespective of the model size, leading to a highly scalable Bayesian NN. Furthermore, we introduce a novel Spintronic memory-based CIM architecture for the proposed BayNN that achieves more than $100\times$ energy savings compared to the state-of-the-art. We validated our method to show up to a $1\%$ improvement in predictive performance and superior uncertainty estimates compared to related works.
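The abstract describes two ingredients: a layer-wise stochastic scale used as a dropout-style regularizer, and Monte Carlo sampling over that single stochastic unit at inference time to obtain uncertainty estimates. The sketch below is only an illustration of that general idea, not the authors' implementation: the `ScaleDropout` module, its Bernoulli gating, the fallback to an identity scale when the learned scale is "dropped", the toy model, and the `mc_scale_dropout_predict` helper are all assumptions made for the example.

```python
# Illustrative sketch of a layer-wise stochastic scale ("scale dropout")
# plus Monte Carlo sampling for uncertainty. All module and function names
# here are hypothetical; details beyond the abstract are assumptions.
import torch
import torch.nn as nn


class ScaleDropout(nn.Module):
    """Randomly drops a learnable per-channel scale with probability p."""

    def __init__(self, num_features: int, p: float = 0.1):
        super().__init__()
        self.p = p
        self.scale = nn.Parameter(torch.ones(num_features))  # learnable layer-wise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A single Bernoulli draw gates the whole scale vector, so one
        # stochastic unit per forward pass suffices (cf. the abstract's
        # "only one stochastic unit for the entire model").
        keep = torch.bernoulli(torch.tensor(1.0 - self.p, device=x.device))
        # Assumption: when the scale is dropped, fall back to an identity scale.
        effective_scale = keep * self.scale + (1.0 - keep) * torch.ones_like(self.scale)
        return x * effective_scale


# Toy model purely for demonstration.
model = nn.Sequential(
    nn.Linear(16, 32),
    ScaleDropout(32, p=0.2),
    nn.ReLU(),
    nn.Linear(32, 10),
)


def mc_scale_dropout_predict(model: nn.Module, x: torch.Tensor, num_samples: int = 20):
    """Monte Carlo inference: keep the stochastic scale active and average predictions."""
    model.train()  # keep ScaleDropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )
    mean = probs.mean(dim=0)                # predictive mean over MC samples
    uncertainty = probs.var(dim=0).sum(-1)  # simple variance-based uncertainty score
    return mean, uncertainty


x = torch.randn(4, 16)
mean, unc = mc_scale_dropout_predict(model, x)
print(mean.shape, unc.shape)  # torch.Size([4, 10]) torch.Size([4])
```

In this sketch, higher per-sample variance across the Monte Carlo forward passes is read as higher predictive uncertainty; the paper's binary-network and in-memory-computing aspects are not modeled here.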
