Real-Time FJ/MAC PDE Solvers via Tensorized, Back-Propagation-Free Optical PINN Training (2401.00413v2)
Abstract: Solving partial differential equations (PDEs) numerically often requires enormous computing time, energy, and hardware resources in practical applications. This has limited their use in many scenarios (e.g., autonomous systems, supersonic flows) that have a limited energy budget and require near real-time response. Leveraging optical computing, this paper develops an on-chip training framework for physics-informed neural networks (PINNs), aiming to solve high-dimensional PDEs with fJ/MAC photonic energy consumption and ultra-low latency. Despite the ultra-high speed of optical neural networks, training a PINN on an optical chip is hard due to (1) the large size of photonic devices, and (2) the lack of scalable optical memory to store the intermediate results of back-propagation (BP). To enable realistic optical PINN training, this paper presents a scalable method that avoids the BP process. We also employ a tensor-compressed approach to improve the convergence and scalability of our optical PINN training. The training framework combines tensorized optical neural networks (TONNs) for scalable inference acceleration with Mach-Zehnder interferometer (MZI) phase-domain tuning for in-situ optimization. Our simulation results on a 20-dimensional Hamilton-Jacobi-Bellman (HJB) PDE show that our photonic accelerator reduces the number of MZIs by a factor of $1.17\times 10^3$ and solves the equation with only $1.36$ J of energy in $1.15$ s. To our knowledge, this is the first real-size optical PINN training framework that can be applied to solve high-dimensional PDEs.
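The back-propagation-free training idea can be made concrete with a small numerical sketch. The snippet below is our own minimal illustration, not the paper's exact algorithm: it trains a tiny dense PINN for a 1-D Poisson-type problem using SPSA-style zeroth-order gradient estimates (two forward evaluations per update, so no intermediate activations need to be stored) and approximates the PDE residual with central finite differences so that the entire loss is evaluated forward-only. The network widths, step sizes, test PDE, and the finite-difference residual are assumptions chosen for readability; the actual framework additionally applies tensor-train compression and performs the perturbations in the MZI phase domain.

```python
# Minimal sketch (our illustration, not the paper's exact algorithm): train a tiny PINN
# for u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0, without back-propagation.
# Gradients come from SPSA (simultaneous perturbation, two forward passes per step) and
# the PDE residual from central finite differences, so everything is forward-only.
# Widths, step sizes, and the test PDE are assumptions chosen for readability.
import numpy as np

rng = np.random.default_rng(0)

def init_params(widths=(1, 16, 16, 1)):
    """Flattened MLP parameters (weights and biases concatenated into one vector)."""
    parts = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        parts.append(rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out)).ravel())
        parts.append(np.zeros(n_out))
    return np.concatenate(parts), widths

def forward(theta, x, widths):
    """Plain MLP forward pass u_theta(x); x has shape (N, 1)."""
    h, i = x, 0
    for layer, (n_in, n_out) in enumerate(zip(widths[:-1], widths[1:])):
        W = theta[i:i + n_in * n_out].reshape(n_in, n_out); i += n_in * n_out
        b = theta[i:i + n_out]; i += n_out
        h = h @ W + b
        if layer < len(widths) - 2:
            h = np.tanh(h)
    return h

def pinn_loss(theta, widths, x, h=1e-3):
    """Physics-informed loss: PDE residual (via finite differences) + boundary penalty."""
    u = forward(theta, x, widths)
    u_xx = (forward(theta, x + h, widths) - 2.0 * u + forward(theta, x - h, widths)) / h**2
    residual = u_xx + np.pi**2 * np.sin(np.pi * x)
    xb = np.array([[0.0], [1.0]])                       # boundary points
    return np.mean(residual**2) + 10.0 * np.mean(forward(theta, xb, widths)**2)

theta, widths = init_params()
lr, eps = 5e-3, 1e-3
for step in range(2001):
    x = rng.uniform(0.0, 1.0, (64, 1))                  # collocation points for this step
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    # SPSA gradient estimate: two loss evaluations, no stored activations.
    g_hat = (pinn_loss(theta + eps * delta, widths, x) -
             pinn_loss(theta - eps * delta, widths, x)) / (2.0 * eps) * delta
    theta -= lr * g_hat
    if step % 500 == 0:
        print(f"step {step:4d}  loss {pinn_loss(theta, widths, x):.4f}")
```

Because each SPSA update needs only two loss evaluations regardless of the parameter count, this style of optimizer suits hardware that can run fast forward passes but cannot store intermediate activations, which is exactly the constraint the abstract identifies for photonic chips.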