Learning to Program Variational Quantum Circuits with Fast Weights (2402.17760v1)
Abstract: Quantum Machine Learning (QML) has emerged as a promising framework for sequential control tasks and time-series modeling, with empirical quantum advantages reported in domains such as Reinforcement Learning (RL) and time-series prediction. A significant advancement is the Quantum Recurrent Neural Network (QRNN), tailored for memory-intensive tasks such as partially observable environments and non-linear time-series prediction. Nevertheless, QRNN-based models face challenges, notably long training times stemming from the need to compute quantum gradients via backpropagation-through-time (BPTT). This issue is exacerbated when the complete model is executed on quantum devices, owing to the large number of circuit evaluations required by the parameter-shift rule. This paper introduces the Quantum Fast Weight Programmer (QFWP) as a solution to this temporal or sequential learning challenge. The QFWP leverages a classical neural network (referred to as the 'slow programmer'), functioning as a quantum programmer, to swiftly modify the parameters of a variational quantum circuit (termed the 'fast programmer'). Instead of completely overwriting the fast programmer at each time step, the slow programmer generates parameter changes, or updates, for the quantum circuit parameters. This approach enables the fast programmer to incorporate past observations. Notably, the proposed QFWP model learns temporal dependencies without requiring quantum recurrent neural networks. Numerical simulations conducted in this study showcase the efficacy of the proposed QFWP model in both time-series prediction and RL tasks, with performance comparable to or surpassing that of QLSTM-based models.
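
To make the slow-programmer/fast-programmer interplay concrete, below is a minimal illustrative sketch (not the authors' implementation) written with PennyLane and PyTorch. The circuit layout, network sizes, and names such as `SlowProgrammer` and `fast_programmer` are assumptions for illustration only; only the accumulate-rather-than-overwrite update scheme is taken from the abstract.

```python
# Minimal sketch of the QFWP idea: a classical "slow programmer" emits updates
# for the parameters of a variational quantum circuit (the "fast programmer").
# All identifiers and sizes here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4
n_layers = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def fast_programmer(inputs, weights):
    # Variational quantum circuit: angle-encode the observation,
    # then apply parameterized rotation + entangling layers.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class SlowProgrammer(nn.Module):
    # Classical network mapping the current observation to *updates* (deltas)
    # of the quantum circuit parameters.
    def __init__(self, obs_dim):
        super().__init__()
        self.shape = (n_layers, n_qubits, 3)  # shape expected by StronglyEntanglingLayers
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.Tanh(),
            nn.Linear(32, n_layers * n_qubits * 3),
        )

    def forward(self, obs):
        return self.net(obs).reshape(self.shape)

slow = SlowProgrammer(obs_dim=n_qubits)
theta = torch.zeros((n_layers, n_qubits, 3))      # fast parameters, updated each step

for obs in torch.rand(5, n_qubits):               # toy sequence of observations
    delta = slow(obs)
    theta = theta + delta                         # accumulate instead of overwriting
    out = torch.stack(fast_programmer(obs, theta))  # circuit output conditioned on the past
```

The key line is `theta = theta + delta`: accumulating parameter updates, rather than overwriting them, is what lets the circuit carry information from earlier time steps without any recurrent quantum connections.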
Author: Samuel Yen-Chi Chen