Machine-Learning Insights into the Entanglement-trainability Correlation of Parameterized Quantum Circuits (2406.01997v3)

Published 4 Jun 2024 in quant-ph

Abstract: Variational quantum algorithms (VQAs) have emerged as the leading strategy for obtaining quantum advantage on current noisy intermediate-scale devices. However, their entanglement-trainability correlation, a major cause of the barren plateau (BP) phenomenon, poses a challenge to their application. In this Letter, we propose a gate-to-tensor (GTT) encoding method for parameterized quantum circuits (PQCs), with which two long short-term memory networks (L-G networks) are trained to predict both entanglement and trainability. The strong predictive capability of the L-G networks affords a statistical way to probe the entanglement-trainability correlation of PQCs across a dataset encompassing millions of instances. This machine-learning-driven method first confirms that greater entanglement makes the BP problem more likely. We then observe that there still exist PQCs with both high entanglement and high trainability. Furthermore, the trained L-G networks improve time efficiency by roughly a factor of one million when constructing a PQC with specified entanglement and trainability, demonstrating their practical value for VQAs.
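
The abstract describes encoding each gate of a PQC as a tensor and feeding the resulting sequence to LSTM-based predictors. Below is a minimal illustrative sketch of that idea in PyTorch; the feature layout, gate alphabet, and names (encode_gate, LGNet) are assumptions for illustration only and are not the paper's actual GTT encoding or network architecture.

```python
# Hedged sketch: encode a PQC gate sequence as tensors, then run an LSTM
# regressor that predicts a scalar (e.g., an entanglement or trainability
# proxy). One such network per target, mirroring the two L-G networks.
import torch
import torch.nn as nn

GATE_TYPES = ["RX", "RY", "RZ", "CNOT"]   # assumed gate alphabet
N_QUBITS = 4                              # assumed circuit width

def encode_gate(name, qubits):
    """Map one gate to a fixed-length vector: one-hot gate type + qubit mask."""
    vec = torch.zeros(len(GATE_TYPES) + N_QUBITS)
    vec[GATE_TYPES.index(name)] = 1.0
    for q in qubits:
        vec[len(GATE_TYPES) + q] = 1.0
    return vec

class LGNet(nn.Module):
    """LSTM over the gate-tensor sequence with a scalar regression head."""
    def __init__(self, in_dim=len(GATE_TYPES) + N_QUBITS, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):               # seq: (batch, n_gates, in_dim)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1]).squeeze(-1)   # one scalar per circuit

# Usage: encode a tiny example circuit and query an (untrained) predictor.
circuit = [("RY", [0]), ("CNOT", [0, 1]), ("RZ", [1])]
seq = torch.stack([encode_gate(n, q) for n, q in circuit]).unsqueeze(0)
print(LGNet()(seq))
```

In this sketch, the prediction target would be supplied during training (for example, an entanglement entropy or a gradient-variance statistic computed from simulated circuits); the paper's actual targets and training data are described in the full text.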
