Power of Quantum Generative Learning (2205.04730v2)

Published 10 May 2022 in quant-ph and cs.LG

Abstract: The intrinsically probabilistic nature of quantum mechanics motivates the design of quantum generative learning models (QGLMs). Despite empirical achievements, the foundations and potential advantages of QGLMs remain largely obscure. To narrow this knowledge gap, here we explore the generalization property of QGLMs, i.e., their capability to extend from learned to unknown data. We consider two prototypical QGLMs, quantum circuit Born machines and quantum generative adversarial networks, and explicitly give their generalization bounds. The results identify advantages of QGLMs over classical methods when quantum devices can directly access the target distribution and quantum kernels are employed. We further employ these generalization bounds to exhibit potential advantages in quantum state preparation and Hamiltonian learning. Numerical results of QGLMs in loading Gaussian distributions and estimating ground states of parameterized Hamiltonians accord with the theoretical analysis. Our work opens an avenue toward quantitatively understanding the power of quantum generative learning models.
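
To make the quantum circuit Born machine concrete, below is a minimal sketch of one trained to load a discretized Gaussian distribution, the first numerical task the abstract mentions. This is not the paper's code: the framework (PennyLane), the ansatz, the qubit count, and the cross-entropy loss are all illustrative assumptions; the paper's analysis is based on kernel-based (e.g., MMD-type) losses rather than the simple cross-entropy used here.

```python
# Minimal sketch (not the paper's implementation) of a quantum circuit Born
# machine (QCBM) loading a discretized Gaussian distribution with PennyLane.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3  # 2**3 = 8 bins for the discretized Gaussian (arbitrary choice)
dev = qml.device("default.qubit", wires=n_qubits)

# Target: Gaussian over the 8 computational-basis outcomes, normalized.
x = np.arange(2**n_qubits)
target = np.exp(-((x - 3.5) ** 2) / 4.0)
target = target / target.sum()

@qml.qnode(dev)
def born_machine(weights):
    # Hardware-efficient ansatz; the Born rule |<x|psi(theta)>|^2 defines
    # the model's output distribution over basis states.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def loss(weights):
    # Cross-entropy between target and model distributions; a simplified
    # stand-in for the kernel-based losses analyzed in the paper.
    probs = born_machine(weights)
    return -np.sum(target * np.log(probs + 1e-10))

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for step in range(200):
    weights = opt.step(loss, weights)
```

The Born rule makes the circuit itself the sampler: measuring the trained state yields bitstrings distributed approximately as the target Gaussian, which is the sense in which a QCBM "loads" a classical distribution.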
