Quantum Data Breach: Reusing Training Dataset by Untrusted Quantum Clouds (2407.14687v1)

Published 19 Jul 2024 in quant-ph

Abstract: Quantum computing (QC) has the potential to revolutionize fields like machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area that enhances learning algorithms using quantum computers. However, QML models are lucrative targets due to their high training costs and extensive training times. The scarcity of quantum resources and long wait times further exacerbate the challenge. Additionally, QML providers may rely on a third-party quantum cloud to host the model, exposing both the model and the training data. As QML-as-a-Service (QMLaaS) becomes more prevalent, this reliance on third-party quantum clouds can pose a significant threat. This paper shows that adversaries in quantum clouds can use white-box access to the QML model during training to extract the state preparation circuit (which contains the training data) along with the labels. The extracted training data can be reused to train a clone model or sold for profit. We propose a suite of techniques to prune and fix incorrectly extracted labels. Results show that $\approx$90\% of labels can be extracted correctly. A model trained on the adversarially extracted data achieves $\approx$90\% accuracy, closely matching the accuracy achieved when trained on the original data. To mitigate this threat, we propose masking labels/classes and modifying the cost function for label obfuscation, reducing adversarial label prediction accuracy by $\approx$70\%.
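
The attack hinges on the fact that a white-box adversary on the cloud side sees the full circuit, so the parameters of the state-preparation gates, and through a known encoding the raw features they carry, can be read back directly. Below is a minimal sketch of this idea for a simple RY angle encoding in Qiskit; the encoding scheme, the function names, and the assumption that features are normalized to [0, 1] are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from qiskit import QuantumCircuit

def build_state_prep(features):
    """Angle-encode features x in [0, 1] as one RY rotation per qubit."""
    qc = QuantumCircuit(len(features))
    for i, x in enumerate(features):
        qc.ry(2 * np.arcsin(np.sqrt(x)), i)  # P(|1>) on qubit i equals x
    return qc

def extract_features(state_prep):
    """White-box adversary: read the encoded sample back out of the circuit.

    The cloud sees every gate and its parameters, so inverting a known
    encoding is just arithmetic on the rotation angles.
    """
    feats = []
    for inst in state_prep.data:
        if inst.operation.name == "ry":
            theta = float(inst.operation.params[0])
            feats.append(np.sin(theta / 2) ** 2)  # invert the arcsin encoding
    return feats

sample = [0.1, 0.5, 0.9]
print(extract_features(build_state_prep(sample)))  # ~[0.1, 0.5, 0.9]
```

The proposed defense keeps the true labels out of the cloud's view: labels are masked before dispatch and the cost function is evaluated against the masked classes, so a label-extraction attack recovers obfuscated classes that only the provider can unmask. One way to realize this, assuming a secret class permutation as the mask (the paper's concrete masking scheme may differ), is sketched below.

```python
import numpy as np

def make_mask(num_classes, seed):
    """Secret class permutation known only to the QML provider."""
    return np.random.default_rng(seed).permutation(num_classes)

def masked_cross_entropy(probs, labels, mask):
    """Cross-entropy against masked labels.

    The cloud only ever sees mask[labels], so labels extracted during
    training are the permuted classes, not the true ones.
    """
    masked = mask[labels]
    picked = probs[np.arange(len(labels)), masked]
    return -np.mean(np.log(picked + 1e-12))

mask = make_mask(num_classes=4, seed=42)  # e.g. array([3, 1, 2, 0])
inverse = np.argsort(mask)                # provider-side unmasking of predictions
probs = np.full((2, 4), 0.25)             # dummy model outputs for two samples
print(masked_cross_entropy(probs, np.array([0, 2]), mask))  # ~1.386
```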

Citations (1)
