Abstract

Quantum computing (QC) has the potential to revolutionize fields like machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area that enhances learning algorithms using quantum computers. However, QML models are lucrative targets due to their high training costs and long training times; the scarcity of quantum resources and long queue wait times further exacerbate the challenge. Additionally, QML providers may rely on a third-party quantum cloud to host the model, exposing both the model and the training data. As QML-as-a-Service (QMLaaS) becomes more prevalent, this reliance on third-party quantum clouds poses a significant threat. This paper shows that an adversary in the quantum cloud can use white-box access to the QML model during training to extract the state preparation circuit (which encodes the training data) along with the labels. The extracted training data can be reused to train a clone model or sold for profit. We propose a suite of techniques to prune and correct the incorrectly extracted labels. Results show that $\approx$90\% of labels can be extracted correctly. A model trained on the adversarially extracted data achieves $\approx$90\% accuracy, closely matching the accuracy achieved when trained on the original data. To mitigate this threat, we propose masking labels/classes and modifying the cost function for label obfuscation, which reduces adversarial label-prediction accuracy by $\approx$70\%.
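To make the threat concrete, below is a minimal sketch in Qiskit of why white-box access to a submitted state-preparation circuit exposes the raw training data. It assumes a hypothetical per-qubit RY angle encoding for illustration; the paper's actual encoding scheme and extraction techniques may differ. The point is only that when the encoding is known, the adversary can read the training features straight out of the circuit's instruction list.

```python
import numpy as np
from qiskit import QuantumCircuit

# Victim side: a simple angle-encoding state-preparation circuit.
# Each feature of one training sample becomes an RY rotation angle on
# its own qubit (a common encoding, assumed here for illustration).
sample = np.array([0.12, 0.87, 0.45])
prep = QuantumCircuit(len(sample))
for qubit, x in enumerate(sample):
    prep.ry(float(x), qubit)

# Adversary side: with white-box access to the submitted circuit, the
# rotation angles (and hence the raw features) are read back directly
# from the circuit's instruction list.
recovered = np.array(
    [inst.operation.params[0] for inst in prep.data
     if inst.operation.name == "ry"]
)
assert np.allclose(sample, recovered)
print("recovered training sample:", recovered)
```

Label extraction would proceed analogously in this threat model, since the cloud-side training loop pairs each such circuit with the label used in the cost function; the pruning and correction techniques the paper proposes then clean up labels that were extracted incorrectly.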
