
Proof of Unlearning: Definitions and Instantiation (2210.11334v2)

Published 20 Oct 2022 in cs.CR

Abstract: The "Right to be Forgotten" rule in ML practice requires that individual data can be deleted from a trained model, a goal pursued by recently developed machine unlearning techniques. To truly comply with the rule, a natural and necessary step is to verify that the individual data have indeed been deleted after unlearning. Yet, previous parameter-space verification metrics can be easily evaded by a dishonest model trainer. Thus, Thudi et al. recently presented a call to action for algorithm-level verification at USENIX Security '22. We respond to the call by reconsidering the unlearning problem in the machine-learning-as-a-service (MLaaS) scenario and proposing a new definition framework for Proof of Unlearning (PoUL) at the algorithm level. Specifically, our PoUL definitions (i) enforce correctness properties in both the pre- and post-unlearning phases, so as to prevent state-of-the-art forging attacks; and (ii) specify practicality requirements on both the prover and verifier sides, with minimal invasiveness to the off-the-shelf service pipeline and computational workloads. Under this definition framework, we then present a trusted-hardware instantiation built on an SGX enclave, which combines an authentication layer that traces data lineage with a proving layer that supports auditing of the learning process. We customize authenticated data structures to support large out-of-enclave storage with simple operation logic, while enabling proofs of complex unlearning logic with an affordable memory footprint inside the enclave. We finally validate the feasibility of the proposed instantiation with a proof-of-concept implementation and a multi-dimensional performance evaluation.
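The authentication layer's core idea, tracing training-data lineage with an authenticated data structure whose bulk can live outside the enclave, can be illustrated with a Merkle tree. The sketch below is a minimal, hypothetical Python illustration, not the paper's implementation (the paper customizes its own authenticated data structures); the names MerkleTree, prove, and verify are assumptions for exposition. It shows how a verifier holding only the tree root can check that a record was present in the training set, and why an old inclusion proof stops verifying once the record is removed and the root is recomputed.

```python
# Minimal sketch, assuming a binary SHA-256 Merkle tree over training-record
# hashes. Illustrative only; not the paper's customized data structure.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MerkleTree:
    """Binary Merkle tree over a list of training records."""

    def __init__(self, leaves: list[bytes]):
        # Domain-separate leaf hashes (0x00) from inner nodes (0x01).
        level = [h(b"\x00" + leaf) for leaf in leaves]
        self.levels = [level]
        while len(level) > 1:
            if len(level) % 2 == 1:          # duplicate last node if odd
                level = level + [level[-1]]
            level = [h(b"\x01" + level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
            self.levels.append(level)

    @property
    def root(self) -> bytes:
        return self.levels[-1][0]

    def prove(self, index: int) -> list[tuple[bytes, bool]]:
        """Return sibling hashes (hash, sibling_is_on_the_right) up to the root."""
        proof = []
        for level in self.levels[:-1]:
            if len(level) % 2 == 1:
                level = level + [level[-1]]
            sibling = index ^ 1
            proof.append((level[sibling], sibling > index))
            index //= 2
        return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the root from a leaf and its inclusion proof."""
    node = h(b"\x00" + leaf)
    for sibling, is_right in proof:
        node = h(b"\x01" + node + sibling) if is_right else h(b"\x01" + sibling + node)
    return node == root

# After unlearning record 1, the trainer rebuilds the tree without it and
# publishes the new root; the stale inclusion proof no longer verifies.
records = [b"rec-0", b"rec-1", b"rec-2", b"rec-3"]
tree = MerkleTree(records)
proof = tree.prove(1)
assert verify(b"rec-1", proof, tree.root)

retained = [r for r in records if r != b"rec-1"]
assert not verify(b"rec-1", proof, MerkleTree(retained).root)
```

In the paper's setting, such lineage checks are one ingredient: the proving layer inside the enclave must additionally attest that the retraining or unlearning computation itself was executed on the claimed data.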

Citations (10)
